
Is Robotic Automation competitive with BPMS?

No. Robotic automation extends and complements BPMS and SOA initiatives, which attack the automation challenge from a different, top-down, IT-driven angle. Robotic automation is aimed at small-to-mid-size automation initiatives: where speed, size and agility are the major factors, it is often the fastest and most efficient approach. Where a larger initiative with a fuller "Business Process" character is required, BPMS may be better suited.
This difference in scale is illustrated by the so-called Long Tail of Automation Requirements. At the head of the curve, core IT deals with the high-volume bulk processing requirements an organisation may have: typically core ERP systems, mainframe accounting and core databases. As we move towards the middle of the graph, requirements become more specialist and diverse; this is where an organisation often differentiates its product and service offerings. Typical technologies here are workflow, desktop integration, BPMS and agent acceleration: large, IT-controlled programs that serve as a platform for automation and work management.

Finally, we have the third section of the Long Tail. These tasks are characterised by their diversity: often they are too diverse to roll into a single IT change program, and individually too small to justify IT project costs. Here the traditional approach has been to outsource or offshore in order to adjust labour rates and make the work more competitive. Robotic automation offers an alternative to offshoring or outsourcing, presenting a new cost band of labour based on robots.
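To make the Long Tail idea concrete, here is a minimal sketch of the decision rule it implies: route high-volume work to core IT, the specialist middle to BPMS and workflow tooling, and the diverse, low-volume tail to robotic automation. The volume thresholds and task names below are hypothetical illustrations, not figures from the original post.

```
# Minimal sketch of the Long Tail decision rule described above.
# All thresholds and example tasks are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass
class Task:
    name: str
    yearly_volume: int  # transactions per year


def recommended_approach(task: Task) -> str:
    """Map a task's volume onto the three bands of the Long Tail."""
    if task.yearly_volume >= 1_000_000:
        # Head of the curve: high-volume bulk processing
        return "core IT (ERP, mainframe, core databases)"
    elif task.yearly_volume >= 10_000:
        # Middle: specialist, differentiating processes
        return "BPMS / workflow / desktop integration"
    else:
        # Tail: diverse, low-volume tasks
        return "robotic automation (or outsource/offshore)"


for t in [Task("payroll run", 5_000_000),
          Task("claims triage", 50_000),
          Task("quarterly licence report", 400)]:
    print(f"{t.name}: {recommended_approach(t)}")
```

In practice the banding would weigh diversity and change cost as well as raw volume, but the sketch captures the post's core point: the tail is where per-task IT project costs stop being justifiable.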
