Robotic Process Automation (RPA) is a reliable and cost-effective solution for automating repetitive digital processes.
Organizations implementing RPA initially target high-value processes that require little to no human intervention. These automations are the low-hanging fruit, providing immediate returns. But beyond this initial set, what can be done to simplify identifying potential automation targets? And from both a social and a technical standpoint, how can RPA be deployed and maintained more broadly across an organization?
Consider, for example, the following workflow, composed of both digital and manual sub-processes. It represents the challenge facing RPA deployments: how to identify which digital processes to automate (the blue segments) and how to coordinate the handoffs and key decision points that have typically been mediated by people (the orange circles).
Modeling process automation using this approach is helpful as it distinguishes between the data entry and decision-making roles that people juggle. These are fundamentally different roles when seen from an automation perspective. While the digital processes (blue lines) representing keystrokes and mouse clicks can be automated using RPA tools, the decision-making tasks (orange circles) are more challenging. Behind each orange circle is a person possessing intuition who is actively learning on the job to make the right decision. A successful RPA implementation needs the same ability to “juggle” these disparate task types to truly automate the workflow.
Guiding Principles and Assumptions
The remainder of this document details the guiding principles we use at Intwixt for approaching process automation challenges like this. If a process involves a reasonably linear set of keystrokes and mouse clicks, then it is a good candidate for RPA automation. But when the task shifts to decision making, requiring human insight to move forward, then it is no longer effective to automate entirely using RPA and instead requires different systems to model the complexity.
It is our position that learning systems (ML, AI, etc) will need to be integrated in order to handle the decision-making vagaries now handled by people. And importantly, given that many business processes are sufficiently complex as to necessitate humans for the foreseeable future, people will also need to be integrated. If done correctly, this multi-system solution will be more capable of replacing the old one by modeling the different roles that people now play using the most appropriate system.
Principle 1: Limit Automation Length
Any organization of sufficient scale is under constant environmental pressure to adapt its existing business processes. From an automation standpoint, this means that longer automations (those comprising many steps and/or many systems) are more likely to fall out of sync than shorter ones. For example, it would be better for long-term maintainability to create two automation segments (D3 and D4) than a single, longer segment.
Automation segments should be kept to an optimal maximum length. As a rule of thumb, treat any handoff between segments as a natural boundary, as well as any sufficiently complex decision point (typically one requiring human intervention).
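To make the boundary idea concrete, here is a minimal sketch of splitting one long automation into two shorter segments, D3 and D4, joined at an explicit handoff. The segment names come from the example above; the step contents and data shapes are purely illustrative.

```python
# Hypothetical sketch: one long automation split into two short
# segments (D3, D4) at a natural handoff boundary.

def run_d3(order_id: str) -> dict:
    """Segment D3: enter the order into the first system."""
    # ... keystrokes / clicks recorded by the RPA tool would go here ...
    return {"order_id": order_id, "status": "entered"}

def run_d4(handoff: dict) -> dict:
    """Segment D4: pick up D3's output and update the second system."""
    # ... second recorded sequence ...
    return {"order_id": handoff["order_id"], "status": "synced"}

# Because the handoff between D3 and D4 is explicit data, each segment
# can be re-recorded, tested, and redeployed independently when either
# underlying system changes.
result = run_d4(run_d3("A-1001"))
print(result["status"])  # "synced"
```

The handoff payload is the contract between the two segments; as long as it is preserved, neither side needs to know how the other is implemented.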
Principle 2: Treat Automations like Software
Automations have a life-cycle that corresponds well with that of traditional software. This makes intuitive sense (they’re both digital tools) and helps illuminate the areas of focus necessary to keep automations relevant over time.
Importantly, creating and deploying automations is not a one-time event. Instead, the tools and processes used to generate the first wave of automations should become part of the daily toolset available to front-line users as they update both the automations and the models that determine when they run. This is essentially the software development life-cycle:
- Identify | Identify digital process/es that can be automated and the individuals responsible.
- Develop | Record the automation/s.
- Test/QA | Verify the quality of the automation/s. Is it possible to use an automation to verify an automation? In other words, can those automations that involve system updates be validated by creating a second automation that can verify the expected output?
- Deploy | Catalog each automation, including inputs and outputs. Automations should be discoverable and invokable.
- Evaluate | Track automation use. Determine how successfully the automation ran over time (fitness and validity). Identify and repeat.
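The Deploy step (catalog each automation, including inputs and outputs, so it is discoverable and invokable) might look like the following sketch. The field names and catalog structure are assumptions for illustration, not a specific product's API.

```python
# Hypothetical automation catalog: each entry declares the automation's
# inputs and outputs so an orchestrator can discover and invoke it.
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    name: str
    inputs: list    # required input fields
    outputs: list   # fields the automation produces
    version: str = "1.0"

CATALOG: dict = {}

def register(entry: CatalogEntry) -> None:
    CATALOG[entry.name] = entry

def discover(required_output: str) -> list:
    """Find automations that produce a given output field."""
    return [e.name for e in CATALOG.values() if required_output in e.outputs]

register(CatalogEntry("D1", inputs=["employee_name", "start_date"],
                      outputs=["employee_id"]))
register(CatalogEntry("D2", inputs=["employee_id"], outputs=["badge_id"]))

print(discover("employee_id"))  # ["D1"]
```

Declaring inputs and outputs up front is also what makes the Evaluate step tractable: a run can be judged against its declared contract rather than by inspecting screens.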
But perhaps the most important principle of all is understanding this as a social challenge as much as a technical one. Once an automation falls out of date, its utility ends. Maintenance must be treated as an end unto itself if automations are to remain integrated into the business and its processes over the long term. And this means that people must be properly motivated to maintain them. If managing robots is to be their new job reality, then they must be incentivized to do it well.
Principle 3: Design Stateless Automations
Statelessness is critical to scalable systems, allowing for concurrency in isolation. This stateless approach scales well and is common to high-throughput protocols like HTTP. In the following workflow, the automations D1, D2, and D3 would all run statelessly once deployed as RPA automations.
Done properly, digital segments D1, D2, and D3 would be automated and cataloged, including their inputs and outputs. The automation index is effectively a RESTful API to be invoked by the orchestration layer.
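A minimal sketch of that idea: a stateless dispatcher over the automation index, where every call carries all the data it needs and nothing is retained between invocations. The automation names match the example above; the payload shapes and dispatch mechanics are hypothetical.

```python
# Hypothetical stateless dispatcher over an automation index.
# Each request carries everything the automation needs; no state is
# retained between calls, so invocations can run concurrently in isolation.

AUTOMATIONS = {
    "D1": lambda payload: {"employee_id": "E-" + payload["employee_name"][:3].upper()},
    "D2": lambda payload: {"badge_id": "B-" + payload["employee_id"]},
    "D3": lambda payload: {"laptop_ordered": True},
}

def invoke(name: str, payload: dict) -> dict:
    """REST-style invocation, akin to POST /automations/<name> with a JSON body."""
    if name not in AUTOMATIONS:
        return {"error": f"unknown automation {name}"}
    return AUTOMATIONS[name](payload)

out = invoke("D1", {"employee_name": "Avery"})
print(out)  # {'employee_id': 'E-AVE'}
```

Because each call is self-contained, the orchestration layer can retry, parallelize, or reorder invocations without any coordination between the automations themselves.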
Principle 4: Design Stateful Orchestrations
Consider again the following process flow involving digital and manual sub-processes. Prior to RPA, a user would have decided to open a software application (H1) in order to complete the first leg of a digital on-boarding process (D1). The user would then decide (H2) whether to complete process D2, D3, or M2. In such a system, it is the employee who is stateful, retaining all knowledge necessary to choose which leg of the process to invoke next.
In the proposed system, an orchestration layer would replace human-initiated decision points (H1 and H2) with a Slack-mediated strategy. For example, instead of using the old system to enter data and onboard the new employee, the user would open Slack and type a slash command like “/onboard” (right side of diagram). The orchestration layer would then own the process, prompting the user for all information required by D1. It would then invoke automation D1 and pause until it completed. Finally, the orchestration would message the user one last time (via Slack) to choose whether to run automation D2 or D3 or begin manual step M2.
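The sequence just described can be sketched as a small stateful orchestration. Here the orchestrator, not the person, tracks where the process stands between steps; the Slack prompts are simulated as callbacks, and all function names are assumptions for illustration.

```python
# Hypothetical stateful orchestration for the /onboard flow.
# The orchestrator retains process state between steps; the human is
# consulted only at the former decision points H1 and H2.

def orchestrate_onboarding(prompt_user, run_automation):
    """prompt_user(question) -> answer; run_automation(name, data) -> result."""
    state = {"step": "start"}

    # H1 replaced: the user typed /onboard, so collect D1's required inputs.
    state["employee_name"] = prompt_user("New employee's name?")
    state["d1_result"] = run_automation("D1", {"employee_name": state["employee_name"]})
    state["step"] = "d1_complete"

    # H2 replaced: ask the user (via Slack) which leg to run next.
    choice = prompt_user("Run D2, D3, or manual step M2?")
    if choice in ("D2", "D3"):
        state["result"] = run_automation(choice, state["d1_result"])
        state["step"] = choice + "_complete"
    else:
        state["step"] = "awaiting_M2"  # hand off to the manual process
    return state

# Simulated Slack interaction for illustration:
answers = iter(["Avery", "D2"])
final = orchestrate_onboarding(
    prompt_user=lambda q: next(answers),
    run_automation=lambda name, data: {"ran": name, **data},
)
print(final["step"])  # "D2_complete"
```

The key point is that `state` lives in the orchestrator, not in anyone's head: the process can pause for hours at a prompt and resume exactly where it left off.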
People are no longer driving and managing the process in the proposed system. Essentially, all that remains for people to do is what they are most valued for: providing insight and making decisions. And importantly, teasing the workflow apart in this manner makes it possible to incorporate advanced learning systems into the orchestration, including AI/ML services.
Principle 5: Formalize Change
The environment is constantly changing, requiring automations to be kept current. An important aspect of this is updating the models used by the supporting Machine Learning system. For example, the workflow shown below (modeled with Intwixt) is designed to query a Natural Language Processing (NLP) service to understand intent when it encounters an unknown phrase. But no matter how much design and training are put into the NLP service and its models, it inevitably encounters unresolvable patterns (something common to any AI-type service that uses Machine Learning).
In this particular example, the flow was designed to “fail” when the NLP service returns a confidence score below 80%. It then routes the call to an actual person on Slack for proper handling (Connect to Person). Finally, it logs the failed phrase in order to provide feedback to the model to better handle it the next time it occurs.
A separate workflow handles the “Low Confidence” event. It begins by sending a message to Slack, prompting users to categorize the phrase. Once they respond with the necessary details, the categorized entity is folded back into the original model, training it to handle the phrase (and similar variants). It’s a real-time feedback loop that formalizes long-term learning and real-time error resolution.
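The routing and feedback loop described across these two workflows can be sketched as follows. The 80% threshold comes from the example above; the NLP response shape, the queue, and the function names are assumptions.

```python
# Hypothetical confidence-threshold routing with a feedback log.
# Below the threshold, the phrase is escalated to a person and logged
# so it can later be folded back into the model's training data.

CONFIDENCE_THRESHOLD = 0.80
training_queue = []  # phrases awaiting human categorization

def route(phrase: str, nlp_result: dict) -> str:
    """nlp_result: {'intent': str, 'confidence': float} (assumed shape)."""
    if nlp_result["confidence"] >= CONFIDENCE_THRESHOLD:
        return "handle:" + nlp_result["intent"]
    # Low confidence: escalate to a person on Slack and log for retraining.
    training_queue.append({"phrase": phrase, "guess": nlp_result["intent"]})
    return "connect_to_person"

def categorize(phrase: str, intent: str) -> dict:
    """Separate 'Low Confidence' workflow: fold the human label back in."""
    entry = next(e for e in training_queue if e["phrase"] == phrase)
    entry["intent"] = intent  # now usable as a labeled training example
    return entry

print(route("reset my password", {"intent": "it_support", "confidence": 0.93}))
print(route("the thingy is broken", {"intent": "unknown", "confidence": 0.41}))
print(categorize("the thingy is broken", "it_support")["intent"])
```

In a real deployment the `categorize` step would be driven by the Slack prompt, and the labeled entries would feed the NLP model's next training cycle rather than a local list.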
It’s useful to distinguish between the multiple roles that people juggle, particularly data entry, learning, and decision-making responsibilities. These are fundamentally different tasks when seen from an automation perspective. In the end, successful RPA implementations need to model and orchestrate these roles differently, using a combination of situation-specific systems, including RPA, Machine Learning, a messaging channel (e.g., Slack) and an orchestration layer.