Applying the Theory of Constraints to Data Analytics
Business users often have no concept of what it takes to design and deploy robust data analytics. The gap between expectations and execution is one of the main obstacles holding the analytics team back from delighting its users. Managers may ask for a simple change to a report and don’t expect it to take weeks or months.
Analytics teams need to move faster, but cutting corners invites problems in quality and governance. How can you reduce cycle time to create and deploy new data analytics (data, models, transformation, visualizations, etc.) without introducing errors? The answer relates to finding and eliminating the bottlenecks that slow down analytics development.
Your Deployment Pipeline
Analytics development in a large data organization typically involves the contribution of several groups. Figure 1 shows how multiple teams work together to produce analytics for the internal or external customer.
Tasks in development organizations are often tracked using Kanban boards, tickets, or project-tracking tools. Figure 2 shows a Kanban board representing a project, with a yellow sticky note for each task. As tasks progress through milestones, they move from left to right until they reach the “Done” column.
Each of the groups shown in figure 1 tracks its own projects. Figure 3 shows the data-analytics groups again, each with its own Kanban board to track the progress of work items. To serve the end goal of creating analytics for users, the data teams are desperately trying to move work items from the backlog (left column) to the done column at the right, and then pass them off to the next group in line.
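The ticket flow described above can be sketched as a simple data structure. The column names and sample task below are illustrative, not taken from any particular tracking tool:

```python
# A minimal sketch of a Kanban board; columns are ordered left to right.
COLUMNS = ["Backlog", "In Progress", "Review", "Done"]

class KanbanBoard:
    def __init__(self):
        self.columns = {col: [] for col in COLUMNS}

    def add_task(self, task):
        """New tasks enter at the leftmost column."""
        self.columns["Backlog"].append(task)

    def advance(self, task):
        """Move a task one column to the right, unless it is already Done."""
        for i, col in enumerate(COLUMNS[:-1]):
            if task in self.columns[col]:
                self.columns[col].remove(task)
                self.columns[COLUMNS[i + 1]].append(task)
                return

board = KanbanBoard()
board.add_task("Add region filter to sales report")
for _ in range(3):  # Backlog -> In Progress -> Review -> Done
    board.advance("Add region filter to sales report")
print(board.columns["Done"])  # ['Add region filter to sales report']
```

In practice each group in figure 3 maintains its own board, and a "Done" item on one board becomes a "Backlog" item on the next board in line.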
Data professionals are smart and talented, and they work hard. So why does it take so long to move work tickets to the right? Why does the system become overloaded with unfinished work items, forcing the team to waste cycles on context switching?
To address these questions, we need to think about the creation and deployment of analytics like a manufacturing process. The collective workflows of all of the data teams are a linked sequence of steps, not unlike what you would see in a manufacturing operation. When we conceptualize the development of new analytics in this way, it offers the possibility of applying manufacturing management tools that uncover and implement process improvements.
The Theory of Constraints
One of the most influential methodologies for ongoing improvement in manufacturing operations is the Theory of Constraints (ToC), introduced by Dr. Eliyahu Goldratt in his 1984 business novel “The Goal.” The book chronicles the adventures of fictional plant manager Alex Rogo, who has 90 days to turn around his failing production facility. The plant can’t seem to ship anything on time, even after installing robots and investing in other improvements dictated by conventional wisdom. As the story progresses, our hero learns why none of his improvements have made any difference.
The plant’s complex manufacturing process, with its long sequence of interdependent stages, was throughput limited by one particular operation — a certain machine with limited capacity. This machine was the “constraint” or bottleneck. The Theory of Constraints views every process as a series of linked activities, one of which acts as a constraint on the overall throughput of the entire system. The constraint could be a human resource, a process, or a tool/technology.
In “The Goal,” Alex learned that “an improvement at any point in the system, not at the constraint, is an illusion.” An improvement made at a stage that feeds the bottleneck just lengthens the queue of work waiting for the bottleneck. Stages after the bottleneck remain starved no matter how much they are improved. Every loss of productivity at the bottleneck is a loss in the throughput of the entire system, while losses of productivity at any other step don’t matter as long as that step still produces faster than the bottleneck.
Even though Alex’s robots improved efficiency at one stage of his manufacturing process, they didn’t alleviate the true system constraint. When Alex’s team focused improvement efforts on raising the throughput of the bottleneck, they were finally able to increase the throughput of the overall manufacturing process. True, some of their metrics looked worse (the robot station efficiency declined), but they were able to reduce cycle time, ship product on time and make a lot more money for the company. That is, after all, the real “goal” of a manufacturing facility.
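The core insight here, that a serial pipeline can never produce faster than its slowest stage, can be illustrated in a few lines of code. The stage names and rates below are invented for illustration:

```python
# Throughput of a serial pipeline is capped by its slowest stage.
# Rates are work items per week; the numbers are invented for illustration.
stage_rates = {
    "ingest data": 20,
    "model": 12,
    "impact review": 3,   # the bottleneck
    "test": 8,
    "deploy": 15,
}

system_throughput = min(stage_rates.values())
bottleneck = min(stage_rates, key=stage_rates.get)
print(f"System throughput: {system_throughput}/week, limited by '{bottleneck}'")

# Doubling a non-bottleneck stage (Alex's robots) changes nothing...
stage_rates["model"] = 24
print(min(stage_rates.values()))  # still 3

# ...while improving the bottleneck raises the whole system.
stage_rates["impact review"] = 6
print(min(stage_rates.values()))  # now 6
```

The same arithmetic explains why the robot station's improved efficiency never showed up in the plant's shipments.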
Finding Your Bottleneck
To improve the speed (and minimize the cycle time) of analytics development, you need to find and alleviate the bottleneck. This bottleneck is what is holding back your people from producing analytics at a peak level of performance. The bottleneck can often be identified using these simple indications:
- Work in Progress (WIP) — In a manufacturing flow, work-in-progress usually accumulates before a constraint. In data analytics, you may notice a growing list of requests for a scarce resource. For example, if it takes 40 weeks to provision a development system, your list of requests for them is likely to be long.
- Expedite — Look for areas where you are regularly being asked to divert resources to ensure that critical analytics reach users. In data analytics, data errors are a common source of unplanned work.
- Cycle Time — Pay attention to the steps in your process with the longest cycle time. For example, some organizations take six months to shepherd 20 lines of SQL through the impact review board. If a step is starved or blocked by an external dependency, the bottleneck is that external factor.
- Demand — Note steps in your pipeline or process that are simply not keeping up with demand. For example, often less time is required to create new analytics than to test and validate them in preparation for deployment.
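If your project-tracking tool can export per-ticket data, the indicators above can be computed directly. The following is a minimal sketch; the stage names, ticket fields, and numbers are hypothetical:

```python
from collections import Counter
from statistics import mean

# Hypothetical export: (ticket_id, stage, days_spent_in_stage, done?)
tickets = [
    ("T1", "provision env", 38, False),
    ("T2", "provision env", 41, False),
    ("T3", "provision env", 35, False),
    ("T4", "develop", 4, True),
    ("T5", "review", 24, True),
    ("T6", "test", 9, False),
]

# WIP: open items queued at each stage; a long queue marks a likely constraint.
wip = Counter(stage for _, stage, _, done in tickets if not done)

# Cycle time: average days a ticket spends in each stage.
days_by_stage = {}
for _, stage, days, _ in tickets:
    days_by_stage.setdefault(stage, []).append(days)
cycle_time = {stage: mean(days) for stage, days in days_by_stage.items()}

print("Largest queue:", wip.most_common(1))
print("Slowest stage:", max(cycle_time, key=cycle_time.get))
```

In this toy data set, environment provisioning shows both the longest queue and the longest cycle time, the classic signature of a constraint.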
Example Bottlenecks in Data Analytics
You may notice a common theme in each of the example bottlenecks above. A bottleneck is especially problematic because it prevents people on the analytics team (analysts, scientists, engineers, …) from fulfilling their primary function: creating new analytics. Bottlenecks distract them from high-priority work, redirect their energy to non-value-added activities, and prevent them from implementing new ideas quickly.
When managers talk to data analysts, scientists, and engineers, they can quickly discover the issues that slow them down. Figure 5 shows some common constraints. For example, data errors in analytics cause unplanned work that upsets a carefully crafted Kanban board. Work-in-progress (WIP) is placed on hold, and key personnel context switch to address the high-severity outages. Data errors flood the Kanban boards with new tasks, which can overwhelm the system: formerly high-priority tasks are put on hold, and management is burdened with the complexity of many more work items. Data errors also affect the culture of the organization. After a series of interruptions from data errors, the team becomes accustomed to moving more slowly and cautiously. From a Theory of Constraints perspective, data errors severely impact the overall throughput of the data organization.
A related problem, also shown in figure 5, occurs when deployment of new analytics breaks something unexpectedly. Unsuccessful deployments can be another cause of unplanned work which can lead to excessive caution, and burdensome manual operations and testing.
Another common constraint is team coordination. The teams may all be furiously rowing the boat, but perhaps not in the same direction. In a large organization, each team’s work usually depends on the others’, and the result can be a serialized pipeline. With better collaboration, tasks could be parallelized, and with proper coordination between and among teams, new analytics wouldn’t break existing data operations.
A wide variety of constraints potentially slow down analytics development cycle time. In development organizations, there are sometimes multiple constraints in effect. There is also variation in the way that constraints impact different projects. The following are some potential rate-limiting bottlenecks to rapidly deploying analytics:
- Dependency on IT to make schema changes or to integrate new data sets
- Impact Review Board
- Provisioning of development systems and environments
- Long test cycles
- Data errors causing unplanned work
- Manual orchestration
- Fear of breaking existing analytics
- Lack of teamwork among data engineers, scientists, analysts, and users
- Long project cycles — deferred value
When you have identified a bottleneck, the Theory of Constraints offers a methodology called the Process of Ongoing Improvement (POOGI) to address it. If many active bottlenecks all need to be addressed, it may be more effective to focus on them one at a time. Below, we suggest a method that we have found particularly effective in prioritizing projects.
Alleviating the Bottleneck
Once a constraint is identified, the Theory of Constraints recommends a five-step methodology to address it:
1. Identify the constraint
2. Exploit the constraint — Make improvements to the throughput of the constraint using existing resources
3. Subordinate everything to the constraint — Review all activities and make sure that they benefit (or do not negatively impact) the constraint. Remember, any loss in productivity at the constraint is a loss in throughput for the entire system.
4. Elevate the constraint — If, after steps 2–3, the constraint remains in the same place, consider further measures, such as investing in additional resources, to alleviate this step as a bottleneck
5. Prevent inertia from becoming a constraint by returning to step 1.
The Theory of Constraints Applied to IT
Author Gene Kim has described his leading book on DevOps, “The Phoenix Project,” as essentially an adaptation of “The Goal” to IT operations. In the book, one important bottleneck is a bright programmer named Brent, who is needed for every system enhancement and is constantly pulled into unplanned work. To alleviate the bottleneck, the team implements Agile development (small lot sizes) and DevOps (automation). As the team gets better at relieving and managing its constraints, the output of the whole department dramatically improves.
Prioritizing DataOps Projects Based on Desired Outcomes
If you have identified multiple bottlenecks in your development process, it may be difficult to decide which one to tackle first. DataOps is a methodology that applies Agile, DevOps and lean manufacturing to data analytics. That’s a lot of ground to cover. One way to approach this question is to think like a product or services company.
The data organization creates analytics for its consumers (users, colleagues, business units, managers, …). Think of analytics as your product and data consumers as your customers. Like any product or services organization, you might start by simply asking your customers what they want.
The problem is that customers don’t actually know what products or services they want. What customer would have asked for Velcro, Post-It notes, or Twitter? Many data professionals can relate to the experience of working diligently to deliver what customers say they want, only to receive a lukewarm response.
There is much debate about how to listen to the voice of the customer (Dorothy Leonard, Harvard Business School, “The Limitations of Listening”). Customer preferences are reliable when you ask customers to make selections within a familiar product category. If you venture outside the customer’s experience, you tend to encounter two blocks. First, people fixate on the way that products are normally used, which prevents them from thinking outside the box. Second, customers have seemingly contradictory needs. Your data-analytics customers want analytics to be error-free, which requires a lot of testing, but they dislike waiting for lengthy QA activities to complete. Data professionals might feel like they are in a no-win situation.
Management consultant Anthony Ulwick contends (Harvard Business Review) that you should not expect your customers to recommend solutions to their problems. They aren’t expert enough for that. Instead, ask about desired outcomes. What do they want analytics to do for them? The customers might say that they want changes to analytics to be completed very fast so they can play with ideas. They won’t tell you to implement automated orchestration or a data warehouse which can both contribute to that outcome.
The outcome-based methodology for gathering customer input breaks down into five steps.
Step 1 — Plan outcome-based customer interviews
Deconstruct, step by step, the underlying processes behind your delivery of data analytics. It may make sense to interview internal users, such as the data analysts who leverage data to create analytics for business colleagues.
Step 2 — Conduct Interviews
Pay attention to desired outcomes, not recommended solutions. Translate solutions into outcomes by asking what benefit the suggested feature or solution provides. Participants should consider every aspect of the process or activity they go through when creating or consuming analytics. A good way to phrase a desired outcome is in terms of the type (minimize, increase) and quantity (time, number, frequency) of improvement required. Experts in this method report that 75% of customers’ desired outcomes are usually captured in the first two-hour session.
Step 3 — Organize the Data
Compile a master list of outcomes, remove duplicates, and categorize the outcomes into groups that correspond to each step in the process.
Step 4 — Rate the outcomes
Conduct a quantitative survey to determine the importance of each desired outcome and the degree to which the outcome is satisfied by the current solution. Ask customers to rate, on a scale of 1–10, the importance of each desired outcome (Importance) and the degree to which it is currently satisfied (Satisfaction). These factors are inputs to the opportunity algorithm below, which rates outcomes based on potential.
The opportunity algorithm makes use of a simple mathematical formula to estimate the potential opportunity associated with a particular outcome:
Opportunity = Importance + (Importance - Satisfaction)
Note that if Satisfaction is greater than Importance, the term (Importance - Satisfaction) is treated as zero, not negative; satisfaction beyond importance does not reduce the opportunity score.
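As a sketch, the scoring and ranking can be automated in a few lines. The outcomes and ratings below are invented for illustration:

```python
# Opportunity = Importance + max(Importance - Satisfaction, 0),
# with ratings on a 1-10 scale. Outcomes and scores are invented examples.
survey = [
    # (desired outcome, importance, satisfaction)
    ("Minimize time to deploy a change to analytics", 9, 3),
    ("Minimize number of data errors reaching users", 8, 6),
    ("Increase frequency of data refreshes", 6, 8),
]

def opportunity(importance, satisfaction):
    # The difference is floored at zero, per the note above.
    return importance + max(importance - satisfaction, 0)

# Rank outcomes from highest to lowest opportunity.
ranked = sorted(
    ((opportunity(i, s), outcome) for outcome, i, s in survey),
    reverse=True,
)
for score, outcome in ranked:
    print(f"{score:>2}  {outcome}")
```

Note how the third outcome scores only its importance (6): it is already well satisfied, so it presents little opportunity even though users rate it as moderately important.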
When you are done, you should have produced something like the example below.
Step 5 — Guide Innovation
The table above reveals which outcomes are important to users and de-emphasizes those that are already well served by the existing analytics development process. Outcomes that are both important and unsatisfied rise to the top of the priority list. This data can be used as a guide to prioritize process improvements in the data-analytics development pipeline.
The Path Forward for DataOps
DataOps applies manufacturing management methods to data analytics. One leading method, the Theory of Constraints, focuses on identifying and alleviating bottlenecks. Data organizations can apply this method to address the constraints that prevent them from achieving peak productivity. Bottlenecks lengthen the cycle time of developing new analytics and prevent the team from responding quickly to requests for new analytics. If these bottlenecks can be alleviated or eliminated, the team can move faster, developing and deploying high-quality analytics in record time.
If you have multiple bottlenecks, you can’t address them all at once. The opportunity algorithm enables the data organization to prioritize process improvements that produce outcomes that are recognized as valued by users. It avoids the requirement for users to understand the technology, tools, and processes behind the data analytics pipeline. For DataOps proponents, it can provide a clear path forward for analytics projects that are both important and appreciated by users.