The Four-Way Fit — Chapter 10: The “Heads Down” Motion in Phase II (Part 2)

Tom Mohr
Published in CEO Quest Insights · Feb 2, 2021 · 19 min read

--

If drawing up the game plan on the side of the field is the “heads up” motion, the “heads down” motion is the game itself. In Phase II, the game becomes three-dimensional chess. To grow requires a mix of big systemic change, incremental continuous improvement and test execution. As always, these occur at the intersection of people, workflows, technology and money flows.

Now your enterprise has rising heft and momentum. Its purpose cascades down into operating system purposes, which cascade down into domain purposes. If the enterprise is organized on the principles of domain-driven organization design, then each domain within each operating system is managed by one or more domain teams. Each team sharpens the pursuit of its domain purpose by embracing clear business outcome objectives and associated key result expectations (OKRs). These OKRs are tracked with KPIs and achieved via specific methods. Each team maintains and continuously improves its domain, executes plans that emerge from settled assumptions, and runs tests that emerge from testable claims.

In this chapter we will consider different types of claims, and how the claim type impacts “heads down” actions. Then at the end of the chapter I’ll share some thoughts on process optimization — an important part of the “heads down” motion in Phase II.

Settled Assumptions

I have said that in Phase II, most claims (seventy percent or more) should be settled assumptions. The rest need to be tested. Claims emerge from learning and experience. Each sits atop some assembly of facts. When that set of facts is strong enough, you can designate the claim a settled assumption. Every settled assumption is the basis for action. The change prompted by settled assumptions comes in two varieties: a great leap forward, or continuous improvement. Let’s take a look at both.

The Great Leap Forward

Sometimes, settled assumptions can spark a great leap forward. “Great leap forward” changes have broad scope; they are multi-step and multi-dimensional. They touch multiple domain teams — even multiple company operating systems (as with a CRM change, which impacts both the revenue engine system and the accounting system). They tend to take months or even years to complete. They require many sequential steps, and exhibit impacts in three dimensions (people, workflows and technology). Plans that emerge from companywide strategic imperatives are often of this type.

At the highest level, a “great leap forward” change progresses through nine steps:

  • Name the problem
  • Envision the change
  • Design it
  • Architect it
  • Build it
  • Implement it
  • Stabilize it
  • Optimize it
  • Scale it

To choreograph a “great leap forward” change takes a playbook — drawn from leaders’ past experience, from vendor best practices, or from industry best practices. Such a playbook is itself a settled assumption — a proven methodology for executing a complicated change within a certain problem domain. If no playbook exists, this change approach is not advised. You are better off narrowing the scope of the plan and moving more incrementally. To invent a “great leap forward” plan from scratch is a risky proposition indeed.

Your playbook will give you a template for change, but it can’t give you the specifics. For that, you need a project plan. An experienced project manager will build a plan that breaks the change into phases, tasks, roles, and timelines, accounting for sequencing and dependencies.

The biggest limiting factor in achieving “great leap forward” change is leadership capacity. Complex change requires leaders who are 10Xers. They must be systems thinkers who can balance now, near and far, and can think through the interdependencies between people, workflows, technology and money flows. These leaders are your most precious assets; their time must be deployed to the most important plans and tests. As Netflix CEO Reed Hastings said, the only antidote to the rising complexity of scale is to increase the density of 10Xers in the company.

As important as the leader’s competency is, success demands more. Most big change efforts require strong team members as well. Project team members must be both competent and available (capable of dedicating the required time to the project). As to competency, needs will vary based on the type of project and the specifics of the situation. But in general, three types of competency are needed on a project team:

  • Functional competency
  • Thinking competency
  • Interpersonal competency

Each project will call for a set of functional capabilities. Do you need engineering skill or market knowledge? Next, you will need certain thinking competencies. Does your project need a strategic thinker, a systems thinker, a design thinker or a lean thinker, or some mix of these? Finally, it is important for every member of the team to exhibit interpersonal effectiveness. Do team members possess the ability to “seek first to understand, then be understood”? Is each team member able to “attack a problem instead of a person”? Can each team member balance “I” needs (ego) with “we” needs (the needs of all members of the team)? Can every team member accept “shared accountability for success”? These basic teamwork capabilities are key in executing a successful “great leap forward” initiative.

Consider the following example.

Yours is a B2B SaaS company. It has just moved from the Minimum Viable Repeatability stage to the Minimum Viable Scaling stage. Traction is rising and the price point is holding. At this point, it has become clear that your product boasts an LTV profile exceeding $500,000. Let’s assume this LTV profile is now a settled assumption. It’s time to move beyond CEO and founder selling; you are ready for the revenue engine to be built out and scaled up.

Given your LTV profile, there is a known playbook for how to build this revenue engine. Established best practice calls for a customized, “account-based” design. The list of accounts to crack open is clear. If even ten percent of these accounts become customers, your company will achieve a nine-figure valuation, so the battle must be engaged within these accounts.

Since brand reputation and category leadership will be decisive, the marketing functions of brand development and product marketing will predominate. A robust content creation engine will need to be built to drive thought leadership. Growth marketing will be a secondary consideration.

Sales will follow a customized prospect engagement approach, with a research team conducting preparatory research before prospects are engaged within a target enterprise. SDRs will need to seek account entry via multiple connection methods with multiple people at multiple levels. Their objective will be to create an Opportunity that an account executive can take on and develop. These Opportunities will then need to be nurtured in four stages — “discover”, “prove”, “negotiate” and “close.” This sales approach will require the hiring of account executives who understand the “challenger selling” approach. The customer success function will also be critical here — customers will need “white glove” handling at all times.

Now you must put together the team that can help you turn the playbook into your new reality. You will need strong heads of marketing, sales and customer success. You’ll need a strong marketing and sales ops leader to conceive, build and implement the workflows and supporting technologies.

The above example addresses the scenario in which you build out a new capability for the first time. The revenue engine is nascent; there’s an empty canvas in front of you. A smart leader with a solid team will be able to build out a sound project plan based on a known playbook. He will roll out the change in phases. The challenge (and the benefit) is that there is no prior practice — everything is new. On the one hand, this requires you to figure everything out in all its multidimensional complexity, from scratch. On the other hand, no existing operating procedure needs to be dismantled. Leveraging the playbook, you can simply follow the assembly requirements, testing each component as you build it.

There is a second type of “great leap forward” initiative.

This type requires that you dismantle an existing system so as to replace it with a new, upgraded one. Let’s call such initiatives rip and replace initiatives. It’s like turning your bicycle into a motorcycle while pedaling. This type is at play when, for instance, you replace your CRM system or accounting system. Once again, you must figure out how to integrate change across three dimensions: people, workflows and technology. But now the change includes a dismantling step, which adds to the complexity.

A “rip and replace” initiative is best executed by following the “strangler pattern”. This term comes from the world of engineering. In refactoring a monolithic technical system, it’s rarely advisable to replace the entire monolith all at once. That “big bang” approach is a known anti-pattern. Smart engineering teams, following the strangler pattern, will seek out a seam in the monolith. They will then isolate and separate one component, reconnecting it back to the monolith via an arm’s-length API. This frees the team to refactor that single component without messing up the overall system. Once the component is refactored into microservices and works properly, they can then move on to the next component and do the same. And then on to the next. By this method, slowly but surely, they “strangle” the monolith and transform it into a modular, microservices-based system.
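To make the pattern concrete, here is a minimal sketch in Python (the class and capability names are hypothetical, invented purely for illustration): a thin routing facade sends calls for an extracted capability to its new service, while everything else still flows to the monolith. Each time a component is carved out, one more route is flipped.

```python
# Minimal strangler-pattern sketch (hypothetical names, for illustration only).
# A thin facade routes each call either to the legacy monolith or to a newly
# extracted service, one capability at a time.

class LegacyMonolith:
    def handle(self, capability: str, payload: dict) -> dict:
        return {"handled_by": "monolith", "capability": capability, "payload": payload}

class InvoicingService:
    """A single component carved out of the monolith behind its own API."""
    def handle(self, payload: dict) -> dict:
        return {"handled_by": "invoicing-service", "payload": payload}

class StranglerFacade:
    """Routes each capability to its new owner once it has been extracted."""
    def __init__(self):
        self.monolith = LegacyMonolith()
        self.extracted = {}  # capability name -> new service

    def extract(self, capability: str, service) -> None:
        # Flip one route at a time; the rest of the monolith is untouched.
        self.extracted[capability] = service

    def handle(self, capability: str, payload: dict) -> dict:
        service = self.extracted.get(capability)
        if service is not None:
            return service.handle(payload)
        return self.monolith.handle(capability, payload)

facade = StranglerFacade()
print(facade.handle("invoicing", {"amount": 100}))  # still handled by the monolith
facade.extract("invoicing", InvoicingService())     # carve out one seam
print(facade.handle("invoicing", {"amount": 100}))  # now handled by the new service
```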

Just as the strangler approach makes sense with technical systems, so it does with socio-technical systems. I once made the mistake of taking the “big bang” approach with a complex change effort. Early in my career, I ran sales and marketing at the Star Tribune newspaper in Minneapolis. I decided to license a CRM system so we could better manage customer engagement. We found the best CRM system on the market at the time. But instead of rolling out this new CRM in stages, I tried to simultaneously move every sales team over to the new CRM system. Despite many months of careful waterfall planning, the implementation was a disaster. Had I adopted the strangler approach — introducing the CRM into just one sales team, and then getting that team’s people, workflows and technology right before moving on to the next team — I would have been successful. As it was, we had to roll back and start over.

The point is that “great leap forward” initiatives are best approached step by step, component by component. In the revenue engine example, you might build just one sales pod (with two SDRs and one AE) first, testing and iterating until it begins to perform properly. Only then might you add a second, third, or twelfth pod. Even though you have a playbook, don’t rush. It’s best to prove performance at each step and within each component before you move on to the next one.

Continuous Improvement

In the rising enterprise, there is a second type of change that builds upon settled assumptions: continuous improvement. Domain teams spend most of their time doing routine work. A product development team must execute a new sprint every week. An infrastructure team must make progress on tools deployment every week. A marketing team must execute a new campaign every week. A sales development / sales executive pod must convert a certain number of leads into opportunities every week. The beating pulse of any domain team is its core, repetitive routines. These routines can always be improved.

Remember that the mission in Phase II is to optimize value and build sustainable competitive advantage with the minimum waste of money and time. If waste is the virus, continuous improvement is the medicine. Every team that plays a role in optimizing customer-defined value and building sustainable competitive advantage can continuously improve. The secret is to do so while keeping the focus on outcomes, not outputs. It’s so easy to make the process the thing. The process is never the thing; the process must serve the business outcome. As you teach teams to adopt continuous improvement capabilities, make sure they remember that the only improvements that matter are those that improve business outcomes.

No team is more important to value optimization than the product development team. Over the past twenty years, agile methods have been embraced worldwide by such teams. But agile methods have themselves evolved. Whereas once agile implied almost no upfront planning, that’s changed. A new agile approach, called “Disciplined Agile Delivery” (DAD), incorporates light planning. In DAD, development work is broken down into a light inception process at the beginning, a light transition process at the end and a more intensive construction process in the middle.

The goal in the inception phase is to understand the business problem to be solved and to provide the minimum architecture necessary to guide initial construction. On the one hand, it avoids the “heavy upfront” approach of the waterfall method. But on the other hand, it allows for enough up-front design to model the problem domain well, and to draw up a basic system architecture. Notice that this “inception phase” closely mimics the Four-Way Fit’s “heads up” motion, wherein the framework spreadsheet and plans are developed.

The construction phase is the longest phase of a project. It mimics the “heads down” motion. In this phase, the goal is to produce a potentially consumable solution in every iteration — one that addresses stakeholder needs. The team builds component by component, feature by feature, in sprints — referred to in DAD as iterations.

Finally, in the transition phase, solutions are confirmed to be consumable and are prepared for release. It is a return to the “heads up” motion. Throughout the project, the goal is to fulfill the mission, grow team members, improve the process, address risk, coordinate activities and leverage and enhance existing infrastructure. DAD rebalances agile, acknowledging that work at the front end of the project is required to get basic system architecture right. The resulting systems development life cycle runs from a light inception phase, through iterative construction, to transition and release.

This DAD method is designed for efficiency and quality. A list of initial requirements and a release plan are developed during the inception phase, along with the initial modeling and high-level architecture. Work proceeds in iterations (sprints), with daily coordination meetings (scrums). Regular retrospectives and demos to stakeholders keep the project aligned with stakeholder needs. Enhancement requests and defect reports feed the backlog with new work items — a continuous improvement feedback loop.
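As a rough illustration of that feedback loop (a hypothetical structure, not part of the DAD specification), the backlog can be pictured as a prioritized queue that enhancement requests and defect reports feed, and that each iteration draws from:

```python
import heapq

# Rough illustration of the backlog feedback loop described above
# (hypothetical structure, not part of the DAD specification).
# Enhancement requests and defect reports become work items; each
# iteration pulls the highest-priority items off the backlog.

class Backlog:
    def __init__(self):
        self._items = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def add(self, title: str, kind: str, priority: int) -> None:
        # Lower number = higher priority; defects often arrive at priority 1.
        heapq.heappush(self._items, (priority, self._counter, kind, title))
        self._counter += 1

    def plan_iteration(self, capacity: int) -> list:
        # Pull as many of the top items as the team's capacity allows.
        return [heapq.heappop(self._items)[3]
                for _ in range(min(capacity, len(self._items)))]

backlog = Backlog()
backlog.add("Add CSV export", kind="enhancement", priority=2)
backlog.add("Fix login timeout", kind="defect", priority=1)
backlog.add("Refine onboarding copy", kind="enhancement", priority=3)
print(backlog.plan_iteration(capacity=2))  # ['Fix login timeout', 'Add CSV export']
```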

In “Going Beyond Scrum”, Scott Ambler (the originator of the DAD approach) predicts that over time, inside a domain team, the inception and transition phases will shrink as the model and architecture mature, and the team will focus more and more on the construction phase — the world of continuous delivery.

With continuous delivery, a single developer might release new code into production as often as twice a day. That’s continuous improvement.

When the product road map needs to be changed, and during periods of remodeling and architectural renewal, the length of the inception and transition phases will expand again. As teams return to periods of architectural stability, these two phases will shrink again. For most teams it is likely that the contraction-expansion cycle will recur episodically over time.

Testable Claims

As noted earlier, in Phase II at least seventy percent of the claims in your framework spreadsheet should be settled assumptions. If that percentage were much lower, it would mean either that you were making too many unsubstantiated bets, or that you didn’t have confidence in your path forward. In such a case, the only smart thing would be to stop everything and devote your energies to confirming your assumptions. Though seventy percent or more of your assumptions are settled, the rest need to be tested before you invest in them. In the previous chapter, I suggested that in business there are two broad types of testable claims: “already true” claims and “if / then” claims. The testing approach is different for each.

The “already true” claim (in which you claim something is already true, such as “Our TAM is $2B”) is validated by evidence. Both the quantity and the quality of evidence matter; these evidentiary facts must meet a certain threshold (set by you) for the claim to become a settled assumption. In your research, you will seek to confirm that something you already believe to be true is in fact true.

On the other hand, with an “if / then” claim, you are considering an investment in a change initiative. It assumes a current state, and posits an intervention that will result in a future state. Since you don’t want to waste money and time, you decide to conduct a small-scale test first so as to confirm that “if” this change is fully implemented, “then” the expected beneficial outcome will be achieved. A proper test will validate the current state and, once the intervention has occurred, validate whether the desired future state has been achieved.

The nature of your test will depend on the nature of the claim and the degree of certitude you seek. An “if / then” test can be research-based. For instance, in developing product strategy you might conduct various types of customer research, or review platform data showing current product usage patterns, or consult leaders in sales, marketing and product. You might leverage your data infrastructure to conduct ad hoc queries of existing transaction data. That’s research. If it provides sufficient validation, you may conclude you are ready to invest in the change initiative.

Or you might choose to conduct an experiment. For instance, you might conduct an A/B test of alternative product features, or campaign messaging. Or you might conduct a field test of different price points in different markets. Google AdWords lets you run multiple experiments to optimize ads. The bigger you are, the more you can leverage data science to execute experiments.
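For a sense of what analyzing such an experiment involves, here is a minimal sketch (the conversion numbers are made up for illustration): a two-sided, two-proportion z-test that asks whether variant B’s conversion rate beats variant A’s by more than chance would explain.

```python
from statistics import NormalDist

# Minimal A/B test readout (hypothetical numbers): did variant B's
# conversion rate beat variant A's by more than chance would explain?
# Two-proportion z-test, two-sided.

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return p_a, p_b, z, p_value

p_a, p_b, z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p:.3f}")
# A p-value below the threshold you set up front (e.g. 0.05) supports treating
# the "if / then" claim as validated at that confidence level.
```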

Test Design

Test design matters. Who will be assigned the task of running the test? What, exactly, will be tested? What test outcomes will prove the claim? How will you run the test? What data will you gather? How will that data be analyzed? How can you optimize test validity?

The most delicate test type is an experiment. A sound experiment requires that you follow the scientific method, and ensure your results meet standards of statistical validity. In experiments, you need to ensure that:

  • The hypothesis clearly defines the independent variable, and its assumed impact on the dependent variable
  • There are no confounding variables present that could also cause changes to the dependent variable
  • Participants in the control group and the experimental group are randomly assigned
  • The test replicates the real-world setting as closely as possible
  • There is enough data gathered to ensure statistical reliability within predefined confidence levels and intervals
  • The test is easily replicated

If you follow these test design principles, you’ll have confidence in the results. A quick way to sanity-check the “enough data” requirement is sketched below.
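As a rough, hypothetical example (the baseline and target rates are invented), this sketch estimates how many observations each group needs in order to detect a given lift at a chosen significance level and statistical power:

```python
from statistics import NormalDist

# Rough pre-test sample-size check (hypothetical numbers): how many
# observations per group are needed to detect a given lift in a
# conversion rate at a chosen significance level and power?

def sample_size_per_group(p_base, p_target, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p_target - p_base) ** 2
    return int(n) + 1

# e.g. detecting a lift from a 5% to a 6.5% conversion rate
print(sample_size_per_group(0.05, 0.065))  # about 3,800 observations per group
```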

A Word About Process Optimization

As a development method, disciplined agile delivery follows Kaizen principles. “Kaizen” means continuous improvement in Japanese. In a moment, I’ll share the Kaizen philosophy and method. But first it is important to get across some basic process improvement concepts.

I’ve said many times that a domain team achieves its business outcome objectives through the interaction of people, workflows, technology and money flows. These interactions follow a known process. High performing teams keep the process focused on business outcomes, and then work to continuously improve it.

Perhaps the best book ever written on process optimization is Eliyahu Goldratt’s 1984 classic The Goal, which introduced the Theory of Constraints. In it, he notes that the mission of the enterprise is to optimize long-term net profit, ROI and cash flow (in other words, enterprise value). To get there, the enterprise (and, by extension, each domain team) must continuously attack waste, and then find and remove the limiting factor — what he calls the bottleneck.

A process exists to achieve some transformation that increases enterprise value. The transformation might be physical, such as the assembly of a machine in a factory. Or it might be digital — for instance, completion of a customer payment workflow within an e-commerce site. Or it might be a human service, such as managing a customer support line or performing customer success routines. Regardless of the process, its capacity is the capacity of its bottlenecks.

A step can either be a bottleneck step or a non-bottleneck step. A bottleneck step is any step for which the capacity is less than the demand placed upon it. An hour lost at a bottleneck is an hour lost throughout the system. The transformation that occurs at any step is performed by a resource. The resource may be a person, or a machine, or a computer.

Bottlenecks attract backlogs. In fact, this is how you find bottlenecks. Small backlogs are fine — they are buffers, ensuring consistent flow through a bottleneck step. But if they grow past some modest buffer level, they become a source of suboptimization. Backlogs are caused by dependent events and statistical fluctuations in the flow of materials (or data). To avoid excessive backlogs, you design the process so that any items needed to enable materials to go through the bottleneck are addressed first. This includes completion of any non-bottleneck steps that are ahead of the bottleneck in the workflow.

Non-bottleneck steps should run at the same throughput rate as the bottleneck step — not at their own potential capacity rates. Otherwise a backlog will emerge at the bottleneck. In other words, the capacity of the bottleneck determines the proper pace of work through non-bottleneck steps. This, of course, requires that you have feedback loops in place — so that the non-bottleneck step can be regularly updated as to the bottleneck step’s throughput rate.

Once you know when the materials (or data) that have passed through the bottleneck step must reach final assembly, you can work backwards to determine when to release all materials (or data) into the process. By this means you ensure that the release of materials through non-bottleneck steps occurs at a pace consistent with the capacity of the bottleneck.
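To make the arithmetic concrete, here is a minimal sketch with made-up capacities: the system’s throughput is set by its slowest step, and releasing work upstream faster than that rate only piles up inventory in front of the bottleneck.

```python
# Minimal Theory-of-Constraints arithmetic (hypothetical capacities).
# Throughput of a linear process is set by its slowest step; releasing
# work upstream faster than that rate only grows the bottleneck's queue.

steps = {               # units each step can process per hour
    "prep":      60,
    "assembly":  25,    # the bottleneck
    "packaging": 50,
}

bottleneck = min(steps, key=steps.get)
throughput = steps[bottleneck]
print(f"Bottleneck: {bottleneck}, system throughput: {throughput}/hour")

# Release work at the upstream step's full capacity for an 8-hour shift...
release_rate = steps["prep"]
queue_growth = (release_rate - throughput) * 8
print(f"Queue in front of {bottleneck} after one shift: {queue_growth} units")

# ...versus pacing the release to the bottleneck's rate, as described above:
paced_release = throughput
print(f"Queue growth when paced to the bottleneck: {(paced_release - throughput) * 8} units")
```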

In process redesign it’s best (wherever possible) to move bottlenecks to the beginning of a process. That makes it easier for the bottleneck to set the pace for the rest of the process. If people are needed to execute the bottleneck step, top performers should occupy these positions. If the resource that executes the bottleneck transformation is a machine or a computer, its performance is a first-order priority. Be sure not to build a separate quality control step into a process. Quality control should occur within a step, not after it.

For every process step, there are four sub-steps:

  • Set-up (the time a part waits while the resource prepares itself to work on it)
  • Process time (the time actually working on the process step)
  • Queue time (the time a part spends in line for a resource while the resource is busy working on something else ahead of it)
  • Wait time (the time a part is waiting for another part)

For parts or data that go through bottlenecks, queues are the primary issue. To solve for queues, cut the batch sizes that flow into bottleneck steps. This simplifies processing. For parts or data going through non-bottleneck steps, waits are the primary issue. Non-bottleneck steps can become capacity constraint resources if their sequencing of work creates holes in the work-in-process buffers in front of bottlenecks. In general, it is best to work on parts or data in the sequence in which they arrive (first come, first done). This should cause fewer holes in buffers, and will simplify tracking of parts or data.
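A back-of-the-envelope sketch (with made-up numbers) shows why smaller batches cut the time parts spend waiting on their batch-mates: in batch-and-queue flow, nothing moves to the next step until the whole batch is finished.

```python
# Back-of-the-envelope effect of cutting batch size (hypothetical numbers).
# With batch-and-queue flow, a part cannot move to the next step until its
# whole batch is done, so much of its lead time is spent waiting on
# batch-mates (the queue and wait sub-steps listed above).

def batch_lead_time(batch_size, steps=4, setup_min=10, unit_min=2):
    # Time for one batch to clear each of the steps in sequence.
    return steps * (setup_min + batch_size * unit_min)

for batch in (100, 50, 25):
    print(f"batch of {batch:>3}: first parts emerge after "
          f"{batch_lead_time(batch)} minutes")
# batch of 100: 840 minutes; batch of 50: 440 minutes; batch of 25: 240 minutes
```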

If you can increase the quality of a process while reducing its cost, you have improved profit, ROI and cash flow for the process and the enterprise. Costs go down when rework is reduced, when human labor is replaced with automation, when process input cost is reduced or when the volume of required process inputs is reduced per unit of output.

To continuously improve business outcomes, forward-thinking domain teams follow Kaizen principles. As mentioned earlier, disciplined agile delivery methods follow Kaizen principles. But these principles are relevant everywhere — for marketing teams, sales pods, customer success groups, a business development group, or even a price optimization team.

Kaizen is both a philosophy and a method of action. Kaizen philosophy is grounded in the principle of continuous, incremental improvement. To accomplish that outcome, it espouses a culture of team-based employee engagement. Within a domain team, each employee is expected to live by a standard procedure. But as she does, she is also encouraged to identify waste and limiting factors, propose improvement ideas, participate in experiments and support the implementation of validated improvements. By these feedback loops the standard procedure is itself improved. That’s Kaizen philosophy.

As a method of action, Kaizen involves the following steps:

  • Center on team purpose — the business outcomes to be achieved to enhance customer-defined value or build sustainable competitive advantage
  • Define standard procedure so that every role is clear
  • Map current procedures (people, workflows, technology and money flows)
  • Measure KPIs
  • Identify waste and any limiting factors to growth
  • Identify improvement ideas
  • Turn these ideas into hypotheses and test them via experiments
  • If verified, implement improvements and update the standard procedure
  • Repeat

Teams following Kaizen principles leverage seven classic tools to help track and analyze the system. Of course, each team will use only those tools that fit its circumstances. The first five lend themselves to automation (a minimal control chart example follows the list):

  • Check Sheet
  • Histogram
  • Pareto Chart
  • Control Chart
  • Scatter Diagram
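Of these, the control chart is perhaps the simplest to automate against a team KPI. A minimal sketch, using made-up weekly readings: set control limits from a stable baseline period, then flag later readings that fall outside the mean ± 3 standard deviation band.

```python
from statistics import mean, stdev

# Minimal control-chart calculation (hypothetical KPI readings): set control
# limits from a stable baseline period, then flag later readings that fall
# outside the mean +/- 3 standard deviation band.

baseline = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.0, 3.7, 4.3, 4.1]   # weekly conversion %
recent = {"week 11": 4.2, "week 12": 6.1, "week 13": 3.9}

center = mean(baseline)
sigma = stdev(baseline)
upper, lower = center + 3 * sigma, center - 3 * sigma
print(f"center {center:.2f}, control limits [{lower:.2f}, {upper:.2f}]")

for week, value in recent.items():
    status = "in control" if lower <= value <= upper else "out of control: investigate"
    print(f"{week}: {value} -> {status}")
```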

These final two tools are used (manually) by teams to identify problems, and to map and optimize processes:

  • Fishbone Diagram
  • Flowchart

Summary

By acting on settled assumptions, growth happens day by day, step by step. Most days, teams pursue continuous improvement by following Kaizen principles. Sometimes, when the situation demands and a playbook exists, you will mobilize a great leap forward initiative. Done well, both lead to growth.

And then there are tests. Whether the claim to be tested is of the “already true” variety or the “if / then” variety, your mission is to find the lightest test that can meet your validation standard. Perhaps all it requires is simple research. Or perhaps a formal scientific experiment is needed. It’s your job to choose the right test for the situation, and to execute it well. The goal, of course, is to validate the testable claim, so that you can either reject and replace it or name it a settled assumption. By this means you grow, stage by stage, along the journey of company-building.

___________

If you liked this article, please show your appreciation by “clapping” — click rapidly on the hands icon below — so that other people can find it. Thank you.

To view all chapters go here.

If you would like more CEO insights into scaling your revenue engine and building a high-growth tech company, please visit us at CEOQuest.com, and follow us on LinkedIn, Twitter, and YouTube.
