De-risking Innovation

By Michael B. Horn

Personalizing learning for all students to help each and every one fulfill their potential requires that schools innovate — and not just once, but repeatedly. Yet innovating when children are involved and the odds are uncertain can feel risky and unwise.

At the same time, not innovating in our schools also carries huge risks that are increasingly well known.

So what is a school leader to do?

Fortunately, we don’t have to choose between two risky endeavors. There is a way to de-risk the innovation process: discovery-driven planning.

In our book Blended: Using Disruptive Innovation to Improve Schools, Heather Staker and I write about this process, which was first introduced by Rita Gunther McGrath, a professor at Columbia Business School, and Ian C. MacMillan, a professor at the Wharton School of the University of Pennsylvania. Discovery-driven planning bears a strong resemblance to the newer design methodology called “lean start-up,” an approach that Steve Blank first conceptualized in 2003, based in part on the concept of discovery-driven planning. Because most schools are not start-ups seeking to “acquire” students — rather, they are already working with students, parents, and teachers who have existing expectations for their school — we think the discovery-driven planning framework, which helps reduce the risks of innovation, is more appropriate for most school leaders and teachers who are innovating.

Discovery-driven planning flips the conventional planning process on its head. In the standard planning process, you make a plan, look at the projected outcomes from the plan, and then, assuming those outcomes look desirable, you implement it.

This approach works well when you have tried something similar before or the innovation is familiar and proven. But if you are doing something radically different from anything you have done before, something unfamiliar and unpredictable with a low ratio of knowledge to hypotheses (and personalizing learning through blended learning for the first time often qualifies), you need a very different process. The standard planning process won’t work because the assumptions, both implicit and explicit, on which the projected outcomes rest are often wrong. The key to success will instead be the ability to test hypotheses and to keep iterating on plans as you gain more information.

A discovery-driven planning process follows these four steps:
Step 1: List desired outcomes.
Step 2: Determine what assumptions must prove true for outcomes to be realized.
Step 3: Implement a plan to learn whether the critical assumptions are reasonable.
Step 4: Implement the strategy when key assumptions prove true.

Start with the outcomes

If everybody knows what the outcomes must look like for the innovation to be worthwhile, then there is no sense in playing a game of Texas Hold ’Em. Just lay the cards out on the table at the outset. What does the final state of the innovation need to do? What are you trying to accomplish? And how will you know you have been successful?

The key in this step is to declare your desired end state clearly and in a way that can be measured. One way to do this is to craft the outcome as a SMART goal — one that is specific, measurable, achievable, relevant, and time-bound — so that everyone in the organization knows what success looks like.

Create an assumptions checklist

The second step is where the real work begins. With the desired goals and outcomes identified, compile an assumptions checklist. Look at the plan you have designed (yes, this assumes you have at least a high-level design and plan in place!) and list all of the assumptions being made that must prove true in order for the desired outcomes to materialize.

Be exhaustive in this stage. All of the assumptions that schools make implicitly should be on the table, including the use of time and school schedules, space, staffing, curriculum, software, hardware, and the budget. That means everything from “This math software will be rigorous enough” to “Our teachers will have the data they need to intervene in the right ways” to “The time we give students to learn is enough for them to master the curriculum.”

This process of listing assumptions should take a day or two, and it is time well spent. Sometimes the list of assumptions at this stage will number more than one hundred! This exercise is also a great way for a leader to learn where there is and isn’t agreement within an organization, so invite people who represent a variety of departments and perspectives to the table.

Once you are done compiling all of the assumptions, the next job is to rank them from the most to the least crucial. We have found that having the same group of individuals ask two questions about each assumption is the best way to accomplish this. First, ask what could happen if you are wrong about an assumption. In other words, which of these assumptions, if proved untrue, would most seriously derail the success of the project? Second, ask how confident you are that each assumption is correct. A fun test of confidence is to ask whether people would be willing to give up one year’s salary if they are wrong — meaning they have a high degree of confidence that they know the answer. Perhaps they are willing to give up only one week’s salary if they are wrong? Or maybe they aren’t willing to bet any of their salary because they have no sense of whether the assumption is correct. From here, you can prioritize the assumptions that are most crucial to the project’s success and about which you know the least. You’ll want to test these assumptions first.
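To make that ranking concrete, here is a minimal sketch in Python, assuming the team scores each assumption on a simple 1-to-5 scale for impact-if-wrong and for confidence. The book describes the ranking qualitatively, so the numeric scoring scheme and the example scores below are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    description: str
    impact_if_wrong: int  # 1 (minor setback) to 5 (derails the project)
    confidence: int       # 1 (no idea) to 5 (would bet a year's salary)

def prioritize(assumptions):
    """Rank assumptions to test first: highest impact, lowest confidence."""
    return sorted(assumptions, key=lambda a: (-a.impact_if_wrong, a.confidence))

# Example scores are hypothetical, drawn from the assumptions named above.
checklist = [
    Assumption("This math software will be rigorous enough", 5, 2),
    Assumption("Teachers will have the data they need to intervene", 4, 3),
    Assumption("Students have enough time to master the curriculum", 3, 4),
]

for a in prioritize(checklist):
    print(f"[impact={a.impact_if_wrong}, confidence={a.confidence}] {a.description}")
```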

Implement a plan — to learn more

With the prioritized assumptions checklist in hand, the next step is to implement a plan to test the validity of the assumptions. Check the most important assumptions first: the ones that are most crucial to the project’s success and in which you have the least confidence.

In the initial stages of planning, the tests should be as simple, inexpensive, and quick as possible. They should simply provide a sense — not a clear answer — about whether the most critical assumptions are reasonable. For example, before going too far down a road, it is a good idea to look at other schools that have implemented something similar to see whether the assumptions hold water. Reading the existing research, having early conversations with experts, or creating quick mock-ups or prototypes makes sense. A prototype is anything that helps communicate the idea of what you are doing, from mock-ups and models to simulations and role-playing experiences. It is often helpful to create what people call the “minimum viable product”: the simplest product or prototype that allows you to test the salient assumptions as quickly as possible.

More concretely, suppose a key assumption concerns the rigor of a math program. As an initial test, a school could read about the program and talk to others who use it. As a second test, it could request a single license so that its teachers can poke around and see whether the program passes their own smell test for rigor. If it does, the school might run a third test: piloting the program for a couple of weeks — in summer school or after school, say — before buying it and using it with all of its students for an entire year. And it might do the same with a couple of other programs as well.
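As a sketch of what such an escalating sequence might look like, the snippet below encodes the hypothetical math-program example as an ordered, cheapest-first test plan. The cost and duration figures are invented for illustration; nothing in the article prescribes them.

```python
# Hypothetical, cheapest-first tests for the assumption "this math
# program is rigorous enough." Costs and durations are illustrative.
test_plan = [
    {"test": "Read existing research; talk to schools already using it",
     "cost_usd": 0, "days": 3},
    {"test": "Request one license; teachers apply their own smell test",
     "cost_usd": 0, "days": 7},
    {"test": "Two-week pilot in summer school or after school",
     "cost_usd": 500, "days": 14},
]

def run_test(step):
    """Placeholder: in practice the team runs the test and judges the evidence."""
    print(f"Running: {step['test']} (~${step['cost_usd']}, {step['days']} days)")
    return True  # replace with the team's actual judgment

for step in test_plan:
    if not run_test(step):
        # The assumption failed a cheap test; rethink before spending more.
        break
```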

Move forward if assumptions prove true

The last step is to decide whether to continue implementing the strategy.

You should set a checkpoint — a specific date by which the tests of several of the assumptions should be completed — so that the team can come together and evaluate what it has learned. For example, the period leading up to the first checkpoint could last one month, giving team members time to study other blended-learning schools and to test some (but not all) of the assumptions at a high level.

Then, if your assumptions are proving true, keep moving forward to the next checkpoint.

If they are not — as will more than likely be the case — you have a few options. Perhaps you can tweak the plan to keep moving forward. For example, maybe the math software an educator had planned to use will be good for only twenty minutes of instruction a day rather than thirty minutes; this means the rotation schedule will have to be adjusted. Alternatively, there may need to be bigger adjustments. Perhaps you need a totally different team to implement a blended-learning model you have designed, for example. Or finally, perhaps the assumptions underlying the success of the plan are wildly unrealistic, and the plan just won’t work. If this is the case, then there is an opportunity to shelve the plan before too much money has been invested and the stakes have become too high to abandon the idea. This is important in a district setting where capital — financial and political — is a scarce resource.

If you do decide to move forward, don’t just move to implement the plan whole hog.

Look at your assumptions again and brainstorm tests that are more comprehensive, more precise, and perhaps more costly than the previous ones. The key is to keep your tests as low-cost and quick as possible, yet precise enough that you gain more knowledge than you had before. Assumptions that you didn’t test before might now be tested. The important thing is not to invest a lot of time and resources early, before knowing whether the assumptions are proving true — or at least are in the right ballpark.

Establish a rhythm for your tests. Set up more checkpoints — perhaps the second one will occur in another month and a third will be a month after that. The tests during the second checkpoint might include an analysis of the software market. Further down the line, a checkpoint might include a working prototype or pilot of the blended-learning model, and then the launch of the blended-learning model itself.
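A simple way to keep that rhythm visible is to write the checkpoint schedule down as data. The sketch below pairs each checkpoint date with the tests due by then; the start date and the one-month spacing are assumptions drawn from the example above, not a prescription.

```python
from datetime import date, timedelta

start = date(2025, 9, 1)  # hypothetical project start

# One checkpoint per month, each pairing a due date with the tests
# the team expects to have completed by then.
checkpoints = [
    (start + timedelta(days=30), ["Study other blended-learning schools",
                                  "High-level tests of the top assumptions"]),
    (start + timedelta(days=60), ["Analysis of the software market"]),
    (start + timedelta(days=90), ["Working prototype or pilot of the model"]),
]

for due, tests in checkpoints:
    print(f"Checkpoint on {due}:")
    for test in tests:
        print(f"  - {test}")
    # At each checkpoint the team meets, reviews what the tests showed,
    # and decides whether to continue, adjust the plan, or shelve it.
```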

At each checkpoint, the team will gain new information. An assumption that seemed correct at a previous checkpoint may turn out to be more complex than originally thought. That’s OK. And if the team ultimately learns that the assumptions are unrealistic and that it won’t be able to pull off the program, that is not a reason for despair. Fast failure is a success; the team learned that the idea would not work before wasting a lot of time and money implementing the plan. The key is to celebrate each time a decision is made. People should not feel that they have to defend a pet idea; the victory is in learning more about an assumption, not in proving that someone is right or wrong.

Ultimately, as the team makes adjustments and iterates, it may find that it is going down a path with assumptions that are proving true. Even if the design and plan that emerges and is gradually implemented differs from the one originally foreseen, if it realizes the desired outcomes, then that’s a resounding success — and the ultimate value of the discovery-driven process. And it’s a great way to avoid both the risk of innovating when children are involved and the risk of inaction.

Michael Horn is the co-author of two books that fundamentally changed the way we think about education: Disrupting Class and Blended. He is both an Education Elements board member and one of the preeminent thought leaders on personalized learning.