Iterating != Pivoting ∴ Agile != Agility

Swedish psychologist Berndt Brehmer once observed that our feeling of “learning” from experience can be misleading. At times it would be more accurate to say we are conditioned to believe we are learning from experience, when in fact our judgment is not improving (Brehmer, 1980). The real world, he noted, bears little resemblance to what we call “learning” in school. A classroom is a contrived environment where an artificial “gold standard” is provided by the teacher. There, “learning” is just being able to deliver what the teacher asks. This maps very nicely to most product work, where in place of the “teacher” is the “business.” It doesn’t make for a learning environment, which is necessary for agility.

As Patton (2018) has noted, this is a client-vendor arrangement, where the “client” chooses the bets to make and the “vendor” places them. Typically, the “client” is rewarded for getting “vendors” to place more bets faster, with no one paying much attention to the accuracy of the betting process itself. If users abandon an offering and develop their own workarounds, this doesn’t get tied back to those choosing the bets. It just percolates up in the form of future requests in a disconnected, uninformative way, and the “shot callers” proceed as usual, with no wondering how good their predictions actually are.

This is partly what Schwartz (2016) was arguing in his excellent book, The Art of Business Value. Contrary to popular belief, the “business” typically doesn’t know what’s going to provide value any better than IT Scrum teams. Ignoring this perpetuates the planning approach to product work, where “success” is equated with delivery and “done” is just a handoff. This reduces your Agile teams to a drive-thru window.

Now, what if a research lab was assessed on its “experiment output,” with no one really focusing on the interpretation and practical import of the results produced? Apply this to product work. Would this tend to decrease or increase costs? Increase, duh. If you don’t focus on whether the hypotheses are supported you can’t assess whether there might be quicker, cheaper ways to test them. This brings us to a key issue with the Agile movement. Agile kept the focus on output and “requirements,” ignoring that delivery is never a good gold standard. In many contexts this just creates waste.

Agilists like to counter with the refrain that there is no value “until working software is delivered,” but this is equally misleading. Really, there is no value until value is created, and most coded software has negative actual value (see below). Furthermore, delivering software is not the only way to create value, an epiphany a “software delivery team” is not likely to have. Someone trained in design research, facilitation, and research interviewing can remove a lot of guesswork in a couple hours, even over Skype. If you really think you can only remove guesswork by delivering product increments, you’re probably wasting a lot of money — you’re overengineering your assumption tests.

Value is whatever the business values, and paths to value (impact) should be treated as hypotheses (or bets). This can (and should) be mapped out along the lines of Adzic’s (2012) concept of impact mapping. Below is a generic example using Lean UX terminology. As Adzic points out, most product work ties output directly to actors (users), which generates waste. It neglects clear pivot signaling.
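To make the structure concrete, an impact map can be sketched as nested data, where every path from business impact down to a deliverable is a falsifiable bet rather than a commitment. This is only an illustrative sketch in Lean UX terms; all the names below are made up:

```python
# A generic impact map: why (impact) -> who (actor) -> how (outcome,
# i.e. a behavior change) -> what (deliverables, the bets to test).
# All names below are illustrative, not from any real product.
impact_map = {
    "impact": "Reduce churn by 10% this quarter",            # why
    "actors": [{
        "actor": "Trial users",                              # who
        "outcomes": [{
            "outcome": "Complete setup within first session",  # how
            "deliverables": [                                  # what
                "Guided onboarding checklist",
                "Concierge setup call",
            ],
        }],
    }],
}

# Tying deliverables to outcomes (rather than directly to actors)
# is what gives each bet a pivot signal: if the behavior doesn't
# change, the deliverable is rejected -- not marked "done".
for actor in impact_map["actors"]:
    for o in actor["outcomes"]:
        for d in o["deliverables"]:
            print(f'{d!r} is a bet on: {o["outcome"]!r}')
```

Note that two deliverables hang off one outcome: if the checklist fails to change the behavior, the concierge call is an alternative path to the same outcome, which is exactly the optionality a direct output-to-actor mapping throws away.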

Now, consider Reinertsen’s suggestion that it’s better to think of “agility” as your ability to pivot, to change directions (see Powers, 2016). The implication running through some of his interviews seems to be that Lean Startup has more to do with real agility than Agile does. Most “Agile” teams are doing Scrum, which I used to try to reconcile with Lean Startup by arguing that “iterating” is akin to “pivoting”…but it’s really not.

Agile is often said to be “incremental and iterative.” The “increment” is the chunk of product delivered at the end of the time box. Then you repeat the cycle and deliver another increment at the end of the next time box. This repetition is what Agile means by “iterating,” which is not the same thing as pivoting. To “pivot” you might need to throw out your previous product increments and try something else entirely. Almost no Agile team does this, a point Alan Cooper has been making for years. Typically, Agile teams aren’t making “pivot or stop” decisions at all — they just incrementally “persist” and call it “iteration.” They don’t reject hypotheses and then try other ways to achieve outcomes. They can’t, because that’s not something anyone is even paying attention to! (Talk about fragility!)

The practical result is the more product increments are added, the fewer degrees of freedom there are, which, when you think about it, is really the opposite of “agility.” Now, as Allen Holub likes to remind me, there is no such thing as “Agile.” There’s the Manifesto, and there are different people’s takes on it, and that’s it. Most orgs equate Agile with “Scrum,” period. Sutherland figured out how to sell Scrum to businesses with the alluring promise of “twice the work in half the time” (so…four times the work?). Though that might be how the “business” tends to think of Agile, it really doesn’t have much to do with agility. Other Manifesto coauthors have other ideas. Jeffries, for instance, has been very adamant lately that he does not care about org agility and that the point of Agile is to make life better for developers. Note that the business’ take often has the opposite effect.

OK. Back to outcomes. It’s often argued that the way out of this mess is to put “outcomes over output,” but even this can go sideways depending on the definitions used. Outcomes are best described in Lean UX (Gothelf & Seiden, 2013), part of the Lean series. Many seem to equate outcomes with vague goals or objectives, which misses the point. (In my opinion this is why the concept of OKRs is not value-adding.) An “outcome” is not “impact,” which is likely your real goal. An outcome is the concrete behavior change you’re trying to create to deliver business impact. This spotlights a point I first saw made in Anderson’s (2011) Seductive Interaction Design: There is only one way to create business value, and that’s to change someone’s behavior. Treating “delivery” as a proxy to this stops short. (This is why cost of delay should ultimately be tied to outcomes — the value isn’t created until the value-adding behavior change actually happens.)

Anyone who’s toyed with this will know it can be very challenging to get people to state concrete outcomes. Interestingly, the same point is made in therapy, and the best techniques I’ve come across for eliciting outcomes actually come from therapy and change work! It was in reading about Clean Language and Symbolic Modeling that I learned that people just naturally like to talk about “problems” and “solutions,” whereas eliciting good outcomes typically requires skilled coaching. Talking about problems can be cathartic, but the goal is not to be in therapy for decades. What changes, specifically, are you trying to create? How will you know when the therapy is “done?” It’s not when it’s been delivered! It’s when certain sustainable changes have taken place.

Apply this to product work: Agile teams also tend to think of their work in terms of “problems” and “solutions.” In fact, we tend to call what we build the “solution,” ignoring that this is largely metaphorical. When you solve a math problem, for instance, the problem is known, it’s taken for granted. In product work, however, the assumed “problem” is often not the real issue. Further, a math problem typically has an objective and unique “solution.” This just doesn’t map across. What you’re building is not “THE SOLUTION.” What you’re asked to “solve” is commonly not “THE PROBLEM.” There’s also a psychological difference of orientation. A problem-solving frame has a negative, away-from orientation. A “problem” is something that needs to be escaped or remedied.

An outcome frame, on the other hand, is a positive frame. It has a “towards” orientation. I sometimes see people say that achieving an outcome is the same thing as solving a problem. Not really though. As Tompkins and Lawley (2006) observe, knowing what you’re moving away from does not mean you know what you want to move towards. I’ve encountered this many times in an exercise I like to run in classes. I have teams of people freelist, affinitize, and dot vote in response to the question, “What are your difficult problems with delivering value to customers?” The top cluster that emerges is almost always the same. It’s some variation of, “No one can really say what we’re trying to achieve.”

Without concrete desired outcomes in place, you don’t have what I’ve been calling “unambiguous pivot signals.” An outcome gives you a line in the sand, letting you know whether to pivot, persist, or stop. It’s an entirely different mindset. As Tompkins and Lawley put it, digging into a problem and coming up with a “solution” often narrows your options, which reduces paths to value. By instead spotlighting the desired outcome, you open up the field and leverage greater creativity in service of discovering different ways to achieve it. This increases options, which should be one of your primary goals.
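A “line in the sand” can be made concrete as a simple decision rule. Here is a minimal sketch; the function name, thresholds, and metrics are all hypothetical, not a prescription:

```python
def pivot_signal(baseline: float, target: float, observed: float,
                 persist_band: float = 0.5) -> str:
    """Classify an outcome measurement against a pre-agreed target.

    baseline: the metric before the change (e.g. weekly repeat usage)
    target:   the committed outcome -- the "line in the sand"
    observed: what was actually measured after shipping
    persist_band: fraction of the baseline-to-target gap that counts
                  as progress worth persisting on
    """
    gap = target - baseline
    progress = observed - baseline
    if progress >= gap:
        return "stop"      # outcome achieved; spend no more here
    if progress >= persist_band * gap:
        return "persist"   # moving toward the outcome; keep going
    return "pivot"         # no meaningful movement; try another path

# A team targeting a lift in repeat usage from 20% to 30%:
print(pivot_signal(baseline=0.20, target=0.30, observed=0.21))  # pivot
print(pivot_signal(baseline=0.20, target=0.30, observed=0.26))  # persist
print(pivot_signal(baseline=0.20, target=0.30, observed=0.31))  # stop
```

The point is not the particular thresholds but that the decision is mechanical once the outcome is stated up front: the team argues about the target before building, not about how to interpret the results afterward.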

Instead of the client-vendor arrangement, where the “customer,” a business rep, tells Agile teams what to build, what if you instead worked with the customer to align on what outcomes she wants achieved? This keeps the degrees of freedom unspent, allowing the path forward to emerge as warranted by evidence. Now, as Kohavi et al. (2009) rightly point out, there are some who perceive this as a “loss of power.”

Many of us have seen Kohavi’s famous results from Microsoft’s A/B testing platform: For existing products, when you shift the focus to whether outcome metrics are positively affected, a full 66% of what’s built has zero or negative value. In other words, what was delivered either had no effect on outcome measures or made them worse. Backing a dud still costs money, however, not to mention the opportunity cost of not having delivered something more value-adding.

When business reps push back on such points, Kohavi et al. note, they are essentially claiming an empirical approach to product work isn’t needed. If this is true, however, then they should be able to predict the results of the hypothesis tests they’re claiming are unnecessary. In another experiment, Kohavi et al. tested this as well, asking people to guess the results of eight A/B tests. Anyone who could successfully predict the results of six would win a shirt. After more than 200 attempts, they ended up giving away…zero shirts. Out of eight guesses, participants were correct 2.3 times on average, meaning their predictions were wrong more than 70% of the time.
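The arithmetic behind those figures is easy to check, and under one simplifying assumption (that each guess is an independent draw at the average success rate, which the source doesn’t claim) we can also estimate how long the shirts were ever likely to last:

```python
from math import comb

# Reported result: out of 8 A/B test outcomes, participants averaged
# 2.3 correct guesses.
p_correct = 2.3 / 8
print(f"wrong rate: {1 - p_correct:.1%}")   # just over 70%, as stated

# Assumption (mine, not Kohavi's): each guess is independent with
# that average success rate. Then winning a shirt (6+ of 8 correct)
# follows a binomial tail:
p_shirt = sum(comb(8, k) * p_correct**k * (1 - p_correct)**(8 - k)
              for k in range(6, 9))
print(f"P(win a shirt) per attempt: {p_shirt:.2%}")  # under 1%
```

At well under a one-percent win rate per attempt, giving away zero shirts over 200-plus attempts is roughly what you’d expect.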

If you don’t see the import of this, you’re likely betting a lot of money on people’s hunches, passion, and advocacy. This speaks to the importance of optionality, which Lean Enterprise ties to Taleb’s concept of antifragility (Humble, Molesky, & O’Reilly, 2015; Taleb, 2012). The result is a fundamental principle: If your guesses are wrong more often than right, then you should not optimize for being right.

To close, I’m not saying we shouldn’t talk about problems or solutions anymore, just that we should be more mindful of how concepts influence thinking. Words have consequences, and metaphors carry baggage. Ultimately, I agree with Kees Dorst (2015), who argues much of the failure in applying design thinking to business comes from keeping the focus on generating “solutions” as opposed to frames. (I wrote about problem frames here.) In the next post we’ll need to talk about techniques for eliciting outcomes, as well as what makes for a good outcome. For now, I’ll leave you with this thought:

References

Adzic, G. (2012). Impact mapping: Making a big impact with software products and projects. UK: Provoking Thoughts Limited.

Anderson, S.P. (2011). Seductive interaction design: Creating playful, fun, and effective user experiences. Berkeley, CA: New Riders.

Brehmer, B. (1980). In one word: Not from experience. Acta Psychologica, 45(1–3), 223–241.

Dorst, K. (2015). Frame innovation: Create new thinking by design. Cambridge: The MIT Press.

Gothelf, J. & Seiden, J. (2013). Lean UX: Applying Lean principles to improve UX. Sebastopol, CA: O’Reilly Media, Inc.

Humble, J., Molesky, J. & O’Reilly, B. (2015). Lean enterprise: How high performance organizations innovate at scale. Sebastopol, CA: O’Reilly Media, Inc.

Kohavi, R., Crook, T., Longbotham, R., Frasca, B., Henne, R., Ferres, J. L. & Melamed, T. (2009). Online experimentation at Microsoft. Retrieved on October 18, 2016 from: http://ai.stanford.edu/~ronnyk/ExPThinkWeek2009Public.pdf.

Patton, J. (2018). 5 things you’ll need to fix Agile product ownership. Open Charity. Retrieved on December 17, 2018 from: https://www.youtube.com/watch?v=bgdVJVeqHX8.

Powers, S. (2016). Adventures with Agile interviews — Don Reinertsen. LinkedIn. Retrieved on February 3, 2017 from: https://www.linkedin.com/pulse/adventures-agile-interviews-don-reinertsen-simon-powers.

Schwartz, M. (2016). The Art of Business Value. Portland, OR: IT Revolution.

Taleb, N. N. (2012). Antifragile: Things that gain from disorder. New York: Random House.