Lean thinking in service design — A talk by Akanska Chaubal

sean lurie
SDN New York Chapter
9 min read · Nov 5, 2019

Last Tuesday Akanska Chaubal took the stage at the NYC Service Design Network Meetup to teach us how lean thinking can (and should) be incorporated into service design. Akanska is a venture designer at Bionic, where she helps clients build new ventures by testing value and growth hypotheses (essentially, she helps them fail fast and fail often). For context, Bionic’s mission is to “create and launch external startups [within large organizations] that leverage outside talent, new technologies, and insider insights to attack big opportunities, fast”.

Akanska has a love for science and a background in management consulting, both of which have given her a suite of analytical skills she now brings to her more creative line of work. She practices at the intersection of art and science, which gives her a perspective not held by many other designers. This talk drew on her dual passions to explore how both science and design can benefit from thinking just a little bit more like each other.

Three main takeaways from the talk:

  1. There is a sweet spot between art and science — a way of thinking that is both analytical and creative. As designers we should be looking to adopt a more scientific approach to research.
  2. Traditional service design methods don’t work well in fast paced and uncertain environments. Adopting lean thinking in service design can help us design better service ecosystems that are able to pivot and evolve with changing customer needs and new technologies.
  3. When creating experiments, identify your assumptions, categorize them, prioritize them and design discrete tests that target specific things using as little time, effort and resources as possible.

Part one: Thinking like a scientist

Traditional service design methods don’t work well in uncertain startup environments. There is a need for a faster way to test and learn in service design, as the lead time it takes to implement initiatives conflicts with the needs of a team working on ambiguous, fast paced projects.

The reasons why traditional service design methods don’t work in fast paced environments:

  1. Long lead times from gathering insights to building solutions and testing outcomes
  2. Difficult to use when there is no defined problem, service or user
  3. Traditional tools and artifacts take a long time to create
  4. Difficulty pivoting quickly

Akanska warned us that there are some instances where you may be designing services or experiences for customers who do not know what they want. If you are designing an experience that does not exist yet, you can’t ask customers if they will find value in it. You need to test more subtly, looking for the types of behaviors that support the use of the end solution. Additionally, with technology evolving so quickly, sometimes the tech stack we start designing for is no longer the best solution by the end of the project; and by the time you work out how to incorporate newer technology, it is often too late. Akanska asks: “How do you design for customers that don’t know what they want, and how do you design for technologies that don’t exist yet?”

The answer: build + test, measure, learn and iterate!

As we build out our understanding of how customers, staff and systems will behave, we should incrementally release parts of the service ecosystem and iterate on what is to come. Doing so ensures that the solutions being delivered do not rely on outdated assumptions or technology.

In case you are sitting there thinking, “well, I am not a start-up, so I am exempt from the lessons in this talk”, Akanska outlined what constitutes an uncertain environment:

  • New behaviors
  • New technologies
  • New customers
  • Lack of norms
  • Ambiguity

These are factors every organization is dealing with, and they should be relevant to every designer as they face different challenges in their career. The next time you face a design challenge with one, some or all of the above characteristics, consider how you can use some scientific thinking to validate your designs and reduce the risk of creating something that users do not value.

Part two: Using the Lean methodology

Lean thinking is a way of designing that tries to create the most customer and business value with the least effort. Eric Ries popularized lean thinking through his book ‘The Lean Startup’, in which he describes it as “a scientific approach to creating and managing startups…to get a desired product to customers’ hands faster.” Akanska broke down using the lean methodology into four practical steps:

  1. Designing a good experiment
  2. Executing the experiment
  3. Measuring the outcomes
  4. Learning and iterating

1 — Designing the experiment:
The most important part of setting up a successful experiment is creating a clearly articulated and easily testable hypothesis. By “structuring your idea into a testable assumption” you provide a target which you are able to prove or disprove through your experiment. The experiment should be focussed on testing the specific thing you are trying to uncover. Examples of what you may be testing for are:

  • a new behavior you are trying to establish
  • whether or not new customers will adopt your product
  • the most appropriate way to bring a new technology to market

Start by considering all the things you want to know, then prioritize the most important assumptions to test before moving forward. Akanska recommended testing only the most important factors and never testing too many assumptions at once. Choose one assumption, build a hypothesis and create a targeted experiment.

Note: Assumptions are usually value based (customer focussed) or growth based (business focussed). Be sure to categorize them as such, and then consider the relationship between different hypotheses when designing your experiments.
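This categorize-and-prioritize step can be sketched in code. The scoring scheme (risk as importance times uncertainty) and the example assumptions below are hypothetical, chosen purely for illustration; they are not from the talk:

```python
# Hypothetical sketch: rank assumptions by how important and how uncertain
# they are, so the riskiest assumption is tested first. The scores and the
# example assumptions are illustrative only.

assumptions = [
    # (description, category, importance 1-5, uncertainty 1-5)
    ("Urban dwellers will use an in-home washer-dryer", "value", 5, 5),
    ("Customers will pay a premium for the convenience", "growth", 4, 4),
    ("Units can be serviced cheaply at scale", "growth", 3, 2),
]

# Risk = importance * uncertainty: high-stakes unknowns float to the top.
ranked = sorted(assumptions, key=lambda a: a[2] * a[3], reverse=True)

for desc, category, importance, uncertainty in ranked:
    print(f"[{category}] risk={importance * uncertainty:2d}  {desc}")
```

The top-ranked assumption is the one to turn into a hypothesis and test first; the rest can wait for later rounds.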

Akanska used the following example: An appliance manufacturer wanted to test whether customers in the US market would be interested in an all-in-one washer-dryer. To provide direction for their test, the team defined their hypothesis as: “If we provide urban apartment dwellers with a small-load all-in-one washer-dryer, they will use it”. Notice how the hypothesis is extremely specific, with the product, market segment and expected outcome of the experiment all clearly indicated in the statement:

Product: small all-in-one washer/dryer

Customer segment: Urban apartment dwellers

Assumption: They will use it
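One way to keep those three components explicit is to encode the hypothesis as a small structured record. This is a hypothetical sketch (the `Hypothesis` class and its fields are my own illustration, not something from the talk):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A testable assumption, broken into its explicit components."""
    product: str     # what is being offered
    segment: str     # who it is offered to
    assumption: str  # the behavior we expect to observe
    category: str    # "value" (customer focussed) or "growth" (business focussed)

    def statement(self) -> str:
        # Render the hypothesis in the "If we provide X to Y, they will Z" form
        return (f"If we provide {self.segment} with {self.product}, "
                f"{self.assumption}.")

washer_dryer = Hypothesis(
    product="a small-load all-in-one washer-dryer",
    segment="urban apartment dwellers",
    assumption="they will use it",
    category="value",
)
print(washer_dryer.statement())
```

Forcing each field to be filled in makes it obvious when a hypothesis is missing a product, a segment or an expected outcome.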

2 — Executing the experiment:
This is all about deciding how you will prove or disprove the stated hypothesis. The goal here is to come up with a way of doing so that uses as little time, effort and resources as possible. Akanska outlined a few different types of tests that can be used, also known as MVP tests:

  • Concierge — This is a bespoke, hand-crafted experience. It is a good type of test when trying to understand why people behave the way they do, as the facilitator sits with the test subject during the experience.
  • Wizard of Oz — This is when an experience looks like a real experience but is faked on the backend, oftentimes with manual workarounds to patch together systems that don’t currently speak to each other.
  • Sell — Selling your product or service before you have it. Many Kickstarter pages use this approach to see if there is enough market appetite before going into production. This can also be achieved by selling a similar product under a fake brand through Facebook ads, or by tracking traffic on a dummy website.

In the washer-dryer experiment, the team placed 15 all-in-one washer-dryers in the homes of urban apartment dwellers for a week. They told participants that at the end of the week they would pick the machines up and ask what they thought… keep reading to hear what happened.

3 — Measuring outcomes:
Know what metrics you want to gather before you begin and be sure to have the right mix of quantitative and qualitative inputs. Experiments are not random and do not aim to find unexpected data points. These are targeted assessments of particular aspects of an experience that will test the stated hypothesis.

“Know what are the pivotal qualitative and quantitative metrics you want to track and use the outputs of your experiment to help you assess your hypothesis.”

In the washer-dryer example, the team told the test subjects that they were unable to collect the washer-dryer at the end of the test period and would only be able to come the following week. They then tracked usage data on the machine through smart sensors and found that 12 of 15 participants kept using it past the trial period, and when given the option of keeping it permanently, 8 of 15 said yes. This validated the stated hypothesis and concluded that round of testing.
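The measurement step above amounts to comparing pre-registered metrics against success thresholds decided before the experiment starts. A minimal sketch: the 12-of-15 and 8-of-15 figures come from the talk’s example, but the 50% success thresholds are hypothetical, chosen here for illustration only:

```python
# Minimal sketch of evaluating pre-registered metrics against thresholds.
# Figures are from the washer-dryer example; the 50% thresholds are
# hypothetical, for illustration.

def evaluate(successes: int, total: int, threshold: float) -> tuple[float, bool]:
    """Return the observed rate and whether it clears the threshold."""
    rate = successes / total
    return rate, rate >= threshold

continued_use, use_ok = evaluate(successes=12, total=15, threshold=0.5)
kept_permanently, keep_ok = evaluate(successes=8, total=15, threshold=0.5)

print(f"Continued use past trial: {continued_use:.0%} (pass: {use_ok})")
print(f"Chose to keep the unit:   {kept_permanently:.0%} (pass: {keep_ok})")
```

The point is that the pass/fail criteria exist before the data comes in, so the experiment assesses the hypothesis rather than rationalizing whatever happened.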

4 — Learning and iterating
Make sure you are always building on your understanding of customer behavior through your experiments. There is no exact science to how new findings should be incorporated into the customer or market profile you are developing, however, always staying conscious of what you have learnt in the past will help you develop your understanding of product-market fit.

In service design, oftentimes actually iterating based on test outcomes is not possible due to the size and complexity of change. Make sure that every time you run a test there is a way to actually iterate the final product based on the outcomes. Encourage your wider team to release in shorter, more targeted sprints that run just behind the design and research streams so you can continue to adapt the final solution.

Wrap up

Any time you are starting a new business, you have assumptions about what customers will value and how the business will grow. You need to define which assumptions are essential and which are non-essential (leap of faith): what are the critical things you need to know before you can progress, and which assumptions are you comfortable moving forward with, not knowing the answers?

Regularly testing essential hypotheses will help mitigate the risk of an investment failing to realize its intended outcomes. Adopting a more scientific approach when conducting these tests will help you hold data-driven conversations with your stakeholders. Pro tip: get the stakeholders you are working with to create or agree to the hypotheses beforehand, then build your tests around these to ensure the outcomes continue to inform the service or product you are working on.

Akanska encouraged us all to consider the following questions before starting research or committing investment to an initiative:

  • What’s your biggest assumption?
  • What hypothesis can be created for this assumption?
  • How would you test this hypothesis with your user?
  • What type of prototype could you create (Concierge, Wizard of Oz, Sell, other)?
  • What would make this experiment a success or failure?

At the end of the day it’s all about how you think. Switch from thinking about research as an exploratory endeavor to something that is more scientific in nature. Look to disprove hypotheses and don’t be afraid to make assumptions BEFORE testing. Consider how your prototypes change in fidelity as you move from testing interest to intent. For example, testing interest can be as simple as a conversation (a Concierge test), whereas testing intent might mean seeing how many customers will actually purchase your product in the market (a Sell test). Increasing the fidelity of your experiments in line with the action being ‘requested’ plays a critical role in the success of your research, and may bring your findings closer to reflecting reality when your service or product is released.


I am an experience and product designer living in NYC.