A layered illustration with the framework highlighted as the second layer
Framework — the setting for contextual creation.

Becoming a customer-obsessed tech company is the blueprint mission. Part 2 of the DIaaS series.

Here is how DAYONE and our Digital Innovation as a Service make it happen.

10 min read · Jun 9, 2020


Our framework for contextual experiments

DAYONE and Digital Innovation as a Service

The first part of the Digital Innovation as a Service (DIaaS) series explained our mindset, how we approach partnerships with clients, and our understanding of customer experience excellence. In this article we present our way of reaching that excellence. The framework described here is the heart of DIaaS and combines our mindset with lean principles and industry-standard methodologies.

Good design is proven design.

We believe good design is design that is proven to solve real problems your customers face. The more problems your product solves, and the more impactful those problems are, the more value your business generates for customers. Good design thus drives customer experience excellence. But how can you identify real customer problems? And once you have, how can you offer solutions that bring value?

Simple: by continuously measuring and validating.

As designers, we are aware that we don’t know everything. In fact, it’s our job to make an educated guess and accept that the truth is unknown until proven.

…it’s our job to make an educated guess and accept that the truth is unknown until proven.

Our solutions are just proposals until we have gathered evidence of their effectiveness. That is why everything we do is defined by experiments. It’s our way to make the right product — fast. We don’t assume problems; we pinpoint customers’ pains and gains and formulate hypotheses based on customer insights. We don’t go on gut feeling alone: we validate our hypotheses and collect evidence that a design is performing.

What might be true today can change tomorrow

The core of DIaaS: our Framework

In a world where customers expect products and services that fit their particular needs and wishes, there is no room for a one-size-fits-all design process. Our framework gives us a structure in which we can apply our experiment-driven mindset, yet allows us to approach each business case individually. It should not be interpreted as a linear process, but rather as a continuous cycle of build-measure-learn and, ultimately, scale. By keeping the product-to-market loop short, we avoid building big monoliths from the start and instead continuously improve and extend existing products. This approach reduces the risk of investing in complex technologies without knowing their effectiveness.

Whether this framework is applied in a fast-track format (like Design Sprints) or in a long-term partnership, our approach to good design remains the same:

Understand and analyze

We focus on the most important or impactful problems to pinpoint the biggest value proposition a business can offer. That is why we always kick off a project with a thorough analysis of the context, to gain a deep understanding of the problem. This customer experience analysis draws on a broad palette of qualitative and quantitative data: desk research, “listen and learn” practices such as observing our target audiences in forums and online communities, competitor analysis, in-depth assessment of tracking analytics, surveys, and user interviews.

Explore Opportunities

With a deep understanding of customers and the problems they’re facing, we identify opportunities to improve products or create new value propositions for products and services. At this stage, customer journeys help us identify the touchpoints, tasks, and problems customers face, and how we might solve them.

Define use-cases

Customers don’t need features; they need ways to get their jobs done. For this reason, we focus on use cases rather than features. With a solid understanding of customers’ jobs and needs, we are able to identify opportunities to improve the customer experience. Value generation grows in proportion to problem solving, so we focus on the most important or impactful problems when defining use cases.

Formulate Hypotheses

It’s common practice to define promising use cases and propose solutions right away. That’s not our approach: we believe that good design is proven design. The goal is to work with as few assumptions as possible, and that means continuous testing, measuring, and validating. To make sure our use cases can be validated, we define hypotheses: assumptions and beliefs about how we could solve each use case successfully. For a hypothesis to be validated, we define metrics for success. How we practice hypothesis-based design is illustrated in a case study later in this article.
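
As a minimal sketch of this idea (the type names, fields, and the example target below are our own illustration, not a DAYONE artifact), a hypothesis and its success metrics could be recorded like this:

```typescript
// Hypothetical shape for recording a hypothesis with measurable success criteria.
interface SuccessMetric {
  name: string;                               // e.g. "booking conversion uplift"
  target: number;                             // threshold that must be met to validate
  unit: "percent" | "count" | "seconds";
}

interface Hypothesis {
  useCase: string;                            // the use case this hypothesis belongs to
  statement: string;                          // "We believe that ..."
  metrics: SuccessMetric[];
  status: "open" | "validated" | "falsified";
}

// The statement is quoted from the TUI case study below; the metric target is a placeholder.
const compareOffers: Hypothesis = {
  useCase: "comparing package holiday offers on mobile",
  statement:
    "We believe that offering a way to compare offers based on personal criteria will drive conversion.",
  metrics: [{ name: "booking conversion uplift", target: 5, unit: "percent" }],
  status: "open",
};
```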

Define experiment

After we have defined our use cases and hypotheses, we develop experiments to test them. As with any scientific method, the goal of these experiments is to collect empirical, measurable evidence. To design the right experiment for the right hypothesis, we facilitate workshops in which we explore possible solutions and ways to verify our hypotheses. These workshops are characterised by cross-disciplinary expertise and a curious mindset.

Prototyping

The only way to know if a solution works is by testing it and proving that it does. Every experiment needs a prototype to validate our proposed solutions. DIaaS comes with a broad set of prototyping methods that let us validate our solutions at any stage of the framework. These prototypes go beyond the obvious click dummies; we see them as media for running an experiment. They can range from teasers to high-fidelity interactive prototypes or fully developed features in a quantitative environment.

Validate or falsify

At this point we have a hypothesis we want to validate, have defined our metrics, and have created the tool to test the hypothesis. What follows is the most important step: running the experiment to validate or falsify it. Some examples from our experiment library are A/B testing, card sorting, usability tests, and fake door tests. Validation is not a one-time event; it is a continuous practice.
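
To make “measurable evidence” concrete, here is a small sketch of how an A/B test result could be checked for significance with a two-proportion z-test. The visitor and conversion numbers are invented for illustration, and this is one common evaluation method, not a prescribed part of DIaaS:

```typescript
// Two-proportion z-test: did variant B convert significantly better than A?
// All numbers are illustrative, not real experiment data.
function zTestTwoProportions(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number,
): number {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError =
    Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / standardError;
}

// Control vs. variant with the comparison feature:
const z = zTestTwoProportions(120, 4000, 165, 4000);
console.log(`z = ${z.toFixed(2)}`); // |z| > 1.96 suggests significance at the 5% level
```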

Enable Standards

The experiments we run show whether our hypotheses hold up. Verified use cases are documented in design systems and repositories and taken into the existing (or evolving) digital brand ecosystem, where further feature development and validation take place. If a hypothesis cannot be proven, we might set up different experiments to test it further. If the hypothesis is ultimately falsified, we research alternatives or abandon it completely.

Scaling

Prototypes show us whether we can make a solution work; scaling helps us make it right and make it fast. We don’t design monoliths but small building blocks. These should not be confused with stand-alone features; think of them instead as micro front-ends. Micro front-ends can grow into stand-alone features or into a set of features (modules) within an existing ecosystem. A metaphor from our childhood explains this well: LEGO. LEGO provides players with all kinds of small bricks that can be used to build houses; combined, these buildings form entire cities, but each can also exist on its own. By keeping the problem-solution loop small, we are able to build products that are flexible and can be continuously improved, changed, and extended.
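
As a rough sketch of what such a building block could look like in practice (the interface and registry below are our own illustration, not DAYONE’s actual architecture), a micro front-end can be reduced to a small mount/unmount contract that a host page composes like bricks:

```typescript
// Hypothetical contract for a micro front-end building block.
interface MicroFrontend {
  name: string;
  mount(container: HTMLElement): void;    // render the block into a host element
  unmount(container: HTMLElement): void;  // clean up when the block is removed
}

// A registry lets the host application compose blocks independently of each other.
const registry = new Map<string, MicroFrontend>();

function register(block: MicroFrontend): void {
  registry.set(block.name, block);
}

// Example: the comparison table ships as its own block and can later be
// mounted on another touchpoint without rebuilding the page around it.
register({
  name: "comparison-table",
  mount: (el) => { el.innerHTML = "<section>Comparison table</section>"; },
  unmount: (el) => { el.innerHTML = ""; },
});

// The host page decides which bricks to assemble into a view.
function mountAll(hosts: Record<string, HTMLElement>): void {
  for (const [name, container] of Object.entries(hosts)) {
    registry.get(name)?.mount(container);
  }
}
```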

However, scaling is not only about expanding the features themselves. It also means scaling features across customer touchpoints such as web, app, and other points of sale, or integrating them into third-party services and offers.

Hypothesis-based design for comparing offers on TUI.com

In 2019 we started a partnership with the travel company TUI. Our mission was to rethink the mobile experience for package holiday bookings and, in doing so, drive conversion up by a three-digit increase. A greenfield project with a challenge like that was a great opportunity for us to employ hypothesis-based design.

Team 300 defined ten hypotheses to drive conversion of package holiday bookings on mobile phones. Through customer experience analysis we learned that customers struggle with comparing offers and with the transparency of package holidays. Based on this insight we defined the use case “comparing package holiday offers on mobile”. Normally, the next step an organisation would take is to define a list of features that make up this functionality, or even to design possible solutions right away. This leads to design that is not proven and products that won’t have an impact. Instead, we took this use case and defined a hypothesis that we wanted to validate through experiments:

We believe that offering a way to compare offers based on personal criteria will drive conversion.

We first identified the most important criteria customers want to compare. Among these are costs, personal preferences and travel dates. Based on user research and our customer experience analysis, we designed a possible solution: a comparison table based on the customer’s personal search criteria. This way we ensured that customers could compare criteria that are relevant to them.

To prove this hypothesis, we had to take a step back and consider our customer journey. Assuming customers indeed have a need for a way to compare offers, at what stage in the customer journey would this functionality make sense?

We tested the proposed solution at two different places. First, we offered the functionality directly on the results page listing all offers. Alternatively, we placed it within the Favorites feature.

Left: Favorites with the integrated comparison table. Middle: results page with hotel cards displaying package holiday information and a checkbox to add an offer to the comparison. Right: detail view of our comparison table.

The first experiment was a test run with a limited number of visitors to the TUI.com website. We wanted to find out how many visitors would add offers to the comparison on their own initiative. For the second experiment, we invited participants to our UX Lab and studied how they would book a package holiday. We noted whether participants compared offers of their own accord; if not, we asked them to find a way to compare offers. It turned out that when participants were made aware of the functionality, they reacted positively to it, though they did not notice it without our help.

The insights from these experiments gave us a better understanding of how customers compare offers and at which point in the customer journey they expect to be able to do so. Almost 10% of all customers who used the Favorites feature also used the comparison functionality. Of all the customers who visited the results page, fewer than 0.5% used the comparison table. This led to the conclusion that customers would rather use a comparison functionality on their Favorites list than on the generic results page.
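
In simplified terms, the placement decision comes down to comparing adoption rates between the two touchpoints. A tiny sketch (the absolute visitor counts below are placeholders; only the roughly 10% and 0.5% rates come from the experiments above):

```typescript
// Adoption rate of a feature among the users exposed to it.
// Visitor counts are placeholders; only the resulting rates mirror the experiments.
const adoptionRate = (used: number, exposed: number): number => used / exposed;

const favoritesRate = adoptionRate(98, 1000);  // ~9.8% of Favorites users compared offers
const resultsRate = adoptionRate(45, 10000);   // ~0.45% of results-page visitors did

console.log(
  favoritesRate > resultsRate
    ? "Integrate the comparison into Favorites"
    : "Keep the comparison on the results page",
);
```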

Knowing at which point in their journey customers expect to compare offers, the next step is to prove our initial hypothesis: “We believe that offering a way to compare offers based on personal criteria will drive conversion.”

Hypothesis-based design in the mobility industry

In one of our partnerships with a mobility provider, we established hypothesis-based design as the standard across different projects. The goal is to work with as few assumptions as possible, and that means continuous testing, measuring, and validating.

The experience team established a routine and mindset that enable experimentation. The “always beta” mindset acknowledges that what might be true today can change tomorrow. The routine is built from several steps. First, feature demand tests determine the potential of possible features. Each solution gets a product briefing that covers the user story, the long-term goals, scope, benchmarks and, most importantly, why this solution is proposed and what success would look like. Second, the experience team formulates hypotheses to validate these features.

One of the projects our team works on is an app for commuters. The experience team wanted to know if the app was used daily and if it would replace other mobility apps. The assumption was that since this app focuses on commuters, it would indeed be used daily. To validate this assumption, the experience team formulated the following hypothesis:

“The majority of users is inactive for no longer than four days.”

For the hypothesis to be validated, no more than 50% of users could be inactive for more than four days. The team set up an experiment to prove this. The prototype was a React Native app made available to 3,000 commuters in the Berlin region. They could use the app as they wished but were asked to give feedback about its features. Every week new features were released, and the team observed the usage of the app within several commuter groups. The results were stunning: the majority of users used the app regularly, on at least two days and oftentimes up to ten. The hypothesis was validated, and the team could use the insights to further improve the app.
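
As a minimal sketch of how such a validation could be computed from session logs (the data shape is our own assumption; only the four-day gap and the 50% threshold come from the hypothesis above):

```typescript
// Check the inactivity hypothesis against per-user session timestamps.
// Assumes each user's session dates are sorted in ascending order.
type UsageLog = Map<string, Date[]>; // userId -> sorted session dates

const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Longest gap (in days) between consecutive sessions of one user.
function longestGapDays(sessions: Date[]): number {
  let longest = 0;
  for (let i = 1; i < sessions.length; i++) {
    const gap = (sessions[i].getTime() - sessions[i - 1].getTime()) / MS_PER_DAY;
    longest = Math.max(longest, gap);
  }
  return longest;
}

// The hypothesis holds if at most 50% of users were ever inactive for more than four days.
function hypothesisHolds(logs: UsageLog, maxGapDays = 4): boolean {
  let violators = 0;
  for (const sessions of logs.values()) {
    if (longestGapDays(sessions) > maxGapDays) violators++;
  }
  return violators / logs.size <= 0.5;
}
```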

Summary

This was the second part of “Digital Innovation as a Service”. In this article we explained our framework for contextual experiments. The framework is not a linear process but rather a guide that helps us ask the right questions and design the right product by continuously testing and validating our solutions.

Our framework for experiments is the heart of our work, fuelled by the right mindset and put into practice with state-of-the-art methodologies. Part three will cover our methodologies and the techniques we like to employ on a daily basis.

Companies like Volkswagen, Deutsche Bahn, TUI and Burda as well as start-ups already put their trust in our Digital Innovation as a Service. Are you the next to be convinced?

Reach out to Nico Wohlgemuth (info@dayone.de) for more information

If this sounds like a place you want to work, we’ve got good news for you: we’re hiring! Reach out to Annika Köbsch (career@dayone.de) for more information.

Written by Maureen Herben, UX Designer at DAYONE

Read more about DIaaS here:

Part 1: Mindset
Part 2: Framework (article above)
Part 3: Methodologies

Maureen Herben

About the author

Maureen works as a UX Designer at DAYONE. Besides her daily job, she continuously develops her skills in the field of user experience design and loves to share her expertise and insights, not only with her colleagues at DAYONE but also with a growing community on her Instagram account UX.Collection. What else does she like? Art, music and really good food are her favorites…


A studio for service and digital product design — we partner with organizations on their way to a user-centered tech company. www.dayone.de