Deep tech: finding the right problem

How to accelerate problem-solution fit with discovery sprints

Alezeia Brown
stellargraph
8 min read · Jun 25, 2020

Co-authored by Georgina Ibarra and Maria Wikstrom

Any founder, product manager or user experience (UX) designer will tell you that the key to building products that people want is to first find the right problem to solve.

Many of the product and design frameworks that have emerged over the past decade are grounded in human-centred design and begin with a customer problem. But what if you are starting from the other end of the spectrum, with a technology in hand looking for a problem to solve?

While this approach appears contrary to contemporary product design, a ‘tech push’ scenario can play out in many ways:

  • Existing product company expanding into a greenfield market
  • New infrastructure technology looking for an entry use case
  • Deep technology looking for an entry use case
  • Product teams looking to pivot

As a product and UX team working for a national research organisation, we often find ourselves looking at how we can take deep tech — which could be applied to multiple problems and multiple markets — and determine a problem-to-tech match that provides a real solution pathway.

This article explores the way we applied and repurposed Google Ventures’ Design Sprint framework to deliver insights in the discovery phase to validate a new use case for our deep tech innovation.

How is deep tech product genesis different?

A 2019 report by BCG and Hello Tomorrow found that deep tech companies take on average 1.8 years to go from incorporation to first prototype, and a further 1.5 years to go from prototype to market. Add to this that in research organisations it can take years to move from basic research to applied research and early proof-of-concept development.

(Image: BCG, The Dawn of the Deep Tech Ecosystem, March 2019)

We have found that in many cases the right amount of discovery work can enable a deep tech program to zero in on its problem space much earlier in the process. This can be particularly helpful if the ecosystem is still in early development, or the innovation is so disruptive that the target market doesn’t exist yet.

The ‘tech push’ of deep tech

Deep tech products and businesses are those founded on a scientific discovery or meaningful engineering innovation, characterised by:

  • having a big impact, such as creating new markets or instigating social/environmental change
  • taking a long time to reach market-ready maturity
  • requiring a significant amount of capital to develop and scale (source).

The very nature of deep tech is to generate disruptive innovation. As such, the products and businesses arising from deep tech are often market creators (e.g. the Internet of Things (IoT)), or can be transformative across multiple market problems (e.g. machine learning, quantum computing).

In a traditional research and development environment, the creation of deep tech produces a ‘tech push’ where the challenge is to find the right problem to solve with the technology. Because deep tech can be applied so broadly, finding the right problem-to-innovation match to deliver both impact potential and a solution pathway is complex.

Discovering the perfect problem

One of our programs was focused on building graph machine learning technology. The potential applications for this technology are numerous: wherever there is a large dataset of interconnected data, graph analytics can provide deeper insights, faster.
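To make ‘interconnected data’ concrete, here is a purely illustrative sketch using the open source StellarGraph library: a handful of hypothetical account-to-account transactions represented as a graph. The account names and amounts are invented for illustration only.

```python
# Illustrative sketch: hypothetical account-to-account transactions as a graph.
# The accounts and amounts below are invented for illustration.
import pandas as pd
from stellargraph import StellarGraph

# Edges: who transacted with whom, with the amount used as the edge weight.
transactions = pd.DataFrame(
    {
        "source": ["acct_A", "acct_A", "acct_B", "acct_C"],
        "target": ["acct_B", "acct_C", "acct_D", "acct_D"],
        "weight": [250.0, 1200.0, 90.0, 4300.0],
    }
)

# Nodes: one row per account; feature columns could be added here.
accounts = pd.DataFrame(index=["acct_A", "acct_B", "acct_C", "acct_D"])

graph = StellarGraph(nodes=accounts, edges=transactions)
print(graph.info())  # summary of the nodes and edges in the graph
```

Once data is in graph form like this, graph machine learning techniques can be applied to tasks such as node classification or link prediction.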

So when you can go anywhere, where do you start?

We chose to start with some early discovery work to flesh out our assumption that our innovation could be applied in the sophisticated financial crime problem space. To validate this assumption, we worked our way through the following steps:

  1. Understand the innovation — determine what the tech does (the output) and the outcome this enables (the abstraction). Map out the technical capability needed to make the innovation fit for purpose.
  2. Background research — review the associated industry landscape and develop a macro-level view of the ecosystem and all the actors in it. This might include things like industry talks, conferences and customer discovery activities.
  3. Business idea generation — document a range of global trends and match them to your innovation to ideate what the market problem/benefit could be.
  4. Discovery activities — unearth the context, current state and problems with research activities such as customer and user interviews, surveys and jobs-to-be-done analysis.
  5. Validation exercises — match the problems with potential solutions, conducting early concept exploration and testing with customers and users.

We use these steps to enable our UX and product teams to ‘front load’ domain exploration: building a solid grasp of the ecosystem of a given market or industry sector, and the technologies being used in it, in order to generate insight into the business and user needs not yet solved by existing technology.

The discovery validation phase

Having completed the domain groundwork in the steps above, we found that applying the Design Sprint methodology in a discovery context was an effective validation exercise that accelerated our ability to learn whether there was a problem-to-innovation fit.

The Design Sprint gives teams a shortcut to learning without building and launching (image: https://www.gv.com/sprint/)

In keeping with the Design Sprint methodology, we ran a full five-day process to investigate a future product scenario, with the aim of better understanding technology proposals and surfacing possible risks and opportunities by contrasting them with the current landscape.

Through this process, we found a problem to solve with our deep tech where solving it would result in measurable value for the user, and we got signals of strong potential for customer value and the ability to deliver impact.

Learnings from our discovery phase

Warm up before you sprint

The Design Sprint methodology starts with the assumption that teams already have expertise in their problem space, and that the first day will be sufficient to bring everyone up to a common understanding. But in a tech push environment, this is rarely the case.

We found it critical in the discovery phase to put aside two to eight weeks to adequately cover all the areas needed to build a working understanding of the problem space before launching into the Design Sprint.

In our case, we were investigating whether we could use graph machine learning in the sophisticated financial crime problem space. We’re not domain experts, so we worked through discovery steps one, two and four to build the following context:

  • Understand the innovation by abstracting the technology from output to outcome to map out the keystone capability, for example taking a technical output and abstracting it through to the business outcome it enables.
  • Background research, building an understanding of both industry and government perspectives, including a survey of the legal landscape and expert interviews.
  • Discovery activities based on organisation versus user challenges, Jobs to be Done (JTBD), user personas, and researching how people can trust and therefore act on machine learning predictions in high-consequence situations (stay tuned for our upcoming article on trust in machine learning).

Be clear on buyer/user scope

If your innovation is an enterprise product, be aware of the difference between the buyer and the user. With consumer products, buyers and users are typically one and the same; this is not always the case with enterprise products.

We encourage making time to distinguish your user and buyer personas so it is clear which one you are focusing on from day one of the Design Sprint, because by the fifth day it should be clear whether the persona you chose would want to use and/or pay for the product.

As we were testing to learn the problem-tech fit, we narrowed our scope to test desirability and feasibility only (based on Larry Keeley’s triangle). Further, as our technology was geared towards enterprise use, we focused on potential users, not buyers.

Larry Keeley’s triangle.

For us, this meant that if the Design Sprint showed our innovation didn’t demonstrate desirability and feasibility, we could pivot the innovation to another use case. On the other hand, if we got strong positive indications, we would still need to test viability in a second round of validation exercises.

Use low fidelity prototypes when testing deep tech concepts

Our core goal was to validate the desirability of the innovation and whether potential users would employ this type of technology for their JTBD.

So, we had to ensure that our day five user testing component was focused on validating the value the innovation would provide for users rather than getting distracted by its visual elements. To achieve this we used:

  • low fidelity sketch prototypes — pencil on paper, no colour
  • targeted questions around validating the value.

We find this low fidelity approach to deep tech concept testing helps keep participants focused on the tasks presented to them, mitigating visual distractions and preventing users from getting stuck on interaction detail.

Recruit testing participants early

The success of any sprint depends on having the right mix and right number of participants for day five testing.

Ideally, you need five participants who form a homogeneous group and match the persona you are targeting for the innovation.

For innovations targeting general consumers, you can advertise for participants using open forums like Gumtree (or Craigslist in the US). However, for specialised or enterprise products, you will be looking for participants who have specialised skills or qualifications. These experts can be much harder to find and/or engage in testing due to confidentiality and security concerns, as well as limited availability during business hours.

Many of the participants who supported our discovery phase fell into this category, so we found that initiating recruitment several weeks ahead of the Design Sprint was critical to getting the right participants.

In summary

In our tech push environment, the discovery phase and Design Sprint process are valuable tools to either fail fast or accelerate the deep tech along the path to disruptive innovation and market creation.

While the viability of our innovation still needed to be validated, for us the discovery phase and Design Sprint process demonstrated the value of combining product and UX approaches to validate the problem-solution fit early in the process.

The learnings we extracted set the foundation for further adaptation of the Design Sprint framework to suit our tech push environment, where our bread and butter is deep tech solutions.

Have you used design sprints in a discovery phase for deep tech or to build enterprise systems? We’d love to hear from you — please share your experience as a comment below.

The StellarGraph Library is an open source Python library that delivers state-of-the-art graph machine learning algorithms on TensorFlow and Keras. To get started, run pip install stellargraph, and jump into one of the demos.
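As a hedged starting point beyond pip install, the sketch below is loosely based on the library’s node classification demos: training a GCN classifier on the bundled Cora citation dataset using StellarGraph’s Keras integration. It assumes a StellarGraph 1.x style API, so exact names and signatures may differ between versions; treat it as a sketch rather than a canonical recipe.

```python
# A sketch loosely based on StellarGraph's node classification demos
# (assumes a StellarGraph 1.x style API; details may vary by version).
import stellargraph as sg
from stellargraph.mapper import FullBatchNodeGenerator
from stellargraph.layer import GCN

from tensorflow.keras import Model, layers, losses, optimizers
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

# Load a small benchmark citation graph bundled with the library.
graph, node_subjects = sg.datasets.Cora().load()

# Split the labelled nodes and one-hot encode their subjects.
train_subjects, test_subjects = train_test_split(
    node_subjects, train_size=0.1, stratify=node_subjects
)
encoder = LabelBinarizer()
train_targets = encoder.fit_transform(train_subjects)
test_targets = encoder.transform(test_subjects)

# A full-batch generator feeds the whole graph to the GCN layers.
generator = FullBatchNodeGenerator(graph, method="gcn")
gcn = GCN(
    layer_sizes=[16, 16], activations=["relu", "relu"], generator=generator, dropout=0.5
)
x_inp, x_out = gcn.in_out_tensors()

# Add a softmax classification head and train with standard Keras calls.
predictions = layers.Dense(units=train_targets.shape[1], activation="softmax")(x_out)
model = Model(inputs=x_inp, outputs=predictions)
model.compile(
    optimizer=optimizers.Adam(learning_rate=0.01),
    loss=losses.categorical_crossentropy,
    metrics=["acc"],
)

model.fit(generator.flow(train_subjects.index, train_targets), epochs=50, verbose=0)
print(model.evaluate(generator.flow(test_subjects.index, test_targets)))
```

The library’s demos cover further workflows, including other algorithms such as GraphSAGE and tasks such as link prediction.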

This work is supported by CSIRO’s Data61, Australia’s leading digital research network.
