Evidence-Based Decision Making

Matthew Godfrey
Published in Ingeniously Simple
Jan 26, 2018 · 5 min read

In our efforts to evolve our research and design practices (UX) here at Redgate we continually look to reflect on and review our overall approach to design, as well as how we individually support the work of our respective product teams.

Like any good practice, we believe that being humble and knowing that there is always room for improvement allows us to evolve and adapt to both the changes in our industry and the needs of a growing business.

One of our core tenets is delivering value to our customers by ensuring the work we deliver meets their needs and solves real problems. But how do we know which problems are the right ones to solve? And how do we know when we’ve found good solutions to those problems?

To this end, the big question we asked of design last year was: how can we help teams make better product decisions?

Over the last year we’ve worked with our product teams to increase the amount of research we undertake, but not simply because it’s the right thing to do: research and gathering feedback from our users are fundamental to our ability to learn and make well-reasoned decisions.

At Redgate we run a Design Guild (our internal Community of Practice) where designers spend Friday mornings working together on practice improvements. During these sessions we called out the need to define and strengthen our research practices; in particular teams’ ability to identify the core needs of their users and extract key insights from research.

We hypothesised that:

Teams who could bring the evidence of research into their decision-making activities would be able to make both better strategic (what) and tactical (how) product decisions.

To explore this, we started by thinking about what we are now referring to as different ‘modes’ of design, how our research activities might align to these modes and, more importantly, how these modes might map to our product development cycles.

Three ‘modes’ of design

Thinking about the various types of work we do across the product portfolio, our efforts broadly fell into one of three categories, which we’re now referring to as Strategic Design, Tactical Design and Interface Design.

Strategic Design

Strategic Design sees us applying the fundamentals of Design Thinking to empathise with and understand the broader needs of users, identify new opportunities (unmet/underserved needs) for innovation and to help the team/wider business envision a desired future state.

Tactical Design

Tactical Design should naturally follow (and be informed by) a strategy that defines the user/business problem we are setting out to address. Tactical design work is about helping the team gain a better understanding of the problem from a user’s point of view, and helping them decide what to explore, test and ultimately create in order to solve that problem.

Interface Design

Once a team finds problem-solution fit, the work of design shifts to supporting the team’s implementation of a solution: crafting specific workflows, interactions and interface elements, and refining these incrementally through testing and iteration. The goal here is to support the team in releasing the solution, evaluating how it is being used in the wild and iterating accordingly.

Which mode we found ourselves operating in was often determined by a combination of the team’s current understanding, their confidence in their current approach and the stage in the lifecycle of the product in question.

Example: A team might be embarking on an entirely new project, or be at a stage where the vision and/or strategy for an existing product is hazy, undefined or uncompelling. In this scenario we would expect our activities to be more strategic in nature, helping to inform the team’s understanding of its users and their new, emergent or under-served needs, and its ability to frame and identify compelling opportunities.

Research…the currency of decision-making

So how does this understanding impact and inform our research practices and in-turn, our ability to make better decisions?

  1. Well, for a start, it helps guide and shape the frequency, methods and outputs we can expect from the research we conduct with our teams.
  2. It explicitly calls out the need for different types of research depending on what a team needs to learn at any given time.
  3. It helps us to acknowledge the need to concurrently explore research activities at strategic (what), tactical (how) and interface (now) levels.
  4. It gives us a model from which to understand how our research activities can better align with and support the motions of our planning and development cycles.

Focusing on the fourth point, around alignment: we currently follow a twice-yearly planning cycle, with a quarterly practice of teams setting their Objectives and Key Results (OKRs). This provides us with key decision points for leadership and product teams, with the former framing the high-level problems/challenges and the latter deciding how best to go after these.

With this in mind, and in the spirit of enabling better decision-making, we are now working towards a model that maps our research activities to these cycles, enabling teams to make informed, evidence-based decisions at the most appropriate times.

Design Cadence at Redgate

In practice, this now sees us starting to apply a different set of research methods and design artefacts to our product work, on a cadence that enables our product teams to regularly test their assumptions, gather feedback and know (with increasing confidence) how best to proceed.

Six-month cycle

We utilise more generative research methods over a six-month period to build out a broad understanding and surface the insights and opportunities that will inform and guide teams’ planning activities (strategic inputs).

Three-month cycle

In parallel, we’ll often use more evaluative methods, this time over a three-month period, to validate the various ideas our teams commit to exploring in pursuit of their OKRs.

Per-sprint (or every-other-sprint) cycle

Lastly, we apply more traditional usability methods, typically on a per-sprint basis, to evaluate the usability of the work teams ship to users from release to release.

Over time our hope is that by defining our practices, following the rhythm of our planning/delivery cycles and applying more rigour to our research activities we’ll:

  • Reduce internal biases and over-reliance on static, historical knowledge
  • Ensure product decisions are based on evidence and objectivity
  • Help teams negotiate ambiguity and avoid decision paralysis
  • Increase team confidence in the value of their efforts
  • Spend more time/effort working on the most valuable problems
