A report by the Economist Intelligence Unit suggests that a truly evidence-driven culture is one where evidence (or data, as it’s often referred to) becomes fundamental to every job and every role, rather than being relegated to select job functions. Geckoboard produced a good report analyzing what 368 startups are doing to grow a strong evidence-based culture, and it offers an excellent definition of an evidence-driven culture as “one where data is valued as much as intuition and experience; where data is visible and accessible to everybody.” Others, such as Alistair Croll, co-author of Lean Analytics, believe data is even more valuable than intuition and experience, and that it has reached a point where companies can no longer ignore it and remain competitive.

Below is a set of suggestions for fostering a strong evidence-based product culture. Some of these ideas are high-level, but I’ve tried to provide as many tangible, actionable items as possible. Some are quite specific to product teams, while others can be applied more generically across the entire business.

Understand what it means

Before we get too far, it’s helpful to define a few things. You’ll notice I tend to use the terms evidence and data interchangeably here. I prefer “evidence” for two reasons:

  1. For many people, “evidence” is a form of data more likely to be associated with a process of experimentation, where that data is used to inform decisions. This concept more closely aligns with the purpose of data in what I’m referring to as an evidence-based product culture.
  2. There are two types of data we are concerned with: qualitative and quantitative. The two are equally important in product. However, for many people the term “data” tends to be more closely associated with the quantitative type. Using the term “evidence” does not have the same strong connotation and makes it easier to ensure we’re considering both types.

Staff for it

With medium-sized and larger teams, there are evidence-focused roles you would expect to exist in a company that is truly focused on being evidence-driven. For smaller teams, everyone tends to wear many hats. However, these roles are still critical regardless of team size, so make sure they’re covered one way or another.

  1. Data Analyst/Scientist (quantitative data) — It’s not enough to simply collect evidence. Grooming, auditing, interpreting, and communicating the evidence are critical to deriving value from it.
  2. User Researcher (qualitative data) — In an evidence-driven product culture, I would expect every team (or at least the teams working on customer-facing products) to be speaking with customers weekly if not daily (i.e. “customer development”). If this is not a low-friction, super-efficient process for you, it can greatly benefit from not only tooling and automation, but also the full-time focus of a smart individual.

I do not advocate that these roles become entirely responsible for owning evidence end-to-end. It’s important that the teams themselves be as close to the actual evidence as possible. However, these roles should be embedded with the teams to help facilitate best practices, training, and understanding. They should also be strong contributors to optimizing processes around evidence collection, deriving actionable insight from the evidence, and communicating learnings to the rest of the company.

I would also suggest there be at least one person in product, one in engineering, and one in UX who is genuinely passionate about evidence-driven practices. That person should understand how the use of evidence applies specifically to their role, know how to use the tools, and evangelize best practices to their peers.

Invest in tooling & instrumentation

Tooling is important for an evidence-driven culture, and I believe the investment in tooling directly reflects how important evidence is to an organization. The purpose of tooling is to make evidence collection so easy for teams that there is little motivation to skip over this step. Many teams get caught up here, pushing off tooling & instrumentation to future iterations (in reality, that often means it never gets done). There are a few areas to focus on here.

“…the investment in tooling directly reflects how important evidence is to an organization.”

Understand your tools. I’ve used a variety of tools in my career. That’s just the nature of things today, especially if you’re being scrappy and not trying to build everything yourself (my preferred approach). Try not to have redundant tools, but don’t be afraid to have several tools that all serve a meaningful purpose. Our team uses tools like Mixpanel, Totango, and Google Analytics for collecting quantitative evidence. Other tools like Google+ Hangouts, Google Docs, Invision, Balsamiq, and Intercom are key for us to collect qualitative evidence. They all serve their purpose, and it is important that someone (or a group of people) understand the nuances of each product and your team’s needs in order to get the full value from them (as well as justify the cost).
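To make the instrumentation side concrete, here is a minimal sketch of the kind of thin tracking wrapper that helps keep event naming consistent across tools. The function, field, and event names are hypothetical, not from any particular vendor’s SDK:

```python
# A minimal sketch of a thin analytics wrapper (all names are hypothetical).
# Routing every event through one function keeps the schema consistent and
# makes it easy to fan out to multiple tools later.
import time
from typing import Any, Optional

def track(user_id: str, event: str, properties: Optional[dict] = None) -> None:
    """Record a single product event with a consistent schema."""
    payload = {
        "user_id": user_id,
        "event": event,
        "timestamp": time.time(),
        "properties": properties or {},
    }
    # In a real system this would enqueue to your analytics pipeline
    # (Mixpanel, Totango, an internal event bus, etc.).
    print(payload)

track("user_123", "trial_started", {"plan": "standard"})
```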

Create proper tools for split-testing and feature flagging. Most evidence-driven organizations have robust solutions for both split-testing and feature flagging, as both are important tools for running efficient, repeatable product experiments.
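As a rough illustration, deterministic hashing is one common way to bucket users for split tests so each user always sees the same variant. The flag names, experiment name, and 50/50 split below are illustrative, not any specific product’s API:

```python
# A sketch of deterministic experiment bucketing plus a simple flag check;
# the flags and the two-bucket split are hypothetical examples.
import hashlib

FLAGS = {"new_onboarding": True, "bulk_export": False}  # hypothetical flags

def variant(user_id: str, experiment: str, buckets: int = 2) -> int:
    """Hash user+experiment so each user always lands in the same bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

if is_enabled("new_onboarding") and variant("user_123", "onboarding_v2") == 1:
    pass  # show the treatment; log the exposure so results can be analyzed
```

Hashing on user + experiment (rather than user alone) keeps bucket assignments independent across experiments.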

Keep the evidence clean and well-organized. To be useful, your evidence must be both accessible and reliable. If your evidence is not clean, you either run the risk of making poor decisions based on bad information or end up disregarding the evidence altogether because you can’t trust it, falling back on what’s left: guesses. In the best case, dirty data forces you to spend so much time auditing and manipulating it that it becomes difficult to make fast progress toward your end goal. An evidence-driven process is a tough sell if it gets a stigma in your organization for being the biggest bottleneck.

When evidence is not well-organized, you run into similar problems. You spend an unnecessary amount of time hunting down the right information. What’s more common is to give up and do it again by either re-running your experiment or re-instrumenting your product, creating redundant data. If you’re headed down this path, it’s not long before the entire system becomes more hassle than it’s worth.
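One lightweight guard is an automated audit that flags malformed events before they pollute your reports. This is a minimal sketch with a hypothetical event schema:

```python
# A sketch of an automated event audit; the required fields and sample
# events are hypothetical stand-ins for your own schema.
REQUIRED_FIELDS = {"user_id", "event", "timestamp"}

def audit(events: list[dict]) -> list[str]:
    """Return human-readable problems rather than silently dropping rows."""
    problems = []
    for i, e in enumerate(events):
        missing = REQUIRED_FIELDS - e.keys()
        if missing:
            problems.append(f"event {i} missing fields: {sorted(missing)}")
    return problems

print(audit([{"user_id": "u1", "event": "signup", "timestamp": 1700000000},
             {"event": "login"}]))  # -> flags the second event
```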

Teach people

In many organizations, I find there is a lack of basic understanding when it comes to tools, technical instrumentation, evidence collection, interpretation, best practices, etc. There are a variety of resources at your disposal to address this. Some ideas for what can be done here include:

  • Invite vendors in for training on their tools (or a video chat, or webinar, or whatever people are willing to offer).
  • Invite vendors or other people in your network to talk about how they foster an evidence-driven culture in their own companies.
  • Send people to conferences that tend to talk about being evidence-driven, such as Lean Startup Conference and Growth Hackers Conference.
  • Do a book club on Lean Analytics (or any book in the Lean series, really, as they all focus to varying degrees on evidence-driven processes).
  • Have people internally conduct brown bag sessions about any evidence-related topics they feel comfortable talking about.

The list goes on, but the general idea is that there are lots of opportunities out there to educate the entire company about being evidence-driven.

Make it accessible

The Geckoboard report mentioned earlier suggests “data becomes irrelevant if organizations cannot make sense of it or communicate insights clearly throughout the organization.” I’ve seen teams frequently underestimate the difference between just having the evidence and actually having evidence that is easily accessible by everyone. Making evidence accessible involves two things. First, people need to be able to get to it. A few ways to accomplish this:

  • Give everyone login access with at least read permissions to the tools, and make sure they know how to get the information they need.
  • Post important metrics on large monitors throughout the office. Metrics that are more real-time in nature, and have the potential to promote short-term action, tend to work best for this.
  • Regularly share interesting evidence with your company. Important changes in key metrics, as well as video recordings or notes from customer interviews are all valuable to share with a broad group. When doing this, it is best to provide context by connecting the evidence with the associated experiment, as well as what key learnings the team derived from the evidence.

“Teams frequently underestimate the difference between just having the evidence and actually having evidence that is easily accessible by everyone.”

Second, evidence needs to be understandable. There isn’t much value in passing around a spreadsheet full of numbers that nobody understands. Access to the raw evidence by everyone is good, but evidence should also be provided to people in a way that is curated, designed, and “humanized.” Most people don’t have the time, context, or ability to do this themselves. The evidence-focused roles outlined earlier, as well as good designers, can help with this.

Get better at recognizing assumptions

A strong evidence-driven culture is one where assumptions are rigorously validated or invalidated based on evidence. One of the most challenging mental shifts for most people is learning to recognize their assumptions in the first place (rather than writing them off as “experience” or “expertise”). My teams will often run design sprints, where we directly address this by going through an exercise of declaring our assumptions and indicating how those assumptions will be tested. In some cases, the team may choose not to test an assumption. This is acceptable (it’s not always feasible to test everything). What’s important is that the team is being honest and upfront with itself about what assumptions (i.e. risk) it chooses to carry forward in the project.

“A strong evidence-driven culture is one where assumptions are rigorously validated or invalidated based on evidence.”

Answer some basic questions

Being able to easily answer some basic questions based on quantitative evidence at any given time is a good way to assess whether your tools and instrumentation are at the point they need to be. By “easily answer,” I mean anyone has the ability to quickly access an automated report that shows this information in real-time. We can start by using Dave McClure’s pirate metrics framework to come up with some basic questions (a sketch of how a few of them might be computed follows the lists). These are specific to our business model at Desk.com, so you may need to adapt them slightly to suit your needs:

Acquisition

  • How many unique visitors come to our website each day?
  • What % of website visitors perform at least one action (click) or stay on the site for more than 10 seconds?

Activation

  • What % of website visitors sign up for a trial each day?
  • How many total customers do we have today?
  • What % of our total customers are in trial?

Retention

  • What % of trials perform some meaningful action during the trial?
  • What % of trials are still active after the first day?
  • What % of trials are still active after the first week?

Referral

  • What % of customers score a 9 or 10 on a Net Promoter Score survey?

Revenue

  • What % of customers who sign up for a trial convert into paying customers (at any payment plan tier)?
  • What % of paying customers are at each payment plan tier?
  • What % of new customers each month choose to pay us annually vs. monthly?
  • What % of our total customers pay us annually vs. monthly?
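As promised, here is a rough sketch of how a few of these percentages might be computed once the underlying counts are available. The counts below are made-up placeholders for whatever your analytics tools report:

```python
# A sketch of computing a few funnel percentages from raw counts; all
# numbers here are hypothetical placeholders.
def pct(part: int, whole: int) -> float:
    return 100.0 * part / whole if whole else 0.0

visitors, trials, paying = 12000, 480, 96   # hypothetical daily counts
promoters, nps_responses = 55, 80           # hypothetical NPS tallies

print(f"Activation: {pct(trials, visitors):.1f}% of visitors start a trial")
print(f"Revenue:    {pct(paying, trials):.1f}% of trials convert to paid")
print(f"Referral:   {pct(promoters, nps_responses):.1f}% score 9 or 10")
```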

In addition to understanding your overall funnel, it is extremely helpful to understand how customers use your product. If you’re like many teams I’ve seen that have been around for a while, you’ve piled on feature after feature while removing very little. I would propose that removing underused features is as important as adding new ones. Removing underused features not only increases the value of the remaining features, but it also decreases the overhead of maintaining the product, allowing the team to move faster and do more with less. As with anything else, I would advocate that feature removal decisions be made based on evidence. Intercom wrote a good article on visualizing feature creep using quantitative evidence. My suggestion would be to generate a real-time report that visualizes adoption across all your core features, similar to what’s in that article. This evidence should drive conversations about either removing underused features (where they don’t deliver enough value to your customers) or better promoting them (where you have evidence that the feature drives real value for a representative group of customers, and you believe the barrier to overall adoption is more about discovery).
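A report along those lines can be as simple as the following sketch, which ranks features by adoption and flags candidates for removal or promotion. The feature names, counts, and the 10% threshold are all hypothetical:

```python
# A sketch of a feature-adoption report in the spirit of the Intercom
# article; features, counts, and the threshold are made up.
total_active_users = 1000
feature_users = {          # users who used each feature in the last 30 days
    "reporting": 820,
    "bulk_export": 45,
    "custom_themes": 12,
}

for feature, users in sorted(feature_users.items(), key=lambda kv: kv[1]):
    adoption = 100.0 * users / total_active_users
    flag = "  <- candidate to remove or better promote" if adoption < 10 else ""
    print(f"{feature:15s} {adoption:5.1f}%{flag}")
```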

Focus projects and initiatives on outcomes over outputs

Consistently measuring team success based on outcomes rather than outputs not only keeps the team focused on real customer value, but it is also a great way to reinforce a strong evidence-based culture. This can be done with some basic steps:

  1. At the beginning of each meaningfully-sized project or initiative, your team should establish success criteria in the form of a measurable result. These criteria should be documented and communicated to whomever necessary (ex. internal stakeholders). One common way to do this is in the form of a hypothesis statement: “We believe if we provide [solution] to [customer], it will result in [outcome] as measured by [measurable success metric].”
  2. During development of the solution, the team must ensure the necessary systems are in place to collect evidence so they can later report on the determined success metric. If engineering work is needed for this, it should be part of the engineering team’s definition of done.
  3. Unless a decision is made to completely kill the project, the team should agree to iterate until the success criteria are met and they can prove it with evidence (a sketch of one way to check such a metric follows this list).
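For conversion-style success metrics, a two-proportion z-test is one common way (certainly not the only one) to check whether a measured lift is real rather than noise. A minimal sketch with made-up counts:

```python
# A sketch of evaluating a hypothesis-statement metric with a one-sided
# two-proportion z-test; the conversion counts are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_p(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """p-value for 'treatment (b) converts better than control (a)'."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 1 - NormalDist().cdf(z)  # one-sided

# e.g. "We believe the new onboarding will lift trial-to-paid conversion":
print(two_proportion_p(conv_a=96, n_a=480, conv_b=130, n_b=500))  # ~0.013
```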

Celebrate successes

When product updates are later communicated to the company, the focus should be on the team’s ability to achieve the desired outcomes. The outputs the team produced to achieve those outcomes should be secondary. Take advantage of these successes to share how the team used an evidence-based approach, and how that led to delivering meaningful results to your customers faster and with a higher level of certainty.

“The team should agree to iterate until the success criteria are met and they can prove it with evidence.”

Have the willingness to make tough decisions

Building a product that doesn’t actually solve your customers’ problem, or even building a product that solves a problem but not one your customers have, is a common form of waste among product teams. Minimizing this waste is one of the greatest benefits of an evidence-based approach. However, you must have the discipline to make the tough decisions when the evidence points in that direction. This includes:

  • Moving on from the problem you thought you needed to solve when you cannot show evidence that it’s a real problem, or that it’s big enough to be worth solving.
  • Killing your idea when you cannot show evidence that your solution will solve your customers’ problem.
  • Not considering the project “done” until you have evidence that you have met the success criteria.

Create a growth team

Many companies that see massive growth get there because of the diligent efforts of a focused user growth team. Not only are growth teams excellent at driving user acquisition for the product, but a good growth team will naturally be evidence-driven. Growth teams run many experiments frequently, and the results of those experiments are often easily understood by everyone in the company. For these reasons and others, a growth team is an excellent model of an evidence-driven culture for other product teams.

Discuss and act

I’d love to hear about experiences in your own company, whether you’ve been successful with an evidence-driven approach or are struggling to get others to play along. Regardless of your situation, my hope is that this post isn’t just something you read through and pass around. Hopefully there’s enough meat here that it fuels healthy conversation across your organization (and in turn, action) for quite some time.


Follow us at @SalesforceUX. Want to work with us? Contact uxcareers@salesforce.com. Check out the Salesforce Lightning Design System.
