7 Lessons on driving impact with Data Science & Research

Karen Church · Published in intercom-rad · 10 min read · Oct 21, 2022

Last year I gave a talk at a Women in RecSys keynote series called “What it really takes to drive impact with Data Science in fast growing companies”. The talk focused on 7 lessons from my experiences building and evolving high performing Data Science and Research teams in Intercom. Most of these lessons are simple. Yet my team and I have been caught out on many occasions.

Lesson 1: Focus on and obsess about the right problems

We have many examples of failing over the years because we were not laser focused on the right problems for our customers or our business. One example that comes to mind is a predictive lead scoring system we built a few years back.

The TL;DR is: after an exploration of inbound lead volume and lead conversion rates, we discovered a trend where lead volume was increasing but conversions were decreasing, which is generally a bad thing. We thought, “This is a meaty problem with a high chance of impacting our business in positive ways. Let’s help our marketing and sales partners, and do something about it!”

We spun up a short sprint of work to see if we could build a predictive lead scoring model that sales and marketing could use to increase lead conversion. We had a performant model built in a couple of weeks, with a feature set that data scientists can only dream of 😍 Once we had our proof of concept built, we engaged with our sales and marketing partners.

Operationalising the model, i.e. getting it deployed, actively used and driving impact, was an uphill battle, and not for technical reasons. It was an uphill battle because what we thought was a problem was NOT the sales and marketing teams’ biggest or most pressing problem at the time.

It sounds so trivial. And I admit that I am trivialising a lot of great data science work here. But this is a mistake I see time and time again.

My advice:

  • Before embarking on any new project, always ask yourself “is this really a problem, and for whom?”
  • Engage with your partners or stakeholders before doing anything to get their expertise and perspective on the problem.
  • If the answer is “yes, this is a real problem”, continue to ask yourself “is this really the biggest or most important problem for us to tackle now?”

In fast growing companies like Intercom, there is never a shortage of meaty problems that could be tackled. The challenge is focusing on the right ones.

The chance of driving tangible impact as a Data Scientist or Researcher increases when you obsess about the biggest, most pressing or most important problems for the business, your partners and your customers.

Lesson 2: Spend time building strong domain knowledge, great partnerships and a deep understanding of the business

This means taking time to learn about the functional worlds you look to make an impact on, and educating the people in them about yours. This might mean learning about the sales, marketing or product teams that you work with, or the specific sector that you operate in, like health, fintech or retail. It might mean learning about the nuances of your company’s business model.

We have examples of low-impact or failed projects caused by not spending enough time understanding the dynamics of our partners’ worlds and our specific business, or not building sufficient domain knowledge.

A great example of this is modeling and predicting churn — a common business problem that many data science teams tackle.

Over the years we’ve built multiple predictive models of churn for our customers and worked towards operationalising those models.

Early versions failed.

Building the model was the easy bit, but getting the model operationalised, i.e. used and driving tangible impact, was really hard. While we could detect churn, our model simply wasn’t actionable for our business.

In one version we embedded a predictive health score as part of a dashboard to help our Relationship Managers (RMs) see which customers were healthy or unhealthy so they could proactively reach out. We discovered a reluctance by folks in the RM team at the time to reach out to “at risk” or unhealthy accounts for fear of causing a customer to churn. The perception was that these unhealthy customers were already lost accounts.

Our sheer lack of understanding about how the RM team worked, what they cared about, and how they were incentivised was a key driver in the lack of traction on early versions of this project. It turns out we were approaching the problem from the wrong angle. The problem isn’t predicting churn. The challenge is understanding and proactively preventing churn through actionable insights and recommended actions.

My advice:

Spend significant time learning about the specific business you operate in, learning how your functional partners work, and building great relationships with those partners.

Learn about:

  • How they work and their processes.
  • What language and definitions do they use?
  • What are their specific goals and strategy?
  • What do they have to do to be successful?
  • How are they incentivised?
  • What are the biggest, most pressing problems they are trying to solve?
  • What are their perceptions of how data science and/or research can be leveraged?

Only when you understand these can you turn models and insights into tangible actions that drive real impact 💪

Lesson 3: Data & Definitions Always Come First

So much has changed since I joined Intercom nearly 7 years ago:

  • We have shipped hundreds of new features and products to our customers
  • We’ve sharpened our product and go-to-market strategy
  • We’ve refined our target segments, ideal customer profiles, and personas
  • We’ve expanded to new regions and new languages
  • We’ve evolved our tech stack including some massive database migrations
  • We’ve evolved our analytics infrastructure and data tooling
  • And much more…

Most of these changes have meant changes to the underlying data and to a host of definitions.

And all that change makes answering basic questions much harder than you’d think.

Say you’d like to count X.
Replace X with anything.
Let’s say X is ‘high value customers’.
To count X we need to understand what we mean by ‘customer’ and what we mean by ‘high value’.
When we say customer, is this a paying customer, and how do we define paying?
Does high value mean some threshold of usage, or revenue, or something else?
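
To make this concrete, here’s a minimal sketch in pandas, using made-up customer data and illustrative column names (mrr and weekly_active_seats are invented, not our actual schema), showing how two perfectly reasonable definitions of “high value customer” give different answers to the same question:

```python
import pandas as pd

# Hypothetical customer data; the column names are illustrative only.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "is_paying": [True, True, True, False, True],
    "mrr": [250, 1200, 90, 0, 3000],               # monthly recurring revenue
    "weekly_active_seats": [40, 3, 75, 120, 35],
})

# Definition A: "high value" = paying customers above a revenue threshold.
high_value_a = customers[customers["is_paying"] & (customers["mrr"] >= 1000)]

# Definition B: "high value" = paying customers with heavy product usage.
high_value_b = customers[customers["is_paying"] & (customers["weekly_active_seats"] >= 30)]

print(len(high_value_a))  # 2 "high value customers" under the revenue definition
print(len(high_value_b))  # 3 "high value customers" under the usage definition
# Same question, same data, different answers, purely because the definitions differ.
```

Neither definition is wrong; the point is that the count is meaningless until the definition is agreed and written down.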

We have had a host of occasions over the years where data and insights were at odds. For example, we pull data today for a trend or metric, and the historical view differs from what we noticed before. Or a report generated by one team is different to the same report generated by a different team.

You see, ~90% of the time when things don’t match, it’s because the underlying data is inaccurate/missing OR the underlying definitions are different.

Good data is the foundation of great analytics, great data science and great evidence-based decisions, so it’s really important that you get that right. And getting it right is way harder than most folks think.

My advice:

  • Invest early, invest often and invest 3–5x more than you think in your data foundations and data quality.
  • Always remember that definitions matter. Assume 99% of the time people are talking about different things. This will help ensure you align on definitions early and often, and communicate those definitions with clarity and conviction.

Lesson 4: Think like a CEO

Reflecting back on the journey in Intercom, at times my team and I have been guilty of the following:

  • Focusing purely on quantitative insights and not considering the ‘why’
  • Focusing purely on qualitative insights and not considering the ‘what’
  • Failing to recognise that context and perspective from leaders and teams across the organization is an important source of insight
  • Staying within our data science or researcher swimlanes because something wasn’t ‘our job’
  • Tunnel vision
  • Bringing our own biases to a situation
  • Not considering all the options or alternatives

These gaps make it difficult to fully realise our mission of driving effective evidence-based decisions.

Magic happens when you take your Data Science or Researcher hat off. When you explore data that is more diverse than you are used to. When you gather different, alternative perspectives to understand a problem. When you take strong ownership and accountability for your insights, and the influence they can have across an organisation.

My advice:

Think like a CEO. Think big picture. Take strong ownership and imagine the decision is yours to make. Doing so means you’ll work hard to gather as much information and as many insights and perspectives on a project as possible. You’ll think more holistically by default. You won’t focus on a single piece of the puzzle, i.e. just the quantitative or just the qualitative view. You’ll proactively seek out the other pieces of the puzzle.

Doing so will help you drive more impact and ultimately develop your craft.

Lesson 5: What matters is building products that drive market impact, not ML/AI

The most accurate, performant machine learning model is useless if the product isn’t driving tangible value for your customers and your business.

Over the years my team has been involved in helping shape, launch, measure and iterate on a host of products and features. Some of those products use Machine Learning (ML), some don’t. This includes:

  • Articles: A central knowledge base where businesses can create help content to help their customers reliably find answers, tips, and other important info when they need it.
  • Product tours: A tool that enables interactive, multi-step tours to help more customers adopt your product and drive more success.
  • ResolutionBot: Part of our family of conversational bots, ResolutionBot automatically resolves your customers’ common questions by combining ML with powerful curation.
  • Surveys: A product for capturing customer feedback and using it to create better customer experiences.
  • Most recently our Next Gen Inbox: our fastest, most powerful Inbox designed for scale!

Our experiences helping build these products have led to some hard truths.

  1. Building (data) products that drive tangible value for our customers and business is hard. And measuring the actual value delivered by these products is hard.
  2. Lack of usage is often a warning sign of a lack of value for our customers, poor product-market fit, or problems further up the funnel like pricing, awareness and activation. The problem is rarely the ML.

My advice:

  • Invest time in learning about what it takes to build products that achieve product-market fit. When working on any product, especially data products, don’t just focus on the machine learning. Aim to understand:
    If/how it solves a tangible customer problem
    How the product / feature is priced
    How the product / feature is packaged
    What the launch plan is
    What business outcomes it will drive (e.g. revenue or retention)
  • Use these insights to get your core metrics right: awareness, intent, activation and engagement (see the sketch below)

This will help you build products that drive actual market impact 🙌🏻
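
As a rough illustration of that last point, here’s a minimal sketch using a hypothetical event log with invented event names (not Intercom’s actual tracking schema) that counts how many customers reach each stage of an awareness, intent, activation, engagement funnel:

```python
import pandas as pd

# Hypothetical product event log; the event names are invented for illustration.
events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3, 3, 3, 3, 4],
    "event": [
        "viewed_feature_page", "started_setup", "sent_first_survey",
        "viewed_feature_page", "started_setup",
        "viewed_feature_page", "started_setup", "sent_first_survey", "sent_10_surveys",
        "viewed_feature_page",
    ],
})

# Map raw events onto simple funnel stages.
funnel = {
    "awareness": "viewed_feature_page",
    "intent": "started_setup",
    "activation": "sent_first_survey",
    "engagement": "sent_10_surveys",
}

# Count distinct customers reaching each stage, as a share of aware customers.
reached = {stage: events.loc[events["event"] == name, "customer_id"].nunique()
           for stage, name in funnel.items()}

for stage, count in reached.items():
    print(f"{stage:<11} {count} customers ({count / reached['awareness']:.0%} of aware customers)")
```

Real instrumentation and dashboards go much further than this, but even a crude funnel view like this shows where in the journey customers are actually dropping off.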

Lesson 6: Always strive for simplicity, speed and 80% there

We have plenty of examples of data science and research projects where we overcomplicated things, aimed for completeness or focused on perfection.

For example:

  1. We wedded ourselves to a specific solution to a problem, like applying fancy technical approaches or utilising sophisticated ML when a simple regression model or heuristic would have done just fine (see the sketch at the end of this lesson)…
  2. We “thought big” but didn’t start or scope small.
  3. We focused on getting to 100% confidence, 100% correctness, 100% accuracy or 100% polish…

All of which led to delays, procrastination and lower impact in a host of projects.

Until we realised 2 important things, both of which we have to continually remind ourselves of:

  1. What matters is how well you can quickly solve a given problem, not what method you are using.
  2. A directional answer today is often more valuable than a 90–100% accurate answer tomorrow.

My advice to Researchers and Data Scientists:

  • Quick & dirty solutions will get you very far.
  • 100% confidence, 100% polish, 100% accuracy is rarely needed, especially in fast growing companies.
  • Always ask “what’s the smallest, simplest thing I can do to add value today?”
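
To make the “simple first” point concrete, here’s a minimal sketch (using scikit-learn and synthetic data, purely illustrative) of the kind of cheap baselines worth establishing before reaching for anything more sophisticated:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real problem (e.g. churn or lead conversion labels).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline 1: a trivial heuristic that always predicts the majority class.
dummy = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Baseline 2: a plain logistic regression with no feature engineering or tuning.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("dummy AUC: ", roc_auc_score(y_test, dummy.predict_proba(X_test)[:, 1]))
print("logreg AUC:", roc_auc_score(y_test, logreg.predict_proba(X_test)[:, 1]))
# Anything fancier has to beat these numbers by enough to justify the extra
# complexity, and the extra time before anyone gets an answer.
```

Often the simple model, or even the heuristic, already gives the directional answer the business needs today.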

Lesson 7: Great communication is the holy grail

Great communicators get stuff done. They are often effective collaborators and they tend to drive greater impact.

I have made so many mistakes when it comes to communication, as have my team. This includes:

  • One-size-fits-all communication
  • Under-communicating
  • Thinking I am being understood
  • Not listening enough
  • Not asking the right questions
  • Doing a poor job explaining technical concepts to non-technical audiences
  • Using jargon
  • Not getting the zoom level right, i.e. staying high level vs getting into the weeds
  • Overloading folks with too much information
  • Choosing the wrong channel and/or medium
  • Being overly verbose
  • Being unclear
  • Not paying attention to my tone… and there’s more!

Words matter.

Communicating simply is hard.

Most people need to hear things multiple times in multiple ways to fully understand.

Chances are you’re under communicating — your work, your insights, and your opinions.

My advice:

  1. Treat communication as a critical lifelong skill that needs continual work and investment. Remember, there is always room to improve communication, even for the most tenured and experienced folks. Work on it proactively and seek out feedback to improve.
  2. Over communicate / communicate more — I bet you’ve never received feedback from anyone that said you communicate too much!
  3. Have ‘communication’ as a tangible milestone for Research and Data Science projects.

In my experience, data scientists and researchers struggle more with communication skills than with technical skills. This skill is so important to the RAD team and Intercom that we’ve updated our hiring process and career ladder to amplify a focus on communication as a critical skill.

We would love to hear more about the lessons and experiences of other research and data science teams — what does it take to drive real impact at your company?

In Intercom, the Research, Analytics & Data Science (a.k.a. RAD) function exists to help drive effective, evidence-based decision making using Research and Data Science. We’re always hiring great folks for the team. If these learnings sound interesting to you and you want to help shape the future of a team like RAD at a fast-growing company that’s on a mission to make internet business personal, we’d love to hear from you.
