How to identify BIG IMPACT ideas

Eva Lond
Published in Pipedrive R&D Blog
6 min read · Apr 24, 2024

Have you ever had an overwhelming number of ideas in your backlog that you struggle to prioritize? Or perhaps you’ve already developed many of these ideas, and now your days are filled with maintaining solutions of questionable value? It’s often difficult to tell how valuable an idea or solution will be for your customers and how big its impact will be. At Pipedrive, we have been in this situation too…

In November 2023, Pipedrive formed the AI Acceleration team. Its goal was to optimize the discovery phase of the AI product lifecycle by picking BIG IMPACT ideas and rapidly prototyping and validating them. I joined the team from our Data Tribe, where I had been the Lead Data Engineer, and stepped temporarily into a new role: Data Scientist.

As part of the program, my team held a workshop to set concrete and measurable goals. As the primary goal, we targeted the number of experiments to run in six months, intending to ramp up the speed of learning. That was a team goal. Toomas Eilat, our Principal Product Data Analyst, and I took responsibility for the secondary goal: defining a methodology to identify BIG IMPACT.

This is what this post is about.

The dual challenge of big-impact ideas

The first thing we realized in our conversation was that there are two main parts of this puzzle that can’t be examined independently. We need to identify big-impact ideas and then measure whether the value provided to customers is as big as we thought it could be. These two processes need to be intertwined and continuous. You can only get big ideas when you measure while ideating. Also, you can only measure the impact efficiently if you stay connected to the same methods you used during ideation.

Let’s start unpacking this.

…it’s way easier to do big things fast than small things…

An important paradox to consider here is that it’s way easier to do big things fast than small things because it’s easier to identify a big impact. So, picking the right ideas to work on is critically important.

Let me explain.

It all comes down to how statistical significance is achieved. For example, you need huge sample sizes (about one million in this case) to measure a 0.1% shift in some metric. Yet when 10 out of 10 people in your target group are willing to pay you right away for what you’re building, that is already a solid indicator of a huge impact. Which comes easier: building something polished enough to go live for 100k customers, or showing a few people a hacky prototype that conveys the idea well?
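The arithmetic behind the “about one million” figure can be sketched with a standard two-proportion power calculation. The 10% baseline conversion rate below is an illustrative assumption, not a number from this post:

```python
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size *per group* for a two-sided two-proportion z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # critical value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Detecting a 0.1-percentage-point lift on an assumed 10% baseline
# needs on the order of a million users per group:
tiny_effect = sample_size_two_proportions(0.10, 0.101)

# Detecting a 10-percentage-point lift needs only a couple hundred:
big_effect = sample_size_two_proportions(0.10, 0.20)
```

The effect size sits in the denominator as a square, so shrinking the shift you want to detect by 100× inflates the required sample roughly 10,000×. That asymmetry is exactly why a big effect can be validated with a handful of interviews while a small one demands production-scale traffic.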

An additional consideration is the danger of the sunk cost fallacy sneaking in later, when you start building something similar to what customers need, but not exactly what they need. It’s hard to pull the plug on things that don’t work because so much has been invested already, even though that cost is gone and only future risk/reward should factor into what to work on next. This also brings us to opportunity cost: time spent working on mediocre solutions is time you can’t spend working on great ones.

Plus, each feature you create introduces maintenance costs and complexity, and the weight of these costs is tough to estimate during development. In my experience, things always end up more complex than they originally seemed.

Measuring customer value

Now, how can we quantify the value a solution provides for customers? A simple yet effective proxy for that is money. Are they willing to pay to get their problem solved, and how much? Answers to these questions could be probed iteratively throughout the development of a new solution.

First, you can simply try to get a feel for it during customer interviews. For example, ask customers to sign up for an upcoming beta right away by sending them a link to a form to fill out. You’ll see whether they’re willing to invest their time and energy in solving this problem. Or maybe they’re already paying for something else that solves it? If you want to be more certain, one option is a smoke test on your landing page with a “Buy now” button that leads to a pre-order form, but then you’ll miss all the juicy extra context you would get during an interview.

Then, incrementally, the more of an actual solution you have built, the more reliable tests you can run to validate its value, increasing each prototype’s fidelity until you eventually have a small group using the actual solution.

At each step, only grow further if the fit is right. Scaling does not make solutions better if they don’t solve the problem in the first place.

Getting BIG IMPACT ideas

Both Toomas and I have seen many attempts to use spreadsheets or even fancier algorithms to evaluate and compare competing ideas. There is no shortage of ideas that could be pursued, yet none of these systems does any better than random guessing, at least in our experience.

Complex business case calculators that use assumptions on top of assumptions to produce an objective quantifiable result are time-consuming to complete, yet they don’t seem to fare better than simple estimations. The assumptions made are just too subjective.

Also, estimating engineering efforts has proved to be prone to errors when there is no common understanding of what the product ideas exactly mean. Engineering and Product just end up comparing apples and oranges.

Why doesn’t it work? Collective decision-making based on difficult-to-compare data, low customer empathy, and misaligned personal values and biases is not a good combination. While the third deserves equal attention, I’ll only touch on the first two here.

Limitations of data and low customer empathy are closely related and, one might argue, point to the same root cause, but let’s start with the first. Quantitative data is usually not very insightful on its own: it answers only very precise questions, which are hard to ask and time-consuming to measure. Overall numbers can be superficial and even misleading. Qualitative data goes deeper into real customer needs. Short descriptions of ideas don’t contain much information in themselves, and their interpretation depends heavily on who is reading them.

It’s tough to process and quantify the customer needs information present in qualitative data. Current AI solutions (e.g., getalign.com) are already scratching the surface of doing this kind of analysis, but the best computation algorithm to process qualitative data is still the human brain.

Of course, you can’t just squeeze human brains in a jar and hook them up to a spreadsheet until they come up with some way to quantify the potential impact of ideas.

What you can do instead is use your brain to talk to your customers. Listen to them. Understand their problems. Learn about their work. Observe them doing their work. Try out their work. Try to teach them how to do their work. Develop that empathy and then you’ll know which problems are big enough to be worth solving and what solutions could solve their pains. That’s the only way I know that works.

With this empathy, you can start nurturing your ideas from scratch. It’s good to grow ideas from the very beginning so that you can really take your time to understand the customer needs around them; otherwise, it’s easy to start building on assumptions and copying other companies’ experiments for no deeper reason.

Work with a cross-functional, multi-talented team, free of the misaligned silos usually present in bigger orgs. Do product discovery and delivery with the whole team focused on one big problem at a time, working iteratively, moving fast, designing the solutions together and strengthening customer empathy together.

In a company with an existing product, you can use existing research to triangulate your vision and customer understanding, checking whether your thinking aligns with what you already know through long-term tracking such as NPS, churn, exit surveys and Voice-of-Customer initiatives.

So, what’s the methodology then?

Always start with a customer in mind and get to know them deeply. Qualitative data usually provides much better insights, while the quantitative side can serve as a sanity check rather than the driving force.

Iterate with small steps: writing code and building the real thing should be the last resort. Incrementally build stronger validation for the solution, but keep it simple and preferably tied to how much customers are willing to pay.

For more concrete examples and theoretical wisdom on how to build things that matter while minimizing waste, read the legendary book The Lean Startup and the rest of the Lean series.


Curious soul, working as a Data Engineer at Pipedrive