Navigating the Path to Informed Product Decisions — Part 1

PASS3 Development Team

by Aji Kisworo Mukti
Monday 09 October 2023

Product development isn’t always straightforward. It often involves many tests, tries, mistakes, and do-overs. At the center of this process is “experimentation”, which helps teams come up with new ideas and handle market challenges.

By using experimentation, companies can quickly check if an idea works. It also helps teams learn and adapt, making sure their products are even better than what customers expect. Every company experiments in its own way and for its own reasons. For us at PASS3, we experiment to learn, not just to prove something or improve numbers.

In this article, we’ll cover the Why and What of our experiments. We’ll explain our reasons and the kinds of tests we run at PASS3. We’ll dive into the How and Who in a later article, so stay tuned!

Beginning with Why

When we talk about testing new ideas in products, it’s often about numbers, about growing metrics. We want more clicks, more users, or more sales. That’s important, but it’s not the whole picture.

“We need more people to click…”

With this mindset, we might make decisions that only boost numbers without really helping our product. Imagine putting up a big ad that people can’t easily close to get more clicks. That’s not really solving a user problem; it’s just trying to get a quick win.

Instead, we should always start by asking: Why are we doing this? What value does it bring to our users? When we base our decisions on these questions, we come up with better and more meaningful tests.

The real aim of these tests is to learn. To understand our users better. If a test leads to a better product, great! If it doesn’t, it’s still a win if we learn from it. An unsuccessful experiment isn’t one that fails to improve the product. It’s one from which we learn nothing.

A memorable quote by Jim Barksdale, former CEO of Netscape, underscores this sentiment:

“If we have data, let’s look at the data. If all we have are opinions, let’s go with mine.”

When discussing data, the focus shifts to how we measure our tests and how these measurements inform our decisions. Essentially, these tests serve as a pathway for us to draw conclusions grounded in data. However, a word of caution is needed. At PASS3, we strive for a balanced approach rather than being purely data-driven.

While data might show a rise in conversion rates, it doesn’t automatically translate to our product being the best iteration possible. Numerous factors influence user decisions, from shifting market dynamics to external events. It’s crucial to interpret data in context and not see it as the sole determinant.

Once we’ve effectively fine-tuned our iterations, we are closer to genuine innovation. Innovation isn’t a spontaneous burst; it’s the outcome of consistent iterations, numerous trials and errors, and systematic experimentation. It’s a gradual process, building on each small step. Nothing transformative occurs overnight.

Using experimentation within our iterations is a strategic move to prevent us from big mistakes. Before diving into a massive project, investing extensive resources, money, and time, it’s wise to test a scaled-down version. By observing the outcomes and making decisions rooted in data, we position ourselves better. It’s essential to remain open, learn, and be ready to adjust our direction based on the feedback we gather. For us, innovation means making these small, steady changes.

The Method We Choose

There are many ways to try out new ideas for products:

  • Competitor Analysis: see how our product stands against others
  • User Interviews: through surveys or direct conversations with users
  • A/B Testing: test different versions and see which one works best
  • Cohort Analysis: like A/B Testing, but grouping users by shared traits, such as when they joined (time) or where they’re from (location)
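To make the last item concrete, here is a minimal sketch of a time-based cohort split. The user records and the conversion metric are made up for illustration; they are not PASS3 data.

```python
from collections import defaultdict

# Hypothetical user records: (user_id, signup_month, converted)
users = [
    ("u1", "2023-08", True),
    ("u2", "2023-08", False),
    ("u3", "2023-09", True),
    ("u4", "2023-09", True),
]

def conversion_by_cohort(records):
    """Group users into time-based cohorts and compute conversion per cohort."""
    cohorts = defaultdict(list)
    for _, month, converted in records:
        cohorts[month].append(converted)
    return {month: sum(flags) / len(flags) for month, flags in sorted(cohorts.items())}

print(conversion_by_cohort(users))  # {'2023-08': 0.5, '2023-09': 1.0}
```

The same grouping works for location-based cohorts by swapping the signup month for a region field.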

We use A/B Testing. We want everyone, from engineers to marketers, to think about trying new ideas, making decisions based on data, and doing it cost-effectively.

Even though it’s called A/B Testing, we test more than just two versions. We split our users into groups, give each group a different version, and test them all at once. This helps us get accurate results. However, A/B Testing can be tricky since it requires special tools and setup.
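One common way to split users into stable groups is deterministic hashing: the same user always lands in the same bucket for a given experiment, which keeps results consistent across sessions. The sketch below illustrates the idea; the variant names and experiment key are hypothetical, not PASS3’s actual setup.

```python
import hashlib

# More than two versions can be tested at once, as described above.
VARIANTS = ["control", "variant_a", "variant_b"]

def assign_variant(user_id: str, experiment: str, variants=VARIANTS) -> str:
    """Deterministically assign a user to a variant.

    Hashing (experiment name + user_id) gives every user a stable bucket,
    so the same user always sees the same version of the test.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: split three users across the variants of one experiment.
for uid in ["user-1", "user-2", "user-3"]:
    print(uid, assign_variant(uid, "homepage-cta"))
```

In practice, a dedicated experimentation tool handles this assignment plus exposure logging and statistics, which is part of the setup cost mentioned above.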

In the next article, we’ll share a real example of how we run these tests and who is involved. We’ll look at an idea we had:

We should be bringing value back to our users, by rewarding users

See you next time!
