Happy new quarter season

Stephanie Le Geyt
Attest Product & Technology
7 min read · Jan 10, 2020

How Attest uses Attest to guide product prioritisation

Our last few weeks before Xmas

At Attest, our users are anyone in an organisation who wants to directly understand consumers. We are building a survey product to democratise consumer insights.

As a lot of other product teams will have been doing in the run-up to the new year, we launched new OKRs and product focus areas. Part of our challenge was to fully synthesise data and feedback from across our customer base and organisation to understand which problems to tackle first.

I only joined the team at the end of November, in our run-up to planning, so I had to build relationships with stakeholders and the tech team, upskill on the product and get my head around the subtleties of customer use cases and user segments. All fun and games 🙃

In this article I’ll briefly go through:

  1. Our techniques to prioritise
  2. Roadmap vs. experiments + the growing pains of rolling this thinking out

TECHNIQUES TO UNDERSTAND KEY PROBLEMS

I’ve joined as a product manager in our newly formed engagement tribe, so my initial job is to understand the impediments to customer value and usage.

How I’ve been explaining Value and Usage together to stakeholders

Like many places I’ve worked before, customer feedback and product insight are there, but they’re buried across the organisation and its systems.

We have three different customer-facing teams, all with different incentives, priorities and customer touchpoints. Our CRM isn’t currently brilliantly integrated with our other customer data, so it’s hard to get one source of truth on a single customer, let alone across our whole portfolio. But the knowledge is there, and there are a lot of people here with a really good grasp of what customers are trying to do, so it was a matter of rolling up our sleeves and being systematic. Here’s how we did it…

  1. Read allllll the feedback and meet as many customers as possible 🤝

This is an ongoing activity as a PM when you’re thinking about priorities, but it’s probably the most important thing you can do as a new starter. I immersed myself in as much already-captured feedback as we had via a designated Slack channel, emails and CRM notes.

Our AMs kindly brought me along to some pre-scheduled meetings, including Quarterly Business Reviews. These gave customers a chance to give me their feedback directly, and gave me a chance to dig into what they were actually trying to do. Naturally, people are inclined to jump to solutions, but digging into the underlying problem is more useful and also makes them feel listened to. I took these opportunities to ask what the product would need to do for them before they’d encourage other colleagues to use it. And when they gave me a list, I’d ask which item was most important.

2. Group it 🏦

While the other PMs and I were gathering feedback, we were also building out an “Ideas bank” in Notion, linking each idea to the specific customer pages where the feedback was captured.

You can see here that themes started to emerge. These themes were essentially “what do our customers want us to do better”. Volume didn’t necessarily mean a theme was the most critical, but it helped us separate features from problems.

3. Force prioritisation with Attest surveys 🤓

From steps 1 + 2 we finalised 11 key themes that we could build hypotheses around to help us increase engagement. Even if we don’t tackle all of them this quarter, these themes will likely stay relevant over the next few quarters as the tech team grows.

We then wanted to run a forced prioritisation exercise with customers and with the stakeholders who manage them. So we grouped the themes into a natural product order and built Attest surveys around them: one for our own teams, one for customers. Think of it like one third of a digital ICE (Impact, Confidence, Effort) model.

One question from the 18-step survey for our team to answer

This was a really useful way to force people to make trade-offs and prioritisation calls. Attest’s dynamic filtering feature also made it very interesting to compare priorities across different role types and organisations.

The product team then reviewed the results across surveys.
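The kind of comparison we were after is easy to reproduce outside the tool as well. Here’s a minimal sketch, not our actual pipeline, of how forced-ranking responses could be cut by role type, assuming the results can be exported as a table of respondent role, theme and rank; the column names, themes and ranks below are all invented for illustration.

```python
import pandas as pd

# Hypothetical export of forced-ranking responses: one row per respondent per theme.
# A lower rank means that respondent placed the theme higher in their priority order.
responses = pd.DataFrame([
    {"respondent_role": "Insights",  "theme": "Results sharing",  "rank": 2},
    {"respondent_role": "Insights",  "theme": "Survey templates", "rank": 1},
    {"respondent_role": "Marketing", "theme": "Results sharing",  "rank": 1},
    {"respondent_role": "Marketing", "theme": "Survey templates", "rank": 3},
])

# Mean rank per theme, split by role type -- a rough stand-in for the kind of
# cut you get from dynamic filtering in the survey results themselves.
by_role = (
    responses
    .groupby(["theme", "respondent_role"])["rank"]
    .mean()
    .unstack("respondent_role")
)

print(by_role)
```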

4. Data validation + ICE prioritisation

We all know that what people say doesn’t necessarily reflect how they behave. There’s recency bias, subjectivity baked into the questions, and plain misunderstanding. However, we think step 3 was a really valuable input that could be augmented by further data points we had access to in Pendo (our awesome analytics tool), Fullstory and our CRM.

Finally, product had a theme prioritisation session using the ICE framework (we even ran it remotely, using Notion + Slack hangouts, as one of our PMs was WFEurope before Xmas!). We gave each theme a Confidence score based on Itamar Gilad’s confidence model here.

We’re definitely not 7–10 level but by speaking to many customers and running surveys we’re in a good place to start testing our hypotheses with experiments and setting ourselves up to increase our confidence in the bets we make.
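For anyone unfamiliar with ICE, the arithmetic behind a session like this is very simple. The sketch below is illustrative only: the theme names and scores are made up, and I’ve combined the three factors as Impact × Confidence ÷ Effort (some teams score “Ease” and multiply instead); the Confidence number is where an evidence-based scale like Itamar Gilad’s feeds in.

```python
# Illustrative ICE scoring -- theme names and numbers are invented, not our real scores.
# Impact and Confidence are 1-10; Effort is a rough size (e.g. person-weeks), so we
# divide by it. Teams that score "Ease" 1-10 instead would multiply all three.
themes = {
    "Results sharing":   {"impact": 8, "confidence": 4, "effort": 5},
    "Survey templates":  {"impact": 6, "confidence": 7, "effort": 2},
    "Onboarding nudges": {"impact": 5, "confidence": 3, "effort": 1},
}

def ice(theme):
    return theme["impact"] * theme["confidence"] / theme["effort"]

# Rank themes by ICE score, highest first
for name, scores in sorted(themes.items(), key=lambda kv: ice(kv[1]), reverse=True):
    print(f"{name}: {ice(scores):.1f}")
```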

Now we have a loosely grouped set of themes we want our squads to experiment around, to see if we can move the right needles.

That leads us nicely to…

Rolling it out: ROADMAP < EXPERIMENTS

I quickly learnt that “Roadmap” is a bit of a dirty word at Attest. Our engineers (like many others) have been burnt in the past by long, planned-out projects that grow and grow, where you feel like you need to keep building the road because you’ve promised it in the unmoveable roadmap. I mean, you can’t just change a map!?

Product people know that roadmaps aren’t set in stone and that they’re a useful tool to give people, inside the business and out, confidence that you’re moving in the right direction. But semantics are important, so we decided to ditch “roadmap” and talk instead about Mission, OKRs and supporting Experiments. Our CEO Jeremy (a marine biologist in a former life) really wants to encourage people to experiment. He’s explicitly called on the company more broadly to increase our failure rate, because it will mean we’re trying more ambitious and creative experiments. With that in mind, we’re framing the product bets we’re making in terms of experiments.

This has been a useful way for people to visualise how it all routes up to the broader Attest vision.

Quarterly OKRs are company-wide, and our squads and experiments are all geared around driving those OKRs. We’re working with squads and business ambassadors to come up with the experiments themselves, to test whether we can fix the problems. So that’s how it’s been communicated internally.

Externally, we know that large customers especially like to have long roadmaps, so this is more of an education piece. We’ve come up with slides for our account managers to go through in their check-ins, setting out our product methodology and focus areas in black and white.

Our customer-facing product methodology

Attest is a product-led company that’s dedicated to adding as much value as we can for our clients. To make this happen, we use an experimental cadence, meaning we use client input and our own research to quickly try out new features with real product users and see how they work. This means that when a feature is rolled out, you know it has been thoroughly tested with the people (like you!) that’ll use it most.

Because of our experimental methodology, development moves really quickly, but our vision and focus areas remain the same.

We use words like “investigating” and “current focus areas” to help people know that we’re actively looking into tests in these areas, without committing to a specific feature upfront, and certainly not to a particular timeline. On the other hand, we will be going back to customers for feedback on prototypes and beta features, so we hope they feel listened to and involved.

Below is a breakdown of our typical users; each group may respond to our product methodology and priorities a little differently. Building a single successful roadmap for such different users is impossible. Building out a series of experiments and seeing which user segment is most affected is much more achievable (and fun).

  1. Traditional insights types — they feed rigorous insights and analysis to other parts of enterprise organisations

2. Marketing managers/heads of brand — want quick access to understand consumer perception and route-to-purchase journeys in their verticals, pre-launch campaign and messaging testing and ongoing brand tracking

3. Product developers (I’ll group together CEOs/Founders, Product people and Innovation managers in this bucket although they have different needs within) — want to validate problems pre-build and understand consumer requirements for solutions.

4. Other: there’s everyone from Heads of Growth to Procurement to Legal to HR using our product to understand their own customers.

It’s part of our product prioritisation maths to understand who our most important users are in terms of revenue and usage. I won’t go into detail here, but groups 1 and 2 are the most important for us commercially at this point.
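I won’t share the real numbers, but as a rough illustration, that kind of “who matters most” maths can be as simple as blending each segment’s share of revenue with its share of usage. Every figure and weight below is invented purely for the example.

```python
# Invented figures: blend revenue share and usage share into one importance score.
REVENUE_WEIGHT, USAGE_WEIGHT = 0.6, 0.4  # how much each signal counts

segments = {
    "Traditional insights types": {"revenue_share": 0.40, "usage_share": 0.35},
    "Marketing / brand":          {"revenue_share": 0.35, "usage_share": 0.30},
    "Product developers":         {"revenue_share": 0.15, "usage_share": 0.25},
    "Other":                      {"revenue_share": 0.10, "usage_share": 0.10},
}

def importance(seg):
    return REVENUE_WEIGHT * seg["revenue_share"] + USAGE_WEIGHT * seg["usage_share"]

# Rank segments by blended importance, highest first
for name, seg in sorted(segments.items(), key=lambda kv: importance(kv[1]), reverse=True):
    print(f"{name}: {importance(seg):.2f}")
```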

So, that’s that! We have the usual week or so of hangover projects from last quarter before we officially reorg around these prioritised areas and get started with some rapid-fire experiments. Hopefully we’ll be moving the needle on some and failing fast with others, all the while building up confidence in our product bets. Wish us luck!

ps — we’re hiring


Head of Product@Attest via @Nested @MarketInvoice Londoner, carer, gif lover, techphile, snow boarder, pub goer, reader of things, face-palmer extraordinaire.