Changing The Way We Listen To Product Feedback — The Outcome

Part 2 of 2: A case study of the Customer Insights Project that I ran to improve our feedback processes in our products.

Cody Lindsay Gordon
Bootcamp
6 min read · May 15, 2023

--

At Rex I had identified some key problem areas in the way we collected and utilised customer feedback, and saw opportunities to improve not only our design and research processes but also our relationship with our customers. This part covers the (again abbreviated) outcomes of the project.

⬅️ Read Part 1 — The Proposal here

The Anatomy Lesson of Dr Nicolaes Tulp, Rembrandt, 1632

Collecting Customer Sentiment & Feedback

Objective 1: Continuously measure the overall customer experience and collect general feedback

Rex CRM continuously collects a value score from customers via an in-app survey, which asks them to answer, on a scale of 1–5, “How valuable is Rex to you?”, with the option to submit freeform feedback.

Customers are surveyed no more than once every 90 days, resulting in a continuous stream of scores and feedback coming in via this survey.
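As a rough sketch of how that 90-day throttle might work (the `last_surveyed_at` field here is hypothetical, not Rex’s actual schema):

```python
from datetime import datetime, timedelta

SURVEY_COOLDOWN = timedelta(days=90)

def is_eligible_for_survey(user: dict, now: datetime | None = None) -> bool:
    """True if the user hasn't seen the value survey in the last 90 days.
    `last_surveyed_at` is a hypothetical field, not Rex's actual schema."""
    now = now or datetime.utcnow()
    last = user.get("last_surveyed_at")
    return last is None or (now - last) >= SURVEY_COOLDOWN
```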

Value scores are tracked over time in the dashboard below, which allows the score to be segmented by region and role.

A screenshot of a dashboard showing various graphs and scores. An overall value score is broken down across different time periods, regions, and user segments.
The Rex CRM value score dashboard

The value score has been used as a key result in Rex CRM in a few ways (a scoring sketch follows this list):

  • Increasing the overall value score for the quarter for UK customers
  • Decreasing the proportion of detractors (a score of 1–3) to x%
  • Increasing the proportion of promoters (a score of 5) to x%
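As an illustration of how the detractor and promoter results above could be computed from raw survey responses (a conceptual sketch, not Rex’s actual reporting code):

```python
def score_breakdown(scores: list[int]) -> dict[str, float]:
    """Compute detractor/promoter shares from 1-5 value scores.
    Detractors score 1-3; promoters score 5 (per the definitions above)."""
    total = len(scores)
    detractors = sum(1 for s in scores if s <= 3)
    promoters = sum(1 for s in scores if s == 5)
    return {
        "detractor_pct": 100 * detractors / total,
        "promoter_pct": 100 * promoters / total,
    }

# Example: ten responses from the in-app survey
print(score_breakdown([5, 4, 3, 5, 2, 5, 4, 1, 5, 4]))
# {'detractor_pct': 30.0, 'promoter_pct': 40.0}
```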

Feedback submitted along with the survey is also captured. This is a good source of general product feedback on whatever area the customer would most like to see improved. More on how this feedback is captured and used in Milestone 3.

This method was extended to the Customer Care team to help measure satisfaction and gather feedback from new users about our onboarding processes.

Post-onboarding surveys are sent to key contacts by the success manager shortly after onboarding is completed. Responses are logged in Slack and tracked over time in the dashboard below.

A screenshot of a dashboard showing NPS scores and feedback related to the onboarding process.
The Rex CRM onboarding dashboard

In addition, a Slack bot was created to streamline churn updates and to capture controlled churn reasons as product feedback.

Customer Care members can trigger the Churn Bot in Slack; it fetches data for the churned account from HubSpot, posts an update in Slack, and then logs the controlled churn feedback in our feedback database.
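The bot’s internals weren’t published, but a minimal sketch of the flow might look like this, assuming a Slack slash command, Slack’s incoming webhooks, HubSpot’s CRM v3 companies API, and a hypothetical `churn_reason` property and `log_feedback` helper:

```python
import os
import requests
from flask import Flask, request

app = Flask(__name__)
HUBSPOT_TOKEN = os.environ["HUBSPOT_TOKEN"]
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

def log_feedback(source: str, account: str, body: str) -> None:
    """Hypothetical helper: push the churn reason into the feedback database."""
    ...

@app.route("/slack/churn-bot", methods=["POST"])
def churn_bot():
    # Slack slash commands arrive as form data; the command text carries
    # the account ID, e.g. `/churnbot 12345`.
    account_id = request.form["text"].strip()

    # Fetch the churned account from HubSpot's CRM v3 companies API.
    # `churn_reason` is a hypothetical custom property.
    resp = requests.get(
        f"https://api.hubapi.com/crm/v3/objects/companies/{account_id}",
        headers={"Authorization": f"Bearer {HUBSPOT_TOKEN}"},
        params={"properties": "name,churn_reason"},
    )
    props = resp.json()["properties"]

    # Post the churn update to the team channel via an incoming webhook.
    requests.post(SLACK_WEBHOOK_URL, json={
        "text": f"*{props['name']}* has churned.\nReason: {props['churn_reason']}",
    })

    # Log the controlled churn reason as product feedback.
    log_feedback("controlled_churn", props["name"], props["churn_reason"])
    return "", 200
```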

Objective 2: Evaluate the success of projects by measuring satisfaction and collecting targeted feedback

Rex CRM uses targeted feedback surveys to measure the success of a feature post-launch, as well as to gather feedback during the design process.

An example of success measurement is the Leads Improvement Project surveys, which collected an ease-of-use score before, during, and after the Leads improvements were released. The ease-of-use score increased from 3.2 to 4.2, showing the changes were working, though there was still room to improve.

Information gathering surveys are also used often during the design process. For example, the Leads surveys also included questions allowing users to rate the importance of certain actions when processing their leads, which helped to prioritise these improvements.

Another example is sending in-app surveys showing a Figma prototype of potential Reporting Dashboard improvements to users who were engaging with the existing dashboard. This targeted the most relevant users and allowed the UK dashboard improvements to be validated asynchronously.

A Figma prototype with an embedded survey was sent to users that engaged with the existing dashboard

Spoke has used feedback surveys to gauge satisfaction of different user segments, measure the success of the branding update, and assess prototypes during design.

The product-market fit surveys were sent to both using agents and non-using agents, collecting a general satisfaction score from each group along with their most valued features. The results revealed that non-using agents were far less satisfied and were often unaware of the value provided, which led to projects aimed at increasing engagement from this segment.

52% of non-using agents would not be disappointed if they could no longer use Spoke — compared to 13% of using agents.

Results of a brand perception survey

Measures of Success

The measures of success for this milestone were:

  • Product teams are continuously measuring the overall customer experience (objective 1) with an average response rate of at least 12% for in-app surveys and 2.5% for email surveys
  • Product teams measure customer satisfaction of a specific feature in-app (objective 2) at least five times, with a response rate of at least 15% (measured over a three-month period)

One True Customer Feedback Source

Objective 1: Establish a platform to house customer feedback

Productboard was determined to be the most suitable platform, and a large amount of historical feedback was imported and categorised using the feedback taxonomy outlined below.

Feedback logged in Productboard is automatically shared in Slack, giving people outside the product team visibility of the feedback coming in.
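A minimal sketch of this pipeline, assuming Productboard’s public Notes API (check the current docs for exact field names) and a standard Slack incoming webhook:

```python
import os
import requests

PRODUCTBOARD_TOKEN = os.environ["PRODUCTBOARD_TOKEN"]
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

def log_note(title: str, content: str, customer_email: str, tags: list[str]) -> None:
    """Create a feedback note in Productboard, then mirror it into Slack."""
    requests.post(
        "https://api.productboard.com/notes",
        headers={
            "Authorization": f"Bearer {PRODUCTBOARD_TOKEN}",
            "X-Version": "1",
        },
        json={
            "title": title,
            "content": content,
            "user": {"email": customer_email},
            "tags": tags,  # taxonomy categories, e.g. ["reporting", "uk"]
        },
    )
    # Mirror into Slack so teams outside product see feedback as it arrives.
    requests.post(SLACK_WEBHOOK_URL, json={
        "text": f":memo: New feedback: *{title}*\n{content}",
    })
```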

Objective 2: Establish a feedback taxonomy

The purpose of the taxonomy was to prevent the feedback database from quickly becoming overloaded and unusable. The taxonomy supports categorisation, search, and analysis of feedback by the product team. I’ve written this up as a separate article here.

Objective 3: Funnel existing customer feedback sources into this platform

All existing sources, as well as some new ones, were connected to Productboard so that feedback collection is largely automated. This was achieved through a series of Zaps, Slack integrations, and webhooks. Some of the sources were (each is normalised into a common shape; see the sketch after this list):

  • Continuous in-app surveys
  • Controlled churns (via Slack)
  • UserVoice suggestions (above a threshold of votes or comments)
  • Responses to release announcements
  • Feedback shared ad hoc in Slack or via a Chrome extension
  • Zendesk tickets (manually selected)
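Under the hood, each source needs an adapter that maps its payload into that common shape before logging. A sketch, with illustrative field names (the real payloads differ):

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    """Common shape every source is normalised into before logging.
    Field names are illustrative, not Rex's actual schema."""
    title: str
    content: str
    customer_email: str
    tags: list[str]

def from_uservoice(suggestion: dict) -> Feedback:
    """Adapter for a UserVoice suggestion payload (simplified)."""
    return Feedback(
        title=suggestion["title"],
        content=suggestion["text"],
        customer_email=suggestion["creator"]["email"],
        tags=["uservoice"],
    )

def from_survey(response: dict) -> Feedback:
    """Adapter for a continuous in-app survey response (simplified)."""
    return Feedback(
        title=f"Value survey ({response['score']}/5)",
        content=response["comment"],
        customer_email=response["email"],
        tags=["value-survey"],
    )

# Each adapter feeds the same log_note() helper sketched earlier:
fb = from_survey({"score": 2, "comment": "Reporting is slow", "email": "agent@example.com"})
# log_note(fb.title, fb.content, fb.customer_email, fb.tags)
```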

Objective 4: Establish a process for the broader team to submit feedback

The Customer Care team felt that the product feedback they were passing on was not being heard and would not have an impact on the product. Several measures were implemented to encourage them to collect and share more customer feedback and to engage more in the product feedback process:

  • Feedback is now stored and weighted in Productboard, so issues that are logged more often rise up the ‘user impact score’ ranking over time and are more likely to be addressed by the product team, as they represent greater value (see the sketch below)
  • To streamline submission, Customer Care members now have more ways to share feedback (via Slack, Zendesk, or a Chrome extension)
  • Success Managers have training requests (that come in via other feedback sources) referred to them in Slack

Over time, feedback pushes the most requested improvements to the top of this list
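Productboard calculates this score internally, but conceptually it behaves like a weighted tally per feature. A toy illustration only:

```python
from collections import Counter

def user_impact_scores(notes: list[dict]) -> list[tuple[str, int]]:
    """Conceptual illustration of a user-impact ranking: each note linked
    to a feature adds its importance weight, and features are ranked by
    total weight. (Productboard's real scoring is internal to the product.)"""
    totals: Counter[str] = Counter()
    for note in notes:
        totals[note["feature"]] += note.get("importance", 1)
    return totals.most_common()

notes = [
    {"feature": "Reporting dashboard", "importance": 3},
    {"feature": "Lead processing", "importance": 2},
    {"feature": "Reporting dashboard", "importance": 1},
]
print(user_impact_scores(notes))
# [('Reporting dashboard', 4), ('Lead processing', 2)]
```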

Measures of Success

The measures of success for this milestone were:

  • All existing sources of Rex customer feedback are feeding into the platform
  • Feedback is being submitted at least five times per week by Rex operational teams (measured over a three-month period)

--

Cody Lindsay Gordon

Australian product designer & coder with over a decade of experience in user‑centered design practices → https://clg.name