5 Lessons from Gathering User Feedback

Some obvious and not-so-obvious lessons

Tom Fernandes
6 min read · Aug 27, 2018

At The Prince’s Trust, we’re very fortunate to always be working in close proximity to our main user base — i.e. young people. The head office often hosts training sessions and we have centres across the UK where our programmes take place.

We should be using this proximity as a chance to collect user feedback. It doesn't guarantee that we will, though, and, as I have argued in the past, larger organisations can unwittingly drift away from the needs of their beneficiaries. However, I'm determined to make the most of these opportunities. Gathering user feedback is part of my job, after all.*

I recently arranged two ten-minute sessions with separate groups, where participants were asked to annotate screenshots of the desktop and mobile homepage. The sessions were kept short so as not to disrupt the flow of the programme courses. Although it would have been great to show participants the live website on a device, asking them to record their thoughts on paper allowed me to type up the feedback afterwards.

Here are some lessons learnt about the process itself…

Lesson #1 — Be careful about not just what you ask participants, but also how you ask

I asked participants to annotate the paper as they pleased — focusing on what stands out as particularly good or what looks particularly bad. For one group, I used the analogy of a schoolteacher grading work, emphasising that there were no wrong answers and that they should primarily focus on the bad elements of the page.

The intention was to invite honest feedback from groups that might be less willing to criticise The Trust, given that they were benefitting from one of our courses. In reality, this exposed my own desire to change the website when I should have come across as a neutral presence in the room. One participant said it sounded like a trick question. Fortunately, that didn't stop them from providing both positive and negative feedback, and the participant even suggested switching to a more open question: “How can we improve the website & homepage?”

It would be better, then, to start with a broad scope and only narrow it down if groups struggle to pick anything out. In hindsight, it was the way I asked the question, more than what I asked, that served as an obstacle to generating real insight.

This leads nicely onto my next lesson…

Lesson #2 — Don’t take for granted the amount of prep feedback sessions need

“By failing to prepare, you are preparing to fail.” — Benjamin Franklin

The first group experienced the best of me. I was dutifully early, had introduced myself to the group earlier in the day so they knew who I was, and came with sweets. Mentally, too, I was focused and ready to deliver instructions succinctly.

The second group experienced the worst of me. My mind was focused on my emails, I came into the group without an introduction or prior explanation of what I was doing, and alas, I had no sweets!

The responses from each group were accordingly different.

The first group understood my instructions and were responsive from the get-go.

The second group were less than enthused by my scrambled introduction and took a couple of minutes to settle into the session. Given that the session was only ten minutes long, that really did make a big impact.

Taking even five to ten minutes before a session to decompress from the rest of my workload, and arranging an introduction earlier in the day, would have yielded a responsive group from the start.

Even such a simple exercise required a lot of prep!

Lesson #3 — Use the right tools to write up feedback

Source: https://zapier.com/blog/how-trello-uses-trello/

We’re currently exploring which tools are right for us as an organisation, with an internal schism emerging between Microsoft Planner and Trello. Although I lean towards the latter, Microsoft’s offering benefits from being part of our Office suite and is the “incumbent choice”. In the spirit of integration, I opted to type up feedback from the first group on Planner.

Each participant had their own column for feedback and labels were used to categorise their comments according to elements such as “Hero CTA” or “General comments”. Filtering would then allow me to identify clusters and trends.

Unfortunately, Planner only supports a limited number of categories.

I also kept one board with the “raw” feedback and copied comments over to another board where I rearranged them into columns based on themes (i.e. slides for a deck).

Unfortunately, Planner doesn’t let you copy whole boards, so I had to switch between tabs and copy comments over individually.

Using Trello the second time saved me at least 30 mins.

Lesson #4 — Focus on the clusters and ignore the outliers… but don’t discard them completely

There were some outlier comments that skewed positively and negatively, or were simply ideas that no one else thought about. Naturally they stood out, but it was important to focus on the thematic clusters. These will allow us to prioritise areas of improvement on the homepage (when supplemented with data) and I’ve kept the “raw” feedback boards so we can always track back to outlier comments.
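The cluster-versus-outlier split can be sketched programmatically. This is a minimal illustration rather than how the boards were actually processed: the theme labels echo the session (e.g. “Hero CTA”), but the comments, the helper name and the threshold are all invented for the example.

```python
from collections import defaultdict

# Hypothetical sample of typed-up comments, each tagged with a theme label.
# The labels mirror those used in the sessions; the comments are invented.
comments = [
    ("Hero CTA", "Not clear what the button does"),
    ("Hero CTA", "Button text is vague"),
    ("Hero CTA", "Didn't notice the button at all"),
    ("Navigation", "Menu is easy to find"),
    ("General comments", "Page feels cluttered"),
]

def split_clusters_outliers(comments, min_cluster_size=2):
    """Group comments by theme; themes with fewer than
    min_cluster_size comments are treated as outliers."""
    themes = defaultdict(list)
    for theme, text in comments:
        themes[theme].append(text)
    clusters = {t: c for t, c in themes.items() if len(c) >= min_cluster_size}
    outliers = {t: c for t, c in themes.items() if len(c) < min_cluster_size}
    return clusters, outliers

clusters, outliers = split_clusters_outliers(comments)
```

Keeping the outliers dictionary around, rather than discarding it, mirrors the point above: clusters drive prioritisation, but the raw one-off comments stay traceable.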

These outlier comments have already proved useful in discussions with our designer and they have also pushed me to think about why certain page elements weren’t remarked on. For instance, our Hero CTA attracted some criticism — but some people didn’t comment because they didn’t understand what the button was supposed to do in the first place.

Lesson #5 — Present feedback in an appropriate way, balancing time commitments and the need to be engaging

The Trust is currently undergoing a digital transformation, so feedback sessions can generate useful artefacts for that process. I’ve created two decks, covering desktop and mobile feedback respectively. This took time, but will be worth the effort when sharing user feedback across the organisation (hitherto an alien process for The Trust). The second deck also used the first as a template, which saved time.

Each deck contained actual comments from participants (rather than just my conclusions) and incorporated some data analysis to supplement findings. The aim is to support our work with both data and user-driven insight — rather than simply relying on one or the other.

The feedback has also been referenced in a much more boring (but just as important) document that outlines a problem statement and goals for our UX designer to work from. The document is based on a work-in-progress template I’ve been developing and should hopefully tie feedback and development more closely together.** This way, feedback serves numerous functions and isn’t simply an excuse to put together some slides.

On that note, if you’d like to read the actual findings, feel free to scroll through the embedded decks above.

*We actually have a user researcher who I consulted before embarking on this exercise. Whilst it would be great to have a whole team of researchers, we’re a charity and I’m also a firm believer that every part of a team should join the party when it comes to getting user feedback — not just one user research function.

**I’ll probably blog about this at some point in the future!

I previously blogged about a new vision for The Prince’s Trust website here:
