3 lessons from 6 months

Misaki Hata
Mar 13

After working for NHS Digital for 6 months, I have been reflecting upon the months gone by, the methods I have tested, their shortfalls (and mine!) and the things I have learnt so far.

My first placement was in the NHS digital service manual team as a user research graduate. Soon after joining the team, I realised that the user researcher on my team was leaving imminently and that I would be filling in until a new user researcher was hired. I felt a mixture of emotions, ranging from annoyance and mild anxiety to excitement at the opportunity to do the work myself.

I want to talk about three rounds of user research I did, because I found them to be particularly good learning experiences. I debated what to focus my first post on, but ultimately decided to concentrate on the processes and methods I tried, in the hope that it will be helpful to someone, in the same way that many method-orientated UX blog posts and articles have been, and continue to be, helpful to me.

Lesson 1: Testing is a process. The hypothesis is the guide.

In my second week of work I was asked to conduct 8 usability test sessions on a new section on forms in the content style guide, called “how to write good questions”.

Method: With help from the rest of my team, I adapted a discussion guide that the previous user researcher had left behind. As my participants were based around the country, I conducted the usability tests remotely, asking participants to share their screens on Microsoft Teams. I asked two other graduates to take consistent notes throughout the sessions, whilst the rest of the team dialled in to observe as and when they could. The sessions were recorded so I could make notes later. I organised eight 60-minute test sessions over a day and a half.

The problem: The test sessions themselves went smoothly, but when it came to pulling together and analysing the observers’ notes, the other graduates and I found ourselves completely stuck on how to proceed. We sat in virtual silence, trying to combine notes that were in different formats: one set on post-it notes, another typed in a Word document, and the third in the video recordings. I felt disappointed that, for all the hard work that had gone into moderating and conducting the tests, I could not pull the notes together into any meaningful analysis. It was only thanks to the experience of the content designer (Sara Wilcox) and the interaction designer (Dave Hunter) on the team that we were able to salvage some insight from the scattered notes and make meaningful changes to the pages we had tested.

How I fixed it: I knew my first round of testing had been a failure, so I started looking into why it had stalled and how I could improve next time. What I lacked was a process: something that would structure the testing and the observers’ note-taking in such a way that analysis would become easier.

First, I reread the guidance on user research in the GOV.UK Service Manual, followed by posts I found on the GDS blog, Medium, the Nielsen Norman Group, corporate blogs from companies like Spotify, Netflix and Monzo, as well as from people I found on Twitter. (I could carry on listing all the wonderful resources available online, but you can Google them yourself.)

Following this, I asked the other NHS Digital user researchers based in London to spare some of their busy time to meet me individually and take me through the methods and tricks they used to streamline their tests.

There were many useful tips in those conversations: Lauren assigns her observers specific things to record; Joe affinity sorts his post-it notes on a large roll of brown paper so he can easily store and move them; and Nancy puts up posters on how to take notes during tests, as visual cues for the observers.

Ana, a senior user researcher who took me under her wing, showed me how she had structured the research for her current project. She stressed the importance of having complete clarity on the research hypotheses before starting, as everything else depended on them. The research hypotheses would inform the method of testing, which would in turn inform the discussion guide, which would then become the guide for the observers’ note-taking, before becoming the foundation of the analysis.

Ana supplied her observers with an “observation notes” document that set out the hypotheses; observers filled in a table with notes that related directly to each hypothesis. Observers would also take notes on post-it notes.

When the testing had finished, Ana collated the information from the observation notes document and the post-it notes into a hypothesis table she had made, and compiled all other issues into another table with their corresponding evidence.
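To make that concrete, here is a rough sketch of what such a hypothesis table might look like. (This is my own illustrative example, not a copy of Ana’s actual document.)

Hypothesis: the thing you believe, e.g. “users understand who the guidance is for”
Evidence for: observer notes that support it, tagged by participant (P1, P4 and so on)
Evidence against: observer notes that contradict it
Outcome: supported / not supported / needs more testing, plus the design action to take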

I found this inspirational because Ana’s method was very thorough and made sure the research always stayed focused on the hypotheses, ensuring that the questions which had prompted the testing in the first place were properly answered. Whilst I was not quite confident that I had enough brain power to pull off Ana’s method in full (not yet, anyway!), I took the bones of her rigorous methodology: making sure the hypotheses run through every aspect of the user research session.

What I learned: User research is not just about the test sessions; it also involves careful preparation before the tests and organising the note-takers to ensure a smooth and effective analysis afterwards. The hypotheses are the guiding principle that can shape and structure everything else.

Lesson 2: Communicating findings means giving evidence-based recommendations.

Shortly after my first usability test analysis ended, I was tasked with running an online and offline card sort as part of the research towards redesigning the service manual.

Method: Based on the advice I had received after the previous round of testing, I made sure to start with some strong hypotheses to test, which I created with Dave and Sara. I created an observation guidance sheet, which designated specific roles to the observers and unified how they would write on the post-it notes. We created a discussion guide based on our hypotheses and conducted our card sort. Afterwards, we affinity sorted the post-it notes in person on a wall, organised around our hypotheses, and discussed the key themes that emerged. I then added the data from the online card sort we had sent out using Optimal Workshop, put everything I found into a lengthy report, and made a PowerPoint presentation which I showed to Dave.

The problem: Whilst the preparation, testing and analysis had gone well this time, Dave wasn’t raving about my PowerPoint, and he asked me to think carefully about what I was trying to communicate. It was clear I was missing something crucial, but I wasn’t sure what.

How I fixed it: I went off and scoured the internet for more blog posts and articles about presenting findings, I attended an introductory course on user research at GDS, and (courtesy of a friendly developer, Adam Chrimes) I met the user researcher who used to work in my team to go through my work and get a critique. I showed her my sad PowerPoint presentation, and thanks to her I realised why it had flopped: I had simply presented the data, without giving any research-driven design recommendations. It was a penny-drop moment when I realised that evidence-based recommendations are what allow action, whether that action is agreement or a discussion about why people disagree.

I returned to Dave, this time with a far uglier but more practical Google spreadsheet that included a column for my recommendations, which we went through together, agreeing some concrete design actions.

What I learned: Communicating the results is just as important as conducting the tests and analysing them. The best method of communicating prioritises the needs of the people I’m communicating with over how pretty my slides are. It was important to think carefully about what information Dave, as a designer, needed to do his job. Designers and team members may not need all the information, but they do need actionable recommendations backed by evidence from the tests, which they can either act upon or use as the basis of a lively debate.

Lesson 3: I need to keep iterating my methods.

The last round of research I did for the service manual team was usability testing the redesigned service manual site. This is probably a good time to mention that I had also been learning about design and prototyping from Dave over the course of my first placement.

Method: We decided to try out Andrew Duckworth’s hypothesis-driven design method to guide how we redesigned the service manual. We then used the same design hypotheses as the basis of our research questions, observation notes and analysis sessions. Our test sessions would be remote again, conducted on Microsoft Teams, and observers were assigned specific roles, focusing on either what the participant said or how they behaved. Around this time, between finishing the prototype and starting to plan the tests, a new user researcher called Amy started in the team. I drew up a plan, ran it past Amy, and together we revamped it. At her suggestion, we moved our note-taking from post-it notes to a digital method using a Trello board. We took turns moderating, and each test session was followed by a short break in which the moderator and observers discussed their top 3 interesting things (a method used to make sure observers were paying attention, as well as to capture insights straight away). Every day of testing ended with a 45-minute debrief with team members who had not attended the tests, to keep them up to date with how the redesign was performing. The day after testing, Amy and I sorted through the Trello board before putting everything together into a report.
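I can’t share our actual board, but a set-up along these lines would work: one Trello list per research hypothesis, plus lists for “other observations” and “questions to follow up”. Each observer note becomes its own card, labelled with the participant number, so during analysis the cards can be dragged between lists and grouped, much like post-it notes on a wall.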

The success: Testing was a success, not only because the redesigned service manual tested really well, but because the whole process ran efficiently.

As far as I understand it, extending Andrew’s design hypotheses into research hypotheses helped ensure we were testing our service thoroughly.

It was wonderful to work with a user researcher who had 6 years of experience. I could bounce ideas off her, and she could tell me from past experience whether an idea would give results worth the time and effort. Although I was satisfied with my original research plan, Amy’s input turned it into a better, more streamlined process.

What I learned: Just like the products and services we iterate, the methods we use need constant feedback, challenge and iteration. By combining ideas with Amy, our research became better. Over the last months, I’ve noticed that every user researcher has their own tricks and methods, but they are also constantly learning and testing new strategies, because every scenario requires some tweaking. Learning from different user researchers has made me aware of the endless resources available, both in people’s brains and online. Even senior user researchers seem to be forever looking for new techniques to try. I hope that over the years and decades I may spend in this line of work, I will continue to actively absorb, test, fail at and make a success of different ways of working.

To summarise…

  1. Testing is about 10% of the work user researchers do. User research includes planning, organising observers, logistics, testing, gathering data, sorting data, analysing and communicating that information to the rest of the team in a way that is actionable.
  2. Hypotheses are and should always be at the core of research and design.
  3. By absorbing all the different knowledge that people are willing to share, I can learn new methods and make my research better.

I look back at the last 6 months and wonder whether, if I had had a user research mentor from the start, I could have learnt these seemingly obvious lessons more quickly and easily, without those unpleasant feelings of frustration and incompetence. Yet I can also acknowledge that these lessons have been burnt into my brain because of the hours I spent muddling through, trialling and failing. If nothing else, I have started to understand how to think and how to learn in this line of work.

And finally, a word of thanks to the community around me, which has been kind and supportive. I have relied heavily upon the service manual team, Alex and Jack (my grad colleagues), Dave (my design mentor), Matt, and the user researchers in London who have so kindly given me their time and patience. Whilst over the next few months I will still very much be more reliant than reliable, I hope I can eventually be a helpful participant in this community of perpetual learning.

