User research helps design and product teams understand their users, informing the decisions they make when creating software. Some research methods involve understanding users within their real context — looking at the behaviour of people in the real world and using this to inform or inspire decisions around features or presentation. This helps teams ensure that their software will be useful to real people.
Other research methods help evaluate the decisions that have gone into the things we’re making, by putting people through pre-defined tasks and observing whether they can use the software as intended. These sessions highlight gaps between the experience we hope we’re building for our users and the one we’ve actually built, and are extremely helpful as part of an iterative development process, ensuring that our software will be usable for real people.
Here at Reach we’ve been building our capability to run this sort of evaluative research and have been equipping our UX lab — a bespoke room which is set up to allow this kind of research to happen. In this post, I wanted to describe the technical setup that has gone into our UX lab and what it’ll allow us to do.
Why do we need a UX lab?
When making websites and apps, we often find that our initial assumptions about what users will understand or do can be wrong — our own thinking is influenced by being around these products every day, but our users have an entirely different understanding or motivation when using our websites and apps. Exposing real users to our software, and making changes based on that, becomes an incredibly valuable part of the design process — by allowing our teams to see the gaps between their assumptions and real user behaviour early, they can react before those gaps incur significant financial or reputational damage, or the opportunity cost of not doing something better.
A UX lab gives us the technical setup required to run this kind of research. The technical setup for a room like this can be reasonably complex, and if we were to rebuild it for each session it would introduce a large time overhead when running research. For that reason, we’ve decided to dedicate a room in the Reach offices to this, and installed a permanent technical setup in there. We’ve also put thought into ensuring the setup is robust and works reliably, so that researchers can focus their effort on designing and running studies, and be able to trust that the technology will work…hopefully.
The sessions we run will often involve asking users to perform tasks on prototype software, either on desktop or mobile, and so we’ve equipped the lab to handle these contexts. The function of the lab can be grouped into two areas — the feeds it captures, and what it does with those feeds.
What is captured in our research lab?
The setup has been designed to capture feeds from a variety of sources.
A room cam
A generic USB webcam is mounted in the room to capture what happens during research, and its video is fed into our AV box. Often this feed isn’t particularly useful for research purposes — all of the activities happen on screen or are captured by closer cameras, and the audio is captured separately — but despite that, room cams are often popular. One reason is that the room cam acts as a confidence monitor for the other feeds — if you can see people talking but hear nothing, you know there’s an issue. There is also the argument that being able to see the participant helps increase empathy from those viewing the sessions, who will have a greater understanding that this is a ‘real’ person and that the events that occur during research are legitimate behaviours that should be considered.
Desktop feed
An HDMI splitter is used to divide the HDMI signal from a PC or Mac laptop to two destinations. One goes to a normal monitor, so the user can use the laptop like a normal device. The other HDMI feed goes into our AV box, so that we can record what happens on the screen. This allows us to run research on the desktop versions of our websites, and record people’s behaviour as we set them tasks using the website.
Mobile feed (direct)
When using our apps or the mobile versions of our websites, we want to be able to see what people do. To achieve this we capture the live video from both Android and iOS devices. One method we’re exploring is plugging the phones via USB into a laptop running OBS, open-source software that can take in and show a variety of video feeds. This can then be output via the laptop’s HDMI, as if it were a desktop feed.
Mobile feed (overhead)
One issue with capturing only the direct video feed from the mobile device is that some behaviour is missed if the system doesn’t react — e.g. if someone taps somewhere that isn’t clickable. That kind of insight is relevant for many design decisions, and so we want to ensure that we see when this happens. To achieve this we use ‘Mr Tappy’ — an overhead camera that can be mounted onto mobile devices to record what people’s hands do on the device — including where they click. At the heart of it, Mr Tappy is a USB camera, which can be used for activities such as card-sorting in addition to mobile testing, and the feed from it can then be plugged into our AV box.
Audio
During a research session, we’ll often ask the participants questions to try and understand their thinking and why we see the behaviour we observe. To capture the audio, we’ve been using a Samson ‘Go’ condenser mic. This is plugged into the laptop via USB for power, but also has an RCA output which we plug into our AV box to capture the audio directly.
What do we do with these feeds?
All of these audio and visual feeds go into our AV box. We use an Epiphan Pearl 2 for this, which allows us to use the feeds for a few different purposes. It can combine the visuals from each into a single channel — so the room cam can be viewed as picture-in-picture over the desktop feed, or we can view the direct and overhead mobile feeds together. This makes the experience much better for people viewing the session, as we can present them with all of the relevant feeds at once.
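The compositing step that the Pearl 2 handles in hardware is conceptually simple: scale one frame down and overlay it onto another. As a rough illustration only (the function name, the numpy-based approach, and the parameters below are our own assumptions for the sketch, not anything the Pearl exposes), a picture-in-picture composite of two video frames might look like:

```python
import numpy as np

def picture_in_picture(main, inset, scale=0.25, margin=16):
    """Overlay a scaled-down copy of `inset` onto the bottom-right
    corner of `main`. Both frames are HxWx3 uint8 arrays."""
    out = main.copy()
    h, w = main.shape[:2]
    ih, iw = int(h * scale), int(w * scale)
    # Nearest-neighbour downscale by index sampling, to keep the
    # sketch dependency-free; a real pipeline would use something
    # like cv2.resize with proper interpolation.
    ys = np.arange(ih) * inset.shape[0] // ih
    xs = np.arange(iw) * inset.shape[1] // iw
    small = inset[ys][:, xs]
    out[h - ih - margin:h - margin, w - iw - margin:w - margin] = small
    return out

# Example: a 720p "desktop" frame with a "room cam" frame inset.
desktop = np.zeros((720, 1280, 3), dtype=np.uint8)
room_cam = np.full((360, 640, 3), 255, dtype=np.uint8)
combined = picture_in_picture(desktop, room_cam)
```

A side-by-side layout for the two mobile feeds would follow the same pattern, placing each scaled frame into its own half of the output canvas.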
The AV box also records the sessions. This allows us to review the recordings later to take notes, or use them as the basis of our analysis of what happened in the session — an essential part of the research process.
Lastly, the Epiphan can also handle streaming. This means that we can broadcast the session live — either to people’s desks, or to a dedicated observation room. This makes it easy for people working on the product to see real users using it — potentially giving a greater depth of understanding of the issues people had than we might be able to convey in a report later. A dedicated observation room also allows us to run activities that help observers understand what’s occurring in the session and draw reliable conclusions from research.
How does the UX lab fit into the research process?
This UX lab is an essential element of successfully running a user research study. However, running the study is only a small part of what’s needed for a good round of research. We use a semi-structured process: capturing research objectives, designing an appropriate study, running that study, then analysing the raw data before drawing reliable and robust conclusions that our teams can feel confident basing decisions on. The raw data, including the video and audio captured by our UX lab, is combined with our notes and deep thought before it becomes actionable information, and this setup is incredibly helpful in ensuring that the data is available and appropriate when needed.
Getting this setup implemented relied on a lot of great support from colleagues across Reach — from senior managers ensuring that we have a room available for these activities, through to our network and hardware teams configuring the devices appropriately, and our office support team purchasing and finding all of the parts needed to make it happen. Building the lab has been a great show of support for user research at Reach, and we look forward to putting it into use!