What does it take to establish a new UX research practice?

Xian Gu
9 min read · Feb 11, 2019

Unpacking the foundational work I did for A+E

Last year, I served as the first UX Researcher for A+E Networks in New York. I really enjoyed helping the company ground their UX practices in research, improving their products, and working with a team. This article is an outline and retrospective of the research projects I put into practice, and a glimpse of what’s hopefully ahead. I’m optimistic that the company will maintain them and with time, grow the research initiatives and further incorporate them into their creative process.

To give some background: A+E is a long-standing media company with a large umbrella of TV brands and shows. Given its size and history, it has a streamlined approach to content creation honed over decades, but it's much newer to tech and has only recently begun examining its approach to digital content distribution.

Part of that shift meant competing with video streaming apps like Hulu and HBO Now, which have gained popularity over cable TV. My team was one of 2 digital product teams. We were tasked with the maintenance and growth of 2 streaming video on demand apps (accessible via subscription) specific to A+E's brands across 6 platforms, one of which was newly designed from scratch during my stay.

The UX team was established about 4 years ago, and the sprint team ran standard two-week sprints. Designers and developers actually had separate sprints and their own Jira boards when I started, but initiatives were later introduced to merge the team of about a dozen. I worked primarily with other UX and UI designers on my team, but also with product, dev, and QA. I reported to the head of design, but was obliquely linked to the head of dev, product, general digital managers, and legal.

In terms of the culture, A+E did their best to meet the demands of their executives by rolling out the two products very quickly for users. The philosophy was in keeping with the idea of an MVP that improves over time: done is better than perfect, after all. Over the years, when design was needed it was done rapidly, with limited research and iteration, and roll-outs were conducted as needed across the multiple platforms. The benefit was that the products got out the door and the team was able to start observing usage numbers and making a wish list of features and ideas. The company's strengths lay in the intermediate parts of design: ideation, prototyping, and UI design. Their relative weaknesses were research and usability.

Though lean and economical, the approach was not always consistent. Timelines for each product and platform were different, and the products were slowly diverging on each platform and from each other. Because decision-making was rapid-fire, it was sometimes plagued with partisan rather than user-informed thinking, where opinions overshadowed evidence. There were not always opportunities to revisit designs after launch, so quick fixes sometimes persisted over long-term solutions. Overall, the UX process was shaggy — a little haphazard, not always scientific. At the company at large, there was a heavier focus on the work of developers over designers, leading to a developer culture where design sometimes had to compromise.

My remit was essentially to help the team correct course on those weak points by incorporating more user research. That would better inform design and product, and push back on dev if necessary. I established qualitative practices and maintained/expanded quantitative practices, encompassing both generative (beginning of design cycle) and evaluative (end of design cycle) methods. In addition, I did crossover work into UX design, which ended up being about 30–40% of my total workload. Finally, of course, I did the necessary legwork to promote UX research and design internally through communication and evangelism.

The role happily encompassed a lot of collaboration and autonomy. I appreciated the ability to shape their research program, show other team members how to understand research findings, and ultimately use them to shape design. Of course, being a founder always comes with challenges, and there were definite bumps in the road with timelines, corporate red tape, legal skirmishes, and confusion about how to best collaborate, among other growing pains. But all in all, I believe the team and products are well-served by the addition of research, and will continue to benefit in proportion to their investment.

***

With that, I’ll go over the major research tools and projects I added and organized:

Existing tools: SurveyMonkey, Amplitude

The team had used SurveyMonkey in the past to push out screener surveys to users. I added a few similar surveys and also used the service for general information-gathering about users. The results are easy to interpret on their own, but when necessary, I created additional visualizations.
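
As a rough illustration of what those extra visualizations involved, here's a small Python sketch that charts one question from a survey export; the file name and question text are hypothetical stand-ins, not a real A+E export.

```python
# A rough sketch of the kind of extra chart I'd build from a survey export.
# The file name and question text below are hypothetical stand-ins.
import pandas as pd
import matplotlib.pyplot as plt

responses = pd.read_csv("screener_export.csv")  # hypothetical SurveyMonkey CSV export

# Plot the distribution of answers to one multiple-choice question.
counts = responses["How often do you watch streaming video?"].value_counts()
counts.plot(kind="barh", title="Streaming frequency among respondents")
plt.xlabel("Respondents")
plt.tight_layout()
plt.savefig("streaming_frequency.png")
```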

Amplitude was used as a quantitative tool for data warehousing and analysis. With the ability to see conversion and retention, segment by user actions or demographics, run funnel analyses, view A/B test results, see pathways taken by users, and more, it was a nice tool for managing the medium-sized datasets generated by the two products. There's a way to do custom SQL querying too, which is great. Prior to using it, the product team had to reach out to the in-house data analysis team for specific questions, which could take a long time, open the door to miscommunication, and become tedious with repetition. With Amplitude, there was a way to see live analytics and quickly answer questions, pinning them into dashboards when that was useful. Amplitude was very new to A+E when I started, and the main keeper of the keys was the product owner. As I grew into the role, I took over the quantitative research, created some new dashboards specific to the UX team, and exported some visualizations for demos and presentations.
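
To give a flavor of the kind of question Amplitude could answer in a few clicks, here's a minimal Python sketch of a simple subscription funnel computed over an exported event log. The event names, columns, and numbers are made up for illustration; they're not A+E's actual schema or Amplitude's API.

```python
# A simplified funnel count, similar in spirit to what Amplitude shows in a
# few clicks. Event and column names are hypothetical, and a real funnel
# analysis would also respect event order and timestamps.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "event":   ["view_plans", "start_trial", "subscribe",
                "view_plans", "start_trial",
                "view_plans", "start_trial", "subscribe",
                "view_plans"],
})

funnel_steps = ["view_plans", "start_trial", "subscribe"]

# Count distinct users who reached each step, restricted to users who
# also reached every earlier step.
eligible = set(events["user_id"])
users_per_step = []
for step in funnel_steps:
    reached = set(events.loc[events["event"] == step, "user_id"]) & eligible
    users_per_step.append(len(reached))
    eligible = reached

for step, n in zip(funnel_steps, users_per_step):
    print(f"{step}: {n} users ({n / users_per_step[0]:.0%} of the top of the funnel)")
```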

New tools: UserTesting, Optimal Workshop, AirTable, Calendly

UserTesting was the most impactful tool added to the arsenal. Prior to its introduction, the team had done very little usability testing, due to time, resource, and knowledge constraints. On rare occasions the team would do limited in-person usability testing, often with proxies rather than actual users. UserTesting runs remote digital tests, most notably unmoderated sessions that testers record and submit in their spare time, so a batch or round usually takes no more than a few hours to turn around. Another benefit is its large tester base, large enough to immediately find suitable proxies based on segmenting, and large enough to find actual users with a little more time. With the addition of some in-person testing run in a makeshift usability lab (a converted office), we had a much more robust research resource. The main usage for UserTesting was evaluative, task-based usability tests, but the tool is flexible, and I also used it for generative user interviews, both moderated and recorded. It does have some weaknesses, most saliently the variable quality of its testers, and of course, with unmoderated tests you can't give clarification or reset a prototype if a user gets lost. But all in all, UT drastically simplified the research and testing environment, which made it easy for me to squeeze in a lot more of it. I'd usually run at least 2–3 rounds per feature per design depending on need. Finally, UT makes it easy to interpret results with notes and reels, which I would use to share findings with the team.

Optimal Workshop was the other research tool I added. I found it most useful for its card sorting feature, Optimal Sort. It also has other tools like tree testing (essentially reverse card sorting) and first-click testing. Similar to UserTesting, it's easy to set up and recruit users. We used it to run digital sorts of content, looking for the ways users classified videos. Because card sorting hadn't been done before, it was illuminating to glean this knowledge, and more is definitely slated for the future. The results are displayed in helpful ways too, saving me from having to hand-calculate how often cards were sorted together. It was definitely fun to present to higher-ups.
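
Under the hood, the core of that analysis is just pairwise co-occurrence counting, which Optimal Workshop handles for you. Here's a minimal Python sketch of the idea; the card names and groupings are illustrative, not results from the actual study.

```python
# Pairwise co-occurrence counting for an open card sort: for each pair of
# cards, how many participants placed them in the same group. Card names and
# groupings here are illustrative, not results from the actual study.
from collections import Counter
from itertools import combinations

# Each participant's sort: group label -> cards placed in that group.
sorts = [
    {"True crime": ["Show A", "Show B"], "Documentaries": ["Show C"]},
    {"Crime": ["Show A", "Show B", "Show C"]},
    {"Mysteries": ["Show B", "Show C"], "Crime": ["Show A"]},
]

pair_counts = Counter()
for sort in sorts:
    for cards in sort.values():
        for a, b in combinations(sorted(cards), 2):
            pair_counts[(a, b)] += 1

# Share of participants who grouped each pair together.
for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```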

AirTable was a tool added for managing lists and data sets generated from internal surveys, as well as exports from SurveyMonkey. In its most basic form it's a prettied-up spreadsheet, but it also has a number of other helpful views like calendar, gallery, and kanban. I ended up being the main user, but one of its benefits is easy collaboration, so it's great for teamwork.

Calendly was a wish-list tool for scheduling interviews and usability tests that happily got cleared for use.

Existing projects: Survey for LMC (Lifetime Movie Club) product, initial user interviews for LMC

Prior to my joining, the team had done some limited surveying and user interviews for one of the two products. Since there was no dedicated staffer and no sprint points set aside for research, the work often got pushed to the back burner and ended up taking a very long time for a small set of sessions.

New projects: Surveys for HV (History Vault) product, initial user interviews for HV, targeted user interviews for HV, data dashboards, card sorting for HV, in-person usability testing, remote usability testing

My biggest task was to make time and space for qualitative research, starting by initiating surveys and interviews for the second product and expanding interviews for the first. I set up screeners on SurveyMonkey and conducted interviews through UserTesting. Similar to what had been done before, I started with sets of broad questions to ascertain the main motivations of users turning to the product and, when relevant, staying with it. The script then delved into questions concerning individual usage and preferences, pain points, and pathways. Drilling down further, I asked about finer details on features, content, organization, and navigation. To finish, I closed with sets of comparative questions on competitors, general experience, value, and other relevant broad topics. Synthesizing these interviews helped the team better understand user preferences and behaviors, bringing design closer to the intended audience.

I supplemented these sets of general interviews with targeted interviews for specific design initiatives. These interviews were shorter and aimed specifically at gathering generative information relevant to the feature being designed at that moment. That way, we had both big-picture and detail-oriented intelligence about our users' needs and wants.

As mentioned before, I coupled the user research from surveys and interviews, plus competitive and comparative analyses, with quantitative information generated and gathered from Amplitude.

For certain projects, including a new feature for HV, I introduced card sorting to gain insight into user-generated groupings. See my write-up about the project here.

On the evaluation side, I ran the usability lab, comprising both live (in-lab) and remote (over UserTesting) tests. For low-fi wireframes and prototypes I find it useful to get quick, more informal feedback, so I often used in-lab testers on paper or grayscale digital prototypes. With increasing fidelity come more complex tasks and pointed questions to uncover weaknesses, at which point I'd switch over to UT. As the tests come back cleaner over progressive prototypes and we move toward validation, the tasks can once again simplify. I find these varied rounds of testing, each tailored to the project and prototype, really helpful for efficiently evaluating a design's merits and shortcomings. As a bonus, usability testing always left me with a handful of generative insights that could be applied to future projects, starting the design cycle all over again.

Future projects: Card sorts for LMC existing content, additional card sorts for HV existing content, card sorts for future LMC and HV content, additional surveys for LMC and HV, additional interviews for LMC and HV, contextual inquiry for LMC and HV, focus groups for LMC and HV, heuristic evaluations for LMC and HV

There are a lot of possibilities for future research projects that I’ve proposed, with the most urgent being several sets of card sorts for both existing and future video content. The company will be massively increasing their supply of content for both apps, so it’s the perfect moment to dive into matters of information architecture.

In addition, there will continue to be a need for surveying and interviewing as new features are added and old ones changed.

Finally, I see an opportunity to try other qualitative research methods that haven't been attempted before, like contextual inquiry and focus groups. As the company develops and streamlines their UX workflow, heuristic evaluations can also play a part in keeping things on track.

***

With this expanded set of research tools and practices, A+E is well on its way to rounding out its design process. I’m happy with the progress they’ve made, and really see the arc of development favoring better, user-facing design. I look forward to the continued success of their products!

Xian Gu

Maker, giver, learner, and all-around nerd. UX researcher and strategist with a background in HCI and psychology. Currently @ Microsoft