Scaling UX Research for the Enterprise

Jeanette Fuccella
LexisNexis Design
Jan 24, 2019 · 5 min read

I joined LexisNexis just over two years ago, at a point when the company was making a renewed investment in user experience. In addition to hiring half a dozen designers and researchers, we had also just hired a new VP of User Experience, signaling the importance of the discipline — not only to the rest of the company, but also to those of us who were taking on this new mission to establish the LexisNexis UX team as best-of-breed in the industry.

Perhaps one of the most important indicators of the organization’s commitment to this new mission was the acknowledgement of UX Research as a separate discipline from UX Design. Although there weren’t many of us — four in the U.S. (two of whom started within a month of me), five in the UK, and one in Canada — the opportunity to be a part of a dedicated UX Research team was thrilling!

Illustration of people building a website, by BriAnna Berry

Previously, the team had been so small and spread apart that operationalizing processes wasn’t a huge pain point. However, with the rapid expansion of our team, it quickly became apparent that our inefficiencies were costing us more than just wasted time: they were preventing us from maximizing the full value of our research activities — to the detriment of our team and the company, and also to all those who took the time to participate in our research activities.

A review of the industry yielded a key insight: we were not alone in our struggles. A recent study by Tetra Insights indicates that “qualitative research is growing as a discipline, with no signs of slowing down” and that researchers, and the product professionals they work with, are “feeling the pain as research demand grows.” In response to these pains, a global ResearchOps community was formed, a place where user researchers could learn from the shared wisdom of their peers.

With our new organizational structure in place and a global community to draw from, we had full support to implement the tools and processes that would make our team as effective and efficient as possible.

Step One: Tools and process inventory

Our first step was to conduct a thorough and honest evaluation of the tools and functionality our team had access to and what we needed in order to be successful. For each tool, we identified how often it was used and the impact it had on the organization. As a result, we realized that we were paying for licenses we weren’t using and, in some cases, paying for duplicate functionality across multiple products. The inventory also revealed which tools our team was lacking.
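For illustration only, here is a minimal sketch of what that kind of inventory can look like when you want to spot under-used licenses and duplicated functionality. The tool names, costs, and session counts are hypothetical placeholders (not our actual tools or figures), and a spreadsheet works just as well.

```python
# A minimal, hypothetical sketch of a tool inventory used to surface
# under-utilized licenses and duplicated capabilities.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str                 # hypothetical example entries
    annual_cost: float        # yearly licensing cost
    sessions_last_year: int   # how often the tool was actually used
    capabilities: set         # what we rely on it for

tools = [
    Tool("survey-platform", 12_000, 40, {"surveys"}),
    Tool("recruiting-panel", 30_000, 6, {"recruiting", "incentives"}),
    Tool("usability-suite", 18_000, 55, {"usability-tests", "surveys"}),
]

for tool in tools:
    # Rough cost-per-use figure to flag licenses we are barely using
    cost_per_session = tool.annual_cost / max(tool.sessions_last_year, 1)
    # Capabilities we are paying for in more than one product
    overlap = {c for other in tools if other is not tool
               for c in other.capabilities} & tool.capabilities
    print(f"{tool.name}: ${cost_per_session:,.0f} per session, "
          f"duplicated capabilities: {overlap or 'none'}")
```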

Similar to the tool inventory, we conducted a process inventory. We scrutinized where our processes were consistent and where they weren’t, where there was documentation (or not), and where there were gaps. We also noted which aspects of each process were manual and which were automated or could be automated, in hopes of finding opportunities for greater automation.

illustration by BriAnna Berry

Step Two: Identify opportunity gaps

Once we had completed our inventories, we identified opportunities for improvement along with their impact on the organization. Our goal was to be as data-driven as possible in focusing our efforts.

For our team, the inventory revealed that our highest-impact area was participant recruitment. Most of the work involved in recruitment was extremely manual, and our tool inventory showed that we were spending most of our budget on a product that wasn’t being fully utilized. Even worse, it didn’t offer the functionality we needed most in order to automate our recruiting processes.

Step Three: Build a staged plan

With the initial analysis complete, it was tempting to try to fix everything at once, but we managed to restrain ourselves. First, we knew that too many simultaneous changes would quickly become overwhelming. Change is hard in general, and we had to roll it out in a way that wouldn’t disrupt our small team’s productivity. We knew that if we tried to change too much at once, the team would be reluctant, or even unable, to adopt the changes.

Also important was our ability to measure the impact of our changes, and we knew it would be more difficult to attribute impact to any particular change if multiple changes were rolled out simultaneously.

In our case, while our analysis revealed a huge opportunity gap with regard to analyzing, storing, and disseminating results, we determined that the higher priority was to solve our upstream problems with regard to recruiting and panel management first.

Step Four: Execute and measure

We conducted our inventories, analyzed our gaps, built a staged plan, and then finally were able to tackle the fun part… execution! Despite our best efforts, it was still easy to underestimate the amount of time that deploying new tools and processes would take. In addition to the formal evaluations and vendor negotiations, we needed to ensure that each team member was amply trained. This meant that even after we defined and created our solutions we had to document them in the form of training materials (not to mention being available for occasional problem-solving and question-answering). Of course, the more we invested ourselves in our new systems, the more ideas we had about how we could make them better … which meant more execution and more training. Phew!

Most importantly, we wanted to be sure to set ourselves up to be able to measure the impact of our work. (Not to mention learning from and communicating the value of our hard work!)

illustration by BriAnna Berry

In our case, we were able to demonstrate a substantial drop in tool costs, coupled with an increase in the number of user engagements per month per researcher. Anecdotally, our researchers have said that they are doing far less administrative work, allowing them to spend more time applying their creative and analytical skills to our most difficult research challenges.
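As a rough illustration of how those two metrics can be tracked, here is a small sketch; all numbers are hypothetical placeholders rather than our actual results.

```python
# A minimal sketch of before/after metrics; the figures below are
# hypothetical placeholders, not our real data.
def engagements_per_researcher_per_month(total_engagements, researchers, months):
    return total_engagements / (researchers * months)

before = engagements_per_researcher_per_month(total_engagements=120, researchers=10, months=6)
after = engagements_per_researcher_per_month(total_engagements=200, researchers=10, months=6)

# Annual tool spend, after vs. before (hypothetical)
tool_cost_change = (45_000 - 60_000) / 60_000

print(f"Engagements per researcher per month: {before:.1f} -> {after:.1f}")
print(f"Tool cost change: {tool_cost_change:+.0%}")
```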

Step Five: Revisit, revise, repeat

Once we started gathering and reviewing the data, we gained a better sense of where we needed to tweak our tools and processes. While there are still many improvements on our wish list for panel management and participant recruitment, the progress we’ve made has been substantial enough to move on to our next challenge: the research repository.

We look forward to tackling this new challenge, knowing that we have the support of our management team and the broader UX Research community, and a whole new set of tools to play with and choose from.


Cultivating curiosity — in myself and others. Student of people, cultures, traditions & the intersection with technology.