Profiles in UX Research: Becky Buck from Salesforce

First Salesforce Office

Core to UX design is carefully crafted user research. This is Validately’s bread and butter — an innovative approach to UX design testing that is both lean and accessible.

But what are the different use cases for this kind of research? This interview with UX Research Lead Becky Buck from Salesforce is the first in a series that examines the many research methods and best practices currently employed in UX design.

At Salesforce, user research has been central to UX development since the company’s founding in 1999. Over the last 17 years, Salesforce has focused on three waves of innovation: offering users a system of record in the cloud for CRM, creating actionable information in mobile and social spheres, and harnessing behavioral data via the Internet of Things for predictive analysis.

Ongoing examination of UX design and regular user research are integral to this culture of innovation. Becky Buck spoke with Validately recently about her role in designing and implementing effective user research methods.

Most people are familiar with IoT (Internet of Things) as a buzzword, but what is the IoT Cloud product?

Companies all want a deep understanding of their customers. Traditionally, CRM data is historic, capturing information about what has happened in the past through data points like sales orders, purchase history, or service cases. The IoT Cloud allows companies to capture behavioral data in real time, enrich it with historic CRM data, then take action on that data. The combination of real-time and historic data is what will enable companies to provide customized experiences, even when they are mass-automated. Since our cars, watches, and coffeemakers are all going to be talking to us, we need to give customer experience managers tools to manage those conversations, so that customers get the right information at the right time. The IoT Cloud empowers people without computer science backgrounds to write business logic so that when the tire pressure in your car is low, instead of being spammed by alerts and emails on all your devices, it feels like a knowledgeable and caring customer service agent is helping you.
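
To make the tire-pressure example concrete, here is a minimal sketch in Python of the kind of business logic being described: a real-time device event is enriched with historic CRM data, and a single well-chosen action comes out instead of a blast of alerts. The names (DeviceEvent, CrmProfile, choose_action) and the 30 psi threshold are invented for illustration; this is not the actual IoT Cloud API.

from dataclasses import dataclass

@dataclass
class DeviceEvent:
    device_id: str
    tire_pressure_psi: float

@dataclass
class CrmProfile:
    name: str
    has_open_service_case: bool
    preferred_channel: str  # "app", "email", or "sms"

def choose_action(event: DeviceEvent, profile: CrmProfile) -> str:
    # No problem detected: stay quiet rather than spamming the customer.
    if event.tire_pressure_psi >= 30:
        return "no_action"
    # Already talking to service? Add context to that conversation instead.
    if profile.has_open_service_case:
        return "append_note_to_open_case"
    # Otherwise, send one message through the customer's preferred channel.
    return "notify_via_" + profile.preferred_channel

print(choose_action(DeviceEvent("car-42", 24.0),
                    CrmProfile("Ada", False, "app")))  # notify_via_app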

Who is involved in research sessions on the IoT Cloud?

Research Methods at Salesforce: Not every project needs to go through this full cycle, but this model represents the range of methods in our toolkit. We’ve found that diagrams like this are helpful for working with stakeholders to provide shared language and expectations.

Our full development team: designers, product managers (PMs), engineers — even executives. The scope of our research spans everything from product usability to product definition, so to some extent, it depends on the project. If we’re running lab sessions, designers and PMs will be in the room with us watching people interact with the product. Engineers will often listen in remotely from their desks.

It sounds like you have solid company buy-in for UX research.

Absolutely. Because we are a customer-focused company, it’s never a question of whether or not we should engage in research with customers. The discussion starts with “how” and “when.” For us, issues of stakeholder buy-in tend to involve questions like: Are these insights truly reflective of the larger audience? Were the testing circumstances reflective of real world conditions? These questions have prompted us to be extremely rigorous about user segmentation and participant selection for our testing.

How do you go about finding the right user base for specific tests?

We start by identifying user types that represent groups of tasks to be accomplished. For the IoT Cloud, there are four user types — strategists, composers, constructors, and connectors. Before user testing, we build screeners to clearly define skill levels for each type.

IoT Cloud User Types

For each type, we have user segments ranging from expert to novice. We tend to reach out to advanced “extreme users” like consultants in our partner ecosystem when we’re developing new features. This guides our exploration of what could be part of future products. Those users often come to the conversation with clear examples of what they want to accomplish and a list of feature requests. On the flip side, when we want to understand ease of use and utility to the average user, we reach out to individuals with less experience. They help us determine if the product is speaking their language and is easy to learn.
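
As a loose illustration of how a screener might map answers to those segments, here is a small sketch; the questions, the 0–4 scale, and the thresholds are all assumptions invented for this example, not Salesforce’s actual instrument.

def classify_segment(answers):
    # answers: screener question -> self-rated score on a 0-4 scale.
    score = sum(answers.values()) / len(answers)
    if score >= 3:
        return "expert"        # e.g. partner-ecosystem consultants
    if score >= 1.5:
        return "intermediate"
    return "novice"

print(classify_segment({"built_automations": 4, "uses_apis": 3}))  # expert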

How frequently do you engage in testing?

Our current schedule includes six moderated sessions every Tuesday, which provides the team with a reliable routine. We’ll do a debrief after each session, with deeper analysis on Thursdays. Typically, one researcher plans the test for a given week while another takes notes and compiles data; then we rotate the following week.

Becky and design lead Arthur Che reviewing prototypes before a test.

Salesforce has a wealth of products and users at this point, so what are you usually testing?

Because we have a solid style guide — and we’re deeply committed to accessibility — we’re doing less usability testing and more testing that’s focused on work processes and expectations. What mental models are users bringing with them from past experiences? What does a user expect to experience when using our product? How do we meet (or not meet) those expectations? For example, most people creating automations in Salesforce today are familiar with workflow models, which are completely linear — like a flowchart. The IoT Cloud, however, is based on a state machine model that incorporates programmatic concepts like loops and recursion. These are big mental shifts for most of our users, so we’re doing lots of exploration on how to reduce the learning curve.
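
A toy contrast makes the shift in mental models easier to see: in a linear workflow, each step runs once, in order, while in a state machine, event-driven transitions can send the system back to earlier states, which is where the loops come from. The sketch below is illustrative only, not Salesforce code.

# Linear workflow: each step runs once, in order, like a flowchart.
def linear_workflow(steps, data):
    for step in steps:
        data = step(data)
    return data

print(linear_workflow([str.strip, str.upper], "  order-7  "))  # ORDER-7

# State machine: named states plus event-driven transitions, so the
# system can revisit earlier states indefinitely.
transitions = {
    ("waiting", "pressure_low"): "alerting",
    ("alerting", "pressure_ok"): "waiting",    # loops back to an earlier state
    ("alerting", "no_response"): "escalated",
    ("escalated", "pressure_ok"): "waiting",
}

def next_state(state, event):
    return transitions.get((state, event), state)

state = "waiting"
for event in ["pressure_low", "no_response", "pressure_ok"]:
    state = next_state(state, event)
print(state)  # waiting (the machine escalated, then recovered)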

How do these approaches differ for new products versus existing ones?

The biggest difference is that with existing products, you have historical usage data. The more mature the product, the more evaluative and pointed our methods tend to be. My experience has been largely with redesign or new product development, and for those, we’re doing a lot of discovery work to understand people’s current processes and the roadblocks they encounter.

How does final data convert to development?

I don’t think of data as ever being “final.” We’re always gathering and interpreting data that address two questions. First, do we have enough evidence to make this specific design decision? Second, how is what we are learning adding up to big-picture behavioral trends? So, while we’re testing things like page layout and efficiency, we’re also learning about nuanced differences of language for people working in different industry verticals. The first sections of our reports tend to focus on three to four changes we can make to have immediate impact, while later sections examine interesting things we didn’t expect to find but want to remember. Over time, those seemingly random dots often get connected and offer profound insights.

What kind of metrics do you use when you share your insights?

In enterprise contexts, “successful task completion” can be tricky to measure. Even great usage metrics still give us an incomplete picture, so we’re passionate about leveraging mixed methods to triangulate truth. For example, we know that with very advanced features like process automation, administrators will sometimes give their login info to a consultant to set up and configure the feature for them.

So, we also ask users to self-report their level of experience with different features in addition to looking at usage logs. The difference between what the usage data shows and what people self-report becomes a good proxy for where systems integration partners are having impact.
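
A rough sketch of that proxy might look like the following; the 1–5 self-rating scale, the usage threshold, and the mismatch rule are all assumptions invented for illustration.

def partner_impact_gap(usage_logs, self_reports):
    # usage_logs: user id -> feature events observed in product logs.
    # self_reports: user id -> self-rated experience on a 1-5 scale.
    mismatches = 0
    for user, rating in self_reports.items():
        used = usage_logs.get(user, 0) > 0   # did the logs show usage?
        claims = rating >= 3                 # does the user claim experience?
        if used != claims:
            mismatches += 1
    return mismatches / max(len(self_reports), 1)

usage = {"u1": 14, "u2": 0, "u3": 9}
reports = {"u1": 4, "u2": 4, "u3": 1}
print(partner_impact_gap(usage, reports))  # 0.666..., u2 and u3 disagree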

An example of an evaluative metric the research team created is called the “Sad”min-“Glad”min Ratio. This shows the level of success a user segment has with a specific product function. We’re in the process of creating similar evaluative metrics for the IoT Cloud.
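
The exact formula isn’t spelled out here, but a minimal sketch, assuming each admin in a segment can be classified as struggling (“sad”) or succeeding (“glad”) with a given function, might look like this:

def sadmin_gladmin_ratio(outcomes):
    # outcomes: True if an admin succeeded with the function, else False.
    glad = sum(outcomes)
    sad = len(outcomes) - glad
    return sad / max(glad, 1)

print(sadmin_gladmin_ratio([True, True, False, True]))  # 0.333...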

What advice would you give to a UX team looking to implement user research for the first time?

First, start talking to users and potential users. Ask the magic question, “Can you say more about that?” It’s one of the most effective ways you can encourage participants to share their thoughts without asking leading questions. And, for people who want to make UX research a career, try to experience research for product development from the different contexts of startups, agencies, and large in-house teams. Each of these contexts has its own dogma, methods, and constraints. Having a wealth of experience to pull from has made me a better researcher and earned me greater business credibility with stakeholders.

Speaking of discovering new research methods, how did you learn about Validately?

A friend and former colleague had seen a demo and thought of me. I looked into it and discovered a tool that has been extremely helpful.

UX researcher Meredith Lanska, PhD, running a research session in Validately.

How so?

Early in my agency days, I was introduced to a proprietary research tool that allowed me to use video as visual evidence for my analysis. I could tag content, transcribe notes, and find specific segments of video that spoke to my research. Over the years, I’d seen somewhat similar tools, but they were clunky and integrated video poorly — if at all. I was thrilled when I found Validately because it enabled the workflows we needed most, like flagging clips in real time, tying notes to video timecodes, and making it super easy to share brief clips with product managers and scrum teams who can’t sit in on every session. They’ve reintroduced me — and our team — to video evidence, which has strengthened our reporting tremendously. Parsing that video for our research at Salesforce has been key to improving UX design and implementation.


Becky Buck is the UX Research and Service Design Lead for Salesforce IoT Cloud. Her experience in research for new product development spans diverse industries including public health, pharma, education, insurance, consumer packaged goods, and technology. While working at Salesforce, she has become an accidental expert in business process automation.

