The Future of Behavioral Science in Business: Part 2

How does an academic field become a business discipline?

Florent Buisson
Behavioral Design Hub
14 min read · Apr 13, 2022


DISCLAIMER: While my bread and butter as an author is to explain technical topics such as behavioral data or the Bootstrap, I sometimes indulge in more free-roaming musings such as this series. Feel free to tune out if you'd rather wait for my next technical article.

In Part 1 of this series, I took a broad, market-level view and discussed how the backgrounds, tasks, and titles of researchers in business related to each other and how those relations evolved. In this installment, I'll zoom in on the duties of researchers and explore how they get defined. I'll argue that the success and dramatic growth of UX research and Data Science over a few decades comes from their having established what I'll call repeatable "roles." I'll then explore roles emerging for behavioral science and what that means for the future of the discipline in business.

What is a scientist in business to do?

The question of how to best use scientific knowledge in business is far from new. In Ancient Greece, critics derided the philosopher and mathematician Thales of Miletus for teaching virtue only because he couldn't achieve wealth. He then made a fortune and silenced them by predicting an exceptional olive harvest and renting all the available olive presses in advance.

However, achieving success with a one-person operation is not the same as renting your skills to a company as a knowledge worker. Some companies have found legendary success by hiring bright young people and letting them do their thing (e.g., the Guinness brewery hired William Gosset, who would invent the t-test under the pseudonym "Student"). But that is not exactly a consistent and repeatable model. Imagine hiring someone just a tad less brilliant than Gosset (as most of us are); on their first day, they arrive bright-eyed and bushy-tailed and ask you, "I'm ready; what should I do?!" What should they do indeed?

The main challenge is that scientific knowledge gets acquired in universities. Academic expertise is primarily structured to facilitate its transmission, testing, and expansion (i.e., for courses, exams, and research). On the other hand, companies require the ability to answer relevant business-related questions.

I'll argue that the solution to bridge that gap efficiently is the definition of "roles" with predefined backgrounds and standardized tasks. This solution means that a role will neither use the entire experience of a recent graduate nor answer all possible business questions.

Thus, for example, actuaries don't use all their statistical knowledge and are not expected to answer all statistics-related questions in an insurance company. When the company hires a statistics graduate as an actuary, they can immediately put them to work on creating actuarial tables and so on. Similarly, whenever a job title includes the word "engineer," this is a vital clue that this cookie-cutter process has occurred. Civil engineers may know a lot of physics and chemistry, but they're only expected to answer a narrow subset of physics and chemistry questions. It is similar for computer science and software engineers, etc.

The phrase "cookie-cutter" may suggest that creating a role is easy, but it's not. Because academic education and business needs are constantly evolving, it is akin to shooting at a moving target from a moving boat. Creating a good and efficient role means identifying questions that fulfill the following criteria:

  1. You can get recent university graduates to answer them effectively with no or minimal additional training (i.e., measured in weeks at most);
  2. Answering them creates enough value for the business, and that value can be articulated convincingly;
  3. These questions occur frequently and consistently enough in the ordinary course of business;
  4. Answering them can be standardized enough to fit into business planning — unless you'd like to try convincing a business leader that you'll probably answer their question, but that it might take between 3 and 18 weeks, and that there's a 30% chance you won't have an answer at all in the end.

Let's see how this framework applies to the primary roles of UX research and data science.

Photo by Annie Spratt on Unsplash

The primary UXR role: usability testing

UX researchers train in qualitative social sciences, such as ethnographic research. They're interested in social and cultural phenomena (as opposed to purely individual behaviors), often delay theorizing as long as possible, and strive to remain neutral concerning their participants' values. These disciplines' objects, methods, and ethos make the "what should I do?" problem especially acute. While this approach makes for great science, the business benefits of simply letting an ethnographer loose on your premises are far from guaranteed.

The typical role of UX researchers in business is the analysis of human-computer interaction (HCI) and in particular usability testing. Software has progressively become ubiquitous, leading to the realization that it must be designed "with the mind in mind" so that users can use it effectively. Thus usability testing checks the four boxes of a good role we described above:

  1. Someone straight out of a social sciences grad school can learn that role reasonably quickly and promptly earn their keep in a company;
  2. Usability testing offers tangible benefits: business leaders can (generally!) recognize that poor usability hurts business results, and they can see the changes on a website with their own eyes;
  3. Software is everywhere, and nowadays even "brick and mortar" companies invest in their digital tools and online presence. This generates streams of usability requests big enough to keep UX researchers busy;
  4. Usability testing can occupy a somewhat predictable slot within software development processes and timelines.

Moreover, usability testing yields consistent and predictable benefits from a business management perspective. Up to a pretty high point, these benefits scale with the size of the team; one can expect twenty researchers to "produce" about twice as much usability testing as ten [1].

Of course, there are many other sorts of questions that businesses would like to see resolved and that social scientists could effectively answer, especially scientists whose background is not in HCI. In the book The Moment of Clarity, Madsbjerg and Rasmussen offer many inspiring illustrations of how social sciences can answer deep questions and generate breakthrough strategy and innovation. For example, building LEGO models taps into deeply rooted human needs for autonomy and a sense of competency; capturing that insight was central to LEGO's renewal in the 2000s.

But identifying and solving these questions is not a well-defined role that would check the four criteria above. In particular, there are a lot of "unknown unknowns" there, which require a higher level of expertise to be teased out and translated into actionable and beneficial insights. Consequently, this type of deep, foundational research work represents only a tiny fraction of UX research work out there, as far as I can tell (although the design consultancy IDEO seems to have carved a niche in that space [2]). It requires business leaders who can insulate researchers from short-term requests, convert the insights obtained into a business strategy, and implement it. That's a tall order in most organizations.

The primary Data Science role: predictive analytics

Data science has been described as the intersection of statistics, business expertise, and computer science/coding. We can think of data science as a "new and improved" role in business for statistics, enabled by computers. Gosset had to identify a business problem, convert it into a statistical problem, collect data by getting measurements from one vat after another, and then analyze the data using ad hoc methods.

Like foundational research in UX, this work doesn't fit into an efficient and standardized role. However, computers, by their very nature, spit out numbers that can be subjected directly to statistical analyses to predict values: "given a series of X's and Y's, predict the value of Y for a certain X." This led to a very successful role:

  1. We can quickly and efficiently take graduates with a Ph.D. in theoretical physics and "plug them" into a computer (does that even still count as a metaphor?) to analyze the numbers that come out. For instance, in Kaggle competitions, thousands of data scientists worldwide compete to analyze a dataset without ever talking to a business employee and often without any domain expertise in the business problem. One dataset may come from an engineering company making wind turbines, and the next from a pharmaceutical company trying to decode viruses' DNA.
  2. Even when you don't understand a business, business people will find a use for you if you can somewhat predict the future.
  3. As was the case with UXR, the ubiquity of digital experiences mediated by computers provides a steady stream of work. When everything gets logged and measured, everything can be submitted to predictive analytics. This applies even to seemingly "offline" environments: railroads can prioritize the replacement of tracks based on replacement history and accident records.
  4. Predictive analytics works "under the hood" without requiring much alignment from business partners or timetable modification. This makes it appealing for business leaders because it can generate value without disrupting the rest of the organization.
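
In code, that role reduces to a tiny, repeatable recipe. Here is a minimal, self-contained sketch of "given X's and Y's, predict Y for a new X" using ordinary least squares; the toy data and the `fit_line` helper are made up for illustration, not taken from any particular library:

```python
# Minimal sketch of the predictive analytics role: fit a line to observed
# (x, y) pairs, then predict y for a new x. Data and names are illustrative.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance(x, y) divided by variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Toy data: note that the analyst needs no domain knowledge about what
# x and y mean -- exactly the point made about Kaggle-style work above.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_line(xs, ys)
prediction = a * 6.0 + b  # predicted y for a new observation x = 6
```

The same recipe applies whether the numbers describe wind turbines or viral DNA, which is precisely what makes the role so repeatable.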

One may object that data science is not just predictive analytics, and that predicting the future often takes more than predictive analytics. If you want to predict "what will happen if we do this instead of that?" you need causal analytics (e.g., A/B tests). Indeed, causal analytics is becoming an increasingly common role in data science. I expect predictive analytics to remain dominant in the near term because it relies less on business knowledge and is more repeatable. Note that I said "dominant," not "better": the efficiency and potential for standardization of a role represent a path of least resistance that an organization will follow unless there is a conscious effort in a different direction.

The primary Behavioral Science role today: nudges?

When I set out to establish my behavioral science team, the case for centering on nudges seemed clear to me:

  1. The body of knowledge on nudges is reasonably well contained. It even has its own book, aptly titled Nudge!
  2. Nudges drive behavior change, and behavior change drives business outcomes. For example, people sign up for paperless bills, pay them on time, and so on, which is beneficial to the business and should be immediately appealing to business partners.
  3. Behaviors are all over the place in businesses, providing an endless stream of opportunities.
  4. What could be more straightforward than applying a nudge? Make some tweaks to an email or website, and voilà!

And indeed, as far as I can tell, doing nudges is the most common role of behavioral scientists in business. However, my team encountered several difficulties:

  1. First, the replication crisis has hit behavioral science quite hard, and we must face the reality that the value created by a nudge is not consistent. Adding social proof can just as easily decrease the target behavior through reactance as increase it. Compare this to the value of usability and predictive analytics, which is consistently positive or null: you don't hear stories of UX research improving the usability of a website and click-through rates dramatically dropping off as a result. This lack of consistency can be somewhat compensated for by continuously validating nudges through A/B testing and adopting a portfolio approach; if you try half a dozen nudges, the best of them will likely improve the target metric dramatically. But this requires the proper testing infrastructure and a longer time frame. Unless that infrastructure already exists, we're suddenly talking about six months of developer time to hack a way to run an A/B test (if you're lucky) and then six to twelve months of testing itself. You're asking your business partner to wait a year before they see any benefits.
  2. Second, having "behavior is a function of the person and the environment" as a mantra is all well and good. But it implies that a recent graduate needs contextual information to implement all but the most straightforward and repetitive nudges. Unless you want to become "the progress bar people" (and I got close to that), you need some qualitative diagnostic, which means interviews, interview logistics, costs, delays, etc.
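
The portfolio argument in the first point is simple arithmetic. As an illustration (the 40% success rate is an assumed number for the sake of the example, not an empirical estimate), if each nudge independently has a 40% chance of moving the metric in the right direction, the chance that at least one of six does is above 95%:

```python
# Back-of-the-envelope sketch of the "portfolio of nudges" argument.
# The 40% success rate is an assumed, illustrative number.

p_success = 0.40  # chance that a single nudge improves the target metric
n_nudges = 6      # nudges tried in parallel and validated through A/B tests

# Probability that at least one of the six beats the control,
# assuming independent effects (a strong simplification).
p_at_least_one = 1 - (1 - p_success) ** n_nudges
print(f"Single nudge: {p_success:.0%}; best of {n_nudges}: {p_at_least_one:.1%}")
```

The flip side, as noted above, is that running six parallel A/B tests requires infrastructure and time that a brand-new team rarely has.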

All of that is to say that doing nudges is more complicated than it seems. Based on the limited and non-random sample of people I know (and the people they know), I'll venture the following guess: the median outcome achieved one year after hiring the first behavioral scientist in a company is a lot of fantastic presentations and exactly one nudge in production. This is not to say that nudges are a harmful role for behavioral science. We're just still figuring stuff out.

What are we going to do tomorrow?

The same thing we do every night, Pinky — try to take over the world!

I argued that the success of UX research and data science over the past decades comes to a large extent from the usefulness and efficiency of their respective roles. Organizations have been hiring young graduates by the dozen and shoveling them into ever-larger research teams and departments because it has worked for them. Will the same thing happen with behavioral science? In a recent survey of behavioral science teams worldwide, the median team size was half a dozen people, which is pretty low. I believe this reflects that we're still figuring out our role(s).

I can see several potential "role models" for behavioral science:

First, we should note that most of the sizeable behavioral science teams are in consulting firms. Indeed, by taking on fixed costs and developing standardized processes, consultancies can make a role more efficient and more appealing to their clients. For instance, corporate strategy questions are too idiosyncratic and infrequent for any single company to tackle efficiently in-house. Management consulting firms focus on standardized questions such as "should we enter this market?" or "should we buy that other company?" and answer them repeatedly across companies. This makes a strategy consultant's role much more efficient. Thus, I expect consulting to remain a significant model in applied behavioral science, and individual consultancies' financial success and growth will depend on their roles' efficiency. In other words, the "best" consulting firms will be those that carefully select which questions they answer and get very good at answering them consistently.

On the other hand, for the companies that decide to bring behavioral science in house, I expect a few divergent paths:

  • In some cases, the size of the behavioral science team will be too small for its scope of work, leading to inefficiencies and disappointment. In particular, there are only two situations where it can make sense to hire a single behavioral scientist: either as a first step towards building a more extensive, sustainable team, or as a highly skilled specialist to answer deep and complex questions. Otherwise, a single person cannot by themselves achieve the efficiency required to make their role worthwhile for the company, especially if you factor in their "employment cycle": each time the person leaves, you lose all their tacit and organizational knowledge, and you need to hire and train a new person from scratch.
  • Alternatively, some teams will achieve "escape velocity" and reach a scale where they can efficiently handle their scope of work. As in the case of consulting teams, good role definitions will be crucial to success.
  • Finally, a completely different way to resolve that challenge is to tack behavioral science on top of an existing, efficient role. Thus I expect to see specialized functions take shape at the frontiers of behavioral science with UX research and data science. Some questions are too specific to include in generic UXR or DS roles but are valuable and frequent enough to justify headcount within an already efficient team. For example, a UX researcher with a psychology background can map customers' mental models. A data scientist with a behavioral economics background can create variables that better reflect customers' behaviors.

As a side note, this leads me to a sad prediction: behavioral scientists with 0–3 years of experience who are the first BeSci hire in their company will face an uphill battle and most likely not be replaced when they throw in the towel and leave. (This fits my current impression from what I've heard of people in that situation, but I'd be relieved to be proven wrong, so feel free to message me if you have successes you'd like to point out to me!)

Summary and conclusion

Take-home points:

  • For my purposes, I define the "role" of a researcher in business as the type of questions they are tasked with answering.
  • A research function's ability to succeed and grow in the face of competing business priorities and recurring cost-cutting depends on its ability to develop a high-value and efficient role.
  • The dominant role of UX researchers is usability testing, and the dominant role of data scientists is predictive analytics, not because these are the only things they are good at, but because their repeatability and value are high.
  • The dominant role of behavioral scientists is currently to create nudges, but it has not (yet?) achieved a level of efficiency on par with UXR and DS.
  • Role efficiency will shape the successful ways to do behavioral science in the future. The two main paths I foresee are "large enough" teams (either in consultancies or in-house) and specialized hybrid roles at the frontiers with UXR and DS.

From its original home in academia, behavioral economics first established a strong presence in government and the public sector, with widely publicized "Nudge Units," before making forays into the private sector. As far as I can tell, the most prominent teams in business so far mostly work along the lines of either an "academic lite" model (publishing white papers and such) or a consulting model. Is there another valid, scalable model out there waiting to be invented? My opinion on the question has fluctuated over the years and can best be described as "maybe, maybe not." Only the future (or someone smarter than me!) will tell.

However, this post only takes a narrow view of behavioral science in itself and its adjacent fields. A broader perspective would show the profound transformation of the workplace. Once upon a time, in the industrial era of the 20th century, economies of scale were the name of the game: the most successful companies achieved large-scale production while maintaining coherence and alignment around annual or even multiyear strategic plans. Middle managers were the most valuable players (MVPs) of the time, driving that coherence and alignment from the top down.

With the advent of computers and the beginning of the digital era, coherence can largely be ensured by information flows, and the new MVPs are the knowledge workers processing information and generating relevant insights. If I push this trend to its logical conclusion, the very idea of a "behavioral science team" could become obsolete. The critical question would then not be "can a behavioral scientist earn their keep?" but "can behavioral science be a useful tool on the belt of generalist knowledge workers?" I'm much more optimistic regarding the answer to that question, which I'll try to tackle in more depth in this series's third and final installment.

PS: This post was surprisingly challenging to write and took several false starts. I realized that some of my ideas were ill-conceived or poorly defined, and I had to work on them significantly before reaching a satisfying result. Therefore, I don't know when I'll be able to publish the final installment, so stay tuned but don't hold your breath!

Acknowledgments

Many thanks go to:

  • Jenny Rabodzenko for her deep experience in UX and applied anthropology (and for the time she spent correcting embarrassing typos and poor turns of phrase!)
  • Elif Buse Doyuran for her thoughtful and informed perspective on the sociology of professions

Footnotes

[1] There are certainly some diminishing returns coming from prioritizing the most critical questions. Still, there are also economies of scale, so I feel that proportionality holds reasonably well as a rough approximation.

[2] Thanks to Elif Buse Doyuran for pointing out that example to me!

References

Christian Madsbjerg & Mikkel B. Rasmussen, The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems, 2014.

Richard Thaler & Cass Sunstein, Nudge: The Final Edition, 2021.
