Thinking Like an Experimental Psychologist in Industry

[Photo caption: This is the table whereon I wrote my dissertation; I miss this table.]

You: academically-trained experimental psychologist, hearing the siren call of Industry Research skitter into your lab through a window your PI carelessly left open while out of town.

Me: academically-trained experimental psychologist turned Industry Researcher (whatever that is, but you folks keep using that phrase). Maybe you in a couple of years.


I’ve read dozens of articles, blog posts, and tweetstorms about moving from academia to industry. There are a lot of differences in environment, in lifestyle, and in output, and I’m gonna leap right over almost all of them: I want to talk about a few concrete ways that I use psychological science thinking in corporate work. Grad students who muster the courage to cold email me about my work often ask for exactly this, so it seems these examples are hard for students to find.*

Here’s where my perspective comes from: my PhD was in Experimental Psychology. That means I’ve sat in a lot of breweries yelling about the file drawer problem, IRBs, and why machine learning people get to use those fancy words for what we thought were just statistics. Now, I lead research and explain it to people with a huge range of backgrounds, from science to software engineering to management consulting to venture capital and beyond (not as many humanities people as I’d like. Join us in tech, amazing humanities people!). Equal amounts of yelling.

I’ve chatted with many social science grad students about careers, and nearly all of them undervalue and fail to make explicit the skills that they bring to multiple industries.** Yet a good research mindset is important, even vital, to most of the work we do out here. You, hypothetical experimental psychology graduate student, do this already.

SOME EXAMPLES

What unit of analysis are we talking about?

You think about units of analysis.

My experience in developmental psychology was helpful to what I do now, because I’d already been through the painful process of assuming that “a person” was the thing we looked at, and then finding out, hah, that the “things we look at” might be stacked, complex relationships and organizations involving lots of people, and their co-relationships, and the way their behavior changes those relationships, all rolled up into some model. This is a school, for instance, or a company.

The units of analysis in industry are sometimes strange, and often implicit. I’ve been asked to evaluate the impact of a technology on learners, the effect of being a certain kind of learner on the efficacy of the technology, and whether or not there are discernible “different kinds of learners,” all at the same time. It’s like building a house for someone who is simultaneously ordering couches from West Elm and questioning the seismology of the state they’ve chosen to put the house in.

This might seem like an obvious problem, but it’s genuinely difficult to catch yourself mixing units of analysis in the real world: people come to you with research needs that blend methods, access to populations is opportunistic, and you’re worried about money and ROI in new ways. Just as when you were confronted with an immense system like a school, or rashly decided to study “disclosure” for your dissertation, you need to sequence an investigation that is clear about the level it’s operating on: are we checking the soil or designing the living room? Can’t get the furniture delivered if the address is changing to a different state.
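To make that concrete, here’s a minimal sketch, with made-up data and invented column names, of how the answer to “does more usage go with better outcomes?” can flip depending on whether the unit of analysis is the person or the team:

```python
# Toy illustration (hypothetical data): the same relationship measured
# at two units of analysis, pointing in opposite directions.
import pandas as pd

people = pd.DataFrame({
    "team":         ["A", "A", "A", "B", "B", "B"],
    "hours_of_use": [1, 2, 3, 8, 9, 10],
    "score":        [60, 55, 50, 90, 85, 80],
})

# Person level: within each team, more hours track with LOWER scores.
within = people.groupby("team").apply(
    lambda t: t["hours_of_use"].corr(t["score"])
)
print(within)   # -1.0 for both teams

# Team level: the team with more average hours has HIGHER average scores.
teams = people.groupby("team")[["hours_of_use", "score"]].mean()
print(teams["hours_of_use"].corr(teams["score"]))   # +1.0
```

Neither number is wrong; they’re answers to different questions asked at different levels. The trouble starts when a person-level finding gets quoted as if it were a company-level one, or vice versa.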

Where are the interaction effects?

You think not only about multiple variables, but also about how they relate to each other.

Psychological science training is particularly good for this. It’s become my default to spend a lot of time calling out possible hidden interaction effects in every project. Many folks who haven’t lived and breathed people data as closely as you have carry assumptions about data being static and independent: this is a manager, and they’re in a separate group from non-managers, and “being a manager” can only ever produce a single, unidirectional main effect on an outcome (say, a person’s satisfaction at work). It can be very powerful to challenge these assumptions by describing the logic of an interaction effect, and multivariate thinking in general. What if being promoted to management usually makes people happier, but actually makes geographically-distributed employees less happy?
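That hypothesis is exactly an interaction term. Here’s a minimal sketch, with fabricated data and variable names of my own invention, of how you’d let a model say “promotion helps, unless you’re remote” instead of forcing a single main effect:

```python
# Toy illustration (hypothetical data): testing an interaction between
# being a manager and being geographically distributed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "is_manager":   [0, 0, 0, 0, 1, 1, 1, 1],
    "is_remote":    [0, 0, 1, 1, 0, 0, 1, 1],
    "satisfaction": [6, 7, 6, 7, 8, 9, 4, 5],
})

# `*` expands to both main effects plus their interaction.
model = smf.ols("satisfaction ~ is_manager * is_remote", data=df).fit()
print(model.params)

# A main-effects-only model would average the +2 (co-located) and -2
# (remote) promotion effects into one misleading number; here the
# is_manager:is_remote coefficient (-4) is what carries the real story.
```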

What’s the good question behind the bad question?

If you only cultivate one skill in grad school to take with you into the corporate jungle, let it be this one. Approach every misunderstanding from a place of empathy that is so radical, you can bypass the misunderstanding entirely and replace it with a better conversation.

Wow, Cat, what is this, the Mindfulness of Industry Research?*** Stay with me.

Once upon a time I led a very large qualitative research project, as part of an investigation into what problems we were dealing with in this workplace. We conducted long hours of rich, semi-structured interviewing and pulled out themes from what people shared. On the panel of high-level folks that I was presenting to, one person raised their hand and asked, “Ok, but how many people do we interview before we reach statistical significance?”

This isn’t even the right kind of question to ask about qualitative data. Significant how? This person was looking for a p-value, because to them, “p-value” meant “evidence.” This is, of course, applying the wrong evidence framework to qualitative research (and it’s likely emblematic of a problematic fixation on p-values that’s bled from social science into industry, more generally).

I didn’t say that. I said: “I think what you’re asking about is how we decide that we have enough evidence to move forward, is that right?” I shared why we had decided to trade the generalizability of quantitative research for a rich qualitative investigation, and how we intended to verify these themes against a second, quantitative experiment. I mentioned the diversity of the group we were interviewing. I suggested that what they were really asking was what kind of decision we could make based on this data, and when we would need to wait for a larger, quantitative project. I didn’t really say anything about p-values, but they emailed me after the meeting to say that they’d never seen such a valuable research project in this area before.

The bad question is: what’s the p-value? But behind that bad question is a very good question: how do I know that I’m looking at something real? That’s an important conversation to have. It’s one that you’ve had to have in science, and it’s one that you’ll need to have in industry.

The habits of thought that you’ve been building as an experimental psychologist are critical to industry research. Keep that window open!


*I should write a post about how PhDs can cold email people like me, probably.

**As an example of why I’m writing this at all, my personal first five search results for “experimental psychologists in industry” give answers like “psychologists use scientific research to provide insights.” Wow. Super helpful.

***I should write a post about the Mindfulness of Industry Research, probably.
