Inside Implicit

The difference between true implicit and “fast explicit” market research

Olson Zaltman
Published Nov 1, 2015 · 7 min read


Olson Zaltman’s Dr. Andrew Baron recently presented at a WARC webinar entitled “True Implicit vs. Fast Explicit.” The webinar was the last in a series exploring the budding specialty of Implicit Response testing in market research.

Implicit testing is fairly easy to do; the software is readily available for download. It is also easy to do badly, and much of what firms call Implicit testing is really just Explicit testing in disguise. WARC invited Dr. Baron to help practitioners understand the dos and don’ts, the often-confusing terminology, and the questions clients need to ask before embarking on Implicit research.
Some key excerpts from Dr. Baron’s presentation can be found below. We also encourage you to listen to a recording of the full webinar, which you can download here.

What are the distinguishing characteristics of a true Implicit test?

“In the academic literature, it is stated that if it is implicit it should be really difficult to control. There is probably not a holy grail where you can say something is 100% without control, but a good implicit tool is really hard to fake. In other words, people shouldn’t be able to tell you they prefer Brand A to Brand B [if, in fact, that is not true]. The kinds of tools that are out there vary considerably as to how easy they are to control in this regard. You want to know that there are studies showing people can’t fake a preference with the tool otherwise the problems that plague focus groups will also plague this particular approach.

“The second component is awareness. If it is a true implicit tool it should be capturing something that it is difficult for us to be aware of, and even if we are made aware that we have an implicit bias, it should not influence how we would subsequently perform on that measure. You can demonstrate whether or not a measure passes these tests. In short, when capturing our gut feelings this should happen outside the influence of our awareness.”

Implicit testing comes in various forms but they are not all interchangeable

What are the common misconceptions around Implicit testing?

“One misperception is that if it is fast, it must be implicit. This is patently false. Just because it is fast doesn’t mean it is implicit.

“A second misperception is that explicit measures don’t matter. Many studies in consumer behavior show that both implicit and explicit measures can predict different behaviors. What you want is a full picture of both to get a comprehensive portrait of what a consumer might do.

“A third misperception is that cognitive neuro measures (like fMRI, EEG, and ERP), eye-tracking and physiological responses are implicit. That is false. These tools measure brain activity. Anything that we do, implicit or explicit, is going to have corresponding brain activity. So, just showing a picture of the brain lighting up in certain places or changes in heart rate or visual attention doesn’t reveal anything about whether what is being captured is implicit or not.”


Are there different forms of Implicit testing?

“There are generally two different types of response latency tests. One is an association task, like IAT. And when you look at IAT, there are different variants — the single-category IAT, the go/no-go association task, and the brief IAT. The logic for all of them is very similar and they each have strengths and weaknesses. And then you have classic priming tasks, which activate some emotion and then measure your reaction to that emotion.

“So there are multiple kinds of implicit tests that can measure functional and/or emotional attributes. There are clear guidelines for what you can measure with each (e.g., priming is well suited for measuring emotion activation but less well suited for measuring functional attributes). Implicit association measures are very flexible in that they can measure both emotions and functional associations, which is what makes them so appealing.”

What are the red flags to watch out for?

“1. Is the method proprietary or different from those used in rigorous academic studies? The gold standard is peer review, which independently vets a method for its reliability and validity. With the implicit tools in the academic literature, this has been going on for 20 years now. So when you come across a proprietary method, you really want to understand how much transparency will be offered: how do they design the study, what is the evidence that it is a valid and reliable tool, and what is the data analysis procedure and how does it compare with what statisticians say is the best way to analyze the data? And is there any evidence that it is really different from an explicit measure? A good implicit tool should generally not correlate with a traditional explicit measure. If a company offers an implicit tool that does correlate significantly with an explicit tool, be wary.

“Lots of firms are claiming to offer proprietary implicit association tools. Critically, these things are not interchangeable, and only some methods are supported by a history of studies independently vetting their validity and reliability.

“They [implicit methods] differ in the number of associations tested, the length of the study (from one minute to one hour), the types of associations tested (full sentences or single words), the sample size, and how they are combined with traditional explicit measures. For example, many studies show that there’s a science to determining how many associations you can test and how many trials you need to accurately measure an implicit thought or emotion. Make sure your tool follows these norms.

“A lot of people are claiming to be experts. We need to empower clients to understand who is an expert and who is not. PhDs are not enough. Just as you wouldn’t see a foot doctor when you need heart surgery, there are specialists in implicit cognition, and that is the kind of specialist you want to team up with — people who trained specifically in implicit cognition measures, have read the literature, have published in these areas and stay current with best practices.

A lot of people are claiming to be experts [but] PhDs are not enough…You wouldn’t see a foot doctor when you need heart surgery.

“2. What is the evidence this tool predicts behavior? Just because a method is modeled after a well-vetted tool doesn’t mean the modified tool is reliable. Even with the IAT, small deviations — just changing the number of trials from what has been established in the academic literature — can totally destroy the reliability. Also, there are specific statistical ways the data need to be analyzed in order for the analysis to be reliable. In short, even if a tool looks and feels like something that’s proven, you still want to be sure this specific version of the tool is considered reliable and valid.
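As a concrete example of the scoring point: the standard way to score an IAT in the academic literature is a D-score (Greenwald, Nosek & Banaji, 2003), the difference in mean response latency between the two pairing blocks scaled by the pooled standard deviation of the respondent's trials. Below is a stripped-down sketch with invented reaction times; the full published algorithm also includes trial-level cleaning (e.g., dropping latencies over 10,000 ms and flagging respondents with many sub-300 ms responses), which is assumed to happen upstream here.

```python
from statistics import mean, stdev

def d_score(compatible_rts, incompatible_rts):
    """Simplified IAT D-score (after Greenwald, Nosek & Banaji, 2003).

    Arguments are per-trial response latencies in milliseconds from the two
    pairing blocks. The difference in mean latency is scaled by the pooled
    standard deviation of ALL trials, not by each block's own SD.
    """
    diff = mean(incompatible_rts) - mean(compatible_rts)
    return diff / stdev(compatible_rts + incompatible_rts)

# Invented latencies: this respondent is faster when Brand A shares a
# response key with "good" words (the compatible block).
compatible = [520, 480, 510, 495, 530, 505]
incompatible = [690, 720, 655, 700, 680, 710]
print(round(d_score(compatible, incompatible), 2))  # positive → implicit preference
```

Even this toy version shows why small deviations matter: change the number of trials or swap the pooled SD for a per-block SD and the score shifts, which is why the scoring recipe is specified so precisely in the literature.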

“Perhaps most importantly, you want to look for experts in identifying the correct brand and product language. If you don’t get the language right, there is no benefit to doing the research. Implicit tests produce wildly different results depending on whether you get the language right or whether you get it wrong — even if you are measuring something like brand preference. Correct product/brand language is instrumental in measuring the right associations.

“3. Is the analytic process opaque? Every few years, the academic field updates the best way to analyze data from a particular method. You want to make sure that any method is up to date with best practices for analyzing response latency measures, including the new statistics movement in the behavioral sciences.

“You also want to make sure the tool fits with your research goal. Some tools are really good at measuring emotions but don’t work as well at measuring functional or psychosocial outcomes. Not all implicit tools are interchangeable.”

At Olson Zaltman we follow the academic gold standard for measuring consumers’ implicit associations and have been doing so for a variety of global clients for more than a decade. Common applications include measuring brand health, concept testing, communication testing, and product experience and package testing. Furthermore, we have established a database of norms to help our clients interpret the significance of the insights provided.

We also combine our implicit association tool with our patented ZMET process to ensure we get to the important why explanation and obtain accurate consumer language when assessing implicit associations. Visit us at www.olsonzaltman.com/iae to read case studies that demonstrate the effectiveness of our implicit research and to take a demo.

James Forr is a Director at Olson Zaltman

Dr. Andrew Baron is Olson Zaltman’s Consumer Neuroscience Advisor and Assistant Professor of Psychology at the University of British Columbia. He holds a PhD in Psychology from Harvard University.

