Theories on Theory: Why Some of it is Bull****
The world of advertising theory and research is a mystical and enticing place. Nuggets of truth pulled from studies can have immense power; they can fuel a creative brief or help you win over a client. But we can't forget that theory isn't fact and must be used cautiously. To avoid falling down the advertising-research rabbit hole, here are a few tips.
1. Check the references
A popular statistic echoed throughout advertising blogs and forums is that people only remember about 10% of what they see. Interesting, right? Well, this "fact" is extremely misleading for two reasons. First, the original study that published this finding said that people only remember 10% of what they READ but 30% of what they SEE. That's a huge difference. Second, the study was first published in 1946. It's ancient! The moral of the story here is that before using a compelling bit of research, always consult the original source. As a rough rule of thumb, treat anything published before 2005 with extra caution; it may no longer be relevant. That said, you may find that newer papers cite the original study and build on its findings.
2. Beware of the dramatic headline
You may have heard that because of the internet and mobile devices, humans now have an attention span shorter than that of the average goldfish. A 2015 study reported that participants had an average attention span of 8.25 seconds, while that of a goldfish is 9 seconds. I have mistakenly used this statistic several times to colorfully demonstrate why messaging needs to be short and sweet. It has also appeared in The New York Times, Time magazine and all over online marketing blogs and publications. But when you dig deeper, there is not much substance to back up this "fact." The stat, originally published in a report by Microsoft, was actually pulled from an outside source that claims to have pulled it from another source, which in turn claims to have pulled it from yet another source, and so on. Long story short, there is no definitive source for this claim, nor is there a reputable study to back it up. If you want to learn more about this statistical conspiracy, check out this interesting blog post.
Maybe our attention spans really are shrinking due to our constant exposure to content. However, we probably should not make the comparison between human and goldfish attention spans until we get more data.
3. Be skeptical
Researchers are only human, and the data they dig up are not always perfect. Conducting a study with reliable results is f***ing tough. It requires a lot of time, money and a great deal of patience. When conducting my own research in college on the effects of music therapy on students with developmental disabilities, I saw firsthand how frustrating this process can be. There is endless red tape with participants and institutional review boards. My study took over six months to get approved, cutting our window for data collection in half. Consequently, the results of our year-long study were not as exciting or impactful as we had hoped. Even when a study doesn't yield optimal results, researchers sometimes rush to publish in order to stay relevant in the academic community, placate clients or defend the investment in the study. So it should come as no surprise that results are often cherry-picked to maximize their impact. This is not to say that these researchers aren't brilliant; they almost always are. But we have to be wary of studies with splashy, exciting claims and cautious about taking them at face value. A few key questions to ask yourself when evaluating an advertising study:
- Are the authors transparent about their methodology? Do they say how many subjects they used and how the data were collected?
- Is the research published in a peer-reviewed journal? Have experts supported the findings and checked for inconsistencies (e.g. Journal of Advertising Research, Psychology & Marketing, Advertising & Society Review)?
- Is the author affiliated with any institutions or corporations? Could the language or results be biased because of that affiliation?
A good example of healthy skepticism came up in a recent Mumbrella360 debate featuring Ashley Ringrose. Ringrose pointed to research conducted by Oxford BioChronometrics that his colleague had frequently used to argue that digital metrics were ineffective because of the abundance of fraudulent bot clicks. The claim was that up to 98% of clicks on Google ads come from bots. If taken at face value, this research could have an enormous effect on how we use digital. However, there are several red flags. First, the research was conducted over a seven-day period. Second, only £100 (GBP) was allocated as a budget for each platform (Google, Yahoo, LinkedIn and Facebook). Third, the study was not picked up or supported by any reputable journal or research publication. Ultimately, this tells me that the results are probably unreliable and definitely aren't representative of the scope of digital ads. By contrast, similar studies show that only about 2% of ad clicks come from bots. Moral of the story: don't ignore a study's methodology, and take results with a grain of salt.
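To see why a seven-day, £100-per-platform budget is a red flag, here's a rough back-of-the-envelope sketch. The click counts below are hypothetical, not from the Oxford BioChronometrics study; the point is just how wide the margin of error gets when the sample is small.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical numbers: suppose a £100 budget bought about 200 clicks,
# and 20 of them (10%) were flagged as bots.
small = proportion_ci(20, 200)

# The same 10% rate measured across 200,000 clicks.
large = proportion_ci(20_000, 200_000)

print(f"n=200:     {small[0]:.1%} to {small[1]:.1%}")  # roughly 5.8% to 14.2%
print(f"n=200,000: {large[0]:.1%} to {large[1]:.1%}")  # roughly 9.9% to 10.1%
```

The specific numbers don't matter; what matters is that a small, short study can't pin down a click-fraud rate with any precision, which is exactly why the study's methodology deserves scrutiny.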
4. Don’t make theory the law
While it can be perilous to oversimplify, theory should also never rule your life. Theory and research can open many doors; they can shed light on universal truths and similarities in the human experience. But they shouldn't close any doors. A recent article showed that machine learning algorithms often risk over- or under-extrapolating from the data they're given. One algorithm's answer to a simple SAT-style question was not only wrong, it was backed by an insane series of equations. That is over-complicating things at its finest.
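That over-extrapolation failure mode is easy to reproduce yourself. Here's a toy sketch (not the example from the article, just an illustration of the same idea): fit a needlessly complex polynomial to five nearly linear data points, and its prediction outside the data goes wildly wrong.

```python
import numpy as np

# Five observations of a nearly linear pattern (y ≈ 2x, plus tiny noise).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 2.1, 3.9, 6.1, 7.9])

# A sensible model: a straight line fitted by least squares.
line = np.polyfit(x, y, deg=1)

# An over-complicated model: a degree-4 polynomial that passes
# exactly through all five points, noise and all.
curve = np.polyfit(x, y, deg=4)

# Both models describe the observed points well. Now extrapolate to x = 10,
# where the underlying pattern would suggest y ≈ 20:
print(np.polyval(line, 10))   # ≈ 19.8, close to the pattern
print(np.polyval(curve, 10))  # ≈ -223.5, wildly wrong
```

The complex model "explains" the data perfectly and still gives an absurd answer the moment it's asked about anything outside what it has seen, much like a theory stretched past the context it was tested in.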

Even gravity is still considered a "theory," open to revision if sufficient contrary evidence appears. Advertising theory should be treated the same way. Humans are strange and unpredictable creatures. No single theory or algorithm can conclusively explain how all people consume media, shop for products or become loyal to brands. Use advertising theory as a thought starter and creative fuel, not as a tool to shut down divergent thinking.
For a more fun take on how we should be wary of research, watch this.

