MONEY FOR NOTHING
How collaboration between academia and industry can bolster the rigor, relevance, and value of research in both sectors
Only have two minutes? Here are the key takeaways.
Want the full story? Read on below.
- Initiatives introduced into the broader social psychology field in the last 5 years, in the wake of several scandals, have been beneficial. The field is now heading in the right direction, with rigor as a focal point for academic research. The next priority should be relevance — both to other academics and to industry practitioners.
- Academia is not currently asking the right questions to effectively collaborate with industry practitioners. The broader social psychology field is too focused on the existence of effects, whereas organizations are more interested in the magnitude of effects, and how these affect their business.
- From the industry side, despite substantial (and growing) worldwide spend on market research, much of the output goes unused. There are typically three reasons for this outcome: 1) a lack of stakeholder buy-in, client-side; 2) the inability of clients to implement initiatives based upon the insights provided; and 3) the research does not solve the client’s business challenge. Put simply, industry also suffers from issues of relevance and rigor.
- The first step in collaboration is building rapport and cultivating a relationship — and as with any successful relationship, it is necessary to find common ground. Mutual benefit comes from understanding and empathy which, coincidentally, are the foundations of design-driven innovation.
- Academics can bring vast knowledge, new perspectives, and scientific rigor to industry practitioners. Industry practitioners can provide a practical lens to the heavily theoretical academic sector, and have far greater access to data than academics — both of which can help academics with securing funding and publications. Collaboration between the two sectors will ultimately enhance the rigor and relevance of both academia and industry.
— — —
Operating within paradigmatic boundaries is not something I am particularly good at. My Ph.D. was completed in a business school, but my dissertation was grounded more in social psychology than it was in marketing, and borrowed quite heavily from several other fields. This breadth of scope can be attributed to my belief that innovation and progress stem from a cross-pollination of ideas and perspectives — and that one cannot “think outside the box” if constrained by artefactual disciplinary or sector boundaries.
To that end, I feel that academics and industry practitioners have a great deal to offer one another, and that by understanding each other’s respective goals and challenges, it is possible to bridge the gap between the two sectors. The aim of this post is to facilitate some of this understanding for mutual benefit; specifically, to emphasize the importance of rigor and relevance of research conducted in both sectors, thereby maximizing its value.
Stepping back in time…
In early 2014, I wrote a post about my decision to leave academia to pursue a career in industry. I had become disillusioned with the state of the broader social psychology field (including the consumer behavior discipline from which I hail), owing to questionable, and sometimes downright fraudulent, research practices, which led to numerous retractions of published articles. Additionally, the relevance of much of the published literature was poor — both substantively, and even within academia itself. Together, a lack of rigor and relevance translated into money for nothing: a proportion of the funding going towards academic research was, essentially, being wasted.
So, where are we today? Whilst questionable research practices continue to be reported upon in the media, the difference now is that these instances appear to have been brought to light as a result of the changes instituted 3–5 years ago (e.g., replication projects). I am heartened by some of the initiatives I have seen in recent times (having attempted to keep a toe in the academic sector; a foot just isn’t practical when working a demanding industry role) — such as the Tutorials in Consumer Research series, released as part of the Journal of Consumer Research; as well as the Preregistration Challenge, which is conducted by the Center for Open Science.
I believe the broader social psychology field is heading in the right direction, with a strong emphasis now on the rigor of the research that is being produced, and I am excited at its prospects for the future. However, the positive effects of implemented change will take time to fully emerge. For example, in 2012, the Association for Consumer Research (ACR) board approved the establishment of an ACR journal that would focus more on substantively relevant issues — but it took until January 2016 for the first issue to be published.
Relevance: The “so what?” factor in academia
Despite this positive progress with respect to rigor, there remains a question of relevance — and whether initiatives such as the ACR journal will adequately address this. In his 2012 ACR Presidential Address, Professor Jeffrey Inman spoke about “the increasing need for useful insights”, and noted that a “quick scan of the implications section of a lot of consumer behavior articles reveals that the implications offered there are often rather far removed from the findings.”
Professor Inman went on to describe how the relative attendance of academics, government officials, and industry practitioners at ACR conferences had “grown quite lopsided”, with academics comprising over 99% of those in attendance at the 2012 conference in Vancouver. I doubt this attendance has balanced itself out since then. Case in point: I had the pleasure of attending a boutique Society for Consumer Psychology (SCP) conference at Columbia University last month, as the sole attendee not currently employed in academia — and I suspect it will be a similar story again in October, when I travel to San Diego to attend my first ACR conference since 2014.
Yet, attending the SCP Boutique Conference was tremendously insightful — and not just in terms of the high quality and incredibly engaging research that was presented on the day, regarding emotions and motivations, and the roles they play in consumer behavior. For me, it was fascinating to step back into the academic world with a completely new perspective, which has been shaped by solving client challenges for the past few years. What struck me, specifically, was how much my approach to addressing research questions has evolved, and how differently I interpreted the findings presented, relative to others in attendance (i.e., putting aside my own intellectual curiosity, would any of my clients want to know about these effects?).
I’ve heard it said that social psychologists care mostly about the existence of an effect, whereas economists are concerned more with the magnitude of an effect. I can understand this distinction; the rationale being that, to build knowledge, it is first necessary to lay a foundation of understanding which effects exist, and to identify the factors that drive or mediate those effects, those that moderate them, and those that attenuate and/or reverse them.
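The existence-versus-magnitude distinction can be made concrete with a toy simulation (all numbers below are hypothetical, chosen purely for illustration): with a large enough sample, even a trivially small effect becomes statistically “significant”, which answers the existence question while telling us almost nothing useful about how much the effect matters.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(42)

# Hypothetical manipulation that shifts behavior by a negligible amount
# (a true effect of d ≈ 0.05 standard deviations).
n = 20_000  # large samples make even tiny effects "significant"
control = [random.gauss(0.0, 1.0) for _ in range(n)]
treated = [random.gauss(0.05, 1.0) for _ in range(n)]

diff = mean(treated) - mean(control)
pooled_sd = ((stdev(control) ** 2 + stdev(treated) ** 2) / 2) ** 0.5

cohens_d = diff / pooled_sd                  # magnitude of the effect
z = diff / (pooled_sd * (2 / n) ** 0.5)      # two-sample z statistic
p = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value

print(f"p = {p:.4f}")   # existence: almost certainly p < .05 at this n
print(f"d = {cohens_d:.3f}")  # magnitude: negligible by Cohen's benchmarks
```

The effect “exists” by the conventional significance criterion, yet its size is far too small to warrant, say, a change in a client’s marketing strategy — which is precisely the question practitioners care about.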
The economics field is older than the broader social psychology field, and already has a strong foundation from which to develop further; hence, by this rationale, it is free to address the question of effect size magnitudes. This existence-first approach to building a discipline, however, is misguided: it stalls the progression of academia, and is the cause of the enduring lack of relevance that we see in the broader social psychology field today, both within academia and from a practitioner perspective.
And therein lies the problem: out of the multitude of projects I have worked on since moving to industry (and bearing in mind that I, personally, am not involved in qualitative research), I cannot recall a single one in which a client did not ask for an effect or a relationship to be quantified. Indeed, understanding the magnitude of effects is an essential step in providing clients with a hierarchy of strategic priorities, and narrowing their focus to proactively solve the unique business challenge for which they have requested assistance. Put simply, academia is not currently addressing the right questions to effectively collaborate with industry practitioners.
The other side of the equation: Rigor and relevance in industry
If you’re an industry practitioner reading this (particularly in market research), you might be thinking that the grass isn’t any greener on this side of the fence. Based upon ESOMAR data, global spend on market research grew almost 7% (accounting for inflation) between 2009 and 2015, with the spend for 2015 totaling $44.35 billion USD — but those in the industry know all too well that a sizable proportion of that spend goes towards research that is uninformative, or even unused. Typically, there are three reasons for this outcome.
First, there is no buy-in and, as such, research is seen as merely a box to be ticked. Whether it’s managers who rely upon their own experience and intuition, trusting their gut above all else; or a case of poor stakeholder management within a client’s business, such that no-one with decision-making responsibilities is an advocate for what the research is addressing — the result is the same: the research may get commissioned and completed, but the insights and recommendations go unused. Although this is a client-side challenge, research firms can help to mitigate the issue by clearly communicating both the objectives and benefits of the research (i.e., its relevance), thereby rallying stakeholders within the client’s organization to become advocates for change.
Second, there is an issue in applying the findings from the research. When research is sold as a product, rather than as a means to address a business challenge, or when the research firm lacks strong advisory capabilities, the recommendations will be poorly applied, or will go unheeded altogether. Currently, the industry appears to be self-correcting here, as evidenced by larger, strategic advisory firms acquiring smaller research agencies en masse, both in the US and around the world — and, with them, the knowledge and talent that goes into the bespoke solutions that boutique suppliers can design and offer.
Finally, the research simply does not solve the business challenge. There may be numerous antecedents to this outcome, but the most pertinent is that of poor design (i.e., a lack of rigor) — and this is the area where academic input is most applicable and valuable, because basic principles dictate that before we can accurately measure something, we must first understand it. Coincidentally, this understanding also helps to solve for the other two reasons, above, by facilitating clearer communication of the relevance of the research, including its aims, objectives, benefits, insights, and recommendations.
What can industry offer academia?
The volume of data that organizations generate and have access to is astounding, and has created an obsession with “big data” in industry. But data without design is worthless; you can have the most sophisticated analytics capabilities in the world, but these will ultimately be useless if you cannot translate the outputs into an outcome that 1) stakeholders care about and will buy into; 2) can be implemented to drive change; and 3) will solve a specific and important business challenge.
The most effective initiatives often come from the marriage of behavioral science and data science; those firms that can operate at the nexus of these two disciplines will, in my view, be the most successful. We have seen this already, particularly with the giants of the tech industry, such as Facebook and Google — as well as with the acquisition of research agencies by management consulting firms seeking to bolster these capabilities, as above.
And with the increasing ease of access to data, as well as the greater amounts of data being generated, comes an opportunity for academic expertise to be applied. For example, industry practitioners can collaborate with academics on individual projects, to enhance rigor (e.g., with input from academics regarding the research design and/or analytics). Conversely, these academics could use some of that industry data to generate and/or explore hypotheses as part of their research endeavors (i.e., enhancing the relevance of their own work).
What can academia offer industry?
The unfortunate reality of the consulting industry is that the popularization of fields like behavioral economics (including the plethora of books published on the subject) has resulted in simplified principles, derived from published “effects papers”, that laypeople mistakenly believe apply universally. In turn, it can be difficult for organizations in the market for research solutions to discern genuine expertise from a good sales pitch.
To communicate effects such as “choice overload” in a popular-science setting, it is necessary to use relatable anecdotes (e.g., “how difficult is it to choose what to eat at a restaurant when you are faced with ten pages of options?”) — but such generalized examples neglect the nuances of these effects. In fact, the nuances are just as important as the core effect; for instance, if a business owner concluded that choice overload meant they should reduce the number of options they make available to consumers, they may inadvertently make their offerings less attractive, not more.
How can this be the case? Contrary to what a “generalized principle” of choice overload suggests, consumers prefer larger assortments (e.g., because of greater freedom of choice), as these generate positive affect; however, when it comes to actually making a choice, consumers’ rational cognitions take over, and choosing from a large assortment set can become more difficult (e.g., depending upon whether the category is hedonic or utilitarian).
Put simply, it is an understanding of these nuances, and their importance, that can separate a rigorous research firm from one that is not — and this understanding can come from collaboration with those whose work is the generation of knowledge: academics. In other words, academia can assist industry by providing consulting services to practitioners, and training the talent that goes to work in the sector (i.e., educating practitioners about these nuances), hence bolstering the rigor of industry-oriented research.
Understanding rigor: The important role of context
Academics know all too well that the best research is conducted neither quickly nor cheaply. But in industry, time and money are not luxuries that most organizations have; in today’s dynamic business environment, organizations must be agile, and research budgets are often slim. These circumstances have paved the way for some firms to produce poorly designed and executed research, which can be turned around fast and for low cost, but which is lacking in analytical rigor, thereby leading to the problems outlined above.
Many of the scandals seen in academia have emerged from the misuse of statistics; however, as with behavioral effects, understanding the nuances of analytical techniques and statistics can be very powerful — if used correctly. For instance, whilst the conventional alpha level for statistical significance (i.e., p < .05) is typically not relaxed in academia, it sometimes makes sense to relax this threshold in industry.
Take the case of a brand tracker: Is it riskier for an organization a) to institute an operational change based upon a statistically significant movement in tracked metrics that has a higher probability of being a false-positive (i.e., by relaxing the alpha level to define statistical significance as p < .10)? Or b) to keep the conventional alpha level and infer no meaningful movement in metrics (e.g., a statistically non-significant dip), and hence make no operational changes?
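The trade-off can be sketched as a simple expected-cost calculation. Every figure below — the prior probability that a dip is real, the two costs, and the assumed statistical power at each alpha level — is a hypothetical placeholder for illustration, not a recommendation; an organization would substitute its own estimates.

```python
# Toy expected-cost comparison for the brand-tracker dilemma.
# All numbers are hypothetical; plug in your organization's own estimates.

p_real_dip = 0.30          # prior belief that a tracked dip reflects a real change
cost_missed_change = 500.0 # cost (in $k) of reacting late to a real decline
cost_false_alarm = 100.0   # cost (in $k) of an unnecessary operational change

def expected_cost(power: float, false_positive_rate: float) -> float:
    """Expected cost of a decision rule: miss real dips with probability
    (1 - power); trigger unnecessary changes on noise with probability
    false_positive_rate."""
    miss = p_real_dip * (1 - power) * cost_missed_change
    false_alarm = (1 - p_real_dip) * false_positive_rate * cost_false_alarm
    return miss + false_alarm

# Relaxing alpha from .05 to .10 doubles the false-positive rate, but also
# raises power (assumed here to rise from 0.50 to 0.65 — an illustrative guess).
strict = expected_cost(power=0.50, false_positive_rate=0.05)
relaxed = expected_cost(power=0.65, false_positive_rate=0.10)

print(f"alpha = .05: expected cost ≈ ${strict:.1f}k")
print(f"alpha = .10: expected cost ≈ ${relaxed:.1f}k")
```

Under these particular (made-up) inputs, the relaxed threshold carries the lower expected cost, because missing a real decline is assumed to be far more expensive than an unnecessary change; flip the cost ratio, and the conventional threshold wins.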
Obviously, the answer is dependent upon the cost to the organization; specifically, that of making a potentially unnecessary change, versus that of not making an early change. In other words, the context is crucial. The key point, however, is that understanding which rules can be bent and, more importantly, when and how to bend them, can save clients a lot of time and money, without compromising the quality of the research. Achieving this understanding requires insights from academia, to determine the implications of, and considerations for, relaxing statistical constraints.
Identifying a middle ground: Collaboration between academia and industry to bolster rigor and relevance for research in both sectors
In both academia and industry, research that lacks rigor and/or relevance equates to a waste of both time and money — the extent of which is dependent upon how invested stakeholders are in its outcomes. The purpose of research in academia is to contribute to knowledge, answering questions that matter to the discipline; in industry, it is to solve a business challenge. At the end of the day, though, the research process is the same in both sectors: objectives are identified, studies are designed and executed, and insights are derived. Put simply, research is an ongoing process of problem identification and resolution.
The best ideas and outcomes when problem solving will often be generated as a result of collaboration — and you need look no further than everyone’s favorite behavioral science duo for a case in point here. Consequently, it is collaboration which, in my view, is the optimal way forward — because it will yield positive outcomes for both industry and academia, including helping to ensure that neither sector continues to fund research that is not useful. And, at the end of the day, no-one wants to spend money for nothing, particularly when it comes to research.
— — —
Dr Ben Kozary is a behavioral scientist and research design specialist living and working in New York. He holds a Ph.D. in Consumer Behavior from the University of Newcastle, Australia.
Enjoy what you read? Share, like, and comment, to continue the conversation — and feel free to reach out: email@example.com
— — —
Originally published at https://www.linkedin.com on July 5, 2017.