Cracking the Code: A Science Media — Research Collaboration

This article is one of a multipart series exploring the unique media practitioner-academic research collaboration of Cracking the Code: Influencing Millennial Science Engagement (CTC), a three-year Advancing Informal STEM Learning (AISL) Innovations research project funded by the National Science Foundation (NSF) that brings together KQED, a public media company serving the San Francisco Bay Area, and Texas Tech and Yale universities. KQED has the largest science reporting unit in the West, focusing on science news and features, including its YouTube series Deep Look.

The author, Scott Burg, is a Senior Research Principal with Rockman et al.

Fighting assumptions

Bringing academia and practice together to address or investigate pressing problems is nothing new. In the fields of science communication and science media, new data and technologies, combined with more data-driven decision-making, have led media organizations to seek more sophisticated ways to measure impact and to understand the shifting needs and interests of an increasingly diverse audience. While these collaborations have proliferated to some degree, the relationships can be challenging to manage, given different incentives, expectations, needs, and timelines. It does not take much to imagine that the range of conceptual and practical divisions between these two groups, between rigor and relevance, or between theory and practice could derail even the best-intentioned of collaborations.

In many research-practitioner collaborations, for instance, even basic communication problems may occur because team members use technical or scientific language that is unique to their area of expertise and therefore unfamiliar to other members. The unique languages of the disciplines can reflect deeper differences in underlying assumptions, ways of knowing, and approaches to science and societal problems. The Cracking the Code collaboration between KQED, Yale, and Texas Tech Universities was not immune to these problems.

Research concerns

At the outset of the project, both KQED science staff and the academic research team brought a set of assumptions about the other. KQED Science staff had never worked closely with academic researchers, while the academic research team had relatively little exposure to science media practitioners. Each group knew it could clearly benefit from the knowledge and expertise of the other, yet differences in work and communication styles, along with contrasts in methodology and in how each defined impact, necessitated a kind of 'shake-out' period between the two teams during the project's first few months.

Members of the research team remarked that they came into the project with twin concerns. One was that the media 'communicators' would rely too heavily on the researchers ('just tell us what the answer is'); the other was that, if challenged by research findings, the KQED team would not actually change their existing communication practices or methods. After attending the project kick-off workshop, some members of the research team felt that KQED depended too much on marketing research as opposed to focusing on how audiences actually process information.

The research team acknowledged that their way of conducting research was somewhat more informal and less time-sensitive than KQED's.

Benchmarks we're used to in academics are much more general. We are opportunistic and we do things that can be done when we can do them. There's an academic leisure, I guess; the sense of time is basically just indefinite. We know that's not the case at KQED, and have adjusted accordingly. Now we know where we are if we get behind, and we know why, and I think that's fine. — Research team

Members of the research team noted that Cracking the Code was more of a pure collaboration than other media collaborations they had worked on, where the media partner provides funding and simply wants the research team to provide answers. While the research team appreciated the unique nature of this collaboration, it did create some issues early on in adapting to the needs and demands of a nonacademic setting.

I think the approach is great, but one of the places where it really caused a lot of difficulty is the difference in working style. A lot of academics or researchers aren't involved in much applied work or program evaluation. Not only is the time thing a little bit weird, but just the idea of deliverables and having a constant schedule of things that need to be turned in, it's just not the way most academics work. — Research team

Research team members were confused by KQED's definition of engagement, which they believed was too focused on marketing and bottom-line financial concerns. Was this project really about boosting KQED's viewing numbers? They felt that for the research to be productive, KQED's definition of engagement should be more in line with social science, centered on how people process information.

Members of the research team suggested that engagement should serve as an umbrella issue across groups rather than as a specific testing focus. They considered some of the initial research questions posed by KQED's working group to be too open-ended. They also had concerns about testing-cycle timeframes and the order of specific testing experiments. One research team member commented that, during initial survey design, KQED staff could have been more explicit about key objectives. Some were also puzzled as to why KQED staff could not understand the team's survey results and reports.

Misaligned timelines were another factor that hindered the collaboration throughout the project. Deadlines for science media reporters and producers tend to be fixed and tied to external events, production schedules, and resource availability. Academic timelines tend to revolve around semesters, tenure-track progression, the academic calendar, and windows of opportunity to access funding.

Practitioner concerns

KQED staff credited the researchers with helping them overcome an initial sense of uncertainty that many of them had coming into the project. After learning that NSF would provide funding, some KQED production and news staff felt that they lacked the requisite knowledge or expertise to conduct this kind of research. They were concerned about the learning curve. The project's more academically oriented research focus made it difficult for KQED science team members to convey to staff what Cracking the Code was about, why the research was important, and how their jobs and professional practices might be affected by the research findings.

During the test planning stage, KQED staff wondered why the research team was not more involved in helping them develop the research questions that would drive the various testing cycles. ('Is it out of their scope of work?') KQED staff often had difficulty understanding written and verbal communications from the research team; some found them too academic or 'jargony.' Others felt that KQED was too passive or hesitant in raising concerns or issues with the researchers, or worried that the research protocols and methodologies recommended by the research team were too rigid. There was a sense that members of the research team did not acknowledge the science team's professional knowledge or experience when designing instrumentation.

At times it did not feel like equivalent weight was given to the real-world reasons why we (KQED) would want the questions answered. It felt that sometimes our experience and hunches were not valued. — KQED staff

Feeding some of these initial tensions between KQED and the research team were KQED's own internal uncertainties about the pace and scope of the project. KQED staff wondered: How many deliverables would they need per cycle? How many studies would they need to conduct? How long would it take to develop a particular best practice? What if they ran out of time? How many questions were enough?

The pacing of media production and the production workflow versus the research and academic workflow were very divergent. A three-month testing cycle was just not long enough, especially if each cycle you're supposed to be discussing and developing the instrument and then implementing it. That was hard. — KQED staff

KQED staff acknowledged that, even with the kickoff meeting, they should have built more lead time into the project with the researchers to get to know one another, define shared contexts and expectations, and surface any misunderstandings or assumptions about communications and workflow.

Face-to-face and Zoom meetings and conversations between the respective teams over the first few months of the project did help to alleviate some of these concerns. Over the first year of the project, KQED staff came to better understand how the researchers conducted their work, and learned more about the nuances of developing research questions, writing survey items, and testing realistic hypotheses. The researchers came away with a better appreciation of KQED's internal workflow, the value of professional expertise, production timelines, and reporting methods. Each group had its assumptions about the other challenged and then redefined for the better, which made project-related problem solving and consensus building more efficient and productive.
