I Don’t Understand Science

Part III: Science Communication - A Paradox at The 68th Lindau Nobel Laureate Meeting

Michael Kisselgof
IKU Network
8 min read · Jul 24, 2018


Digital platforms dictate how societies consume information. Free or restricted, through tweets, pictures, headlines, and short sound bites. Rapid fire and in short bursts. Search histories feed into A.I. algorithms built specifically to fetch results for predetermined queries, while we’re simultaneously bombarded by tailored ads trying to convince us to buy some redundant, useless shit. Consumerism and technology have conveniently crammed how we humans process data and information into an immediate, brainless act. We used to seek out information and now it’s just…there.

And the repercussions for science? Vast.

Now more than ever, attention needs to be devoted to science communication. The question of which methods work best for explaining complex economic, social, political, and medical problems grows more important by the day. Science communication, a field in itself, needs serious improvement, as evidenced by the significant backlash against basic science in advanced societies.

It comes down to the source: scientists themselves recognize the necessity of communicating their research, and its implications for mankind, in a coherent manner.

Here’s a perspective on science communication from Patrick Malone (@patricksmalone), an MD/PhD candidate in computational neuroscience at Georgetown University:

I personally think that scientists have a duty to communicate to the greater public what they’re doing … knowing your audience and presenting your work in a way that the audience can understand. It’s one area where scientists can improve … It’s really difficult to take a complicated thing and make it simple and easy to understand. The big, big thing to focus on in the future is that a lot of science, at least in the States, is funded by taxpayer dollars, so you really do need taxpayers understanding your work. It’s your job to explain why that money is well spent. What you’re discovering with it…

Computational neuroscientist Louis-David Lord, a PhD candidate at the University of Oxford, shares his opinion on the subject:

You can be a great scientist and be very successful at getting grants, do very well in the game of science, and actually be a very, very poor communicator to the general public. I do hope this is a skill that becomes more and more valued in the future with the assessment of prospective scientists, and even senior scientists, because I think what we’re learning today is that there are very deep and potentially very negative implications to not sufficiently engaging the public on what scientists actually do and say.

The young scientists, as well as Nobel Laureates such as Peter Doherty, shared the same concern: it really is on the scientists to communicate the implications of their work in “human” fashion. Because let’s face it, a significant majority of the world’s population is not equipped with the knowledge to break down and understand high-level science. Sorry if we’re hurting anyone’s feelings :)

Impact Factors

One controversial metric used in the science community to assess the quality of a publication, and a hot topic at the Nobel meeting, is the Impact Factor. The IF measures how frequently the average article in a journal has been cited in a particular year: citations received in that year to the journal’s recent articles, divided by the number of articles it published. It is used to measure the importance or rank of a journal by the attention its articles receive. But can the IF really be used to measure the scientific integrity of a publication?
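For concreteness, the standard two-year IF boils down to a single division. Here’s a minimal sketch in Python; the journal and its numbers are hypothetical, purely for illustration:

```python
# Two-year impact factor, as commonly defined:
# citations received in year Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.

def impact_factor(citations: int, citable_items: int) -> float:
    return citations / citable_items

# Hypothetical journal: 180 citable articles in 2016-2017,
# cited 7,200 times during 2018 -> IF of 40.0
print(impact_factor(7_200, 180))
```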

Immanuel Elbau, an MD from the Max Planck Institute of Psychiatry, explains how important the IF is to a young scientist:

The impact factor is the main currency by which our work is evaluated. So as a scientist, publishing in high-impact journals is almost pivotal to advancing your career. If you do four or five years of work, the best thing to get out of it is to publish in a high-impact journal like Science, or Nature Neuroscience in my field. I guess it’s reasonable to a certain degree because it does reflect the impact of your work and how much it is read. But there are multiple problems with it.

The more you’re published and cited, the better your chances of finding funds for your research. A young scientist is of course incentivized to pursue as high an IF as possible, and within those publications, communication to a broader audience doesn’t necessarily need to be digestible. This is not to say the quality of the results is hurt, as Nature, Springer Nature, etc. are not in the business of publishing amateur articles. But they most certainly are in the business of publishing splashy and sexy material, like any publication.

Amy Shepherd, a PhD candidate at the Florey Institute of Neuroscience and Mental Health at the University of Melbourne, shares her opinion on the situation:

The journals are businesses, and what they want is novel and exciting research that is going to be cited a lot and talked about a lot in the media. Which means the incentive of the scientist doesn’t necessarily have to do with good scientific method, but with getting exciting scientific results. And while they are related, exciting results often mean you’ve figured out a hard problem, but they’re not necessarily the same thing. And there is also the problem that if you find something exciting, the incentive is not to drill down into it and make sure it’s real; you want to get it out there in some exciting journal, because for young scientists, it’s all about publications, and publications in high-impact journals are equated with better reputation, better science. Those high-impact-factor journals are more likely to get you grants, fellowships, jobs, whatever…

Amy was brought on to the Publish or Perish panel with two Nobel Laureates, Randy Schekman and Harold Varmus, and Springer Nature CEO Daniel Ropers (@DanielRopers) to discuss the current state of affairs in scientific publishing. The IF was at the heart of the discussion, with certain heated moments between Ropers and the Laureates as they spoke on a publication’s responsibility to the science. Amy elaborates on the subject:

I agree with Randy Schekman: the impact factor needs to be retired… Unfortunately there isn’t anything easy to replace it with, and all the things that make it easier to assess the science are much more long-winded. If you have someone write a paragraph that explains their scientific research from the past 25 years and how it impacts the field, it’s fine if you’re reading 20 [applications]. It’s not fine if you’re reading 8,000. When you have 8,000 applications for a grant or some famous fellowship, people are always going to use some kind of way to cut it off. Impact factor is one way, but there are other things people can use: number of journals, number of citations… people are always going to fall back on some kind of metric. So we as a scientific community need to decide on what kind of metric to go with, because we are going to have to go with one. How else are you going to narrow down 8,000 applications to 10?

We need to figure out a way to judge the science in a fair way, where you judge the science on the methods and the rigor the people went through rather than on the results themselves. How you do that I’m not 100% sure, but it’s something we need to strive for.

Science aside, communication channels have become distorted and convoluted in the digital age. It’s not more information that we need; it’s better communication of the information. Effective and tactical communication may not settle debates, but it will advance them. Metrics used to assess the performance of content can also lead a publication astray from its original intent. Immanuel continues with his gripes about the IF:

[One,] it really creates a strong publication pressure, which takes the focus away from doing decent and rigorous work and puts it on trying to assemble a set of data that is very sexy and seems to be very novel. [And two,] I think that ties into one of the biggest problems in our field at the moment that hasn’t been discussed much, and that’s the reproducibility crisis. It’s known that maybe the majority of the research, especially in the field of cognitive neuroscience, is just not reproducible. Things are being published that are not true. In the rarest cases this is fraud… it’s almost a statistical problem: in order to control for false positives, you need to have large sample sizes, and you need to be very rigorous about your statistics. But if you are doing that, then you have a much smaller chance of reporting something novel. So it’s a very complex problem. The challenge would be to shift towards a currency that takes into account these important factors, like how reproducible your work is, how rigorous your work is.
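
Immanuel’s statistical point deserves unpacking. The simulation below is a minimal sketch of the dynamic he describes, using hypothetical numbers (a small true effect, 15 subjects per group, and the usual p < 0.05 bar): underpowered studies rarely reach significance, and the ones that do systematically exaggerate the true effect, which is exactly the recipe for findings that fail to reproduce.

```python
# Sketch of how small samples feed the reproducibility crisis.
# All parameters are hypothetical, chosen only for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2   # small real difference between groups
n = 15              # subjects per group (underpowered)
trials = 10_000     # simulated studies

hits, published_effects = 0, []
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:  # only "significant" results get written up
        hits += 1
        published_effects.append(treatment.mean() - control.mean())

print(f"studies reaching significance: {hits / trials:.0%}")
print(f"true effect: {true_effect}, "
      f"mean published effect: {np.mean(published_effects):.2f}")
```

Run it and only a small fraction of the simulated studies “work”, while the average published effect comes out several times larger than the true one, so a replication attempt sized to the published literature will usually come up empty.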

Not only does the science community struggle with effective science communication, but also with suboptimal metrics for assessing scholarly papers. With all of the customizable performance-analysis tools at our disposal, and A.I./ML accessible in parallel, we still haven’t determined appropriate metrics for assessing the rigor of research and its socio-economic impact. Or at least none that are widely adopted. The question is: how can we apply traditional scholarly review to an increasingly competitive and voluminous science landscape?

Maybe with something like bioRxiv… you can upload [your paper] before it’s published, and people can comment and review it, and maybe there is a way in the future where the scientific community can develop some kind of score for how strong they think the scientific integrity and rigor is. So maybe that’s how we outsource it; everybody gets an equal say on reviewing. The digital age has changed scientific publishing a lot. It has led to the reliance on these impact factors, which has been a negative effect, but that doesn’t mean the technology can’t pull us back in the right direction.

Maybe immutable, permissionless systems with built-in meritocratic voting mechanisms can help. Maybe with global computers like Ethereum… and then there are scientific prediction markets… but we will save that for a future post :)

What can we do?

Science communication and assessment are complicated tasks. And even the brightest minds in the world have yet to solve the two dilemmas. But at least they are acknowledged, and there are most certainly solutions being developed to address both.

In the meantime, what can be done? Devote more attention and capital to the science of science communication. Governments and private foundations should approach science communication with the same rigor as other complex scientific problems. Graduate programs in medicine and the sciences should give students more instruction on how to communicate their hard-earned scientific insights. And the academic community at large should consider innovative ways of communicating research of public importance, and perhaps develop stronger partnerships with news organizations. Every young scientist should be on Twitter, engaging the community. If the goal is real-world impact, we can no longer ignore the tremendous potential that lies in leveraging strong media voices to promote or counter claims backed by the weight of scientific evidence.
