Since the launch of the project six weeks ago, our Journalism AI team at Polis, LSE has been talking with journalists and technologists to find out their main thoughts, concerns, and expectations about the future of AI-powered technologies in journalism.
Following informal meetings and conversations, both online and at industry events, in the last two weeks we have organised special gatherings in New York and Paris in collaboration with the Google News Initiative, which supports the project. Each meeting brought together ten experts from top international newsrooms, including The New York Times, the BBC, Quartz, Le Monde, the South China Morning Post, Spiegel Online, and the Associated Press, among others. We were also joined by Cong Yu and Moustapha Cisse of Google AI.
The goal of the two meetings was to inform the direction of our research, to make sure that we ask the right questions in the survey we are about to send to newsrooms across the world. This article explains what we have learned so far.
AI in the newsroom: it’s complicated
We wanted to have an open conversation to hear what newsrooms are doing with AI, as well as what they think are the most interesting areas for technological development. We also hoped to understand their concerns about the ethical, editorial, and financial implications of adopting AI. As one participant beautifully described it, we were interested in the state of play and the state of mind.
The openness of the conversations genuinely surprised us and made for an invaluable exchange.
First, we repeatedly heard frustration about the lack of clarity around key definitions. What does AI actually mean? Can intelligence be artificial? Should we talk instead about ‘machine intelligence’? The hype around AI is making it difficult to have real conversations, both within our newsrooms and with our audiences, as there is a fog of unrealistic expectations and misunderstanding to navigate.
We were also very interested in hearing the motivations guiding newsrooms to adopt AI. Participants confirmed that this question is at the top of their minds, as the past decades have taught news organisations the risks of uncritically adopting the shiny trend of the moment, be it immersive technologies, Snapchat, or another pivot to video. Nobody in the room questioned the disruptive potential of AI, but newsrooms are thinking about how and where it can add value: not just from a purely financial point of view, but also in terms of content creation and distribution, and in opening up new opportunities for journalists to be more creative and efficient.
What Is Our Core Business?
Exploring the present and future applications of AI is also stimulating publishers to reflect on some more fundamental questions. What is our core business? What is our journalism trying to achieve? A more holistic approach to AI seems to be needed, as this innovation is not happening in a vacuum. In other words, as we were discussing what an AI strategy should look like, we realised that before crafting one we should make sure we first have an editorial strategy in place. This is essential to approach AI with a clear vision and to make sense of which elements we should explore and which we should leave aside.
Approaching AI holistically is also necessary in order to assess the impact it will have on journalists. How can we make change more human-centred and design AI tools and strategies for adaptability and sustainability?
Linking back to the idea that AI innovation in the newsroom is not happening in a vacuum, we heard that news organisations want to find out what training journalists need. How can we enable them to be part of the process instead of passively being presented with yet more new tools and products they will have to use without understanding their functionality and potential? Lack of training is a well-known issue in newsrooms and goes way beyond AI: participants lamented how we expect our journalists to learn the craft just by doing trivial and repetitive tasks at the beginning of their careers. We rarely provide the support necessary to navigate the waves of change and innovation, and this is having negative ramifications at different levels.
It’s important to keep in mind that human-centred design does not refer to the people in the newsroom only. Our participants showed significant concerns in relation to how we can properly explain AI to the audience and how we can make sure that AI is implemented in a way that is ethical and transparent. How do we make it clear when a machine has contributed to our reporting, or even drafted an entire article? How can we illustrate the process the machine has followed and ensure that it has been designed and trained respecting the rules of journalistic integrity? We also heard that news outlets are still struggling to understand how much personalisation is too much personalisation. Can we create shared guidelines that could help us to be clear and transparent in regard to what elements have led the algorithms to present the audience with certain content in a certain way?
We were very encouraged to hear that transparency concerns are very much on the minds of all the newsrooms we have talked with. They also want to make sure that AI does not stay only in the hands of the few brands with enough resources to run experiments in-house or collaborate with external firms. How can we improve the sharing of knowledge, case studies, and lessons learned, to help smaller and local newsrooms avoid being left behind when it comes to AI innovation?
This is just a brief summary of some of the topics we discussed in Paris and New York. For the Journalism AI team — and hopefully for all participants involved — the conversations have been enlightening and extremely useful in helping us understand the key topics our research should explore. Newsrooms are asking for training and knowledge sharing. They need help in understanding where to start when crafting an AI strategy, and a roadmap for the way ahead.
The survey we will soon launch will gather insights from journalism voices across the world and will help us define what resources are needed by news organisations to navigate the AI waves. We have also heard from everyone we have spoken with so far the desire to be part of an international network that could foster collaboration and keep the conversation going. Stay tuned.
Let your voice be heard
This week, dozens of newsrooms on every continent will receive the Journalism AI survey. The questions, crafted with the generous feedback of everyone we have spoken to in recent weeks, touch upon the impact that AI technologies are having — and will increasingly have in the future — on a variety of areas and people in the industry: on the work of journalists and the tools and products they will use; on newsroom processes and workflows; on new forms of content creation and distribution; on our audiences and customers; and on all the questions that AI technologies raise around ethics, transparency, and journalistic integrity.
If your newsroom has been thinking about and/or experimenting with AI, please get in touch as we would love to include your voice in the survey. This is just the first step towards the creation of a set of resources that could help newsrooms in designing strategies to benefit from the potential of AI technologies. We also hope to build a community of peers interested in sharing best practices and lessons learned, and we want you to be a part of it.
Journalism AI is a collaboration between Polis — the journalism think-tank at the London School of Economics and Political Science — and the Google News Initiative. The results of the survey will be presented in a public report this fall. You can follow all project updates on this blog and on Twitter via the hashtag #JournalismAI.
You can contact Mattia at: M.Peretti@lse.ac.uk