Hey Editor! It’s time to get involved in AI transparency

By Agnes Stenbom, Kasper Lindskow and Olle Zachrison

Nov 29, 2022

Applied artificial intelligence has become an integral part of the day-to-day operations of many media companies, and new capabilities are emerging at stunning speed. But how informed are media consumers about what is going on? And who is accountable for the bigger picture? Looking back at discussions on AI transparency in the Nordic AI Journalism network, we see a need for increased leadership involvement.

On October 4th, 2022, the Norwegian public broadcaster NRK published a standard-looking article about energy prices. The credit for the accompanying image, however, did not belong to your average photographer. Instead, the depicted power lines and serene mountainous landscape were attributed to “Midjourney (AI)”.

The initial illustration published by NRK

Did the byline make users understand and appreciate the synthetic nature of the content? The responsible editor at NRK didn’t seem convinced, and the image was swiftly taken down and replaced by a traditional stock photo. “There should be good reasons to use AI-generated illustrations as part of our news operations”, he explained in a subsequent interview with Journalisten.

We, three members of the Nordic AI Journalism network, find that this case highlights important strategic and ethical questions for media companies going forward.

Nordic AI Journalism is an industry network consisting of about 200 individuals working for more than 30 different news media organisations in the Nordics. The network seeks to contribute to responsible use and development of AI technologies in journalism by sharing learnings across organisational boundaries. The network was founded by Agnes Stenbom (Schibsted) and Olle Zachrison (Sveriges Radio) in 2020, and since then we have met virtually for bi-monthly sessions covering various cases related to AI and journalism — from synthetic voices to entity extraction models.

This autumn, we finally got to meet in person. With local hubs in Stockholm (hosted by Sveriges Radio), Copenhagen (Ekstra Bladet), and Oslo (Schibsted) we discussed AI transparency and why we need more user-accessible information about the use of AI in journalism.

In this blogpost, we will highlight different types of transparency discussed, describe the consequences of opaque processes, and suggest a framework to chart use cases and risks going forward. Our key message: we see an urgent need for action, not least by media executives.

The Nordic AI Journalism network gathers around 200 news media professionals from across the Nordics

Why AI transparency?

First, we want to stress why we find transparency around the use of AI in news media to be such a pressing issue for journalism at this moment in time.

  1. Transparency is intrinsically important for responsible AI practices
    Transparency towards the people affected by an AI system is widely recognized as a central aspect of responsible AI practices, as it is the foundation that allows them to interact with AI systems in informed ways, as autonomous beings.
  2. Transparency to build trust
    In journalism, we don’t yet have a ‘contract’ with media consumers about how and for what to use AI — neither as an industry nor as individual media companies. We find that openness is necessary to enable discussions among news publishers and stakeholders about the dos and don’ts of AI in order to establish more mature guidelines, policies and frameworks for the news domain. Specifically, we believe that increased transparency towards users would enable the discussion needed to facilitate expectation alignment — which is essential for building trust.
  3. Transparency should be prioritised given the immature state of AI in news
    As highlighted by recent reports in and beyond the Nordics — see e.g. Wiik (2022) and Beckett (2019) — the news media industry is still very much at the start of its AI journey. A positive aspect of this is that it provides us with an opportunity to lay a solid foundation. We find that this foundation will be especially important going forward, given that transparency in automated decision making and AI is addressed in e.g. the GDPR, the Digital Services Act, and the AI Act.

Contemporary, often open-sourced, AI models are producing staggering results and are already used at scale, both by the general public and journalists. Generative AI models are, as we speak, drafting quick versions of press releases and crafting synthetic video, audio and images, as mentioned above. These exciting developments add a sense of urgency to the importance of AI transparency.

Types and levels of transparency

Given the great diversity in AI applications for news, from recommenders to internal tagging tools or generative content creation systems, we find that different types and levels of transparency are needed in different situations.

Informed by network discussions, we have distinguished between four different types and levels of transparency of relevance to news organisations:

  • Tell users that an AI system is being used (basic visibility)
    Perhaps the most basic type of transparency involves informing users when any type of AI system is being used to select or modify content they are served. This can be done in multiple ways, ranging from AI policies that describe in general terms that the news publisher uses AI systems, to specific information whenever AI is used in a particular element on a news website or app.
  • Describe how the AI system makes decisions (technical visibility)
    Another type of transparency, which builds on basic visibility, is a description of how the algorithms in use actually work. This may range from generic descriptions of the AI algorithms to detailed descriptions of each algorithm, including what it optimises for, what input data it uses, and what type of machine learning method is applied. These descriptions can be in technical terms or in layman’s terms.
  • Explain individual decisions made by the AI system
    Yet another type of transparency involves explanations of the individual decisions made by AI systems, such as the reasons why a specific set of news articles is being served to a specific reader. This ranges from post hoc explanations (where another algorithm guesses the reasons for a specific decision by examining which underlying features the decision correlates with, e.g. the reader’s gender or propensity to read sports stories) to intrinsic explanations, where the algorithm that makes the decision shows which internal factors caused it (typically more precise, but difficult for laymen to interpret).
  • Enable users to directly affect individual decisions
    The final type of transparency we discussed involves enabling readers to affect the AI algorithms they are exposed to, in order to build a deeper understanding of how those algorithms work. At one level this may involve simple opt-in/opt-out options, where users can choose between versions of a website, or of an element on a news website, with or without AI systems. At another level, users may be given “handles and levers” that influence the input to the AI algorithm, allowing them to experiment with different settings and outputs.
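To make the four levels above more tangible, here is a minimal sketch, in Python, of how a publisher could represent them as disclosure metadata attached to a content element. The class names, fields and example values are our own hypothetical illustrations, not a description of any existing system in the network:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class TransparencyLevel(Enum):
    """The four types/levels of transparency discussed above."""
    BASIC_VISIBILITY = 1       # tell users that an AI system is being used
    TECHNICAL_VISIBILITY = 2   # describe how the AI system makes decisions
    DECISION_EXPLANATION = 3   # explain individual decisions made by the system
    USER_CONTROL = 4           # enable users to directly affect decisions


@dataclass
class AIDisclosure:
    """Hypothetical disclosure metadata attached to an AI-touched content element."""
    system_name: str                            # e.g. "front-page recommender"
    level: TransparencyLevel
    user_facing_notice: str                     # short label shown next to the element
    how_it_works_url: Optional[str] = None      # plain-language description (level 2+)
    decision_explanation: Optional[str] = None  # per-item explanation (level 3+)
    user_controls_available: bool = False       # opt-out or "handles and levers" (level 4)


# Example: labelling an AI-generated illustration, cf. the NRK case above.
illustration = AIDisclosure(
    system_name="Midjourney (AI)",
    level=TransparencyLevel.BASIC_VISIBILITY,
    user_facing_notice="Illustration generated with AI (Midjourney)",
)
```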

A basic framework for charting use cases

So how do you know which kind of transparency your specific use case calls for? In the network, we discussed two different factors that should impact such assessments.

First, we must consider the editorial risks associated with the use of AI. We find it useful to make a distinction between legal risks (which cannot be negotiated) and ‘pure’ editorial risks. Sometimes they overlap, but there are certainly cases where a specific AI usage creates editorial risks without constituting a formal legal liability. A bad cluster of AI recommendations, or serious mistakes made by machine translation, can cause brand damage without any legal overstepping.

Secondly, we must consider the impact on the user experience. Is the user getting a different type of media experience because of the use of AI, or are we using the technologies in purely newsroom-facing processes, e.g. by making existing processes more efficient through automation?

In the network, we structured these two topics — editorial risks and impact on user experience — as axes to create a basic discussion framework:

Our framework for facilitating discussions on AI transparency in news

Once a given team has agreed on where its use case sits in the framework, we argue that it can have a more informed discussion about the need to inform users and/or other stakeholders.
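As a thought experiment, the framework could even be expressed as a simple decision rule. The sketch below assumes each team scores its use case from 1 to 5 on both axes; the thresholds and suggested actions are illustrative assumptions of ours, not guidance agreed in the network:

```python
def suggested_transparency(editorial_risk: int, user_impact: int) -> str:
    """Map a use case's position on the two axes (scored 1-5 by the team itself)
    to a suggested minimum level of transparency. The thresholds below are
    assumptions made for this sketch, not recommendations from the network."""
    if user_impact <= 2 and editorial_risk <= 2:
        # Newsroom-facing, low-risk tooling: internal transparency may be enough.
        return "internal documentation; mention in a general AI policy"
    if user_impact <= 2:
        # Low user impact but tangible editorial risk: keep editors in the loop.
        return "internal transparency plus explicit editor sign-off"
    if editorial_risk <= 2:
        # User-facing but low risk, e.g. a recommender over vetted content.
        return "basic visibility: tell users that an AI system is being used"
    # High user impact and high editorial risk, e.g. fully auto-generated stories.
    return "basic + technical visibility, and consider per-decision explanations"


# Example: an automatically generated story published without a human in the loop.
print(suggested_transparency(editorial_risk=4, user_impact=5))
```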

Risks come in different forms

During the Nordic AI Journalism network meetups, we discussed a number of active use cases of AI in Nordic media from a transparency perspective. Should they have been communicated to users in a clearer way? If not, why not? If yes, how?

Most of the discussed use cases focused on recommender systems, but also included tagging of video content, audio transcriptions, and automated generation of news telegrams. Use cases included both ML-based systems and rule-based systems (described together below as “AI systems”). What they all had in common, though, was that none was communicated to users as an AI system.

When discussing our use cases in relation to the above framework, it quickly became clear that we assigned different levels of editorial risk to different types of use cases. In general, we were confident in our editorial processes and decisions, and did not worry about AI e.g. recommending or translating content initially ‘vetted’ by news professionals. AI systems with higher degrees of automation, such as auto-generated news stories, were seen to involve higher editorial risks, and with them, even higher demands for transparency.

… but do users want it?

It was argued by some in the network that users rarely express any interest in transparency and that they do not want direct control over AI systems (e.g. recommenders). To paraphrase one network member: “They just express a desire for smooth, relevant experiences — they don’t care how it’s produced”.

Others in the network (including the authors of this blogpost) argued the opposite, pointing to examples of users requesting clear information about who or what institution is “behind” the AI, what data was used to train it, and why AI is being applied in the first place.

Despite the internal differences in perceptions of users’ desires, most members have argued that more transparency is important and desirable — not least in order to increase internal focus on ethical practices that withstand the light of day.

Implementation of AI across the journalistic process calls for transparency

Our network discussions have highlighted the importance of recognizing how AI applications are part of a journalistic value chain where the number of use cases and the scale of their impact are growing. Sometimes one specific AI use case can in itself be assessed as low risk, but combined with other AI applications the picture rapidly becomes more complex and the risks of opaqueness become harder to calculate. No AI use case exists in a silo.

Art by Nidia Dias for DeepMind/Unsplash.

Internal transparency as a basic requirement

Keeping this in mind, we think that internal transparency and a shared understanding of how different AI use cases interplay is a crucial starting point for developing products that promote AI transparency in journalism. We need to understand the wider picture and inform users accordingly.

In modern media organisations, we argue that newsroom leaders — not least editors and/or publishers — must have an overview of where, when and how AI is impacting the production process and user experience. How else can the public hold them accountable?

It is easy to imagine a number of problems if media leaders do not become more aware of the AI applied in their newsrooms. On the one hand, leadership that overlooks the great potential of AI can hamper innovation in the media field. On the other, failing to recognize risks can have severe negative consequences for media companies and consumers alike.

Not being able to explain the general workings of systems applied in one’s own editorial process can be damaging both to the news brand and to the personal credibility of the editor. While “human-in-the-loop” has become a popularized idea in AI, we believe we need to invest more specifically in making sure that we have “editor-in-the-loop” systems as we continue to explore and apply AI in journalism.

Telling users that AI systems are at play: a needed first step

Finally, we would like to stress again that a basic level of transparency towards readers is essential whenever AI systems directly impact the news experience without a human in the loop, e.g. news recommender systems or content created (or translated) and published automatically. Such setups increase the need for information that allows readers to understand the news they are served (and the errors AI systems might make) in context.

In this digital information landscape, we believe that the consumer has a right to be informed when AI is playing a crucial role in the media experience. Exactly how this is done must be up to each company. However, our experience from the Nordic AI Journalism network is that a peer-to-peer dialogue about AI transparency, both within your company and the wider industry, is an excellent place to start.
