Blog series: AI regulation is overlooking the need for third-party transparency in the media sector

Anna Schjøtt
AI Media Observatory
12 min read · Jul 15, 2024
Clarote & AI4Media / Better Images of AI / AI Mural / CC-BY 4.0

Artificial Intelligence (AI) has become a term that is hard to escape in today’s societies, both in public debate and in the regulatory discussion — where we look forward to the final version of the AI Act with great anticipation. In this four-part blog series, we ask where EU regulation on AI is finding and missing the mark in the media sector, drawing on recent research from the AI4Media project. Today we zoom in on how regulation can support good transparency practices for AI in media and find that there is a general focus on external (end-user) transparency, which overlooks the need for third-party transparency in the media sector.

Authors: Anna Schjøtt Hansen, University of Amsterdam, Rasa Bocyte, Netherlands Institute for Sound & Vision, Noémie Krack and Lidia Dutkiewicz, CiTiP KU Leuven. Thanks to Natali Helberger, University of Amsterdam, for reviewing and providing excellent feedback.

With the launch of ChatGPT in November 2022, AI entered the public debate in unprecedented ways, which led to renewed calls for the regulation of AI. However, regulators have for years attempted to grasp and address the emergence of AI and the European Union (EU) has taken regulatory steps to address the challenges and opportunities presented by AI — as we see today with the final version of the AI Act being published.

In this blog series, we dive into these new regulations to explore how they address the media sector — concretely, we ask where they are finding and missing the mark based on six cross-cutting policy needs identified in the Horizon 2020 project AI4Media.

The lifeblood of democracy
The media sector has historically been seen as the ‘lifeblood of democracy’ by providing citizens with the necessary information to make informed decisions and carry out their duties in democratic societies (see note 1). Due to the media’s vital role in democracy, it becomes pertinent to understand how the emerging legislation addresses the specific challenges and opportunities of the sector and supports a responsible uptake and use of AI in this crucial sector.

A four-part blog series
That is exactly what we will do in this four-part blog series, where we explore how the AI Act and the Digital Services Act (DSA) in particular are addressing six cross-cutting policy needs that are deemed central for supporting a responsible AI approach in the media sector.

In short, these needs include the need for policies:

  • that impose and support good transparency practices to ensure continued trust in the media sector and responsible use of AI technologies.
  • that support research in and of AI solutions — particularly through funding and better access to crucial data from, for example, large platforms.
  • that stimulate the responsible development of AI by creating space for this in a highly challenged industry and by creating disincentives for irresponsible practices.
  • that address and mitigate the growing AI divide, where particularly small and local news organisations have difficulties keeping up with AI developments, which could put media plurality at risk.
  • that aim to address the current power imbalances in the AI landscape, particularly by ensuring more bargaining power amongst media organisations and ensuring media independence.
  • that are more globally and societally focused and address broader risks of AI, such as labour changes, polarisation and environmental impacts.

These six cross-cutting policy needs are the result of both desk research and several workshops conducted with media stakeholders, legal and media researchers, as well as AI developers and researchers over the course of the AI4Media project. They are described in detail in the ‘Final white paper on the social, economic, and political impact of media AI technologies’ published in February 2024.

In each of the following four blog posts, we address one or two of these policy needs and map how the emerging legislation supports them — or, in some cases, leaves things unclear. This is aimed at helping media organisations understand what legislation could support them and at providing industry professionals with insights and findings they can leverage in policy discussions to point to weaknesses in the legislation that affect the media sector’s ability to act responsibly in the AI landscape.

However, the blog series does not provide an exhaustive discussion of all emerging legislation and initiatives on AI but focuses particularly on the DSA and AI Act.

Supporting good transparency practices
In this first blog post, we tackle the need for policies supporting good transparency practices. Transparency has always been a core concern in the media sector, but the rapid uptake of AI has posed new questions about how to be transparent not only towards the audience but also internally within media organisations. Concretely, we discuss here the need for policies supporting good transparency across three levels:

The first, which we call internal transparency, draws on a distinction made by Hannes Cools and Michael Koliska and describes the need to ensure that journalists and other non-technical groups inside media organisations have sufficient knowledge about the AI systems they use — whether these are built in-house or bought from third-party vendors. This could relate to good disclosure practices inside media organisations or intelligibility training.

The second, which we call external transparency, refers to transparency practices directed towards the audience to make them aware of the use of AI, such as explanations, clear disclosures, or watermarks.

The third, which we call third-party transparency, addresses the need for potential buyers of AI systems to gain insights into key information about the systems they are purchasing, such as what data they are trained on and how the datasets were annotated.

Transparency in the AI Act and DSA
Transparency is a key priority in the AI Act and the DSA. In the AI Act Recital 27 (see note 2), it is, for example, emphasised that: “Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.”

Looking across these two new regulations, we see a strong focus on external transparency to users — both end-users and professional users of, for example, social media platforms, such as newsrooms — which is very promising for ensuring that audiences become more aware of the extensive use of AI in the media landscape.

However, we find that both internal and third-party transparency remain overlooked in the legislation, which can make it difficult for media organisations to purchase and use AI systems responsibly.

Part of this is related to the structure of the AI Act, which takes a risk-based approach, meaning that the legal requirements are tailored to the intensity and scope of the risks that AI systems can generate. Concretely, there are four risk levels: Unacceptable risk, High risk, Limited risk, and Minimal risk. Most of the applications used in the media sector will fall under the category of ‘limited risk’, meaning that there are also limited obligations attached to these applications (Article 50 outlines the requirements and obligations for transparency, see note 3). In the following sections, we unfold how these AI Act provisions support the three levels of transparency in more detail, starting with external transparency.

External transparency: Audiences need to know they are interacting with an AI
One of the important aspects of the AI Act is that it requires providers of AI systems, such as OpenAI (see note 4), to make it clear to users (see note 5) that they are engaging with an AI system, such as a chatbot, and, as a rule, to mark generative AI outputs as AI-generated or manipulated, such as deep fakes (see note 6). This makes it very clear that users are meant to know when they are interacting with an AI or with AI-generated/manipulated content. However, it remains unclear what forms of transparency will be sufficient and whether they will be meaningful to the audience or risk becoming another pro-forma transparency practice, such as the ‘cookie consent form’. The Nordic AI Journalism network has provided some initial guidelines directed at media on when and how to be transparent to the audience.
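To make this more tangible, the sketch below (in Python) illustrates one way a newsroom content management system could attach a machine-readable disclosure record to a published item and derive a reader-facing label from it. This is purely illustrative: the AI Act does not prescribe any particular format, and the field names and helper functions here are our own assumptions, not anything mandated by the legislation.

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """Illustrative machine-readable disclosure attached to a published item."""
    ai_generated: bool    # was (part of) the content produced by an AI system?
    system_used: str      # e.g. the image or text generator involved
    human_reviewed: bool  # did the output pass through editorial review?

def audience_label(disclosure: AIDisclosure) -> str:
    """Derive the reader-facing label a CMS could show next to the content."""
    if not disclosure.ai_generated:
        return ""
    suffix = " and reviewed by our editorial staff" if disclosure.human_reviewed else ""
    return f"This content was produced with {disclosure.system_used}{suffix}."

# Hypothetical example: an article illustration made with an image generator
print(audience_label(AIDisclosure(True, "an AI image generator", True)))
```

Keeping such a record alongside the content would also make it easier for a media organisation to show, if asked, when and how its disclosure decisions were made.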

The AI Act also imposes transparency obligations on deployers of AI systems (see note 7). Transparency requirements apply to those who use AI systems that generate or manipulate images, audio or video (e.g., in a deep fake), which covers media organisations that use, for example, image or text generators. Importantly, there are some exceptions that are relevant to the media sector.

  • First, if the content generation or manipulation is used in an evidently artistic, creative or satirical manner, the disclosure should not stand in the way of the audience enjoying the work (see note 8).
  • Second, if the AI-generated text has undergone a process of human review or editorial control within an organisation that can take responsibility for the content (such as a publisher), disclosure is no longer necessary.

While these exemptions will make it easier for media to comply with the AI Act, they also produce three potential issues.

  • First, and more generally, these exemptions will make enforcement even more difficult, as it will be up to the regulator to assess whether AI-generated content has been subject to editorial control — a highly difficult task.
  • Second, the exemption regarding human review can lead to questions and uncertainty within media organisations and enforcement authorities about what counts as a ‘human review’ or ‘editorial control’ and who can be said to ‘hold editorial responsibility’. Previous research has shown that there is already much uncertainty about disclosure, and often a gap between transparency ideals and actual disclosure practices.
  • Third, according to a Reuters report, audiences want media organisations to be transparent and provide labels when using AI. With the already growing mistrust in media sources, it will be even more important that media organisations contribute to ensuring trust via transparency practices, which the current exemptions could, at a minimum, disincentivise.

Turning to the DSA, the text also provides some provisions around external transparency, including that platforms such as Facebook (regulated as intermediary services) must publish their terms and conditions (T&C) in easily understandable language and openly report on what content is moderated. In their T&C, they must also include a description of the tools used for content moderation, including AI systems that either automate or support content moderation practices. In addition, users must be able to report harmful content and must receive a statement clarifying why content was moderated. In case of a complaint, it is also required that a human is in the loop.

To support transparency around these systems, these platforms must also draft yearly transparency reports on content moderation, including a qualitative description of the automated means used, a specification of their precise purposes, and indicators of their accuracy and possible rate of error. In practice, this means that users are in a better position to understand why and how their content, for example, was removed, which might be important for media organisations wanting to contest potential restrictions on their content (see Articles 14–17 DSA, see note 9).
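The DSA does not mandate a specific data format for these reports, but a minimal sketch of what a single entry in such a report could capture, under our own assumptions about field names and with entirely made-up example values, might look like this:

```python
from dataclasses import dataclass

@dataclass
class AutomatedModerationEntry:
    """Illustrative entry in a yearly content-moderation transparency report."""
    tool_description: str  # qualitative description of the automated means used
    purpose: str           # the precise purpose the tool serves
    accuracy: float        # indicator of accuracy (share of correct decisions)
    error_rate: float      # possible rate of error of the automated means
    decisions: int         # number of moderation decisions the tool was involved in

# Hypothetical example values, purely for illustration
entry = AutomatedModerationEntry(
    tool_description="Classifier flagging potentially infringing images for human review",
    purpose="Detection of content violating the platform's visual-content policy",
    accuracy=0.94,
    error_rate=0.06,
    decisions=120_000,
)
print(f"{entry.purpose}: accuracy {entry.accuracy:.0%}, error rate {entry.error_rate:.0%}")
```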

The European Media Freedom Act (EMFA) similarly requires very large online platforms (VLOPs, see note 10) to annually publish information on the number of times they restricted or suspended access to the content of media service providers and on the grounds for doing so (see Art. 18 EMFA). It also requires VLOPs, before a suspension or restriction takes effect, to communicate a statement of reasons to the media service provider concerned and to give them a 24-hour window to reply.

Internal transparency: Closing the intelligibility gap
Ensuring better disclosure around AI usage might, however, not ensure meaningful transparency that allows important stakeholders — whether they are end-users or professional users — to actually understand the systems in question, which is why there is also a need to support internal transparency practices.

Research has pointed to the importance of closing the intelligibility gap around AI within a media organisation to ensure that different stakeholders, such as journalists, have the necessary understanding of how AI systems work to use these systems responsibly. Importantly, such knowledge is crucial for media professionals to be able to confidently act if there are issues with the systems.

This need is also recognised by the AI Act, which highlights how providers and deployers of AI systems, including media organisations, must ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf (see Art. 4 AI Act, see note 11). This article does state that the context of use will be taken into account, which means that AI literacy measures for AI systems used in media should consider the specific impact of AI on freedom of expression, journalistic independence and media pluralism.

However, as highlighted by Deutsche Welle in a recent article, implementing sufficient governance and supporting literacy is highly resource-intensive and takes a lot of translational work, and, so far, the regulation remains vague on how media organisations will be supported in this work. This leaves the burden on the individual organisations.

Third-party transparency: Ensuring intelligibility and bargaining power
The importance of having insights into how AI systems work leads us to the last need: more transparency from third-party providers, as many of the systems used by media organisations are not developed in-house.

In both the DSA and the AI Act, there are no provisions that make such information widely available. In the AI Act, it is only for general-purpose AI models (such as GPT-3) or high-risk AI systems that there are requirements to provide some information about the training datasets and documentation of the capabilities and limitations of the models (see Recitals 66 and 67, Article 53, and Annex XII, see note 12).

Beyond the DSA and the AI Act, the Council of Europe has recently published ‘Guidelines on the responsible implementation of artificial intelligence (AI) systems in journalism’. The Guidelines highlight how responsible procurement requires insights into the systems that are purchased — something that is currently not strongly stipulated in any regulation, which minimises the bargaining power of media. The report includes a checklist for media organisations to guide the procurement process (Annex 1), which lists several central themes and questions that could help in assessing the suitability of a particular AI provider and in scrutinising the fairness of a procurement contract with an external provider.
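As a rough illustration of how such a checklist could be made operational inside a media organisation, the sketch below encodes a handful of example questions and checks which of them a vendor has not yet answered. The questions are our own, inspired by the themes discussed in this post (training data, annotation, capabilities and limitations, contract terms), and are not a reproduction of Annex 1 of the Guidelines.

```python
# Illustrative procurement checklist; the questions are our own examples,
# not a reproduction of Annex 1 of the Council of Europe Guidelines.
PROCUREMENT_CHECKLIST = {
    "training_data": [
        "What data was the system trained on, and who owns that data?",
        "How were the datasets annotated, and by whom?",
    ],
    "capabilities_and_limitations": [
        "On which tasks has the system been evaluated, and with what results?",
        "Which known failure modes or biases has the vendor documented?",
    ],
    "contract": [
        "Can the vendor supply the documentation needed for regulatory compliance?",
        "What happens to our data and customisations if the contract ends?",
    ],
}

def open_questions(vendor_answers: dict) -> list:
    """Return every checklist question the vendor has not yet answered."""
    missing = []
    for theme, questions in PROCUREMENT_CHECKLIST.items():
        answered = vendor_answers.get(theme, {})
        missing.extend(q for q in questions if not answered.get(q))
    return missing

# With no answers received yet, every question is still open
print(len(open_questions({})))  # -> 6
```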

Where to from here?
With this discussion of the important regulatory strides that both the DSA and the AI Act set forward, we can see a strong emphasis on providing end-users with better conditions both to be aware of and to understand what kinds of systems they are interacting with. However, we also point to particular weak points in terms of supporting the policy needs that we have identified: there is still a need for more focus on giving media organisations bargaining power through transparency requirements for AI providers, and on creating more intelligibility around the systems that journalists use (whether freely available tools like ChatGPT or purchased systems).

In the following blog post, we dive into how such responsible use and development could be better stimulated, as well as the importance of ensuring vital access for journalists and researchers to investigate the potentially harmful effects of the AI systems that drive large platforms.

Read more: Blog series: More policies and initiatives need to support responsible AI practices in the media

Notes for clarification

  1. In this blog post series, we discuss the media sector from the point of view of media organisations including, for example, news providers, audiovisual archives and public service media.
  2. The main purpose of recitals in EU law is to clarify the key goal intended by the legislative act. Note that recitals to EU law are not legally binding, but can be important in interpreting the ambiguous provisions. You can find more information on recitals here.
  3. AI Act Art. 50 (7). The European Commission will encourage and facilitate the drawing up of a Code of Practice for the effective implementation of the transparency obligations laid down in Article 50 AI Act. In case self-regulation is not adequate, it may adopt an implementing act specifying common rules for Article 50’s implementation.
  4. AI Act Art. 3 (3). Provider means “a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge”.
  5. AI Act Art. 50(1). Exceptions apply such as when “this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use”.
  6. AI Act Art. 50 (2). Exceptions apply to the extent that “the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof”.
  7. Art. 3 (4). Deployer means “a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”.
  8. AI Act Art. 50(4). In the case of AI-generated or manipulated text which is published with the “purpose of informing the public on matters of public interest”, the disclosure obligation also applies.
  9. To facilitate access to content moderation data, platforms are also required to submit their Statements of Reasons (SoRs) for content moderation decisions to the DSA Transparency Database, which is freely accessible. Research has already shown disparities in moderation practices and inconsistencies in the granular information provided to users in SoRs.
  10. For the full list of designated VLOPS see here: https://digital-strategy.ec.europa.eu/en/policies/list-designated-vlops-and-vloses
  11. AI literacy is defined in Art. 3(56) AI Act as “skills, knowledge and understanding that allows providers, users and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”.
  12. For high-risk systems, the required information will only be provided, upon request, to the AI Office and the national competent authorities, not the users or the general public (for criticism see here).


Anna Schjøtt
AI Media Observatory

Technological anthropologist and PhD candidate at the University of Amsterdam, working on the politics of designing AI for the media and cultural sectors.