Large text “AI DISCLOSURES” in gray letters against a blue background with a repeating pattern of lighter text.

Could alt text be a useful framework for AI disclosures?

The way we approach accessibility for images could provide an effective framework for navigating the complex new terrain of AI disclosure in journalism

Joe Amditis
Center for Cooperative Media
4 min read · May 7, 2025


When we decide whether an image needs descriptive alt text, we’re essentially asking: “Does this contribute meaningful information, or is it purely decorative?” A stock landscape photo that serves as visual ambiance might receive minimal or empty alt text because it doesn’t convey essential information — it enhances atmosphere without adding substantive content.

This could also be a useful way of thinking about how we handle AI disclosures. If an AI element is just decorative — like that background landscape — maybe it doesn’t need a big flashy disclosure. It all comes down to how much that element actually contributes to what people are getting from your content.

I’m not suggesting we ditch disclosures altogether, but they could be scaled based on impact. A central AI-generated image that’s crucial to your message might need clear disclosure, just like an important chart needs thorough alt text. But that little decorative flourish? Maybe just a brief mention would suffice in both cases.

The real question isn’t just “Did AI help make this?” but “How much does this AI element shape what people understand or experience?” It’s similar to how we approach alt text by considering what unique information an image provides.

This “materiality principle” recognizes that not everything carries the same weight. In accessibility, we already understand this information hierarchy — some visuals are essential while others just make things look nice. AI-generated content works the same way.

Think about the difference between AI-generated background music in a documentary versus an AI-generated narrator. The music sets a mood, but the narrator directly shapes how viewers receive and trust the information. They probably deserve different disclosure approaches.
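To make that distinction concrete, here’s a minimal sketch in Python of how a newsroom tool might encode the materiality test. To be clear, the field names, categories, and tier labels are assumptions I’ve invented for illustration; this isn’t an existing standard, and it isn’t the logic behind the tools mentioned in this post.

```python
# A minimal sketch of the materiality test described above.
# The fields ("role", "presented_as_real") and the tier labels
# are hypothetical, invented for this example.

def disclosure_level(ai_element: dict) -> str:
    """Map an AI element's role to a disclosure tier, mirroring the
    alt-text question: essential information, or purely decorative?"""
    # A synthetic element presented as real (e.g., an AI narrator or
    # an AI-generated person) always gets the most prominent tier.
    if ai_element.get("presented_as_real"):
        return "prominent"

    role = ai_element.get("role", "supporting")
    if role == "decorative":   # background music, visual ambiance
        return "credits-only"
    if role == "supporting":   # resolution upscaling, cleanup edits
        return "brief-note"
    return "prominent"         # central images, charts, narration


# The documentary example from above:
print(disclosure_level({"role": "decorative"}))  # background music -> credits-only
print(disclosure_level({"role": "central", "presented_as_real": True}))  # narrator -> prominent
```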

Too many disclosures for minor AI elements could lead to people tuning them all out, even the important ones. Just as we’re careful not to overwhelm screen reader users with unnecessary descriptions, we could develop smarter disclosure approaches that maintain transparency without disengaging audiences.

This isn’t about dodging ethical responsibility. It’s about being thoughtful, matching the prominence of disclosures to how significant the AI contribution actually is. AI that slightly enhanced image resolution might need different treatment than a completely AI-generated person presented as real.

Left: Screenshot of an AI disclosure scenario tool. Right: Screenshot of an AI disclosure framework tool. Both tools were created by Joe Amditis with help from Gemini 2.5 Pro (experimental).

From a practical standpoint, this approach respects people’s time and attention. Yes, they deserve transparency, but they also need clean, functional experiences without constant interruptions.

We could implement this with tiered disclosure systems, perhaps using standardized symbols of varying prominence, or placing disclosures in locations that reflect the AI element’s importance. Decorative elements might be noted in credits, while central AI content might need immediate disclosure.
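To sketch what one such tiered system might look like, here’s a rough illustration in the same vein; the symbols and placements below are invented for the example rather than drawn from any published guideline.

```python
# A hypothetical tier table: prominence and placement scale with the
# AI element's material impact, just as alt text scales with an
# image's role. Symbols and placements are illustrative only.
DISCLOSURE_TIERS = {
    "credits-only": {
        "symbol": "AI*",
        "placement": "credits or footer",
    },
    "brief-note": {
        "symbol": "[AI-assisted]",
        "placement": "caption or sidebar next to the element",
    },
    "prominent": {
        "symbol": "[AI-GENERATED]",
        "placement": "inline, before the audience encounters the element",
    },
}

def render_disclosure(tier: str, note: str = "") -> str:
    """Format a disclosure line for a given tier."""
    entry = DISCLOSURE_TIERS[tier]
    label = f'{entry["symbol"]} {note}'.strip()
    return f'{label} (placement: {entry["placement"]})'

print(render_disclosure("brief-note", "Image upscaled with AI"))
```

In practice, a newsroom would swap in its own symbols and house-style placements; the point is that the tier mapping, not any particular mark, is what carries the policy.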

As AI becomes more integrated into our work, we’ll need to keep refining these approaches. What counts as “material” will evolve as people become more AI-savvy and as the technology advances.

Looking to accessibility practices for inspiration makes sense in this case, because those standards have been developed over decades by thinking about what information truly matters to different users. By applying similar thinking to AI disclosures, we can be ethically transparent while acknowledging that not all AI applications are equally significant.

📺 Here is an interactive demo that provides some examples of what this might look like.

🔨 Here is an interactive tool that helps you determine and draft an appropriate AI disclosure based on the type and material impact of the AI usage on your journalism.

Again, my goal here isn’t to prescribe one specific way or rule for what makes an appropriate AI-use disclosure. I’m only proposing a rough framework, borrowed from how we already think about alt text, that might help journalists and newsrooms approach these nuanced and complicated questions.

If this is something you or your newsroom has been talking about and you want to share your thoughts, please reach out to me or the Center at info@centerforcooperativemedia.org so we can work on this together.

Joe Amditis is the associate director of operations at the Center for Cooperative Media at Montclair State University. Contact him at amditisj@montclair.edu or on Twitter at @jsamditis.

About the Center for Cooperative Media: The Center is a primarily grant-funded program of the School of Communication and Media at Montclair State University. Its mission is to grow and strengthen local journalism and support an informed society in New Jersey and beyond. The Center is supported with funding from Montclair State University, Robert Wood Johnson Foundation, Geraldine R. Dodge Foundation, Democracy Fund, the New Jersey Civic Information Consortium, the Independence Public Media Foundation, Rita Allen Foundation, Inasmuch Foundation and John S. and James L. Knight Foundation. For more information, visit centerforcooperativemedia.org.
