Trust in content during the age of AI

Tom Ffiske · Published in The Edge · 3 min read · Feb 22, 2023


Few technologies have penetrated the workplace as quickly or as drastically as generative AI. Employees are dipping into new and emerging tools to cobble together quick emails or to prototype designs for their presentations. Some are even using AI prompts to brainstorm content ideas, or to pluck new metaphors to complement their written pieces. Adoption took just a few months, underpinned by new types of AI models that improve how we work and play.

The trouble is that businesses are still catching up with what their employees are already doing. Workers are speeding up their day-to-day work with new tools, while companies scramble to get to grips with the trend. The ramifications are enormous, as the concepts of ownership and authorship dissolve into a new morass of plucked ideas and content. What is unique anymore? How much AI use is allowable, if any? And should companies be moving faster?

Some publications have taken swift steps, and done well to show their actions transparently. Take Sifted as one example. The FT-owned publication published an AI Code of Conduct, which lays out how the team uses AI in its editorial process. The open declaration of its practices, from image generation to sourcing stories, is a smart and sensible way to navigate the waters. The wording and approach no doubt came from the journalists themselves, who speak to experienced VCs investing in these technologies alongside their own work as reporters.

Sifted’s AI Code of Conduct

That level of transparency is important for building trust with readers, and trust is a currency that is dwindling over time. The Reuters Institute Digital News Report documents a decline in trust in publications (after a brief spike during the pandemic). The reasons are complicated, but poorly disclosed AI-generated content is one quick way to degrade that relationship further. Fail to disclose, and the ramifications can be enormous: an investigation from The Verge found that half of CNET's AI-generated articles had errors.

But where is the line for disclaimers? Larger-scale uses are clear-cut for publications, reinforcing the transparency and trust that readers rely on. The line blurs, though, for the smaller ways AI can be used, such as brainstorming content ideas or imagery. Not every article should be peppered with a bullet-point list of every use of AI, cluttering long-form pieces. Sifted's approach of a Code of Conduct is, in our view, the right step forward.

Disclosure matters even when AI plays only a small part in the process. For example, none of the content in this article has been generated by AI. We expect more publications to do the same as we move through 2023.

Tom Ffiske is a global thought leadership strategist at Accenture’s Metaverse Continuum Business Group.


Tom Ffiske, Editor for The Edge, works in Accenture's thought leadership team within the Metaverse Continuum Business Group and runs the Immersive Wire.