Navigating the AI Tagging Dilemma on Meta Platforms

Dimitry Mak
4 min read · Jun 18, 2024


I know there have been a lot of posts about the ‘made with AI’ label on Meta platforms like Instagram, Facebook, and Threads. Meta’s current system applies the same ‘made with AI’ tag to photos edited with certain Photoshop features and to completely AI-generated images, which is causing confusion and has sparked considerable frustration among users. But it’s important to remember that this feature is barely a few months old, and improvements will come with time.
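
From what has been reported, the label seems to lean on provenance metadata that editing tools embed in the file, rather than on analyzing the pixels themselves. Here is a minimal Python sketch of that idea, purely my assumption about how such a check could work and not Meta’s actual pipeline; the two IPTC digital-source-type values are real vocabulary terms, while the GenerativeFill marker is a hypothetical editor label I added for illustration.

```python
# Minimal sketch (an assumption, not Meta's actual pipeline): scan a
# file's embedded metadata for provenance markers that editing tools
# write when generative AI touched the image.

# The first two values come from the real IPTC digital source type
# vocabulary; "GenerativeFill" is a hypothetical editor marker used
# here only for illustration.
AI_MARKERS = [
    b"compositeWithTrainedAlgorithmicMedia",  # edited with gen-AI tools
    b"trainedAlgorithmicMedia",               # fully AI-generated
    b"GenerativeFill",                        # hypothetical editor label
]

def naive_made_with_ai(path: str) -> bool:
    """Return True if any AI provenance marker appears anywhere in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_MARKERS)

# Example: print(naive_made_with_ai("vacation_photo.jpg"))
```

A rule this coarse cannot tell a one-click hydrant removal from a fully synthetic portrait; both carry an AI marker, so both get the same tag. That, presumably, is the gap Meta still has to close.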

For us photographers, AI has become an invaluable tool. It can streamline our workflow, like removing an unwanted hydrant from a photo with a single click, a task that used to take much longer with the Clone Stamp and Healing Brush tools. AI tools save time and enhance creativity, allowing us to focus more on the art and less on the minutiae.

People in the comments often argue that Meta should’ve fine-tuned this feature before releasing it. That’s a valid point, but developing and refining such technology takes time. Yes, the current system has its flaws, but the end goal is to create a safer online space where users can confidently distinguish between AI-generated and human-made content. If this eventually leads to clearer distinctions and improved transparency, a few months of adjustment are worth it.

This isn’t the first time a tech feature has faced backlash during its initial rollout. Consider Facebook’s News Feed in 2006 — users were outraged by the sudden change, but with time and adjustments, it became a core feature of the platform. Apple Maps launched in 2012 with numerous issues, yet continuous improvements have made it a reliable navigation tool today. Even Windows Vista, initially criticized for performance and compatibility problems, paved the way for the successful Windows 7 after necessary refinements. These examples show that while initial rollouts can be rocky, continuous improvement and adaptation can lead to widespread acceptance and significant benefits.

Let’s look at a real scenario that illustrates the challenges Meta faces with AI tagging. Recently, an AI-generated image of a veteran, complete with a uniform, medals, and a backdrop of an American flag, was shared on Facebook and Threads. Users flooded the post with thousands of comments wishing the veteran a happy birthday, genuinely believing they were honoring a real person. Despite the image being AI-generated, many users remained convinced it showed a real person. This example shows how deeply ingrained beliefs can be and how difficult it is to change them once they take hold.

I truly believe that even once Meta figures out how to identify fully AI-generated content, images like the AI-generated veteran will still be used for scams and other malicious money-making schemes. People simply fall for these things too easily.

This scenario highlights a critical point: Meta is not an education company. While it can provide tools and tags to help users identify AI-generated content, it cannot control how people perceive or react to that information. Human behavior is unpredictable, and people often believe what they want to believe, even in the face of clear evidence. Meta can guide and inform, but it cannot fundamentally alter human nature.

It’s unlikely that Meta, or any company, will ever achieve 100% accuracy in tagging AI content. Throughout history, humans have always found ways to fake things, and this problem is not exclusive to Meta or Adobe.

Other platforms are grappling with similar issues. YouTube, for example, recently introduced a tool in Creator Studio that requires creators to disclose when realistic content is made with altered or synthetic media, including generative AI. The goal is to give viewers more transparency about whether what they’re seeing is altered or synthetic. For most videos, a label will appear in the expanded description; for sensitive topics, a more prominent label will be displayed on the video itself. YouTube may also add a label even if the creator hasn’t disclosed anything, especially if the content could confuse or mislead people. While YouTube presents this as a commitment to responsible AI innovation, it’s important to remain critical and see how these measures are actually implemented and enforced over time.

Hopefully, in the long run, Meta’s efforts will help in identifying full-blown AI images more effectively.

People should take a moment to understand that a photo tagged as ‘made with AI’ after a simple Photoshop edit is the result of a labeling system that is still very new, with improvements on the way. Patience and understanding will go a long way as we navigate these changes.

Ultimately, the responsibility lies with the viewer. It’s up to each individual to be educated and make informed decisions. Meta can provide guidance, but it’s the viewer’s critical thinking that will truly determine the effectiveness of these measures.
