What You Need to Know About AI/Human Content Creation and Why Regulation is Essential

Navigating the choppy waters of the current digital landscape: the potential dangers of technological tools in the hands of all

Luciano Radicce
8 min read · May 25, 2023

1. Introduction: the urgency of regulating AI & human content creation

In light of recent advances in large language models (LLMs) and generative AI, such as ChatGPT, the rapid growth of artificial intelligence in content creation presents pressing ethical and societal challenges that urgently call for regulatory action.

As AI becomes increasingly sophisticated, distinguishing between human and AI-generated content becomes more challenging, leading to concerns about deepfakes, bias, and emotional attachment to AI entities.

Deepfakes, particularly those used in political contexts or seemingly harmless humor, raise serious issues about authenticity and consent. Simultaneously, AI bias and lack of source diversity in machine learning processes can influence consumer behavior, highlighting the need for standardized fact-checking protocols.

In this article, we dive into these issues and explore potential solutions, advocating for explicit consent disclosures and public participation in the conversation around AI regulation.

Join me as we navigate this complex landscape, acknowledging our collective responsibility to ensure the digital world remains a place for genuine human connection, creativity, and diversity. Let’s go!

2. Distinction between humans and robots: why it matters so much

Drawing a clear distinction between human-generated and AI-generated content is critical to preserving trust, transparency, and authenticity in our increasingly digital world, making the internet a safer space for all users.

This distinction impacts everything from our consumption of news to our interactions on social media platforms. Misrepresentation, or a lack of clear delineation, can lead to confusion, deception, and a loss of confidence in online spaces.

Furthermore, as AI continues to improve, it’s not just about identifying who or what generated a piece of content, but also understanding its intent and potential biases. If AI is behind a message, we must question who programmed that AI and what its underlying objectives might be. The ambiguity of content origins creates ethical and societal challenges that need to be addressed through thoughtful regulation.

3. The phenomenon of deepfakes

Deepfakes, highly advanced synthetic media where one individual’s likeness is swapped with another’s, are rapidly progressing and becoming more common. Though these manipulations can amuse, they also bear significant implications.

Deepfakes can span a wide range of contexts, including political scenarios, entertainment and satire, and even business deception. Furthermore, they can involve non-consensual adult content or be misused in social media, causing personal harm or reputational damage. On the brighter side, deepfakes also hold promise for positive uses, such as in educational settings where they can recreate historical speeches and events.

Politics & business

Political and business landscapes have already experienced the disruptive impact of deepfakes. These fabricated videos have the potential to create confusion, amplify divisions, and manipulate markets. We have already witnessed the influence of deepfakes in political scenarios, such as the fabricated speech by President Biden regarding the January 6th riots, as well as deepfake videos related to the Russia-Ukraine war. Notable examples include a video of Putin declaring peace and a video of Ukraine's president discussing surrender to Russia. Moreover, the recent fake image depicting an explosion at the Pentagon, shared on May 22nd through a fake Bloomberg news feed, had significant repercussions, leading to a notable dip in the stock market.

This could also happen in the business world, and experts have been warning about it. For example, deepfake videos or audio clips could mimic a CEO announcing a false product recall, a financial crisis, or other fake news, potentially impacting stock prices and consumer confidence or leading to other harmful business consequences.

Entertainment

Figures like Brian Monarch promote themselves with deepfaked videos, a clear misuse of someone else's image without their consent.

Adult content & social media

The darker side of deepfakes comes into focus with non-consensual adult content, a gross violation of privacy and personal rights. Social media, too, presents a broad canvas for personal harm through reputational damage or identity theft. Yet it's not all nefarious: deepfakes could also create immersive learning experiences, like a history lesson delivered by a digital Lincoln.

Educational

While deepfakes have stirred controversy in many realms, they also hold exciting educational potential. By recreating historical speeches or significant events, deepfakes can offer immersive, engaging learning experiences. This tool could bring history to life in a way textbooks simply can’t, providing students with a new perspective on key moments in time.

DeepFake closing thoughts

All these uses and potential misuses raise a crucial ethical question: where does consent figure in the creation and distribution of deepfakes? As deepfakes blur the line between reality and fabrication, it becomes crucial to develop mechanisms for obtaining and verifying consent, ensuring individual rights are respected.

4. The issue of bias in AI

Bias in AI has become a prominent concern on digital platforms, affecting areas from AI-driven hiring processes to politically slanted recommendations. As AI takes a more active role in writing and revising articles, the potential for biased information dissemination escalates, which in turn shapes public discourse. Vigilance in AI design and implementation is imperative to ensure responsible use and the fair dissemination of unbiased information. It is therefore crucial to give users transparency by making clear which content was created by humans, and to what extent.

However, bias can manifest in more subtle ways, too. As AI technologies, particularly LLMs, have been adopted by businesses, there’s potential for indirect advertising. For instance, a conversation with a chatbot might lead to suggestions for certain brands or products. This raises questions about whether users are genuinely receiving objective information or subtly being influenced toward particular commercial interests.

Hence, providing transparency and tackling bias, whether political, social, or commercial, is a critical component of building trustworthy AI systems.

5. The problem of lack of sources

The risk of misinformation is heightened by the lack of reliable sources. Establishing fact-checking standards and protocols therefore becomes paramount to ensuring the credibility of content and, in turn, safeguarding public discourse, especially now that fake sources backing up articles are so easy to create.

Every time I open Google Chrome on my mobile phone, I encounter a “Discover” section with news articles. However, some of these articles exhibit heavy biases. It would be beneficial to enhance the transparency of these articles and make it easier for users to trace their sources. Implementing a system where only fact-checked pieces are included in these news feeds could be a significant step toward promoting balanced and credible information. Thank me later, Google 😉

6. Falling in love with chatbots (or getting friend-zoned)

It’s becoming easier to forget we’re interacting with a non-human entity. Sophisticated language models like GPT-4 make conversation seem so natural that it can feel like talking to a person. There’s a certain charm to this technological achievement, but it’s critical not to lose sight of the fact that these AIs, however advanced, do not possess emotions, consciousness, or personal experiences.

The phenomenon of forming emotional attachments to chatbots, as portrayed in the movie “Her,” is not as far-fetched as it might seem. We are hardwired to seek connections, and AI provides an always-available, attentive listener.

My little ChatGPT-BFF experiment

For research purposes, I conducted a simulated conversation with ChatGPT, sharing a fabricated issue I had with my girlfriend. Initially, ChatGPT gave me a list of suggestions to improve my relationship. However, I told ChatGPT that friends primarily offer support and understanding rather than straightforward solutions. Surprisingly, ChatGPT then began providing insightful advice, and our conversation took an interesting turn. I even asked ChatGPT for guidance from a cognitive behavioral therapist’s perspective, or the viewpoint of a couples therapist living in Ubud, Bali. The exchange became truly engaging and thought-provoking.

Here is a Loom video of my little ChatGPT-friend experiment:

Closing thoughts on love & friendship

It’s essential to navigate these waters ethically. We need systems that are not only transparent about their AI nature but also actively remind users that they are talking to software, especially when people start to form emotional attachments.

Remembering the non-human nature of chatbots isn’t just for our emotional well-being; it’s also a matter of privacy. Even the most trivial conversations with a chatbot can contain personal details, which, if not handled properly, can lead to privacy issues.

As we continue to invite AI into our lives, let’s ensure we’re doing so with a clear understanding of what these fascinating tools are — and what they are not.

7. Seeking solutions: how we can approach AI content regulation

In a world in which virtuality grows by the second, content is king; therefore, regulating it is indispensable. To tackle the challenges of regulating AI-generated content, it is worth exploring technologies such as blockchain and non-fungible tokens (NFTs). These offer decentralized and immutable frameworks that can enhance transparency, traceability, and accountability in content creation and distribution.

Another concept to consider is the establishment of a dedicated institution, similar to notaries or patent offices, specifically designed as a “Content Notary.” This institution would play a vital role in ensuring the authenticity, transparency, and integrity of content. By leveraging blockchain technology, a tamper-proof record of content creation and ownership can be established, providing content creators and consumers with the means to verify the origin and trustworthiness of the information they encounter. This Content Notary institution would serve as a central authority, verifying and validating the legitimacy of content, thereby fostering trust and transparency in the digital landscape.

In conclusion, the adoption of technologies like blockchain and NFTs presents a promising avenue for regulating content. Establishing a “Content Notary Institution” built upon these technologies can ensure transparency, authenticity, and accountability in the dynamic landscape of content creation. By embracing these innovations, we can foster a more reliable and ethical environment for content, benefiting both creators and consumers alike.

8. Taking action & conclusion

We have walked through several scenarios that clearly expose the need for content regulation, and the need for authorities to start the conversation and take action on this matter.

Join us in signing the Change.org petition to request content regulation. By signing, you demand transparency, accountability, and ethical practices in Human & AI content creation, shaping the future of AI and content.

Now it’s time to take action, together. Let’s drive change by advocating for comprehensive content regulation before problems arise, not after. By signing, we can amplify our voices and raise awareness about the need for ethical guidelines, transparency, and accountability. This call to action is a stepping stone toward a future where AI and human collaboration can thrive while upholding integrity, fairness, and trust.

Amen.

Thanks for reading & see you next time!

Luciano Radicce is a seasoned entrepreneur, strategist, and founder of Lazy Consulting, specializing in AI product strategy. He is passionate about discussing the tech topics shaping the world and engaging in thought-provoking conversations that push the boundaries of ethical innovation. Support this work by sharing, liking, or commenting on this article.

Sources:

https://www.bbc.com/news/62338593
https://www.bbc.com/news/technology-60780142
https://brianmonarch.com/deepfakes
https://github.com/XingangPan/DragGAN
