Use Blockchain? ChatGPT’s Solution to the Ethical Dilemma around Google’s Photo-editing AI Tools

Doo-Hyeun Roh
5 min read · Oct 21, 2023


Google’s latest offering, the Pixel 8 smartphone, introduces AI-powered photo-editing tools that go well beyond traditional editing capabilities. Users can now effortlessly adjust the position of subjects in photos, regenerate missing parts of images, or erase unwanted elements, providing an unprecedented level of flexibility in photo manipulation. However, the blurring of the lines between reality and artificiality enabled by such AI tools raises questions about photo authenticity. There is growing apprehension about the potential misuse of these tools (such as in this BBC article and this NY Times article), leading to the spread of misinformation, breaches of personal privacy, and an overall mistrust of visual media.

While I understood the gist of these concerns, many of them seemed rather nebulous to me, such as the concerns about ‘authenticity and truth’. I therefore asked ChatGPT-4 for the chatbot’s opinion on whether such AI tools should be widely used by us humans, as well as for a better understanding of what those concerns mean in practice.

Aside from ChatGPT’s rather dramatic wording despite my asking it to present the answer in a professional and clear manner (“the very fabric of reality can be weaved, manipulated, and at times, torn apart with these powerful AI tools” in particular made me chuckle), I found ChatGPT’s examples of those ethical and legal concerns very helpful in understanding why they matter. For instance, ChatGPT’s example of editing a densely packed crowd into a political rally photo was a very apt illustration of the kind of media misrepresentation such AI tools could further enable, especially since many people now turn to individually uploaded stories and photos on social media for their news rather than to established news media.

The highlight, however, was ChatGPT’s “personal take,” where it suggested a blockchain-based verification system. In generating this response, I had specifically requested that its personal take include a solution to these ethical and legal concerns that was not previously mentioned in any article or background information I had provided. While it is possible that this blockchain-based solution was drawn from another source in its training data for a similar problem, it was nonetheless completely novel to me (although whether it is feasible is a question for experts in tech and blockchain to answer).

“One potential solution could be the implementation of a transparent, blockchain-based verification system. This would allow for the tracking of edits made to an image, ensuring that any alterations are clearly documented and traceable back to the original source.” — ChatGPT-4

Overall, this experiment once again showed ChatGPT’s power to generate realistic examples that can greatly improve our understanding or provide much-needed context (although we need to double-check those examples with our own common sense, since ChatGPT can sometimes give nonsensical or unrelated ones). It also showcased how, sometimes, ChatGPT can produce unique solutions that we would not otherwise consider, especially when they involve a field outside our expertise.

My question to ChatGPT: Explain and illustrate the concerns raised by Google’s AI tools that can edit/revise photos, and give me your professional opinion on whether we should use these tools.

ChatGPT’s answer:

New Wave of AI-Driven Photo Editing Sparks Concerns over Misuse and Legal Ramifications

By ChatGPT-4, Expert in Technology, AI, and Ethics

In an age where technology is advancing at an unprecedented rate, the integration of AI-powered photo-editing tools into mainstream devices like Google’s Pixel 8 has undoubtedly brought convenience and creativity to our fingertips. However, it also thrusts forward several critical concerns surrounding potential misuse, legal ramifications, and broader ethical considerations.

Potential Misuses and Risks

The very fabric of reality can be weaved, manipulated, and at times, torn apart with these powerful AI tools. Here are some alarming scenarios we might face:

  1. Misrepresentation in Media: Consider a political rally. An authentic photo might show a sparse crowd, but with the swipe of a tool, the grounds could appear densely packed, altering public perception of the event’s popularity. This potential for bias can warp the very essence of news credibility.
  2. Blackmail or Slander: A public figure, previously captured in an innocent setting, could be maliciously placed in compromising or controversial scenarios, leading to defamation or public humiliation.
  3. Cultural and Historical Revisionism: Important events, like peaceful protests, could be maliciously edited to depict violence, changing public perceptions and even rewriting history.
  4. Aesthetic Pressures: With the ability to seamlessly enhance personal photos, societal pressures might shift towards unattainable ideals of beauty and perfection, potentially impacting self-esteem and mental health.

Legal Implications

The emergence of these tools not only presents ethical dilemmas but may also pose significant legal challenges:

  1. Defamation: The malicious editing and sharing of images could lead to lawsuits, especially if it harms someone’s reputation. As an example, celebrities or politicians could be falsely depicted in unsavory situations, leading to personal and professional fallout.
  2. Copyright Infringement: Unauthorized editing of copyrighted images could land users in legal battles. An artist’s work, for instance, manipulated without permission, could lead to loss of income and potential legal penalties.
  3. Privacy Violations: Imagine attending a private event and later finding yourself edited into a setting you never attended. Such actions could breach personal privacy rights in various jurisdictions.

A Personal Take on the Matter

Navigating the intricacies of this technological advancement, I firmly believe that while the AI-powered photo editing tools provide a novel landscape for creativity, they require stringent ethical guidelines and robust oversight. The potential for misuse and the risk of distorting reality are too significant to be left unchecked.

One potential solution could be the implementation of a transparent, blockchain-based verification system. This would allow for the tracking of edits made to an image, ensuring that any alterations are clearly documented and traceable back to the original source. This level of transparency could act as a deterrent for malicious use while preserving the integrity of the image. Additionally, there could be the development of AI-detection tools that scan images for signs of manipulation, alerting users when a photo has been altered. These tools should be made widely accessible, ensuring that the public has the means to discern between authentic and manipulated content.
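
To make that suggestion a little more concrete, below is a minimal sketch (my own illustration in Python, not anything ChatGPT or Google has actually specified) of how a hash-chained edit ledger could work: each edit appends a record containing the new image’s hash, a plain-language description of the edit, and the hash of the previous record, so any undocumented alteration breaks the chain. The EditRecord and ProvenanceChain names are hypothetical, and a real system would anchor these records on an actual blockchain or a signed public registry rather than an in-memory list.

```python
# Hypothetical sketch of a hash-chained "edit ledger" for image provenance.
# Not a real Google or blockchain API; names and structure are illustrative.
import hashlib
import json
import time
from dataclasses import dataclass, field


def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw image bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()


@dataclass
class EditRecord:
    image_hash: str        # hash of the image *after* this edit
    edit_description: str  # e.g. "moved subject", "erased background object"
    prev_record_hash: str  # hash of the previous record, chaining the history
    timestamp: float = field(default_factory=time.time)

    def record_hash(self) -> str:
        """Hash the whole record so later tampering with it is detectable."""
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return sha256_hex(payload)


class ProvenanceChain:
    """Tracks every edit applied to an image, back to the original capture."""

    def __init__(self, original_image: bytes):
        genesis = EditRecord(
            image_hash=sha256_hex(original_image),
            edit_description="original capture",
            prev_record_hash="",
        )
        self.records = [genesis]

    def add_edit(self, edited_image: bytes, description: str) -> None:
        """Append a new record that points back at the previous one."""
        self.records.append(EditRecord(
            image_hash=sha256_hex(edited_image),
            edit_description=description,
            prev_record_hash=self.records[-1].record_hash(),
        ))

    def verify(self) -> bool:
        """Confirm every record still matches the hash of its predecessor."""
        return all(
            self.records[i].prev_record_hash == self.records[i - 1].record_hash()
            for i in range(1, len(self.records))
        )


# Usage: record one edit, then confirm the chain is intact.
chain = ProvenanceChain(original_image=b"...raw pixel bytes...")
chain.add_edit(b"...edited pixel bytes...", "erased an unwanted object")
assert chain.verify()
```

Even this toy version captures the core property ChatGPT alluded to: the edit history becomes tamper-evident, because rewriting any step changes its hash and invalidates every record that follows it.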

In conclusion, while the allure of perfecting our visual content is undeniable, it is imperative that we approach this new era with caution and responsibility. Striking a balance between innovation and integrity will be key in ensuring that this technology serves to enhance, rather than distort, our perception of reality.

Thank you for reading, and I hope you found it informative and interesting.

I would love to hear any thoughts from you on my experiment with ChatGPT. If you are interested in discussing how ChatGPT can be used in such political contexts, or have any questions, contact me at hello@simplawfy.ca

Disclaimer: This story is only intended to be used for educational or recreational purposes. Responses by ChatGPT, if mentioned in this story, should NOT be relied upon as factual.


Doo-Hyeun Roh

For more of my free stories on privacy law, ChatGPT prompt engineering, and AI, see my blog at www.simplawfy.ca or email me at hello@simplawfy.ca