In the News: AI deepfakes, safety, and art competitions

By The Editors at Hoyalytics · Published in Hoyalytics · 4 min read · Apr 20, 2023

This week we cover ethical challenges in AI, including a viral deepfake song resembling Drake and The Weeknd, an award-winning AI-generated art entry, and how OpenAI aims to promote AI safety.

This AI is stuck in my head!

By: Sameer Tirumala

Source: Variety.com

We’ve all seen the AI deepfakes going around, from memes about the presidents to serious competitions like the one you’ll see later in this newsletter! The jokes may be harmless, but the intrusion of AI into more serious competitions and industries raises concern. A recent song, “Heart on My Sleeve,” went viral for imitating Drake and The Weeknd. Streaming providers like Spotify and Apple Music quickly took the song down, allegedly under pressure from UMG, the world’s largest record label and the distributor of Drake and The Weeknd’s music. Drake himself was not a fan of this use of AI, and UMG denounced it as a copyright violation. This raises the question: where do we draw the line with generative AI? Is this truly that much worse than a fan-made mashup or a leak of unreleased songs? And how does copyright apply to AI-produced content in general?

Good Intentions, Bad Outcomes? OpenAI’s approach to AI safety

By: Edward Lim

Source

OpenAI’s statement on AI safety, updated on April 5, 2023, reads: “Crucially, we believe that society must have time to update and adjust to increasingly capable AI, and that everyone who is affected by this technology should have a significant say in how AI develops further.”

The world has certainly taken notice. To OpenAI’s credit, its technological progress has catalyzed policymaking and public discourse around AI. In the U.S., lawmakers are scrambling to understand AI and how to regulate it, and have begun public consultations to gather insights. The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework 1.0 in January 2023 to guide AI policy and regulation; it is, however, still an early, skeletal framework and entirely voluntary. China, on the heels of stronger regulation of technology companies since 2020, has also been particularly concerned about the societal implications of generative AI. The Cyberspace Administration of China announced in April that all AI companies must be accountable for their data sources, and that generated content should reflect “core socialist values” and must not subvert state power. Italy has also taken the drastic step of banning ChatGPT entirely.

Given its nature as a highly strategic, dual-use technology, however, the advancements made by OpenAI may instead have accelerated the development and deployment of AI far faster than regulations or social norms around the subject can form. On the commercial side, many companies and their engineers report facing much greater financial pressure to deploy AI models earlier, sometimes before they are ready or comfortable. Many of these launches have been relatively disappointing compared to ChatGPT: Google’s Bard debuted with a factual error, and Baidu (a Chinese search engine) unveiled its chatbot with a supposedly ‘live’ demonstration that turned out to be pre-recorded “to save time.” There are also strategic pressures: regulating industries well is already hard enough, and here there are vested interests in ensuring one’s own country holds a competitive advantage in AI by developing it faster and better.

As the development of AI continues to race ahead, discourse and consensus on the topic are needed more than ever.

Should AI images be allowed to enter art competitions?

By: Maggie Shen

Source: The Guardian

German artist Boris Eldagsen admitted on his personal website that his award-winning entry was generated with artificial intelligence, the first time an AI image has won a reputable international photography competition. Eldagsen refused to accept the prize he received at the Sony World Photography Awards, which he entered with a black-and-white image depicting two women. He said he wanted to find out whether art competitions are prepared for AI-generated entries. Yet a spokesperson for the World Photography Organization commented, “The creative category of the open competition welcomes various experimental approaches to image making from cyanotypes and rayographs to cutting-edge digital practices… We felt that his entry fulfilled the criteria for this category, and we were supportive of his participation.” By refusing the award, Eldagsen hoped to accelerate discussion of AI tools in the world of art.


A group of Georgetown University undergraduates eager to learn data science together. Twitter: @HoyAlytics | Publication: https://medium.com/hoyalytics