BITES // 08.01.24 // COMPANIES PUSH FOR AI REGULATIONS

Catherine Marsh
zmbz

--

Every month we collect six of the best pieces of content published on the web and share them with you because we believe that the most extraordinary thinking is inspired by looking to unexpected places. BITES is a reading list for those who want to bring a little of the outside in.

OVERVIEW -

Companies are increasingly advocating for AI regulations and restrictions to safeguard their operations, maintain consumer trust, and ensure fair competition. As AI technology advances rapidly, businesses are concerned about potential misuse, including data privacy breaches, job displacement, and the erosion of brand identity. By pushing for comprehensive regulatory frameworks, these companies aim to establish clear guidelines that promote ethical AI use, protect intellectual property, and prevent market monopolies. This proactive stance reflects their commitment to balancing innovation with responsibility, ensuring that AI advancements benefit society while mitigating associated risks.

1. THE COPIED ACT HOPES TO GIVE MORE PROTECTION FROM AI SCRAPING

A bipartisan group of senators has introduced a new bill that seeks to protect artists, songwriters, and journalists from having their content used to train AI models or generate AI content without their consent. The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), also seeks to make it easier to identify AI-generated content and to combat the rise of harmful deepfakes. The COPIED Act would require the National Institute of Standards and Technology (NIST) to create guidelines and standards for content provenance information, watermarking, and synthetic content detection. Senate Commerce Committee Chair Maria Cantwell, Senate AI Working Group member Martin Heinrich, and Commerce Committee member Marsha Blackburn helped create the bill. Within two years, the bill would require companies that develop AI tools to let users attach content provenance information to their content. Content provenance information is machine-readable data that documents the origin of digital content, such as photos and news articles. Under the bill, works carrying content provenance information could not be used to train AI models or generate AI content.

2. CARA IS A NEW SAFE-HAVEN SOCIAL APP FOR CREATIVES

Artists are fleeing Meta’s platforms over fears their work will be used to train AI, but photographer Jingna Zhang’s new social app, Cara, promises protection for creatives. Zhang is a seasoned advocate for artists’ rights. In 2022, she filed a lawsuit against a painter who had won prize money by submitting work that looked shockingly similar to a photograph she took in 2017 for Harper’s Bazaar. Zhang and three other artists are also suing Google for allegedly using their copyrighted work to train Imagen, an AI image generator, and she has been a plaintiff in similar lawsuits against Stability AI, Midjourney, DeviantArt, and Runway AI. Launched in early 2023, Cara had just a few thousand users for the first year of its life; last week, it jumped from around 40,000 accounts to 650,000, and it is now closing in on a million users. Zhang started Cara to give artists who oppose the unethical use of AI a place to share images and network with like-minded peers. The platform takes an explicit stance against generative AI tools developed with training data acquired without artists’ permission, and it currently filters out all AI images.

3. GLAZE AND NIGHTSHADE AIM TO PROTECT ARTISTS

As image-generating AI continues to evolve, artists have been fighting back against what they see as a threat to their craft, with multiple lawsuits and public statements calling for regulation. Enter Glaze and Nightshade, beta tools from the University of Chicago designed to outwit AI scrapers. Glaze subtly morphs artists’ style signatures to misguide AI, while Nightshade goes on the offense, muddling AI’s perception of image content. So instead of recreating something from Picasso’s rose period, you might find AI recreating a bottle of rosé. Amid concerns about how AI threatens livelihoods and the very essence of creative authenticity, artists like Karla Ortiz, a plaintiff in a lawsuit against Stability AI, are endorsing tools like Glaze for digital self-defense. Meanwhile, experts contend that while helpful, these measures offer only temporary relief: as AI strengthens, such tools may weaken. Regardless, in an escalating AI arms race, artists continue to grapple with the implications for the future of artistry.

4. NEW YORK TIMES CHALLENGES OPENAI’S REQUEST FOR REPORTER’S SOURCES

The New York Times has asked a federal judge to deny OpenAI’s request to turn over reporters’ notes, interview memos, and other materials its journalists used to produce the stories the newspaper alleges were used to help train the tech company’s flagship artificial intelligence models. Lawyers for the companies staked out their positions in dueling memos filed during the Fourth of July holiday week in U.S. District Court for the Southern District of New York, where the Times filed a copyright infringement lawsuit against both OpenAI and its partner Microsoft in December 2023. The Times responded: “OpenAI cites no case law permitting such invasive discovery, and for good reason. It is far outside the scope of what’s allowed under the Federal Rules and serves no purpose other than harassment and retaliation for The Times’s decision to file this lawsuit.” The lawsuit has the potential to set a legal precedent for the use of public materials by AI and tech companies; the Times alleges that Microsoft and OpenAI wrongly used vast amounts of copyrighted material from the newspaper to train the large language models that power ChatGPT and other AI models.

5. RECORD LABELS VS. AI

The RIAA, together with Universal, Warner, and Sony, is suing two of the most advanced start-ups in the emerging field of AI music. The three major music companies filed lawsuits on Monday against AI music firms Suno and Udio, alleging widespread infringement of copyrighted sound recordings. Suno and Udio have quickly become two of the most important players in generative AI music: while many competitors create only instrumentals, lyrics, or vocals, Suno and Udio can generate all three at the click of a button with shocking precision. The lawsuits allege that Suno and Udio have unlawfully copied the labels’ sound recordings to train their AI models to generate music that could “saturate the market with machine-generated content that will directly compete with, cheapen and ultimately drown out the genuine sound recordings on which [the services were] built.” The suits seek both an injunction barring the companies from continuing to train on the copyrighted songs and damages for the infringement that has already taken place.

6. REDDIT BLOCKS AI CRAWLERS THAT DON’T PAY

Reddit said last month that it would update its Robots Exclusion Protocol (robots.txt) to block automated data scraping, and it’s now apparent the move wasn’t aimed only at AI companies like Perplexity and its controversial “answer engine.” Bing’s omission of Reddit results stems from Microsoft refusing to agree to Reddit’s terms regarding AI crawling, leaving Google as the only search engine allowed to crawl Reddit and surface results from the front page of the internet. A Reddit spokesperson said, “we block all crawlers that are unwilling to commit to not using crawl data for AI training, which is in line with enforcing our Public Content Policy and updated robots.txt file.” The ubiquitous robots.txt is the web standard that communicates which parts of a site can be crawled; although many crawlers are known to ignore its instructions, Google’s standard procedure is to respect it. The saga can be seen as a trickle-down effect of AI chatbots scraping the live web for results. With courts slow to determine how much of the open web is fair use to train chatbots on, companies like Reddit, whose bottom lines now depend on safeguarding their data from those who don’t pay, are building walls at the expense of the open web.
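To make the mechanics concrete, here is a minimal sketch of how a well-behaved crawler interprets robots.txt rules, using Python’s standard-library parser. The policy and crawler names below are illustrative assumptions for the example, not Reddit’s actual file: a deny-all default with a single whitelisted crawler.

```python
# Sketch: how a compliant crawler reads robots.txt rules.
# The policy below is illustrative, not Reddit's actual file.
from urllib.robotparser import RobotFileParser

policy = """
User-agent: *
Disallow: /

User-agent: Googlebot
Allow: /
"""

rp = RobotFileParser()
rp.parse(policy.splitlines())

# The whitelisted crawler may fetch; everyone else is blocked.
print(rp.can_fetch("Googlebot", "/r/python/"))      # True
print(rp.can_fetch("PerplexityBot", "/r/python/"))  # False
```

A well-behaved crawler checks a rule like this before every request; as noted above, nothing in the standard forces a scraper to comply.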

TAKEAWAY -

With the rapid advancement of AI technology, businesses are concerned about potential misuse, such as deepfakes, data breaches, and unfair competitive practices. They are calling for clear guidelines and government oversight to prevent the erosion of brand identity and to ensure that AI development aligns with societal values. By promoting a regulatory framework, companies aim to foster a secure and transparent environment where innovation can thrive without compromising ethical standards or consumer protection.

SUBSCRIBE TO SOCIAL SCOOP

Interested in social media trends and staying ahead of the constantly shifting landscape? Be sure to subscribe to our bi-weekly Social Scoop, presented by our social media division, School.

--


Catherine, or “Cat” as people call her, is a Strategist passionate about the undiscovered that lies at the intersection of culture, people, and society.