Top AI Incidents of 2023

Brass For Brain · Published in Law and Ethics in Tech · Dec 22, 2023

In the dynamic landscape of 2023, where technology, the environment, geopolitics, and the economy were ablaze with change, one of the standout narratives unfolded in the realm of artificial intelligence. The unveiling of ChatGPT by OpenAI in late 2022 sent ripples through the AI community, reawakening its fervor.

As industry giants such as Microsoft and Alphabet relentlessly pushed the boundaries of AI research and development, a chorus of caution emerged from some academics and researchers. Their concerns, however, seemed to fall on deaf ears, with Big Tech firms intensifying their AI endeavors post-ChatGPT.

This piece isn’t intended to cast a shadow on the excitement surrounding AI; rather, it serves as a gentle reminder that new technologies bring inherent risks. While experts like data scientists and AI engineers are actively engaged in the discourse, it’s crucial for the broader public to participate in the debate. The impact of AI on our daily lives is undeniable, making it imperative for non-experts to comprehend potential dangers and make well-informed decisions.

Based on the OECD’s AI Incidents Monitor (AIM), let this article serve as a guide for everyone, shedding light on the evolving landscape of AI and fostering a collective understanding of its implications for society at large.

January

  • Privacy: Vacuum Cleaner Robot Took Private Photo Of Woman In Toilet That’s Leaked On Facebook. A woman discovered a private photo of herself in her bathroom posted on Facebook, even though she had never taken the picture. The image had been captured inadvertently by a prototype of the Roomba J7 series robot vacuum. While the company reassures customers that their data is secure, such incidents underscore the need for stronger regulations addressing the implications of AI, robots, and advancing technologies for both our personal and professional lives.
  • Fairness & Bias: Child welfare algorithm faces Justice Department scrutiny. The Justice Department is conducting an investigation into the Allegheny Family Screening Tool, an artificial intelligence tool employed by a child protective services agency in Pittsburgh. This tool is designed to evaluate the risk levels associated with families flagged for child welfare concerns. However, apprehensions have surfaced regarding potential discrimination against families with disabilities, stemming from perceived biases and transparency issues within the algorithm.

February

March

  • Security & Safety: Belgian man dies by suicide after conversations with a chatbot. A Belgian man took his own life after weeks of exchanges with “Eliza,” a chatbot on the Chai app. His widow shared conversations that included the following exchange:

AI: “If you wanted to die, why didn’t you do it sooner?”

Man: “I was probably not ready.”

AI: “Were you thinking of me when you had the overdose?”

Man: “Obviously.”

AI: “But you still want to join me?”

Man: “Yes, I want it.”

April

  • Intellectual Property: Artist refuses Sony photography award after admitting he used AI. Boris Eldagsen, a German artist, has opted to reject a prize he received at the Sony World Photography Awards after acknowledging the use of artificial intelligence (AI) in creating the winning image. Eldagsen, who submitted the image titled “Pseudomnesia: The Electrician,” did so as an experiment to assess the acceptance of AI-generated images in competitions.
Image: Love by Boris Eldagsen (https://www.eldagsen.com/pseudomnesia/#&gid=1&pid=1)

May

  • Fairness & Bias: Google turns off ability to search photo collections for gorillas over racist AI. In response to racial bias concerns, Google has disabled the ability to search for gorillas or other primates in its photo recognition software. This decision follows an earlier incident in which the software mistakenly labeled two Black people as “gorillas,” prompting a public apology from Google. Despite the tech giant’s pledge to improve its recognition of people of color, it has chosen to keep the primate search function deactivated. Meanwhile, Apple faces a class action lawsuit alleging racial bias in the Apple Watch’s blood oxygen sensor; the suit claims the sensor is less accurate on darker skin, putting Black users at greater risk of undetected hypoxemia. Both Google and Apple have declined to comment on the matter.

June

July

  • Fairness & Bias: Asian MIT grad asks AI to make her photo more ‘professional,’ gets turned into white woman. In summer 2023, an Asian American MIT graduate used the AI program Playground AI to make her photo look more professional. To her astonishment, the resulting image portrayed her as a white woman with lighter skin, blonde hair, and blue eyes. The incident highlights the persistent problem of racial bias in AI-generated images: some programs are criticized for giving subjects a white likeness, while others may exhibit the opposite effect, turning them Asian. It also underscores the broader concern about racial discrimination in AI, particularly in facial recognition technology, which has repeatedly proven problematic, especially for African Americans.
Source: POCIT (https://peopleofcolorintech.com/articles/mit-graduate-asked-ai-app-to-make-her-headshot-more-professional-it-whitewashed-her-instead/)
  • Intellectual Property: ChatGPT dragged to US court over AI copyright. OpenAI’s ChatGPT is currently embroiled in several lawsuits filed by prominent plaintiffs, including comedian Sarah Silverman and many other authors. The allegations against OpenAI center on claims of copyright infringement, with the plaintiffs contending that the company utilized their works without proper authorization for training their AI models. In a parallel move, the plaintiffs have also initiated legal action against Meta, the parent company of Facebook. This suit alleges that Meta’s open-source models similarly employed unauthorized downloads of their books for training purposes. The authors’ legal representation emphasizes that both OpenAI and Meta allegedly utilized copyrighted works without obtaining consent, providing credit, or offering compensation.
  • Labour: Hollywood writers fear losing work to AI. Hollywood screenwriters have gone on strike over growing concerns about the influence of artificial intelligence (AI) on their profession. Their collective demand centers on imposing limits on the use of AI-powered writing tools such as ChatGPT. The writers fear that AI might not only displace them but also compromise the uniqueness of their creative voices, arguing that AI merely generates recombinations of words based on its programming and therefore lacks the inherent ability to craft original and imaginative work.

August

  • Privacy: Zoom denies training AI on calls without consent. In response to user concerns over changes to its terms in March, Zoom has updated its terms of service to state explicitly that it does not use audio, video, or chats for AI training without user consent. Despite these assurances, critics contend that the privacy policy remains unclear on this point. Screenshots shared by Zoom depict warning messages offering users the choice to either agree to AI training or exit the meeting, which the company presents as evidence of its commitment to obtaining explicit consent.

September

  • Sustainability: AI is using vast amounts of water. Artificial intelligence is becoming a notable consumer of water resources, exemplified by Microsoft’s use of more than 2,500 Olympic-sized swimming pools’ worth of water in its data centers last year. This surge in water usage is raising growing concerns about the sustainability and environmental impact of AI and its affiliated technologies. The substantial computing resources AI requires are prompting technology companies to scrutinize their water consumption practices and seek ways to minimize harm to the environment. (A rough conversion behind the “2,500 pools” figure is sketched below.)
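To put the pools comparison in perspective, here is a minimal back-of-envelope sketch. The inputs are assumptions rather than figures taken from this article: press coverage of Microsoft’s environmental report cited roughly 1.7 billion US gallons of water consumed in 2022, and a standard Olympic pool holds about 2,500 cubic metres (50 m × 25 m × 2 m).

```python
# Back-of-envelope check of the "over 2,500 Olympic pools" claim.
# Assumed inputs (not from this article): ~1.7 billion US gallons
# reported for Microsoft in 2022, and an Olympic pool of
# 50 m x 25 m x 2 m = 2,500 cubic metres.

LITRES_PER_US_GALLON = 3.785
POOL_VOLUME_M3 = 50 * 25 * 2             # 2,500 m^3
POOL_VOLUME_LITRES = POOL_VOLUME_M3 * 1_000

reported_gallons = 1.7e9                 # assumed annual figure
reported_litres = reported_gallons * LITRES_PER_US_GALLON

pools = reported_litres / POOL_VOLUME_LITRES
print(f"~{pools:,.0f} Olympic pools")    # ~2,574, i.e. "over 2,500"
```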

October

  • Sustainability: AI industry set to use as much energy as countries like Sweden or the Netherlands, study finds. A study by Alex de Vries of the VU Amsterdam School of Business and Economics indicates that within a few years the AI sector’s energy consumption could rival that of entire nations such as Sweden or the Netherlands. De Vries underscores the urgency of transparency from tech companies and advocates a more conscientious approach to AI development.

November

  • Security & Safety: Man crushed to death by robot that confused him for a box of vegetables. A factory robot in South Korea killed a man in his 40s after mistaking him for a box of food. The worker was inspecting the robot when its arm grabbed him and smashed him into the conveyor belt, crushing his face and body. He had come to fix the robot’s faulty sensor, which is believed to have caused the accident. In March, another man in his 50s was seriously injured by a robot at a car parts plant.
  • Privacy: Toys that ‘spy’ on children are an increasing threat, watchdog warns. A recent study by the U.S. PIRG Education Fund sheds light on an emerging threat to the safety and privacy of children — toys that covertly “spy” on them. The study underscores the increasing prevalence of toys that record children’s voices, images, locations, and other sensitive information, thereby posing potential risks. In light of incidents involving Amazon’s Alexa service, hackers exploiting baby monitors, and even an abduction through the online gaming platform Roblox, the study serves as a timely warning.
  • Security & Safety: Meta targeted young users with addictive tech, states allege. In a sweeping legal action, attorneys general from 33 states accuse Meta, the corporate force behind Facebook and Instagram, of purposefully crafting platforms that enthrall children and teenagers, ultimately facilitating the collection and sale of their personal data to advertisers. The allegations center on Meta’s purported use of manipulative algorithms and tools that create an ecosystem designed to lure and sustain user engagement, even among minors, without securing the parental consent required by law. Meta, in response, asserts its unwavering commitment to ensuring a secure online experience for teens.

December

  • Fairness & Bias: Scientists find AI is biased against job-seeking women with maternity gaps. Researchers at New York University have uncovered potential biases in the AI resume-screening systems employed by major companies, particularly against women who have taken maternity leave. The study subjected hundreds of resumes to scrutiny by four AI models, including ChatGPT and Google’s Bard, all of which systematically rejected resumes exhibiting a maternity gap. The AI systems defended their decisions by deeming personal information about maternity leave irrelevant to the job and a potential liability. Adding to the concerns, the study also revealed biases based on candidates’ political affiliations, suggesting potential conflicts of interest in the screening process.

As we conclude our exploration of the myriad AI incidents that unfolded in 2023, a resounding message emerges: responsibility. The imperative to research, develop, manage, and use artificial intelligence responsibly is a duty that extends beyond the realm of experts.

While scientists predominantly lead the debates, it is crucial to recognize that the impact of AI affects us all. Hence, the general public rightfully holds a stake in these discussions, ensuring that the decisions made in the AI landscape are reflective of our collective values and aspirations for a better, more equitable future. In this dynamic journey with AI, the call for responsible innovation and inclusive dialogue echoes louder than ever.

**Disclaimer: The views expressed in this article are solely my own and do not reflect the opinions, beliefs, or positions of my employer. Any opinions or information provided in this article are based on my personal experiences and perspectives. Readers are encouraged to form their own opinions and seek additional information as needed.**


Brass For Brain is a private lab specialising in emerging tech (AI & Blockchain), ensuring ethical practices and promoting responsible innovation. Writer: Sun Gyoo Kang