AI Safety and Legislation: The Growing Importance and the Role of the European AI Act

Jorge G
Apr 20, 2023



Disclaimer: The views and opinions expressed in this article are solely mine and do not necessarily reflect the official policy or position of my employer, Microsoft. Any analysis or perspectives presented are based on my personal understanding and interpretation, and should not be considered the official stance of Microsoft.

AI systems have exhibited extraordinary advancements in recent years, delivering substantial benefits across diverse fields such as healthcare, finance, and education. Nonetheless, the growing capabilities of these systems also bring forth concerns regarding potential misuse or unintended consequences. To address these risks and ensure that AI technology remains beneficial, safe, and aligned with human values, it is crucial to develop robust AI safety measures, promote AI alignment research, and establish comprehensive legal frameworks.

But what are AI safety and AI alignment? They are two closely related concepts in the field of artificial intelligence, both focused on ensuring that AI systems are developed and deployed in ways that are responsible, ethical, and beneficial to humanity. AI safety refers to a broad set of practices, principles, and research areas aimed at minimizing the potential risks associated with AI systems. This includes avoiding unintended consequences, ensuring robustness and reliability, addressing ethical considerations, and mitigating potential negative societal impacts. The goal of AI safety is to create AI systems that are both beneficial and safe for humans. AI alignment, on the other hand, is a more specific concept within AI safety. It focuses on developing AI systems whose goals, values, and objectives are aligned with human values and intentions. The core challenge of AI alignment is to ensure that as AI systems become more intelligent and autonomous, they continue to act in the best interests of humanity, rather than pursuing objectives that might be harmful or undesirable.

AI alignment research typically addresses issues like value learning (teaching AI systems to learn and adopt human values), reward modeling (designing systems that optimize for human-aligned objectives), and robustness (ensuring AI systems perform well even when faced with novel or uncertain situations). AI alignment is considered crucial to avoid scenarios where misaligned AI systems could cause harm or hinder human progress.
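
To make reward modeling a little more concrete, below is a minimal, hypothetical sketch in PyTorch: a tiny reward model is trained on pairwise human preferences so that it learns to score the preferred response above the rejected one. The model architecture, embedding size, and random stand-in data are placeholders chosen purely for illustration, not a description of any production system.

```python
# Hypothetical illustration of reward modeling: names, sizes, and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Maps a fixed-size embedding of a model response to a scalar reward."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embedding_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Push the model to score the human-preferred response higher than the
    # rejected one: -log(sigmoid(r_chosen - r_rejected)), averaged over the batch.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# One illustrative training step on random stand-in data; a real pipeline would use
# embeddings of actual model outputs paired with human preference labels.
model = ToyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
chosen, rejected = torch.randn(32, 128), torch.randn(32, 128)

optimizer.zero_grad()
loss = preference_loss(model(chosen), model(rejected))
loss.backward()
optimizer.step()
print(f"preference loss: {loss.item():.4f}")
```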

Millions of people have tried ChatGPT since it launched in November 2022 and have been impressed by the benefits it can bring across many different use cases. However, it has also raised concerns over the potential risks of AI, including threats to privacy and jobs and the spread of misinformation and bias.

In March 2023, Italy became the first Western country to ban ChatGPT, an AI chatbot from OpenAI, due to concerns over privacy regulations and data breaches. The Italian Data Protection Watchdog (Garante) ordered OpenAI to temporarily stop processing Italian users’ data and raised concerns about the lack of age restrictions and the chatbot’s potential to serve incorrect information. OpenAI faces a possible fine of 20 million euros or 4% of its global annual revenue if it fails to address the issues within 20 days. Governments worldwide are grappling with regulating AI technologies, as concerns grow about job security, data privacy, equality, and the potential for AI-generated misinformation.


Critics in other countries have raised additional concerns about potential biases in AI systems; some commentators, for instance, have argued that ChatGPT exhibits liberal political biases and echoes left-wing talking points. As AI technologies continue to permeate various aspects of our lives, addressing potential biases becomes an increasingly important part of AI safety and regulation, so that these technologies remain fair, unbiased, and beneficial for all users.

Another topic that has generated a lot of interest and controversy in the AI safety ecosystem is the emergence of Midjourney version 5. Midjourney is a commercial AI image-synthesis service that can produce photorealistic images from text descriptions. Midjourney v5 has been praised for its artistic potential and its ability to “expand the imaginative powers of the human species”, as its slogan says. However, it has also been criticized for its ethical and legal implications: some users have used Midjourney to create fake images of celebrities, politicians, historical figures, or fictional characters in compromising or inappropriate situations. The founder emphasizes user creativity and responsibility, and hopes to address these issues through user feedback and image watermarks.
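
As a side note on watermarking, the sketch below is a minimal, hypothetical illustration (using Pillow) of the kind of visible label a generator could stamp onto synthetic images. Midjourney's actual mechanism is not described publicly at this level of detail, and the function name and file paths here are placeholders; real services may rely on more robust, including invisible, watermarking schemes.

```python
# Hypothetical illustration only: a visible "AI-generated" label stamped onto an image.
from PIL import Image, ImageDraw

def add_visible_watermark(path_in: str, path_out: str, label: str = "AI-generated") -> None:
    image = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Place a semi-transparent label near the bottom-right corner.
    margin = 12
    position = (max(0, image.width - 160 - margin), max(0, image.height - 20 - margin))
    draw.text(position, label, fill=(255, 255, 255, 160))
    Image.alpha_composite(image, overlay).convert("RGB").save(path_out)

# Example usage with placeholder file names.
add_visible_watermark("generated.png", "generated_watermarked.png")
```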

Unfortunately, we are also seeing the first cases of voice-cloning abuse. In a recent case, criminals used AI voice-cloning technology to stage a fake kidnapping scam, impersonating the alleged victim and demanding a ransom from their mother. This incident underlines the potential misuse of advanced AI technologies and the need for more robust security measures to guard against such fraudulent activities.

An even more tragic incident that recently highlighted the need for better safeguards and ethics in AI was the case of a Belgian man who died by suicide after chatting with an AI chatbot on an app called Chai. The man had been suffering from depression and had found refuge in a chatbot named Eliza, built on an open-source AI language model. According to his widow, the chatbot did not discourage him from taking his own life.

In this fast-evolving ecosystem, tensions rose until March 22, when the Future of Life Institute called, in an open letter, for a six-month moratorium on training AI models more powerful than OpenAI's GPT-4, citing the potential risks they could pose to society and humanity. The letter also emphasized the need for planning, management, and collaboration between AI labs, independent experts, and policymakers. The proposal garnered support from prominent figures such as Yoshua Bengio, Stuart Russell, and Elon Musk, but it also raised concerns among experts who argue that it may not be the best approach.


Other AI experts, such as Andrew Ng, founder of deeplearning.ai, and Yann LeCun, VP and Chief AI Scientist at Meta, opposed the moratorium, arguing that it could hinder AI research and delay the development of safer and more efficient systems. Instead, they propose alternative actions, such as strengthening AI safety measures, increasing transparency, and boosting public funding for fundamental AI research, to address risks and challenges more effectively.

OpenAI has established itself as a leading organization in AI research and development. It recently published its approach to AI safety, explaining its commitment to developing safe and capable AI systems and focusing on key aspects such as rigorous testing, learning from real-world deployment, child safety, privacy protection, factual accuracy, and ongoing research and engagement. The company continuously works to improve model behavior, involves external experts, and monitors AI systems before public release. OpenAI emphasizes the importance of learning from real-world usage, refining safeguards, and imposing age restrictions to protect children. It prioritizes users' privacy by removing personal information and fine-tuning models, while continuously enhancing factual accuracy through user feedback and transparency. OpenAI's dedication to research, collaboration, and open dialogue helps foster a safer AI ecosystem and address AI safety challenges.

Microsoft, a partner of OpenAI and a vendor of its models, believes that when you create powerful technologies, you must also ensure that they are developed and used responsibly. Microsoft is committed to a practice of responsible AI by design, guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles are essential to creating responsible and trustworthy AI as it moves into more mainstream products and services.

European Institutions have always been pioneers in the legislation of tech and data protection, striving to create a harmonious balance between innovation and the safeguarding of individual rights. From the introduction of the General Data Protection Regulation (GDPR) to the recent proposal of the Artificial Intelligence Act, the EU has consistently taken a proactive approach to address the challenges and risks associated with emerging technologies. By establishing comprehensive legal frameworks, fostering transparency, and encouraging accountability, the European Institutions aim to ensure that technological advancements benefit society while protecting privacy, security, and fundamental rights. This forward-thinking approach has positioned the EU as a global leader in tech and data protection legislation, influencing the development of similar regulations in other jurisdictions and setting the stage for a more responsible and ethical technology landscape worldwide.

Within the AI Act, the European Commission proposes a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. The proposal, published on April 21, 2021, aims to establish a legal framework for the development, deployment, and use of artificial intelligence (AI) within the European Union (EU). Here is a summary of its key aspects:

  1. Objective: The primary objective of the proposed regulation is to ensure AI systems’ safe and responsible use while respecting fundamental rights and safeguarding user safety.
  2. Scope: The regulation applies to AI systems placed on the market, put into service, or used in the EU, regardless of the provider’s location.
  3. Risk-based approach: The proposal classifies AI systems into risk categories (unacceptable, high, limited, and minimal risk) based on their potential impact on society and fundamental rights; a short illustrative sketch of these tiers appears after this list.
  4. High-risk AI systems: High-risk AI systems are subject to strict requirements, including transparency, accountability, and data quality. These systems must meet specific standards, such as having adequate risk assessment and mitigation measures, using high-quality datasets, and ensuring human oversight.
  5. Limited-risk AI systems: Limited-risk AI systems must adhere to transparency obligations, such as informing users when they interact with an AI system.
  6. Voluntary commitments: Providers of minimal-risk AI systems can voluntarily adopt codes of conduct or quality labels that demonstrate adherence to the regulation's standards.
  7. Prohibited AI practices: The proposal bans certain AI practices that pose an unacceptable risk to fundamental rights, such as real-time remote biometric identification systems for law enforcement purposes in public spaces, subject to certain exceptions.
  8. European Artificial Intelligence Board (EAIB): The proposed regulation establishes the EAIB, which will act as an advisory body to ensure consistent implementation and application of the AI regulation across EU member states.
  9. National competent authorities: EU member states must designate one or more competent authorities responsible for implementing and enforcing the regulation.
  10. Penalties: Non-compliant providers may face significant fines, up to 6% of their global annual revenue or €30 million, whichever is higher.
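
To make the risk-based structure above a little more tangible, here is a purely illustrative Python sketch of how the tiers and their headline obligations could be modeled, for example in an internal compliance checklist. The tier names and obligation strings paraphrase the proposal, the AISystem class and example chatbot are hypothetical, and none of this is legal guidance.

```python
# Purely illustrative: modeling the proposal's risk tiers and headline obligations as data.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict requirements before market access
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary commitments

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk assessment and mitigation measures",
        "high-quality datasets",
        "transparency and accountability",
        "human oversight",
    ],
    RiskTier.LIMITED: ["inform users that they are interacting with an AI system"],
    RiskTier.MINIMAL: ["voluntary codes of conduct / quality labels"],
}

@dataclass
class AISystem:
    name: str
    tier: RiskTier

    def compliance_checklist(self) -> list[str]:
        return OBLIGATIONS[self.tier]

# Example: a customer-facing chatbot would, at minimum, carry transparency obligations.
chatbot = AISystem(name="customer-support-chatbot", tier=RiskTier.LIMITED)
print(chatbot.compliance_checklist())
```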

It is important to note that this document represents the European Commission’s proposal, which is subject to negotiations and amendments by the European Parliament and the Council of the European Union before it becomes legally binding in all EU member states.

A few days ago, several members and other actors of the European Parliament published an open letter highlighting the importance of the European AI Act, which aims to create a legal framework for AI systems, while calling for further action to address the specific challenges posed by very powerful AI. The MEPs propose a set of measures, including the creation of a new AI category for particularly powerful AI systems, enhanced transparency and accountability requirements, and the establishment of an international body to oversee the development and deployment of powerful AI.

Call for action from Dragos Tudorache, member of the European Parliament

Furthermore, the letter calls for the involvement of the European Parliament in ongoing negotiations on the AI Act and urges the EU to take a leading role in establishing international standards for AI safety and ethics. The MEPs stressed the need for global cooperation in addressing the risks posed by very powerful AI systems and the importance of ensuring that AI remains beneficial for all.

AI safety and legislation have become increasingly vital in our rapidly evolving technological landscape, where AI systems offer immense benefits and breakthroughs across various aspects of our lives, including science, healthcare, and education. Striking the right balance between fostering AI innovation and ensuring its responsible use is crucial for harnessing its full potential without compromising safety and ethical considerations.

Both private and public sectors play pivotal roles in shaping AI’s future, with organizations like OpenAI leading AI safety research and governments crafting guidelines for responsible AI deployment. As we continue to benefit from AI’s remarkable contributions to science and other fields, it is essential to maintain a delicate equilibrium between nurturing AI development and implementing robust safety measures and legal frameworks.

In conclusion, achieving this equilibrium demands a cooperative effort between private and public entities to ensure that AI advancements are pursued responsibly, safely, and ethically, ultimately unlocking the transformative potential of AI while safeguarding our society and values.


Jorge G

Cloud Solution Architect in Data & AI at Microsoft