AI Act: A Risk-Based Policy Approach for Excellence and Trust in AI

The EU Artificial Intelligence Act is the first-ever comprehensive attempt at regulating the uses and risks of this emerging technology

Giuliano Liguori
CodeX
10 min read · Feb 15, 2022


Contents

  • Introduction
  • What is the EU AI Act?
  • The importance of codifying a policy to regulate AI applications
  • How does the EU AI Act work?
  • The importance of putting in place Model Governance for AI
  • Conclusion: Things you should do about the EU AI Act!

Introduction

Artificial intelligence is a promising technology. It is expected to bring a variety of economic and societal benefits to a wide range of sectors, including healthcare, finance, transportation, and home affairs. Powerful machines and algorithms are already capable of diagnosing illnesses, performing surgery, and driving autonomous cars; these technologies give us new tools, yet they disrupt the way we work. Artificial intelligence is driving human progress in countless ways: improving health care, service delivery, energy management, and public safety. In addition, businesses use AI-based applications to optimize their operations.

It is understandable that we have placed a lot of faith in these new technologies, particularly in health care and decision-making. However, AI brings risks alongside its benefits, and it raises legitimate ethical and legal questions.

Our generation will face very complex challenges, some of them existential for humanity on a global level. Technologists, experts, and thought leaders in the AI space are called on to clarify the role of this technology: how it can help us overcome these challenges, how it can help institutions, governments, and businesses achieve their goals, and how we can empower them to achieve more by leveraging AI.

No one can tell what impact artificial intelligence will have on society.

You might also like to read Model Operations for Secure and Reliable AI

What is the EU AI Act?

The European Commission has recently proposed a new regulation, the Artificial Intelligence Act, whose purpose is to regulate the development and use of AI in Europe.

The AI Act will be one of the most important regulatory decisions made in Europe, and in the world, in the coming years. The success or failure of this process will affect investment in AI, in fields such as research and development for intelligent automation, innovation, and business, as well as the creation of a stronger, more competitive corporate framework that supports organizations in expanding their business worldwide.

Moreover, these regulations will affect many other aspects that directly shape the impact these technologies have on the health and lives of individuals, such as the respect and defense of individual and collective rights and the development of education and health care.

Many experts think an ethical and fundamental-rights impact assessment is necessary to understand how the functioning of artificial intelligence will influence many important issues in our daily lives. They see the AI Act not just as an opportunity to pursue, but as something for which we have a direct responsibility.

According to Margrethe Vestager, Executive Vice-President of the European Commission responsible for the digital age, the main objective is to define a common legal framework for the development, marketing, and use of artificial intelligence products and services in the EU. The EU's goal is to make Europe a world leader in the development, and of course the use, of safe, reliable, and human-centred artificial intelligence. The regulation addresses the human and social risks associated with specific uses of AI, which serves to build trust; the EU coordinated plan, on the other hand, outlines the measures Member States should take to stimulate investment and innovation, all to ensure and strengthen the adoption of artificial intelligence in Europe.

You might also like to read Don’t Let Tooling and Management Approaches Stifle Your AI Innovation

The importance of codifying a policy to regulate AI applications

The European Union initiative for a new regulation on artificial intelligence is of fundamental importance because, when the European Union paves the way with a new technological regulation, other countries tend to draw inspiration from it. It happened with the GDPR, and it will surely happen with the AI Act as well.

So, let’s dive in. First, why do we need an AI Act? Why do we need to regulate artificial intelligence?

Artificial intelligence is starting to be used in many different sectors and applications, but in some cases it has already harmed human rights. One example is an AI application in US healthcare that estimated the likelihood that a patient would need further tests by observing how the patient walks. Unfortunately, it turned out that, due to a distortion in the model's algorithm, people of color were systematically discriminated against. Another case occurred in the world of recruiting, where a well-known world-class company had put into production an artificial intelligence model that discriminated by sex when evaluating candidates' CVs: the model always prioritized male candidates over female ones. To stay on the subject, think of facial recognition, and of using this capability of AI to recognize or predict the emotional state of candidates in a job interview.

Another example is the AI models that moderate content on social media platforms: a biased model can unfairly restrict free speech and influence public debate. As we have seen in previous articles, the dataset used throughout the entire Model Life Cycle (MLC), whether for machine learning models or other AI models, is fundamental to the decisions or predictions those models will make. All of these technologies are based on learning from data, and if the data are biased, so are the models. Suppose our data contain only examples of mafia members of Italian origin, and we want to build a model that predicts whether a person is a mafia member. Having only ever seen Italian mafia members, the model will try to generalize and conclude that all Italians are in the mafia.

Biased data can produce models that are discriminatory and harmful to humans. The issue is that the algorithm will apply the knowledge it has gained from the biased data to everyone: the risk is that we distribute the model globally and only then discover that we have deployed a biased model, in our example a racist one that discriminates against Italians.
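To make this concrete, here is a deliberately tiny, hypothetical sketch in Python of how a model trained on skewed examples internalizes a spurious rule. The data are entirely invented, and scikit-learn is used only for convenience; nothing here comes from a real system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy reconstruction of the "mafia" example above (made-up data).
# One feature (is_italian) and a label (mafia_member); every positive
# example is Italian, which is exactly the skew described in the text.
X_train = np.array([[1], [1], [1], [0], [0], [0]])
y_train = np.array([1, 1, 1, 0, 0, 0])  # all mafia members in the data are Italian

model = LogisticRegression().fit(X_train, y_train)

# The model has learned the spurious rule "Italian => mafia"
# and will now apply it to every Italian it sees.
print(model.predict([[1]]))  # [1] - flags any Italian as mafia
print(model.predict([[0]]))  # [0]
```

Deploy that model at scale and the harm is no longer a data quirk: it is an automated decision applied to everyone.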

However, though we usually use the term bias in connection with racism, sexism, and other social issues, in the data science field bias is simply a mathematical concept: it means misrepresenting the distribution. It is defined as the difference between a model's average prediction and the target value, and its contribution to the total error is known as the "error due to squared bias". Bias error occurs when assumptions are simplified to make functions easier to approximate. The problem is that we tend to generalize: we assume that a small group we have been exposed to is representative of the entire distribution. This is a big problem because, of course, it penetrates our culture, our decisions, and our AI models.
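A small simulation can illustrate the mathematical sense of the term. In the sketch below (the true function, noise level, and model choice are all invented for illustration), a straight line is repeatedly fitted to noisy samples of a curved function; because the model's assumptions are too simple, its average prediction systematically misses the target, and that squared gap is the bias error:

```python
import numpy as np

# Minimal bias-variance demonstration (illustrative assumptions only).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
f_true = x ** 2  # the target values

predictions = []
for _ in range(500):
    y = f_true + rng.normal(0.0, 0.1, size=x.shape)  # a fresh noisy training set
    slope, intercept = np.polyfit(x, y, deg=1)       # oversimplified linear model
    predictions.append(slope * x + intercept)
predictions = np.array(predictions)

# The gap between the average prediction and the target, squared:
bias_squared = np.mean((predictions.mean(axis=0) - f_true) ** 2)
variance = np.mean(predictions.var(axis=0))
print(f"bias^2 = {bias_squared:.4f}, variance = {variance:.5f}")
```

The straight line can never represent the curve, so no amount of extra data removes the error; only a model with better assumptions does.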

These are just some of the examples and concepts supporting the importance of codifying a policy to regulate AI applications. It is necessary to set global standards for artificial intelligence and to make sure there are ethical rules that everyone must follow, ensuring that AI is never weaponized against humanity. AI must complement humans, not replace them.

You might also like to read Unlocking the Value of AI in Business Applications with ModelOps

How does the EU AI Act work?

The EU AI Act introduces a sophisticated "product safety regime" constructed around four risk categories. It imposes requirements for market entry and certification of high-risk AI systems through a mandatory CE-marking procedure. This pre-market conformity regime also applies to the datasets used to train, test, and validate machine learning models.

The proposed AI Act combines a risk-based approach, built on the criticality pyramid, with a modern multi-level enforcement mechanism: as risk increases, stricter rules apply. Applications posing unacceptable risks are banned from the EU market outright. Fines for violating the rules can reach up to 6% of a company's global turnover.

Obligations range from non-binding, self-regulatory impact assessments accompanied by codes of conduct to heavy, externally audited compliance requirements throughout the entire AI application life cycle.


The AI Act distinguishes between AI systems posing:

  • unacceptable risk
  • high risk
  • limited risk
  • low or minimal risk

Under this approach, AI applications would be regulated only as strictly as necessary to address specific levels of risk. Let's see some examples; an illustrative mapping of the tiers to their obligations follows the list.

  • High-risk AI systems would be authorized for commercialization and use, but subject to a set of requirements and obligations, particularly on conformity assessment, risk management, testing, data use, transparency, human oversight, and cybersecurity.
  • AI systems presenting only limited risk, such as chatbots or biometric categorization systems, would only need to comply with basic transparency obligations.
  • AI systems presenting only low or minimal risk could be developed and used in the EU without additional legal obligations.
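As a reading aid only, the tiers and the obligations attached to them can be summarized in a small lookup table. The tier names follow the proposal, while the obligation lists are my simplification for demonstration, not the legal text:

```python
# Illustrative mapping of the Act's four risk tiers to the obligations
# described above (a simplification, not the regulation itself).
RISK_TIER_OBLIGATIONS = {
    "unacceptable": ["banned from the EU market"],
    "high": ["conformity assessment", "risk management", "testing",
             "data-use transparency", "human oversight", "cybersecurity"],
    "limited": ["basic transparency obligations"],
    "low/minimal": ["no additional legal obligations"],
}

def obligations_for(tier: str) -> list[str]:
    """Look up what a system in a given risk tier must comply with."""
    return RISK_TIER_OBLIGATIONS[tier]

print(obligations_for("limited"))  # ['basic transparency obligations']
```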

Similar technologies can fall into different categories depending on their use. Facial recognition systems, for example, are increasingly used to identify people and can be very useful for ensuring public safety and security. But they can also be intrusive, and the risk of models being biased and making mistakes is high. The use of such technologies can therefore affect citizens' fundamental rights, leading to discrimination and violations of the right to privacy, and can even enable mass surveillance. That is why the Commission's artificial intelligence law differentiates these systems into high or low risk based on their use.

You might also like to read How ModelOps Helps You Execute Your AI Strategy

The importance of putting in place Model Governance for AI

As explained above, the use of AI, with its specific characteristics (e.g. opacity, complexity, data dependence, autonomous behavior), can negatively affect a number of fundamental rights and the safety of users. To address these concerns, the proposed AI Act follows a risk-based approach whereby legal intervention is tailored to a concrete level of risk.

The regulation of AI models is thus becoming a global concern, and implementing a business capability like ModelOps is essential to reduce business risk through ongoing insight into model risk and performance, and to avoid bias error. ModelOps allows companies to establish and apply technical, business, and above all regulatory compliance controls, and to standardize the model validation process, thanks to a rules engine governing large-scale artificial intelligence initiatives.

By using an Enterprise ModelOps Platform to govern and scale AI initiatives, companies can do the following (a minimal inventory-record sketch follows the list):

  • establish clear policies regarding the standards that models must meet across all contexts, including business metrics, statistical metrics, internally generated compliance metrics, and external regulations
  • extensively document the purpose of each model, its metrics, how it was developed, and how it needs to be deployed
  • identify all necessary approvals and approvers as the model moves from concept to development and into production
  • capture all artifacts and metadata associated with the model, including code, training data, test cases, and results
  • log all activities that happen with the model from its release to production through its retirement, including deviations from KPIs, remediations such as code changes or retraining, and all approvals that took place at each step
Source: ModelOp
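A minimal sketch of what such an inventory record might look like is below. Every field and method name here is hypothetical, since each ModelOps platform defines its own schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical model-inventory record covering the five governance items above.
@dataclass
class ModelRecord:
    name: str
    purpose: str                          # documented intent and deployment needs
    policies: list[str] = field(default_factory=list)        # standards the model must meet
    metrics: dict[str, float] = field(default_factory=dict)  # business/statistical/compliance metrics
    artifacts: dict[str, str] = field(default_factory=dict)  # code, training data, test cases, results
    approvals: list[str] = field(default_factory=list)       # sign-offs from concept to production
    activity_log: list[tuple[datetime, str]] = field(default_factory=list)

    def log(self, event: str) -> None:
        """Record post-release activity: KPI deviations, retraining, approvals."""
        self.activity_log.append((datetime.now(), event))

record = ModelRecord(name="credit-scoring-v2",
                     purpose="score consumer loan applications")
record.approvals.append("validation sign-off: risk team")
record.log("deployed to production")
record.log("KPI deviation: AUC fell below agreed threshold; retraining scheduled")
```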

An example of a block diagram of an Enterprise ModelOps Platform

Most regulatory bodies require that the personnel who test and validate models be separate from those who develop them. A well-designed ModelOps platform includes a continuously updated model inventory that captures all the necessary documentation and metadata for every model, including any changes and approvals.

Conclusion: Things you should do about the EU AI Act!

As the first comprehensive regulation of AI, the EU AI Act establishes a clear framework for the responsible and accountable use of AI. It will apply to all companies whose business relies on AI in any way.

It’s important to have a plan in place as the EU AI regulation goes into effect.

This is why it is essential to develop a compliance methodology that ensures safe data protection and reduces the risk of bias error by establishing an automated validation process for the various types of model testing that need to be performed before you launch your product on the market. ModelOps can provide your organization with an unbiased model-testing capability that ensures you meet these new regulations and standards. Moreover, it can keep your models aligned with the latest technical and legal requirements.
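As an illustration of what such an automated validation gate could look like, here is a hedged sketch; the check names and thresholds are hypothetical placeholders, not requirements taken from the AI Act:

```python
# Hypothetical pre-launch validation gate (illustrative thresholds only).
def validate_before_launch(metrics: dict[str, float]) -> list[str]:
    """Return the names of failed checks; an empty list means the model may ship."""
    checks = {
        "accuracy": lambda v: v >= 0.85,                # statistical performance floor
        "demographic_parity_gap": lambda v: v <= 0.05,  # bias/fairness control
        "data_drift_score": lambda v: v <= 0.10,        # training vs. production data consistency
    }
    return [name for name, passed in checks.items()
            if name not in metrics or not passed(metrics[name])]

failed = validate_before_launch({"accuracy": 0.91,
                                 "demographic_parity_gap": 0.02,
                                 "data_drift_score": 0.07})
print("blocked checks:", failed or "none")
```

A gate like this runs on every candidate release, so a model that fails a fairness or drift check never reaches production in the first place.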

Follow me for daily updates on Technology and Innovation

https://bit.ly/m/ingliguori


Giuliano Liguori is a technologist and an influencer in the digital transformation and artificial intelligence space. He writes for CodeX.