Is Trustworthy AI the next GDPR?

Patrick Weissert · Published in Trustworthy AI · Jun 11, 2020


When GDPR was about to come into effect in 2018, it triggered a mad scramble by companies big and small to work towards compliance. It also created a lot of confusion about what companies could and should do, and what they could not and should not be doing. Smaller companies in particular, which lacked the resources to employ experts in the area, were left out in the cold, guessing their way to compliance through a mix of trial and error, media coverage, and advice from friends.

About a year later, however, it became clear that GDPR had an effect well beyond the European Union. Globally operating companies often improved their data protection and privacy frameworks for all customers, not just those based in the European Union. And regulators around the world followed suit, introducing data protection regulations that built on and adapted GDPR to their regions, such as the CCPA in California.

While GDPR itself will need to continue to evolve to become more efficient, practical, and easier to implement, particularly for smaller businesses, the European Commission is also working on a range of new regulations in other areas that could have a similarly wide-ranging impact. One of these is the regulatory framework being developed for artificial intelligence technologies.

Like big data and data protection, artificial intelligence is, and will increasingly become, a key enabler of the digital economy and society. As the volume of data being processed keeps growing, so does the need to automate data processing and data-driven decision making.

Besides developing policy proposals on evolving AI capabilities in EU markets, the European Commission’s workstream on ethical AI (the “High-Level Expert Group on Artificial Intelligence” or “AI HLEG”) has so far centered on developing guidelines for what it calls “Trustworthy AI”: AI that people can trust to work to their advantage, supporting them and augmenting their capabilities, while minimizing negative side effects and risks. The guidelines so far include seven principles of such ethical, trustworthy AI:

  • Human agency & oversight: Measures such as human-in-the-loop, human-on-the-loop or human-in-command to ensure that the AI system remains in human control.
  • Technical robustness and safety: Requirements to ensure the safe and reliable operation of the AI system, many familiar to those working with GDPR.
  • Privacy and data governance: Requirements around data protection, data accuracy and data access, familiar from GDPR but applied specifically to AI systems.
  • Transparency: A specific challenge with many AI technologies is their opacity (the ‘black box’ effect), complexity, unpredictability and partially autonomous behaviour, which make it hard to understand why the system made certain decisions or took certain actions. To address this, there will be requirements for traceability, explainability and transparency about capabilities and limitations.
  • Diversity, non-discrimination and fairness: Another key issue with many AI technologies, and one that has already received a lot of media attention, is their potential bias due to the way their models are built or their training data is selected, leading to discriminatory decision making, for example in automated recruiting platforms or financial services applications.
  • Societal and environmental well-being: Ensuring that AI technologies are designed with a view to sustainability and to their ecological and social impact.
  • Accountability: Ensuring that responsibility and accountability for AI systems and their outcomes are clear, for example in cases where Business A runs a cloud-based, AI-powered service that Business B uses to run their business.

Given how early-stage and rapidly evolving the application of AI technologies still is in many businesses, it is clear that these guidelines, once transformed into regulation and brought into effect, will have a tremendous impact on many ML teams, which will suddenly need to look at issues like transparency and bias that often do not play a key role today. They will need to develop tools, processes and structures that ensure they meet these goals.
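What might such tooling look like in practice? As a rough illustration only (nothing here is prescribed by the AI HLEG guidelines), below is a minimal Python sketch of one widely used bias check, the “four-fifths rule” for disparate impact. The model outputs, group labels and 0.8 threshold are all hypothetical assumptions:

```python
# Illustrative sketch only: a simple disparate impact check on model
# predictions. The data, group labels and 0.8 threshold are hypothetical
# examples, not requirements from the AI HLEG guidelines.

def disparate_impact_ratio(predictions, groups, positive=1):
    """Ratio of positive-outcome rates between the least- and
    most-favoured group; values below ~0.8 are a common red flag."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(1 for p in outcomes if p == positive) / len(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-model output: 1 = recommended for interview.
predictions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(predictions, groups)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: flag the model for human review.")
```

A check like this is deliberately simple; in practice teams would combine several fairness metrics with documentation and human-in-the-loop review to address the oversight and accountability principles above.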

One highly relevant difference from GDPR, however, is that, at least in the current white paper on AI, the approach is to focus Trustworthy AI requirements on businesses that operate or employ what is called “high-risk AI”, so far defined as an AI technology that is used in a high-impact sector like healthcare or public services AND is used “in a manner that significant risks are likely to arise”. This could narrow the set of businesses that need to comply and have their compliance audited, avoiding the overload of both businesses and regulators seen in the roll-out of GDPR.

So, could Trustworthy AI be the next GDPR? Maybe. The consultation process on the guidelines is still in progress, it may take some time for regulation to come into effect, and the requirements may still change significantly along the way. But it is clear that one day businesses operating in the EU, or serving EU citizens with systems that leverage AI technologies, will be affected by these requirements. And this, of course, could again trigger a global upgrading of AI systems to higher standards.
