Artificial Intelligence: Contrasting the regulatory approaches taken by Singapore and the EU

Darshita
Published in FinTech & Law
Apr 6, 2020 · 9 min read

Introduction

Artificial intelligence (AI) has been increasingly deployed and scaled across industry sectors in Singapore as part of the country’s Smart Nation Initiative toward a digital economy. Against this backdrop, Singapore released the second edition of its Model AI Governance Framework on 21 January 2020, which adopts a self-regulatory approach for organisations to address ethical and governance issues when implementing AI technologies. By contrast, the European Union (EU) has taken bold steps to develop a regulatory framework for AI systems and the businesses that make them. In light of these differing approaches, this article contrasts Singapore’s model framework with the EU’s regulatory framework and proposes that Singapore could benefit from introducing regulations for AI.

Singapore vs. the EU: Diverging Approaches to Regulating AI

Singapore’s Model Framework

Singapore has taken a voluntary and self-regulatory approach to AI governance. Singapore’s Model AI Governance Framework is based on two guiding principles that aim to promote trust and understanding in the use of AI technologies: (i) organisations using AI in decision-making should ensure that the process is explainable, transparent and fair, and (ii) AI solutions should be human-centric, with the protection of the interests, well-being and safety of persons as the primary consideration in the design, development and deployment of AI.

Under the model framework, organisations are encouraged to adopt measures that promote the responsible use of AI in four key areas:

(i) developing internal governance structures and measures to incorporate ethical values and to manage the risks and responsibilities relating to algorithmic decision-making;

(ii) determining an appropriate level of human involvement in AI-augmented decision-making;

(iii) managing operations by minimising bias in data sets and AI models and by adopting measures such as explainability, robustness and regular tuning of AI models; and

(iv) developing strategies for stakeholder interaction and communication, such as making AI policies known to users and enabling users’ feedback.

In other words, organisations are entirely responsible for the ethical, governance and consumer protection issues relating to the AI deployed, and are encouraged to periodically conduct a risk impact assessment to continually identify, review and mitigate risks that are relevant to their AI technologies.

The model framework also encourages organisations to develop a set of ethical principles and incorporate them in risk management structures when they deploy AI in their processes, products and/or services. To this effect, the model framework provides a compilation of AI ethical principles that organisations may refer to.

The European Regulatory Framework

By contrast, the EU is developing a “European regulatory framework” for AI systems and the businesses that make them. In addition to the Ethics Guidelines for Trustworthy AI, which set out the key requirements that AI systems should meet in order to be deemed trustworthy for deployment and use, the European Commission issued a white paper on 19 February 2020 detailing, inter alia, policy measures and mandatory legal requirements that would apply to “high risk” AI applications.

The white paper acknowledges that while existing EU legislation (e.g., EU product safety and liability legislation, including sector-specific rules) may apply to AI technologies, it may not adequately address the risks that AI systems can create. The European regulatory framework therefore follows a risk-based approach, so that regulatory intervention is proportionate to the risk profile of the AI application. Whether an AI application is “high risk” depends on two cumulative criteria:

(i) the type of economic sector (e.g., healthcare, transport, energy and parts of the public sector) in which the AI application is deployed. The list of sectors in which AI applications would be regarded as “high risk” will be published in the European regulatory framework and will be reviewed and updated in light of relevant developments in practice; and

(ii) the intended use of the AI application, and whether it is used in a manner in which significant risks are likely to arise. AI applications that pose risks of injury, death or significant damage, or that produce a legal or similarly significant effect on the rights of an individual or an organisation, could be assessed as high risk based on their impact on affected parties.

The second criterion serves as a filter: it prevents every AI application deployed in the relevant economic sectors from being deemed “high risk”, acknowledging that not every use of AI in those sectors necessarily involves significant risks. For example, an appointment-scheduling system in the healthcare sector may not present risks significant enough to be regarded as a “high-risk” application requiring legislative intervention.
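To make the cumulative nature of the two criteria concrete, the following minimal sketch (in Python) shows how an organisation might encode such a screening test internally. The sector list and risk flags are hypothetical placeholders for illustration, not the Commission’s actual lists or definitions:

```python
# Illustrative sketch only: the sector list and risk flags below are
# hypothetical placeholders, not the Commission's actual criteria.

# Criterion (i): sectors treated as sensitive in this hypothetical example.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}


def involves_significant_risk(intended_use: dict) -> bool:
    """Criterion (ii): does the intended use pose significant risks?

    A placeholder test covering risks of injury or death and legal or
    similarly significant effects on individuals or organisations.
    """
    return bool(
        intended_use.get("risk_of_injury_or_death")
        or intended_use.get("legal_or_similar_effect")
    )


def is_high_risk(sector: str, intended_use: dict) -> bool:
    """Both criteria must be met, because they are cumulative."""
    return sector in HIGH_RISK_SECTORS and involves_significant_risk(intended_use)


# A hospital appointment-scheduling system: sensitive sector, no significant risk.
print(is_high_risk("healthcare", {"risk_of_injury_or_death": False}))  # False
# A triage system that influences treatment decisions: both criteria met.
print(is_high_risk("healthcare", {"risk_of_injury_or_death": True}))   # True
```

Because the criteria are cumulative, the scheduling example fails the second test and is screened out, even though it sits in a sensitive sector.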

The European regulatory framework highlights certain mandatory requirements on “high risk” AI applications, which include:

(i) keeping accurate records of data sets;

(ii) ensuring that training data sets are sufficiently representative of the EU population (e.g., all relevant dimensions of gender, ethnicity and other possible grounds of prohibited discrimination are appropriately reflected) and sufficiently broad to cover all relevant scenarios, so as to avoid dangerous situations (a simple illustrative check is sketched after this list);

(iii) ensuring that the data meets the standards set in applicable EU safety rules;

(iv) ensuring personal data and privacy are adequately protected in the use of AI-enabled products and services;

(v) providing users with clear and adequate information on the AI system’s capabilities and limitations, and informing them that they are interacting with an AI system;

(vi) ensuring AI systems are robust and accurate during all life cycle phases;

(vii) ensuring AI systems are resilient against both overt attacks and subtle attempts to manipulate data or algorithms;

(viii) ensuring that biometric data for remote identification is gathered and used only for purposes where such use is duly justified, proportionate and subject to adequate safeguards; and

(ix) ensuring AI systems have an appropriate level of human oversight.
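Requirement (ii) above is the most directly testable of these. As a rough illustration only (the group labels, benchmark shares and tolerance below are hypothetical assumptions, not figures drawn from the white paper), a naive representativeness check over a training set might look like this:

```python
# Illustrative sketch only: a naive check of whether a training set's group
# proportions roughly track population benchmarks. The group labels, benchmark
# shares and tolerance are hypothetical assumptions, not EU-mandated values.
from collections import Counter


def representation_gaps(samples, population_shares, tolerance=0.05):
    """Return groups whose observed share deviates from the benchmark share
    by more than `tolerance` (absolute difference in proportions)."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": observed, "expected": expected}
    return gaps


# Hypothetical example: gender labels in a training set vs. a 50/50 benchmark.
training_labels = ["male"] * 800 + ["female"] * 200
print(representation_gaps(training_labels, {"male": 0.5, "female": 0.5}))
# {'male': {'observed': 0.8, 'expected': 0.5}, 'female': {'observed': 0.2, 'expected': 0.5}}
```

In practice, representativeness would need to be assessed across many more dimensions and against properly sourced population statistics; the point of the sketch is only that such checks can be made routine and auditable.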

Furthermore, the European Commission intends to introduce a “prior conformity assessment”, which would verify whether “high risk” AI applications comply with some or all of the above-mentioned mandatory requirements. The prior conformity assessment would involve procedures for testing, inspecting and certifying AI applications, including checks on the algorithms and training data sets used to develop them.

However, it remains unclear to what extent these mandatory requirements will be imposed by the EU on “high-risk” AI applications, and it would be important for the European Commission to provide a non-exhaustive list of case studies and examples for guidance.

For AI applications that do not qualify as “high-risk” and therefore would not be subject to the mandatory requirements listed above, organisations would have the opportunity to be part of a voluntary labelling scheme. Under this scheme, organisations may voluntarily subject themselves to certain requirements and be awarded a quality label or certification for their AI application. This would enable users to recognise AI-enabled products and services that are trustworthy and in compliance with standardised EU-wide legislative benchmarks.

Concluding Thoughts — Call for Regulation of AI in Singapore?

There are varying approaches to establishing adequate legislative intervention in AI. Singapore’s model framework takes a self-regulatory approach; however, it may not carry sufficient legal authority to achieve its intended result of ensuring that organisations implement and refine their AI technologies in a responsible and fair manner. This article proposes that Singapore could benefit from introducing certain regulations for AI, as doing so would ensure that the design, development and deployment of AI technologies in the country (at every phase of the AI lifecycle) are grounded in the protection of human interests, safety and well-being over the commercial interests of organisations. Several supporting reasons are set out below.

Prevents Selective Adherence to Requirements

Regulation of AI would incentivise organisations in Singapore to do more than the bare minimum required under the voluntary model framework to promote good data management practices and ethically driven AI systems. Currently, it is difficult to verify that organisations are holding themselves accountable and complying adequately with the framework’s requirements; organisations may be selective in how they choose to safeguard data quality and minimise inherent biases in training data sets.

For example, Amazon deployed an AI recruitment system trained on data from job applications submitted over a 10-year period, much of which came from male applicants. The system had displayed gender bias when sorting job applicants since 2015, yet it was only disbanded in 2017. Similarly, when Google first launched its AI-powered “Google Assistant”, there was a stronger preference for a female voice, which led to its text-to-speech systems being trained on more female voice data than male voice data. This made it difficult for Google to launch a similar voice assistant with a male voice, as the AI system performed better on female voices.

It would appear that allowing organisations to independently and freely determine data quality or the degree of human intervention and oversight necessary may perpetuate systemic biases. If organisations do not hold themselves accountable and are slow to identify these biases, they may fail to protect fundamental human interests, safety and well-being.
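One reason such biases persist is that nothing in a voluntary framework compels organisations to measure outcomes across groups. As a rough, purely illustrative audit (the data, group labels and 0.8 ratio heuristic below are assumptions for the example, not requirements of the model framework), selection rates by group could be compared as follows:

```python
# Illustrative sketch only: auditing a model's outcomes for group disparity.
# The data and the 0.8 threshold are assumptions for the example; the ratio
# test is a common heuristic, not a requirement of the model framework.


def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}


def disparity_ratio(rates):
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())


# Hypothetical screening outcomes for two applicant groups.
outcomes = (
    [("male", True)] * 60 + [("male", False)] * 40
    + [("female", True)] * 30 + [("female", False)] * 70
)
rates = selection_rates(outcomes)
print(rates)                   # {'male': 0.6, 'female': 0.3}
print(disparity_ratio(rates))  # 0.5, well below the common 0.8 heuristic
```

A routine check of this kind could surface disparities like those reported in the Amazon example well before a system is retired, which is precisely the sort of accountability that regulation, rather than voluntary guidance, could mandate.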

Addresses Liability Issues Surrounding AI Deployment

Singapore’s model framework also does not address risk and liability allocation in the event of defects in AI systems or cyberattacks on organisations deploying AI. This limits the guidance and resources available to organisations and/or injured parties for determining, inter alia, the burden of proving causation, vicarious liability and contributory negligence in relation to the defective performance of autonomous AI technologies.

For example, a self-driving Uber car hit a jaywalking pedestrian because its AI system could not classify an object as a pedestrian unless the object was near a pedestrian crossing. There was a safety driver in the car to enable human intervention; however, the safety driver failed to watch the road and take control of the vehicle before the accident. The accident raises the question of whether liability lies with flaws in the design and data quality of the AI (i.e., with the AI developers), with the safety driver’s omission, or with Uber’s deployment of the technology. Further, if the AI system was autonomous, should it have a separate legal personality?

The increasing complexity of supply chains and the variety of economic operators in the life cycle of an AI technology make it important to consider rules or regulations on risk and liability allocation for AI. This would enable an injured party to build an effective case and provide opportunities for legal redress.

Wide Territorial Effect of the European Regulatory Framework

The European regulatory framework is intended to have a wide territorial effect and may apply to “all relevant economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU or not.” In this regard, the various actors involved in the lifecycle of a “high-risk” AI system (developers, manufacturers, distributors, importers, service providers, professionals, and users or deployers of the AI-equipped product, AI system or service) may be required to comply with the European regulatory framework, depending on which actor(s) are best placed to address any potential risks.

Organisations offering AI-enabled products or services internationally would be held to a stringent standard of compliance in the EU. However, as these organisations are not required to comply with similar standards in Singapore, they may be incentivised to test and develop in Singapore AI technologies that do not meet the EU’s legislative benchmarks. For example, taking advantage of China’s lax regulations, Infervision collected large quantities of medical data from Chinese patients to test and train the algorithms used to develop its medical-image-processing AI software. The software was thereafter licensed to U.S. hospitals to identify cancerous lung nodules in CT scans. Had the AI developers worked solely with U.S. hospitals to gather data and train the models, compliance costs would have been considerably higher; jurisdictions with lighter requirements are therefore attractive places to develop riskier systems.

Differing levels of accountability and compliance requirements between the EU and Singapore may help to promote more AI innovation in the latter. Nonetheless, organisations may choose to train and test riskier AI-powered products and services in Singapore and practise selective adherence to the requirements under the model framework. Singapore’s model framework may then fall short of the impact initially intended.

The opinions expressed in this article are my own and do not represent the opinions of my employer. This article does not constitute legal advice or a legal opinion on any matter discussed and, accordingly, it should not be relied upon. It should not be regarded as a comprehensive statement of the law and practice in this area. If you require any advice or information, please speak to a suitably qualified lawyer in your jurisdiction. The author does not accept or assume any responsibility or liability in respect of this article.
