Governance of AI in the years to come

GoodAI · Published in GoodAI Blog · Oct 10, 2018

This blog post is based on discussions during a workshop with members of the GoodAI team, and Frank and Virginia Dignum.

Building bridges to the future

SUMMARY

  • Regulation is vital for AI, especially considering the potential power of the technology
  • Current software regulation may not be robust enough; transparency is key to good practice
  • Two options for enforcement are binding regulations and incentivisation
  • Better education is needed for the general public and key stakeholders

With narrow AI becoming ever more powerful, and the possibility of general AI on the horizon, it is vital that adequate governance models are put in place to ensure safety in the development process today and beyond. The power of AI technology and its potential to disrupt economic and political structures make the need even greater.

Building “good” bridges

Software development is a fast-paced environment, and it is hard for educational curricula to keep up with the rate of change. Programmers can become AI programmers in a short period of time, and software development tends to take a relaxed approach: the focus is often on giving users what they want to see and selling a product. In many situations, bugs and other problems are treated as the norm, something to be fixed later.

However, when the stakes get higher and the technology more powerful, we will no longer have the luxury of making these mistakes. Consider engineers building bridges: no one would commission a bridge that was declared finished before all of its safety requirements had been met. We should take the same attitude with software and raise the bar of AI development. Such high standards are already in place in certain industries, for example medical or aeroplane software. Below we discuss two routes towards a more transparent approach: binding regulation and incentivisation.

The requirements should cover not only the robustness, flexibility, and efficiency of the system's main purpose, i.e. its objective function, but also its ethical and social dimensions.

Transparency and binding regulations

Unsurprisingly, a key driver for change tends to be rules enforced from the top, by either governments or other regulatory bodies. It could be useful to start enforcing rules of transparency for AI algorithms and processes, so that regulators can at least ensure they are safe. In addition, there should be transparency about the data used for training. This is bound to upset developers at first, but it could have huge positive implications for the future.

Take the example of catalytic converters in cars: there was little uptake of them in the USA when they were first invented. However, in 1975 the U.S. Environmental Protection Agency introduced stricter emissions regulations, and almost all cars from then on were fitted with converters. Car manufacturers could not sell their products without complying, which led to a significant reduction in environmental pollution.

In the case of AI, there would need to be some form of top-down regulation that values transparency and safety over profit. This transparency would need to be defined, and it does not have to mean opening up the copyrighted product itself. For example, with medicine we do not know exactly what is in each pill; we trust third-party organisations to check that it is safe for human consumption. Standards for software are often fuzzy and difficult to impose, so a set of product-by-product regulations for AI technology could be extremely useful.

An example could be an EU law requiring companies to meet certain prerequisites if they wish to sell their algorithms to governments (and requiring governments to buy only such algorithms). This could have a global impact beyond the EU. For example, there are estimates that US companies spent more money complying with the EU's GDPR than European companies did, because they needed to comply to continue doing business. However, one remaining problem is that checking algorithms and processes is not as easy as simply checking whether a catalytic converter is fitted.

Certifications and incentives

Another option with potential is certification and other forms of incentivisation. If a regulatory body can certify that a company's algorithms have been scrutinised and are considered “safe”, it could give that company an edge in the marketplace, both in terms of PR and in selling products. The example of organic products, or free-range eggs, shows that consumers are willing to pay a premium when the ethics behind the production of food reach a certain standard and they can easily identify this property of the product. In turn, the producer can charge more. The same could work for tech companies. Furthermore, governments could provide incentives for companies to gain certification.

However, these guidelines need to be robust, not just a set of targets for businesses to tick off. There needs to be a balance between creating a framework to comply with and maintaining high quality, to avoid what is known as Goodhart's Law. To quote Marilyn Strathern: “When a measure becomes a target, it ceases to be a good measure.” We also need to ensure such frameworks are continuously updated, given the fast pace of development in software.

Education

For any of the ideas above to work, there is a need for improved education of the general public, programmers, and regulators. The general public needs to gain a better understanding of AI and of why it is important for these organisations to comply with regulations. A good first step to speed up this process would be educating media reporters, who have significant influence over public opinion.

Companies also need to be clearer about the limitations of their products, educating consumers on what their products can and cannot do. For example, the “self-drive” mode in some modern cars does not mean that the car will fully drive itself.

Formal technical and ethical education should also be improved, from primary school to university level. Programmers need adequate computing skills to be able to scrutinise their own work and make sure they are reaching the highest possible standards. Furthermore, regulators need well-trained supervisors who can thoroughly assess the work of programmers.
