Building on the idea of an ethical framework for a Good AI Society

GoodAI
Published in GoodAI Blog · 6 min read · Dec 21, 2018

In the recent paper An Ethical Framework for a Good AI Society, Floridi et al. introduce some of the opportunities and risks of AI for society. They then outline five “ethical principles” that should underlie any development and adoption of AI, and make 20 recommendations to policymakers and stakeholders that can lead to a “Good AI Society.”

At GoodAI we feel strongly that key policymakers and stakeholders should think about the potential impacts of AI and how they can act to ensure that we are on a path toward beneficial and non-malicious AI. Papers like the one mentioned are vital for this process. Below we build on some of the ideas in the paper and begin to visualize what some of the recommendations might look like.

A vision for a United AI

The document recommends “the development of a new EU oversight agency responsible for the protection of public welfare through the scientific evaluation and supervision of AI products, software, systems or services”.

We regard this “agency” or organization as vital going forward. In a recent blog post, we introduced the idea of a “United AI” which would take on similar functions. Below, we envision how such an organization could be taken to a global scale and fulfill wider functions. It would be overly ambitious to believe that such a global organization could be easily established; however, it is not impossible in the long run if we take the right baby steps and start nurturing and elaborating such ideas in public discourse.

Who would be involved?

United AI would be an international co-development governance structure in which members would co-create value and share the benefits of AI. An alliance like this is likely to be founded by private actors, because they have greater flexibility; however, if the alliance proves beneficial, states are likely to follow suit soon after.

The emphasis should be on collaboration in order to avoid a winner-takes-all AI race scenario (we have addressed this issue in greater detail in another blog post [3]).

Although the larger actors will be companies or states, we would also recommend opening membership up to individuals. The alliance should avoid elitism and include voices from all sectors, but all members should commit to certain United AI values.

Why would they want to join?

Floridi et al. put great emphasis on incentivization. Six of their recommendations involve financial incentives for various issues, including improving “sustained, increased and coherent European research effort.”

Members joining United AI would be incentivized by:

  • Being involved in AI governance of the future
  • Having access to shared AI capabilities and reaping the shared financial benefits
  • Being part of the future AI economy
  • Individual insurance against “not winning the AI race”

Functions of United AI

Testing center for AI

One of the key functions of United AI would be to establish a joint testing center for AI. The objective would be to verify AI and robotic models and determine whether they are safe, whether their limits are clearly communicated to end-users, and whether they are instilled with the “right” ethics or values.

Users need to know:

  • What AI can and cannot do
  • When it will fail
  • If it is ethically aligned (according to IEEE recommendations [4])
  • What objective function(s) it is optimizing for (optimizing for profit vs user’s wellbeing)
  • What side effects its use may cause
  • Legal conformity, e.g. GDPR

Overall, if the models are “black boxes,” their behavior needs to be communicated to end-users, and the ability to do so needs to be independently verified. A minimal sketch of such a disclosure follows.
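To make this concrete, here is a minimal sketch, in Python, of the kind of machine-readable disclosure a testing center could ask vendors to submit, covering the checklist above. The class, field names and example values are entirely hypothetical illustrations, not a proposed standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelDisclosure:
    """Hypothetical disclosure an AI vendor might submit to a joint testing center."""
    name: str
    intended_capabilities: List[str]   # what the AI can do
    known_limitations: List[str]       # what it cannot do / when it will fail
    objective_functions: List[str]     # what it is optimizing for
    ethical_alignment: str             # e.g. a self-assessment against IEEE guidance [4]
    known_side_effects: List[str]      # side effects its use may cause
    legal_conformity: List[str] = field(default_factory=list)  # e.g. ["GDPR"]

    def missing_items(self) -> List[str]:
        """List the disclosure items left empty, so incomplete submissions can be flagged."""
        return [attr for attr, value in vars(self).items() if not value]

# Example: a submission that omits its side effects and legal conformity
disclosure = ModelDisclosure(
    name="loan-scoring-v2",
    intended_capabilities=["rank loan applications by estimated default risk"],
    known_limitations=["unreliable for applicants with no credit history"],
    objective_functions=["expected lender profit"],
    ethical_alignment="self-assessed against IEEE Ethically Aligned Design",
    known_side_effects=[],
)
print(disclosure.missing_items())  # ['known_side_effects', 'legal_conformity']
```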

The testing center would also, as recommended by Floridi et al., “develop auditing mechanisms for AI systems to identify unwanted consequences, such as unfair bias.”
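As one illustration of what such an auditing mechanism could include (a sketch under our own assumptions, not the authors’ proposal), the snippet below computes a demographic parity gap, i.e. the difference in positive-outcome rates between groups, on toy audit data. Real audits would combine many such metrics with qualitative review.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two groups.
    A large gap is a signal for the auditor to investigate, not proof of unfair bias."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy audit data: model decisions (1 = approved) and each applicant's group
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'A': 0.6, 'B': 0.4}
print(gap)    # approximately 0.2
```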

Wider institutional roles

In the recommendations by Floridi et al. there is also a key focus on “development,” whether it be legal frameworks or agencies. We feel it is important that fresh institutions are created to carry out these developments, prompting a transition from old institutions to new ones, rather than reforming defunct institutions, which could prove costly and imply a great deal of bureaucracy.

United AI would form a new AI-powered governance apparatus that would completely transform old institutions. This does not necessarily mean responsibility would change hands; it could simply mean the modernization of inefficient or inflexible apparatus.

With the “AI revolution” upon us, we should also prepare for a scenario in which a power shift weakens or dissolves traditional institutions completely, including, for example, the future role of states.

There is an emerging discourse that governments, through taxation of private entities, will provide a form of support (for example, universal basic income) to citizens who have lost their jobs to AI and automation [5]. However, thanks to transformative AI advancements, private entities could become self-sustaining, with a greatly diminished need (or no need at all) for state-provided services or for trade with other entities. In that case, states would have to create mechanisms in advance to ensure they are not cut out of the future AI economy if they wish to ensure the wellbeing of their people [6].

Next steps

Trust-building workshops

Before introducing an official, concrete structure for United AI, it would be necessary to start with small international trust-building workshops. Without putting pressure or restraints on stakeholders, these ice-breaking workshops would lead to a United AI-type initiative through gradual trust-building.

The workshops would strive for very concrete, tangible results and present practical solutions: for example, scientific and technological exchanges, guiding principles (“do’s and don’ts”) for the use of AI, risk assessments for AI, and collaborative development of the alliance’s guidelines and operational structure.

The workshops can use futuristic roadmapping, simulation games and wargames [7] to model the interactions of different stakeholders and better analyze AI race dynamics and societal developments. The resulting practical, visual demonstrations of desirable and undesirable futures will add great educational value and can be used in educational workshops (see below).
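To give a flavour of what such modelling can look like at the simplest possible level, here is a deliberately toy Python sketch of race dynamics; every actor name and parameter is a made-up assumption for illustration, not a result of any workshop.

```python
import random

# Toy simulation of AI race dynamics, in the spirit of the workshop exercises above.
# Each actor repeatedly either "cooperates" (invests in safety, progresses more slowly)
# or "races" (cuts corners for speed at some risk of causing a mishap).

ROUNDS = 50
MISHAP_PROB = 0.05     # assumed per-round chance a corner-cutting actor causes a mishap
MISHAP_PENALTY = 30.0  # assumed progress lost by every actor after a mishap

def simulate(strategies, seed=0):
    rng = random.Random(seed)
    progress = {name: 0.0 for name in strategies}
    for _ in range(ROUNDS):
        mishap = False
        for name, cooperates in strategies.items():
            if cooperates:
                progress[name] += 1.0           # slower but safe progress
            else:
                progress[name] += 1.5           # faster progress...
                if rng.random() < MISHAP_PROB:  # ...with a chance of a mishap
                    mishap = True
        if mishap:                              # a mishap sets everyone back
            for name in progress:
                progress[name] -= MISHAP_PENALTY
    return progress

print(simulate({"Actor A": True, "Actor B": True}))   # everyone invests in safety
print(simulate({"Actor A": True, "Actor B": False}))  # one actor cuts corners
```

Even a toy model like this can make the shared cost of a race tangible in a workshop setting and serve as a starting point for richer simulation games.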

Establishing the right facilitators is key so that political neutrality is maintained. An inspiration could be the World Economic Forum, a trusted model that ensures a high quality of dialogue and orchestrates synergies between private and public entities.

Educational workshops for stakeholders, including policymakers and the media

These workshops should help a broader group of stakeholders and the public better understand AI and its potential impacts. Purposeful work to introduce more objective facts into the media can help counter tendencies toward a “doomsday discourse” around AI.

It is also vital to cultivate a discourse of international cooperation and shared values across the globe. Although there seems to be general agreement that cooperation on a global scale and race avoidance are good ideas, in practice traditional geopolitical divisions and nationalism heavily influence public, political and even academic rhetoric.

Engaging the general public

An indispensable part of paving the way towards a beneficial future for humanity is engaging the general public and understanding public opinion. Mapping the public opinion landscape through surveys could help measure over time how people perceive AI and its implications — both benefits and risks.

There would need to be a strategy for communicating with people on AI topics in a comprehensive and constructive manner, without nurturing a “doomsday” discourse, and for identifying the issues that resonate most with the public, thus providing fruitful ground for engagement and wider collaboration, potentially including innovation crowdsourcing.

These efforts should help prepare adequate “Future checklists”: concrete recommendations that can help citizens start preparing, today, for a future with AI. Transparent global information sharing can help establish United AI by building trust and facilitating the acceptance and adoption of the idea.

Conclusion

Above we have developed some of the ideas outlined in An Ethical Framework for a Good AI Society and laid out a vision for United AI, an organization dedicated to advancing cooperation in AI development. We have also detailed some possible starting points. However, we also understand that there are many questions that need answering, and it will not be a straightforward process.
