How Safe is a Solution Within the Field of Artificial Intelligence?

AI Safety ratings — discuss!

Alex Moltzau
Published in AI Social Research
3 min read · Aug 16, 2019

Safety in the field of artificial intelligence is a challenging topic to discuss. In conversations with friends and at the business conferences and events I have attended so far, there has been little talk of specifics in AI Safety. Perhaps I am in the wrong communities, but it seems to me that very few, if any, consider the implications of AI Safety. When I do meet someone who does, they usually approach it from a technical, engineering or economic perspective. What other angles remain? Well, for one: what is important in AI Safety?

What is important in AI Safety?

What do we want to keep safe, or what do we want to protect? We could talk of the field of AI in broad terms, or of criticism of autonomous vehicles, or of Walmart’s use of machine learning techniques at checkout. Is it humanity, or humanness, that we want to keep safe? Is it investments, financial analysis or insurance?

Securitization, as a financial practice (as distinct from its meaning in international relations), is the pooling of various types of contractual debt, such as residential mortgages, commercial mortgages, auto loans or credit card debt obligations. The pooled debt can then be repackaged and sold to investors as bonds.

Although the idea was meant to keep investors safe from individual borrowers failing to repay their loans, it can be said that securitizing those loans contributed to the financial crisis in 2008 (the subprime crisis). Now-infamous rating agencies graded such loans triple-A (AAA). So who decides how secure artificial intelligence is? There is talk of monitoring algorithms, and in some cases even certifying them.

Triple-A bonds, or AAA bonds, are those considered the absolute safest by bond rating agencies. Yet who decides which algorithms are the safest? Private companies, states and NGOs do at times invest in these solutions, yet there is little assurance of quality or safety. Who, or what entity, keeps it safe?
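To make the analogy concrete, here is a purely hypothetical sketch in Python of what a bond-style rating scale applied to an AI system might look like. The criteria, weights, thresholds and letter grades are all invented for illustration; no such standard scale exists today.

```python
from dataclasses import dataclass

# Hypothetical criteria an assessor might score from 0.0 to 1.0.
# Names and thresholds are invented purely for illustration.
@dataclass
class SafetyAssessment:
    robustness: float    # behaviour under unexpected inputs
    transparency: float  # how well decisions can be explained
    oversight: float     # degree of human monitoring in place

    def score(self) -> float:
        # A simple average; a real scheme would weight and audit these.
        return (self.robustness + self.transparency + self.oversight) / 3


def rate(assessment: SafetyAssessment) -> str:
    """Map a score to a bond-style letter grade (hypothetical scale)."""
    s = assessment.score()
    if s >= 0.9:
        return "AAA"
    if s >= 0.75:
        return "AA"
    if s >= 0.6:
        return "A"
    if s >= 0.4:
        return "BBB"
    return "junk"


# Example: a system with strong oversight but poor transparency.
print(rate(SafetyAssessment(robustness=0.8, transparency=0.4, oversight=0.9)))  # -> "A"
```

The open question is of course not the arithmetic, but who would be trusted to define, apply and audit such a scale in the first place.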

OpenAI and Neutrality

The most promising organisation working to make AI safer was OpenAI, until it became partly privatized (despite still being controlled by a nonprofit). If any organisation could be said to have the beginnings of both the capability and the role, it would be OpenAI. It was, after all, created to make artificial general intelligence (AGI) safer, or at least to ensure that it benefits humanity.

“OpenAI’s stated mission is to ensure that all of humanity benefits from any future AI that’s capable of outperforming “humans at most economically valuable work.” Such technology, dubbed artificial general intelligence, or AGI, does not seem close, but OpenAI says it and others are making progress.” — Wired, 2019

Of course, even seemingly independent actors are not neutral, and a rating agency, even if it were a nonprofit, would in no way be guaranteed to take neutral decisions. There is additionally the question of whether setting laws in an international context would help if they could not be enforced in certain cases (international humanitarian law being one example).

Points for discussion

  • Who could be an actor taking responsibility in rating AI?
  • Should this be done at all?
  • If it were to be done, how could it be done?
  • Think of your own question and post it in the discussion.

This is day 75 of #500daysofAI, writing every day about artificial intelligence.


Published in AI Social Research

Our group of social scientists and computer scientists work together to publish a book on AI by January 2020. We intend to be critical, interested and inquisitive about its impact on our society.

Written by Alex Moltzau

Policy Officer at the European AI Office in the European Commission. This is a personal blog and not the views of the European Commission.