23 Principles to Prevent Artificial Intelligence from Destroying Us All

Artur Kiulian · Algorology · Feb 4, 2017

Is this enough? Algorology Newsletter #5

A large group of scientists and technologists (including Elon Musk and Stephen Hawking) recently endorsed a list of principles that aims to steer AI development toward beneficial outcomes rather than destructive ones.

The Asilomar AI Principles were developed after the Future of Life Institute brought dozens of experts together for its Beneficial AI 2017 conference.

The experts (roboticists, physicists, economists, philosophers) had fierce debates about AI safety, the economic impact of automation on human workers, and programming ethics.

These 23 principles range from research strategies to data rights to future issues, including potential super-intelligence. The collection highlights how the current 'default' behavior around many relevant issues could violate principles that most participants agreed are important to uphold.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people's resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Neuromorphic Chip Market Is Getting Hot

I've been following this topic since IBM created a chip capable of competing with a rodent's brain.

For those unfamiliar with the matter, neuromorphic computing is a concept developed by Carver Mead in the late 1980s, describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system.

In recent times the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration).

On the hardware level, neuromorphic computing can be realized with oxide-based memristors, threshold switches, and transistors. So essentially these neuromorphic chips are analog computers, just as you, I, and all other animals are.
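To make the idea more concrete, here is a minimal software sketch of the kind of spiking-neuron dynamics that neuromorphic hardware implements directly in circuitry: a leaky integrate-and-fire neuron. All parameter values are illustrative assumptions, not taken from any particular chip.

```python
# A minimal leaky integrate-and-fire (LIF) neuron, simulated in software.
# Real neuromorphic chips realize similar dynamics in analog circuits;
# every parameter value below is an illustrative assumption.
dt = 1e-4        # time step (seconds)
tau = 0.02       # membrane time constant (seconds)
v_rest = 0.0     # resting potential
v_thresh = 1.0   # spike threshold
v_reset = 0.0    # potential after a spike

v = v_rest
spike_times = []
for step in range(int(0.5 / dt)):            # simulate half a second
    t = step * dt
    i_in = 1.2 if t > 0.1 else 0.0           # step input current after 100 ms
    # Leaky integration: the voltage decays toward rest and is driven by the input.
    v += (dt / tau) * (-(v - v_rest) + i_in)
    if v >= v_thresh:                        # threshold crossing produces a spike
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.3f} s")
```

The neuron stays silent until the input current arrives, then fires at a regular rate; that event-driven, low-precision behavior is what neuromorphic hardware exploits to compute with very little power.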

Increasing preference for neuromorphic engineering over conventional computing continues to drive demand in the global market for neuromorphic chips, which is projected to surpass $10Bn in revenue over the 2016–2026 period.

This is a very interesting topic that doesn't get much attention in the current AI hype wave. I'd love to connect with anyone involved in this market.

šŸ† How To Compete With Artificial Intelligence?

There are plenty of articles nowadays on how to survive the new wave of automation and compete with artificial intelligence, but this article gets it right: we don't need to compete, we need to adapt.

Companies need to identify what machines do better than humans and vice versa, develop complementary roles and responsibilities for each, and redesign processes accordingly. AI often requires, for example, a new structure that combines centralized and decentralized activities, which can be challenging to implement. Finally, companies need to embrace the adaptive and agile ways of working and setting strategy that are common at startups and AI pioneers.

Artificial Intelligence Regulation

We all know algorithms can make bad decisions. Even though those bad decisions are made very differently from ours, they can still have serious impacts on people's lives, which has led to calls for a third party to ensure the transparency and fairness of this decision-making.

A report from a research team at the Alan Turing Institute in London and the University of Oxford calls for a trusted third-party body that can investigate AI decisions on behalf of people who believe they have been discriminated against.

"What we'd like to see is a trusted third party, perhaps a regulatory or supervisory body, that would have the power to scrutinise and audit algorithms, so they could go in and see whether the system is actually transparent and fair," said Wachter.

I'm sure we will see a lot of discrimination driven by automated decision-making in the next few years, but if there is one thing that I'm truly scared of, it's the wrong people behind the power of artificial intelligence: people who will be able to "regulate" the industry in their own favor and against the benefit of society. Good overview of the report here.

Google's AI software is learning to make AI software

The idea of creating software that learns to make software has been around for a while, and hundreds of companies have run out of funding trying to accomplish it. The idea that the Google team is pursuing, though, is actually quite different.

The recent success is being followed by attempts to generalize machine-learning algorithms in order to produce the same results with less training, which is quite similar to the idea of transfer learning: the ability to reuse machine-learning models built for similar tasks on a new problem.
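As an illustration of what transfer learning looks like in practice, here is a minimal sketch (assuming PyTorch and torchvision are available) that reuses an ImageNet-pretrained network for a hypothetical new 10-class task. This is my own illustrative example, not the approach described in Google's work; the class count and hyperparameters are arbitrary assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet and reuse its learned features.
# (Older torchvision versions use pretrained=True instead of weights=.)
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained feature extractor so its weights are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for a hypothetical 10-class task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new layer's parameters are optimized, which is why transfer
# learning needs far less data and compute than training from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# The training loop over a task-specific dataset is omitted here.
```

The point of the sketch is the division of labor: the expensive, data-hungry feature learning is inherited from the pretrained model, and only a small task-specific head is learned on the new problem.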

What this means is that, in theory, we may end up needing even less machine-learning expertise and data to train algorithms. The biggest reducer of such expertise so far has been the recent mainstream adoption of deep learning, a method that automatically learns features from data without a human researcher pinpointing them. Deep learning, however, requires huge amounts of data, which is a painful limitation for creating adaptable algorithms.

Originally published through the Algorology email newsletter. If you would like to receive it before anyone else, sign up below.


Artur Kiulian is a serial entrepreneur, Partner at Colab.la, and author of "Robot Is The Boss" (www.robotistheboss.com).