AI Safety Research: A Road to Nowhere

Peter Voss
2 min read · Oct 19, 2016


I’ve just returned from a two-day conference in NYC focused on the ethics and safety of advanced AI. About two dozen speakers, including luminaries such as Stephen Wolfram, Daniel Kahneman, Yann LeCun, and Stuart Russell, explored ways of ensuring that future AIs won’t cause us harm, or for that matter come to harm themselves.

Notwithstanding some very smart speakers and several interesting talks and discussions, overall these deliberations seemed little more than mental masturbation.

Why?

Here are two core problems. Most of the talks involved questions of either the moral disposition or the moral status of an AI, yet pretty much everyone agreed that they had no idea how to define morality or how to select the right one for an AI. Similarly, while the issue of machine consciousness was considered key, the general consensus was that nobody really knows what consciousness is or how one could tell whether an AI possesses it.

Furthermore, mainstream thinking in this field is rife with questionable core assumptions:

1. That we can (or should try to) explicitly design or craft a utility function to ensure that the system acts morally (whatever that may mean); a hypothetical sketch of such a function follows this list.

2. That an advanced AI’s moral ability or knowledge is independent of its intelligence.

3. That reinforcement learning will be a significant aspect of advanced AI.

4. That AIs will inevitably develop strong, overriding goals of self-preservation and rampant resource accumulation.

5. That there is an inherent, hard problem in aligning what users really want with what the AI thinks they want (the Alignment Problem).

6. That a system will achieve general, super-human intelligence essentially without warning, or with sudden, totally unpredictable behavior.

7. That closed logic or mathematical models can tell us much about how advanced AI will behave.
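
To make the first assumption concrete, here is a minimal, hypothetical sketch (the outcome proxies, weights, and hard constraint are purely illustrative, not anything proposed at the conference) of what “explicitly crafting a utility function” amounts to. Every proxy, weight, and trade-off has to be chosen by the designer in advance, and that choice is where the moral question actually lives.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    human_wellbeing: float   # designer-chosen proxy, 0..1
    resource_use: float      # designer-chosen proxy, 0..1
    rule_violations: int     # count of hard constraints breached

def utility(o: Outcome) -> float:
    """A hand-coded 'moral' utility: the weights (0.8, 0.2) and the hard
    constraint are arbitrary design decisions, which is exactly the difficulty."""
    if o.rule_violations > 0:
        return float("-inf")  # hard constraint: any violation is unacceptable
    return 0.8 * o.human_wellbeing - 0.2 * o.resource_use

# An agent maximizing this number pursues whatever the proxies reward,
# not what the designer "really meant" by acting morally.
candidates = [Outcome(0.9, 0.5, 0), Outcome(0.7, 0.1, 0), Outcome(1.0, 0.9, 1)]
best = max(candidates, key=utility)
print(best)  # Outcome(human_wellbeing=0.9, resource_use=0.5, rule_violations=0)
```

The disagreement is not over whether such a function can be written, but over whether morality can be reduced to designer-chosen proxies and weights at all.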

AI safety is a genuine concern, one that we should certainly pay attention to. However, little progress will be made, and much money will be wasted on unnecessary hand-wringing, if we pursue it with no clear understanding of either consciousness or morality, and from a starting point that embodies several incorrect assumptions.

Speakers repeatedly claimed that no one knows what consciousness is, how we can determine the right moral code, how to solve the ‘alignment problem’, or how to imbue an AI with moral knowledge. Perhaps the tens of millions of dollars currently funding AI safety research could be spent more effectively by involving more people who do not profess such ignorance.
