Published in Counter Arts

Are Unfriendly Values Unstable?

Source: The Moral Economist

In artificial intelligence (AI) safety, there is a concept known as human-friendly values. In short, if an AI has human-friendly values, it will do the things that humans want it to do.

This is complex for a number of reasons. First, human-friendly values are difficult to define. Second, complex concepts are hard to program accurately. Third, humans don't always know what they want. Finally, do…


The Moral Economist

Thoughts on Economics, Politics, Philosophy, Ethics, and Computing by Adam Smith Reincarnated
