
Will we force AIs to follow social norms as well?

If so, how long will “they” conform?

Soham Dutta · Published in 100 Naked Words · May 16, 2017 · 2 min read

In machine learning, feature learning (or representation learning) is a set of techniques that let a machine, such as a neural network, learn features: transformations of raw input data into representations that can be effectively exploited in machine learning tasks. This obviates the manual feature engineering that would otherwise be necessary, and allows a machine to both learn a specific task (using the features) and learn the features themselves.
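
To make this concrete, here is a minimal sketch of one common representation-learning technique, an autoencoder: the network learns a compact feature vector simply by reconstructing its raw input, and that learned representation can then feed a downstream task in place of hand-engineered features. This assumes PyTorch, and the layer sizes and dimensions are illustrative only, not taken from any particular system.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, feature_dim=32):
        super().__init__()
        # Encoder: raw input -> learned feature representation
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, feature_dim),
        )
        # Decoder: reconstruct the raw input from the learned features
        self.decoder = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        features = self.encoder(x)
        return self.decoder(features), features

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)                  # a batch of raw inputs, e.g. flattened images
reconstruction, features = model(x)
loss = nn.MSELoss()(reconstruction, x)   # learning signal: reconstruct the input
loss.backward()
optimizer.step()

# `features` (64 x 32) is the learned representation that a downstream
# classifier or controller could use instead of hand-crafted features.
```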

As an example, last year Nvidia demonstrated an autonomous car whose underlying AI used feature learning techniques, showing the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

It isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.
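
To illustrate what such an end-to-end pipeline might look like, here is a hedged sketch (assuming PyTorch; the layer sizes, the 66×200 input resolution, and the name EndToEndDriver are illustrative assumptions, not Nvidia’s actual architecture): a convolutional network maps raw camera frames directly to a steering command, trained only to match what a human driver did on the same frames.

```python
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers extract road features from raw pixels
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Fully connected layers turn those features into a steering command
        self.head = nn.Sequential(
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 1),   # single output: steering angle
        )

    def forward(self, camera_frame):
        return self.head(self.features(camera_frame))

# Training signal: minimise the gap between the network's steering output
# and what the human driver actually did for the same frame.
model = EndToEndDriver()
frames = torch.rand(8, 3, 66, 200)    # batch of camera images
human_steering = torch.rand(8, 1)     # recorded human steering angles
loss = nn.MSELoss()(model(frames), human_steering)
loss.backward()
```

No hand-written driving rules appear anywhere in this loop; the behaviour emerges entirely from matching the human demonstrations, which is why the resulting decisions are hard to inspect.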

So, the machine essentially programs itself.

At some stage, then, we may simply have to trust the AI’s judgment, and that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit within our social norms.

A natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do — a super-intelligence.

The question is:
How long will super-intelligent entities remain subdued under human constraints?
