AI’s biggest challenge is human, not technological.
Yijen Liu

Despite recent breakthroughs in deep learning, present-day AI remains a form of automation. The only difference is that we are starting to succeed at tasks computers used to struggle with, often by approaching them with learning algorithms instead of static programs. This “learning” is nothing close to human learning: machine learning is applied statistics, with no magical consciousness or autonomy involved.
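To make the “applied statistics” point concrete, consider a minimal sketch (the Celsius-to-Fahrenheit example and every name in it are purely illustrative): the same rule can either be written by hand as a static program, or “learned” from data by minimizing squared error, which is all the learning amounts to.

```python
import random

# A "static program": the rule is written down by hand.
def fahrenheit_static(celsius):
    return celsius * 9 / 5 + 32

# A "learning algorithm": the same rule is *estimated* from data
# by minimizing squared error with gradient steps -- applied
# statistics, nothing more.
def fit_linear(samples, lr=0.0005, steps=20000):
    w, b = 0.0, 0.0
    for _ in range(steps):
        c, f = random.choice(samples)
        err = (w * c + b) - f   # derivative of 0.5 * err^2 w.r.t. the prediction
        w -= lr * err * c       # gradient step on the slope
        b -= lr * err           # gradient step on the intercept
    return w, b

data = [(c, fahrenheit_static(c)) for c in range(-40, 41)]
w, b = fit_linear(data)
print(f"learned: f = {w:.3f} * c + {b:.3f}")  # approaches f = 1.800 * c + 32.000
```

The second function has no insight into temperature; it is curve-fitting, and scaling it up changes the size of the curve, not the nature of the fit.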

Fundamentally, I question whether the engineering approach of task-solving will ultimately take us to conscious, humane AI at all. We don’t even know what consciousness is, on a philosophical level, and we haven’t agreed on what a “good” conscious being should be, on a moral and cultural level. This is not a “task” or “problem” that some engineering group can solve by throwing a few thousand fast GPUs at it.

In addition to enhancing our engineering practices, we also need to step up our science. We need to invest more in the science of deep learning, not just the engineering: Why do the neural network architectures we designed work? Why do they excel at some tasks yet continue to struggle with others? To what extent are they intelligent, and what is still missing for them to be truly conscious? We need a much deeper understanding of the science of ourselves as well: What gives rise to our own consciousness? Why do we feel and reason the way we do? What makes us completely impulsive on some occasions, yet coolly calculating on others?

As a species, we have many more challenging yet intriguing questions to answer, ones that I think dwarf most existing mission statements about superficially more “intelligent” AIs.
