Our Final Invention Is Here: Artificial Intelligence
Sounds dangerous: how an artificial general intelligence could become an artificial superintelligence through recursive self-improvement.
Corporations are already throwing billions at AI, and we may be creating our own rival. I am not trying to be a fear-monger here, but if you are in the field of Artificial Intelligence, you will at least understand the basic problem well enough, even if you don't agree with the argument that AI could pose an existential threat.
We have been witnessing the impact of automation on jobs for the last decade, and it has now become an issue of tremendous importance. With rapid advancements in self-learning AI and Machine Learning, technology is moving faster than ever: faster than we can innovate, faster than we can adapt.
With Google's AI learning to make other AI, experts agree that it could create a future with a less expensive and more efficient workforce, and could even make some technology jobs obsolete.
Jeff Dean, who leads the Google Brain research group, has described “automated machine learning” as the most promising research avenue for his team: “Currently the way you solve problems is you have expertise and data and computation. Can we eliminate the need for a lot of machine-learning expertise?”
Source: MIT Technology Review
With such an explosion of AI and the power of superintelligence, machines could end up steering our future rather than us. Machine Learning algorithms are already doing everything from sorting cucumbers to detecting cancers.
Are we creating “a globally networked, electronic, sentient being”?
In his book Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat says:
If we build a machine with the intellectual capability of one human, within five years, its successor will be more intelligent than all of humanity combined. After one generation or two generations, they’d just ignore us. Just the way you ignore the ants in your backyard.
Bill Gates, Stephen Hawking and Elon Musk have already spoken about the dangers of Artificial Intelligence.
While any sufficiently intelligent AI may be able to improve itself, a Seed AI is specifically designed to use recursive self-improvement as its primary method of gaining intelligence.
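To see why recursive self-improvement worries people, consider a toy model in which each redesign multiplies the system's capability by a fixed factor. Everything here (the function name, the growth rate, the number of generations) is invented purely for illustration; real systems need not behave this way.

```python
# Toy model of recursive self-improvement (illustration only, not a
# claim about any real system).
def self_improvement_trajectory(c0=1.0, gain=0.5, generations=10):
    """Return capability after each generation, assuming (hypothetically)
    that every redesign multiplies capability by (1 + gain)."""
    caps = [c0]
    for _ in range(generations):
        # Each new design is built by the previous, more capable design,
        # so the improvement compounds.
        caps.append(caps[-1] * (1.0 + gain))
    return caps

trajectory = self_improvement_trajectory()
# Capability grows geometrically: an "intelligence explosion" in miniature.
```

The point of the sketch is only that compounding improvement is geometric: even a modest per-generation gain leaves the final design dozens of times more capable than the first within a handful of generations.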
“An agent which sought only to satisfy the efficiency, self-preservation, and acquisition drives would act like an obsessive paranoid sociopath,”
writes Steve Omohundro in “The Nature of Self-Improving Artificial Intelligence.”
Our Safety Net
Jeff Zaleski, in The Challenge of Artificial Intelligence, says:
It seems unlikely that AI will harbor any ill intentions toward humanity. Indeed, it seems unlikely that AIs will harbor anything at all: they probably won't be conscious and will neither hate nor love.
Google is working on a kill switch to curb an AI uprising, but it isn't ready to be implemented across the board just yet.
The DeepMind and Oxford University team argues that learning agents are unlikely to “behave optimally all the time” given the complexities of the real world.
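The intuition behind such a kill switch is that an operator can override the agent's actions without the agent ever learning to resist the override. Below is a minimal sketch of that intuition using a two-armed bandit learner; the function name, rewards, and probabilities are all hypothetical choices of mine, not DeepMind's actual algorithm.

```python
import random

def interruptible_bandit(episodes=1000, interrupt_prob=0.3,
                         alpha=0.1, epsilon=0.1, seed=0):
    """Hypothetical sketch of safe interruptibility: an operator sometimes
    overrides the agent and forces the safe action, yet the agent's value
    estimates stay unbiased, because each action's estimate is updated only
    from that action's own observed rewards."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                    # value estimates: 0 = safe, 1 = risky
    true_reward = {0: 0.5, 1: 1.0}    # the risky action really pays more
    for _ in range(episodes):
        # The agent picks epsilon-greedily.
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max((0, 1), key=lambda i: q[i])
        # The "big red button": the operator's interruption overrides
        # the agent's choice and forces the safe action.
        if rng.random() < interrupt_prob:
            a = 0
        # Incremental average of the taken action's observed reward.
        q[a] += alpha * (true_reward[a] - q[a])
    return q
```

Forcing the safe action changes what the agent *does*, not what it *believes*: the estimates still converge toward the true rewards, so the agent gains nothing by learning to disable the button. The real difficulty, which the DeepMind and Oxford work addresses, is guaranteeing this property for far more capable agents in far messier environments.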
My Take On This:
We are at a very critical transition point, and we are slowly allowing our tools to take over because they are getting better and faster than us. We are even ready to give them personalities. But before we grant them the decision-making abilities to flourish on their own, we need to rethink the other possibilities.