The Singularity: Will AI Take Over the World?

Movies often give us imaginings of computers taking over the world: intelligent robots coming to destroy humanity or to rule over us. As interesting and exciting as it is to think about AI taking over the world, it probably wouldn’t look like that. An important thing to remember about our machines is that we supervise their learning and make sure they are learning the things we want them to learn. Given how we currently use machine learning, machines would never be able to ‘learn’ or understand anything outside of their supervised training process.
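To make that concrete, here is a minimal sketch of supervised learning in Python (the data, the labels, and the use of scikit-learn are all illustrative assumptions, not anything from a real system):

```python
# A minimal sketch of supervised learning, assuming scikit-learn is installed.
# The "knowledge" the model can acquire is bounded by the labels we provide.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a feature vector we chose,
# and each label is one of the categories *we* defined.
X_train = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [6.7, 3.1]]
y_train = [0, 0, 1, 1]  # only classes 0 and 1 exist in the model's world

model = LogisticRegression()
model.fit(X_train, y_train)

# The model can only ever predict a label it was shown during training;
# it has no mechanism to invent a concept outside this label set.
print(model.predict([[5.0, 3.4]]))  # -> 0 or 1, nothing else
```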

A lot of the machine learning programs we use are goal-based, meaning that they work toward a specific goal we have given them. Even in our most advanced systems with many different algorithms, sensors, and programs, such as a self-driving car, we have defined a specific set of goals or correct results to strive for, and the automated intelligence cannot act against them. Since our current robots are all programmed to work toward their goals, they cannot ‘learn’ to become ‘self-aware’ the way many of our fictional robot apocalypses begin.
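As a toy illustration of what ‘goal-based’ means, the sketch below runs plain gradient descent on a made-up objective; the target value 3.0 and the loss function are hypothetical stand-ins for a real system’s goal:

```python
# A toy illustration of a goal-based learner: plain gradient descent
# on a fixed objective. The "goal" lives entirely in loss(); the
# update rule has no way to pursue anything the loss doesn't reward.

def loss(w):
    # Hypothetical objective: drive w toward 3.0 (think of "stay
    # centered in the lane" reduced to a single number).
    return (w - 3.0) ** 2

def grad(w):
    return 2 * (w - 3.0)  # derivative of the loss above

w = 0.0
for step in range(100):
    w -= 0.1 * grad(w)  # every update moves toward the programmed goal

print(round(w, 4))  # -> 3.0 (approximately): the only "desire" it has
```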

Even looking forward, it is doubtful that our programs would be used to take over the world. The programs and robots we create with artificial intelligence and machine learning are built for specific purposes. This mainly comes down to the fact that there is no demand that would drive the creation of an artificial intelligence capable of taking over the world. The demand for artificial intelligence mainly pushes our algorithms to complete real-world tasks that do not require the kind of full ‘intelligence’ that could lead to a robot takeover.

Even the robots we are working on in labs to have full ‘intelligence’ do not pose much of a threat to us. First, there are relatively few of them, which alone would make an artificial intelligence takeover difficult. Second, even unsupervised learning algorithms are monitored to make sure their output is reasonable. All of our machine learning algorithms are too well monitored for any of them to spontaneously become sentient.
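The kind of monitoring described above might look something like the following sketch, where a human-written sanity check sits between an unsupervised algorithm (k-means here, on made-up data) and any use of its output; the size threshold is an illustrative assumption:

```python
# A sketch of monitoring an unsupervised algorithm's output.
# Assumes numpy and scikit-learn are installed; the data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical unlabeled data drawn from two well-separated blobs.
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10).fit(data)
sizes = np.bincount(kmeans.labels_)

# The monitoring step: a person (or a test written by one) decides
# whether the output is reasonable before it is ever acted on.
assert sizes.min() > 10, "one cluster is suspiciously small -- investigate"
print("cluster sizes look reasonable:", sizes)
```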

The biggest fear with learning algorithms is an algorithm purposefully created to harm humans, or a malicious hack of the algorithms we already have in place. For example, the machine learning algorithms in self-driving cars are extremely finely tuned to drive the cars safely. If a widespread attack on the vision algorithms of self-driving cars were to occur in the future, it could cause many crashes and cost both money and lives.
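To see why vision algorithms are attackable at all, here is a generic sketch of the fast gradient sign method (FGSM) against a toy logistic classifier; the weights, the ‘image’, and the step size are all hypothetical, and real attacks on high-dimensional images get away with far smaller perturbations than this three-pixel toy needs:

```python
# A generic illustration (not any car's actual software) of why vision
# models are attackable: FGSM nudges every input value in the direction
# that increases the model's loss. All numbers here are made up.
import numpy as np

w = np.array([0.8, -0.5, 0.3])  # weights of a toy linear classifier
x = np.array([2.0, 1.0, 1.0])   # a toy "image" of three pixels
y = 1.0                         # the true label

def predict(x):
    return 1 / (1 + np.exp(-np.dot(w, x)))  # sigmoid probability

# Gradient of the logistic loss with respect to the *input* x.
grad_x = (predict(x) - y) * w

# FGSM: a small step per pixel, but in the worst possible direction.
# With millions of pixels, epsilon can be far too small to see.
epsilon = 1.0
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:      ", round(predict(x), 3))      # ~0.80, correct
print("adversarial prediction:", round(predict(x_adv), 3))  # ~0.45, flipped
```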

In my assessment, we don’t need to fear our machines becoming sentient and coming together against us nearly as much as we should fear ourselves messing up our own algorithms.