Artificial Intelligence and The Future of Tech
By Naila Tariq
No longer confined to science fiction, Artificial Intelligence is fast becoming part of our daily lives, from virtual personal assistants like Apple’s Siri, Amazon’s Alexa, and now Samsung’s Bixby, to self-driving cars and AI-powered robots.
Experts remain uncertain about when Artificial Intelligence will become truly human-like, but many, such as futurist Ray Kurzweil, predict that we’ll reach that milestone in a bit over a decade, given current advancements in technology.
With these advancements come concerns — fueled by fear-mongering headlines — about Terminator-style doomsday scenarios in which artificially intelligent robots and programs rise up against humanity and take over.
Experts assure us this is not going to happen, at least not in the way we imagine it will.
There are concerns about the safety of AI and how we can keep its various forms safe and beneficial for all. But before we can get to those, let’s get back to basics and explain what AI is and what it can and can’t do, in order to dispel some of the myths and misconceptions surrounding the technology.
What is AI?
As mentioned above, Artificial Intelligence can take many forms. The basic definition is that it describes computer systems designed to perform tasks that would normally require human intelligence, such as speech or facial recognition, language translation, or — you guessed it — driving.
Even search algorithms that tailor ads based on your internet activity count as artificial intelligence.
There are two types of AI: narrow (weak) AI and general (strong) AI. Narrow AI encompasses programs designed to perform a single, specific task. All the examples above are weak AI.
General AI is what people normally picture when they think of artificial intelligence: AI that can perform any task a human can, and seems human-like. As yet, this kind of technology has not been developed, though it is the long-term goal for researchers in the field.
While general AI is years away (exact number undetermined), there has definitely been significant progress in the field. As homes become increasingly interconnected, virtual personal assistant software is becoming more advanced; Mark Zuckerberg’s “Jarvis” is a prime example. Self-driving cars were initially thought to be a far-off reality, but could actually end up hitting consumer markets soon.
With these advancements come many benefits: the automation of manufacturing tasks and financial monitoring, for example, greater energy efficiency at home and at work with energy-monitoring software, streamlined shopping experiences, and more.
However, task automation can make hundreds of thousands of jobs redundant, prompting questions about what the job market will look like a few years from now. It also brings up an important issue that will become a greater concern if/when general AI is developed: how will the AI complete its task?
Consider a ridiculous but illustrative example: if an AI is programmed to help people cross the street without being hit by any cars, what’s to stop it from deciding that the quickest, safest way is to throw the pedestrian clear across the road?
Problems like these are among the biggest challenges in AI safety: ensuring that an AI completes its tasks in line with humanity’s goals and values. (Which goals, and whose values, is another debate.)
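The street-crossing scenario can be sketched as a toy optimization problem. Everything in this snippet is hypothetical and invented for illustration: the point is that a program optimizes only the objective it is literally given (crossing time here), so the unstated safety constraint has to be written in explicitly or it will be ignored.

```python
# Candidate actions: (name, crossing_time_seconds, pedestrian_unharmed).
# All values are made up purely to illustrate the specification problem.
ACTIONS = [
    ("wait for the walk signal", 45, True),
    ("guide pedestrian at crosswalk", 30, True),
    ("throw pedestrian across the road", 3, False),
]

def naive_choice(actions):
    """Minimize crossing time only -- the task as literally programmed."""
    return min(actions, key=lambda a: a[1])

def aligned_choice(actions):
    """Minimize crossing time, but only among actions that keep the
    pedestrian unharmed -- the constraint the programmer meant."""
    safe = [a for a in actions if a[2]]
    return min(safe, key=lambda a: a[1])

print(naive_choice(ACTIONS)[0])    # the "throw" action wins on time alone
print(aligned_choice(ACTIONS)[0])  # "guide pedestrian at crosswalk"
```

The naive version picks the fastest action regardless of harm; the aligned version encodes the human constraint directly. Real value-alignment research is about cases where, unlike here, the full set of constraints can’t simply be enumerated.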
There’s also the question of legality. If a self-driving car gets into an accident, who’s to blame? The passenger? The car manufacturer? The software developer?
Regardless of these hurdles, the field of Artificial Intelligence remains exciting and open to infinite possibilities for innovation, be it in developing a specific narrow AI, pushing research forward for general AI, or problem-solving the challenges that come with both.
Because unlike in the doomsday scenarios, the future is very much in our hands, and it’s up to us to decide what we want it to look like, and exactly how AI can help us achieve and maintain it.