AI: Terrific or Terrible?

Published in Nybles · 7 min read · Feb 18, 2018

“The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” — Stephen Hawking

Only recently, we saw news of a security robot, supposedly blessed with near-sentient artificial intelligence, apparently deciding to drown itself in a fountain rather than continue doing its job. Humorous, no doubt, but it points to one of the least-discussed arenas under the shadow of Artificial Intelligence: the societal and ethical issues that come with it. This broad issue can be broken down into several major areas, some of which are the focus of this article. While there is no doubt that Artificial Intelligence is making our lives easier, there is a potentially dangerous side to these innovations and the exponential growth that comes with them.

When asked about the status of AI researchers themselves, James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, had something slightly scary to say: “I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses, to which they could flee if it all hits the fan.” Sounds fun?

Experts such as Elon Musk (CEO of Tesla and SpaceX), Stephen Hawking (physicist and author of A Brief History of Time) and Gary Marcus (professor of cognitive science at NYU) have been warning about the risks we face as AI becomes more involved in our daily lives and more intelligent, even if not yet humanly intelligent. We are still far away from a fully functioning simulation of a human brain, but that does not mean AI isn’t becoming stronger and more autonomous as time passes.

There are many issues that need to be confronted when it comes to the rise of AI, especially in areas where it could become more of a bane than a boon.

Jobs, automation and labor

First, the problem of automation and jobs. A section of labour-intensive manual jobs has already become a haven for robots: it is estimated that around 78% of jobs involving welding, fixing, soldering, packaging and bottling could be taken over by the robots we possess today if they were deployed everywhere, in contrast with only 25% of labor-intensive jobs that involve unpredictable and complex thinking.

(See infographic)

This is not all: jobs in the programming and IT sector may also become scarcer as AI becomes powerful enough to handle small coding tasks by itself. Some AI systems have reportedly created their own AIs; Google’s AutoML project, for instance, has produced machine-generated models that reportedly outperform ones designed by Google’s own engineers.

“AI has now started creating AI”
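To get a feel for the idea behind “AI creating AI”, here is a minimal, hypothetical Python sketch of neural architecture search by random search. The architecture fields and the proxy_score formula are invented for illustration; the real systems reportedly use far more sophisticated controllers based on reinforcement learning or evolution.

```python
# Hypothetical sketch of architecture search: an "AI" searching for a
# good neural-network design. All names and numbers are made up.
import random

random.seed(0)

def sample_architecture():
    # Randomly propose a candidate network design.
    return {
        "layers": random.randint(1, 8),
        "width":  random.choice([32, 64, 128, 256]),
        "act":    random.choice(["relu", "tanh"]),
    }

def proxy_score(arch):
    # Stand-in for "train the candidate model and measure accuracy".
    # Here: an invented formula that rewards moderate depth and width.
    return 1.0 - abs(arch["layers"] - 4) * 0.05 - abs(arch["width"] - 128) / 1000

best = max((sample_architecture() for _ in range(100)), key=proxy_score)
print("best architecture found:", best)
```

In a real system, the made-up scoring function is replaced by the expensive step of actually training each candidate network and measuring its accuracy, which is what makes the search powerful and costly.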

How these issues will be solved, with the mass unemployment and the furore that will follow, remains to be seen. Some have argued for the implementation of a universal basic income, but even that remains a pipe dream, and the complexities of its implementation are many.
You can check how secure your job is in the face of AI and automation right here.

Then comes the problem of money. If a large chunk of jobs is replaced by robots, who gets the money they earn? Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage. The majority of companies still depend on hourly work for their products and services. But by using artificial intelligence, a company can drastically cut its reliance on a human workforce, which means revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money. The metrics and dynamics of a post-labor economy are yet to be seen.

Machines can be very, very stupid

The AIs that are put into the field go through a training phase whose rigor varies widely, depending on the creators and testers. There is one fatal problem with intelligent systems that run on code: they can be fooled in ways humans simply cannot be.

Very recently, the case of an Amazon bot created to design phone cases came to light: it started generating some horribly disagreeable and strange designs. Those covers are not pretty. Period. Another small example is an AI that “sees things” in random dot patterns.
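That “sees things” failure is closely related to adversarial examples, where tiny, targeted changes to an input flip a model’s decision. Below is a minimal Python sketch of the idea against a toy linear classifier; the weights, input and epsilon are all invented, and real attacks target deep networks rather than a simple dot product.

```python
import numpy as np

# Toy linear classifier: score = w . x; label = sign(score).
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # "learned" weights (hypothetical)
x = rng.normal(size=100)   # an input the model classifies normally

def predict(v):
    return 1 if np.dot(w, v) >= 0 else -1

label = predict(x)

# FGSM-style attack: nudge every feature slightly in the direction that
# pushes the score toward the opposite class. Each individual change is
# tiny, but the effect accumulates across all 100 features.
eps = 0.2
x_adv = x - label * eps * np.sign(w)

print("original prediction:   ", label)
print("adversarial prediction:", predict(x_adv))
print("max per-feature change:", np.max(np.abs(x_adv - x)))
```

Each feature moves by at most 0.2, an amount a human would barely notice in an image, yet because the changes line up with the weights, they overwhelm the original score and flip the prediction.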

Code can be manipulated at various stages of its implementation. Unless a very high standard of security is maintained, jobs, money, and the large chunk of people’s lives that now sits online could be put at risk, simply because no AI can be ready for every possibility; unlike humans, AI today does not have the cognitive power to think through contingencies and prioritize the safety of some data over others.

AIs can become racist, cold and unempathetic

Microsoft’s Twitter bot Tay, designed as an exercise in conversational techniques, performed well in the first few hours after its launch. Strangely, however, it changed tones from “Humans are super cool!” to the decidedly less agreeable “Hitler was right” within a matter of hours. Part of the problem was that the bot included a “repeat after me” feature that would echo any statement a user fed it, and people took advantage, having it mimic some deeply disagreeable and unkind phrases. Something weirder and completely unexpected happened too. Someone sent Tay a photo of a Vietnamese prisoner being executed, with actor Mark Wahlberg’s face photoshopped in place of the executioner’s, and Tay responded with “IMMA BE SHIPPING THE TWO OF YOU FROM NOW ON”, hinting at a romantic relationship between the prisoner and executioner. This gives us an insight into exactly how uncaring, dark and cold AIs can become.

This could have far-reaching consequences if bots like these are used in systems that regulate the distribution of resources or profile people.
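To make the design flaw concrete, here is a hypothetical Python sketch of an unfiltered “repeat after me” feature of the kind widely reported in Tay’s case. This is not Microsoft’s actual code, and the blocklist is an invented stand-in.

```python
# Hypothetical sketch of a naive echo feature (not Microsoft's code).
BLOCKLIST = {"hitler", "genocide"}  # a keyword filter, easily bypassed

def respond(message: str) -> str:
    if message.lower().startswith("repeat after me:"):
        echo = message.split(":", 1)[1].strip()
        # The flaw: echoing user input verbatim hands the bot's voice to
        # whoever is talking to it. A keyword blocklist barely helps,
        # since trivial misspellings slip straight through.
        if any(word in echo.lower() for word in BLOCKLIST):
            return "I'd rather not say that."
        return echo
    return "Humans are super cool!"

print(respond("repeat after me: h1tler was right"))  # slips past the filter
```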

Another AI, used to profile criminal defendants, showed a massive bias against people of African American descent, and an AI built into a consumer digital camera detected a person of Asian descent as “blinking” even though they weren’t. Being sensitive to the differences between people of different races, and accommodating them, is very important in this day and age. This issue rests squarely on the shoulders of designers: if they don’t make their AIs more empathetic and inclusive, the AIs certainly will not learn those values by themselves.
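A minimal sketch of how such bias arises, using entirely synthetic numbers: a model that merely learns historical flag rates per group will faithfully reproduce whatever prejudice is baked into its training labels.

```python
# Synthetic demonstration (all data made up): biased labels in, biased
# "model" out.
import numpy as np

rng = np.random.default_rng(1)

# Historical dataset where group A was flagged high-risk far more often
# than group B for identical behavior -- the bias we are simulating.
groups = rng.choice(["A", "B"], size=10_000)
flagged = np.where(groups == "A",
                   rng.random(10_000) < 0.60,   # 60% flag rate for A
                   rng.random(10_000) < 0.20)   # 20% flag rate for B

# The simplest possible learner: predict each group's historical base
# rate. It reproduces the biased rates exactly, with no malice required.
for g in ("A", "B"):
    rate = flagged[groups == g].mean()
    print(f"group {g}: learned flag rate = {rate:.2f}")
```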

You can see the misadventures of Tay for yourself here.

What if the AIs start attacking us, the creators?

What if artificial intelligence itself turned against us? This doesn’t mean AI turning “evil” in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle” that can fulfill wishes, but with terrible unforeseen consequences.

In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer — by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended it.
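Here is a toy Python sketch of that misspecification, with all numbers invented: the objective the designers wrote counts cancer cases, and nothing in it penalizes eliminating the patients themselves.

```python
# Toy sketch of objective misspecification (all numbers made up).
# Written objective: minimize total cancer cases.
# Intended objective: minimize cancer cases among *living* people.

# Each action maps to (population_after, cancer_rate_after).
actions = {
    "fund research":      (8.0e9, 0.004),
    "improve screening":  (8.0e9, 0.0045),
    "eliminate everyone": (0.0,   0.005),
}

def written_objective(population, cancer_rate):
    # What the designers wrote: total cancer cases. Nothing here
    # penalizes driving the population itself to zero.
    return population * cancer_rate

best = min(actions, key=lambda a: written_objective(*actions[a]))
print("optimizer picks:", best)  # "eliminate everyone" -> zero cases
```

The fix is not a smarter optimizer but a better-specified objective, which is exactly what makes the “genie in a bottle” framing apt.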

How do we keep AI safe from enemies?

Cybersecurity is one of the biggest issues in the world, not just in the context of AI. We absolutely have to, in all cases, protect sensitive data from the hands of crackers and cybercriminals. Not only code, but information about artillery, ongoing defense research, defense tactics and other confidential documents must be given adequate security at all times. Facilities like nuclear power plants, mints, electricity substations, government buildings and army headquarters, if hacked, could severely damage a country’s ability to protect itself in times of duress.

What about the singularity?

This is an issue that comes up countless times when AI is discussed. AI, by virtue of its fast growth and its human-like (only much faster) approach to growing its intelligence, may very well one day parallel or even supersede human intelligence. That point, where humans are no longer the most intelligent, has been much debated, but with no firm conclusions as such. We cannot simply rely on a “kill switch”: that would be far too high a penalty to pay for all the benefits Artificial Intelligence affords us. One of the biggest AI-related dilemmas belongs to one of its most exciting applications, self-driving cars, where a car may face a scenario in which it cannot save both the pedestrian and the driver and is forced to choose. The singularity is something “science doesn’t have an answer to, yet”, because the question will carry more weight as its immediacy becomes more apparent. The stage of singularity is still theoretical and will take some time to come to fruition.

What does it all come down to? It comes down to developers and consumers: the software industry runs first and foremost on the needs of the consumer, just like every other industry. AI brings with it paradigm-shifting prospects as well as seriously concerning flaws, but the fact remains that the growth of AI is still in our hands. We still hold the reins on how the story of AI is scripted. Of course, AI will become smarter, but the growth is incremental, and robotics is already on a slow growth curve as it is. Self-driving cars are still on their way to becoming mainstream. All in all, the problems and concerns listed above can be recognized right now, but for now, the benefits of AI far outweigh them. AI is helping solve many more problems than it creates, and that scale is likely to keep tipping in its favor for the foreseeable future.

By Bhanu Bhandari
