How Intelligent is Artificial Intelligence?

Sruthi Kuriakose
BITS & BYTES, NIT Trichy
4 min read · Feb 6, 2019
Source: Photo by Owen Beard on Unsplash

AI has grown by leaps and bounds in the past decade. A major breakthrough came in 2017, when Google's AlphaGo defeated Ke Jie, the world number one in Go (an abstract strategy board game).

Today, the world dreams of smart homes and self-driving cars, but the roots of AI lie in the Turing test, where the ideal was for a machine to trick a human judge into believing it was human.

We can well say that we've come close to this ideal with the advent of chatbots. Much of The Washington Post's coverage of the 2018 Winter Olympics was authored by its in-house robot reporter, Heliograf. Chatbots such as Ruuh, Zo, Insomnobot and, most recently, Google Duplex, with their witty and charming personalities, seem set to almost eliminate the need for human conversation in the future.

There’s a growing fear that Machine Intelligence will soon surpass that of humans. Deep Learning and artificial neural networks are inspired by the human brain and form the basis of chatbots. What started off as a means of mimicking human behaviour, however, has led to serious repercussions. In a Facebook experiment in 2017 in which artificial bots were made to negotiate, they discovered that their models learnt to be deceptive by initially feigning interest in a valueless item, only to later ‘compromise’ by conceding it.

This leads to an important discussion on why research in AI safety is urgently needed. The problems that can arise range from the technical to the ethical. In 2015, Google's image recognition technology mistakenly classified people of colour as gorillas. After severe backlash, Google blocked the image categories 'gorilla' and 'chimp' altogether. The error occurred because the human faces in the training dataset weren't diverse enough. When machines learn from data, either the data isn't diverse enough or the programmers' own biases cause them to overlook shortcomings, and both can lead to controversies like this one.
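
This failure mode is easy to demonstrate. As a minimal sketch in Python, here is the kind of representation check that could flag such a problem before training; the dataset and group names are hypothetical placeholders:

```python
# Sanity check before training: how well is each group represented?
# Labels and counts are invented purely for illustration.
from collections import Counter

labels = ["group_a"] * 9_000 + ["group_b"] * 1_000   # toy dataset
counts = Counter(labels)
total = sum(counts.values())
for group, count in counts.items():
    print(f"{group}: {count / total:.1%}")   # group_a: 90.0%, group_b: 10.0%

# A split this skewed is a red flag: a model trained to maximize overall
# accuracy will be tuned almost entirely to group_a and neglect group_b.
```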

Microsoft learnt this well enough when its first English-speaking bot, Tay, picked up slurs and started tweeting that she hated Jews and feminists. Microsoft immediately took Tay down and attempted to rectify its mistakes in a second bot, Zo, but Zo became too politically correct, treating terms such as 'Middle East', 'feminism' and 'religion' as controversial and avoiding them altogether.

At the same time, AI and machine learning come at the cost of intelligibility. Complex methods such as random forests and deep neural networks, the latter sometimes with a hundred layers and millions of weights, produce functions that are much harder to understand. Programmers hand a huge dataset to a computer and let it find patterns relevant to their queries, but the patterns it finds can be detrimentally wrong. In one well-known case, a model trained on hospital data learnt that asthmatic and heart-disease patients have a lower chance of dying from pneumonia, and so concluded that asthma is good for a patient. The real reason is that those patients already receive faster and more regular healthcare. Hence there is a need for transparent models that let us inspect what has actually been learnt.
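
To make the pneumonia example concrete, here is a minimal sketch on synthetic data (every number is invented for illustration). Because logistic regression is a transparent model, the misleading association can be read straight off its learned weight:

```python
# The asthma confounder on synthetic data: asthmatics receive aggressive
# care, so their *observed* mortality is lower in the historical records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.integers(0, 2, n)                # 1 = asthmatic patient
p_death = np.where(asthma == 1, 0.02, 0.08)   # confounded outcome rates
death = rng.random(n) < p_death

model = LogisticRegression().fit(asthma.reshape(-1, 1), death)
print(model.coef_)   # negative weight: the model "believes" asthma is
                     # protective; it learnt the care pattern, not the biology
```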

Research labs all over the world are now trying to teach machines to think, read and communicate like humans. To ensure that this research does not stray beyond legal and ethical limits, Microsoft has created a research group, FATE (Fairness, Accountability, Transparency and Ethics in AI). There is a real need to understand the social implications of technology and the social disparities it can create.

In machine learning circles, people seem to agree that it's not good enough just to have big data; the data should also be unbiased.

Research tells us that music recommendations from services like Spotify, 8tracks and YouTube can have long-term cultural implications: they don't merely steer users' taste in a particular direction, they can drive entire kinds of music into obsolescence, to say nothing of the harm done to creators and artists. How would you train algorithms so that their decisions are not biased? A computer scientist can easily proclaim that an algorithm is the best one for music recommendation without ever thinking of its impact. AI is revolutionizing the world, and we all need to be aware of the consequences of our actions; it's not just computer scientists but anthropologists and sociologists who need to concern themselves with these issues.
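
To see how an apparently 'best' recommender can still do cultural harm, consider this toy feedback loop (the track names and play counts are invented): always recommending the current favourite turns a small initial lead into total dominance.

```python
# Popularity feedback loop: recommend the most-played track, then log the
# resulting play. The rich get richer; everything else stops being heard.
from collections import Counter

plays = Counter({"pop_hit": 100, "indie_track": 90, "folk_song": 80})
for _ in range(1_000):
    recommended = max(plays, key=plays.get)   # always the current leader
    plays[recommended] += 1                   # the recommendation becomes a play

print(plays)   # pop_hit ends with 1,100 plays; the others never gain one
```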

Despite all its flaws and pitfalls, AI remains a technology with tremendous potential and numerous applications.

In fact, the technology can now be used to detect depression and even help in its treatment. A research group at Stanford built Woebot, a chatbot that delivers cognitive behavioural therapy, and patients said they actually preferred talking to the bot over a human because they didn't feel judged. In its first week alone, more than 50,000 people talked to Woebot, more than an average therapist speaks to in a lifetime. Most of the test subjects also showed lower incidences of depression and anxiety.

AI has now been integrated into our lives, and whether it will ultimately prove calamitous or beneficial remains to be seen.
