AI: what it is, what it is not and what to expect

Are machines becoming sentient? Can they do complex logical thinking like us? Will AI one day replace us at every task, or even go rogue and exterminate the human race?

Pedro Costa
PublicThought

--

Over the last couple of years, consumers have been bombarded with amazing claims about Artificial Intelligence (AI) in many parts of their daily lives. Almost every week we hear about products, news stories, technology company ads and other mixed information about ever more extraordinary things an AI can do.

So, are machines becoming sentient? Can they do complex logical thinking like us? Will AI, in the near future, replace us at every task, or even go rogue and exterminate the human race?

The short answer for all of these questions is NO. Far from it.

A better term for what we have now is Machine Learning (ML), but that does not sell a product or a news story quite so well, does it?

Today ML is booming, but many of its theories, models and underlying mathematics have been around for decades, because at their core they are quite simple. So why is it important now? What changed?

Well, Big Data and huge server farms (AKA the cloud) happened, and together they brought the theories' potential into practice and into wide consumer applications. Today we are sitting on a gold mine of information, along with the vast processing power those server farms provide. This means that analyzing and categorizing millions of terabytes of data is becoming trivial, and from that data we extract information. Because it is categorized, we can find relations between seemingly arbitrary things and spot patterns in the information that no human could.

So what exactly is ML? Let's focus on one of the most popular families of ML models, Deep Learning (DL), and give an example of how Google transformed the way we handle photos and search through them.

Google is one of the major players in ML and DL. They consider it so important that back in 2017 Sundar Pichai said Google was transitioning from a mobile-first strategy to an AI-first (read: ML with a nicer name) strategy: AI would come first when thinking about services and products from then on.

One of the first services to show this off was the mind-blowing Google Photos, in which we can search for "tree" and it finds pictures with trees in them. This is done using various models, most of which are DL algorithms.

Left: original photo. Right: the same photo, exaggerated by an ML algorithm asked to amplify patterns it recognizes.

These algorithms analyze all the information in pictures and find patterns. After processing millions or even billions of images, the system finds many patterns, each with a certain probability, and groups them by the likelihood of being the same thing. That way the images can be categorized: a cat, a tree, even a human face, with some help from us (telling the system the name of what it just found). The results can also be connected with other data. If you take a picture of the Eiffel Tower, the ML system might tag the picture's location as Paris, even though the photo was taken without GPS information.
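The grouping step described above can be sketched as a toy nearest-prototype classifier. Everything here — the feature names, the numbers, the categories — is invented for illustration; real DL systems learn far richer features from raw pixels:

```python
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(points):
    # Coordinate-wise average of a list of feature vectors.
    return tuple(sum(coord) / len(points) for coord in zip(*points))

# Hypothetical feature vectors extracted from labeled example photos,
# e.g. (furriness score, leafiness score) -- purely invented numbers.
labeled = {
    "cat":  [(0.9, 0.1), (0.85, 0.2)],
    "tree": [(0.1, 0.9), (0.2, 0.95)],
}

# One prototype per category: the average pattern of its examples.
prototypes = {name: mean(points) for name, points in labeled.items()}

def categorize(features):
    # The category whose prototype is closest, i.e. the most likely match.
    return min(prototypes, key=lambda name: distance(features, prototypes[name]))

print(categorize((0.8, 0.15)))  # lands in the "cat" group
```

The "help from us" in the text corresponds to the labels in `labeled`: the system groups similar vectors on its own, but we supply the names.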

One of the first famous tests was the recognition of cats in YouTube videos. DL algorithms made it possible to recognize cats, without any prior labels, with a degree of certainty higher than any previous state-of-the-art system.

Plot of a cost function (lower is better). Source: https://codesachin.wordpress.com/tag/gradient-descent/

All of this is done with mathematics and probabilistic approximations of what the result should be, based on information from millions of similar things. On some tasks, ML systems can already analyze better than a human; one example is examining skin tags for possible signs of cancer. And the more health data (historical and present) a system has from patients to cross-reference, the better (more probably correct) its results will be.
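The cost function in the figure above is typically minimized by gradient descent: repeatedly step downhill until the cost stops shrinking. Here is a minimal sketch with an invented one-parameter toy cost, not a real ML model:

```python
def cost(w):
    # Toy cost function with its minimum at w = 3 (cost 1).
    return (w - 3.0) ** 2 + 1.0

def gradient(w):
    # Derivative of the cost with respect to w.
    return 2.0 * (w - 3.0)

w = 0.0                # arbitrary starting guess
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * gradient(w)   # step downhill

print(round(w, 3))  # converges close to 3.0, where the cost is lowest
```

In a real DL system, `w` is millions of parameters and the gradient comes from the training data, but the downhill-stepping idea is the same.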

There is, of course, some margin for error, but machines are becoming better than humans at the same tasks as systems evolve with more data and as erroneous or discrepant data is removed from the feed. A real issue with these systems is that they are black boxes: the algorithm can say, with some probability, that a skin tag might be cancerous, but it cannot say why it arrived at that conclusion. A doctor can explain, drawing on experience and study, that this colour and that shape of a bump mean a high probability of cancer; the ML system cannot. It just compares the picture(s) with millions of other cases and places it in a probability range for a cancer diagnosis, without giving a reason for choosing one way rather than the other. To our human way of thinking this can make ML systems seem less credible, even though they already outperform most doctors on specific tasks. This is a challenge ML needs to tackle to gain more acceptance in health systems.
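To make the "black box" point concrete, here is a minimal sketch (with invented scores and labels) of how a classifier's raw scores become category probabilities via a softmax. The probabilities are all the system reports; nothing in them explains where the scores came from:

```python
import math

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores the model produced for one skin-tag image.
labels = ["benign", "malignant"]
scores = [1.0, 2.5]

probs = softmax(scores)
for label, p in zip(labels, probs):
    print(f"{label}: {p:.2f}")
# The system can report "malignant: 0.82", but the reasoning that led
# to the raw score 2.5 stays hidden inside the model.
```

A doctor's explanation ("this colour, that shape") has no counterpart here: only the final probabilities come out.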

Other dangers are bad data and human manipulation of the model for someone's own ends. For example, Microsoft's Twitter bot experiment (Tay) was cut short because the ML bot was fed racist and misogynist data by Twitter users. The ML system was just following the patterns it was fed. In that regard, I think it says more about human nature than about the dangers of AI.

As we can see, these systems are actually very straightforward: they are not sentient, they just follow very specific statistical rules to approximate the intended goal.

There is no higher function of thinking that might bring them to abstract logic or self-conscious thought, so the biggest worry is a non-issue: they cannot go rogue. Even today, top researchers in the area are not sure a conscious machine is even possible, let alone dangerous to humanity. Quoting leading researcher Andrew Ng:

There’s a big difference between intelligence and sentience. There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars

If we colonize Mars, there could be too many people there, which would be a serious pressing issue. But there’s no point working on it right now, and that’s why I can’t productively work on not turning AI evil.

Source.

So the alarming noise in the media, and even from some top businessmen like Elon Musk (one of the entrepreneurs I respect most today), is way off from reality. These systems are not sentient and are not taking everything away from us, but they can and will take away a lot of jobs.

On a high note, this can be thought of as the beginning of a new industrial age. At the start of industrialization there were also concerns that industry, cars and so on would bring unemployment and worse conditions for humans, and for a while they may have, until society adapted to the new reality and new jobs and conditions were created. Looking back, industrialization was good for humanity in most respects, and the same applies to AI/ML. Some jobs will be cut, but that will free people to focus on other things and open up more possibilities and better living for everyone. So don't stress about it too much, and accept evolution without worrying :)


Web Developer. Likes well-grounded opinions and satirical humour. Loves everything about technology and nature. Creator of Public Thought.