IMAGE: Ilya Akinshin — 123RF

Understanding concepts in Machine Learning

Enrique Dans

--

After reading a number of interesting articles that I thought could help me understand some issues related to machine learning, I decided that rather than let them accumulate, it made more sense to review some of them in a single entry.

The first, “The dark secret at the heart of AI”, is an MIT Tech Review article that does a very good job of explaining the “black box” problem, which we have commented on here on several occasions: as machine learning algorithms become more and more sophisticated, human brains find it impossible to understand the procedures used to reach certain results, which leads us, ultimately, to accept a black box that generates results we can test only on the basis of their quality, without understanding how they were reached. If we train an algorithm on a bank’s whole history of loans granted and denied, for example, so that it can make the decisions that a risk committee takes today, we will end up with a black box that produces decisions, but with no idea of how they were reached.
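
To make the idea concrete, here is a minimal sketch in Python, assuming scikit-learn is available: we train a small neural network on synthetic loan decisions (the features, figures, and toy approval rule are all made up for illustration). The fitted model will happily decide new applications, but its internals are just matrices of learned weights, with no human-readable rules to inspect.

```python
# A minimal sketch of the loan example, assuming scikit-learn.
# All data and feature names are synthetic/hypothetical; the point is
# that the fitted network exposes only weights, not readable rules.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical applicant features: income, debt-to-income ratio, years of history.
X = np.column_stack([
    rng.normal(50_000, 15_000, n),
    rng.uniform(0.0, 1.0, n),
    rng.integers(0, 30, n),
])
# Past committee decisions (1 = loan granted), generated from a toy rule.
y = ((X[:, 0] > 45_000) & (X[:, 1] < 0.6)).astype(int)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1_000, random_state=0),
)
model.fit(X, y)

# The black box now decides a new application...
print(model.predict([[52_000, 0.4, 7]]))  # e.g. [1]: granted
# ...but its "reasoning" is just matrices of numbers:
mlp = model.named_steps["mlpclassifier"]
print([w.shape for w in mlp.coefs_])      # (3, 32), (32, 16), (16, 1)
```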

This brings us to another question: when we feed an algorithm, we give it all the data available to us that we believe will contribute to the result. What we find is that machine learning redefines our concept of human intelligence, altering our perceptions of what we can or cannot do. The starting point for the machine is what we as humans are able to understand: from there on, everything is unexplored terrain, reached by methods that require a computing power our brain simply lacks. So, in the future, things that today seem normal for us to do will come to seem absurd, in the same way that we will accept machines doing more and more things that at present seem strange.

Soon, the chatbot will have become the norm for customer service and for many more things, such as explaining the news. The initial disillusionment and disappointment will give way to a time when, as is already the case with younger generations, we will prefer to talk to robots rather than to people, not only because they will give us better, more predictable and more accurate service, but because they will also eliminate the feeling of “annoying someone” (just as a link does not complain or answer back when clicked on 10 times in a row). A chatbot is simply a conversational algorithm that helps us with issues related to a product or service, and in the not-too-distant future it will seem strange that people once carried out these tasks.
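
For a sense of how basic the underlying idea can be, here is a deliberately trivial sketch of a conversational algorithm in Python: keyword-based intent matching. Production chatbots rely on trained language models rather than hand-written keywords; the intents and replies below are invented for illustration.

```python
# A toy "conversational algorithm": match keywords to canned intents.
# Everything here is made up for the sketch; real bots learn from data.
RESPONSES = {
    ("refund", "return"): "You can request a refund within 30 days of purchase.",
    ("shipping", "delivery"): "Orders usually arrive within 3-5 business days.",
    ("hours", "open"): "Support is available 24/7.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in RESPONSES.items():
        if any(keyword in text for keyword in keywords):
            return answer
    return "Sorry, I didn't understand. Could you rephrase?"

print(reply("How do I get a refund?"))         # matches the refund intent
print(reply("When will my delivery arrive?"))  # matches the shipping intent
```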

Similarly, other activities will soon be a thing of the past, whether programming traffic lights to avoid congestion, making investment decisions or diagnosing an illness, and it will seem “odd” or “primitive” that such activities were once carried out by people. Replacing taxi or truck drivers will be seen as something so obvious that it will seem incredible, and extremely dangerous, that this activity was previously carried out manually by humans, and we will see the millions of victims on the road as a logical consequence of that primitivism. Does that mean that many people doing these jobs will be made unemployed? Possibly, but the solution will not be to tax the robots that have gone on to carry out those activities, but to train people to carry out other related activities. In the meantime, cutting social benefits, as seems to be the trend in countries like the United States, will only make the problem worse.

None of this absolves us of the responsibility of looking for methodologies that help trace the decisions made by machines. The article describing the nightmare scenario envisioned by Tim Berners-Lee, in which decisions in the financial world are made by machines that create and manage companies but are unable to relate to typically human notions (which are difficult to explain to a machine) such as social impact or the common good, is undoubtedly worth reading. It also quotes a recent Vanity Fair interview with Elon Musk in which he talked about the same kind of dangers arising from automatic optimization, and about what might happen with an algorithm that optimizes strawberry production:

“Let’s say you create a self-improving AI to pick strawberries and it gets better and better at picking strawberries and picks more and more and it’s self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever.”

On this basis, should we stop developing this type of technology? Obviously not. Stopping technology is impossible; the only thing we can do is advance the knowledge related to its development and try to prevent nightmare scenarios. There will always be somebody somewhere determined to improve the technology, particularly if there is a financial incentive.

Do you like to draw? Are you any good? Well, try drawing with the assistance of an algorithm that tries to figure out what you are trying to depict and proposes better ways of doing so.

Think about what that algorithm is doing, test it, and remember that what you are seeing is only the beginning, the initial input. Now think forward a few years, to when not only will many more people have experimented with it, but many more alternative representations will have been proposed by artists, in different styles and with different possibilities. And we’re just talking about a relatively “simple” pattern recognition algorithm. Get it?
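
As a toy illustration of pattern recognition of this kind (and not the method the actual tool uses), here is a sketch that matches a doodle, given as (x, y) points, to the closest of a couple of template shapes by average point-to-point distance after normalization; the templates and the matching rule are assumptions made for the example.

```python
# Toy shape recognition: normalize a doodle and compare it to templates.
# The templates and distance rule are assumptions for this sketch only.
import numpy as np

def resample(points, k=32):
    """Center a stroke, scale it to unit size, and keep k evenly spaced points."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)
    pts = pts / np.abs(pts).max()
    idx = np.linspace(0, len(pts) - 1, k).astype(int)
    return pts[idx]

def make_templates(k=32):
    t = np.linspace(0, 2 * np.pi, k)
    circle = np.column_stack([np.cos(t), np.sin(t)])
    line = np.column_stack([np.linspace(-1, 1, k), np.zeros(k)])
    return {"circle": circle, "line": line}

def classify(doodle):
    stroke = resample(doodle)
    # Score each template by mean point-to-point distance; lower is better.
    scores = {name: np.linalg.norm(stroke - template, axis=1).mean()
              for name, template in make_templates().items()}
    return min(scores, key=scores.get)

# A wobbly, roughly circular doodle should match the circle template.
t = np.linspace(0, 2 * np.pi, 50)
wobbly = np.column_stack([np.cos(t) * 1.1, np.sin(t) * 0.9])
print(classify(wobbly))  # -> 'circle'
```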

(In Spanish, here)

--

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)