We seem to have failed machine learning

Jarno Kartela
The Hands-on Advisors
4 min read · Apr 17, 2018

The start of 2018 has been quite a ride for AI engineers. Machine learning has been accused of many things, from affecting political views to starting wars and keeping people in the dark about certain types of news and insight. Most recently, Cambridge Analytica targeted persuadable voters by using data on their psychological archetypes to influence their behaviour.

This raises the question: are we doing something inherently wrong by using machine learning in the first place? Is the technology at hand too immature?

First, machine learning, like any other kind of programming, is merely a tool. The problems arise because it is not explicitly programmed. Take the “news bubble” problem as an example. No one in their right mind would create a recommendation system that forces people to read solely alt-right news. Since forcing alt-right content will only work for a subset of people, such a system would generate hardly any results when applied to the general public. Yet when we apply machine learning, more specifically collaborative filtering and matrix factorization, to such a context, it will create this effect without being explicitly told to do so.

The system learns from the behaviour of like-minded people and uses that learning to target content. It just so happens to learn that people who have read alt-right news might like more alt-right news, and it continues on that loop until we’re in a bubble and can’t get out. As engineers we have not done anything explicitly bad; we have simply created a system that implicitly learns to optimize viewership by recommending whatever works best for a specific audience and user.
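To make the loop concrete, here is a minimal sketch in Python. It is not the recommender of any real platform; the data, the factorization and the two “topics” are all illustrative assumptions. It simply shows that plain matrix factorization, fed its own recommendations, tends to keep a user inside whatever topic their like-minded group already reads.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 20, 30, 4

# Implicit feedback: 1 if a user read an article, 0 otherwise.
# Items 0-14 are "topic A" articles, items 15-29 are "topic B".
reads = np.zeros((n_users, n_items))
reads[:10, :15] = (rng.random((10, 15)) < 0.4).astype(float)   # users 0-9 lean topic A
reads[10:, 15:] = (rng.random((10, 15)) < 0.4).astype(float)   # users 10-19 lean topic B
reads = np.clip(reads + (rng.random((n_users, n_items)) < 0.05), 0, 1)  # a little cross-topic noise

def factorize(R, k, steps=60, lr=0.05, reg=0.02):
    """Plain matrix factorization trained with SGD, treating every cell as observed."""
    U = rng.normal(scale=0.1, size=(R.shape[0], k))
    V = rng.normal(scale=0.1, size=(R.shape[1], k))
    for _ in range(steps):
        for u in range(R.shape[0]):
            for i in range(R.shape[1]):
                err = R[u, i] - U[u] @ V[i]
                U[u] += lr * (err * V[i] - reg * U[u])
                V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# The feedback loop: recommend, the user reads the top item, retrain, repeat.
user = 0
for round_ in range(5):
    U, V = factorize(reads, k)
    scores = U[user] @ V.T
    scores[reads[user] > 0] = -np.inf       # don't re-recommend items already read
    top = int(np.argmax(scores))
    reads[user, top] = 1.0                  # the user follows the recommendation
    print(f"round {round_}: recommended item {top} "
          f"({'topic A' if top < 15 else 'topic B'})")
```

Nothing here is malicious; the concentration on one topic falls out of optimizing reconstruction of past behaviour and then feeding the output back in as new behaviour.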

Cambridge Analytica and the like are, however, quite different. They have used machine learning explicitly for unethical purposes. This is profoundly different from what happens when machine learning implicitly learns to do something whose results are generally unwanted (but are optimal for the purposes of the company behind it, say a media company). If machine learning can lead to unwanted outcomes implicitly, then explicitly programming a system to serve some sort of propaganda is actually quite trivial.

We’re in a bit of a dilemma. We need to actively fight for the ethical use of machine learning to reduce the amount of explicitly unethical engineering and goals. At the same time, we need to help the broader AI development community understand the side effects of these systems when they are given a specific metric to improve, such as return on ad spend, but no constraints on how to achieve it. To put it more bluntly, we need to find ways to achieve organizational goals without compromising ethics. In most cases, that will mean finding a balance between the two, since we can’t fully achieve both simultaneously using a single algorithm.

These problems are still mainly limited to social media and media in general. But we will soon find ourselves in a world where all decisions are bound to algorithms that are not explicit, i.e. they are driven by machine learning. These systems will control our planes, trucks, ships, cars, healthcare, decision-making and political choices, and work as personal assistants. We have only just scratched the surface of what is possible. We need to make sure that those upcoming systems cause more good than harm.

It will all boil down to how we manage AI engineering, not to breeding a species of extremely pure-hearted engineers. As mentioned, many undesired outcomes are the result of the implicit behaviour of a system that optimizes something. That optimization target, with its loss functions, rewards, goals and metrics, is what should be ethical. The thing we want to achieve, and the possible ways of achieving it, should be ethical. In nearly all cases it is a balance between organizational metrics and softer goals that are usually more important for us as a species, like improving democracy. It seems that Yudkowsky’s notion of “friendly AI” back in 2008 was not too far off.
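As one illustration of what putting the balance into the optimization target could mean in practice, here is a hedged sketch. The function, the data and the weight `lam` are hypothetical, not anyone’s production objective: predicted engagement (the business metric) and exposure to topics the user has not yet consumed (the softer goal) are combined into a single score, with the trade-off made explicit as a number.

```python
import numpy as np

def rank_with_diversity(relevance, item_topics, history_topics, lam=0.5):
    """Score = predicted engagement - lam * how saturated the user already is
    with each item's topic. lam encodes the engagement-vs-diversity trade-off."""
    history = np.asarray(list(history_topics))
    exposure = np.array([(history == t).mean() if history.size else 0.0
                         for t in item_topics])
    score = np.asarray(relevance) - lam * exposure
    return np.argsort(-score)            # best-first item indices

relevance   = np.array([0.90, 0.60, 0.40, 0.55])  # model's predicted engagement
item_topics = np.array([0, 0, 1, 2])              # topics of the candidate items
history     = [0, 0, 0, 0, 1]                     # the user has mostly seen topic 0

print(rank_with_diversity(relevance, item_topics, history, lam=0.0))  # engagement only
print(rank_with_diversity(relevance, item_topics, history, lam=0.5))  # balanced
```

With `lam=0` the ranking follows predicted engagement alone; with `lam=0.5` the top slot shifts to an item from a topic the user has barely seen. The point is not this particular penalty but that the balance becomes an explicit, reviewable number in the objective rather than an implicit side effect of whatever the model happens to learn.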

Luckily, there are multiple endeavours where we aim for the right things. DeepMind is focusing on large-scale energy problems that just might help save our planet [1], Microsoft, among others, is pursuing better healthcare [2], and researchers are finding ways to make nuclear energy safer [3] and even to help make fusion a reality [4]. And despite the recent setbacks, autonomous cars and traffic in general will eventually make transport much safer than it is now, as long as we understand that there is still progress to be made [5, 6].

Given the possibilities of AI and especially machine learning, it’s almost embarrassing how many of us just try to sell more ads. If we keep doing that without even trying to find better use cases, we have indeed failed machine learning, not the other way around.

[1] https://www.ft.com/content/27c8aea0-06a9-11e7-97d1-5e720a26771b

[2] https://futurism.com/microsoft-ai-machine-learning-discover-cure-cancer/

[3] https://futurism.com/researchers-training-ai-make-nuclear-reactors-safer/

[4] https://www.sciencedaily.com/releases/2017/12/171214144509.htm

[5] https://www.theregister.co.uk/2018/03/24/uber_fatal_self_driving_car_crash_video/

[6] https://www.ft.com/content/54abc2b0-2c21-11e8-97ec-4bd3494d5f14
