“Two CCTV cameras on a gray wall” by Scott Webb on Unsplash

Does the growth of machine learning require politically aware programmers?

Tom Read Cutting
Hackers at Cambridge
7 min read · Feb 21, 2018


Machine learning is a hot topic, with HaC producing one blog post and one workshop on it in the past three weeks alone! Furthermore, this technology is starting to have huge political impact, with no signs of slowing down. Whether by changing the way people behave, the way they think, the way information is spread, what people can do, or what they want to do, the effect this software is having cannot be overstated.

Although a lot of this impact is positive, there are increasingly worrying signs that machine learning could cause, and has caused, a lot of damage. Furthermore, there is no clear vision of how these issues will be tackled.

However, programmers are still (for the most part) relatively apolitical when applying themselves, despite the impact the work they create is having. I believe that if this changed, it could do a lot of good: not just by allowing programmers to consider how their creations will impact the world, but also by allowing them to educate others about this increasingly relevant area of expertise.

Why the laissez-faire attitude isn't working

Demand for machine learning is exploding. This is technology with amazing potential when used in the right way, but it is ultimately a tool whose effects are purely the result of those who use it. However, its rise has also led to a hands-off, pseudo-scientific attitude to tackling problems, the argument being that the computer is an objective observer simply analyzing data and producing results. The problem with this assertion is that it completely ignores the fact that programs learn from a subjective and messy world. Furthermore, a machine learning program can only see what it is exposed to, and can therefore develop biases and be manipulated in the same way a human can. Finally, an artificial intelligence will strive to maximize whatever metric it has been instructed to, with no human understanding of the consequences.

Therefore, despite the potential of the technology, it needs to be applied carefully, with consideration for its consequences. However, there are many cases where the flaws of AI are being exposed, with potentially worrying results.

The video website example

Video uploading platforms on the web are an amazing example of both the awesome stuff machine learning can do and the pitfalls it presents. I would like to preface this by saying that I think these platforms are fantastic; however, they are also in an incredibly privileged position, which is why this example is so significant.

The awesome side of AI

Taking one example: YouTube has one billion users, with over 400 hours of video uploaded to the platform every minute. Managing this sheer amount of data is no easy feat. To ensure this content was watched as it was uploaded, you would need at least 24,000 people constantly watching streams of uploads 24/7.
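That figure is just arithmetic: 400 hours of new footage per minute is 24,000 hours of footage per hour of real time. Here is a minimal back-of-envelope sketch (the upload rate is the publicly quoted figure above; the rest is simple multiplication):

```python
# Back-of-envelope check of the moderation numbers above.
hours_uploaded_per_minute = 400                        # publicly quoted upload rate
hours_uploaded_per_hour = hours_uploaded_per_minute * 60

# Watching 24,000 hours of new video every hour takes 24,000 people
# each watching one stream non-stop.
viewers_needed = hours_uploaded_per_hour
print(f"{viewers_needed:,} people watching uploads 24/7")  # 24,000
```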

Ideally, people would watch all of this content to make sure it is allowed by YouTube's policies, is legal, and doesn't violate copyright. Despite being imperfect, this is what YouTube's machine learning systems do. It is a feat that would be impossible for humans to manage, and without AI something like YouTube wouldn't be able to exist.

So thanks to AI, we live in a world where relatively self-moderating, self-managing video content platforms can exist while keeping the marginal cost of serving a video as low as possible. This democratizes content creation and allows these websites to handle traffic on a scale that humans couldn't, without the costs that would make the websites infeasible in the first place.

The less awesome side of AI

However, there are some less awesome parts of machine learning systems, not all of which are unavoidable. Politically especially, these systems are starting to have a huge effect through Facebook, Twitter, YouTube and many other big social media platforms.

One of the biggest downsides of AI is how it can negatively affect user behavior in unintended ways, often as a side effect of simple systems. An example of this is driving user engagement:

A video content platform has a large incentive to keep its users on the website for as long as possible, and a simple way to maximize this is to build a basic machine learning system that decides which videos someone will see next. This system can crunch through the data of all user behavior, identifying patterns in how individuals interact with the website. Because it controls which videos users are encouraged to watch next, it can try to calculate which videos will make users more likely to stay engaged, with the goal of increasing the average length of a visit.
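As a rough sketch of what that objective looks like in code (every name, field and number here is hypothetical, not any real platform's system):

```python
from typing import Dict, List

def rank_next_videos(candidates: List[Dict], user_history: List[Dict]) -> List[Dict]:
    """Naive engagement-maximizing ranking: order candidate videos purely by
    the watch time they are predicted to add to this user's session."""

    def predicted_watch_seconds(video: Dict) -> float:
        # A trained model would sit here; this stand-in simply rewards
        # whatever resembles content the user has already watched a lot of.
        similarity = sum(past["topic"] == video["topic"] for past in user_history)
        return video["avg_view_seconds"] * (1.0 + 0.1 * similarity)

    # Note what is missing: no term for accuracy, well-being or long-term effects.
    return sorted(candidates, key=predicted_watch_seconds, reverse=True)
```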

However, this can have unintended consequences, because the content that makes a human brain decide to stay engaged with a video platform in the short term may not be what is best for the individual that brain belongs to, or even for their long-term engagement with the platform. Similar to the infamous stamp collector (a hypothetical robot designed to maximize stamp collection that eventually takes control of the world's production systems and ensures that all natural resources are turned into stamps), such a system would happily show users videos that result in the end of human civilization if doing so increased short-term engagement, despite this not really benefiting the video platform's provider. Although this example is extreme, the issues it demonstrates are real ones facing social media platforms today, with political consequences like the promotion of conspiracy theory videos that keep users hooked to their monitors.

So why have politically aware programmers?

The problems and challenges presented by machine learning are complex and multi-faceted. There are many incredibly smart people working towards solutions, and these problems aren't going to solve themselves overnight. Do I think politically aware programmers alone are a be-all-and-end-all answer? No. However, I do think they are one of many pieces of the puzzle, for several reasons.

The first is simple: the more domain experts are aware of the political implications of machine learning, the more domain experts there are thinking about the challenges and problems it presents, and actively adding to the conversation and its solutions with their own unique perspectives, thoughts and ideas.

Secondly, there is the output they produce. When evaluating their algorithms and work, politically aware programmers can think about the wider-reaching consequences of what they build, outside the simplified model in which they would previously have imagined their systems operating. Taking the simple video-serving example, having the implementers question the side effects of the machine learning systems they are building could allow for much smarter solutions to the problems being tackled. The algorithm could take long-term user engagement into account, possibly even disincentivizing obsessive or unhealthy engagement. Efforts could be made to avoid promoting damaging, disturbing or deceitful videos, as sketched below.
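Continuing the earlier hypothetical ranking sketch (again, every field and weight here is made up purely for illustration), such an adjustment might look like this:

```python
from typing import Dict, List

def rank_next_videos_with_care(candidates: List[Dict],
                               session_minutes: float) -> List[Dict]:
    """Hypothetical tweak: still rank by predicted watch time, but damp scores
    as a session drags on and down-rank content flagged as misleading."""

    def score(video: Dict) -> float:
        base = video["avg_view_seconds"]
        # Diminishing returns once a session runs long, so binges aren't rewarded.
        fatigue_discount = 1.0 / (1.0 + session_minutes / 60.0)
        # Heavily down-rank videos a separate classifier has flagged as deceptive.
        trust_penalty = 0.2 if video.get("flagged_misleading") else 1.0
        return base * fatigue_discount * trust_penalty

    return sorted(candidates, key=score, reverse=True)
```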

Thirdly, a simple benefit would be for domain experts to educate the general public and policy makers about machine learning, as it’s an increasingly relevant topic understood by relatively few people.

Finally, there is the economic perspective: the way companies behave is largely a function of the economics involved. The reason machine learning and AI form such a growing domain is their potential economic benefit. They allow loads of useful work to be done by cheap machines without the input of expensive humans, and they can be used to make money in other ways too, such as by pushing content to users that will keep them engaged with your product.

However, because these systems are so influential and affect user behavior so much, they can also be economically harmful. No one benefits from the promotion of engaging videos whose political harm ultimately leads to the end of civilization. Although no individual machine learning system has such a direct, extreme and measurable effect, if the wider economic consequences of these systems were considered by their politically aware implementers, adjustments could be made to ensure that doesn't happen.

Will things get better?

I think they will, simply because the incentives are there. In many ways, we are reaching a turning point as companies realize the consequences their systems are having. It's worth pointing out that companies depend on political stability in order to thrive, so the incentive is there to prevent their systems from having destabilizing effects. Facebook is trying to tackle the issue of fake news. DeepMind, the pioneers behind AlphaGo, have launched an Ethics & Society research unit in order to “attempt to scrutinise and help design collective responses to the future impacts of AI technologies”. This is all promising work; however, the dangers of machine learning will always be present, and this will be an active learning experience that needs engaged people coming up with real solutions. The solutions aren't clear or obvious, and having a wider pool of innovators can only be a good thing.

Special Thanks

Special thanks goes to Raphael Schmetterling for editing this post.
