There’s an algorithm for that. Or there soon will be

Sep 1, 2017

By Marina Bradbury, OECD Public Affairs and Communications Directorate

Would you like a machine to decide on your medical treatment, whether you could insure your house, if you should be hired, or what news stories you read? It may be happening to you already. Every time you go online to make a purchase, search for a restaurant, access your bank account or simply interact with your mobile device, you are creating a digital trail of data that is being tracked and stored. This “big data” is fodder for machine learning algorithms that will, for example, suggest what to buy.

Traditionally in computer science, algorithms are a set of rules written by programmers. Machine learning algorithms are different: they can improve the software in which they are embedded without human intervention. The more data they receive, the greater their ability to “understand” and predict patterns, including patterns in human behaviour. They are another step along the road to creating artificial intelligence (AI), even if we don’t know where this road is leading. As Stephen Hawking and his colleagues, writing in The Independent, claimed, “Success in creating AI would be the biggest event in human history”, before going on to say, “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
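To make the distinction concrete, here is a minimal sketch in Python (using the scikit-learn library, with data invented purely for illustration): the first rule is written by a programmer, while the second is inferred from examples and shifts as more data arrives.

```python
# A minimal, illustrative contrast between a hand-written rule and a
# machine-learned one. All data below is invented for the example.
from sklearn.linear_model import LogisticRegression

# Traditional algorithm: a programmer writes the rule explicitly.
def rule_based_recommendation(purchases_this_month: int) -> bool:
    return purchases_this_month > 3  # threshold chosen by a human

# Machine learning: the rule is inferred from data, and can be
# re-learned as more data arrives.
past_behaviour = [[1], [2], [5], [7], [4], [0]]   # purchases per month
did_buy_again = [0, 0, 1, 1, 1, 0]                # observed outcomes

model = LogisticRegression()
model.fit(past_behaviour, did_buy_again)          # "learns" the threshold

print(rule_based_recommendation(4))               # human-coded answer
print(model.predict([[4]]))                       # data-driven answer
```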

We are living in an algorithmic society, and many argue that this is a positive thing. On an economic level, machine learning algorithms could help stimulate innovation and productivity growth. According to OECD research, big data used to feed machine learning algorithms can boost industries including advertising, health care, utilities, logistics, transport and public administration. When it comes to our day-to-day lives, algorithms can save us time and effort, for example through online search tools, Internet shopping and smartphone apps that leverage “beacon” technology to provide timely recommendations based upon our whereabouts. Computer scientist Pedro Domingos even predicts that in five years’ time, digital personal assistants will be more important than smartphones, with their capacity to aggregate information from various apps to predict our needs before we even know them.

However, the large-scale use of algorithms can also be threatening to us as citizens. For example, if algorithms allow companies to predict our purchases before we even make them, what implications does this have for our personal choices and privacy? Critics point towards the dangers of allowing companies to exploit vast amounts of personal data and restrict individual liberties.

Take the realm of insurance, loans and legal advice. Nowadays, our credit rating or health insurance record is often assessed by a machine, not a person, whilst virtual legal assistants are becoming increasingly common. On the one hand, this can be advantageous to companies, enabling higher levels of efficiency, and in turn more accessible prices. The legal industry is undergoing a veritable transformation thanks to algorithmic technology, with quantitative legal prediction (QLP) being a prime example. Making information-based predictions is at the heart of the legal profession. In addition, legal cases often require the analysis of large-scale data or document sets, which can pose a challenge to the cognitive limitations of humans. Since algorithms are able to make predictions based on “big data” with increasing accuracy, QLP is arguably set to play an increasing role.

On the other hand, when it comes to ordinary customers looking for legal support or a loan, automated systems may not be helpful. Critics warn that even if an algorithm is designed to be neutral, bias can creep in. This can be due to the unconscious biases of the programmers who build it. With machine learning algorithms, bias can also enter through the data they are fed: even if they absorb this data in a completely rational way, they will still reproduce forms of discrimination that already exist in society. For example, if you are looking for a bank loan, you might be offered a higher or lower rate depending on your postal address, name, age or gender.
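A short sketch, with fabricated loan records, shows how this happens: the learner below is “neutral” in the sense that it only tries to fit its training data, yet because that data encodes a historical bias against one postcode group, the learned rule reproduces it.

```python
# Sketch: a "neutral" learner reproduces bias present in its training
# data. The loan records below are fabricated for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [income_in_thousands, postcode_group]
# Suppose historical lending discriminated against postcode group 1.
X = [[40, 0], [42, 1], [55, 0], [54, 1], [60, 0], [61, 1]]
y = [1, 0, 1, 0, 1, 0]  # 1 = loan approved in the historical records

model = DecisionTreeClassifier().fit(X, y)

# Two applicants with identical incomes but different postcodes:
print(model.predict([[50, 0], [50, 1]]))  # likely [1 0]: the bias is learned
```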

In the same way, whilst “talent analytics” is being used in HR to help build fairer recruitment practices, these new technologies do not offer a quick fix. For example, studies have found that women or people with “foreign” sounding names receive different kinds of job advertisements than white males. Nevertheless, global companies such as Google and McKinsey are already developing “talent algorithms” to recruit the best staff and assess performance. Moreover, some argue that companies that fail to move in this new direction may lose out later on. Overall, it seems that algorithms could have a positive impact on the future of recruitment, but only when used judiciously as part of a wider process towards inclusiveness.

The healthcare industry is another key area in which the paradigm of the algorithmic society is played out. For example, a recent study in the US revealed how machine learning can offer a faster and less resource-intensive method of detecting cancer, with machines automatically extracting crucial meaning from plain-text reports. Arguably, if machines can be used to review and analyse data, this frees up humans’ time to provide better clinical care. However, the ethical sensitivities of using algorithms to make critical health decisions must be addressed when developing innovative new models.
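The study’s own methods are not reproduced here, but a generic text-classification sketch (with invented report snippets and scikit-learn) shows the basic idea of extracting a signal from plain-text reports:

```python
# A generic text-classification sketch, not the method from the study:
# turning plain-text reports into features a classifier can learn from.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reports = [
    "irregular mass with spiculated margins",   # fabricated examples
    "no suspicious lesions identified",
    "dense nodule, biopsy recommended",
    "normal tissue, routine follow-up",
]
labels = [1, 0, 1, 0]  # 1 = flag the report for clinical review

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(reports, labels)
print(clf.predict(["small nodule with irregular margins"]))  # likely [1]
```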

Trading algorithms are transforming the financial world as we know it. Algorithmic trading has given rise to companies such as Quantopian, which invites “talented people everywhere” to create their own algorithms for free and pays those whose algorithms perform best, and Rizm, which lets those new to trading test and even trade using their own algorithms. However, the field is not without dangers: just one typo could lead to significant financial losses in a short amount of time. The ethics of algorithmic trading are also questioned by critics. With computer-driven or “quantitative” hedge funds enjoying success despite volatile markets, their business models will not escape scrutiny as algorithms continue to permeate our economic systems.
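As an illustration of what such a user-written trading algorithm can look like, here is a classic moving-average crossover rule, sketched in Python with invented prices; it is a textbook example, not anything offered by Quantopian or Rizm.

```python
# A classic moving-average crossover strategy, the kind of simple
# algorithm hobbyist platforms let users test. Prices are invented.
def moving_average(prices, window):
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=5):
    """Buy when the short-term average rises above the long-term one."""
    if len(prices) < long:
        return "hold"
    if moving_average(prices, short) > moving_average(prices, long):
        return "buy"
    return "sell"

prices = [100, 101, 99, 102, 104, 107]  # fabricated closing prices
print(signal(prices))  # "buy": recent prices trend above the longer average
```

A strategy this simple also shows why careful testing matters: flip one comparison operator and every “buy” becomes a “sell”.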

Finally, algorithms that drive search engines can influence the information we receive, impacting upon our outlook on the world and even our well-being. Take the phenomenon of “filter bubbles”. This relates to the way algorithm-based search tools are likely to show us information based upon our past behaviour, meaning it is unlikely to challenge our existing views or spark serendipitous connections. More worrying still, Facebook conducted an experiment in 2014 to test the reaction of users to negative or positive content. The results revealed that those shown more negative comments posted more negative comments, and vice versa. However, the way the experiment was conducted was criticised for its lack of transparency.
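The mechanics behind a filter bubble can be sketched in a few lines: rank candidate stories by how often the user engaged with similar content before (all data here is invented).

```python
# Sketch of the feedback loop behind a "filter bubble": ranking stories
# by similarity to what the user clicked before. All data is invented.
from collections import Counter

clicked_topics = ["politics", "politics", "sport", "politics"]
candidate_stories = {
    "election latest": "politics",
    "transfer rumours": "sport",
    "new science funding": "science",
}

profile = Counter(clicked_topics)  # the user's past behaviour

# Rank stories by how often the user engaged with that topic before:
ranked = sorted(candidate_stories,
                key=lambda story: profile[candidate_stories[story]],
                reverse=True)
print(ranked)  # unfamiliar topics ("science") sink to the bottom
```

Each click then feeds back into the profile, narrowing the ranking further with every visit.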

The paradigm of the algorithmic society is very much bound up in the unknown. In many ways, this is exciting, capturing how data is becoming the raw material of our era, a source of many possibilities for innovation and even the means to address social problems. Yet it can also be a threat. As Pedro Domingos puts it, “You can’t control what you don’t understand, and that’s why you need to understand machine learning”. The challenge will be to ensure that we live in a society which reaps the benefits that algorithms can bring, whilst ensuring that their implications are understood by all.

Useful links

OECD Policy Brief on the future of work: Automation and independent work in a digital economy

Originally published on the OECD Insights Blog.
