Algorithms Tell Us How to Think, and This Is Changing Us
As computers learn how to mimic us, are we starting to be more like them?
Silicon Valley is predicting more and more how we will respond to an email, how we will react to someone’s Instagram picture, and which government services we are eligible for, and soon a forthcoming Google Assistant will be able to call our hairdresser for us in real time.
We have invited algorithms practically everywhere, from hospitals and schools to courtrooms. We are surrounded by autonomous automation. Lines of code can tell us what to watch, whom to date, and even whom the justice system should send to jail.
Are we making a mistake by handing over so much decision-making authority and control to lines of code?
We are obsessed with mathematical procedures because they give us fast, accurate answers to a range of complex problems. Machine learning systems have been implemented in almost every realm of our modern society.
Yet what we should be asking ourselves is: are we making a mistake by handing over so much decision-making authority and control to lines of code? And how are algorithms affecting our lives?
In an ever-changing world, machines are doing a great job of learning, at a fast pace, how humans behave, what we like and hate, and what is best for us. We’re currently living within the chambers of predictive technology. Oh, hey there, Autocomplete!
Algorithms have drastically transformed our lives by sorting through vast amounts of data and giving us relevant, instantaneous results. By letting companies collect huge amounts of data over the years, we have given them the power to decide what’s best for us.
Companies like Alphabet and Amazon have been feeding their respective algorithms with the data they harvest, instructing AI to use that information to adapt to our needs and become more like us. Yet as we get used to these handy features, are we starting to talk and behave more like computers?
“Algorithms are not inherently fair, because the person who builds the model defines success.” — Cathy O’Neil, Data scientist
At this technological rate, it’s impossible not to imagine a near future where our behavior is guided or dictated by algorithms. In fact, it’s already happening.
Last October, Google rolled out its latest Gmail feature, Smart Reply, designed to help you write messages and quick replies. Since then it has taken the internet by storm, and many people have criticized the assistant, saying its tailored suggestions are invasive and make humans sound like machines, with some even arguing that its replies could ultimately influence the way we communicate or change email etiquette.
The main issue with algorithms arises when they get so big and complex that they start to negatively affect society, endangering democracy (hi, Mark Zuckerberg) or subjecting citizens to Orwellian measures, as in China, which is taking unprecedented steps to rank people’s credit scores by tracking their behavior with a dystopian surveillance program.
As machine-learning systems become more pervasive across many areas of society, will algorithms run the world and take over our thoughts?
Now, let’s take Facebook’s approach. Back in 2015 it rolled out a new version of the News Feed, designed as an ingenious way of ranking and boosting users’ feeds into a personalized newspaper, letting them engage with the kind of content they had previously liked, shared, and commented on.
The problem with “personalized” algorithms is that they can trap users in filter bubbles or echo chambers. In real life, most people are far less likely to engage with viewpoints they find confusing, annoying, incorrect, or abhorrent. Facebook’s algorithms give users what they want, so each person’s feed becomes a unique world, a distinct reality of its own.
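The engagement-driven ranking described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Facebook’s actual algorithm: the `Post` type, the scoring weights, and the sample history are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    author: str

def engagement_score(post, history):
    # Count how often the user previously engaged with content
    # sharing this post's topic or author (invented weights).
    score = 0
    for past in history:
        if past.topic == post.topic:
            score += 2
        if past.author == post.author:
            score += 1
    return score

def rank_feed(candidates, history):
    # Higher past engagement floats a post to the top, so the feed
    # keeps showing more of what the user already interacted with.
    return sorted(candidates,
                  key=lambda p: engagement_score(p, history),
                  reverse=True)

history = [Post("politics", "alice"), Post("politics", "bob")]
candidates = [Post("sports", "carol"), Post("politics", "alice")]
feed = rank_feed(candidates, history)
print([p.topic for p in feed])  # → ['politics', 'sports']
```

Because the score only rewards similarity to past engagement, the sports post never gets a chance to rise: the bubble reinforces itself with every click.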
Filter bubbles make it increasingly difficult to have a public argument because, from the system’s perspective, information and disinformation look exactly the same. As Roger McNamee wrote recently in Time magazine, “On Facebook facts are not an absolute; they are a choice to be left initially to users and their friends but then magnified by algorithms to promote engagement.”
Filter bubbles create the illusion that everyone believes the same things we do or has the same habits. As we already know, Facebook’s algorithms aggravated the problem by increasing polarization and ultimately harming democracy, with evidence suggesting that algorithms may have influenced a British referendum and the 2016 elections in the U.S.
“Facebook’s algorithms promote extreme messages over neutral ones, which can elevate disinformation over information, conspiracy theories over facts.” — Roger McNamee, Silicon Valley Investor
In a world constantly flooded with looming mounds of information, sifting through it all poses a huge challenge for many individuals. AI, used wisely, could potentially enhance someone’s experience online or help tackle the ever-growing load of content in a swift manner. However, in order to function properly, algorithms require accurate data about what’s happening in the real world.
Companies and governments need to make sure the data feeding their algorithms is neither biased nor inaccurate. Since nothing in nature is perfect, biased data should be expected inside many algorithms already, and that endangers not only our online world but also the physical, real one.
It is imperative to advocate for the implementation of stronger regulatory frameworks, so we don’t end up in a technological Wild West.
We should be extremely cautious about the power we give to algorithms. Fears are rising over the transparency issues algorithms entail, the ethical implications behind the decisions and processes they carry out, and the societal consequences for the people they affect.
For example, AI used in courtrooms may amplify bias and discriminate against minorities by taking into account “risk” factors such as their neighborhoods and links to crime. These algorithms could systematically make calamitous mistakes, sending innocent, real humans to jail.
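To see how a neighborhood feature can act as a proxy for bias, consider this deliberately oversimplified sketch. Nothing here reflects any real risk-assessment tool; the weights and the neighborhood table are invented for illustration.

```python
# Hypothetical, oversimplified risk model: the "neighborhood" feature
# smuggles in factors the model is not supposed to consider.
RISK_BY_NEIGHBORHOOD = {"downtown": 0.8, "suburb": 0.2}  # invented weights

def risk_score(prior_arrests, neighborhood):
    # Blend criminal history with a location-based factor. Two
    # defendants with identical records get different scores
    # purely because of where they live.
    history_term = 0.5 * min(prior_arrests, 10) / 10
    location_term = 0.5 * RISK_BY_NEIGHBORHOOD[neighborhood]
    return history_term + location_term

a = risk_score(1, "downtown")
b = risk_score(1, "suburb")
print(a > b)  # → True: same record, higher score
```

The danger is that “neighborhood” correlates with race and income, so a model that never sees those attributes directly can still penalize people for them.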
“Are we in danger of losing our humanity?”
As security expert Bruce Schneier wrote in his book Click Here to Kill Everybody, “if we let computers think for us and the underlying input data is corrupt, they’ll do the thinking badly and we might not ever know it.”
Hannah Fry, a mathematician at University College London, takes us inside a world in which computers operate freely. In her recent book Hello World: Being Human in the Age of Algorithms, she argues that as citizens we should be paying more attention to the people behind the keyboard, the ones programming the algorithms.
“We don’t have to create a world in which machines are telling us what to do or how to think, although we may very well end up in a world like that,” she says. Throughout the book, she frequently asks: “Are we in danger of losing our humanity?”
Right now, we are still not at the stage where humans are out of the picture. Our role in this world hasn’t been sidelined yet, nor will it be for a long time. Humans and machines can work together, combining their strengths and weaknesses. Machines are flawed and make mistakes just as we do. We need to be careful about how much information and power we give up, because algorithms are now an intrinsic part of humanity and they’re not going anywhere anytime soon.