Blind Devotion, Masked Betrayal

Technological literacy will take off the blindfold

Gonzalo Rosales
Future Vision
11 min read · May 1, 2019


Technology has profoundly changed the way modern society functions, from how we communicate to how we manage our finances. At the core of technology are algorithms, which are essentially instruction sets that computers follow to arrive at a solution. As simple as they may sound, these instructions can be combined in increasingly complex ways to solve increasingly difficult problems. We have come to trust these algorithms with decisions ranging from what we buy and who gets a loan to how criminals are sentenced.
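To make that concrete, here is a tiny example of my own (not drawn from any of the systems discussed below): a handful of instructions that find the largest number in a list. Every algorithm, however sophisticated, is ultimately built out of simple steps like these.

```python
# A minimal illustration: an "algorithm" is just a recipe of steps.
# This one finds the largest number in a list.
def find_largest(numbers):
    largest = numbers[0]          # step 1: start with the first number
    for n in numbers[1:]:         # step 2: look at each remaining number
        if n > largest:           # step 3: keep whichever is bigger
            largest = n
    return largest                # step 4: report the answer

print(find_largest([3, 41, 7, 19]))  # -> 41
```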

Yet we have failed to understand how they reach these conclusions. These algorithms, machine learning algorithms in particular, are fed our own data and use the trends they find to produce their results. In turn, they also adopt our biases. Algorithms can be made more just by accounting for machine bias in learning models, either by introducing equalizer models, which force algorithms to base their predictions on unbiased evidence, or by implementing machine ethics within the learning algorithms themselves.

Even these solutions come with their own set of challenges and controversies. Who decides what counts as biased data? Will companies make these decisions, or will it be the responsibility of society as a whole? The latter would clearly slow our current rate of innovation if we had to wait for regulators to deem every algorithm just. In the end, it comes down to our ability to understand how these algorithms work and to leave the final say with humans, not algorithms. Ensuring that everyone has a basic level of technological literacy will help us, as a society, make more educated decisions about when to trust algorithms in the future.

In her book Hello World: Being Human in the Age of Algorithms, Hannah Fry, a mathematician at University College London, presents a neutral view of algorithms. By labeling them as neither good nor bad, she encourages readers to form their own opinions and to think more critically about the people programming these algorithms rather than criticizing the algorithms themselves. She places algorithms into four broad categories based on their goals: prioritization (making an ordered list), classification (picking a category), association (finding links), and filtering (isolating what’s important). She argues that the vast majority of algorithms perform some combination of the above. She uses UberPool as an example, “which matches prospective passengers with others heading in the same direction. Given your start point and end point, it has to filter through the possible routes that could get you home, look for connections […] pick one group to assign you to […] all while prioritizing routes with the fewest turns for the driver […]” (Fry, 10).
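Two of those categories are easy to picture in a few lines of code. The sketch below is purely my own illustration, not Uber’s actual system: it filters a handful of made-up candidate routes, then prioritizes what remains by number of turns.

```python
# Toy sketch of filtering + prioritization (invented data, not Uber's code).
routes = [
    {"name": "A", "same_direction": True,  "turns": 7},
    {"name": "B", "same_direction": False, "turns": 2},
    {"name": "C", "same_direction": True,  "turns": 3},
]

# Filtering: isolate only the routes heading the right way.
candidates = [r for r in routes if r["same_direction"]]

# Prioritization: order them so the fewest-turn route comes first.
best = sorted(candidates, key=lambda r: r["turns"])[0]
print(best["name"])  # -> "C"
```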

I appreciate Fry’s neutral view of algorithms and the way she categorizes them; however, what stood out to me most was a term she uses to describe our reliance on them: “blind faith.”

It Made Me Do It!

To explain what she means by this, Fry uses the example of Robert Jones, a man who relied too heavily on his GPS and landed himself in legal trouble. On Sunday, March 22nd, 2009, Jones was driving back from visiting friends when he noticed he was running low on gas. His GPS showed him a shortcut.

Eventually, the shortcut led him onto a winding dirt road where he nearly plunged off a cliff, saved only by a wooden fence at the edge. He later appeared in court charged with reckless driving, where he admitted that he didn’t think to “over-rule the machine’s instructions […]” Unfortunately, blaming the GPS was not an argument the court was willing to accept.

“It kept insisting the path was a road…so I just trusted it. You don’t expect to be taken nearly off a cliff”

- Robert Jones

Although many of us might think we would never be the idiot who follows their GPS off a cliff, I believe this reveals a truth about how we generally tend to trust technology. The moral of Fry’s example is that we trust these algorithms because we believe they are so complex that there is no way they could fail us, when in reality each one is just a black box making decisions we don’t understand at all.

Criminal Sentencing

Society has come to blindly trust the algorithms that have a major impact on our lives, without understanding how they work or questioning the integrity of the data they use to reach their conclusions. The machine learning algorithms that decide everything from who gets a loan to who gets incarcerated also tend to learn our biases from the training data they are fed. After all, this data is created by society, so it includes everything about us, our prejudices included. The result is a phenomenon known as machine bias.

An example of these algorithms is COMPAS, which many courts use to help judges decide a defendant’s sentence. COMPAS is just one of many risk assessment algorithms that calculate the probability that a defendant will commit another crime, labeling people as either low-risk or high-risk based on many factors.
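COMPAS’s actual formula is proprietary, but a toy sketch can show the general shape of a risk assessment: weigh a handful of factors into a score, then cut the score into “low-risk” and “high-risk” labels. The factors, weights, and threshold below are entirely invented for illustration.

```python
# Hypothetical risk score; the real COMPAS formula is not public.
def risk_score(prior_arrests, age, employed):
    score = 0
    score += 2 * prior_arrests       # invented weight
    score += 3 if age < 25 else 0    # invented weight
    score -= 1 if employed else 0    # invented weight
    return score

def risk_label(score, threshold=4):  # invented cutoff
    return "high-risk" if score >= threshold else "low-risk"

print(risk_label(risk_score(prior_arrests=1, age=22, employed=False)))  # high-risk
```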

Risk assessment algorithms are now used in many courts in an effort to reduce prison overcrowding, and while they have proved effective at this, they run the risk of wrongly incarcerating those they believe are more likely to commit another crime. In 2014, U.S. Attorney General Eric Holder said, “Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice […] they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”

A ProPublica investigation found that this may indeed be true. The formula used by the algorithm was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants. COMPAS appears to be heavily biased against black people, giving them higher risk assessments than white defendants even when their crimes were less serious.
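The kind of check ProPublica ran can be sketched in a few lines: for each group, measure how often defendants who did not go on to reoffend were nonetheless flagged as high-risk. The records below are invented; only the method of comparing false positive rates across groups mirrors their analysis.

```python
# Toy records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True,  False), ("black", True,  True), ("black", False, False),
    ("white", False, False), ("white", True,  True), ("white", False, False),
]

def false_positive_rate(group):
    # Among people in this group who did NOT reoffend,
    # what fraction were still labeled high-risk?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, false_positive_rate(group))
```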

ProPublica argues that this is an example of machine bias and that it is unethical to let these algorithms influence such consequential decisions about people’s futures. The problem is that the data and formulas these algorithms use are not made publicly available by the companies that create them, so we cannot verify whether the higher risk scores assigned to black defendants come from biased data or from genuine differences in risk.

This is why we must ensure that the data used by these algorithms is accurate and does not simply replicate our own prejudices. One of the judges ProPublica interviewed, Boessenecker, notes that these scores aren’t always indicative of a person’s potential danger: “A guy who has molested a small child every day for a year could still come out as a low risk because he probably has a job […] Meanwhile, a drunk guy will look high risk because he’s homeless.” Judges, just like society as a whole, tend to rely on algorithms they do not understand.

It is clear that we must stop treating algorithms that hold so much power over people’s lives as black boxes, and finally start questioning their accuracy and accounting for their biases.

Right for the Wrong Reasons

Image: captions generated by the algorithm before and after the equalizer model was implemented

One way to account for bias is to force algorithms to use only certain parts of the data when reaching a decision. This is the approach taken by Dr. Kate Saenko, a researcher at Boston University, and her colleagues in their paper “Women Also Snowboard: Overcoming Bias in Captioning Models.” She and her colleagues study how to minimize bias in an algorithm that looks at an image and generates a caption for it.

One problem with this algorithm was that whenever it analyzed a picture of something like a kitchen, it would assume the person in the image was a woman, adopting the human bias that associates women with domestic chores. Their research found that the algorithm did not just rely on the biased data it learned from; it exaggerated and over-relied on it. For example, they found that if a word was present in 60% of the training data, the algorithm would predict it 70% of the time when captioning.
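One rough way to see that kind of amplification, sketched here with invented numbers rather than the paper’s actual data, is to compare how often a word like “woman” appears alongside “kitchen” in the training captions versus in the captions the model generates:

```python
# Compare co-occurrence rates in training data vs. model output (toy data).
def cooccurrence_rate(captions, word="woman", context="kitchen"):
    context_captions = [c for c in captions if context in c]
    with_word = [c for c in context_captions if word in c]
    return len(with_word) / len(context_captions)

training = ["woman in kitchen"] * 60 + ["man in kitchen"] * 40
generated = ["woman in kitchen"] * 70 + ["man in kitchen"] * 30

print(cooccurrence_rate(training))   # 0.6 -> rate in the data
print(cooccurrence_rate(generated))  # 0.7 -> exaggerated by the model
```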

To reduce this bias, Dr. Saenko and her team created an equalizer model that forces the algorithm to look directly at the person in the picture to predict their gender, instead of relying on surrounding objects and context. This ensures the algorithm is always “right for the right reasons,” rather than exaggerating society’s stereotypes about women and basing its output entirely on them.
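Conceptually, the equalizer puts two pressures on the model: when the person is masked out of the image, confident gender predictions should be penalized, and when the person is visible, confidence is encouraged. The snippet below is my own simplified illustration of that intuition, not the loss functions from the paper.

```python
# Simplified sketch of the equalizer intuition (not the authors' code).
def confusion_penalty(p_woman_masked):
    # Penalize confident gender guesses made from context alone
    # (person masked out); zero when the model says 50/50.
    return abs(p_woman_masked - 0.5)

def confidence_score(p_woman_full):
    # Measure confidence when the person IS visible (higher is better).
    return abs(p_woman_full - 0.5)

# Toy numbers: a biased model guesses "woman" from the kitchen alone.
print(confusion_penalty(0.9))   # 0.4 -> heavily penalized
print(confusion_penalty(0.5))   # 0.0 -> no penalty
print(confidence_score(0.95))   # 0.45 -> confidence driven by the person
```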

Too Good; Too Creepy

Target knows you’re pregnant before you do. How? Big data.

One argument for technological literacy is that we can’t afford to let algorithms outsmart us. Some algorithms get so good that their creators have to purposely dumb them down to make them less creepy. Target did just that.

In 2012, an angry man went into a Target just outside Minneapolis, demanding to talk to a manager:

“My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

The manager, who had no clue what the man was talking about, apologized, and even called again a few days later to apologize once more. Kudos to Target for that great customer service! The manager said that on that call, the father was somewhat abashed:

“I had a talk with my daughter,” he said. “It turns out there’s been some activities in this house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

Andrew Pole, a statistician hired by Target, could be said to be responsible for this. Target discovered fairly quickly that knowing about pregnancies in advance creeped people out. (Wow, what a shocking realization!) So Target got sneakier with the coupons. Pole told The New York Times:

“Then we started mixing in all these ads for things we know pregnant women would never buy, so the baby ads looked random. We’d put an ad for a lawn mower next to diapers. We’d put a coupon for wineglasses next to infant clothes. That way, it looked like all the products were chosen by chance. And we found out that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. She just assumed that everyone else on her block got the same mailer for diapers and cribs. As long as we don’t spook her, it works.”
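A hypothetical sketch of that decoy trick (my own illustration; Target has not published its approach) might look like this: pad the targeted baby items with unrelated products and shuffle, so the mailer looks random.

```python
# Mix targeted coupons with decoys so the personalization is less obvious.
import random

targeted = ["diapers", "crib", "infant clothes"]
decoys = ["lawn mower", "wineglasses", "garden hose", "video games"]

mailer = targeted + random.sample(decoys, 3)  # add a few unrelated items
random.shuffle(mailer)                        # no obvious baby-product grouping
print(mailer)
```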

Target’s philosophy is similar to first-date philosophy: if your date doesn’t know you’ve already stalked their entire social media presence, then it’s all good.

This sparks a conversation about how much power our data gives companies and how easily they can outsmart us. (Between 2002, when Pole was hired, and 2010, Target’s revenue grew from $44 billion to $67 billion.)

If it hadn’t been for the angry father, we might still not know that companies like Target manipulate us in such tricky ways. This is why we must all have a basic level of technological understanding, so we can assess how our data is used and recognize when companies have gone too far. Society can’t afford to be outsmarted by corporations and the algorithms they use.

Technological Literacy

“In order to be a technologically literate citizen, a person should understand what technology is, how it works, how it shapes society and in turn how society shapes it. Moreover, a technologically literate person has some abilities to “do” technology that enables them to use their inventiveness to design and build things and to solve practical problems that are technological in nature.”

- International Technology Education Association (ITEA)

In short, the ITEA defines a technologically literate person as someone who understands what technology is, how it works, how it shapes society, and how society, in turn, shapes it.

It is more important now than ever to stress the value of a technologically literate society. Technology affects almost every phase of our current and future lives. In order to be proactive, citizens of today must have a basic understanding of how technology affects the world and how they exist both within and around it. In the past, society could afford to develop its technological literacy gradually, through experience with technology. But given the speed at which modern technologies are being developed and their increasing impact on our lives, we simply can’t leave this up to chance anymore.

Reestablishing Education as the Great Equalizer

Technological literacy is a new basic skill and should become a fundamental subject taught alongside classic elementary courses like mathematics and English. Teaching computer science at a young age will give every member of society an analytical backbone, providing the baseline knowledge needed to live in an increasingly connected world and to analyze the algorithms and technology we use every day.

A study conducted by Code.org found that 9 in 10 parents believe their children should take a computer science course. However, the study also found that only 35% of high schools in the United States teach any computer science, and only 3 in 10 parents have actually asked school administrators to offer these courses.

There are clear challenges to establishing these courses in every high school across the country. As if finding teachers in general weren’t hard enough, finding computer science teachers in particular is even harder.

With the high pay in STEM careers, many computer science graduates don’t see the appeal of becoming a teacher with a much lower income. Those who do become teachers often find themselves drawn back into industry and leave their teaching jobs.

Another challenge is that implementing these curriculum changes nationwide is a long and difficult process: changes must be approved by education boards at the local, state, and federal levels. For us to become a technologically literate society, there must be change at the political level, and government funding should be allocated to attract and hire computer science teachers.


Gonzalo Rosales
Future Vision

A Mexican with a Mission | Boston University ’21 | Incoming Google APM