The Black Mirror Effect – Human problems not technology problems

Patrick Miller
Dec 20, 2017 · 4 min read

--

If it isn’t already, the Black Mirror Effect will come to be known as the phenomenon of people taking an amazing technology and screwing it up. If you haven’t seen Black Mirror yet, check it out (just don’t start with season 1, episode 1, “The National Anthem” – try “Nosedive,” “San Junipero,” “White Bear” or “White Christmas,” then go back and start from the beginning – you’re welcome).

While there will always be technologies conceived and created for evil, destruction, and otherwise nefarious purposes, most aren’t. Most are created to make our lives better. It’s people who find ways to use these new tools for corrupt and dishonourable ends. People in the field of technoethics are working to make sense of our relationship with technology as new technologies emerge and evolve in ways unintended and sometimes unseen by their creators. Joseph Marie Jacquard could not have predicted that his invention of punch cards for weaving looms in the early 1800s would eventually evolve to make possible Donkey Kong and computer viruses. Whether it’s the internet of things, cellular services, social media or gene therapy, there are always those who seem to find a way to misappropriate technologies to serve their own agenda, even at the expense of others. There are times, too, when people create new technologies or repurpose old ones simply because it is possible, without considering whether they should. Just because humans were capable of creating the atom bomb, pop-up ads or The Bachelor doesn’t mean they should have.

“… scientists were so preoccupied with whether they could, they didn’t stop to think about whether they should.”
— Ian Malcolm, fictional mathematician

In a recent episode of CBC’s Spark, Nora Young and Kevin Roose discuss lessons in tech anxiety from Frankenstein’s monster. They talk about “Frankenstein moments” as times when a creation goes out of control and causes disastrous results that were unintended and unforeseen. They highlight a recent example of Facebook’s ad tools temporarily letting advertisers target people who self-identified with anti-Semitic sentiments. Another example is the CRISPR gene editing technology, which was created as a therapeutic tool with the ability to eradicate devastating genetic diseases. However, it could also be used as an enhancement tool, with some deep moral and ethical implications for current and future generations. Coming to a global consensus on what we will and won’t use CRISPR for is a necessary but extremely difficult process. Only time will tell how people choose to use its power. “To forge ahead without thought about the consequences of one’s work is just irresponsible,” says Jennifer Doudna, author of A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution.

It seems like it’s not just those who misappropriate technology that don’t consider or care about the consequences or collateral damage. Kelly Weinersmith from Rice University was recently on the Science Friday podcast talking about her new book Soonish — Ten Emerging Technologies That’ll Improve and/or Ruin Everything. During the interview, she responded to a question about the sub-title of her book by saying, “We think it’s really important. And one thing that’s sort of under-reported, is to consider how these technologies could also be awful. And actually, while researching the book, we were surprised at how little it seemed most people were thinking about the potential negative implications. So we felt like an honest portrayal of the technology included both how it could make everything awesome, but also how it could ruin a lot of things.”

There is clearly a need for a balance between regulation and agility. How do we regulate emerging technologies to keep people safe and their privacy intact while supporting the possibilities that can be realized through their rapid iteration? There are no binding global laws to govern the moral and ethical use of technology. While the field of technoethics has been considering these issues for some time, we are still left with local and international laws that are not equipped or agile enough to deal with these increasingly complex concerns.

So, is it humans or technology that presents the risk? In education (and elsewhere), it is easier to blame the unsavoury actions of humans and to deflect the woes of distracted students and “lost basics” onto technology. It is easier still to cite the reduction and regulation of technology as the cure for humanity’s disconnectedness. Easier because we know that this simple solution (a reversal of technology, or even maintaining the status quo) is so unlikely. Trying to hold back the use of technology in education only contributes to schools’ irrelevance. Our students will still use these tools as they see fit outside school, without the guidance and professional judgement of someone able to give them the critical and ethical questioning skills needed to use them safely and effectively. Our students are already citizens of the world, whether we treat them that way or not. Not addressing technology’s promises and perils in schools (and at home) only leaves our young people to find their way alone and unsupported.

Really, the larger and more pressing question is…

How do we bridge the gap between what emerging technologies can do and the ability of humanity to adapt to use them for good?

As Black Mirror creator Charlie Brooker says, “It’s not a technology problem we have. It’s a human one.”


Patrick Miller

Education Officer, Ministry of Education, Innovation Design and Implementation Team, Toronto, ON