Staying Aware of Ethical Dilemmas in Science and Technology

--

For six years now, the Reilly Center for Science, Technology, and Values has released an annual Top 10 list of Ethical Dilemmas and Policy Issues in Science and Technology, read by nearly half a million people around the world.

We live in an era of rapid development, as technologies that seemed theoretical only a few years ago are increasingly incorporated into our daily lives. Our concern is that there’s little public dialog about the use and risks of these technologies, and that such dialog is necessary if public policy is to keep pace with science and technology.

The list is not a roster of scary technologies to avoid, and it isn’t meant to stoke fear of the future. It’s simply a heads-up for those of us who don’t have the time or the inclination to follow every breakthrough as we’re bombarded with news of all kinds each day. Our hope is that in reading the list, people will take time to think about the steps involved in integrating new technology into society and what role they might play in that process.

Our most recent list was, by far, the most immediately relevant, since most of the technologies on it have already reached the implementation stage. You can find more information at reillytop10.com. Here is a summary:

  • Social Credit Systems — In 2014, China’s State Council issued a report called “Planning Outline for the Construction of a Social Credit System,” announcing a plan to assign a personal scorecard to every person and business in China based on their level of trustworthiness, with participation mandatory for every Chinese citizen by the year 2020. Data is currently being collected on shopping habits, credit ratings, online behavior, friend connections, and anything else that might be used to give China’s 1.3 billion people a score between 350 and 950. People and businesses with higher scores will find it easier to do business (anything from signing contracts to checking into hotels faster); people with low scores, not so much. It’s been suggested that people with low ratings will face slower internet speeds; restricted access to restaurants, nightclubs, or golf courses; or even the loss of their right to travel freely abroad. Scores will influence a person’s rental applications, their ability to get insurance or a loan, and even social-security benefits. Citizens with low scores may not be hired by certain employers and may be barred from some jobs in civil service, journalism, and the legal field. Low-rated citizens will also be restricted from enrolling themselves or their children in expensive private schools. Reputation is your greatest currency in this new world. China has already begun publicly shaming debtors on government websites. Can and should we mandate obedience? Will it truly lead to a more trustworthy world?
  • Ransomware — Many of us don’t realize that 2017 saw millions of dollars paid to cybercriminals by companies and individuals. At its most basic, ransomware is a kind of malware that gets into your computer, operating system, or database and encrypts your files so you can no longer access them. The ransomers then demand money in return for the decryption key. To add to the indignity, some of them will also include a creepy clown photo or threats of physical violence. And you don’t have to be a computer genius to launch an attack — the Dark Web currently has about 45,000 ads for ransomware for sale, and a lot of it is designed to hit regular citizens. If you think you don’t need to be concerned, just know that attackers have already infiltrated the bank accounts and medical records of private citizens worldwide.
  • The Textalyzer — Made by Cellebrite, this technology would give police officers the ability to access a driver’s phone after a crash or traffic infraction to see whether they were using the device in the time leading up to the event. The Textalyzer plugs into the driver’s cell phone and retrieves a history of recent activity. Cellebrite claims the content of texts or searches will not be accessible to officers, just information about the time and length of usage on the phone. This data would, however, include exactly what apps you were using at exactly what time. Even just a swipe of the screen can be detected. New York will be the first state to test the waters — it hasn’t employed the technology yet, but a study was ordered by Gov. Andrew Cuomo earlier this year. Right now is the time to ask questions about the software: how it will be implemented, how police will avoid profiling and unequal treatment while using it, what will happen to people who “fail” the scan, and how officers can avoid seizing a phone that was being used by a passenger rather than the driver.
  • Helix — The new wave of digital genomics is all about optimizing your body and mind and preparing for your future — straight from your phone, of course. Helix teams up with app-makers who basically sell you an interpretation of a small part of your genome. But how do we know certain characteristics can be measured accurately? What of the traits that are determined by multiple genes, or are affected by your environment? Do you really want your phone to break the news that you have cancer? How much does this kind of genetic prognostication actually improve the human experience?
  • The Robot Priest — When the Protestant Church of Hesse and Nassau rolled out the BlessU-2 robot to mark the 500th anniversary of the Reformation, it was designed to be controversial and spark debate about the future of the church. Last year (and going forward), Pepper robots from Japan’s SoftBank Robotics began working as Buddhist monks — the plan is to have these robot monks deliver funeral rites to Japan’s rapidly growing elderly population at about one-fifth the cost of a traditional, human-led funeral. These robots raise some interesting questions: could a robot be a legitimate addition to religious services for communities and people who have no alternative? Could AI ever deliver effective pastoral care?
  • Emotion-Sensing Facial Recognition — Affectiva wants to see how annoyed you are while you shop, eat, play video games, and just generally do anything that could involve you spending money at some point. Its emotion recognition software can be incorporated into all sorts of things in order to provide “deep insight into unfiltered and unbiased consumer emotional responses to digital content.” Essentially, this means the software allows companies to see exactly how you respond and react when using their website, playing their game, or using their app, so they can make adjustments to improve your experience (and their business). But how do they collect this data? Simple. Webcams. Say hello. The tech is being sold to retail stores right now, so it’s time to ask: Do we really want machines reading our emotions? How will companies use this software to manipulate us? Who is going to monitor this technology? Privacy, anyone?
  • Google Clips — Google Clips is a blend of AI and facial recognition software designed to “capture beautiful, spontaneous images” of a person’s life. It can be set up anywhere (or attached to you) and will constantly scan your environment (in its 130-degree field of view) for the faces you interact with most (because they, presumably, belong to the people and pets you love). Then, it will record up to 16GB of motion photos when it “senses” potentially picturesque life moments. If you’re thinking that seems like constant surveillance, you’re not wrong. The obvious issue here is privacy. There will be those who don’t believe anything Google says about the camera not being a surveillance device (a fairer argument would be that it’s certainly not optimized to be one). But there’s also an interesting issue here about letting Google’s new AI algorithm Moment IQ decide which of your life’s moments truly deserve to be captured.
  • Sentencing Software — An algorithm may have the uncanny ability to predict which Netflix show you’ll enjoy this evening, but where do we draw the line with big data’s rule over our lives? It turns out we’d better decide quickly, because the technology is outpacing both our awareness and our laws. While we don’t know just how many jurisdictions are using it, we do know that sentencing software is already out there. Is it possible that our justice system would become fairer if we handed it over to AI? Eric Loomis didn’t think so when he was sentenced to six years in prison for attempting to flee an officer and operating a motor vehicle without the owner’s consent. At the sentencing hearing, the court mentioned that it arrived at the sentence with the help of a “COMPAS assessment,” which helped determine Loomis’ risk of recidivism. COMPAS is a program sold by Northpointe, Inc. and marketed as a means to guide courts in their sentencing. The problem, according to Loomis’ lawyers, is that the algorithm is designed by a private company that will not reveal how it works. The Wisconsin Supreme Court decided that Loomis had no right to examine Northpointe’s proprietary software, and so neither do you. Not only are we dealing with private companies playing a role in the judicial system (a role whose inner workings they do not have to reveal even to the police or the courts), but the decision whether or not to involve AI at all seems to have whizzed by us without notice. Algorithms don’t emerge out of nothing; they’re written by coders, people who have biases they may not even realize (as do judges, to be sure). It’s no surprise that a ProPublica study found that COMPAS routinely assigns higher risk scores to black defendants. How can we even begin to weigh the validity of this tool if we’re not allowed to see its decision-making process?
  • The Rise of Robot Friendship — Even grieving has taken on a new form in the 21st century. Take the memorial chatbot built by Eugenia Kuyda, co-founder of the startup Luka, after the death of her best friend Roman. Like many of us, Roman left behind a digital footprint made up of social media posts and chat messages. Kuyda collected those messages from as many friends as she could and used them to create a chatbot designed to replicate the sense of intimacy and security she missed so much. Since then, thousands of people have chatted with Roman’s bot. In addition, “With Me” is an app created by a South Korean programmer that allows you to make a 3-D avatar of your dead loved one using their old photos so that you can take some posthumous selfies (the catch is that the person needs to visit a 3-D scanning booth prior to their death in order to get the right images). The avatar can even read your facial expressions and ask you what’s wrong or comfort you. And, of course, there are other programmers out there trying to create platforms on which to build digital duplicates using people’s old messages and social media posts. Will this technology keep people from moving on with their lives, or could it become an integral part of therapy, especially for those who need closure? Could you make your own bot before you die — and is this a move towards downloading consciousness?
  • The Citizen App — “Citizen” was first released in October 2016 under a different name and different branding: “Vigilante,” which was promptly removed from Apple’s App Store for violating its review guidelines, with concerns centered on user safety. That first release was published with the tagline “Can injustice survive transparency?” alongside a dramatized video showing a violent assault being stopped by users of “Vigilante.” Police backlash against the release was strong, with the New York Police Department going so far as to say, “Crimes in progress should be handled by the NYPD and not a vigilante with a cellphone.” Developers behind the rebranded app, now called “Citizen,” advertise it as a way for innocent citizens to stay safe and aware in areas wracked by crime. This controversial app, at its core, acts as a digital police scanner, notifying people of ongoing crimes or major events in their area. It also allows live streaming video directly through the app, providing “complete transparency of your neighborhood around you.” Does this app allow for the monitoring of police brutality while preserving the ability of the law enforcement community to do their jobs? Should citizens have real-time access to crime locations? Does this app encourage vigilantism? Does it raise serious privacy concerns by giving the public access to the locations of crimes before the justice system has done its work?

At the end of the day, it’s important to say more than “there are ethical issues to consider” when assessing new technology. We need to name them, talk about them, tell our legislators how we feel about them, and raise future generations of people who can do the same. Stay tuned for the release of the Reilly Center’s 2019 list this December.

--

Reilly Center for Science, Technology, and Values

Exploring issues where science & technology intersect with society at the University of Notre Dame. Views expressed belong to the individual authors.