Changing Hearts, Minds, and Machines: “Safe AI” and the Critical Reader

Apr 1, 2019

By Ahmed Amer and Yael Kidron

Ahmed Amer is Associate Professor in the Department of Computer Engineering at Santa Clara University. Yael Kidron is the Director of Character Education at the Markkula Center for Applied Ethics at Santa Clara University. Opinions are their own.

In Plato’s writings, we can find a conversation (as retold by Socrates) between Thamus, King of Egypt, and Theuth, the inventor of written language. Theuth thought that writing could improve memory and wisdom. The king, however, believed that people would become lazy: instead of exercising their memory, they would rely on written reminders.

Today, we seem similarly concerned about the growing ability of artificial intelligence (AI) to write for us. When OpenAI announced that it would not do a full release of its new software, GPT-2, the company effectively modeled ethical thinking quite late in the game, after the product had already been designed and completed. The benefits of anticipating ethical pitfalls early in product design notwithstanding, OpenAI should be commended for raising the issue. GPT-2, which can generate compelling articles that continue the topic and style of a prompt, could improve computer-assisted learning and writing. However, it could also allow anyone to set up a fake news mill with less human labor than ever before. The potential scale of maliciously generated content is a risk that might outweigh the benefits.
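
To make concrete what “continuing the topic and style of a prompt” looks like in practice, here is a minimal Python sketch, assuming the open-source Hugging Face transformers library and the small GPT-2 model that OpenAI did release publicly; the prompt text and sampling settings are purely illustrative, and this is not OpenAI’s own tooling.

# Illustrative sketch only: continue a short prompt with the publicly
# released small GPT-2 model via the Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation that follows the prompt's topic and style.
output_ids = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The same few lines that make it easy to draft helpful text also make it easy to mass-produce plausible-sounding fabrications, which is precisely the dual-use tension OpenAI was weighing.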

The ethical issue here is about more than the balance between risks and benefits. It is about the line that connects product development ethics and consumer ethics. The ethical duty of product developers is to anticipate the vulnerabilities of their users and to protect them from foreseeable harm. The responsibility of consumers is to use the product as the developer intended and to avoid illegal or potentially dangerous practices. In this article, we discuss both ethical responsibilities.

Product Development Ethics

As is often the case, there is no smoke without fire. Technology does have the potential to impede people’s mental capacities. See, for example, the research evidence on the negative effect of global positioning systems (GPS) on spatial memory.[1] Such research suggests that regular use of this technology reduces people’s effort to encode spatial information and to build accurate cognitive maps of roads.

While one article will not change people’s minds, a stream of articles, especially ones that provoke emotional responses, can push aside reason and constructive discourse. The unsuspecting reader might accept fake news at face value or develop a general distrust of all news. Neither response contributes to a democratic society.

At the early design stages, software developers should ask multiple questions about possible unintended uses of the product: Can it encourage malicious or illegal behavior? Can it discriminate against certain sub-groups of users? What modifications might turn it into a tool for those who seek to sabotage efforts to promote the wellbeing of individuals, society, and the environment?

Consumer Ethics

Technological progress depends on continued research and development, customer feedback, and market research. Ethical consumers enable such development. We have benefitted from tools that expand our ability to perform a wider array of tasks with better quality. Tools like sequencers, composition software, and Apple’s GarageBand have lowered the barrier to entry for budding music makers, making it easier for relative novices to compose interesting musical pieces. Alysia (https://www.withalysia.com/about-us/), created by our colleague Maya Ackerman, is another example, “democratizing song-writing through AI” by allowing anyone to create original songs in minutes.

Through multi-lens cameras and advanced processing algorithms, more people can take better-quality photographs. Through photo-editing software, more mistakes can be repaired, more images can be refined, and completely original scenes can be framed and constructed from subjects and locales that need never have coincided in the world they portray. Again, such technologies, when enhanced with the latest algorithms, can be pushed even further, helping us create a fake reality.

To be fair, fake news is already a prevalent social concern, even without help from AI.[2] It was a concern well before the invention of the internet. Remembering what happened with older technologies can help us understand why the human component should be our first concern. Back in 1958, when the broadcaster Edward R. Murrow delivered his “wires and lights in a box” speech, he expressed his concern about the power of television to distract people from reality while catering to public taste. He noted:

We have currently a built-in allergy to unpleasant or disturbing information. And our mass media reflect this. But unless we get up off our fat surpluses and recognize that television in the main is being used to distract, delude, amuse and insulate us, then television and those who finance it, those who look at it and those who work at it, may see a totally different picture too late.

Deepfake videos will likely have the same effect on video recordings that photo editing had on photographs: an initial period of fear and controversy, an arms race between fakers and fake-detectors, followed by a healthy general skepticism marred by localized pockets of willful acceptance of the unacceptable.

Aside from weakening people’s faith in the incorruptibility of the medium (faith that is arguably misplaced, whether the corruption lies within reach of a nation state with vast resources or of a single private individual), the problem is always a human one. All such AI does is ease the creation of a constant deluge of manipulated or fabricated content.

The real fear of technologies that lower the threshold for creative output, and of how readily they can be adopted by unscrupulous content creators, does not lie only in the technology itself. It lies in the unscrupulous content creator aiming to mislead and manipulate, and in the willfully ignorant: those who choose to accept whatever content they read, see, and hear as authentic for no other reason than that it speaks to what they want to read, see, and hear. You do not need the latest technologies to bend the willfully ignorant to the unscrupulous intentions of someone who tells them what they want to hear.

The journalist Malcolm Gladwell once noted that he felt it was his responsibility as a person to revisit his positions on different topics. “And if you don’t contradict yourself on a regular basis,” he concluded, “then you’re not thinking.”

There are increasing efforts to prepare the public, especially the younger generations, to become more critical consumers of information. Some of these efforts support the work of gatekeepers, for example, through systematic fact-checking of the news by reliable organizations, including those affiliated with the International Fact-Checking Network at Poynter. Other efforts aim to increase media literacy, including the News Literacy Project and curricula such as those from Stony Brook University’s Center for News Literacy.

A Proposed Collaboration

In addition to stronger gate-keeping activities, such as authentication of content creators and filtering of malicious content, high-tech companies are in a position to foster consumer ethics. This is not an easy task. Simply raising consumers’ awareness of their vulnerabilities would likely have minimal effect. Decades of research on label and package warnings, often mandated by laws and regulations, have taught us that cautioning users against hazardous or counterfeit materials has little or no effect on consumer behavior.[3] However, a more interactive approach, one that trains consumers to identify potential harms, may be more promising. A series of studies by Sander van der Linden and his colleagues demonstrates that a multi-step process of training readers to be critical of what they read can act as a “vaccine” that inoculates people against misinformation.[4]

Currently suggested solutions ask whether we should suppress such dangerous algorithms lest they get out and, if they do get out, whether we should regulate their use. Such strategies are ultimately doomed to fail, as algorithms are not physical entities whose manufacture or trade can be banned. Even if it were possible to manage them in such a way, all it would take is one safe haven to render such bans pointless.

To blame the technology for the ills it enables may be reasonable, but to focus solely on the technology, allowing it to serve as a scapegoat for our own complicity, is to distract from where the worst rot lies. There is no point in banishing our “scariest” code to an “island of naughty algorithms” if we’re content to have the rest of the world full of ignorant humans at the mercy of immoral ones. It is not how we program machines that should cause fear; it is how we allow ourselves and others to be programmed. We should hold technologists accountable, but no more than we should hold ourselves, and each other, to account for our own parts.

Aristotle advised that being virtuous involves using deliberation and apprehension to reach good judgment. This advice holds true more than ever for readers of online information, and it remains crucial whether that information comes from human or mechanical hands.

[1] Wedell, D. H., & Hutcheson, A. T. (2014). Spatial memory: From theory to application. In T. J. Perfect and D. S. Lindsay (Eds.), Handbook of applied memory (pp. 76–91). Thousand Oaks, CA: Sage.

[2] Lazer, D., Baum, M., Benkler, Y., Berinsky, A., Greenhill, K., Menczer, F., Metzger, M., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S., Sunstein, C., Thorson, E., Watts, D., & Zittrain, J. (2018). The science of fake news. Science, 359, 1094–1096.

[3] Spink, J., Singh, S., & Singh, J. (2011). Review of warning labels and their effect on consumer behavior with applicable insights to future anti-counterfeit systems research. Packaging Technology and Science, 24, 469–484.

[4] See, for example, van der Linden, S., Maibach, E., Cook, J., Leiserowitz, A., & Lewandowsky, S. (2017). Inoculating against misinformation. Science, 358(6367), 1141–1142.
