Chapter 6


Virtual Assistants: Free Speech, Privacy, and Society

By Mikey Fischer and Shreyas Parab

Carrying over from the last chapter, there are two ideas that are important to understand when we talk about AI and government decision making. The first is non-arbitrariness: when the government acts, it owes you a reason why it acted the way it did.

The second is explanation. When a decision is made, the courts need to be able to explain how they came to it. It can be acceptable to say that an algorithm produced the outcome; that is certainly an explanation, although not an ideal one. Beyond simply giving an explanation, the explanation should be accessible to people who are not lawyers or experts: an informed person should be able to follow the chain of reasoning. Although this is not always achieved, it is the goal when detailing a court decision.

As artificial intelligence and algorithms are incorporated further into courts' decision making, it becomes even more important that humans are able to trace how a neural network arrived at its conclusion.

Getting a computer to produce an explanation of how a decision was made is an important area of research in computer science. Explainable AI would let us trust that a system is making the right decision for the right reasons. People want assurance that the decisions an AI makes are based on the right parameters and that it is not using some undesirable trick to determine an outcome. For example, when a computer vision system is classifying a cow versus a dolphin, is it only looking at whether the background is green grass or blue water? If a dolphin were placed on a green background, would it incorrectly be classified as a cow? When decisions are made with AI, we need clear accountability that the decision-making process is trustworthy and transparent. As explained later in the chapter, the European Union's GDPR provides a right to explanation so that we can verify that algorithms are making decisions correctly.
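To make the cow-versus-dolphin worry concrete, here is a minimal toy sketch in Python. The synthetic "images" (just two numeric features) and every name in it are our own illustration, not any deployed system: a classifier trained where the background reliably co-occurs with the label looks excellent, then collapses the moment the backgrounds are swapped.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_animals(n, swap_background=False):
    """Toy 'images' as two features: a weak animal-shape cue and a strong
    background cue. Cows (label 1) usually stand on green grass, dolphins
    (label 0) swim in blue water, unless we deliberately swap backgrounds."""
    label = rng.integers(0, 2, n)
    shape = label + rng.normal(0, 2.0, n)  # noisy, genuinely animal-related
    background = (1 - label if swap_background else label) + rng.normal(0, 0.1, n)
    return np.column_stack([shape, background]), label

X_train, y_train = make_animals(1000)
clf = LogisticRegression().fit(X_train, y_train)

# In-distribution test: the classifier looks excellent.
X_test, y_test = make_animals(1000)
print("normal backgrounds:", clf.score(X_test, y_test))

# Counterfactual test: dolphins on grass, cows on water. Accuracy collapses,
# revealing that the model was mostly reading the background, not the animal.
X_swap, y_swap = make_animals(1000, swap_background=True)
print("swapped backgrounds:", clf.score(X_swap, y_swap))
```

This kind of counterfactual test, probing the model with inputs where the suspected shortcut is reversed, is one simple way an auditor could check whether a decision really rests on the right parameters.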

One of the downsides of interpretability, though, is that it can come at the cost of accuracy: requiring a system to offer an explanation for its decisions can constrain it to simpler models. Of course, having an accurate model is good, but being able to get an explanation out of a system typically leads to better results over the long term.
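As a rough illustration of that trade-off, here is a small sketch using scikit-learn on a standard dataset, purely as a stand-in for any consequential decision: a shallow decision tree can print its entire decision process, while a boosted ensemble of a hundred small trees is typically somewhat more accurate but offers no comparably readable rationale.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

# Interpretable model: a depth-3 tree whose full rule set fits on one page.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# Black-box model: one hundred small trees stacked together, no single
# human-readable rationale for any individual prediction.
boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("interpretable tree accuracy:", tree.score(X_te, y_te))
print("black-box ensemble accuracy:", boost.score(X_te, y_te))

# The tree can literally print its own explanation; the ensemble cannot.
print(export_text(tree, feature_names=list(data.feature_names)))
```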

There is also a larger, darker issue that comes with explainability. To illustrate the point, consider an example. Say a human is trying to teach a mouse calculus. After years of trying, and despite the human's best efforts, the mouse cannot learn calculus. Simply put, there are limits to what a mouse is able to understand; the mouse's brain has a certain carrying capacity.

Now return to the example of an AI algorithm trying to explain its reasoning to a human. An artificial intelligence might have millions of parameters and weights that go into making a decision. Whether it is that our ears and eyes are not equipped to take in this information, or that our brains are not equipped to process it, the end result is the same: the level of reasoning the algorithm performs may not be something humans can understand.

Case Study: Eliza

One of the earliest natural language processing programs in history was created at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum. ELIZA, released in the mid-1960s, received attention across the country as an early example of a computer simulating "humanness." ELIZA simulated a psychotherapist and would repeat user inputs back in a reframed way, mimicking the techniques of Carl Rogers (the namesake of Rogerian psychotherapy). Built from simple parroting of the user's sentences plus a handful of targeted rules responding to keywords such as emotion words, it was incredibly rudimentary and could be easily fooled when stress-tested. At the time, however, ELIZA fooled many ordinary users, who could not tell that the messages were automated and that they were conversing with a computer. It was not even comprehensible to the everyday American that a computer could maintain a conversation with a human. ELIZA is considered one of the first chatbots, and by some accounts it contended well in early Turing-test-style evaluations. Of course, the level of sophistication needed to pass the Turing test today is much higher (modern users are far more cognizant of the technology's capabilities), but fifty years ago this burgeoning field was laying the foundation for what we now know as virtual assistants and natural language processing.
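To show just how little machinery ELIZA-style "humanness" requires, here is a minimal sketch in the same spirit: a few keyword rules plus pronoun reflection. The specific patterns and responses are our own illustration, not Weizenbaum's original script.

```python
import random
import re

# Pronoun reflections so the echo sounds natural ("my exams" -> "your exams").
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my", "yours": "mine", "are": "am",
}

# ELIZA-style keyword rules: a regex plus Rogerian response templates.
# "{0}" is filled with the reflected text the pattern captured.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please, go on.", "How does that make you feel?"]),
]

def reflect(text):
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.split())

def respond(sentence):
    """Parrot the user's sentence back via the first matching rule."""
    cleaned = sentence.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)

print(respond("I feel anxious about my exams"))
# e.g. -> "Why do you feel anxious about your exams?"
```

A handful of rules like these, with no understanding whatsoever, was enough to convince many 1960s users they were talking to something human.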

In fact, even fifty years ago, researchers looked at the technology in ELIZA and forecast using it to replace doctors and psychologists. They were absolutely convinced that this technology would fundamentally change how humans interact with machines and predicted the rapid demise of professions like therapy. Although they were correct in direction, it would take several more decades of evolution to start realistically approaching that future.

Why Do Language and Speech in AI Matter in Legal Contexts?

In our lifetimes, we have seen perhaps the most visceral and explicit reaction to questions about online privacy take shape in regulation like the GDPR in the European Union and state legislation like the California Consumer Privacy Act. These regulations, although they sometimes regulate too much or too little (as all regulation inevitably does), are among the only strong protections available to the modern technology user. Legal protections are incredibly limited, which often surprises people, but often for good reason, as we will delve into a bit later.

When it comes to language in isolation, the legal system has become very efficient at addressing those questions.

Virtual Assistants and Free Speech

People in the United States have freedom of speech as defined in the First Amendment to the Constitution. At a high level, the First Amendment protects the right of people to express an opinion, even an unpopular or unsavory one, without the government being able to censor them. "Speech" is broadly understood to mean communication and expression, and it can take many different forms.

Expression can come in many forms. What someone writes in books or leaflets is a form of expression. What someone says at a rally, performs in a theater, or reads in a poem is a form of expression. Expression also encompasses what people choose to wear: for example, wearing armbands at school to protest the Vietnam War, or a t-shirt carrying political speech. Expression can come through the actions we take, such as burning a flag. When someone donates money to a political campaign, that money is a form of expression. The First Amendment does not specify what it means by speech; it does not even mention which types of speech should and should not be protected. The interpretation of "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances" is left to the courts.

There are limits on the types of speech that are protected. The courts have concluded that some limits on freedom of speech are needed to run a society. For example, allowing someone to go around falsely yelling that there is a fire in a crowded theater does little to push society forward. There are in fact many exceptions to freedom of speech in the United States. People are not allowed to incite violence: advocating the use of force is unprotected when it is directed to producing "imminent lawless action" and is likely to "incite or produce such action."

Under certain circumstances, falsely stating a fact is also unprotected, as in libel or slander cases where the speaker has a "sufficiently culpable mental state" and is intentionally trying to cause harm to someone else. Free speech can likewise be limited when the expression is thought to have very limited value to society, such as obscenity. Courts have ruled that obscenity that lacks "serious literary, artistic, political, or scientific value" can be limited, especially if it appeals to a "shameful or morbid interest in sex." The basis of this doctrine is the justices' belief that such material has a "corrupting and debasing impact" on society. Similarly, child pornography is unprotected speech.

Child pornography goes beyond the limits of obscenity: it is irrelevant whether it meets the criteria for obscenity, because it is always illegal. Fighting words and true threats, speech that "tends to incite an immediate breach of the peace" and is a "direct personal insult," are generally not protected. Threats against the president are illegal; this differs from fighting words and true threats because the person does not need the ability to carry out the threat, as merely stating it is illegal. Intellectual property can also limit speech: speech owned by others through copyright or trademark is not protected. Similarly, false advertising is not protected free speech.

All of these benefits of, and exceptions to, free speech apply only to humans. Artificially intelligent systems do not have the same rights. However, it is not too far-fetched to think that in the future AI will have the same wants and needs as humans and will come to desire free speech as well. When a virtual assistant begins to collect information and produce utterances, will its speech then be regulated?

What is a way to dissect the question of whether an AI system should have freedom of speech? At some point, someone will argue that an AI bot speaks on their behalf. If we think about it, free speech is as much about the rights of the listener as about the rights of the speaker; conceptually, it is hard to understand free speech without taking the listener's rights into account.

If we look at this from a listener's perspective, laws have recently been passed that try to protect those rights. California's SB 1001 (the Bolstering Online Transparency, or B.O.T., Act) says that a virtual assistant or bot must disclose itself when interacting with a human. There are limits on this, though: SB 1001 applies only when the intent of the bot is to incentivize a purchase of a good or service in a commercial transaction or to influence a vote in an election. The bill does not make bots illegal; it only requires bots to identify themselves. Some groups argued against the bill, saying it would limit real speech and create an unreasonable reporting requirement for people and companies.

There is a balance to be struck here, because many bots provide useful information to people. Additionally, is using a bot to convey one's ideas on Twitter any different from using a megaphone to project one's speech, or advertising to project one's desire to sell a product? When a human endorses a product through a TV commercial, is that any different from a bot telling people about a product on Twitter?

Privacy in the World of AI Assistants

According to recent market research, roughly 23% of Americans own some kind of voice-activated assistant in their home, like Amazon Alexa or Google Home. That figure counts only standalone voice assistant devices; keeping in mind the more than 41.4 million monthly active users of Apple's Siri in the United States, it is fair to say that virtual assistants have become ubiquitous in American homes and lives. In a post-Snowden age, when Americans are fearful and cognizant of potential government monitoring of communication, the idea of keeping a listening device in your home, and on your person via your cell phone, at all times seems counterintuitive.

But like any decision, consumers face a trade-off they must weigh based on their individual needs, desires, and concerns. There are of course many benefits to virtual assistants, which increase the convenience, accessibility, and speed of getting information to users. Like a real-life assistant who can remember appointments, gather information, and handle the everyday tasks that eat up an individual's time, virtual assistants like Alexa and Siri can significantly augment a human's workflow. In fact, the usage and functionality of virtual assistants have been climbing these past few years; on command, they can tell a child a bedtime story, DJ a party, and even pay off monthly credit card or mortgage bills.

This additional convenience and functionality comes at a cost, however, and many questions have been raised about the privacy of virtual assistants. Although these assistants are only activated by "wake words" or phrases, being at the ready means they are constantly attentive, with the microphone on. Devices ship with a high degree of encryption and security to deter hijacking, but there have been reports of DIY enthusiasts and white-hat hackers finding clever workarounds. And even setting aside the possibility of hacking a virtual assistant to record conversations, the privacy of the questions and tasks that speakers entrust to their devices raises equally pressing questions.
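For intuition about what "constantly attentive" means, here is a minimal sketch of the typical wake-word pattern, with words standing in for audio frames; the phrase, buffer size, and upload step are our own illustrative assumptions, not any vendor's actual pipeline. A small on-device matcher runs continuously over a short rolling buffer, and audio only leaves the device once the wake phrase fires.

```python
from collections import deque

WAKE_PHRASE = ("hey", "assistant")  # hypothetical wake phrase
rolling = deque(maxlen=16)          # short on-device buffer, always overwritten

def send_to_cloud(frames):
    """Illustrative stand-in: only at this point would audio leave the device."""
    print("uploaded for recognition:", " ".join(frames))

# Simulated microphone stream; each word stands in for one audio frame.
stream = "so anyway hey assistant what is the weather".split()

awake, query = False, []
for frame in stream:
    rolling.append(frame)  # everything is heard, but only held briefly
    if not awake and tuple(rolling)[-2:] == WAKE_PHRASE:
        awake = True       # local matcher fired; start capturing the query
    elif awake:
        query.append(frame)

send_to_cloud(query)  # -> uploaded for recognition: what is the weather
```

The point of the pattern is that "always listening" need not mean "always transmitting"; the privacy questions in this section turn on what happens to the audio captured after the wake word, and on how faithfully real devices follow this design.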

Although the companies that manufacture these virtual assistants vehemently deny that they raise significant privacy concerns, several transgressions show that these companies do in fact keep track of requests and often hire contractors to analyze recordings of almost 1% of the queries asked. A former Apple contractor tasked with listening to recorded Siri conversations revealed that he had heard intimate details of users' personal lives, including "doctor's appointments, addresses and even possible drug deals."

Although many grassroots activists have protested these practices by large companies, the perceived benefits of the technology mean that such concerns have not deterred the high proliferation of virtual assistants across the country.

The public has also been exposed to the legal system's growing interest in using data from virtual assistants in criminal prosecutions and investigations. Amazon turned over record amounts of customer data to the US government in 2017, when it received 1,618 subpoenas, 229 search warrants, and 89 court orders. According to its biannual transparency report, Amazon fully complied with 42% of the subpoenas, 44% of the search warrants, and 52% of the court orders.

Increasingly, prosecutors and law enforcement agencies are pushing private companies to release customer data relevant to investigations, to help them track, identify, and incriminate criminal activity. The contentious relationship between these companies, privacy, and law enforcement will only intensify as more requests are made and the companies extend their reach further into consumers' private lives and data.

How Do We Regulate Virtual Assistants?

Of course, given the potentially massive intrusions on the privacy of the American public, there have been many ongoing regulatory conversations around virtual assistants. As it currently stands, much of the "low-hanging fruit" in regulating virtual assistants has to do with disclosures related to advertising products and services. When a virtual assistant is tasked with performing an action or gathering information, it might suggest or recommend a specific course of action. "Find a local plumber" or "Find me nearby stores" may seem like ordinary queries to ask a virtual assistant, but when an advertisement is being presented, the Federal Trade Commission (FTC) requires the presenting platform to inform users in a "noticeable and understandable fashion" when the results stem from a financial relationship between the platform and the advertised affiliate.

Although no major actions have been taken against virtual assistants like Google's or Amazon's, and the FTC says it has not received complaints about ads delivered through virtual assistants, the agency did take action against a small company that provided information to prospective college students without disclosing which results were paid and which were organic. Native advertising in virtual assistants continues to be problematic, and questions around its compliance continue to be raised, but many technology companies have developed a simple workaround. In the case of the Google Assistant, for example, Google contends that the company "isn't paid for these results." While a business may not have paid for a specific recommendation, the results are pulled only from a database explicitly tied to the Google Ads products Google offers; if a business does not partake in other Google Ads products, its listing will not show up in response to the user's query. Google also leverages third parties with more domain expertise in the query (for example, sites like HomeAdvisor or Porch) to respond to user questions, which means these third-party search partners can continue to profit from the virtual assistant without the explicit knowledge of the consumer.

In another vein, the EU's General Data Protection Regulation (GDPR) has caused many companies to stop manual review of audio collected by virtual assistants. As mentioned before, many companies maintain the ability to manually review audio clips obtained by virtual assistants in order to improve the product. After a contractor released more than 1,000 recordings from Google's virtual assistants to the media, a German data protection authority expressed its intention to use the powers given to it by GDPR to order the data processing to stop. Google responded quickly, stating that it had halted the practice across the whole of Europe. This was perhaps one of the first real uses of GDPR not to fine a company for non-compliance, but to stop a practice from continuing at all. The public show of power that regulators gained through GDPR quickly sent reverberations through the virtual assistant market, with Apple suspending its similar program and Amazon following suit. The GDPR gives no specific guidelines for audio data, treating it just as it would any other format, but Article 66, which gives regulators the ability to shut down technologies when there is "an urgent need to act in order to protect the rights and freedoms of data subjects," saw its first wide use against virtual assistants and will perhaps continue to shape the application of data privacy law.

How do virtual assistants shape society?

There is a larger issue at play here: how will virtual assistants shape society? Virtual assistants have the ability to gain our trust and to shape us. In the near future, they will be smart enough to start persuading us: to buy certain products, to trust them, perhaps to take dangerous actions. Even today's rudimentary virtual assistants have the potential to change the labor market and increase human welfare, and as language shapes economic and social relations, we can expect this trend to grow. If we know that a virtual assistant is constantly monitoring us, we might not speak out about crimes, knowing our words could constantly be used against us. As bots become more prevalent, we can expect them even to change how humans develop social relationships with each other.

Conclusion: AI Personhood

As virtual assistants become more advanced and mimic the functions and behavior of a real-life human, many legal scholars are raising questions about whether an artificially intelligent entity could legally be considered a person. With that designation, the entity would suddenly be entitled to rights under international and national law, which carries with it a cascade of consequences: shutting down the entity could be considered murder, for example, and it would have the right to communicate freely. In recent years, Saudi Arabia granted citizenship to a robot named Sophia; under that designation, Sophia would be granted the same rights as any citizen of Saudi Arabia. Ironically, scholars argued that Sophia, who identifies as female, actually possessed more rights than many "real-life" women citizens of Saudi Arabia, and that the action reduced the dignity of humans whose rights are oppressed throughout the country.

Although many acknowledged that the move by Saudi Arabia was nothing more than a publicity stunt, the implications of the action raise pressing questions around AI personhood.

At what point will artificially intelligent entities be considered people? As it stands, corporations in the United States have been given rights of free speech and religion, and if through a clever legal loophole an AI could become a corporation, it would essentially gain the ability to act as a company or person. It could be party to lawsuits, vote, run for office, and more! Of course this future sounds outlandish. President Siri? And yet virtual assistants have developed incredibly personal relationships with users (dramatized in the 2013 science fiction film Her) that could approach the threshold for being considered a person. This future is not that far off, as artificial intelligence increases in its capabilities, especially around natural language processing, and plays an ever more pivotal role in the lives of humans.
