Is owning a digital assistant really worth it?

Uddhav Bhagat
SI 410: Ethics and Information Technology
10 min read · Feb 22, 2022

Apple’s Siri, Microsoft’s Cortana, and Amazon’s Alexa are activated by voice commands, and they can schedule appointments, make reservations, set reminders, play music, check the weather, send text messages, and much more, even unlocking the front door for their users (Hoy, 2018). Given how helpful they can be, people happily place them in homes, offices, and schools. But how do users know how these devices work, and when they are and are not listening? The fact is that these devices are always listening to us, and they carry serious privacy and security concerns that users often dismiss in favor of the benefits (Wilson, 2021).

Image from CPO Magazine

In “Do Artifacts Have Politics?”, Langdon Winner recounts “…the stern advice commonly given to those who flirt with the notion that technical artifacts have political qualities: What matters is not the technology itself, but the social or economic system in which it is embedded.” (Winner, 1980) This made me think: why exactly are these big tech companies developing digital assistants? Are they prioritising making routine tasks easier for users, or are they prioritising collecting user data and selling it to third parties for profit? In that light, is owning a digital assistant worth it? Are the benefits it provides greater than the infringement on your privacy and security? Without companies being more transparent about their intentions behind these devices, and without users reading the fine print, we are going to keep having these basic privacy problems. In my opinion, the ethical issues relating to privacy and security for individuals outweigh the convenience digital assistants provide. The privacy and security risks include collecting sensitive information from users to train models and sell to third parties, and recording conversations without the user’s knowledge, among others.

The benefits of using a digital assistant

People are often drawn to digital assistants for their ease of use and convenience, and they offer a wide variety of functions. Some of the common uses are as follows (Hoy, 2018):

  • Find information online such as weather details, latest news, sports scores, etc
  • Make reminders and set alarms
  • Send messages and make calls
  • Help with playing music through various streaming services such as Spotify, Apple Music, etc
  • Control smart devices such as smart lights, locks, etc
  • Provide translation of languages on the go
  • Make reservations for restaurants, taxi services like Uber and Lyft
  • Read notifications and allow you to respond to them if needed

Risk 1: Dangers of easy-to-say “wake words” and silently hearing you talk

There are times when digital assistants are activated by words that sound similar to their wake words, or by their wake words used in conversation with somebody else (Hernández Acosta, 2022). For instance, one of my friends owns a Google Home, whose wake words are “Hey, Google” and “Ok, Google”. My friends and I were once talking about Tom Brady’s total passing yards in his career. Since we were unsure about the exact number, I told my friend, “Ok, Google it then”. To my surprise, the Google Home’s light lit up and it waited for a command to carry out. Examples like these are common in households that own a digital voice assistant, as documented in the research conducted by Luca Hernández Acosta and Delphine Reinhardt in “A survey on privacy issues and solutions for Voice-controlled Digital Assistants”.
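Wake-word detection is essentially a fuzzy matching problem: the device fires on anything acoustically close to its trigger phrase, which is why “Ok, Google it then” woke the device. The toy sketch below illustrates the idea using text similarity instead of acoustics; the phrase list and the 0.8 threshold are illustrative assumptions, not how Google Home actually works.

```python
from difflib import SequenceMatcher

# Illustrative wake phrases; a real device matches acoustic features, not text.
WAKE_PHRASES = ["ok google", "hey google"]

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a, b).ratio()

def is_triggered(transcript: str, threshold: float = 0.8) -> bool:
    """Slide a two-word window over the transcript and compare each
    window to the wake phrases; any near-match 'wakes' the device."""
    words = transcript.lower().split()
    for i in range(len(words) - 1):
        window = " ".join(words[i : i + 2])
        if any(similarity(window, p) >= threshold for p in WAKE_PHRASES):
            return True
    return False

print(is_triggered("ok google it then"))             # True: wake phrase embedded in a sentence
print(is_triggered("tom brady total passing yards")) # False: unrelated speech
```

A lenient threshold keeps the device responsive but multiplies false activations; even a strict acoustic matcher will fire on phonetically close phrases, so some accidental recording is inherent to the design.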

In my case, we were not talking about anything sensitive or private. However, users may sometimes be discussing private information that touches on their financial, emotional, or health issues, and the digital assistant may be unintentionally woken mid-conversation, record them, and send the recording to the company’s servers, where it could be used for purposes the user may not be aware of. Examples include using the data to train machine learning models and selling it to third parties for monetary gain (Hernández Acosta, 2022).

Another way these devices can collect data is by recording conversations taking place in the background behind the legitimate user, which may include personal details such as business dealings and private conversations (Pal, 2020).

Governing this issue is tricky: you do indeed give these personal assistants permission to monitor and record your speech, but that permission is meant to apply only when you choose to launch them. It is hard for us, as users, to figure out if and when they record our conversations and what they would do with such data. Personally, I would feel very uneasy not knowing how and when my data is being used. This also makes people uncomfortable sharing sensitive details in the presence of digital assistants in their own house. There have been instances like the one reported in The Sun, which claims that Alexa may have been listening to and sharing details about your intimate moments. This naturally makes people worry about their privacy and security. Thus, in my opinion, having a digital assistant just for convenience and automating tasks is not worth it if it silently listens to your conversations and sends the information to the companies making it.

Risk 2: The employees of the company are listening to your recordings

Employees of companies like Google and Amazon have come forward and expressed concerns about the unethical use of the data collected by these devices. A former Amazon employee told the Guardian, “Having worked at Amazon, and having seen how they used people’s data, I knew I couldn’t trust them,” after Alexa seemed to be repeating previous requests that had already been completed. Furthermore, USA Today and other reputable news sources have published reports claiming that Google employees routinely listen to the voice recordings picked up by Google Home. These recordings include conversations that should not have been recorded in the first place and might contain sensitive information.

This is dangerous because it is a breach of data privacy. Additionally, how often does one read through the terms and conditions of a device? Barely ever, right? Companies use this to their advantage, and users like me who do not go through the terms and conditions thoroughly often do not realize the extent to which their personal information can be collected and utilized (Pal, 2020).

What if the employees use the personal information they just heard in these recordings to their advantage? Given how valuable personal data is in today’s world, I don’t think we can trust the intentions of the people listening to these recordings.

Risk 3: Using your data for training models without your consent

Amazon, Apple, and Google lead the development of these digital assistants, and they try to make their interfaces more life-like. This is only possible through collecting more real-time data that can be used to train the existing models running on the assistants. However, much of this collected information is potentially identifiable and possibly sensitive. Even though users can go to the cloud and delete these recordings, doing so risks degrading their customer experience (Bolton, 2021). Sociologist Antonio A. Casilli states that it raises a significant privacy concern when digital assistant companies use the recordings of their customers to further train their assistants and improve the accuracy of their results. He argues that providing “free data for the training and improvement of the virtual assistant, often without knowing it, is ethically disturbing.”

Shoshana Zuboff, in her interview with the New York Times, describes personal data as the primary source of economic power for big tech companies, which can monetize the details of our digital lives (Jackson, 2021). In her book, The Age of Surveillance Capitalism, she defines surveillance capitalism as a process that “unilaterally claims human experience as free raw material for translation into behavioral data.” (Zuboff, 2019) In other words, companies feed the data collected from their users into machine-learning algorithms to produce prediction models that can anticipate what we will do at any given moment.

Risk 4: Selling your data to third parties

In my experience, people agree to provide their data directly to digital assistants and the apps associated with them because they are not acutely aware of how the data will be used or who else will receive it.

How often have you seen targeted advertisements for things you searched for in the past or spoke about just recently? How is this possible? It’s creepy, and it probably makes you wonder to what extent your gadgets are spying on you. Do digital assistants play a role in this? Yes: data collected by these assistants can be used for various purposes without the user’s knowledge. One of the main concerns raised is that the sensitive data available to digital assistants can be accessed by the manufacturers or by other third parties to provide certain functionalities (Hernández Acosta, 2022). You may think the distribution of this data is primarily for advertising, but in a world where data is king and tech giants have increasingly strong ties with governments, you never know when this information could be used against you. Zuboff, asked what these companies can do with personal data, says in her New York Times interview that

“We’re not just talking about targeted ads. We’re talking about subliminal cues, psychological micro targeting, real-time rewards and punishments, algorithmic recommendation tools, and engineered social comparison dynamics.” (Jackson, 2021)

In this light, is our valuable data really secure and private? How can we trust the company manufacturing these devices if it sells our data to marketing agencies and other companies? We have no idea how these marketing agencies will use our data. What if they have malicious intentions?

Risk 5: Audio data can be used to infer information

Some might think: “So what? It’s just a recording of my voice. As long as it does not contain any sensitive or personal information, it doesn’t matter, right?” However, audio data is surprisingly valuable for inferring information. Characteristics of the user such as body measurements, age, gender, personality traits, and physical health can be inferred from their audio alone (Kröger, 2020). “Additional sounds produced by the end-users (e.g., coughing and laughing) and background noises (e.g., pets or vehicles) provide further information.” (Kröger, 2020) Furthermore, data on users’ searches, queries, commands, and locations is used to build an accurate profile of their habits, whereabouts, and preferences. This places even more responsibility on users like us to be extremely careful about owning a digital assistant and to monitor what the devices record.
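To make the inference risk concrete, consider pitch: the fundamental frequency of a voice correlates with traits like age and gender before a single word is understood. The sketch below estimates pitch by finding the autocorrelation peak of a synthetic 120 Hz tone standing in for a voice; the 165 Hz male/female cutoff is a crude illustrative assumption, not a claim about any vendor’s actual models.

```python
import numpy as np

SAMPLE_RATE = 16_000  # samples per second

def estimate_f0(signal: np.ndarray, sr: int = SAMPLE_RATE) -> float:
    """Estimate fundamental frequency (Hz) from the autocorrelation peak."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2 :]        # keep non-negative lags only
    lo, hi = sr // 400, sr // 50         # limit search to the 50-400 Hz speech range
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

# Synthetic stand-in for a voice: a pure 120 Hz tone, 0.25 s long.
t = np.arange(SAMPLE_RATE // 4) / SAMPLE_RATE
voice = np.sin(2 * np.pi * 120 * t)

f0 = estimate_f0(voice)
# ~165 Hz is a rough, illustrative boundary between typical adult pitch ranges.
label = "male-range pitch" if f0 < 165 else "female-range pitch"
print(f"estimated f0 is about {f0:.0f} Hz ({label})")
```

The same autocorrelation trick applied to real recordings, combined with features such as formants and speaking rate, is roughly how profiling models recover traits the speaker never stated out loud.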

Do you really want to trust these big tech companies with all your data? They can create an entire persona for the user. Imagine if Amazon and Google knew what you actually look like, along with your tastes, preferences, physical health, and personality traits, just by hearing you talk and listening to your conversations through their devices.

Risk 6: Your digital assistant can be hacked

This is not only a question of trusting these companies with our data; all of this data can also be compromised if a hacker gains access to private servers or to the digital assistants themselves (Lemos, 2021). Here, a “hacker” could be anyone: a stranger or an acquaintance visiting a house where a digital assistant is plugged in, or a professional using ultrasonic waves to hack into the device. Anyone entering a house where a digital assistant is connected could voice commands in the absence of the owner. These commands can be malicious, and if the person tries hard enough, they can get access to credit card information, send malicious messages, and place orders on the internet using the owner’s credit card, among other things (Lemos, 2021). All of this just by talking to the digital assistant!

This technological innovation is being developed for convenience, but it comes at the cost of personal privacy and security. Zuboff’s argument that the rise of surveillance tech has resulted in a “wholesale destruction of privacy” is evident in the privacy and security concerns of digital assistants (Zuboff, 2019). Digital assistants can be hacked, credit card information can be leaked, your personal information can be shared with others, and company employees can hear your private conversations. Additionally, the audio data collected can create an entire persona for the user; isn’t that eerie to think about? Thus, in my opinion, the ethical issues relating to privacy and security for individuals outweigh the convenience digital assistants provide, which makes owning one not worth it.

References:

Hoy, M. B. (2018). Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants. Medical Reference Services Quarterly, 37(1), 81–88. https://doi.org/10.1080/02763869.2018.1404391

Wilson, R., & Iftimie, I. (2021). Virtual assistants and privacy: An anticipatory ethical analysis. 2021 IEEE International Symposium on Technology and Society (ISTAS). https://doi.org/10.1109/istas52410.2021.9629164

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136. http://www.jstor.org/stable/20024652

Hernández Acosta, L., & Reinhardt, D. (2022). A survey on privacy issues and solutions for Voice-controlled Digital Assistants. Pervasive and Mobile Computing, 80, 101523. https://doi.org/10.1016/j.pmcj.2021.101523

Pal, D., Arpnikanondt, C., Razzaque, M. A., & Funilkul, S. (2020). To Trust or Not-Trust: Privacy Issues With Voice Assistants. IT Professional, 22(5), 46–53. https://doi.org/10.1109/mitp.2019.2958914

Flanders News. (2019, July 10). Google employees are eavesdropping, even in your living room, VRT NWS has discovered. VRT NWS. https://www.vrt.be/vrtnws/en/2019/07/10/google-employees-are-eavesdropping-even-in-flemish-living-rooms/

Bolton, T., Dargahi, T., Belguith, S., Al-Rakhami, M. S., & Sodhro, A. H. (2021). On the Security and Privacy Challenges of Virtual Assistants. Sensors, 21(7), 2312. https://doi.org/10.3390/s21072312

Jackson, L. (2021, May 25). Shoshana Zuboff Explains Why You Should Care About Privacy. The New York Times. https://www.nytimes.com/2021/05/21/technology/shoshana-zuboff-apple-google-privacy.html

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.

Kröger, J.L., Lutz O.HM., Raschke P. (2020) Privacy Implications of Voice and Speech Analysis — Information Disclosure by Inference. In: Friedewald M., Önen M., Lievens E., Krenn S., Fricker S. (eds) Privacy and Identity Management. Data for Better Living: AI and Privacy. Privacy and Identity 2019. IFIP Advances in Information and Communication Technology, vol 576. Springer, Cham. https://doi.org/10.1007/978-3-030-42504-3_16

Lemos, R. (2021, February 2). How Voice-Activated Assistants Pose Security Threats in Home, Office. eWEEK. https://www.eweek.com/security/five-ways-digital-assistants-pose-security-threats-in-home-office/
