Maid, Friend, or Master? Rethinking the Design of Voice Assistants.

Claire Florence Weizenegger
Dec 16, 2022

Exploring the relationships between Ethics, Design, Personification, and domestic Voice Assistants.

“Hey Siri, are you here to serve my needs?”
Yes! I’m your assistant 24/7.
“Do you work for me?”
That’s right, you’re the boss around here.

Your VA. Always available personal assistant!

As many critical scholars observe, contemporary domestic Voice Assistants (VAs) are sexist. Going into this thesis, I echo this critique and ask myself: What can we, the people who encounter VAs in some way, learn from it? Why is design in such a crisis, and what could a more desirable future look like? My thesis follows a Research through Design (RtD) approach, which means creating artifact(s) as a mode of inquiry that provokes participants to critically reconsider and reflect upon their interactions with a domestic VA and its role in everyday life.

In this post, I will first give a brief overview of the topic, its background, and its context within history. In the following section, I elaborate on how I frame my inquiry, including the selected methods and theoretical foundation. Lastly, I conclude with the next steps.


Over 20 years ago, Anthony Dunne and Fiona Raby investigated, in their book “Design Noir: The Secret Life of Electronic Objects,” the real physical and cultural effects of the digital domain, demonstrating that mobile phones, computers, and other electronic objects such as televisions profoundly influence people’s experience of their environment. Well, the future is here.

Smart devices are everywhere: they shape our routines, everyday lives, realities, interactions, and relationships. Through the progress of Artificial Intelligence (AI) and Machine Learning (ML), many of these devices are now equipped with human-like capabilities, such as lifelike gestures and speech (Kiesler et al., 2008). Smart speakers and voice assistants are the most salient example of this. Voice assistants are everywhere, with popular products like Alexa and Siri found increasingly in homes, redefining how we interact with technologies that present voice as the primary interface. The humanization of our everyday smart technologies, which are shy, helpful, and lack a threatening body, was foreseen by Donna Haraway (1985). However, while the personification of technology through voice might improve the overall user experience and increase trust in the device and manufacturer, it also prompts ethical and philosophical questions. One example is that technology is portrayed as a friend and helper to facilitate the acquisition of data that is then sold to third-party vendors, making such IoT devices a mine of sellable data (Woods, 2018).

One crucial area of this concern is the strong anthropomorphic character projected onto technology (Strengers & Kennedy, 2020). While banking and insurance apps often utilize a male voice, the leading VAs for the home are exclusively female or female by default. For example, Amazon’s Alexa, named for the ancient library of Alexandria, is unmistakably female. Microsoft’s Cortana was named after an AI character in the Halo video game franchise that projects itself as a sensual, unclothed woman. Apple’s Siri is a Norse name meaning “beautiful woman who leads you to victory.” The Google Assistant system, also known as Google Home, has a gender-neutral name, but the default voice is female. Some design scholars critique these practices and argue that tech companies are intentionally feminizing their smart home devices to boost users’ confidence, or perhaps even to manipulate them into granting trust (Sutton, 2017). To be more specific, Woods (2018) argues in her article “Asking more of Siri and Alexa: feminine persona in service of surveillance capitalism” that big tech companies are purposefully using the feminine persona in AI and virtual assistants to access the intimate parts of one’s life, at the cost of the user’s personal privacy.

Some scholars predict that, as more personal assistants are introduced into our homes, we will develop even more intimate relationships with them, which will come to be thought of as a natural part of everyday life. They fulfill the fantasy of a machine that performs women’s labor without being affected by stress, relationships, or the body. Such systems exploit or reinforce stereotypical social relations, such as child-mother (caregiver-infant) or owner-pet, and thus trigger stereotypical behavior. However, we, the modern society of the 21st century, need to ask ourselves whether personal assistants modeled after the infant-caregiver relationship represent our understanding of social relations. What and whose understanding of sociality and emotionality is realized in these systems?

This thing upholds systems of power and oppression in a capitalist, patriarchal and class society.

Framing the Inquiry

Female Voice is not the (main) problem.

After analyzing existing work and deepening my knowledge through secondary literature, I concluded that the problem goes beyond the female voice alone. It is extremely upsetting to me that voice assistants are viewed as literal assistants that perform tasks based on their owner’s demands.

Big tech companies became aware of the loud critique and claimed to make their voice assistants “genderless,” which is, in fact, not true. Siri, Alexa & co. are not genderless; they allow the selection of different genders, yet remain female by default and represent a certain set of female embodiments and values. Hence, I have learned that portraying certain personalities goes beyond voice. Consider, for example, the whole engineered, scripted character of VAs: a submissive, always-available 24/7 assistant that you boss around at home. This representation of the world, projected into technology, reinforces hurtful gender biases and contributes to greater inequality while actively shaping moral values.

Iceberg Model Problem Area

Thus, my biggest criticism of domestic voice assistants is the emphasis on the role of the assistant, which leads to greater inequality. Secondly, anthropomorphism applied to technology can lead to (false) emotional ties between humans and machines. Interestingly, a recent study of over 1,000 US consumers found that more than half did not question the female gendering of voice assistants or the potential repercussions of this design decision. Motivated by this shocking reality, I am eager to challenge the status quo and reimagine how everyday life with a voice interface in a domestic environment could look.



Following the RtD tradition, I use design proposals, sketches, scenarios, and prototypes to reconsider and critically reflect upon current norms of designing and interacting with a domestic helper tool equipped with voice. I chose an RtD approach because it uses design as a mode of inquiry to generate new knowledge by understanding the current state and then suggesting an improved, perhaps speculative, future design form. Unlike user-centered design (UCD), an RtD process does not necessarily focus on users and their needs through interviews and user studies, but rather puts ideas out there to gain new insights into user needs. In an RtD process, design and research become one. It involves deep reflection to iteratively understand the people, the problem, and the context of a situation that researchers believe they can improve. All in all, it is a way to materialize findings from practical design research.

I am influenced by Peter-Paul Verbeek’s mediation theory (2005) and Bruno Latour’s Actor-Network Theory (ANT). Therefore, I recognize the agency of objects to establish relations between the user and their environment. A guiding question that emerged for me is how conversational technology can occupy roles in people’s everyday lives outside of consumption: an artifact that speaks to you in ways that teach you what is going on in the world.

Problem Reframing

After months of reading and ideating around alternatives to contemporary use models, I challenge people to move from the 24/7-available assistant you boss around in your domestic environment to a helper tool, away from social norms of performing subservient tasks. If we want to move on from 21st-century norms, we must move beyond current visions of technology.

Hence, from this point onward, I will no longer use the term “Voice Assistant” but instead refer to it as a conversational helper tool. By doing so, I aim to draw attention towards more speculative, different ways of interacting with voice interfaces at home. I am particularly interested in creating a smart, voice-activated tool centered around relationships rather than demanding tasks to be done.

A home is both a technical space, structured around each individual’s role, and a social space where family members interact. However, the number of single-person households has recently risen sharply. At the same time, smart home technology has been growing to provide at-home rest to individuals. In this situation, a home’s role as a social space is diluted, and many people cannot receive the social support they need at home. By creating a helper tool, I respond to these challenges and rethink current use cases.

Simultaneously, I explore time-intensive, frustrating devices that challenge their users to reflect on their interactions with other contemporary domestic VAs in everyday life. I am inspired by “thing-centered” and “machine-centered” design approaches. Even more broadly: if it is questionable to model human-machine relationships on those assumed to hold between humans, why can’t objects simply be things, things that, based on AI and ML, have the ability to learn and evolve their own personality?

My guiding principles move away from contemporary VAs and target the interactions where my critique becomes visible:

  1. No conceptualization of an Assistant
  2. No trigger words to start the interaction
  3. Move away from submissive servant tasks


I have developed around 30 concepts so far.

Snapshot from my concepts

However, as of now, I am most intrigued by five of them.

The caregiving companion

The devices work in pairs: one lights up whenever the other person is talking. To use them, both people must engage equally, in rhythm and balance. It is a proactive, therapist-like voice interface that initiates deep, difficult conversations and works through human touch. Moreover, it encourages its users to take better care of themselves.

Concept 1: Caregiving Companion

The Argue Machine

A voice machine that argues all the time. The object is trained on object-oriented ontology, i.e., the idea that objects are more than their “actions.” This time-intensive, frustrating device prompts its users to reflect on their interactions with other contemporary domestic VAs. It could also work as an object that reveals the secret life of another object in its surroundings: How much oxygen does a plant take in while you are gone? How much energy does the kettle use while you are gone?

Concept 2: Argue Machine

The Needy Artifact

A voice-activated interface that only works when it is placed near another product in your apartment. Equipped with sensors that detect electromagnetic fields, the object facilitates different conversations depending on the object nearby. There is also no trigger word: the sensor detects your presence, and the object asks you a question before you can interact with it.

Concept 3: Needy Artifact

The Honest Guest

A dominant voice interface that is uncomfortably honest and gives you tasks instead of acting as a “helper” or “servant.” For example, it could remind you to do chores at home, to take care of yourself, or to go out into nature. Perhaps a secret-to-a-happy-life cube for the lonely man without a wife, or simply an honest guest. This voice interface challenges the submissive servant characteristics of VAs.

Concept 4: The Honest Guest

The Artifact that is aligned with the Universe

An artifact trained in the philosophy of wuwei (Chinese: “non-action”; literally, “no action”), the practice of not doing anything that is not consistent with the natural course of the universe. Certain skills are associated with particular times. For example, the weather can only be inquired about before 8 am. If one misses that window, one must wait 24 hours before the desired skill can be used again. If you ask more than one question at a time, the object stops working. The concept emphasizes the agency of objects in the hope of encouraging meaningful interactions.

Concept 5: Wuwei object
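The wuwei rules above are concrete enough to sketch as interaction logic. Below is a minimal, purely illustrative Python sketch of those three rules; the class and method names are my own hypothetical choices, not part of any actual prototype.

```python
from datetime import datetime, timedelta


class WuweiObject:
    """Illustrative sketch of the wuwei concept's interaction rules."""

    def __init__(self):
        # Skill name -> time until which the skill is locked out.
        self.lockouts = {}

    def ask(self, questions, now):
        """Answer at most one question, honoring the concept's time windows."""
        # Rule: asking more than one question at a time stops the object.
        if len(questions) != 1:
            return None
        skill = questions[0]
        # Rule: a missed window imposes a 24-hour wait.
        locked_until = self.lockouts.get(skill)
        if locked_until is not None and now < locked_until:
            return None
        # Rule: the weather can only be inquired about before 8 am.
        if skill == "weather" and now.hour >= 8:
            self.lockouts[skill] = now + timedelta(hours=24)
            return None
        return f"answering '{skill}'"
```

Asking about the weather at 7:30 am succeeds; asking at 9:00 am yields silence and locks the skill for a day. Returning `None` rather than an error message mirrors the concept: the object simply refrains from acting when the request is out of step with its natural course.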

Next Steps

The following steps in my thesis will be to further explore and define concept ideas for prototypes. I aim to do this by facilitating a co-design workshop with peers and by further developing and narrowing down the concepts. Eventually, the aim is to prototype three concepts in detail. I envision this in the form of videos, Wizard of Oz studies, and other physical prototypes.