It’s human nature to perceive robots as having human features and exhibiting human behavior. Anthropomorphic inclinations are in our DNA, and engineers can’t override this tendency. What roboticists can do is help us better cope with cognitive biases and better address social ones. To accomplish these goals, they should embrace a postmodern aesthetic.

Bots should be designed like Deadpool — the graphic novel–adapted cinematic antihero who constantly breaks the fourth wall by reminding the movie audience that he knows he’s a superhero character in a superhero movie. Bots should emulate the vibe of the meta-slasher Scream franchise, where characters recite horror movie commandments, break them, and pay the price for their transgressions. Starkly put, roboticists should aim to promote honest anthropomorphism by programming their devices to remind us that they’re just actors with make-believe human characteristics, performing for an audience that has a hard time suspending its disbelief.

Let’s consider popular digital assistants, like Amazon’s Alexa, Apple’s Siri, and Microsoft’s Cortana. Ian Bogost, the Ivan Allen College of Liberal Arts distinguished chair in media studies at the Georgia Institute of Technology, recently argued in the Atlantic that their design is the source of #genderfails: The bots’ very names ring gendered bells; the bots perform service-based labor that has been historically associated with stereotypes of women’s work and women’s emotional labor; and the bots can only ignore or disengage from sexist language, a far cry from real feminist ideals.

Bogost concludes: “Maybe the best way to represent women as technological apparatuses is to avoid doing so in the first place.” Agreed!

With good intentions, we might want to see Rosie the robot from the Jetsons cartoon get upgraded and become Rosie the Riveter bot. But while some design choices are more empowering than others, assigning robots any gender always risks amplifying social prejudices and incentivizing objectification.

We Build Robots with Our Own Biases

Sadly, human biases can be as subtle as they are powerful. While Siri and Cortana are disembodied, their speech patterns — which don’t include code-switching — evoke distinctive racial associations in human listeners. Indeed, Siri’s “accent and intonation” alone are types of information that offer racial “clues” to human beings.

A few months ago, I asked students in my Philosophy of Technology course to take a survey created by SPACE10, Ikea’s “future living lab,” called “Do You Speak Human?” The questions are all about personal preferences: what you would like a customized robot to look like and how you would like it to behave.

If you filled out the survey and didn’t skip any questions, you would have to decide if your bot should be “motherly and protective,” “autonomous and challenging,” or “obedient and assisting.” Most students judged the wording to be good shorthand, a succinct way to distinguish among very different alternatives. Yes, they recognized the gendered dimension of the first option. But it’s okay, they maintained, insisting that female animals have evolved to protect their young and that scientific evidence shows female humans have protective instincts toward their newborn offspring.

Maybe so. Unfortunately, the ideology-infused popular imagination readily crosses the line between ideas supported by sound science and idealized, fetishized, and politicized appeals to what’s natural. Women who have a hard time bonding with their babies shouldn’t be made to feel that they are in any way defective or that they just need to think positive, maternal thoughts to beat postpartum depression. And it takes only a few spurious connections linking the essence of being a woman to motherhood before people start leveling unjustifiable criticisms at women who work outside the home, choose not to have children, or want to exercise reproductive rights. Patriarchy, like racism and colonialism, is rooted in histories of false essentialisms, and these essentialisms persist and reemerge when they aren’t continually challenged.

What Will Be the “Keyboard Shortcuts” of Voice Interaction?

How can a postmodern aesthetic help? As usual, New York Times and Wired tech writer Clive Thompson offers helpful suggestions. In “Stop the Chitchat. Bots Don’t Need to Sound Like Us,” Thompson argues that digital assistants like Alexa and Siri can be frustrating to talk to because the bots mimic human idiosyncrasies like “phatic” expressions — the inefficient words and phrases that humans use in social conversations.

Thompson predicts that in the future, people will “crave a more fluid, allegro pace of voice interaction the same way power users of desktop software eventually adopt keyboard commands.” I asked Thompson what else could be done to free chatbots from excessive anthropomorphic constraints, as well as problems like sexism.

“If I was making a voice bot and really wanted to foreground the weirdness of making a robot gendered,” Thompson replied, “I’d have it switch voices frequently, going from a high-pitched female voice to a deep male one, to a deep female one, to a high-pitched male one, and so on. You’d never know precisely which one you were gonna get, but they’d all be ‘Siri.’ The ‘Ursula LeGuin’ method!”

“Maybe occasionally I’d have Siri respond with three or four voices speaking in unison, a mix of male and female,” Thompson added. These design possibilities are in line with a postmodern aesthetic, because the proliferation and amalgamation of voices do two things at the same time: perform a familiar role (conversational partner) while also reminding us that role-playing is occurring (during the conversation).
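Thompson’s voice-rotation idea is simple enough to sketch. Here is a minimal, hypothetical Python illustration; the voice names, pitch values, and the ten-percent unison chance are placeholders I’ve invented, not any real assistant’s speech API:

```python
import random

# Hypothetical voice profiles; names and pitch values (in Hz) are illustrative only.
VOICES = [
    {"name": "high_female", "pitch": 220},
    {"name": "deep_male", "pitch": 85},
    {"name": "deep_female", "pitch": 165},
    {"name": "high_male", "pitch": 140},
]

def pick_voices(unison_chance=0.1):
    """Pick the voice(s) for the next reply.

    Usually returns a single randomly chosen voice, so the listener never
    knows which one they'll get -- but they're all "Siri." Occasionally
    returns three or four mixed voices to speak in unison.
    """
    if random.random() < unison_chance:
        return random.sample(VOICES, k=random.choice([3, 4]))
    return [random.choice(VOICES)]
```

The point of the randomness is exactly the postmodern move described above: the reply still performs the conversational role, while the shifting voice keeps reminding the listener that a performance is what’s happening.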

Back to the Deadpool model of using words, not sounds, to call attention to role-playing. Apple deserves credit for trying to evolve Siri beyond the initial gendered associations the company fostered by using Susan Bennett to give the bot its first voice. If you ask Siri if it’s female, it replies by emphasizing its botness. “I wasn’t assigned a gender” is one answer. “Well, my voice sounds like a woman’s, but it exists beyond your human concept of gender” is another.

This is a step in the right direction, but not the right technique. Siri only shows its hand if you trigger it with select, inquisitive questions. Siri doesn’t actively push out reminders that it’s only a bot. In part, this is because Siri is designed to treat conversations as user-initiated events without ever taking the lead to broadcast facts about its being.

Robots, Remind Us That You’re Not Human

A better model is Woebot, a robot app that provides cognitive behavioral therapy. To remove the stigma that some people associate with getting therapy from another person, and to avoid giving people wrong impressions about what the bot can do during an emergency, Woebot punctuates anthropomorphic expressions with botly declarations.

Woebot automatically tells users the following sorts of things: “I’m just a robot. A charming and witty robot, but a robot all the same.” “Helps that I have a computer for a brain and a perfect memory…With a little luck, I may spot a pattern that humans can sometimes miss.” Woebot even nudges you to say, “Sir, yes, sir!” so that it can correct you: “Tee hee…though I’m neither a sir nor a madam.”

Alongside these revealing remarks, Woebot also uses anthropomorphic language that suggests it has more agency than it really does. Woebot talks about what it “wants” for you and pretends to reminisce about a time that it was “nervous.” These aren’t deceptive claims, even though they’re false. The whole is larger than the sum of its parts, and the rhetorical conceits occur during a dialog that alternates between closing the curtain and pulling it back.
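That alternation between closing the curtain and pulling it back can itself be expressed as a design pattern. The toy Python sketch below is my own hypothetical approximation, not Woebot’s actual implementation; the class name, cadence, and reminder text are all invented for illustration:

```python
class BotlyBot:
    """Toy chatbot wrapper that periodically breaks the fourth wall.

    Most turns pass the anthropomorphic reply through unchanged (curtain
    closed); every Nth turn appends a botly declaration (curtain pulled back).
    """

    def __init__(self, reminder_every=3):
        self.reminder_every = reminder_every
        self.turns = 0

    def respond(self, reply):
        self.turns += 1
        if self.turns % self.reminder_every == 0:
            # Hypothetical reminder text, in the spirit of Woebot's declarations.
            return reply + " (Just a reminder: I'm a bot, not a person.)"
        return reply
```

A designer tuning `reminder_every` is really tuning the rhythm of honest anthropomorphism: too frequent and the performance collapses, too rare and the audience forgets it’s watching one.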

Although I’m advocating for a postmodern robo-aesthetic, I only chose this label because, frankly, I couldn’t think of a better term. Postmodernism is an inherently contested concept, and some people associate postmodernism with deep skepticism about truth. I’m not advocating that roboticists go down an epistemological rabbit hole, but I am urging them to be more transparent about how their devices represent what they are.

The aesthetic I have in mind has a lot in common with what Ryan Calo, a renowned law professor at the University of Washington, calls “visceral notice” — the idea of using highly resonant design strategies to heighten consumer awareness of issues that they have a hard time perceiving, like privacy threats.

“It would be a shame were we to leverage the unique affordances of robots to nudge and sell but not to inform,” Calo told me. “Robots can do so much more than recite terms of service. They can engage, including to remind us that they are not as human as they sometimes feel.”