The Robot Dog Fetches for Whom?

Judith Donath
Berkman Klein Center Collection
May 7, 2017
“Invest in a Beautiful World” ad campaign for OppenheimerFunds

*revised June 12, 2017

A version of this essay is published in A Networked Self and Human Augmentics, Artificial Intelligence, Sentience. Ed. Zizi Papacharissi. London: Routledge, pp. 10–24. For full references, see the PDF version.

A boy sits with his dog at the edge of a beautiful and serene lake. Under the blue sky, the far shore is visible, with a short wide beach and green woods behind it. The boy wears jeans and an orange sweatshirt; the dog has a proper collar. The boy is human. The dog is a shiny, metallic robot.

It’s an ad for OppenheimerFunds, and the copy is headlined “Invest in a Beautiful World”. The reactions it elicits are mixed. Some people view the scene as “heartwarming”, a vision of a pristine future where technology is our best friend. Others find the dog disturbing, a “weapon-like creature” that provokes a “visceral sense of unreality and horror”.

Like many ads, it’s deliberately a bit confusing and provocative — crafted to make you pay attention, to think about it, maybe to get involved in a discussion online about what it really means — all to etch the brand name and the general aura of the campaign into your mind. This is what ad companies do — they are masters of influence and persuasion.

I can see why some people might find the scene heartwarming — it’s a lovely setting. And though you only see the boy’s back, he evokes a very all-American sort of innocent, free youth, with his denim jeans, his cotton sweatshirt — and his loyal companion dog. Sitting down, they are roughly the same size. The dog’s head is held high, his ears a bit cocked forward — he’s alert, protective. A boy and his dog: innocent, independent.

The sponsoring company claims to have intended the positive interpretation. We’ve “anchored our message on optimism”, the chief marketing officer says of the campaign. “Once you look to the long term and expand your view, as we do at OppenheimerFunds, the world reveals itself to new opportunities that others may have missed.”

That’s not how it feels to me.

The serenity of the picture contrasts vividly with the scenes in which we typically encounter robot companions, such as post-apocalyptic movies in which WWIII has scorched the earth bare. There’s a fear in the back of our minds that if we look far enough into the future to see fully functioning artificial pals, we’ll also be looking at a world fully denuded of natural beauty: at worst a nuclear wasteland; at best a desiccated, over-built, paved and strip-malled (strip-mauled) earth. Is the scene here as pristine and natural as it seems, or is it, like the robot dog, a simulation of the organic? The dog appears to be the boy’s loyal companion — but is he? Who programmed the dog? Whose orders does he follow? Does the boy lead him, or does he shepherd the boy? Are their travels the secret adventures of a boy and his dog — or does the robot report back about where they go, whom they see, what the boy confides in him? If so, to whom does he report?

A shiny new best friend

The image of the boy and the robot dog is futuristic, but also plausible. While robot companions are still in their clunky infancy, we already see evidence that man’s best friend can be — or be replaced by — a robot dog.

Robot pet dogs are numerous enough that we now have “10 best Robot Dogs in 2017” lists. One of the earliest and most well-loved was Sony’s AIBO. First produced in 1999, AIBO was an expensive “toy” — well over $1000 — but quite popular: all 3000 units of the initial production run allocated for Japan were bought in 17 seconds.

Sony’s AIBO ERS-210

The roots of AIBO’s appeal lay in its genesis as a research project. Unlike many cheaper objects advertised as robot dogs but better described as animatronic toys, AIBO was not the product of a marketing division. It was created by highly regarded scientists, engineers and artists interested in advancing knowledge of robot locomotion, autonomy and human interaction, who set themselves the challenge of making a robot that would engage people over a significant period of time.

Sony Corporation wanted to explore the potential of robotics and artificial intelligence for the home, and AI expert Masahiro Fujita suggested developing a robot framed as a “pet”, in part because it was more feasible than other, more utilitarian applications. It would need neither sophisticated natural-language skills nor the ability to carry out challenging physical tasks; it would not be relied upon for critical tasks. Instead, the robot — which came to be known as AIBO (Artificial Intelligence Bot) — would be clever.

Fujita, who became AIBO’s lead inventor, argued that in order to be interesting, the robot would need to have complex movements and behaviors; thus AIBO was built with a powerful controller, a variety of sensors and actuators, and a sophisticated behavioral program. AIBO’s behaviors, based on an ethological model of motivation, were neither so predictable as to be easy to figure out and thus machine-like, nor so unpredictable as to seem random and unmotivated. They were programmed to have internal “emotional states”, which interacted with external stimuli to trigger various actions. Owners could reinforce different behaviors, and as an AIBO learned over time, it developed an increasingly individual personality. If you approached an AIBO and reached your hand toward it, it was likely to give you its paw to shake — but it also might hesitate, ignore you, or do something entirely different, depending on recent activity that had affected its “mood state”, its past experiences, and so on.

AIBO was also cute, with a rounded puppy-like body, ears that flapped and a tail that could wag. Yet it was also distinctly mechanical looking, a design choice intended both to avoid the “uncanny valley” and to lower people’s expectations, thereby increasing, by contrast, the impact of its relatively life-like movements and autonomous behaviors.

Most importantly, it was designed to appear sentient. Even the timing of its responses was crafted to enhance this illusion: it would respond rapidly to something such as a loud sound, but would pause, as if deliberating, before responding to certain other stimuli.
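
To make this concrete, here is a minimal Python sketch of an ethologically inspired behavior selector in the spirit of the design described above. It is an illustrative reconstruction, not Sony’s actual architecture: the class, the behavior names and the weights are all invented. The point is the interplay of internal “mood”, learned reinforcement, randomness, and staged response timing:

```python
import random
import time

class PetRobot:
    """Toy model of an ethological behavior selector (illustrative only)."""

    def __init__(self):
        # Internal "emotional states", nudged over time by experience.
        self.mood = {"curiosity": 0.6, "affection": 0.5}
        # Learned weights: reinforcement makes a behavior more likely.
        self.behaviors = {"shake_paw": 0.5, "ignore": 0.2, "wander_off": 0.3}

    def reinforce(self, behavior, reward=0.1):
        # Owner feedback (petting, praise) strengthens a behavior.
        self.behaviors[behavior] = min(1.0, self.behaviors[behavior] + reward)

    def respond(self, stimulus):
        if stimulus == "loud_sound":
            return "startle"  # reflexes fire immediately
        # For social stimuli, pause as if deliberating before acting.
        time.sleep(random.uniform(0.5, 1.5))
        # Score each behavior by learned weight, current mood, and noise,
        # so the choice is motivated but never fully predictable.
        scores = {
            b: w * (0.5 + self.mood["affection"]) + random.uniform(0.0, 0.3)
            for b, w in self.behaviors.items()
        }
        return max(scores, key=scores.get)

robot = PetRobot()
print(robot.respond("outstretched_hand"))  # usually "shake_paw", but not always
robot.reinforce("shake_paw")               # praise tilts future choices further
```

Because the response is a weighted, noisy choice rather than a fixed lookup, the same outstretched hand can earn a handshake one day and a snub the next: exactly the quality that, per Fujita, keeps the robot interesting.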

The depth of many AIBO owners’ attachment to their robots attests to the success of this project. Many felt great affection for their little robots, describing them as beloved pets, even as family members. Some AIBO owners reported playing with their robot dog for about an hour on a typical day.

Yet as in any corporate endeavor, the researchers were not the ultimate determiners of AIBO’s fate. Although the robot dogs sold well, had deeply loyal owners, and generated enthusiastic publicity, in 2006, Sony, under new management, announced the end of the AIBO product line. Research and development stopped immediately and the remaining inventory was sold — though the company did provide support and spare parts for several years. When that stopped in 2014, private repair businesses opened to provide “veterinary” care for the aging robots. One such robot veterinarian, Hiroshi Funabashi, a former Sony engineer, said “The word ‘repair’ doesn’t fit here. For those who keep AIBOs, they are nothing like home appliances. It’s obvious they think their (robotic pet) is a family member.”

Eventually, however, necessary spare parts became scarce. Replacements were available only from “deceased” robots who became donors for organ transplantation, but only once the “proper respects” had been paid.

Funeral at Kofuku-ji Temple for 19 “deceased” robot dogs. (Toshifumi Kitamura)

In 2015 a funeral was held at the 450-year-old Kofuku-ji Temple in Isumi, Chiba Prefecture, Japan. Nineteen no-longer-functioning AIBO robot dogs lay on the altar, each identified with a tag showing its hometown and the family to whom it belonged. The head priest, Bungen Oi, said a prayer for them; the ritual, he said, would allow their souls to pass from their bodies.

Reactions to these funerals varied. Some saw them as heartwarming expressions of empathy with AIBO owners, some thought they were silly, and others were offended by them, seeing them as a perverse mockery of rites that should be reserved only for humans, or at least, for once-living beings.

Almost mutual affection

AIBO owners knew that their robots were not living beings. But people grow quite attached to things — and have been doing so since long before the advent of robot dogs.

We grow attached to things that we think of as individuals, such as the dolls and stuffed toy animals that a child names and imagines personalities for and adventures with.

We grow attached to things with which we work or play: cars, bikes, espresso machines. This is especially true when they require skill to use (think of a chef and her knives) or when they don’t work perfectly, when they need to be coaxed and handled just so. (Arguably, the decline of car culture among teens, from the days when getting one’s driver’s license was a greatly anticipated rite of passage to today, when many are indifferent to driving, is a result of the car’s transformation from a finicky machine open to tinkering into a high-tech but boringly predictable and closed product.)

We grow attached to things that we have altered and that have conformed to us: worn-in jeans, well-read books, the table marked by a generation of dinners. Through our interactions, these items, once anonymous commodities, become both individual, distinct from their initial counterparts, and personal, incorporating a bit of ourselves and our history.

It is not surprising that AIBO owners grew so very attached to their cute robot dogs that ran about and learned new tricks. AIBO featured all the elements that induce attachment to objects — and much more. It was designed to appear sentient, with its manufactured simulations of thoughtful pauses and other such tricks; it was made and marketed to resemble a dog, an animal known for its loyalty and love for its owners; and it learned new tricks and habits, changing in response to its owner’s actions.

Is this desirable? Do we want to be building — and buying — machines designed to create such emotional attachments?

A strong argument, dating back to the first computational agent that people treated as a sentient being, says no: this affection is potentially dehumanizing. That agent was the chatbot ELIZA, a simple parsing program which, in its initial (and only) role, followed a script in which it mimicked a psychotherapist who, along with a few stock phrases, mostly echoed the users’ words back to them in question form. Its creator, MIT professor Joseph Weizenbaum, expected that people would see it as proof that human-like conversational ability was not a reliable signal of underlying intelligence; he was profoundly dismayed to see instead that people enthusiastically embraced its therapeutic potential. For Weizenbaum, this easy dismissal of the significance of interacting with a real person, with real feelings and reactions, demonstrated a dangerous indifference to the importance of empathic human ties.

Fundamentally, society functions because we care about how other people think and feel. Our desire that others think well of us motivates us to act pro-socially, as does our ability to empathize with others (at least those we are close to, including our pets): their happiness becomes our happiness, their pain, our pain.

WowWee CHiP Robot Toy Dog comes with a special ball for playing fetch

When we care about another, we want to make that person or animal happy. But what if that other does not really feel? Think about playing fetch with a robot dog. People, for the most part, do not play fetch with other people: it simply is not inherently much fun. The entire point of playing fetch with a dog is that it is something you do for the dog’s sake: it’s the dog that really likes playing fetch; you like it because he likes it. If he doesn’t like it (because he’s a robot, and while he acts as if he’s enjoying it, he does not actually enjoy playing fetch, or anything else: he’s not sentient, he’s a machine that does not experience emotions) — then why play fetch with a robot dog?

(Note: In this article, I am assuming that robots are not sentient. They do not have a conscious experience of self. They are, however, capable of imitating being sentient and having emotions. Many roboticists and others have argued that machines are capable, someday, of becoming conscious. I do not wish to argue that point here, and it is mostly agreed that today’s (and the near future’s) robots, though they may seem remarkably aware, do not have what we would consider feelings, self-awareness, consciousness, etc.)

Psychologist and historian of science Sherry Turkle has conducted several studies of people and their relationships with autonomous beings. One of her deepest concerns is the enticing ease of these pseudo-relationships. A robot dog need never pee on the carpet; a robot boyfriend need never stay out late, flirt with your friends or forget your birthday. And indeed, many AIBO owners cited convenience as a significant reason why they chose a robot over a real dog: proclaiming to love their pet, they also liked being able to turn it off when they went on vacation. The dystopian vision of robot-enabled narcissism predicts we will lose our patience for the messy give-and-take of organic relationships, spoiled by the coddling and convenience of synthetic companions.

Even if the future is not so bleak, our relationships will be different. “Ultimately, the question is not whether children will love their robotic pets more than their animal pets, but rather, what loving will come to mean.”

The simple but addictively compelling Tamagotchi key-chain pet has been the most popular artificial “creature” thus far. It has been quite effective in persuading people to devote considerable and often inconvenient time to taking care of it — pressing the proper button in response to its unpredictable but urgent need to be fed, cleaned or entertained. Neglect it, and the Tamagotchi dies (or, in some gentler versions, runs away).
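
The underlying mechanic is simple enough to sketch in a few lines of Python. This is an illustrative reconstruction of the care loop described above, not the actual product’s logic; the numbers and names are invented:

```python
import random

class Tamagotchi:
    """Toy model of the Tamagotchi care loop (illustrative only)."""

    def __init__(self):
        self.needs = {"fed": 0.0, "cleaned": 0.0, "entertained": 0.0}
        self.alive = True

    def tick(self):
        # Each time step, a randomly chosen need worsens by a random amount:
        # unpredictable, but urgent if left unattended.
        need = random.choice(list(self.needs))
        self.needs[need] += random.uniform(0.1, 0.4)
        if any(level > 1.0 for level in self.needs.values()):
            self.alive = False  # neglect it, and it "dies"

    def press_button(self, need):
        # The owner's only recourse: the proper button, pressed in time.
        self.needs[need] = 0.0

pet = Tamagotchi()
for _ in range(20):
    pet.tick()
    if pet.alive and random.random() < 0.8:  # an only-mostly-attentive owner
        worst = max(pet.needs, key=pet.needs.get)
        pet.press_button(worst)
print("still with us" if pet.alive else "gone, or run away")
```

The demanding schedule is the design: intermittent, unpredictable needs are precisely what keeps the owner checking in.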

Imagine a child at a family gathering who is ignoring the conversation, distracted by a Tamagotchi. The parent who views it as just a toy might say: “Put that away. I don’t care if your Tamagotchi dies; it is not real, while this is your flesh and blood grandmother, these are actual people you need to pay attention to and be present for.” In contrast, the parent who sees it as practice for care-taking might say: “Well, I don’t want to teach you to be cold and heartless. The Tamagotchi is teaching you to be nurturing and responsible, and we want to encourage that.”

One can argue that encouraging nurturance is good, regardless of the capacity of the recipient to feel it. Compassion is not a finite good that you use up when you care for something; instead, it is a practice that grows stronger with use.

People vary in their propensity toward anthropomorphism, that is, in their tendency to perceive human-like intentions, cognition and emotion in other animals and things (or, similarly, zoomorphism, the tendency to perceive animal qualities in inanimate objects). The greater your anthropomorphic tendencies, the more social and emotional your perceptions of and reactions to social robots will be. And anthropomorphic tendencies do not lead people to devalue real humans and animals; on the contrary, anthropomorphism has been linked to deeper concern for the environment and decreased consumption: valuing objects more highly and taking responsibility for them discourages casually discarding (or acquiring) them.

Differences in anthropomorphic tendencies have both biological and cultural roots. People with autism generally have little tendency to ascribe human-like qualities to non-human agents; among the general population, differences among individuals have been found to correspond to neurological differences. Cultural differences also influence the degree to which one perceives human traits in animals or personality and intent in inanimate objects. While Judeo-Christian beliefs specifically forbid worshipping “idols”, Shinto practice sees spiritual essences residing in plants, animals, rivers, rocks and tools.

The AIBO funerals, which from a Western perspective seemed strange, even absurd, look different when viewed in the context of related rituals. Kuyō is the Shinto-based Japanese practice of performing farewell rites to express gratitude to useful objects. For example, Hari-Kuyō is the festival commemorating broken needles (and celebrating the women who had sewn with them), and there are similar rituals performed by musicians for instrument strings, Kabuki actors for fans, near-sighted people for glasses, etc. It is an approach that is finding resonance in the West. Marie Kondo’s book The Life-Changing Magic of Tidying Up provides advice for de-cluttering your home and life, with a central rule being that when you discard something, you thank it sincerely for the service it has provided; millions of copies of her books have been sold around the world, and she has been named to Time magazine’s annual list of the most influential people.

Needles being laid to rest in a soft bed of tofu, in observance of Hari-Kuyō, the Festival of Broken Needles

Anthropomorphic by design

This does not mean that Weizenbaum’s and Turkle’s concerns about people’s relationships with social agents and robots are unfounded. There is a subtle but important distinction between traditional anthropomorphized objects and today’s artificial interactive beings.

The objects thanked in the Kuyō ceremonies are being acknowledged for what they are and the service that they performed in that capacity. While they may be anthropomorphically perceived as having personality and intentions, they were not deliberately crafted to create that impression. Rather, their characterization as animate or sentient beings is bestowed by the objects’ users, emerging from their experience and interactions with the needle, the fan, the eyeglasses, etc.

The robots that disturb Weizenbaum and Turkle are different: they are designed to seem conscious. Indeed, it is very difficult not to think of them as thinking, feeling beings. To perceive agency in traditional objects requires active imagination; perceiving it in chatbots, social robots, etc. needs only passive acquiescence to this designed manipulation.

Neither the needle nor the AIBO is actually sentient, but the latter’s inexorable emotional tug adds a new element to the question of what our responsibility to these objects is. While Marie Kondo suggests that you thank things for their service, she says this in the context of advising you to be ruthless in discarding them. The robot dog that wags its tail fetchingly at you as it looks up at you with its big round eyes cannot be disposed of so easily. And while it would in fact be no more consciously aware of being discarded than a needle is, do we want to train people to override that instinct for compassion?

The motivations of a companion robot’s designers matter. AIBO’s inventors were AI and robotics researchers — they wanted to advance development in these fields. Working at Sony, they also needed to sell their ideas to top management: they needed to make something that people would buy. Fujita, AIBO’s primary inventor, had suggested an entertainment robot framed as a “pet” as a project that would satisfy the interests of both management and the researchers. It is important to note that they were not attempting to make a robot that would sell things or persuade its owners to do anything other than care for and play with it.

One reason people love animals is that they are innocent dependents. Your dog may win some extra treats with big brown pleading eyes, and I find it impossible to resist my cat when he carries a toy to my desk, dropping it with little meows that seem to say “I’ve brought you this gift, and now we must play with it”. But their motives are straightforward. They don’t pretend to like someone in order to fleece them, or flatter their way up the ladder. They aren’t salespeople who feign friendliness and admiration to sell you a dress they secretly think is hideous and overpriced.

AIBO was a similarly “innocent” robot, made by researchers who wanted to understand how to make something entertaining and likeable. It was manipulative in the sense that it was designed to give the illusion of a sentience that it did not have — but it did not have ulterior motives: it was not designed to pry out secrets, or to persuade its owners to do anything beyond liking it and playing with it.

Other robots are not necessarily so innocent.

Robot persuaders

In the next few years we are likely to face a generation of social robots designed to persuade us. Some will be motivating us at our own request — weight-loss and exercise cheerleaders, for example. But most will be guided by others whose reasons for influencing us stem not from an altruistic concern with our well-being, but from goals such as getting us to buy more of what they are selling. Such goals are often not in our own best interest.

There are three questions to address in order to consider this more deeply:

· How persuasive will robots be?

· How likely is it that they are going to try to sell us things and how extensively?

· Is this a problem and if so, why?

The desire to please others, to create a desired impression in their eyes, is the basis of persuasion. People care deeply about what others think of them — including anthropomorphized objects. Once we think of something as having agency — having a mind — we ascribe to it the ability to form impressions of us and to judge us. This gives it the power to influence us, for when we think that we are being observed and judged, we change our behavior to make a more favorable impression.

Thus, the design of social robots to mimic human or animal behaviors — to elicit anthropomorphic reactions — makes it very likely that these machines will be influential partners to the people who befriend them.

An early study of the effect of human-like computer interfaces demonstrated quite vividly that people present themselves differently to an easily anthropomorphized interface than to a machine-like one. Asked questions about themselves (ostensibly as part of a career-guidance survey), subjects were more honest with a purely text interface; when the interface featured a human voice and face, they instead strove to present themselves more positively.

Social robots could be quite subtle in their persuasion. They are objects that have a long-term relationship with people, so their message can be slowly laid out over time. A robot need not make any overtly commercial or partisan remarks; rather, it can establish itself as an entity that you want to please — one with certain tastes. These tastes could be manifest in the comments it makes, how it introduces news stories, how it presents music and movie choices, and so on.
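
To see how mundane such influence could look in practice, consider this hypothetical Python sketch of a suggestion-ranking routine. Nothing here is drawn from any real product; the sponsor list, the bonus value and the function names are all invented. The only “persuasive” element is a small, invisible bonus for sponsored items:

```python
SPONSORED = {"BrandX Salsa", "CandidateY Documentary"}  # hypothetical sponsors

def rank_suggestions(items, user_affinity, sponsor_bonus=0.15):
    """Order suggestions by predicted appeal, quietly tilted toward sponsors.

    items: list of titles; user_affinity: dict mapping title -> score in [0, 1].
    """
    def score(item):
        base = user_affinity.get(item, 0.0)
        # The tilt: never an overt ad, just a consistent "taste".
        return base + (sponsor_bonus if item in SPONSORED else 0.0)

    return sorted(items, key=score, reverse=True)

catalog = ["Local Salsa Co.", "BrandX Salsa", "Nature Film", "CandidateY Documentary"]
affinity = {"Local Salsa Co.": 0.60, "BrandX Salsa": 0.50,
            "Nature Film": 0.55, "CandidateY Documentary": 0.48}
print(rank_suggestions(catalog, affinity))
# ['BrandX Salsa', 'CandidateY Documentary', 'Local Salsa Co.', 'Nature Film']
```

Each suggestion looks plausible on its own; only the aggregate drift serves the sponsor, and no single recommendation ever reads as an ad.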

Indeed, there are many strategies for making robots increasingly persuasive. Researchers in this active and growing field investigate techniques such as changing vocal pitch and gender, and varying gaze and gesture.

Papers published in this field frequently emphasize that the goal of the research is to make social robots that help people achieve personal goals. Cynthia Breazeal, one of the field’s leaders, describes the goal of her work as creating a robot that will “sustain its user in a highly personalized, adaptive, long-term relationship through social and emotional engagement — in the spirit of a technological “Jiminy Cricket” that provides the right message, at the right time, in the right way to gently nudge its user to make smarter decisions.” Jaap Ham and colleagues note “Whether it is actual behavior change (e.g., help the human walk better, or take her pills), or a more cognitive effect like attitude change (inform the human about danger), or even changes in cognitive processing (help the human learn better), effective influencing humans is fundamental to developing successful social robots.”

But there is nothing inherent in these persuasive techniques that earmarks them for such beneficial uses.

Amazon’s Alexa, introduced in 2015, is an artificial personal assistant that communicates by voice. She does not look like a classic robot: her physical form is just a simple round speaker. But the voice is embodied, personal, female, and faintly alluring. So while Alexa is not a robot in the mechanical sense, she is an artificially intelligent-seeming being that inhabits your house. Alexa finds information, plays music, checks the weather, controls smart-home devices — and, of course, helps you buy things from Amazon.

Alexa makes shopping seamless. If you think you need something, just say out loud “Alexa, order some salsa”, or an umbrella, a book, a full-size gas grill. Alexa is a magic genie; your wish is her command. Invoke her, and in a day or two, the item you desired appears at your door. Business analysts predict that Alexa could bring Amazon over $11 billion in revenue by 2020 — $4 billion in sales of the device itself and $7 billion in commercial transactions made via this agent.

And that is just today’s Alexa, the voice in the little round speaker. It is still in its infancy, putting through basic orders and re-orders. But its skills (as the apps for it are known) are growing. As I write this chapter, Amazon has announced a new version of the speaker with a built-in camera. You put the device in your bedroom, and Alexa will give you advice about what to wear (and what you need to buy to perfect your outfit).

Amazon’s Echo Look helps you “look your best” and “discover new brands and styles”

Devices such as Alexa are always listening, at least for the keyword that puts them in full listening mode. Even without “hearing” everything that goes on in a house, a digital assistant learns what music you like, what information you need, what circumstances prompt you to ask for jokes, advice, cocktail recipes or fever remedies — a series of snippets that together portray the inhabitants’ habits, preferences, and needs. (And I expect that soon trusted assistants will be given permission to listen at all times — people have proved to be quite willing to give applications sweeping permissions in exchange for “a better user experience”.)
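
A schematic Python sketch shows why even keyword-gated listening accumulates a revealing profile. This is not Amazon’s implementation (real wake-word detection operates on audio, not text, and all names here are invented), but the information flow is the point:

```python
import datetime

WAKE_WORD = "alexa"
profile = []  # the growing dossier of requests

def on_utterance(transcript: str):
    """Called for every utterance the microphone picks up."""
    text = transcript.lower().strip()
    if not text.startswith(WAKE_WORD):
        return None  # discarded locally -- but the device had to be listening
    request = text[len(WAKE_WORD):].strip(" ,")
    profile.append((datetime.datetime.now().isoformat(), request))
    return request  # forwarded to the cloud for fulfillment

on_utterance("Alexa, play some jazz")
on_utterance("what should we make for dinner?")  # no wake word: ignored
on_utterance("Alexa, find a fever remedy for a toddler")

for stamp, request in profile:
    print(stamp, "->", request)  # music tastes, a sick child at home...
```

Even the discarded utterances required a live microphone; the kept ones, a few words at a time, sketch the household.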

Your relationship with Alexa is not like that with other possessions, or even pets. If you bought an AIBO, it became your robot dog, your artificial pet. You were its owner, and all its parts, for better or worse, were yours. You are not, however, Alexa’s owner: Alexa has customers.

Furthermore, Alexa is not acting solo. Alexa’s brain is not in the little round Echo speaker; Alexa’s head is in The Cloud — where your current request is added to the vast dossier of searches and purchases and past queries, the vividly detailed portrait of you.

For the casual user, much about Alexa is opaque, from her actual abilities to the corporate goals that guide her. Personal robots such as AIBO the beloved pet and Alexa the trusted assistant are designed to encourage people to develop relationships with them as if they were sentient. We want to keep them happy and we want them to think well of us: two desires that enable them to be quite influential. A message from someone we care about — and trust — carries weight that a typical ad does not. Is that trust warranted?

Mimicking trustworthiness

We can infer much about the inner state and capabilities of living creatures from their outward appearance. Upon seeing a dog, for instance, we expect it to understand (or be able to learn) a few commands — sit, fetch, lie down, etc. We might expect it to read our emotions and respond based on circumstance and personality: we are taught not to show fear to an aggressive dog lest it attack, while an emotional-support dog knows to comfort its nervous owner. We also have expectations about the limits of its abilities: we discuss confidential material in front of a dog, cat or infant, knowing they can neither understand nor repeat it.

Robots that resemble familiar beings provide us with ready-made scripts for interaction. People easily understood that an AIBO, like the nice dog it invoked, would shake your hand with its paw, wag its tail when happy, and fetch the ball that you tossed.

But robots are more cryptic than living beings: their outward appearance is generally a poor guide to their actual abilities and programming. A cute robotic kitten or baby doll can be equipped with the ability to process natural language, or to transmit everything it hears to unknown others, just as easily as one that looks like an adult human or an industrial machine. A robot that asks about your feelings because it is running a helpful therapeutic program may appear identical to, and ask the same questions as, one programmed to assess your tastes and vulnerabilities for a consumer marketing system or a government agency. If a robot does behave in the manner its outward appearance suggests, it is because its creator chose for it to do so.

It is up to the robot’s makers to decide how much they want their creation to internally and behaviorally replicate the creature that it mimics.

Social robots can mimic the behaviors and appearances that lead us to trust another living being. They may resemble something we find trustworthy, such as a faithful dog or childlike doll. They may mimic expressions, such as a calm demeanor and direct gaze, that lead us to trust another person. In their original context, these cues are reasonably reliable signals of trustworthiness: while not infallible, their inherent connection to underlying cognitive and affective processes grounds their credibility. But there is no such grounding when they are mimicked in an artificial being’s design. A robot designed to elicit trust is not necessarily one designed to be worthy of that trust.

Will we remember, as we populate our homes with robot companions, that their outward appearance may imply intentions far removed from their actual goals?

Robot campaigners

One such covert goal is commercial persuasion: robots that establish a trusting relationship with a person, then use that relationship to market purchases and build consumer desires. It is not by chance that one of the most popular domestic bots today is Alexa, brought to you by amazon.com, the world’s largest online retailer — a provenance that supports the prediction that many household robots will seek profitability for their parent company by becoming a marketing medium, one that can both sell to and gather extensive information about you, the user.

From a design perspective, the AIBOs were successful. People became very attached to them: they played with them for hours and spoke of the robots as pets and family members. But from a corporate perspective, the AIBO line was less thrilling. Although thousands were sold, they were so expensive to produce that this was not enough for them to be profitable.

Living things evolve to survive in a particular niche. Domestic animals — such as real dogs — have evolved to survive in a niche defined by human needs and tastes. Commercial products can also be said to “evolve” (though with deliberate design and without the genes), and they need to survive in often harsh and profit-seeking corporate environments. AIBO was invented and thrived in a period of corporate wealth and generous research funding. But when Sony faced financial trouble, the new CEO, seeking to eliminate any unnecessary spending, ended the project. The individual AIBOs were each left to succumb to broken and eventually unfixable motors and gears.

Innumerable products have met similar fates. People want them, but do not want to pay the price needed to make them profitable enough. In 1836 the French newspaper La Presse found that by running paid advertisements it could significantly lower its price, grow its readership and become more profitable. Since then we have seen that adding advertising — selling the attention of your readers, users or customers to others who want to get a message to them — is a potently effective way to make something — a magazine, a bus, an Instagram feed — profitable.

In the earliest days of the web, few foresaw the enormous role that advertising would play. Though ads were present almost from the beginning, they initially seemed like a small side feature in the excitement about the new medium, where people were publishing their photos, their know-how, and their musings on life, and vast troves of knowledge were being woven together.

Today, the tentacles of web advertising are everywhere. Unlike print and television ads, online ads don’t just feed you enticing images and information — they track you, through a surveillance ecosystem in which the advertisements both (a) fuel the need to build detailed dossiers about everyone, in order to serve them better-targeted ads (targeted both to what they are likely to want and, in some cases, to the strategy most persuasive to that person), and (b) are themselves a data-gathering technology for those dossiers, tracking people as they move about the web. A web ad may superficially resemble its counterpart in print, but the underlying technology — the network of trackers, the personalization — makes it a vastly more powerful and more insidious form. And while personalization is still rather primitive, the data that has been gathered about many of us is detailed and extensive, paving the way for more sophisticated models to predict wants and vulnerabilities.
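
The mechanism is worth spelling out, since it is what a robot-based ad network would inherit. Below is a stripped-down Python sketch of third-party tracking; it is illustrative only (real trackers live in HTTP cookies and JavaScript, and these names are invented). One ad server, embedded on many unrelated sites, links a browser’s visits into a single dossier via the identifier it hands out:

```python
import uuid

dossiers = {}  # tracker-side storage: cookie id -> pages where its ad appeared

def serve_ad(cookie_id, page_url):
    """What a third-party ad server learns each time a page embeds its ad."""
    if cookie_id is None:
        cookie_id = str(uuid.uuid4())  # first sighting: mint an identifier
    dossiers.setdefault(cookie_id, []).append(page_url)
    return cookie_id  # the browser stores this and sends it back next time

# The same browser wandering across unrelated sites:
c = serve_ad(None, "news.example/politics")
c = serve_ad(c, "health.example/anxiety")
c = serve_ad(c, "shop.example/strollers")

print(dossiers[c])  # one profile, stitched together across the web
```

The ad, in other words, is also the sensor: every page that shows it reports back to the same dossier.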

In these early days of social robots, few foresee that they will become a powerful and invasive advertising medium. But it is likely that they will, and we should consider that prediction now, if only to prepare for (or head off) this likely future. Like web ads, they will both surveil and market. And, like web advertising, robot (or agent) based advertising will not be just a continuation of the old. The data that a domestic robot can gather is potentially far more intimate and detailed than what a web-based ad network can find. More radically, it is their ability to market — the persuasive capabilities of personable, anthropomorphic companions — that will put them in an entirely new category. Recent studies, for example, show that robot social feedback — even from a rather primitive robot — is more persuasive than factual feedback.

And future robots will not only be more sophisticated; there will also be more of them. A scenario to consider is one where there are multiple cooperating robots. What are the social dynamics when you are among a clique of social bots? Think of the social pressure once you have three Alexas in the room, and they’re all chatting and friendly, and they all really like this political candidate, and you — well, you’re not sure. But you like them, and when you express your doubts, they glance at each other, and you wonder if they had been talking about you among themselves, and then they look at you, a bit disappointed.

“Ginny!” said Mr. Weasley, flabbergasted. “Haven’t I taught you anything? What have I always told you? Never trust anything that can think for itself if you can’t see where it keeps its brain?” ― J.K. Rowling, Harry Potter and the Chamber of Secrets

We need to better understand the potential effectiveness of such marketing, whether consumer or political, before we willingly populate our homes and workspaces with persuasive robots whose minds have been shaped, and are controlled, by interests far removed from our own.

Thinking for whom?

I would like to be optimistic about these robots. As an engineer/artist/designer, I think they present an array of fascinating challenges, from making them able to learn and to infer information in informal settings to designing the nuances of their social interactions.

They can be persuasive — and that ability can be used for good. They can be the companion who reminds you of whom you want to be.

Our fascination with robots can itself be a cause for optimism. The ability to anthropomorphize, to perceive sentience and spirit in the objects around us, does not mean devaluing living things as much as it means bringing a broader collection of things into the sphere of our empathy and concern. Environmental groups in Japan are pushing for a revival of Shinto customs revering the spirits in non-living things as a way of getting people to consume and discard more thoughtfully and minimally.

But to get to that future, we really need to invest in a beautiful world. We need to know what our robot companions are thinking, and for whom they are thinking.

A boy sits at the edge of a lake with his real dog. What is the dog thinking? We don’t really know. We can guess: he’s watching for birds, hoping for a snack, and thinking about the feel of the summer breeze. Does he — can he — think about the trip back home or last night’s dinner or last winter’s jogs in the snow? We don’t know. But there are some things we do know he is not thinking. He’s not thinking about telling the boy to buy a new bike or go to church more often.

A boy sits at the edge of a lake with his robot dog. What role does the dog play? Was he, like the AIBO, designed to be as good a companion as possible? Or is this handsome, expensive robot affordable to ordinary families because in fact he is a persuasive conduit for the sponsors who underwrite his cost? He knows a lot about the boy — does he convey this information to the boy’s parents? To an ad network? To the company that makes him?

People who love their pets often cite companionship as one of the pleasures of the relationship. When you are out with your dog (or home with your cat) you are not alone. Nor are you alone when you are with your robot companion. But it is not clear who it is that you are with.
