What kind of childhood is Big Tech selling — and should we buy it?

Shelby Stanovsek
9 min read · Sep 4, 2018


Examining how voice assistants like Alexa, Siri, and Google are being marketed and manufactured for families and what this means for kids growing up today.

Recently, I encountered an ad for Amazon’s new Echo Dot Kids Edition. The commercial features different scenarios of kids in their rooms interacting with the virtual assistant Alexa, asking “her” questions and prompting her to tell them a story or a dinosaur joke. Immediately, it brought to mind an ad I saw in 2016 for Google Home:

The ad opens on a familiar childhood setting: a father reading a story to his daughter on the couch. Soon, the little girl interrupts mid-story to ask, “hey daddy, how big is a blue whale?” The dad responds “hmm…” and then directs the question to Google. The Google Home device answers that a blue whale typically weighs 300,000 pounds, to which the dad responds, “huh!” and continues on with the story. Turning the page to read about the animal noises, the daughter again interjects, noting, “this is when mom makes a whale noise.” Wanting to contribute a noise of his own, the father again prompts Google, asking the device what sound a blue whale makes. His daughter giggles with delight after the voice assistant plays a recording of a whale sound. The commercial then pans to a closing frame with text reading “Story time by you, help by Google.”

On the surface, both Google Home and the Amazon Echo Dot Kids Edition seem to promise a better life by enhancing stories and play:

In the Google ad, the suggestion is that the device can enhance story time by offering immediate and accurate information. In the Amazon ad, the device is purported to enhance play time by letting kids play ad-free music and prompting it for kid-friendly jokes, stories, games, and other information. We’ve seen similar ads from Apple, promoting Siri as a digital assistant that “changes how you get in touch, how you get answers” — seemingly through conversation, claiming you can “do just about anything with just your voice.” For these reasons and more, devices with anthropomorphized virtual assistants like Siri, Google, and Alexa are now owned by more than 39 million Americans according to a recent study — up 128% from the year before.

In an MIT Technology Review post about growing up with Alexa, senior editor Rachel Metz — the mother of a 4-year-old daughter — writes that she believes the device’s utility will outweigh its drawbacks. Metz explains that Alexa has taught her daughter how to interact with machines, suggesting that it “is possible that simple, routine interactions with this kind of AI will help kids learn even without much advancement in the technology or its design.” Researchers from the MIT Media Lab suggest virtual assistants can be used to enhance children’s social skills and push boundaries, and the team proposes considerations for future device designs that they believe can facilitate understanding through interactive engagement, providing novel opportunities for the technology to serve as “learning companions.”

However, what these products don’t expressly force us to consider is the full range of ways these technologies can alter childhood experiences.

Technological design and innovation is typically perceived as having a certain irreversibility; as technologist Evgeny Morozov writes, it never takes us back but only moves us toward progress as a society. He argues that when we are surrounded by such technologies, “we have little choice but to live in accordance with the seemingly universal norms of anonymous social engineers, without ever coming to question the adequacy of those norms.”

When I think back to my own memories of bedtime stories in childhood, I cherish them as fun, collaborative experiences in which my parents encouraged my siblings and me to exercise our imaginations, come up with our own answers to the (many) questions we asked along the way, and let the story veer off in whatever absurd direction we wanted to take it. Today, as a babysitter, one of the experiences that brings me the most joy is watching children tap into the unfettered, expansive creativity that story time prompts in their young minds.

That’s why when I watched the Google Home whale commercial two years ago, it gave me pause.


Morozov states that we expect technology to deliver us from the imperfections of the human condition, though history doesn’t support the idea. A major imperfection humans have sought to correct throughout history is our lack of dominion over time — something we never seem to have enough of (despite our frequent talk of finding ways to kill it or pass it). As David Levy points out, because of technological innovations, “we are all now expected to complete more tasks in smaller amounts of time,” noting that while tech may indeed save us time searching for and collecting potentially relevant information online, this is truly misleading, as the technology “cannot clear the space and time needed to absorb and reflect on what has been collected.”

This is part of what we see in the Google Home ad: while the dad is able to accurately and efficiently answer his daughter’s question of how big a blue whale is by referring to Google, she looks befuddled as he continues on with the story. There is no opportunity for her to conceive of just how big 300,000 pounds is. With Google cutting straight to the point, the opportunity for imagination is thwarted. As Cassell suggests, most children do not learn simply by receiving facts; they learn by being challenged by other children, parents, and teachers.

I remember the wonder instilled in me when my first grade teacher sought to articulate the massiveness of the blue whale by explaining that it was the size of THREE SCHOOL BUSES lined up back to back. Because she expressed it with concepts from my everyday experience, I was better able to imagine how big a blue whale actually was than I would have been had she told me it weighed 300,000 pounds.

In his book Present Shock: When Everything Happens Now, Douglas Rushkoff writes that a flaw of Vannevar Bush’s early conception of the Memex device was that Bush thought it would free up space in our brains to devote more attention to solving problems, while in actuality it places us “in danger of squandering this cognitive surplus on the trivial pursuit of the immediately relevant over any continuance of the innovation that got us to that point.” As Nicholas Carr describes in assessing how two decades online have affected his thinking:

“Whether I’m online or not, my mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in a sea of words. Now I zip along the surface like a guy on a Jet Ski.”

The way Carr describes his info-seeking style, and the way Google and Amazon and Apple depict it as the ideal in their voice assistant ads, raise serious questions about the time and space these technologies grant us — as Levy contends — to think, pause, question and reflect on the information that is continuously presented and offered up to us. It leads us to question the value of this hyperattentive style of information gathering, to borrow N. Katherine Hayles’ term.

By considering the way tech giants advertise these products, we can begin to address the seemingly universal norms being thrust upon us.

Earlier this month, Sherry Turkle published an op-ed in the New York Times about the future of sociable robots, raising questions about how technologies like Siri — which she describes as a “conversational object presented as an empathy machine” that can understand people — force us to question our human values and, indeed, what it means to be human. Some of the responses to Turkle’s op-ed on Twitter were quick to flag her warning as premature “doom and gloom.”

However, manufacturers and marketers of these technologies, Turkle writes, “encourage children to develop an emotional tie that is sure to lead to an empathetic dead end.” Metz contends that while her daughter knows Alexa is a robot, she nonetheless believes Alexa has the capacity to feel happy or sad — aligning with a recent study which found that children between the ages of 3 and 10 who interact with these machines tend to anthropomorphize the devices, generally finding them trustworthy and friendly. Unlike humans with actual life experiences, these machines are designed to perform empathy, Turkle argues. She worries that in replacing conversations about human values with “technological ideologies of post-human values” we start down the path of forgetting what it means to be human.

While the article was challenged for critiquing “technologies that doesn’t even exist yet,” within the very article Turkle writes that forgetting what it means to be human happens before our robotic companions are in place: “it begins when we even think of putting one in place… rebuild[ing] ourselves as people ready to be their companions.” Noting that 22% of US parents consider the virtual assistant to be another member of the family according to a 2017 Adweek study, coupled with findings that many children believe these machines have the capacity for emotion, we can see that we are indeed getting comfortable with the notion of being their companions.

In certain populations — for instance, among the aging — these robotic companions are already very much in place, providing assistance for those dealing with issues related to dementia and loneliness. The question we should be asking is: in what contexts should we be supporting these tools as “good enough,” or perhaps “better than,” human companionship, which offers the possibility of empathic connection derived from human experience?

It is important to note the distinction between pursuing technology that addresses existing ailments — like artificial brain power for people with Alzheimer’s — and using it for artificial intimacy, to smooth over the inconveniences and vulnerabilities of human connection that we believe robotic companions and voice assistants will “make easier.” This is where it is important to bring in a larger consideration of our human values.

For instance, the artificial intimacy of these voice assistants might be a great resource for children who don’t have the option of playtime or story time with an attentive parent — a parent who may be overworked, overtired, or understandably too exhausted or unavailable to read to their children — a very real issue that studies have linked to children’s academic success (though concerns have recently been raised about the study’s racial bias). In this case, the Amazon Echo Dot might very well be a great option to increase the number of words a child is exposed to, which is found to be critical to brain development in the first 5 years of life.

However, when we witness a little girl alone in her room in the commercial asking the device to tell her a story, we should ask whether this is being marketed as such a fallback, or as the ideal for the sleek, innovative, and efficient future that we want.

Recently, parents’ concerns about how politely their children interact with the devices, along with research findings that children are unlikely to use phrases like “please” and “thank you” when making demands of voice assistants, have led companies to reform the UI/UX design by incorporating a politeness feature that encourages the use of manners before the device performs tasks. This is a great instance in which the use of technology calls for a consideration of our human values (in this case, manners), which can then be assessed and better incorporated into the technology’s design.

As a result of concerns about how children were talking to the devices, Google Home and the Amazon Echo Dot have been redesigned in their kids’ versions: the Amazon Echo Dot praises kids for “asking so nicely,” while Google Home encourages them to “say the magic word” before completing the requested task.

Researcher Radesky suggests that it is not too early to consider the long-term impact that raising kids with virtual assistants may have. While acknowledging the tools’ awesome capacity for providing solutions in certain contexts, she wants “parents to consider how that might come to displace some of the experiences they enjoy sharing with kids.”

As mentioned in another article I wrote about the complicated tango of parenting kids in the age of mobile media, it can be hard to pull away from the attraction of being connected, productive, efficient, and entertained in the way our devices undoubtedly provide for. But as an interview with a father featured in Sherry Turkle’s book Reclaiming Conversation makes clear, we need to put the tech aside, for a moment, to assess — as a society:

What are the values of childhood as we know it that we want to maintain, how can these values be preserved through child-rearing, and how do we go about designing technology that aligns with them?


Shelby Stanovsek

Media + tech ethics. Trying to make some sense of things to carve out the sustainable digital future we want.