Talking to Toys
We’re set for a renaissance in the toymaking industry. Advances in voice-interaction AI and small-form-factor hardware can breathe life into physical toys at a time when parents are increasingly limiting their children’s screen time.
Toymakers have an amazing opportunity to use new technologies to develop the next runaway success. The growing movement to get kids away from screens means that toys have become a savior for parents: a toy that can teach and engage a child means less time spent refereeing apps like YouTube and Snapchat. But some traditional toymakers might struggle to keep pace with technological innovation. This article explores one question: what can traditional toymakers do to remain competitive with the “smart” toys coming to market, and how can they create new ways for kids to interact with their products and make those products sticky?
As a child, I was enchanted by Inspector Gadget. While the Don Adams-voiced character lacked a sense of presence and was a notch below Maxwell Smart of Get Smart, he was a man-machine hybrid with a seemingly unlimited number of contraptions he could call on. Because many of these devices were connected to his hands and arms, he logically actuated them through voice. The wake phrase for these was “Go Go Gadget”: “Go Go Gadget Helicopter,” “Go Go Gadget Umbrella,” “Go Go Gadget Roller Skates,” and so on. For the purpose of gag comedy, these never functioned as expected.
There were other forward-thinking things about Inspector Gadget. Gadget’s niece, Penny, who often saved him from calamity, had a literal notebook computer. She sported a watch with the equivalent of FaceTime (apparently with 4G, because it didn’t require a phone to tether). There were also hands-free mics that her dog, Brain, used to speak with her (through Barkgrish). Their car was a crossover vehicle, somewhere between a Chevy Citation and a Chrysler minivan.
When I was nearly five, my mother mentioned that an Inspector Gadget doll was available and that she would try to get one for me. This wasn’t going to be a small feat, as the toy was only available in the US. We lived in Ottawa, about an hour’s drive north of Ogdensburg, New York, a sleepy town renowned for making jetbridges and for the Ogdensburg Agreement, which paved the way for US customs pre-clearance in Canada. However, stores in Ogdensburg sold American goods and had more selection, including access to mail-order toys like Inspector Gadget.
I would dream about all of the things the Inspector Gadget doll could do and assumed it was everything the character could do on the show: voice control, flashing siren lights, hovering with a single propeller, an inflatable jacket, wheels coming out of its shoes. It was going to be awesome.
I never found out what it could do. There was some problem in getting one, something my five-year-old brain didn’t fully grasp, like being on back order or sold out, that meant I’d never get it. Maybe it’s because I never actually received the toy that, to this day, I still assign it mystical “never meet your hero” status.
Sensing the World
Thinking back to the Inspector Gadget toy now that I have my own kids, I wonder how today’s kids can be enchanted by toys imbued with IoT capabilities. Sensors and actuators are cheaper and easier to implement, there’s the whole Internet thing, and processors are thousands of times more powerful and cheaper than they were when I was a child.
It’s also easier to create low-run, customized objects thanks to highly precise and cheap 3D printing. For control, the cellphone created an easier way to interface with toys, and now voice offers a means of interaction that wasn’t previously available.
Beyond the “basics” of making a toy like a doll relatable (fluffy, kind-looking, pleasant contours, etc.), what are some ways designers can build in interactions that make it seem more relatable? The first step is mimicry.
When someone’s mannerisms and level of excitation don’t match our own, it can create discord and make it more difficult to relate. We can address this by mimicking the person’s rate of speech, excitation, facial expressions, and other modes of behavior. Sometimes accents and vocabulary can be matched as well. Mimicry isn’t impersonation. Its purpose is to flavor the interaction with the user’s personality, not to trick the user into thinking they’re interacting with something it’s not.
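As a rough illustration of rate-matching, a toy could estimate the child’s words per minute from a timed transcript and nudge its own text-to-speech rate toward it. This is a minimal sketch; the function names, the 150-wpm baseline, and the clamp range are my own assumptions, not any particular SDK’s API:

```python
def words_per_minute(transcript: str, start_s: float, end_s: float) -> float:
    """Estimate the speaker's rate from a transcript with start/end timestamps,
    as returned by most speech recognizers."""
    duration_min = (end_s - start_s) / 60.0
    return len(transcript.split()) / duration_min

def prosody_ssml(reply: str, user_wpm: float, baseline_wpm: float = 150.0) -> str:
    """Wrap the reply in SSML, nudging the TTS rate toward the user's pace.
    Clamped to +/-20% so mimicry flavors the voice without parroting."""
    ratio = max(0.8, min(1.2, user_wpm / baseline_wpm))
    return f'<speak><prosody rate="{int(ratio * 100)}%">{reply}</prosody></speak>'

wpm = words_per_minute("where did the fire truck go", 0.0, 2.0)  # 6 words in 2 s
ssml = prosody_ssml("Let's go find it together!", wpm)
```

The clamp is the important design choice: the reply speeds up or slows down only modestly, which keeps the effect at “flavoring” rather than impersonation.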
Toy IoT Applications: Let’s Get Talking
The second opportunity for toy developers is to enhance the experience through multimodal interaction. The easiest way to do this is to add an Action on Google or an Alexa Skill that coordinates with the toy. For Alexa, this is more straightforward thanks to the Alexa Voice Service (AVS) Device SDK. While Google doesn’t have a similar offering, it’s still possible to build this interaction; it just requires more development work.
Imagine the lights on a firetruck turning on while Alexa plays sirens, or hide-and-seek with Google Assistant where the next step in an adventure begins once a user presses a button on the toy. Other potential offerings could be a Mr. Potato Head that speaks through Alexa when face parts are put in different spots, or a game of Operation that buzzes on an Echo if a user touches a part.
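To make the firetruck example concrete, here’s a minimal sketch of the JSON a custom Alexa skill could return to speak while signaling the toy. The outer envelope (version, outputSpeech, shouldEndSession) follows the Alexa Skills Kit response format; the directive type and payload are hypothetical stand-ins for whatever real channel (for example, Alexa Gadgets custom interfaces) the toy actually listens on:

```python
def build_siren_response(speech: str) -> dict:
    """Return an Alexa custom-skill response that speaks `speech` and
    carries a (hypothetical) payload for a companion fire-truck toy."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            # Simplified stand-in for a device directive; a real gadget
            # integration defines its own interface name and payload schema.
            "directives": [
                {"type": "Custom.FireTruck.Lights",
                 "payload": {"lights": "on", "pattern": "siren"}}
            ],
            "shouldEndSession": False,
        },
    }

resp = build_siren_response("Wee-ooo! Turning on the lights!")
```

The point is how little glue this takes: the skill already speaks, and the toy-facing payload rides along in the same response.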
Alexa and Google Assistant integration is the equivalent of augmented reality for toys. The small incremental cost of building a skill could be buoyed by new features to market the product and new types of interactions, both of which could make the toy stickier. Another parallel for Skills and Actions is Webkinz; an equivalent could easily be implemented as a Skill that lets you talk to the toy.
Another big incentive for toymakers to start developing Skills is the potential for a new revenue channel through in-skill purchases. Imagine being able to unlock capabilities of a board game or features of a figurine. Maybe there’s a story that could be told… if the user decides to upgrade their purchase? The challenge for the toymaker is that they are typically set up to earn revenue through a single channel: the sale of a physical product.
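Gating content on a purchase is mostly bookkeeping. Alexa’s in-skill products API reports each product with an `entitled` status of `ENTITLED` or `NOT_ENTITLED`; here is a sketch of the check (the story-pack names are invented):

```python
def unlocked_stories(products):
    """Return the reference names of story packs the user owns.
    `products` mirrors the shape of Alexa's in-skill products list:
    dicts with a "referenceName" and an "entitled" status string."""
    return [p["referenceName"] for p in products
            if p.get("entitled") == "ENTITLED"]

catalog = [
    {"referenceName": "rescue_mission_pack", "entitled": "ENTITLED"},
    {"referenceName": "space_adventure_pack", "entitled": "NOT_ENTITLED"},
]
owned = unlocked_stories(catalog)  # play these; upsell the rest
```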
One application that’s been demonstrated for talking with toys (rather than to toys) is messaging. Shark Tank alumnus ToyMail created a series of toys called Talkies that let parents send messages to their kids, played back through the plush toys. Similarly, kids can record and send messages to their parents, who listen on an app. Another cool feature is sending messages from Talkie to Talkie. Similar toys might augment this type of interaction by converting text to speech and vice versa, altering the voice, or even inserting sounds around the speech as the device plays a recording.
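The store-and-forward pattern behind a toy like this is simple to sketch: messages queue in the cloud until the recipient device or app polls, and a transform hook is the natural place to alter the voice or wrap sounds around the clip. Everything below, class and method names included, is illustrative rather than ToyMail’s actual design:

```python
import time
from collections import defaultdict, deque

class MessageRelay:
    """Store-and-forward relay for parent-to-toy and toy-to-toy messages."""

    def __init__(self):
        # One FIFO inbox per recipient (toy or app).
        self.inboxes = defaultdict(deque)

    def send(self, to_id: str, audio_clip: bytes, transform=None):
        """Queue a clip; `transform` could pitch-shift the voice or
        add sound effects before the recipient ever hears it."""
        if transform:
            audio_clip = transform(audio_clip)
        self.inboxes[to_id].append({"clip": audio_clip, "sent_at": time.time()})

    def poll(self, device_id: str):
        """Deliver the oldest pending message, or None if the inbox is empty."""
        box = self.inboxes[device_id]
        return box.popleft() if box else None
```

A real service would add authentication and push delivery, but the queue-plus-transform shape is the core of the feature.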
With huge advances in AI and mechatronics, another implementation of voice interaction in toys that’s ripe for disruption is home animatronics. The Teddy Ruxpin toy of the 1980s was a pioneer in bringing creepy animatronics into the home; nothing was as nightmare-inducing as a Teddy Ruxpin running low on batteries. A lot has changed since then. Stepper motors, actuators, and the electronic components used to control them have fallen in price and are easy to build into products. On the AI side, we can render synchronized lip movement much better than before, and the same goes for realistic eye movements. Text-to-speech synthesis with advanced models like WaveNet means we can get hyper-realistic voice responses. Combined, these suggest the next version of Teddy Ruxpin will be more like Ted. Maybe this will inspire a new generation of Furby as well.
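For the lip-sync piece, even a plain loudness envelope gets you surprisingly far: map each audio frame’s RMS level to a jaw-servo angle and the mouth moves with the speech. A production animatronic would use phoneme timing from the TTS engine instead; the gain and angle constants here are made up for illustration:

```python
import math

def mouth_angles(samples, frame_size=160, max_angle=45.0):
    """Map an audio clip's loudness envelope to one jaw-servo angle per
    frame. `samples` are floats in [-1, 1]; frame_size of 160 is 10 ms
    of audio at a 16 kHz sample rate."""
    angles = []
    for i in range(0, len(samples), frame_size):
        frame = samples[i:i + frame_size]
        if not frame:
            break
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        # Apply a gain, clamp to full-open, scale to the servo's range.
        angles.append(min(1.0, rms * 4.0) * max_angle)
    return angles

# A 20 ms burst of a 440 Hz tone opens the mouth; silence closes it.
tone = [0.5 * math.sin(2 * math.pi * 440 * n / 16000) for n in range(320)]
silence = [0.0] * 320
angles = mouth_angles(tone + silence)
```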
We’re ripe for a renaissance in toys. Voice technologies and AI have taken leaps over the past five years, and we’re only starting to see this spill over into hardware. Especially with the rejection of screens as a primary interface for kids, voice will be the natural interface for these devices in the years ahead.
This article was originally published through IoT For All on December 10th, 2018.