5 Lessons from Microsoft’s Clippy

What the rising AI avatar industry can learn from the brief career of a Microsoft Office assistant

Nahua Kang
twentybn
5 min read · May 2, 2019


This blog post is featured in the fifth issue of Embodied AI, the bi-weekly digest on the latest news, tech, and trends for AI avatars, virtual beings, and digital humans. 👉 Subscribe and get ahead on AI avatar news!

It might have taken Jesus only four days to resurrect Lazarus, but it took Microsoft almost a decade to resurrect Clippy, the anthropomorphic digital assistant introduced in 1996 to help Microsoft Office users work more effectively. During his brief yet controversial career in the Office software, Clippy earned several infamous titles, including most annoying software feature and one of the worst UIs ever deployed to the mass public. But Clippy’s comeback, lasting only one day, was promptly killed by Microsoft’s brand police despite his new, limited role as a sticker pack in the Teams chat software.

While Clippy’s employment prospects are all but doomed, his pop culture influence has seen a comeback. Not only has he starred in memes, but Microsoft cloud developer advocate Chloe Condon recently featured Clippy on her business card, prompting former Microsoft Office executive Steven Sinofsky to tweet: “I suppose if you live long enough, others will wear your failures as a badge of honor.” Last weekend, Andreessen Horowitz’s Frank Chen interviewed Sinofsky about the Clippy story, digging into the history behind this annoying yet iconic character.

Steven Sinofsky comments on Clippy’s resurgence in pop culture

5 lessons from Clippy’s failure

Clippy: The unauthorized biography with a16z

Imagine learning, upon purchasing your new Amazon Echo, that you have to pay an additional $200 to learn how to use it. Sounds ridiculous, right?

But that’s how it was when PCs became widely available in the 1990s. Like today’s virtual assistants with their numerous skills, computers shipped with thousands of features and thick manuals. To solve this feature-discovery problem, Microsoft decided to create virtual assistants like Clippy that would pop up with helpful suggestions once they understood a user’s intent.

Despite offering concrete utility, Clippy came and went quickly: introduced in 1996, he was turned off by default in 2001 and permanently removed by Microsoft in 2007. As the AI avatar industry continues to evolve, this predecessor of today’s virtual assistants offers valuable lessons for avoiding pitfalls in product development and user testing. Hence, we’ve gathered 5 key lessons from Clippy’s failure for you.

Clippy: “Hey Bill. We’re gonna be BFFs, right?” Bill: “Right…” (Credit: Reuters)

1. Optimize for repeated use, not just first-time use

First, according to Microsoft’s Chris Pratley, Clippy suffered greatly from the “optimization for first-time use” problem. The first time you saw the cordial suggestion, “It looks like you’re writing a letter,” you might have been pleasantly surprised by Clippy’s intelligence. But by the 1,000,000th time, you probably found it hard to tolerate Clippy’s incessant, repetitive prompts every time you typed “Dear…” in your Word doc. It’s not surprising that many users chose to turn Clippy off once his novelty faded.
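One way to design for repeated use is to throttle a suggestion once the user has dismissed it a few times. The sketch below is purely illustrative — the class, method names, and threshold are our own invention, not Clippy’s actual logic:

```python
from collections import defaultdict

class SuggestionThrottle:
    """Illustrative sketch: hide a suggestion after the user has
    dismissed it too many times, so the assistant is optimized for
    repeated use rather than first-time delight."""

    def __init__(self, max_ignores=3):
        self.max_ignores = max_ignores
        self.ignored = defaultdict(int)  # suggestion id -> times dismissed

    def should_show(self, suggestion_id):
        # Stop offering a tip the user has repeatedly waved away.
        return self.ignored[suggestion_id] < self.max_ignores

    def record_dismissal(self, suggestion_id):
        self.ignored[suggestion_id] += 1

throttle = SuggestionThrottle(max_ignores=3)
for _ in range(5):
    if throttle.should_show("letter_help"):
        throttle.record_dismissal("letter_help")  # user closes the tip
# After three dismissals, the tip stays hidden for good.
print(throttle.should_show("letter_help"))  # → False
```

A real assistant might decay the counter over time or re-surface the tip in a new context, but even this crude cap would have spared users the 1,000,000th letter-writing prompt.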

2. Build diversity into product design and development

Second, Clippy was created in a male-dominated design process that lacked diversity. The original idea was to create a fun and non-intrusive helper for the Office interface, yet Clippy and his fellow digital helpers turned out to be especially unpopular among women. In an interview with the New Yorker, former Microsoft executive Roz Ho recalled: “Most of the women thought the characters were too male and that they were leering at them.”

3. Seek and listen to your real customers’ feedback

Crucially, feedback from these focus groups was not taken seriously during the product development phase. The male-dominated engineering team seemingly couldn’t understand why the female reviewers found the characters leering or overly male. Ultimately, 10 of the 12 assistants shipped alongside Clippy were male characters. According to Sinofsky’s interview, even Bill Gates made fun of the assistant’s annoying nature when he first heard of the idea, suggesting he wanted to kill “the clown.” Furthermore, Sinofsky revealed that many focus-group reviewers were tech enthusiasts rather than the “regular folks” who made up a larger proportion of Office users.

4. Avoid being excessively attached to your creation

Along the journey to create an assistant that users could connect with, Clippy’s creators became emotionally attached to their own product. Unwilling to accept criticism, “they were willing to throw out the focus-group-provided data” because it defied their expectations. James Fallows also gently hinted that Clippy was a holdover from the unsuccessful Microsoft Bob project, which Melinda Gates led. That may not have been a decisive factor, but it could explain why employees were hesitant to offer sincere opinions about poor Clippy.

5. Be aware of the adjacent possible

Finally, in our estimation, the Clippy product lay outside the adjacent possible and was way ahead of its time. According to Sinofsky, the Clippy project grew out of studies on social interaction and on intent classification with Bayes’ theorem and NLP. But he also revealed another critical problem that doomed Clippy from the beginning: when Clippy launched, contemporary computers had only 2MB of RAM, 20MB of hard-drive space, and a VGA screen that could fit just two paragraphs of a Microsoft Word document. Melinda Gates also acknowledged that Bob needed a more powerful computer. But at that time, a gigabyte of memory was not a given and GPUs did not exist. For any intelligent virtual assistant today, such engineering constraints would amount to digital starvation.

Clippy retirement party in San Francisco (Credit: Steven Sinofsky)

From Clippy to AI Avatars

The rise of AI avatars today bears resemblance to the early days of personal computers. While deep learning has made many previously unimaginable ideas possible, building likeable AI characters still poses the same fundamental challenges. So Clippy’s failure imparts valuable lessons that can help us — the creators of the new generation of AI-powered assistants — avoid the mistakes that Microsoft made all those years ago.

Instead of chasing after the fantastic yet impossible, we should build products within the adjacent possible. Instead of offering only a novelty factor, our AI products must provide sustainable utility. We must also address gender imbalance and other social issues in the tech world, because including women and minorities in the design and development processes will make AI products more inclusive, fairer, and better. Finally, we should avoid becoming overly attached to our own creations and listen to feedback from our users.

Resurrection has its charms. Lazarus became a saint. Clippy became a weird, iconic, and cult-like figure. But most products won’t have the luxury of the retirement party that Clippy enjoyed, not to mention a resurrection. Therefore, it’s not Clippy’s rise in pop culture but rather his failure that we should pay attention to.
