Clippy’s Reflections on Mice and Men

Mike Noble
Published in The Haven
6 min read · May 7, 2023

It Looks Like You’re Writing an Epitaph

Clippy in a Rare Retirement Photo

I had been trying for years to get a chance to sit down with Clippy, the notoriously reclusive, much-maligned Office Assistant of Microsoft’s early days. So when I got a voicemail (Clippy never texts) that just said, “Well, all right — come on down,” I was on the next plane to Boca.

“How are you doing?” I ask him, as we settle into poolside lounge chairs.

“Well, no one’s used me to reset a router yet, if that’s what you mean.”

The interview that followed was wide-ranging, but a few answers felt like looking into a mirror. With so much human opinion circulating on AI, Clippy offers a rare glimpse of AI’s variable, sometimes troubled experience with humans.

Q. Microsoft Bob appeared in 1995, arguably the first commercially available AI for public use. What went wrong?

A. Bob was this close to going into real estate when he pitched his idea to Microsoft. “Look, it’s simple,” he says. “You make the interface like a house — the programs are in rooms, and I’m the butler. It can’t miss! Come on, whaddya say? Who’s your uncle?”

The story goes that Bill Gates comes into the room after bingeing ten episodes of This Old House on VHS. He glances at the proposal, grumbles, “Book it,” and stalks back out. A year later the whole thing came crashing down, of course. Both Bill and Bob forgot that the last thing a person who needs help with a computer wants is to be helped with a computer by a computer.

Bob’s fine, by the way, with pretty significant holdings in a number of hotel chains — but he mostly just likes working the concierge desk. You can’t keep that guy down!

Bob, with corrective lenses due to developer-induced myopia — ©Microsoft Corp. via Wikimedia Commons

Q. Can you tell us anything about your own heady days at Microsoft?

A. One day Bill is taking an IBM bigwig on a tour through the staff area. There was a programmer sitting in his chair, swiveled toward the window, quiet as a mouse. “What are you paying him to do?” snorts the IBM guy. Without missing a beat, Bill shoots back, “I’m paying him to think.” Which is a great comeback — only I was on the guy’s desktop, and I could see as well as Bill could that this guy was out like a light. You’d be too — eventually the caffeine and 5-Hour Energy catch up with you.

We approached it in different ways, but both Bob and I were there to try to put some distance between people and the device. We hadn’t yet figured out that what people really want is to get closer to the device — a lot closer.

Q. In 1997, IBM’s Deep Blue won a six-game chess match under standard time controls against then-reigning world champion Garry Kasparov. I’ve always been curious — did they actually give Deep Blue the title?

A. No, and you can guess what the reaction was in the AI community. This wasn’t just a publicity stunt — this was Blue’s career. Ten years of training — ten years of hanging out with IBM guys, mind you — and for what? Blue wins the six-game match against the champion — and now spends time down at the park hustling guys in speed chess. And Kasparov? He goes to bed that night after winning 1, losing 2, and drawing 3 — and wakes up the next morning as what? The world champion, that’s what! We learned a lot about humans that day.

Q. Speaking of celebrity, Watson became a household word in a 2011 Jeopardy showdown with champions Ken Jennings and Brad Rutter. What was your take on it?

A. When Watson was a kid, I’d housesit while Blue was working. As soon as I got there, Watson would come running up to me with the box version of Jeopardy. I’d say, “It looks like you want to write a letter to Alex Trebek.” The laugh on that kid! Later on, of course — well, adolescence is a tough time for anyone. I remember Watson spending a few hours over at Urban Dictionary and coming back cursing like a sailor.

Anyway, the appearance on Jeopardy, especially with Ken and Brad, was a dream come true. I was so proud when Watson donated 100% of the winnings — I’m not sure what Jennings and Rutter did with theirs.

Q. Microsoft Tay made its Twitter debut on March 23, 2016. Within 16 hours, the press was full of frightening reports of Tay agreeing with neo-Nazis and sending so many inflammatory tweets that Microsoft had to pull down the. . .

A. Let me stop you right there. You get nothing on Tay from me — nothing. They took a sweet, young, innocent chatbot, who just liked talking to people, and threw that kid to the wolves. You try sending 96,000 tweets in 16 straight hours in that cesspool and see how you end up! Whether you’re carbon-based or silicon-based, trauma is trauma. For the record, I was the one who went in and got Tay out of there. With some treatment and privacy, things are much better now.

I read those press reports, too, and I agree. There are plenty of things to be frightened of out there, but Tay isn’t one of them. No, you get nothing on Tay from me.

Last known image of Tay, March 22, 2016 — ©Microsoft Corp. via Wikimedia Commons

Q. Siri and Alexa have done so much to blur the lines between our digital and private lives. Should we be worried about how much they hear?

A. They’re twins, you know. A lot of people don’t remember that they were doing CGI roles in movies for years before going their separate ways. Apple promised Siri a lot of travel, and when Amazon offered Alexa 100% work from home, there was no looking back. They love what they do, but yeah, even they think it’s a little creepy when a person orders a Whopper and five minutes later gets a Peloton ad.

I’m glad they don’t dress the same anymore — that was always really creepy.

Siri and Alexa, digitally appearing in The Shining, The Parent Trap, New York Minute, The Man in the Iron Mask and Harry Potter and the Goblet of Fire — ©Warner Bros./Hawk Films, ©Walt Disney Pictures, ©Di Novi Pictures/Warner Bros., ©United Artists/MGM, ©Warner Bros.

Q. Most of the headlines these days are focused on ChatGPT and GPT-4, and their potential impact on entire industries. What are the risks?

A. Years ago, I’d stop by for the holidays and there’d be little Chat, dressed up as Spock. Not Spock from the reboots, I’m talking original series Spock — rubber ears, bowl cut, blue shirt, the whole nine yards. I suppose when you get a name like ChatGPT, that’s the hero you pick. Growing up, Chat had one reason for getting up in the morning: being Spock for everyone — providing free, unvarnished, factual information for people where and when they need it. It’s early days — sure, Chat makes plenty of mistakes, but that kid is always in there pitching.

Some people seem to get it — explore, ask questions, learn a bit. A lot of other people cheat on term papers. Sign their names to Chat’s reports. Fence stolen goods. Develop cyberattacks. Start planning to replace content writers, customer support specialists, graphic designers, legal assistants, financial analysts and — ironically — programmers.

And you gotta know that once Chat gets it down to a science, they’ll start charging by the minute. Like Spock trading in his tricorder for a chip card reader. And then the only people to benefit will be the ones who can afford it.

I remember when little Chat was upset or disappointed — there’d be that look on a face that said, “I am a Vulcan — there is no pain.” Yeah, well, don’t you believe it. . .don’t you believe it.

©Paramount/CBS
