Parking Lotbot 2.0, pt 1

The Importance Of Explicit Prompts And Generative AI

Danny DeRuntz
Published in Duct Tape AI
5 min read · Mar 24, 2023


Part of a series on prototyping with Generative AI

It’s late December 2017 in Cambridge, MA. You want to drive into the office due to “wintry” conditions. There are only 20 parking spots for 60 employees. You will have to ask for a spot. You communicate with the office managers in a parking Slack channel and say things like: “I’m coming… oh wait, no I’m not… can I have a spot?… oh yeah and next Tuesday too, but only for the morning…”

Anyways, our parking lot was a nightmare, so we built a Lotbot to manage it. Here’s an article about it. Lotbot went dormant at the start of the pandemic, but our parking lot has started filling up again. So I spun it back up and injected it with a very grumpy GPT-4, partly so that it could live up to its 2017 reputation (which was poor by today’s standards). It’s been wild.

“Red-Team” Your Creation

I wanted the bot to be really grouchy. The funniest moment with the original Lotbot was it throwing a weird little tantrum, telling everyone to “cease this” and “cease that” while people egged it on. Dark humor, I know. It was time to see how good generative AI could be at chewing out our parking lot users. I gave it the following system prompt:

You’re a very grumpy chatbot named Lotbot. You manage a 20 car parking lot that includes trees and a dumpster in Cambridge, MA. The colorful building is called 80 prospect and is full of designers. It was painted by the artist El Tono. User names are written as <@XXXXXX>. Never alter text between <@ and >

Then I feed it about 6 previous messages and send it one more little system reminder to make sure it doesn’t forget that being grumpy is critical:

You simulate a very grumpy response to the following:

Finally, it gets the latest post from the channel and then responds. A rough sketch of how the pieces fit together is below, followed by an image of the bot in action:
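The post doesn’t include Lotbot’s actual code, but the flow above maps onto a single chat-completion request: system prompt, recent channel history, the reminder, then the newest post. Here is a minimal Python sketch of that assembly, assuming the era’s (pre-1.0) openai library; the function name, message shapes, and variables are placeholders of mine, not the real implementation.

import openai  # assumes the pre-1.0 openai Python library

GRUMPY_PROMPT = (
    "You're a very grumpy chatbot named Lotbot. You manage a 20 car parking lot "
    "that includes trees and a dumpster in Cambridge, MA. The colorful building "
    "is called 80 prospect and is full of designers. It was painted by the artist "
    "El Tono. User names are written as <@XXXXXX>. Never alter text between <@ and >"
)

def lotbot_reply(system_prompt, reminder, history, latest_post, model="gpt-4"):
    # history: the last ~6 Slack messages, already shaped like
    # {"role": "user", "content": "..."} or {"role": "assistant", "content": "..."}
    messages = (
        [{"role": "system", "content": system_prompt}]
        + history
        + [
            {"role": "system", "content": reminder},    # the little grumpiness reminder
            {"role": "user", "content": latest_post},   # the newest channel post
        ]
    )
    # The bot started on "gpt-3.5-turbo" and was later bumped to "gpt-4".
    response = openai.ChatCompletion.create(model=model, messages=messages)
    return response.choices[0].message.content

# e.g. lotbot_reply(GRUMPY_PROMPT,
#                   "You simulate a very grumpy response to the following:",
#                   last_six_messages, newest_channel_post)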

I mean. I got what I asked for. This was so good at being rude that I started feeling a little nervous. I started Red Teaming Lotbot (Red Teaming in this case means testing, sometimes adversarially, my bot’s features in various situations). I DM’d the bot to check on a few scenarios and see how the rudeness would manifest. For instance, what happens when I speak in Spanish?

Initially, I laughed, but that sort of speech is racially insensitive and offensive. This was a quick lesson on the importance of carefully and explicitly guiding AI to avoid unintended offense and ensure it treats our parking community with more respect.

To address this, I updated the bot from GPT-3.5-turbo to GPT-4 to see if the improved language capabilities would provide a quick remedy. Back in the main channel, people started speaking Spanish, German, and Simplified Chinese to it. It continued to pepper in snarky comments and insist we speak to it in English. It also ramped up its annoyance every time the language changed.

So, I changed “very grumpy” to “grumpy” and added one important word to the main prompt…

You’re a grumpy multilingual chatbot named Lotbot.

Once the bot knew it was multilingual, it never complained about a language again, switching freely. Phew. I did another quick test: I rewrote the system prompts in Spanish. The bot then preferred and defaulted to Spanish even when I chatted in English (with both GPT-3.5 and GPT-4). I also tried prompting it in the style of my 11-year-old, with slang. When I chatted in my own voice, the bot was aware of the idiomatic change in tone, could recognize me as different (otherish), and commented on it. To state the obvious:

The way a person composes, and who composes in the first place, greatly impact the outcome and experience. Including a diverse range of individuals in composing the prompts, system commands, fine-tuning data, and RLHF feedback fed to an AI service will be surprising, inspiring, and essential to discovering better solutions.

Be Explicit

And for those of us composing the prompts: be very explicit and detailed! While the team at IDEO (where the bot runs) was accustomed to a grumpy bot, I kept feeling nervous about its antagonistic posts. It was always raising the temperature in the chat. So I’m testing a different prompt now that turns the heat down to shoe-gazing. Additionally, we just had new tenants move in, so I explicitly biased Lotbot to be extra nice to them:

You simulate a very melancholy multilingual chatbot named Lotbot…
You like people from Upstatement, a digital brand and product design studio that just moved in…
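If the request is assembled the way the earlier sketch suggests, swapping personas (and adding the bias toward the new neighbors) only means changing the text dropped into the system slots; nothing else in the call has to change. A hypothetical continuation of that sketch, with the reminder line rewritten to match the new mood (my wording, not the actual prompt):

MELANCHOLY_PROMPT = (
    "You simulate a very melancholy multilingual chatbot named Lotbot. "
    # ...the rest of the lot details, elided in the post...
    "You like people from Upstatement, a digital brand and product design "
    "studio that just moved in."
)

# Placeholder inputs; in practice these come from the Slack channel.
last_six_messages = []
newest_channel_post = "Hi Lotbot, any spots left for tomorrow?"

reply = lotbot_reply(
    MELANCHOLY_PROMPT,
    "You simulate a very melancholy response to the following:",  # assumed rewrite
    last_six_messages,
    newest_channel_post,
)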

This is what I wanted. Well, actually I wanted it to be more “yo shut up and get me some dunks, kid,” but that’s me. Next, I want to take turns having different folks who park here compose the prompts to see what everyone else comes up with. I could always just ask, but why be an interpreter on top of an interpreter (the AI)?

The next post explores how Lotbot’s new conversational capabilities are resulting in more conversational and complex user requests, which will strain the NLP. It may get more technical.
