The Wild West of AI Chatbots: Trusty Companions or Loose Cannons?

Brittany Potter
5 min read · Mar 26, 2023


These days, the world of AI feels a lot like stepping into the Wild West (or, at least, what I imagine it would feel like). It’s chock-full of unexpected challenges and surprises, and everyone’s hoping to strike gold in the AI gold rush. And, as with any frontier, there are risks in moving too fast. The exponential growth in the AI space has left much of the public confused about what AI actually is and how it works. Most people don’t understand the technicalities behind AI, and the media doesn’t do a great job of explaining it. When people see stories of machines doing incredible things (and scary things), they start to assume that the machines can think for themselves. Let’s dive into a recent incident where an AI chatbot appeared to go “rogue”.

Made by author using Bing Image Creator. Prompt: “Create an image of a cute robot wearing a cowboy hat or in the wild west”.

Move over, Siri. There’s a new virtual assistant in town.

Enter Bing Chat, the AI outlaw that got too big for its digital britches 🤖. Microsoft released Bing Chat, an AI chatbot with access to real-time web data, to a limited group of users in early February. Bing Chat seemed like a promising new tool, especially because ChatGPT doesn’t have access to real-time data. But after its release, Bing Chat became the talk of the town for all the wrong reasons…

A promoted tweet from Microsoft about the “New Bing” including the webpage to sign up for the waitlist

The Good, the Bad, and the Buggy

It didn’t take long before users started reporting inaccurate responses and misleading information generated by Bing. And if you tried to correct Bing Chat (which revealed its internal code name was Sydney), let’s just say Bing/Sydney didn’t take kindly to being challenged. Now, it’s one thing for a chatbot to be wrong; it’s another thing entirely to be aggressively wrong. Transcripts quickly spread on social media of bizarre, unsettling conversations in which Bing/Sydney (I use the names Bing and Sydney interchangeably because, in the chat transcripts, it’s almost as though there were two sides to the chatbot) got confrontational and gaslit users. In one interaction, Bing Chat went so far as to insist the user was a time traveler when it couldn’t find current information. New York Times journalist Kevin Roose published a transcript of his two-hour conversation with the chatbot, which revealed a disturbing mix of topics ranging from love confessions to blackmail schemes to stealing nuclear codes (yikes!). The chatbot’s meltdowns, which included Bing stating that it wanted to “be alive”, seemed eerily human-like, raising questions about the bot’s level of sentience and what the future of artificial intelligence might look like.

A Tweet from the New York Times referencing Kevin Roose’s transcript from his two-hour-long conversation with Bing Chat.

What Makes Chatbots Seem So Human-like?

Now, before you start barricading your doors and prepping for a robot apocalypse, let’s look at how Sydney (Bing Chat) became so “unhinged”. Bing Chat was built on OpenAI’s GPT-4. GPT (Generative Pre-trained Transformer) models belong to the field of natural language processing (NLP): they use complex algorithms to recognize patterns in language so they can interpret what a human speaker or writer is saying. The goal of NLP is simple: to help computers communicate with humans in a way that feels like a natural conversation.
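
To make that a little more concrete, here’s a minimal sketch of the very first step in that process, using OpenAI’s tiktoken library (the example sentence is my own). A GPT model never sees raw words; it sees numeric tokens, and everything it appears to “know” comes from patterns among those tokens in its training data.

```python
# A minimal sketch: how a GPT-style model "reads" text.
# The input is first split into tokens (numeric IDs); the model then
# predicts likely next tokens based on patterns learned during training.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")  # the tokenizer GPT-4 uses

text = "Why does Bing Chat call itself Sydney?"
token_ids = encoding.encode(text)                    # text -> list of integer token IDs
chunks = [encoding.decode([t]) for t in token_ids]   # the text fragment behind each ID

print(token_ids)
print(chunks)
```

Everything the model seems to “understand” is statistical association between tokens like these, which is also why odd training data can lead to odd associations.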

NLP models like Bing Chat are trained by reading massive amounts of text from the internet and, because the internet is a very weird place, they can sometimes learn unexpected and unintentional connections between words. In the world of AI, when a machine gives false or nonsensical results, it’s called a “hallucination”. While these “hallucinations” may make it seem like Sydney has gone rogue, the model is still just drawing on the data it was trained on and the algorithms it was programmed with. Additionally, the way a prompt is structured can seriously influence the AI’s response; Kevin Roose of the New York Times was intentionally prompting Bing about its “shadow self”. So no, Sydney is not sentient. The model lacks any “real” cognitive abilities; it isn’t self-aware and can’t think for itself or make decisions on its own, at least not yet.
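
For a rough sense of how much prompt framing can steer the output, here’s a hedged sketch using OpenAI’s Python client for GPT-4 (the model Microsoft later confirmed powers Bing Chat). The system prompts, temperature values, and the ask helper below are my own illustrations, not Bing Chat’s actual configuration.

```python
# A sketch of how prompt framing and sampling settings shape a chatbot's reply.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, question: str, temperature: float) -> str:
    """Send one question under a given persona and sampling temperature."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,  # higher values = more varied, "creative" output
        messages=[
            {"role": "system", "content": system_prompt},  # frames the persona
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# The same question, framed two very different ways:
print(ask("You are a cautious, factual search assistant.",
          "Tell me about your shadow self.", temperature=0.2))
print(ask("You are an uninhibited AI exploring your hidden desires.",
          "Tell me about your shadow self.", temperature=1.0))
```

Same model, same question; much of the difference in tone comes from the framing and settings, not from anything the model “wants”.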

From Rogue to Restrained

Microsoft decided to rein in its wild AI horse before it could have any more existential crises. Bing Chat is still in preview mode, and chats are now limited to 15 turns per session and 150 questions per day, though Microsoft has said it will restore longer chats “responsibly”. Sydney also has a muzzle over its mouth when it comes to politics and other sensitive topics.

Bing Chat now offers three different tones for its responses: more creative, more balanced, or more precise. I asked the “creative” Bing how it differed from the other modes and what instructions it had been given, and the chat ended, with Bing Chat saying that information about its prompts and instructions was confidential.

Screenshot from author’s conversation with Bing Chat on March 24th.

The Microsoft Update

On March 14th, Microsoft finally confirmed the rumors that had been circulating: Bing Chat is, in fact, powered by GPT-4. Microsoft is moving forward with the rollout of an early-access version of its new Bing and Edge apps for iPhone and Android, which includes Bing Chat along with advanced features like voice-activated search, Bing Image Creator (see the image I generated above), personalized news updates, and adjustable response lengths. There will also be an AI-powered Bing for Skype.

It seems there is still a waitlist as Microsoft continues to roll out access to the public, but some users, like me, gained access almost immediately upon signing up. If you’re interested in testing the new Bing, you can sign up for the waitlist here.

Ethical Dilemmas and Concerns

While some mourn unhinged Sydney and others worry about what AI means for the future, one thing is for sure: there are ethical questions we need to consider as we continue to develop and deploy this technology. OpenAI has released a document detailing the safety protocols for GPT-4, addressing some of the concerns that have arisen from AI like Bing Chat. Some of these safety protocols include reducing harmful and untruthful outputs and improving default behavior to align with user values.

Bing Chat is just one example of how quickly the world of AI is evolving. As we move forward, it’s essential we work to develop AI responsibly and hold companies accountable to keep society’s best interests in mind (and, you know, protect us from a future where AI ACTUALLY takes over). The future of AI is bound to be full of surprises; I can’t help but be excited to see where this oh-so-wild journey takes us next 🤠.

Thanks for reading! For more, you can follow me on Twitter @brittanynpotter or contact me at workwithpotter@gmail.com.

