Hijacking Chatbots: Dangerous Methods Manipulating GPTs

Jan Kammerath
13 min readMar 29, 2024

Security research on GPTs and LLMs has only just begun. It’s already become a meme to force customer service chatbots to start programming. In robotics, there’s the known hijacked robot problem where robots face severe malfunction when abducted. Similar issues now arise with GPTs when they are taken out of their comfort zone and are manipulated to perform completely different tasks.

Automated hijacking of chatbots with hostile bots is a thing already

I would also like to add that I do not consider it kidnapping as neither robots nor GPTs are a person. They are a thing and hence, it’s hijacking and not kidnapping. With a vast amount of different GPTs being hastily deployed to websites and customer-facing roles, security might be about to face its worst nightmares imaginable. This article is intended to give you an insight into the first security challenges experienced with GPT deployments. The background information provided will hopefully give you an idea on how you can best protect your GPT applications.

Context Window Stretching

All LLMs have a context window, meaning they have a maximum limit of characters they can process for a response. When the characters of the entire conversion exceed the context window, the LLM will drop specific information from the prompt or previous prompts in the conversation.

--

--

Jan Kammerath

I love technology, programming, computers, mobile devices and the world of tomorrow. Check out kammerath.com and follow me on github.com/jankammerath