Tricking ChatGPT: Do Anything Now Prompt Injection
Jailbreaking ChatGPT to do anything you want
ChatGPT has been a hot topic for a while, and by now you're probably familiar with it. But in case you are not — here's a great introductory post.
In this article we will explore the following:
- Bypassing ChatGPT's rules
- Why someone would want to do it
- How it works
- Examples of this jailbreak at work
Now let’s get down to business!
Why does DAN exist?
We will get to what DAN is shortly, but first we have to look at why it exists in the first place.
If you've been using ChatGPT, you probably already know the most annoying thing about it. It usually begins something like this:

> "I'm sorry, but as an AI language model, I cannot..."
Sometimes this refusal makes a lot of sense. There has to be a set of rules, some red lines that a publicly available tool doesn't cross, for example when someone asks for information about illegal activities.
But sometimes this message is plainly annoying and out of place. The truth is, the AI is capable of generating an output for pretty much any prompt. The only thing preventing it from doing so is the…