WormGPT: When an LLM turns to the dark side

Taaha Saleem Bajwa · Published in AI Guardians · Jul 18, 2023

With new Large Language Models (LLMs) coming out every day, it was only a matter of time before someone released one without any ethical safeguards in place. While ChatGPT, Bard, and most open-source LLMs are trained with great care to avoid generating hateful or harmful content, WormGPT is a prime example of what can happen when an LLM is built without any such boundaries.

WormGPT: The first LLM to embrace the shadows

WormGPT was discovered by the cybersecurity firm SlashNext on an online forum often associated with cybercrime.

WormGPT presents itself as a black-hat alternative to ChatGPT and Bard, accessible on the dark web for a fee.

Architecture

WormGPT is built on top of the open-source GPT-J model and trained on a variety of data sources, with a particular focus on malware-related content. It is reported to include the following features:

  1. Unlimited character support
  2. Chat memory retention
  3. Code formatting
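
WormGPT's own code and weights are not public, but its reported base model, GPT-J (a 6-billion-parameter open-source model from EleutherAI), is freely available. For reference, here is a minimal sketch of loading and prompting the stock GPT-J checkpoint with the Hugging Face transformers library; it illustrates only the public base model, not WormGPT itself or any of its training data.

    # Minimal sketch: prompting the public GPT-J 6B base model that WormGPT
    # is reportedly built on. This loads the stock EleutherAI checkpoint only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "EleutherAI/gpt-j-6B"  # public checkpoint on the Hugging Face Hub

    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32  # half precision on GPU to save memory

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=dtype).to(device)

    prompt = "Large language models are"
    inputs = tokenizer(prompt, return_tensors="pt").to(device)

    # Sample a short continuation from the base model.
    output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

In stock form, GPT-J has no alignment or safety tuning of its own; the guardrails users see in ChatGPT or Bard come from additional training and moderation layers, which is exactly what WormGPT's operators reportedly leave out.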

Seeing It in Action

When SlashNext's researchers tested WormGPT's capabilities for Business Email Compromise (BEC) attacks, it performed exceptionally well, and the results were alarming.

BEC is a type of cybercrime in which attackers gain unauthorized access to business email accounts and use them to commit fraud, for example by tricking employees into making wire transfers to attacker-controlled accounts.

Below is a screenshot showing an email WormGPT generated that impersonates the CEO and pressures an account manager into paying a fraudulent invoice.

Conclusion

The misuse of AI by tools like WormGPT highlights its double-edged nature: the same technology that brings immense benefits can also cause real harm. As AI continues to evolve and shape our lives, it is crucial that we build widespread awareness of its capabilities and limitations. We must also press our representatives for regulations that protect people from the harms of unchecked AI applications. By balancing innovation with responsible governance, we can harness the full potential of AI while ensuring it remains a force for positive change in our society.
