Generative AI Is Enabling Fraud and Misinformation

Several investigative reports have recently highlighted the potential of generative AI to empower hackers as well as mis- and disinformation spreaders.

Waleed Rikab, PhD
7 min read · Jan 17, 2023
(Image generated through Stable Diffusion with a prompt by the author)

Since its launch in late November 2022, OpenAI's ChatGPT has struck many online users as both amusing and genuinely helpful for writing, coding, task automation, and linguistic translation, to name but a few uses of this sophisticated AI-powered tool. Given this versatility, it is perhaps unsurprising, though still unfortunate, that fraudsters and spreaders of mis-, dis-, and malinformation (MDM) are also looking to use ChatGPT and similar AI models to streamline and enhance their operations.

A recent report by the security company WithSecure, for instance, explored some of the potential benefits of ChatGPT for malign actors. It concludes that, across a wide array of attack vectors, ChatGPT promises to bring illicit operations to new levels of sophistication. For example, ChatGPT can allow cyber attackers to “industrialize” the orchestration of “spear phishing” attacks, which trick corporate victims into opening emails that appear to come from trusted parties. The access gained through such attacks can then be used to insert malware, extort the victim, or fraudulently transfer funds.
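
To make the “industrialization” point concrete, consider how little code it takes to mass-produce individually tailored messages with a large language model. The following is a minimal and deliberately benign sketch, not anything from the WithSecure report itself: the target list, prompt, and API key are hypothetical, and it assumes the openai Python package and the text-davinci-003 completion model as they existed in early 2023 (ChatGPT itself had no public API at the time of writing).

```python
import openai  # assumes the official OpenAI Python client, ca. early 2023

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

# Hypothetical target list. In a real campaign this would be scraped or
# purchased data, which is exactly what makes the threat scale.
targets = [
    {"name": "Jane Doe", "company": "Acme Corp", "topic": "the Q4 invoice"},
    {"name": "John Roe", "company": "Globex", "topic": "the vendor contract"},
]

for t in targets:
    # A benign prompt for illustration; an attacker would substitute a
    # deceptive one, which OpenAI's usage policies are meant to block.
    prompt = (
        f"Write a short, friendly business email to {t['name']} at "
        f"{t['company']} following up on {t['topic']}."
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())
```

The point is not the handful of API calls but the loop: once target data is in hand, generating hundreds of fluent, personalized messages costs almost nothing, which is the economy of scale behind the report's warning.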
