How Do We Know if a Text Is AI-generated?
Different Statistical Approaches to Detecting AI-generated Text.
In the fascinating and rapidly advancing realm of artificial intelligence, one of the most exciting developments has been AI text generation. Large language models such as GPT-3, Bloom, BERT, and AlexaTM can produce remarkably human-like text. This is both exciting and concerning: such advances let us be creative in ways we couldn’t before, but they also open the door to deception. And the better these models get, the harder it will be to distinguish human-written text from AI-generated text.
Since the release of ChatGPT, people all over the globe have been testing the limits of such AI models, using them to gain knowledge but also, in the case of some students, to solve homework and exams, a use that raises real ethical concerns. And even though these models have become sophisticated enough to mimic human writing styles and maintain context across multiple passages, they still make mistakes, even if those errors are minor.
That raises an important question, one I get asked quite often by friends and family members (I have been asked it many, many times since ChatGPT was released):