ChatGPT is awesome. But here’s a failure mode I discovered. LOL.
I recently used GPT-3 (the model underlying ChatGPT) to do some very handy research on a topic I was interested in. As I do these days. Always with lots of checking. And I discovered something fascinating. About GPT-3.
The topic was the ‘sustainable abundance’ trend in futuristic thinking (i.e. that energy will soon be cheap, green and abundant).
Lots of correct stuff emerged from GPT-3.
I then asked it who the leading thinkers on this topic were, and it got that right too. Tony Seba. Ray Kurzweil. Elon Musk. I could tell because I already knew many of these names.
In fact via GPT-3 I discovered lots of new thought leaders on this topic I didn’t yet know about but could then successfully confirm. (If I’m honest, I’m a bit peeved it didn’t list me.)
Much faster than via Google.
GPT-3's Achilles' heel
In a moment of necessity (on the topic) and sheer brilliance IMO (as a test of GPT-3), I next asked GPT-3 to give me some quotes from these guys.
Now, I work in the field of large language models, and I knew this was where GPT-3 might hallucinate: it doesn't remember anything verbatim from its training data. It only learns statistical links between words.
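To see why "statistical links between words" leads to plausible-but-invented output, here is a deliberately tiny sketch (nothing like GPT-3's actual architecture, just a toy bigram model on a made-up three-sentence corpus). It stores only which word tends to follow which, never whole sentences, so it can happily generate word sequences that appeared nowhere in its training data:

```python
import random
from collections import defaultdict

# Toy training corpus (invented for illustration).
corpus = [
    "solar energy will soon be cheap and abundant",
    "wind energy will soon be green and abundant",
    "cheap green energy will soon be everywhere",
]

# Learn only word-to-next-word transitions -- no sentence is stored whole.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, length=8, seed=0):
    """Walk the transition table, sampling a statistically plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("solar"))
```

Every individual word transition in the output is real (it was seen in training), yet the full sentence may be a recombination that no one ever wrote. Scale that idea up enormously and you get fluent, confident quotes that were never actually said.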