The potential dark side of Large Language Models

The ethical and social harm that can be caused by LLMs

Sharad Joshi
Geek Culture


ChatGPT is akin to Windows when it comes to market fit and putting LLMs into the hands of everyone on the planet. Its success is so huge that even my friends and family, who know next to nothing about AI, are trying it, talking about it, and are genuinely excited about the future of AI.


There are claims that ChatGPT will replace coders, teachers, and more; bets on whether AGI will arrive in the next few years or will never be achieved in our lifetime; and endless debate about its usefulness (pair programmer, tutor, search engine, creative writing, etc.) and its limitations (out-of-distribution failures such as adding floats with more than five or six digits, the way RLHF avoids the most common pitfalls without solving the problem at its core, hallucination, misinformation, and a lack of understanding of what is implied but not written: if a kid pets a dog, we know the kid is not afraid of the dog, but ChatGPT doesn't).
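The float-addition failure is easy to check for yourself. Below is a minimal sketch, assuming the openai Python SDK (v1+) and an OPENAI_API_KEY in your environment; the model name, digit count, and tolerance are illustrative choices, not a definitive test harness.

```python
# Hedged sketch: probing the float-addition failure mode mentioned above.
# Assumes the openai Python SDK (>= 1.0) and OPENAI_API_KEY set in the environment;
# model, digit count, and tolerance are illustrative assumptions.
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def probe_float_addition(digits: int = 7, trials: int = 5, model: str = "gpt-3.5-turbo") -> None:
    """Ask the model to add two random many-digit floats and compare against Python's result."""
    for _ in range(trials):
        a = round(random.uniform(0, 10 ** digits), 3)
        b = round(random.uniform(0, 10 ** digits), 3)
        expected = a + b
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": f"What is {a} + {b}? Reply with only the number.",
            }],
            temperature=0,
        )
        answer = resp.choices[0].message.content.strip()
        try:
            ok = abs(float(answer.replace(",", "")) - expected) < 1e-3
        except ValueError:
            ok = False  # the model replied with something that is not a number
        print(f"{a} + {b} = {expected} | model said: {answer} | {'OK' if ok else 'WRONG'}")

if __name__ == "__main__":
    probe_float_addition()
```

Running a handful of trials like this is usually enough to see the arithmetic start to drift once the operands get long.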

Given all the hype, I thought it’d be best to raise a few points to keep in mind when it comes to LLMs.

I’ve tested ChatGPT against all the risks raised here, and while RLHF has put a band-aid on them, it’s not enough!

  1. Misleading or False Information: LLMs are known for…
