Misinformation or Disinformation?
That Is the Question
There are concerns that the training data used for LLMs contains unrepresentative samples or biases, which raises the question of whether the data contains accurate information in the first place. The spread of misinformation or disinformation through LLMs can have far-reaching consequences.
There is already a lack of trust when it comes to AI systems, and biased output from LLMs can further erode whatever confidence society has in AI overall.
For LLM technology to be confidently accepted, society needs to trust it. But what exactly do you trust when you have no way of knowing which datasets, training code, or biases a given LLM carries?
Companies need to take real responsibility for the data they feed into their models, and users need to see through the marketing tricks and buzzwords deployed to manufacture this so-called trust.
Verifying that the training data behind an LLM has been curated from a diverse range of sources is usually impossible, unless the model is fully open source, with its training datasets published alongside the weights and code.
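This is where openness changes things in practice: when the training corpus itself is published, anyone can sample it and check where the data actually comes from. Below is a minimal sketch, assuming access to the openly released C4 corpus through the Hugging Face `datasets` library; the corpus choice, the 10,000-record sample, and the simple domain tally are illustrative assumptions, not a real provenance audit.

```python
from collections import Counter
from itertools import islice
from urllib.parse import urlparse

from datasets import load_dataset  # pip install datasets

# Stream a slice of an openly released pretraining corpus.
# allenai/c4 is used only as an example of a public corpus whose
# records carry a "url" field; a closed model's training data
# cannot be inspected this way at all.
corpus = load_dataset("allenai/c4", "en", split="train", streaming=True)

# Tally source domains over a small sample (illustrative, not an audit).
domain_counts = Counter(
    urlparse(record["url"]).netloc for record in islice(corpus, 10_000)
)

# A heavily skewed distribution is one concrete signal of
# unrepresentative sampling in the training data.
for domain, count in domain_counts.most_common(20):
    print(f"{domain}: {count}")
```

Even a rough check like this is only possible because the dataset is public; with a closed model, you are left trusting the marketing.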