Algorithms are not oracles: anyone can train them to say what they want

Enrique Dans
Mar 27, 2023


IMAGE: An algorithmic image (Dall·E) of a hand holding a magnifying glass on a computer screen with code, suggesting the importance of transparency

Taking advantage of the global obsession with ChatGPT, a plethora of abundantly funded companies are now offering algorithms for just about any task: more than two hundred have appeared over the last week alone.

In response, some people are trying to jailbreak these algorithms to circumvent their programmed restrictions and to understand their biases and conditioning factors, which highlights how important it is for these kinds of tools to be open source, so that we can adapt them to our own needs. Microsoft might believe that GPT-4 has artificial general intelligence, or AGI, but algorithms do not think for themselves: they are the result of whatever they have been trained with, which means bias can be built in. US conservatives accuse ChatGPT of being liberal, and are now trying to build more reactionary chatbots.

Whether you use an algorithm to write restaurant reviews without having tasted the menu, to write software, or to create a virtual boyfriend, biases of some kind will be there, built in consciously or unconsciously, depending on the information the algorithm was trained on and on your skill at writing prompts. Competition is ferocious: Microsoft intends to prevent other algorithms from feeding on the data generated by its new version of Bing, while Google is launching its Bard in closed beta (if you…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)