The Simulacrum

Why do you lie? Will machines lie?

Of late, I have been consumed with the philosophy, ethics, and morality of lies, both from my own human experience of day-to-day relationships and from a machine creativity perspective. Will Artificial Intelligence lie? Should it lie?

A lie is an untruthful assertion intended to mislead or deceive, shaded in varying degrees of color (from white lies to black). The anatomy of a lie, its evolutionary purpose, and even the way a single lie fractures the perceived integrity of the liar are remarkably complex.

The mathematical constructs behind why people lie are mind-boggling and creative: the payoff people believe they will receive if they can get away with it, the web of subsequent lies needed to cover up the first, the denial, the offense they mount and turn back on you (gaslighting), and the narratives, drama, emotional blackmail, and stories they concoct.

Game-theoretic evolutionary models rest on a simple asymmetry: no one wants to be lied to, yet everyone is tempted to lie when they believe they can get away with it for a perceived payoff. Should AI be given programmed constructs and guardrails around deception, or will it evolve its own models?
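As a toy illustration of that asymmetry (not a model drawn from the literature), here is a minimal Python sketch. All payoff values and the detection probability are hypothetical numbers chosen only to show the structure: a lie pays off when it slips through, but costs trust when caught.

```python
# A minimal sketch of the payoff asymmetry described above.
# All values are hypothetical, chosen only to illustrate the
# structure, not taken from any published model.

def expected_payoff(p_detect: float,
                    gain_if_undetected: float = 1.0,
                    loss_if_caught: float = 3.0,
                    payoff_truth: float = 0.5) -> dict:
    """Expected payoff of lying vs. telling the truth for a speaker.

    A lie pays `gain_if_undetected` when it goes unnoticed and costs
    `loss_if_caught` (lost trust, fractured integrity) when detected
    with probability `p_detect`. Truth pays a modest, certain amount.
    """
    lie = (1 - p_detect) * gain_if_undetected - p_detect * loss_if_caught
    return {"lie": lie, "truth": payoff_truth}

# The temptation to lie evaporates once detection becomes likely enough:
for p in (0.0, 0.2, 0.4, 0.6):
    payoffs = expected_payoff(p)
    better = max(payoffs, key=payoffs.get)
    print(f"p_detect={p:.1f}  lie={payoffs['lie']:+.2f}  "
          f"truth={payoffs['truth']:+.2f}  -> {better}")
```

In this toy setting, lying stops beating the truth once the detection probability exceeds (gain − truth) / (gain + loss) = 0.125; past that point, honesty dominates. That crossover is one way to read why a credible threat of detection changes behavior.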

Also, the notion some people hold that everyone lies to everyone (in any form of relationship, personal, business, or otherwise) seems like a self-fulfilling prophecy. Does this notion trickle down into the biases of our conversations, our linguistics, and the narratives on which AI is trained? Are human narratives, then, inherently toxic for AI systems?

My own value system, which abhors people who lie for the simplest of reasons, makes me wonder how people evolved worldviews with anywhere from strong to weak endurance for lies. There is a spectrum: some continue to endure lies in a relationship, giving more chances in the hope that the other person will turn around, while others confront and move on the moment their value system is breached, after minimal instances. This heterogeneity of tolerance as an objective function makes me wonder: will Artificial Intelligence develop such heterogeneous personalities around the emotion of lies?

Telling lies is probably easy as a first instinct compared to the perceived payoff of telling the truth. Lying seems like a System 1 function, while telling the truth possibly is not (System 1 and System 2 in the sense of Daniel Kahneman’s “Thinking, Fast and Slow”).

Is telling the truth that hard? What is truth? Whose truth? Is it a value? A habit? A practice? Are there equally good payoff functions for cultivating truth and integrity as a go-to value system? What are the ethical ramifications, for the field of Artificial Intelligence, of not getting the notion of truth right?

This is an intricate, complex, beautiful, and intriguing field of research still waiting to be explored. I am aware that these are preamble thoughts and that I have not delved deep enough to offer insights, but I hope they serve as fodder for future rumination and introspection.

I am endlessly interested in the subjects of lies and humor when it comes to machine creativity. Do send me (or add in the comments) any research papers, books, or essays that you find intriguing.

May the truth set you free.

Freedom Preetham
Founder Kena.ai, Cognit.ai | Math | Artificial Intelligence | Quantum Fields | Cognitive Science | Music | Poetry | linkedin.com/in/vvpreetham