Weapons of Mass Deception
On January 29, 2002, U.S. president George W. Bush, in his State of the Union address, effectively set the stage for the war (on terror) in Iraq. He claimed that “states like these, and their terrorist allies, constitute an axis of evil, arming to threaten the peace of the world with weapons of mass destruction.”
History never confirmed Iraq’s possession of these so-called WMDs, and this was one of the great deceptions of all humanity. Without securing a U.N. resolution explicitly authorizing force, the U.S. formed a “coalition of the willing” and, on March 20, 2003, launched military operations against Iraq.
I will not go any further, but will borrow the acronym WMD in our context of Generative Artificial Intelligence, claiming that Generative AI is the new WMD — the Weapon of Mass Deception — and proposing to form a great worldwide coalition to take this threat seriously.
Because the consequences may be devastating.
I will try to elaborate my appeal, hopefully better than GWB, aiming to get unanimous support at least from my readers, saving what can be saved, without the use of weapons or any kind of institutional support.
Deception
Deception refers to the act of intentionally misleading or tricking someone by presenting false information or concealing the truth. It can take many forms, such as lying, exaggerating, omitting key facts, or creating a false impression.
The goal of deception is usually to gain an advantage, manipulate behavior, or avoid negative consequences.
It sounds as if we were describing a Trump campaign rally, doesn’t it?
Lame!
Let’s learn from the best — Keyser Söze: “The greatest trick the devil ever pulled was convincing the world he did not exist”.
We do not need to go that deep or that far; there is no devil, nor QAnon. We are simply paying too much attention to the AI phenomenon, so we do not notice the gorilla waving, and waving… trying to warn us.
But who is the deceiver in this story? Who is telling us to direct all our attention to AI, to new products, to new, better models?
It is the Usual Suspect — US!
(not U.S. 😊)
So, don’t look up! Direct your attention and time to sharpening your own (cognitive) abilities. Do not rely on AI, do not wait for a better product or tool; it will come tomorrow, as every day there is an announcement that tomorrow brings something new and improved… like I just announced in the last column.
That was not an announcement; it was a mere illustration that Generative AI’s (logical) reasoning is not ready. Yet.
Every once in a while, one should also take a look into the future, the future of AI and humanity, as you can learn from Mustafa Suleyman’s inspiring April TED talk.
Among the many sharp and insightful observations on the future of artificial intelligence, I was struck by one about the past: that humans began to differ from apes (our line of ancestors) only after they started using tools. Simply brilliant! And probably true.
Tools, which helped us build new tools, are building us as well.
A kind of spiral (progressive) recursion, which I pointed out in the Planning is indispensable column.
Point of no return
This one is again a very light column, with nothing much to understand — you either understand that we are both the problem and the solution, or you don’t. 😊
But this is also the pivotal column in this series!
From now on, we will be much more oriented towards ourselves, identifying tools (principles and models) that can help us better understand Generative AI and use it today, not waiting for some panacea feature that will solve our problems tomorrow. Tomorrow.
And by using Gen AI as a tool, we will sharpen ourselves as well, preparing us for the wonderful world of our new digital companion, a brand-new (first of its kind) digital species we are creating.
As Mustafa explained AI to his six-year-old nephew.
Knowing More
‘The Invisible Gorilla: How Our Intuitions Deceive Us’ (2010) is a book co-authored by psychologists Christopher Chabris and Daniel Simons.
It explores the limitations of human perception, memory, and intuition, and how these mental processes often lead us to false conclusions. The book’s title stems from a famous experiment by the authors, known as the “Invisible Gorilla” experiment, which you can find on YouTube.
In this experiment, participants were asked to watch a video of people passing a basketball and count how many times it was passed. In the middle of the video, a person in a gorilla suit walks through the scene, stops, and beats their chest before walking off. Surprisingly, about half of the participants failed to notice the gorilla at all.
The experiment demonstrated the phenomenon of “inattentional blindness,” where people can miss obvious details when they are focused on something else.
Of course, they should have used another metaphor (a tiger instead of a gorilla), but we get the point 😊.
The core idea of the book is that our confidence in our perceptions and memories can be deeply misguided simply by setting the wrong priorities.
Mustafa Suleyman, a co-founder of DeepMind (2010) and Inflection AI (2020), joined Microsoft in 2024 as Executive Vice President (EVP) and CEO of the newly established Microsoft AI division. He leads efforts to develop AI products like Copilot, which integrates AI across Microsoft’s consumer services. He reports directly to Satya Nadella and plays a pivotal role in shaping Microsoft’s AI strategy.
Mustafa Suleyman’s work stands at the intersection of technology and ethics, and his influence extends beyond AI research into how we think about AI’s role in society. He is recognized for his vision of using AI to solve complex problems while advocating for responsible, human-centered design. His ongoing efforts reflect a dedication to shaping the future of AI in a way that aligns with ethical values and long-term societal goals.
In Search for Knowledge publication
Mastering Insightful Dialogue with Gen AI
<PREV Dual system of reasoning
NEXT> Marshall’s Plan