Conway's Law revisited: A.I.s are our golems.

First of all, I would like to discuss artificiality, digitality and virtuality. There are many false associations between artificial, virtual and digital.
Virtual is not the opposite of real, but of concrete.
Few things are virtual. A political promise or a prototype is "virtualis". That doesn't mean they aren't critical. They are (critical), because they are criteria for the future.
A virtual reality can have a concrete effect. When it has a concrete effect, it stops being virtual.
A virtual reality has the power to stay virtual as well as to become concrete. It is only when it becomes concrete, and thus stops being virtual, that it can lead to a success.
When it has no effect, it stays virtual.
Also, a POC, a prototype or a political promise isn't necessarily digital. The word "digital" carries many meanings which I will not develop right now. To sum up, digitization is a digital way to leave a fingerprint.
What else?
Artificial stands for "man-made". When we talk about artificial intelligence, we need to set aside our concept of virtuality and remember that it is an element of digitization.
How do we avoid becoming "mad men" in a rising man-made world?
In the previous sentences, the phrases "what else?" and "mad men" may have triggered several images, consciously or unconsciously, if you enjoy certain TV series or are aware of certain commercials.
And if you thought about CLOONEY's advertising, perhaps some associations were made; perhaps you remembered that it is 10:00 am and time for a coffee. That is the moment a virtual effect becomes concrete.
We are not masters in our own house,
as S. FREUD said.
Why should an artificial, man-made artifact be better, greater than us?… Or worse?
As we are discussing artificial intelligence, I would like to end this introduction with another quote, from Mr Lab's song "Lost":
Oak trees move very little and have a fairly peaceful life. Man moves a lot and is often involved in useless conflict.
During its life, the majestic oak tree fertilizes the earth and oxygenates the air. Man spends much of his time running from one activity to another, consuming enormous amounts of the earth's precious energy, often giving very little back.
Oak trees can live many hundreds of years; man often lives less than eight decades.
Considering this, one can ask: what really is intelligence?

Artificial Intelligence, virtual errors
On the face of the Golem of Chelm is engraved the word "emet" (Hebrew for "truth").
Truth isn't True
OK, it's counter-intuitive. But it's true AND not false. OK! I've lost you.

A system can capture all true positives and all true negatives. That doesn't mean it is pertinent. Is it?
The Gates Foundation launched a large program ($1.8B) to understand the characteristics of the best schools. A study concluded that small institutions had better results. Based on that, large amounts of money were spent to split big schools.
The Gates Foundation launched a large program ($1.8B) to understand the characteristics of the worst schools. A study concluded that small institutions had worse results. Based on that, large amounts of money were spent to merge small schools.
Where is the truth, where is the fake?
The fake news is the second; the non-fake is the first (i.e. "$51 Million Grant from the Bill & Melinda Gates Foundation to Support Small Dynamic High Schools to Boost Student Achievement in New York").
And the truth is in neither (see "Evidence That Smaller Schools Do Not Improve Student Achievement" by Howard Wainer and Harris L. Zwerling).
The decisions of the Bill & Melinda Gates Foundation were biased by a small-number effect, combined with a halo effect.
In small schools there is a higher likelihood of encountering extreme results (the best students as well as the worst).
So if you look at only one face of the problem, you run a higher risk of making a biased decision.
And since the conclusion came from the Bill & Melinda Gates Foundation, the social halo of the "Gates couple" strongly triggered and excited all educational stakeholders.
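The small-number effect is easy to reproduce. In the hypothetical simulation below, every student is drawn from the exact same score distribution, so no school is genuinely better; yet the averages of small schools swing much more widely, which puts them at both extremes of any ranking (the numbers and school sizes are illustrative assumptions, not data from the study):

```python
import random
import statistics

random.seed(42)

def school_means(n_schools, school_size):
    """Average test score per school; every student is drawn
    from the SAME distribution (mean 100, sd 15)."""
    return [statistics.mean(random.gauss(100, 15) for _ in range(school_size))
            for _ in range(n_schools)]

small = school_means(1000, 20)    # small schools: 20 students each
large = school_means(1000, 500)   # large schools: 500 students each

# The spread of school averages shrinks with school size
# (roughly sd / sqrt(n)), so small schools dominate both the
# "best schools" list and the "worst schools" list by chance alone.
print(round(statistics.stdev(small), 2))  # roughly 15/sqrt(20)
print(round(statistics.stdev(large), 2))  # roughly 15/sqrt(500)
```

Ranking these simulated schools by average would "discover" that the best ones are small, and, just as confidently, that the worst ones are small too.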
This is only one example among many ways errors can arise. Keeping the EMET on the face of our A.I. requires a strong effort to know and understand errors, to keep errors from becoming failures, and to avoid transferring errors to other layers or agents, which leads to masterpieces of failure (the basics of lean management).
Perhaps it is another reason for the MUSK/ZUCKERBERG clash. One saw only the positive side, the other only the false part.
But, given these assumptions, several topics are tricky to address:
- understanding how to handle errors
- detecting virtual errors
- anticipating the impact of concrete errors
If a system doesn't have the capability to manage errors, it can't learn. Then our capability to handle the impact of errors, and our resilience to failures, are seriously tested.
I deliberately talked in terms of systems in the first part, and of humanity in the second. Our capability to absorb concrete errors depends on our understanding of how A.I.-based systems address errors and inform us of how they made a choice.
Taking care of errors is far more complicated than presented above. What if a self-driving car had to choose between crushing a pedestrian and driving into a ravine? What if an A.I. had to choose between your mother and your daughter? What if an A.I. made decisions based on a namesake with a really bad reputation? (I have already experienced this situation.)
If you asked an A.I. "Do small schools have better students than big schools?", you would get the same result. Quicker, but the same result. You would only have spent the $1.8B more quickly. Who is to blame?
Moreover, we can be seduced by, or overconfident in, the answers provided by an A.I. Or the data used to train an A.I. can be biased.
Knowing our layers to understand those of A.I.
An A.I. is made up of several layers. Each layer is a composition of one kind of node, and different layers can be based on different types of node.

Moreover, a deep-learning system is based on a three-step operational process: train, test, use.
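The three steps can be sketched on a deliberately tiny "network": a single sigmoid node trained by gradient descent on a toy task (deciding whether a number is above 0.5). The task, learning rate and epoch count are illustrative assumptions; the point is only to make the train/test/use separation concrete:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy task: classify whether a number is above 0.5.
data = [(x, 1.0 if x > 0.5 else 0.0)
        for x in (random.random() for _ in range(200))]
train, test = data[:150], data[150:]

# --- TRAIN: fit the node's weight and bias on the training split ---
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in train:
        p = sigmoid(w * x + b)
        w -= 0.5 * (p - y) * x   # gradient of the log-loss w.r.t. w
        b -= 0.5 * (p - y)       # gradient of the log-loss w.r.t. b

# --- TEST: measure accuracy on data the node has never seen ---
accuracy = sum((sigmoid(w * x + b) > 0.5) == (y == 1.0)
               for x, y in test) / len(test)
print(accuracy)  # should be close to 1.0 on this separable task

# --- USE: the frozen node now answers new questions ---
print(sigmoid(w * 0.9 + b) > 0.5)
```

A real deep network stacks many such nodes into layers, but the operational process stays the same: parameters move only during training; testing and use run the network frozen.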
Also, several neural-network agents can be interconnected.
Finally, a neural network can be duped or fooled, e.g. when it is overloaded by trolls or when it becomes overconfident due to overfitting.
Does this sound familiar? No?
Take a look at what a cerebral cortex is, at the cartography of the visual cortex, or dig into how fear is processed (the regions involved, learning, inhibition, …), and you will understand why A.I.s are digital golems.
Just as a golem isn't a human, a neural network and a human brain aren't the same. We can also debate whether neural networks are an incomplete attempt to build a human brain, OR whether a neural network is a model based on our incomplete knowledge of the human brain, further limited by our technical capabilities.
Debating which deep metaphorical or philosophical representation best fits an A.I. is like debating whether an A.I. is a stone golem or an iron golem.
Some people argue that the brain is self-organizing and that neural networks are not.
Self-organization is defined as a process by which systems that are usually composed of many parts spontaneously acquire their structure or function without specific interference from an agent that is not part of the system.
What about self-organizing maps in unsupervised learning?
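A self-organizing (Kohonen) map fits the definition above: its nodes acquire structure from the data alone, with no labels and no outside agent. Here is a minimal one-dimensional sketch (the map size, learning-rate schedule and neighborhood radius are illustrative assumptions):

```python
import random

random.seed(1)

# Ten nodes, randomly initialized, will organize themselves
# along the input space [0, 1] without any supervision.
nodes = [random.random() for _ in range(10)]

STEPS = 5000
for step in range(STEPS):
    x = random.random()                                     # unlabeled input
    bmu = min(range(10), key=lambda i: abs(nodes[i] - x))   # best-matching unit
    lr = 0.5 * (1 - step / STEPS)                           # decaying learning rate
    for i in range(10):
        # The winner and its immediate neighbors move toward the input.
        influence = 1.0 if abs(i - bmu) <= 1 else 0.0
        nodes[i] += lr * influence * (x - nodes[i])

print([round(n, 2) for n in sorted(nodes)])  # nodes spread out over [0, 1]
```

Nothing outside the loop tells the nodes where to go; the map's structure emerges from the inputs and the local neighborhood rule, which is exactly the self-organization the skeptics say neural networks lack.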
Even if our brains are self-organizing, fortunately that doesn't mean they reorganize all our connections every night. Our fears, behaviors, patterns and associations are not, generally speaking, remodeled in one night. That's good for us, except when it turns us mad. Moreover, a brain that changed its patterns, and our behaviors, every day or every hour would belong to a schizophrenic or bipolar mind.
We can often manage errors and issues by ourselves, but sometimes it requires, at best, going to a shrink, or, at worst, it falls to society to send people to an asylum.
Finally, self-organized doesn't mean unconnected and independent. The brain is dependent on and connected to our body: our eyes, our ears, our skin, our size, our strengths, our weaknesses (somatosensory and somatomotor cortex). The cortex is also deeply connected between its areas, as well as to other parts (the amygdala and the endocrine system).

Understanding our biases to live in harmony with A.I.

AIs are our modern golems
On the bad side, as in the legend of the Golem of Chelm, an A.I. can turn mad. Do we have the capability to erase the E of EMET from the golem's face, turning it into MET (death in Hebrew)? Or will the golem erase the E from our own face?
Those two questions have preconditioned associations in our own minds: some from the Terminator and Matrix movies, some from the movie A.I.
Each association from our past experiences holds a weight, and the combination of those weights sets the foundation of our feelings, convictions and behaviors.

On the other hand, A.I. can virtually, and deeply, improve our lives when we use it with a collaborative mindset to build a sustainable world.
Whatever the results, A.I., collaboration between A.I.s, and A.I./human collaboration will be a copy of the way we learn, make decisions, communicate and collaborate.
What’s next?

Keep in touch… I'm building… Errors must be shared with A.I.
References:
- Deep Cortical Layers Are Activated Directly by Thalamus
- Prefrontal PV Interneurons in Fear Behavior
- Prediction of Human Behaviour Using Artificial Neural Networks
- Dynamical Regimes in Neural Network Models of Matching Behavior
- Google's DeepMind pits AI against AI to see if they fight or cooperate
