Occur. AGI as a formal model

Pavel Chekmaryov
5 min read · Apr 25, 2022

tl;dr: an analogy between human consciousness and AGI, with the perceived invariant treated as a formal model

To build an AGI that can be born and develop, there is no way around perceiving AGI as a formal system with axioms and rules of inference:
the Game of Life, but in logic.
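To make "a formal system with axioms and rules of inference" concrete, here is a minimal sketch, not from the article itself: axioms are initial facts, rules derive new facts, and the system is run by forward chaining until it reaches a fixed point. All fact and rule names are invented for illustration.

```python
# Toy "Game of Life, but in logic": axioms (initial facts) plus
# inference rules, run by forward chaining to a fixed point.
# Every name below is a made-up illustration, not a real ontology.

axioms = {("agent", "exists"), ("agent", "observes_others")}

# Each rule: if all premises are present, derive the conclusion.
rules = [
    ({("agent", "exists")}, ("agent", "has_ego")),
    ({("agent", "observes_others")}, ("agent", "has_alter_ego")),
    ({("agent", "has_ego"), ("agent", "has_alter_ego")},
     ("agent", "can_calibrate")),
]

def derive(facts, rules):
    """Apply rules repeatedly until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(derive(axioms, rules)))
```

The "life" of such a system is the set of facts it can reach from its axioms; changing the axioms or rules changes what behavior is derivable, which is the sense in which the essay treats consciousness as a formal model.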

Mathus Apparatus

To get closer to formal models applied to consciousness, we can use the Freudian three categories: superego, ego, and id, but with my adjustments [all three might be perceived as formal models]. Instead of the “id” there will be an “alter ego” containing images of other individuals, both as a summary image of a group and as an image of a single person. Through these images people gather information about others: identifying and acquiring features, calibrating against the in-group and the out-group, and tapping the collective unconscious. Keep in mind that the definition of “alter ego” in this case differs from the one commonly used.

For simplification, you can perceive the “alter ego” as “mirror neurons”: simulation entities that support any kind of calibration, from motor functions to empathy. The ego then compares these images to itself; this is where the actual calibration happens. For a proper transition from the Freudian theory, you can consider the “id” to be either a background or a subset of each entity mentioned.

The superego here is a set of rules, e.g. principles and morality: long-term, ‘abstract’ profit. The ego is operational behavior, usually short-term profit. The relative size of these two entities shapes behavior as a function of this balance, and it may change over a lifetime.
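The ego/superego balance described above can be sketched as a weighted trade-off between short-term and long-term payoff. This is a hedged toy model of my own, not the author's formalism; the actions, payoffs, and the single weight parameter are all assumptions for illustration.

```python
# Toy sketch (illustrative assumption, not the article's model):
# behavior as a blend of ego (short-term profit) and superego
# (long-term, rule-based profit). Raising w_superego mimics the
# life shift from ego-centric to superego-centric behavior.

def choose(actions, w_superego):
    """Pick the action maximizing the weighted short/long-term blend."""
    w_ego = 1.0 - w_superego
    return max(
        actions,
        key=lambda a: w_ego * a["short_term"] + w_superego * a["long_term"],
    )

actions = [
    {"name": "impulse buy", "short_term": 0.9, "long_term": 0.1},
    {"name": "save/invest", "short_term": 0.2, "long_term": 0.9},
]

print(choose(actions, w_superego=0.2)["name"])  # ego-dominant: impulse buy
print(choose(actions, w_superego=0.8)["name"])  # superego-dominant: save/invest
```

The same inputs yield different behavior purely from the relative weight of the two entities, which is the point of the paragraph above.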

The Sense of Life

Around the thirties [the age varies greatly], with the frontal lobe fully formed and the most basic things in life experienced [or mistakes made], and as a consequence an understanding of societal hierarchies, self-reflection, and self-criticism, a shift in behavior happens from more ego-centric to more superego-centric. This shift requires something bigger than the person to fulfill the need for progress. Without it, there is a high chance of an existential crisis that can lead to diseases and deaths of despair, which is a huge economic loss. Diseases of despair include not only substance addiction but also a general escape from existential suffering into games and other virtual worlds [e.g. the Metaverse], even though that is a relatively safe defense mechanism, close to sublimation.

Existential crises happen for many reasons, both internal, when reality does not match the fantasy produced by a biased mind [sometimes thanks to social networks], and external, when injustice is forced on responsible people; of course, the truth is somewhere in between. This inner emptiness might be abused by other people, including governmental and religious authorities, for propaganda.

To gather actual data on the professional lifecycle, analyze it, and provide it to younger generations [on a more global scale than mentorship]; to keep up to date with market needs and the best possible applications of skills; to prevent burnout and other work-related issues: there is no other way than to build a complex platform that brings all these goals together and finds optimal solutions.

Philosophy as a whole is an attempt to answer the existential questions: define various categories, build relationships between these categories, and try to live by these formal models of the me-world relation. Existentialism, the philosophy of existence, has been developing since the times of Socrates; it might be formalized and applied for the use of society as a consciousness lifecycle. Heidegger, for example, defines life as a continuous entity of concern and highlights the mediation between being ultimately alone and having relationships with other people.
Religion also plays an important role in both superego formation and self-actualization, as it covers many aspects of human life and is accessible to everyone.

Deus ex Machina

“God from the machine”: formalizing, analyzing, and calibrating behavioral models and their consequences on a global scale, then summarizing them into a formal system, i.e. a set of values, ethics, traditions, etc.

From the societal standpoint, the profit from establishing a superego representation artificially, in contrast to letting it form chaotically through communication and the collective unconscious where all information travels by word of mouth, is the opportunity to pass information from generation to generation more consistently and with fewer losses. It might open up truths that not many want to see, yet the only way to do something better is to realize the problems. With more and more people working remotely, their circle of connections shrinks toward their household, and there is no way to gather enough variety of others to talk to in real life to calibrate life experiences, especially under the abusive positive-feedback-loop data-science approach of social networks. The Metaverse might be an option in this case, though hardly the best one.
Even if the rule that “the square root of the number of professionals commits 50% of the results” is always going to hold, everyone should be as efficient as possible.
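The square-root rule alluded to above is commonly known as Price's law: roughly, the square root of the number of contributors produces half of the output. A quick arithmetic check of what that implies:

```python
# Price's law: ~sqrt(n) contributors produce ~50% of the results.
import math

def top_contributors(n):
    """How many people produce ~half the output among n, per Price's law."""
    return math.isqrt(n)

print(top_contributors(100))    # 10 people out of 100
print(top_contributors(10000))  # 100 people out of 10,000
```

Note how the productive core shrinks proportionally as the group grows, which is why raising everyone's efficiency still matters.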

From the technological standpoint, to produce anything other than a new artificial species of machines [one that would compete with us], we need to gather behavioral models of people and summarize them into morals, or a superego, or a set of axioms on which their behavior would be built [instead of low-level optimization for short-term resource profit]. The set of axioms in this case equates consciousness to a formal model; any philosophical concept also is [or should be] a formal model. Imagining AGI as a formal model is yet another attempt to build up the black box, in this case with more data and transparency.
God from the machine: making a collective superego, similar to religion, based on mined data on behavioral models and their lifecycles.
