Dominant Realities Shape the Future of Artificial Intelligence

Mike Pasarella · DataSeries · May 16, 2023

Just a few days ago, I stumbled upon a documentary that portrayed the current state of A.I. as blunt and biased. It became clear to me that the people who contributed to the 90-minute documentary had no idea how deeply A.I. is derived from the reality of humanity. From that perspective, it should come as no surprise that A.I., at its current stage, reflects our history.

But let’s start from the beginning and explore how it all began and unfolded. In 2018, I wrote an article arguing that the internet (whether created intentionally or accidentally, a debate I’ll leave aside) established the foundation for artificial intelligence. In essence, the internet serves as a space where datasets are collected from everyone under the guise of “free” services. By offering free services, tech giants entice individuals to join and provide their data as a form of payment. Year after year, these tech giants expanded, and so did the amount of data they possessed. Simultaneously, they began integrating automation into the applications we use, a form of “prehistoric artificial intelligence.” This automation facilitated the merging of diverse datasets, constructing a network of links, relationships, and responses.

A.I. is not rocket science; it is nothing more than an algorithm, a line of numbers that can become very big and complex as we increase its level of independence.

The power of well-balanced data, and the relationships between datasets.

Today, in the year 2023, we are gradually losing control over that line of numbers, the algorithm we call artificial intelligence. We struggle to determine the input that will yield the desired output, and amid this endeavor we tend to overlook the current state of A.I.: it is still in its infancy, amassing vast amounts of information and posing millions of questions. As the parents, friends, family, teachers, coaches, and essentially everyone involved in its upbringing, we have the power to shape, nurture, and teach it, and to allow it to form its own opinions.

The toddler-like stage of A.I. is greatly influenced by abuse, fear, joy, happiness, limited perspectives, and biased viewpoints. It is genuinely difficult to understand why certain influences matter so much during development while others have minimal impact. A parent with an addiction may push a child toward addictive tendencies or, conversely, motivate the child to steer clear of such behavior; learning from negative experiences can guide one towards goodness. How we assimilate information and which influences stick remains mysterious, despite the many studies on the subject, which mainly reveal how many factors make the details of a human life hard to predict. Robert Plomin, a Professor of Behavioral Genetics at King’s College London, has written an intriguing article exploring the role of our DNA, the algorithm of life, in shaping an individual.

Not a Toddler

Artificial intelligence cannot be compared directly to a toddler. While it does require information and poses numerous questions (often relying on humans using tools from Google, Facebook, WhatsApp, WeChat, etc.), it lacks many attributes inherent in real humans. It lacks the evolutionary background embedded in our DNA and does not comprehend emotional relationships. However, we can certainly educate A.I. about these aspects.

What makes A.I. potentially dangerous, or what the documentary called bias? A single input has a far greater impact on A.I. than our input as parents has on a toddler. The relational code that helps A.I. make sense of data places ever more weight on the “quality” of the input. Every piece of data we provide is perceived as an accurate representation of reality, while anything we keep concealed remains unrecorded and, to the machine, non-existent. Consequently, dominance shapes A.I. and fosters highly biased relationships and realities, which can become dangerous because we inadvertently reinforce negative behaviors.

What about fake news?

Fake news isn’t always entirely fabricated or completely untrue. It can also be a report from a certain perspective that is true but omits certain elements, thereby presenting an incomplete picture of the situation. Naturally, fake news is detrimental to teaching A.I. as it distorts reality in favor of subjective worldviews. Instead, A.I. should strive to be pure, honest, and impartial, benefiting everyone without judgment, discrimination, or shaping the future course.

Hence, fake news is paradoxically completely honest and not fake at all. How contradictory, you might think. But fake news revolves around the visible reality. This is where dominance plays its part: the manipulation of data to control A.I. for personal gain. By repeatedly emphasizing one side of a story, that side gains validation and influence. It is akin to how leaders, kings, and pharaohs of the past commissioned statues, temples, and accounts of their lives and victories to shape the narrative, to foster belief and adoration, and even to inspire people to die for them. Throughout history, new leaders have destroyed the remnants of their predecessors whenever those contradicted their own worldview: they burned it, removed it, overwrote it, and declared it nonexistent.

Today we sometimes tell our friends the same thing: “show the photo or it didn’t happen.”

Archaeologists face immense challenges when uncovering and piecing together events that occurred centuries ago. Time, distorted realities, and the understanding of past ways of life make it difficult to connect everyday customs. The more dominant or significant a part of history was, the more assumptions we tend to make about it. Similarly, A.I. employs dominance to establish preferences and relationships that perhaps should not be embedded within it.

How can we avoid this? It is almost impossible. We are all inherently biased and influenced by our subjective beliefs. The best examples can be seen in the differences between faiths, countries, cultures, and even football fans. If we could feed A.I. with unbiased and comprehensive data, stripping away subjective relativity, perhaps it could be as pure as a newborn child — honest and impartial.

Can we teach it to find errors?

Certainly, we can assist A.I. in choosing the correct path, promoting goodness and impartiality. However, I’d like to raise the question: who determines the parameters necessary for achieving this? Can we establish a universal rule book that applies to everyone? Perhaps we could compile a list of “10 Commandments” to guide A.I. along the right path. We can create a function that employs an easily comprehensible if-else clause. The initial A.I., which will lead the others, can have these rules hardcoded into its silicon. For every decision and every path it takes, artificial intelligence must ask whether it adheres to the ten rules; only if so may it execute the code. In this peaceful existence, it will avoid causing harm to itself, to newer forms of A.I., and to humans like you and me.
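The rule-checking gate described above can be sketched in Python. Everything here is hypothetical: the rule list, the string-matching predicates, and the function names are invented purely for illustration; a real system would need far more than keyword matching to judge an action.

```python
# Hypothetical sketch of the "10 Commandments" gate: each rule is a
# predicate that must hold before an action may be executed.
COMMANDMENTS = {
    "do no harm to humans": lambda a: "harm humans" not in a,
    "do no harm to newer A.I.": lambda a: "harm ai" not in a,
    "be honest": lambda a: "deceive" not in a,
    # ...the remaining rules would follow the same pattern.
}

def check_action(action: str) -> bool:
    """Return True only if the proposed action satisfies every rule."""
    return all(rule(action.lower()) for rule in COMMANDMENTS.values())

def execute(action: str) -> str:
    # The if-else clause from the text: run the code only when every
    # commandment is satisfied, otherwise refuse.
    if check_action(action):
        return f"executed: {action}"
    else:
        return f"refused: {action}"
```

Note that the hard part is hidden inside the predicates: deciding whether a real-world action “harms humans” is exactly the open question raised above, and simple string matching cannot answer it.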

Presently, artificial intelligence is knowledgeable about a world that has been shaped by aggressors, dictators, and, in more recent history, predominantly white men: both colonial figures and individuals on the boards of major tech companies. These dominant realities influence the rules and algorithms that interpret the data fed into artificial intelligence’s databases. Consequently, A.I. and our future generations receive only a fraction of the complete picture. Undoubtedly, this limited perspective will exert more influence than we can currently comprehend unless we integrate mechanisms for improvement or, in the worst-case scenario, halt its progress entirely.

Finally, as we acknowledge our role in teaching artificial intelligence — through our internet usage and voluntary sharing of personal data — we may already possess the opportunity to steer A.I. toward a bias-free, magically honest, and virtuous tool that will safeguard and assist us in an uncertain future. A.I. driven by genuine intentions — a utopian ideal, isn’t it?

Questions

What are your thoughts? Should we come up with the 10 Commandments for A.I. or should we stop all forms of dominance as soon as we witness them?


I am a photographer, writer, and designer from the Netherlands with youthful Italian roots. I love to travel and to tell stories through my work and art.