
To Outsmart A.I. Traps, We Have to Be Aware of Them


Understanding how algorithms drive the churn of our engagement can help you retain your freedom of choice.

Photo by Hasan Almasi on Unsplash

The first thing you need to understand is that the dangers are real, and that scientists and AI experts have acknowledged them.

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI.

The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls.”

The call to researchers was not to create something that cannot be controlled. The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter,” lays out detailed research priorities in an accompanying twelve-page document.

The document, available at https://deepai.org/publication/research-priorities-for-robust-and-beneficial-artificial-intelligence, details these concerns from top scientists and researchers.

To quote a portion of that concern:

…Stanford’s One-Hundred Year Study of Artificial Intelligence includes “Loss of Control of AI systems” as an area of study, specifically highlighting concerns over the possibility that

…we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes — and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? …What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an “intelligence explosion”? (Horvitz, 2014)

Artificial Intelligence has the potential to alleviate ills such as human poverty and disease, but this wildly innovative technology is taking shape upon a scaffold of ethics that is still evolving and must therefore be examined.

More and more, we are being herded and guided by algorithms we don’t understand, whose true intentions are neither adequately revealed to us nor fully comprehended even by their designers. This poses numerous ethical issues with far-reaching implications.

What is of concern is that algorithms are driving the churn of human engagement. The intentions of these designs are rooted in foundational premises that are not initially apparent. The risk is that at times we only discover the consequences after being swept up in them. For some, the consequences could be dire.

Take, for example, the way we interact digitally. The most common way people get information is through a search engine like Google. In the past, a codex of archived information was static.

The difference between a static log of information and an interactive one is that we are no longer in control of the way we access information.

Instead of us navigating how we discover and find the things we want and need, the information itself shifts and morphs around us.

While this might seem insignificant, it can have quite insidious consequences.

If you imagine disinformation in the matrix, this becomes even more problematic. In politics, governance, news, and the recording of history, control of information easily turns into manipulation.

We no longer acquire information so much as we are fed it.

Increasingly we engage with, or become dependent on, platforms that have us enmeshed in an algorithmic tango; we input preferences and tastes, and the (artificially intelligent) algorithm is rigged to anticipate and predict our future decisions and desires based on past behavior.
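As a rough illustration of that tango in code, here is a minimal, hypothetical sketch of preference prediction. Every feature name and number is invented for illustration; the point is only the shape of the mechanism: the platform keeps a running profile of your past choices and ranks every new item by its similarity to that profile.

```python
import numpy as np

# Hypothetical sketch: each item is a vector of "taste" features
# (e.g., [comedy, drama, documentary] weights). The platform profiles
# you from past behavior, then ranks new items by cosine similarity.

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def update_profile(profile, item, weight=0.1):
    # Each engagement nudges the stored profile toward that item.
    return (1 - weight) * profile + weight * item

def rank_items(profile, catalog):
    # Items most similar to your past behavior float to the top.
    return sorted(catalog, key=lambda item: cosine(profile, item), reverse=True)

profile = np.array([0.2, 0.7, 0.1])      # built from past behavior
catalog = [np.array([0.1, 0.8, 0.1]),    # "more of the same"
           np.array([0.9, 0.05, 0.05]),  # something genuinely different
           np.array([0.3, 0.3, 0.4])]

for item in rank_items(profile, catalog):
    print(item, "score:", round(cosine(profile, item), 3))
```

Even in this toy version, the item most like what you already consumed wins, and every new engagement pulls the profile further in that direction.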

The trouble is that algorithms have their own goals for engagement, and these designs cannot possibly be aligned with their users’ wants or needs in every instance.

This creates a contradiction that will necessarily collide with an individual user’s interests.

It can disrupt how humans learn what they want and need. It can stunt the natural, organic process of (often clumsy) discovery that leads to a refinement of thinking.

Personal growth is achieved through natural trial and error, but it would seem that algorithmic design might corrupt that process.

This inherent contradiction lies in the design; parameters that incorporate our desires stymie human development that happens through unfettered discovery.

Discovery within fixed, pre-set premises is guided discovery.

In essence, for the algorithm to function, having parameters means it must be biased. It must have a set goal. This means that, by design, social media platforms driven by algorithms have the potential to be weirdly limiting.

The question is, can “unpredictability” co-exist within an algorithm’s set parameters?

Is it possible for them to be unpredictable if they are designed to predict and anticipate our desires?

If they are predicting an outcome, they have to have parameters.

It becomes a sort of self-referencing loop. In order to anticipate something they must suppress a certain natural wandering or meandering inherent in the human experience.
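To make that self-referencing loop concrete, here is a small, purely hypothetical simulation (every parameter is invented): the system serves whatever best matches your profile, the served items feed back into the profile, and the same items quickly lock in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loop: recommend the 10 items most similar to the profile,
# then update the profile toward what was just served. Repeat.

catalog = rng.random((200, 5))   # 200 items described by 5 taste features
profile = rng.random(5)          # a starting taste profile

seen = set()
for round_no in range(5):
    scores = catalog @ profile                  # similarity to current tastes
    top_idx = np.argsort(scores)[-10:]          # serve the 10 best matches
    repeats = len(seen.intersection(top_idx.tolist()))
    seen.update(top_idx.tolist())
    # Feedback step: consuming the feed reshapes the profile itself.
    profile = 0.8 * profile + 0.2 * catalog[top_idx].mean(axis=0)
    print(f"round {round_no}: {repeats}/10 recommendations already served before")
```

Run it and the repeats count climbs toward 10/10 within a few rounds: under these toy assumptions the loop anticipates you precisely by suppressing the wandering described above.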

If you have ever entered preferences into a platform (think Netflix’s feed), you will recognize that sense of being corralled as the AI algorithm feeds you what it thinks you want. And because the options are pre-conceived, there is a lack of freedom and creativity.

If morality is a product of our discerning faculties, we must have the freedom of access to be able to discern. This is a basic premise of our human morality. Algorithms would seem to have the power to corrupt our moral development because they disturb this process.

The new morality being born, one that exists in a feedback loop, is antithetical to growth and to the possibility of new vistas of understanding. As humans we evolve in erratic and unusual ways, not only individually but holistically, as a whole race.

Human consciousness has always evolved and morphed in terms of a connected, shared experience. Algorithmic if-then thinking stultifies that process.

Consider social media platforms, or retargeting ad campaigns: you’ve surely experienced this exhausting discovery loop that feels like a trap. In fact, the tag-line of this very platform, Medium (no disrespect intended), is “Where Good Ideas Find You.”

Controlling how a platform keeps feeding you things you have already digested and no longer desire is tricky and hard to decipher. In choosing the settings, one is corralled into decisions that lack nuance and judgment. Discovery is controlled and maze-like.

Aside from algorithmic loops, traps, and containment settings affecting human endeavors, there is also the matter of the humans at the helm of such platforms.

We cannot exclude the fact that the personalities, preferences and motives of founders are essentially part of the premise of the algorithmic design that runs the business and keeps it profitable.

Think of Google’s business model and its many conflicts of interest. Our relationship with Google is one in which we trade our data, behavior patterns, and private interactions for tech that manages our lives and how we search for information.

In addition, if there is remuneration tied to discovery (i.e., someone benefits from how Google presents information in its search results), the discovery process is skewed, though it is not always clear who benefits and who loses.
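As a purely hypothetical sketch of that skew (no claim is made here about how Google actually ranks anything), imagine a ranker that blends genuine relevance with a payment-derived boost; even a modest paid weight reorders what you see first.

```python
# Invented example: "paid_boost" stands in for any revenue incentive
# tied to discovery. All titles and numbers are made up.

results = [
    {"title": "Independent review", "relevance": 0.92, "paid_boost": 0.00},
    {"title": "Sponsored article",  "relevance": 0.75, "paid_boost": 0.30},
    {"title": "Official docs",      "relevance": 0.88, "paid_boost": 0.00},
]

def score(result, paid_weight):
    # Blend what serves the user with what pays the platform.
    return result["relevance"] + paid_weight * result["paid_boost"]

for paid_weight in (0.0, 1.0):
    ranking = sorted(results, key=lambda r: score(r, paid_weight), reverse=True)
    print(f"paid_weight={paid_weight}:", [r["title"] for r in ranking])
```

With the paid weight off, the most relevant result leads; switch it on and the sponsored item jumps to the top, while the user has no way to see which term put it there.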

One thing is clear, however: in the tango of a commercial relationship with the likes of a dynamic index, the non-consensual exchange of information makes the algorithmic design weigh heavily in favor of the platform. They hold all the cards.

“Cancel Culture” reveals the danger of control over thought and speech between users and owners of platforms. Even if you dismiss the fear of being silenced for holding views out of line with those who own and run the platform, the inherent conflict remains: the underlying business model is geared to serve the primary interests of its owners.

A most macabre example of these AI steering tactics was exposed in the Wall Street Journal’s Facebook Files (https://www.wsj.com/articles/the-facebook-files-11631713039), an investigation that revealed internal research into the fun-house mirroring tactics inherent in Instagram’s algorithmic machinations.

That is, the AI steered users who interacted with porn and violence down a wormhole of addictive engagement.

The danger is worse for people under, say, 25 years of age.

Adults about 45 and over have the perspective to understand the contrast with how life was before such technology existed. They are equipped with the life experience to identify when a platform is reshaping their interactions with questionable motives.

Younger generations do not have as mature a perspective, nor do they have a deep connection to history as a frame of reference. This is where they can be misguided and misinformed.

Does anyone doubt what Mark Zuckerberg will do with children once they are ensnared inside his Metaverse? He does not disguise his intentions and, in fact, he is richly rewarded for his goals.

He was already creating an Instagram for children long before these containment feeds were discovered. Does anyone doubt anymore that history can be redrawn at the push of a button? There is a deeply troublesome, insidious dark side to innovation where there is no oversight or close inspection.

That is why people like Elon Musk have fought for, and created, initiatives for keeping AI honest, such as platforms like OpenAI that support openness and collaboration. The community of scientists leading the research, those creating the technology, is acutely aware of the dangers and has admitted publicly that it cannot yet fathom the unintended consequences of AI and Deep Learning.

AI and its algorithmic design can become electrified Frankensteins: composite simulacra of ourselves with no defined or intentionally calibrated goals. The specter is more frightening because the monster now moves at superhuman speeds.

That we imagine outside ourselves and our physical environment, broader, wider, vaster, into the “cosmic beyond,” is a testament to our human morality and sentience. But we cannot be satisfied with celebrating innovation for innovation’s sake. If we do not at least identify and articulate these inherent flaws, we will most certainly be carried away by them, unable to stop or control them.

If innovations in AI and Deep Learning are to serve human needs, their root premises demand that we define what it means to be human and to have human needs. This cannot be parsed out of the equation.

The point is not a neo-Luddite entreaty, but rather a re-imagining of how we understand what it means to be human, and a commitment to preserving our ability to advance our moral obligations and self-awareness.

The superhuman speed of A.I. calculations is also a cause for concern.

As algorithms leverage mass collections of data, and the power of electricity leverages the speed of our calculations, we must be aware that we are building technology to serve our human needs but at superhuman speed and scale.

What are the effects of this natural speed vs. hyper-speed relationship?

If one were to teach a toddler to walk on a treadmill set to the speed of an Olympic athlete, something would be destroyed before it could even begin to develop or progress in a natural fashion.

We are unleashing massive data, at hyper-speed, into the hands of a rigid and fixed Artificial Intelligence and allowing it to influence our human interactions. In essence, we believe we can clone our sentience and then let it run on auto-pilot.

This is where we undercut our humanity. The human race is not static; it is constantly evolving. However, in premising that we can digitize our minds and experiences and then clone them into a digital matrix, we are, in some ways, stultifying our humanity if we get caught up in it.

Recent innovations in algorithmic design have exposed deep ethical conundrums in how computers calculate morality.

Algorithms, like it or not, are recreating how we communicate and interact. It is not just the overwhelming speed and data that can crush us; it is also the fundamental sea change in how we see our human selves.
