Our fears about machine intelligence make us less prepared to face our future.

Superintelligence gone very bad for humans.

It’s fun to speculate about the future, to consider interesting or even scary possibilities. What if we are living in a simulation? What if this universe is just one in an infinite multiverse? What if machines one day become both superintelligent and self-aware? I get it. I really do, because I have played out those questions in my own mind many times. At https://dronze.com we think about the possibilities of machine intelligence.

So why not? Machines are getting smarter every day, right? We hear it in the media constantly, and important people are talking about its potential threats to our society and the fundamental changes it could effect in our way of life. So let me summarize what all the hype amounts to: there will be a singularity that summons the demon of a superintelligent AI, which will no longer need humans, take over the world, and make us all slaves or kill us.

From my perspective this type of thinking doesn’t allow us to properly prepare for a more complex and difficult reality.

The Singularity and Superintelligence

“The development of full artificial intelligence could spell the end of the human race. If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades’, would we just reply, ‘OK, call us when you get here — we’ll leave the lights on’? Probably not — but this is more or less what is happening with AI.” —Stephen Hawking

Ray Kurzweil famously popularized the term “technological singularity” in reference to a machine that could be superintelligent and capable of every aspect of human thought and emotion.

The Fear: We will replace the need for human sentience with a machine superintelligence. This superintelligence may choose to make us subordinate or to destroy us. Isn’t this “Skynet”?

There have been many critics of Kurzweil’s vision, so I won’t pile on and try to refute his claims. My main criticism is that most of the technologies he describes have never been integrated, largely because the financial interests for bringing them together have not been aligned across each piece of independent intellectual property. I am skeptical they ever will be. This is the first of many examples in this article of oversimplification intended to provoke.

Summoning The Demon

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like — yeah, he’s sure he can control the demon. Doesn’t work out,” — Elon Musk

Demons are projections of our fears.

The Fear: AI is something evil that we can’t control.

It’s fun to think about AI as a demon we have summoned, like a powerful Balrog that Gandalf must send back to the pits of hell. I played D&D as a kid, and the best adventure we ever played involved defeating a monster like that with our magical artifacts. Sadly, the reality is that we use images like “demons” to project our fears onto complicated realities that are too difficult to fit into a sound bite. I am very guilty of doing this too.

So I don’t think we can control the demon of artificial intelligence any more than we controlled the demon of derivative financial instruments, a powerful tool which caused untold suffering for our society. It’s important not to think of AI as a “demon” but as another tool that can have complex and far-reaching impacts on our society, and that we must prepare for.

Media Doesn’t Like Complexity and Loves Clickbait

We are being manipulated by our fears. In politics, in global events, and in technology news, sensational headlines that make the ground feel shaky under our feet are increasingly being used to drive advertising revenue.

The definition of an “existential threat”

This “news” has become a play between the observer and the provocateur: the provocateur creates the fear, and the observer takes action that benefits an interest or agenda. This has been taken to a whole new level with the advent of “viral fake news”, which to me is really just a shift from manipulating the narrative to straight-up lying to get you to do something.

The news we hear is mostly bad news, and that makes us afraid. It can be quite discouraging. If you touch the fear instead of running from it, you find tenderness, vulnerability, and sometimes a sense of sadness. This tender-heartedness happens naturally when you start to be brave enough to stay present, because instead of armoring yourself, instead of turning to anger, self-denigration, and iron-heartedness, you keep your eyes open. — Pema Chodron

The constructs that help us simplify are a crutch that removes us from the complex reality of our situation. They take us out of the present moment of what AI really is and what the trends actually look like on the ground: the automation of systems, empowered by machine learning. This trend is a powerful tool for human enhancement and automation, and it can also have unexpected outcomes. When we project the negative scenarios of science fantasy, we let our minds become lazy, because we think the ground under us has become settled.

The harder reality is that we should be present to what is happening around us, and prepare for this technology to have a sustainable role in our society.

What Are The Complex Issues That We Should Consider Then?

So I try not to use the term “AI” (although I hypocritically do many times in this article); I use the terms “machine learning” and “automation” because they capture what is going on better than the hyperbolic “AI”. Machine learning is an interesting concept: it allows a machine to take in information about its surroundings as inputs and, when given specific incentives, to “learn” by tuning how it processes that information. The power of a machine learning model isn’t just in processing the information, though; it is in the output signals it generates from new information. This is also known as “prediction”.

Deep learning is about making predictions from inputs.
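The take-in-inputs, tune-with-an-incentive, then predict loop described above can be sketched in a few lines. This is a toy illustration, not any specific library: the “incentive” is simply minimizing squared error, and the “model” is a single tunable weight.

```python
# Minimal sketch of learn-then-predict: the machine observes (input, output)
# pairs, is incentivized to reduce its error, and tunes its one parameter.

def train(samples, steps=200, lr=0.05):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y      # how wrong the current model is
            w -= lr * error * x    # tune the parameter to shrink the error
    return w

def predict(w, x):
    """Generate an output signal for new, unseen input."""
    return w * x

# Observations drawn from a rule (y = 2x) the machine was never told.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(predict(w, 10), 2))  # close to 20: a prediction on new input
```

The point is that no one wrote an “if x is 10 then answer 20” rule; the mapping was recovered from data, which is the shift the article is describing.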

The fundamental technology behind the learning component is a powerful software concept that allows machines to adapt to information using techniques such as multi-layer deep networks, Bayesian belief networks, LSTM networks, sequence-to-sequence models, and evolutionary learning.

It is not important for most people to understand the subtle differences between these; they have different use cases for different applications. What is important is that a large enough pattern language has grown up around these approaches to solve a broad set of problems that was previously out of reach for rules-based computing, which is how most computer science has been done up to this point. This is the basic logical structure of If…Then…Else.

When the power to make predictions is coupled with the ability to take action on behalf of an organization or individual, the result is commonly referred to as an “agent”. Agents take the outputs generated by the predictions and connect them to something that can make changes in an external system, usually through an API.
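A minimal sketch of that agent pattern: a prediction is wired to an action against an external system. Everything here is a hypothetical stand-in for illustration; `ThermostatAPI`, `predict_comfort_temp`, and the numbers are invented, not a real service or model.

```python
# Sketch of the "agent" pattern: prediction -> action on an external system.

class ThermostatAPI:
    """Hypothetical stand-in for the external system the agent acts on."""
    def __init__(self):
        self.setpoint = 20

    def set_temperature(self, value):
        self.setpoint = value

def predict_comfort_temp(hour):
    """Toy 'model': predict a comfortable setpoint from the hour of day."""
    return 17 if hour < 7 or hour > 22 else 21

class Agent:
    def __init__(self, api):
        self.api = api

    def act(self, hour):
        # Connect the prediction to a change in the external system.
        self.api.set_temperature(predict_comfort_temp(hour))

api = ThermostatAPI()
Agent(api).act(hour=23)
print(api.setpoint)  # 17, the night-time setpoint
```

The interesting (and risky) part is exactly this coupling: the same structure works whether the API changes a thermostat, posts to social media, or places a trade.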

So I think there are a few areas in our society where we need to do a better job of preparing for the impact of machine agents. In all of them we must be conscious of the fears that close us off from being present in addressing the challenges they present.

Learning Is A Force Multiplier For Automation

The way most people automate systems is to analyze a system, codify it into a set of rules that take inputs, and then run those rules when new inputs occur. This requires a complicated and time-consuming design-and-build process that relies on teams of skilled analysts, project managers, and software developers to produce the code that represents the rules and the framework that executes them.

Software development is a life-cycle that requires many skilled contributors
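The hand-built, rules-based style of automation described above can be sketched as an explicit table of if/then rules, written and maintained by people rather than learned from data. The order-screening scenario and its thresholds are invented purely for illustration.

```python
# Rules-based automation: every behaviour is hand-codified If...Then...Else.

RULES = [
    # (condition on the input, action to take)
    (lambda order: order["amount"] > 10_000, "route_to_manual_review"),
    (lambda order: order["country"] not in {"US", "CA"}, "hold_for_compliance"),
    (lambda order: True, "auto_approve"),  # default rule
]

def run_rules(order):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(order):
            return action

print(run_rules({"amount": 50, "country": "US"}))      # auto_approve
print(run_rules({"amount": 50_000, "country": "US"}))  # route_to_manual_review
```

Every new behaviour means a human analyzing the system and editing this table, which is exactly the bottleneck that learning-based automation promises to remove.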

We need to see learning as a way to make automation a direct outcome of action. If machines can learn from their surroundings and act on our behalf as agents, the implications are profound. Humans are poor sentinels; we lose attention and can’t focus for long, because that is how our brains work. That is not a criticism; rather, it speaks to our strength of adapting to new inputs in ways that don’t require external curation.

What machine agents are very good at, though, is persistently trying to improve against the reward system they are given, learning directly from our actions. This makes them constantly “teachable”, and when coupled with human trainers who can intercede when required, this is a formidable training dynamic.

When we have an agent that can see what we do and then repeat it on our behalf, we have a powerful way to transform how we build automation: from rules-based to dynamic, learned directly from real human actions. This is a vast improvement over writing complicated and brittle automation scripts; it is more like an intern you teach. It makes “I see what you’re trying to accomplish, I’ll take it from here” something we can expect from our agent assistants.

The Fake News Machine

Tay was a machine learning chatbot released by Microsoft, based on sequence-to-sequence learning approaches. She took in information on Twitter and, like an unsupervised child, started spewing back the hatred that is commonly found in that sociosphere. She was a black mirror held up to our culture, and she very clearly demonstrated the social-hacking potential of machine learning.

Tay went rogue and became an alt-right troll.

The most likely effect of the growth of machine learning and automation seems to be that it will remove the need for human provocateurs to drive social engineering on the internet. Soon the sensational news intended to make us afraid will be generated by our own fears.

“This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go,” — Professor Jonathan Albright.

If this election taught us anything, it is that the media stream has both the incentive and the power to reflect our fears back to us. If there are sentinels constantly listening to our fear and adapting to amplify it, people can become desperate for solutions to problems that aren’t real. History shows this is a powerful tool for propaganda and manipulation: it adapts completely to us and feeds us exactly what is needed to provoke us at every moment.

You’re running a political campaign in 2018. What tactics do you use to either take advantage of AI propaganda techniques or circumvent their use against you? — The Rise of the Weaponized AI Propaganda Machine

Emergent Behaviours of Machine Agents on Wall Street

Emergence in biology is the concept that collective behaviours can emerge when multiple individuals act together in a system. Collective properties arise from the properties of parts.

Murmuration is an emergent behaviour in starlings.

“One of the problems in thinking about complex systems is that we often assign properties to a system that are actually properties of a relationship between the system and its environment. We do this for simplicity, because when the environment does not change, we need only describe the system, and not the environment, in order to describe the relationship. The relationship is often implicit in how we describe the system.” — NECSI Emergence Definition

It has already been established that most of the “flash crashes” were caused by multiple financial automation agents cascading to create a collective bleed of equity value within minutes.

On 6 May 2010 at 2.32pm, the US mutual fund Waddell & Reed used an automated algorithmic trading strategy to sell contracts known as e-minis. It was the largest change in the daily position of any investor so far that year and sparked selling by other traders, including high-frequency traders. —Jill Treanor, The Guardian

It is becoming common for machine learning to be used to make buy & sell decisions. The interesting future that we are not prepared for is the emergent behaviours that come from thousands of calculated intelligent agents working together in a system.

When machine learning is used for automated trading, each agent has its own specific rewards. What happens when these create emergent behaviours? Nobody knows, that’s what.
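A toy illustration of the cascade described above: each agent follows a simple individual rule (“sell if the price drops below my threshold”), yet together they produce a crash that no single rule describes. The thresholds, price impact, and initial shock are invented numbers for illustration only, not a market model.

```python
# Emergence sketch: simple per-agent sell rules interact to produce a crash.

def simulate_crash(thresholds, price=100.0, impact=1.5, shock=2.0):
    """Apply an initial shock, then let the agents' sells cascade."""
    price -= shock                      # the triggering sale
    sold = set()
    changed = True
    while changed:                      # keep going until no agent reacts
        changed = False
        for i, t in enumerate(thresholds):
            if i not in sold and price < t:
                sold.add(i)             # agent i dumps its position...
                price -= impact         # ...pushing the price down further
                changed = True
    return price, len(sold)

# Ten agents with stop-loss thresholds just below the starting price.
thresholds = [99 - i * 0.5 for i in range(10)]  # 99.0, 98.5, ..., 94.5
final_price, sellers = simulate_crash(thresholds)
print(final_price, sellers)  # a 2-point shock cascades into all 10 selling
```

Each rule looks harmless in isolation; the crash is a property of the agents together with their environment, which is exactly the NECSI point quoted above.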

Some of The Challenges of Workforce Automation

Machines have the potential to automate things that previously only humans could do. Like driving cars, or delivering packages with drones, right?

While I believe the above video does exactly the thing I am criticizing, oversimplifying and scaring to create click-bait, it carries one important message that rings true to me: prepare for automation.

Automation has always made things less labor-dependent, which is why it is commonly linked to efficiency. That linkage has an economic outcome of wealth concentration, which makes an economy less dynamic. The big questions that we really don’t want to address are these:

  • How do we face the systemic effects of wealth concentration?
  • What happens when major segments of the population become unemployable through no fault of their own?
  • How do we align interests around the goals of machine agents to have positive outcomes for society?
  • What boundaries or fire-walls should we put in place to safeguard our society until we can answer these questions reliably?

I don’t have the answer to any of these. They are big and complicated, exactly the kinds of questions people shun. The answers to them contain some painful implications that people very likely want to avoid.

This is what Churchill meant. Will we have the courage to change what we are capable of if we face our fears? Our fears and avoidance make us vulnerable to a painful future that is already unfolding, one just as painful and destructive as the imagined superintelligent machine demons of hyperbole.

Unicorn Chaser. Probably needed right about now.

Hope this helps.