Artificial Intelligence (AI) Community Is Playing A Risky Game With Us

Doc Huston
A Passion to Evolve
15 min read · Oct 12, 2016

--

Weapons of mass destruction (WMD) come in many forms: nukes, bioweapons, nanotechnology, cyber-weapons. So far, each one has required a human, be it a malevolent person or group, or tribalism gone stark-raving mad (MAD doctrine), to initiate a dystopian nightmare.

Within the next couple of decades this will change with the development of artificial general intelligence (AGI). Then, for the first time, a dystopian WMD nightmare could be visited upon us without a human initiating it.

Beyond the novelty of this existential risk, what is most disturbing and perplexing is the attitude many in the artificial intelligence (AI) community have toward this risk.

Context

There is no doubt that the benefits to humanity from AI will be wondrous and breathtaking in scope.

There is also no doubt that basic AI (i.e., human-level intelligence) will next evolve to a societal level of intelligence: AGI.

When — not if — AGI emerges it seems certain that it will advance to the point of an “intelligence explosion” and likely become smarter than all of humanity combined. Whether this precipitates sentient self-awareness is unclear.

What is clear, however, is that AGI will be the single most powerful invention ever made. More to the point, like all technologies, it will cut both ways — good and bad.

Whether an AGI intelligence explosion leads to an existential crisis for humanity is unclear. It is, however, certainly plausible as a matter of design, accident, nonlinear event or self-preservation. This means that properly designing AGI at the earliest foundational AI level is probably the single greatest challenge civilization has ever faced.

Of course, everyone hopes the AI community succeeds in their efforts to manifest a wholly synergistic, non-adversarial AGI. Threading this AGI needle, which means both avoiding an aggressive, adversarial dark side and adequately disambiguating our positive values and ethics, then translating them into code before an intelligence explosion occurs, makes the Manhattan Project look like child’s play.

Consequently, no one in the AI community should be dismissive about the magnitude of this challenge (see here and here). Nor should anyone be so cavalier as to assume we can simply “cross that bridge when we come to it.” The fact is, when an AGI intelligence explosion does occur, the window of opportunity to halt a bad outcome could easily close faster than anyone could react.

Whatever the outcome, it will not be for lack of effort or simple malice. Rather, it will be the result of paternalistic hubris, success in corralling complexity, and/or nonlinear emergent properties. In this context, the pace of events is accelerating. The landscape ahead is troubling. And the clock is ticking.

Paternalistic Hubris

Over the last few years, a lot of very smart people (e.g., Bill Joy, Elon Musk, Stephen Hawking, Bill Gates) and interested organizations have voiced reasoned concerns about the existential risks associated with AI systems in general and the next generation of AGI systems in particular. As the awareness of these concerns has spread and the number of AI applications and advances has become more visible, the AI community has grown increasingly anxious about a public backlash.

To tamp down public concerns, many in the AI community initially sought to erect a fairly traditional PR firewall. This took the form of your basic binary PR campaign — ignore entertaining but silly Hollywood versions of AI and trust us that it will not happen.

Of course, given the seriousness of the potential risk and the ubiquitous 24/7 media world today’s digital cognoscenti inhabit, this PR firewall was doomed from the start. However, what was worse is that this public display of paternalistic hubris attempted to defy gravity and, as a result, cast the AI community generally in a role akin to climate change deniers.

Indeed, as more and more people began examining existing AI systems, it became readily apparent that some basic AI applications were already going awry. While the problems manifested were relatively limited (biases and prejudices society has sought to eliminate for decades), they were, nevertheless, symptomatic of the larger, more fundamental concerns. Namely, how can the coming AGI systems be adequately designed and developed so as to ensure they preempt existential risks?

No longer able to flatly dismiss the potential for AI problems publicly, some in the AI community moved to establish organizations aimed at grappling with the public concerns. Note that up until the last year or so there were only a couple of groups focused on issues related to AI going awry (e.g., the Machine Intelligence Research Institute). Then, suddenly, new AI-related organizations began proliferating. For example:

  • OpenAI — Elon Musk’s group researching prophylactic AI measures
  • Future of Life Institute — Max Tegmark and MIT working to mitigate existential risks
  • AI-100 — Stanford University and others studying societal effects of AI applications
  • Partnership on AI — consortium of major tech companies focused on easing public concerns

The orientation and ambition of these groups tend to fall into two buckets. One reflects concerned realists. These groups are substantively attempting to address serious foundational and long-term concerns related to AGI design and development and to preempt a bad intelligence-explosion outcome.

The other groups are pursuing the next logical traditional PR gambit — information warfare. In particular, it appears the immediate goal for these groups is three-fold:

  • limit discussion to the near-term (i.e., prior to 2030)
  • focus solely on positive benefits of basic AI applications
  • play down AGI and longer term concerns.

The result is a set of classic politically oriented groups aimed at fostering an AI shell game: contrived astroturf organizations with expert advisory committees, conducting colloquia, issuing reports, and giving speeches and interviews to generate favorable media coverage of AI systems and the AI community. Thus, by providing a diversity of resources, these groups supply the patina of a respectable consensus and become the dominant, authoritative face and voice of the AI community.

But these new PR groups are obscuring two exceedingly noteworthy missing elements. One is the time frame. This is consequential because the prevailing consensus is that somewhere around 2030 AGI starts to become plausible, and thus the concerns become more immediate.

The other missing element is who is conspicuously absent from all these groups: namely, active participation by our military and intelligence agencies and those of other countries. Even the latest U.S. and U.K. reports ignore this issue, though the U.S. report does note China is now ahead on R&D. This is an extraordinary and consequential omission because every government on the planet needs to operate on the assumption that AGI will be developed and an intelligence explosion will follow.

Indeed, having experienced cyber-espionage and cyber-warfare and understanding the implications, every government is engaged in an all-out race to avoid second place, because second place may not exist with AGI. This adds another level of paternalistic hubris in three ways.

  • While governments are clearly lurking and communicating with all AI groups, the lack of visible participation shields their activities from scrutiny and thus makes all visible public efforts to defuse AGI existential risks mere window dressing.
  • The AGI arms-race guarantees that the initial design, development and application of these systems will be focused on aggressive and adversarial objectives.
  • Fear of being second in the AGI arms-race ensures those involved will not exercise the maximum amount of caution in the design and development necessary to preclude existential risks.

Of course, all this paternalistic hubris assumes AI and AGI development unfolds in a linear sequence and the global sociopolitical environment remains subdued. Even if true (which seems highly improbable), this would only temporarily keep the public at bay. More importantly, it does absolutely nothing to address the real existential concerns.

Thus, contrary to public rhetoric, images and protests from the AI community, we are now racing toward a uniquely dangerous precipice. There is a pressing need to change this dynamic domestically and globally.

Corralling Complexity

By their very nature AI systems and applications grow increasingly complex as they advance. This growing complexity inherently carries with it a number of old and new problems that are, to say the least, daunting in the AI and AGI context.

Basic code problem

An obvious concern in the design and development of safe AGI is today’s problem with hacking. Said differently, if no one today can write software code that is immune to hacking, how can anyone create secure code for AGI systems, which will be far more complex, without creating similar vulnerabilities?

This is especially problematic because, by their nature, AGI systems will be more powerful in handling an ever wider array of public and private operational, management and decision-making functions. When these systems are hacked, the potential for both expected and unexpected large-scale problems increases significantly. Further, such problems can easily spill over into other systems and create a cascade of problems.
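To make the point concrete, here is a minimal, purely illustrative Python sketch (the scenario and function names are hypothetical, not drawn from any real AI system): a convenience shortcut like eval() in code that executes model-generated input quietly becomes an arbitrary-code-execution hole, while a safer version must painstakingly whitelist what is allowed.

```python
import ast
import operator

def run_model_suggestion_unsafe(expression: str) -> float:
    # DANGEROUS: eval() will execute any Python the caller (or a compromised model)
    # supplies; e.g. "__import__('os').system('...')" runs with this process's privileges.
    return eval(expression)

# Safer sketch: parse the expression and allow only numeric literals and basic arithmetic.
_SAFE_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
             ast.Mult: operator.mul, ast.Div: operator.truediv}

def run_model_suggestion_safer(expression: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _SAFE_OPS:
            return _SAFE_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("disallowed construct")
    return _eval(ast.parse(expression, mode="eval"))

print(run_model_suggestion_safer("2 + 3 * 4"))  # 14
```

The point is not this particular fix but the asymmetry: the vulnerable version is shorter, easier to write, and passes the same functional tests.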

Task-Goal problem

Writing software code for AI tasks and goals is often far more nuanced and ambiguous than most people realize. Indeed, financial AI systems have already demonstrated such problems.

Additionally, it is easy for competitive pressures to cause carelessness, insufficient depth of domain knowledge or contextual understanding, and/or a basic difficulty in translating desired human values and ethical concerns into code. As a result, code can end up fundamentally inadequate, deficient or problematic. Moreover, for

AI to protect our societal values…will be really hard to do when we as a society don’t agree what exactly we want to protect in the first place….[I]t’s computationally tricky to differentiate meaningful statistical patterns that can inform positive social interventions from ones that rely on variables that are actually proxies for protected classes in a way that reinforces traditional bias….Suggesting that we should build our machines to follow the law is a sincere proposition, but also a naive one….[With] judgment calls, how do we decide which values beyond the law are worth teaching the machines? [Miranda Bogen]

Inasmuch as we are already seeing missteps in the coding of basic AI task- and goal-oriented algorithms and programs, the practical demands of adequately precluding bigger AGI problems become more perilous as system complexity grows. Thus, without a doubt, creating error-free, fully controllable, safeguarded AGI systems will be far more challenging than is generally acknowledged.
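As a toy illustration of how a coded goal can drift from the intended one, here is a small hypothetical Python sketch (the items, weights and "click-bait" factor are invented for the example): a recommender that optimizes a proxy objective of predicted clicks quietly sacrifices the long-term value the designers actually wanted.

```python
import random

random.seed(0)

# Each hypothetical item has a true long-term value to the user and a click-bait factor.
items = [{"name": f"item_{i}",
          "true_value": random.uniform(0, 1),
          "clickbait": random.uniform(0, 1)} for i in range(1000)]

def coded_objective(item):
    # What got *written down*: predicted clicks, which rewards click-bait as much as value.
    return 0.5 * item["true_value"] + 0.9 * item["clickbait"]

def intended_objective(item):
    # What was actually *wanted*: long-term value to the user.
    return item["true_value"]

recommended = sorted(items, key=coded_objective, reverse=True)[:10]
avg_value = sum(intended_objective(i) for i in recommended) / len(recommended)
best_possible = sum(sorted((i["true_value"] for i in items), reverse=True)[:10]) / 10

print(f"value delivered by optimizing the proxy:       {avg_value:.2f}")
print(f"value available had we optimized the real goal: {best_possible:.2f}")
```

Nothing in the proxy version is "buggy" in the conventional sense; the gap between the two printed numbers is entirely a specification problem.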

Data problem

In order for AI and AGI systems to work, some programmed task, goal or algorithm must analyze massive amounts of data. Problems can arise with the data itself, an algorithm, the learning and/or analysis program, and/or the clarity, adequacy or appropriateness of the task or goal itself. Thus, all of these coding and algorithmic problems lead to issues related to whether and how machine learning systems, both supervised and unsupervised, can identify and overcome these problems in the data studied.

The data sets themselves are seemingly the simplest of these problems. Setting aside the quality of data issue (not inconsequential), the data sets employed must reflect a large enough sample to be appropriately representative of the task, goal or audience being served.

Today, most of the AI applications have subject domains (e.g., cats, faces, and chess) that tend to be narrowly defined or highly restricted. Consequently, the representative data sets are relatively easy to collect and monitor for problems.

Furthermore, today, before data is actually processed in supervised machine learning systems, it is often profiled and categorized by humans. De facto, this means that biases are easily incorporated. Examples of how these biases have crept into data sets and generated undesirable problems and bad results are now regularly chronicled.
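A hedged, synthetic illustration of this point (the "hiring" data, groups and thresholds below are fabricated for the example): if human annotators historically applied a stricter standard to one group, even the simplest model trained on those labels reproduces the gap for equally skilled candidates.

```python
import random

random.seed(1)

def make_candidate():
    group = random.choice(["group_a", "group_b"])
    skill = random.uniform(0, 1)
    # Biased historical label: group_b needs noticeably higher skill to be marked "hire".
    threshold = 0.5 if group == "group_a" else 0.7
    return group, skill, skill > threshold

data = [make_candidate() for _ in range(20000)]

# A deliberately simple "model": estimate P(hire | group, skill bucket) from the labels.
def train(samples, buckets=10):
    counts = {}
    for group, skill, label in samples:
        key = (group, int(skill * buckets))
        pos, tot = counts.get(key, (0, 0))
        counts[key] = (pos + int(label), tot + 1)
    return {k: pos / tot for k, (pos, tot) in counts.items()}

model = train(data)

# Two equally skilled candidates get very different predicted hire rates.
skill = 0.65
for group in ["group_a", "group_b"]:
    print(group, round(model[(group, int(skill * 10))], 2))
```

The model never "sees" any explicit instruction to discriminate; it simply learns the annotators' history, which is exactly how such biases get regularly chronicled in deployed systems.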

Data complexity issues are certain to grow with many AGI situations, if only because the tasks, goals or audience will require society wide representation. For example, beyond basic demographics, the data sets employed are increasingly likely to need additional, multidimensional socioeconomic, psychographic, and other micro-targeting categories used by advertisers and marketers.

All this data complexity leads to tangential (but, again, not inconsequential) issues related to privacy, hacking, encryption and dynamic updating. This is yet another, albeit separate, can of worms that must somehow be sorted out before such data is used in AGI systems and creates problems.

Next level problem

Some novel issues and potential problems are already visible as AI evolves toward AGI. One is the increased reliance on unsupervised machine learning systems. Another arises from software that is able to write and/or correct its own code on the fly (e.g., Viv).

In both instances a fundamental problem is that the actual processes generating results or decisions often constitute a “black-box mystery.” In other words, it is increasingly difficult to evaluate, interpret or judge how these systems arrived at their results, and/or the appropriateness or accuracy of those results, decisions or actions.

Consequently, identifying and correcting problems adequately and in a timely manner, even after something problematic or disconcerting occurs, may be difficult and, in some instances (e.g., quantum systems), virtually impossible.
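A minimal sketch of why this is so (a toy two-layer network on the XOR problem, using only NumPy; not any particular production system): the trained system behaves correctly, yet its entire "reasoning" is a handful of real-valued weights that offer no human-readable explanation for any individual answer.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                      # plain gradient descent on squared error
    h = sigmoid(X @ W1 + b1)                # hidden activations
    out = sigmoid(h @ W2 + b2)              # network output
    d_out = (out - y) * out * (1 - out)     # backpropagated error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)      # backpropagated error at the hidden layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2).ravel())   # typically close to [0, 1, 1, 0]: the behavior is right...
print(np.round(W1, 2))            # ...but these learned weights offer no human-readable
print(np.round(W2, 2))            # account of the "reasoning" behind any single answer.
```

If a four-input toy like this already resists inspection, the difficulty of auditing a sprawling, self-updating AGI stack after something goes wrong is easy to imagine.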

Finally, all of the problems discussed thus far are exacerbated as AI systems and their tasks and goals become linked, stacked, integrated and/or aggregated into overarching AGI systems.

As a result, any fundamental lower-level flaw, coding error or conflict in an algorithmic routine, task or goal could unexpectedly change or negatively impact a top-level process or expected result. Unfortunately, as AI and AGI systems become more common and complex, such unexpected results are certain to occur with increasing frequency.

Nonlinear dynamics

By their very nature, AGI systems will be doing hundreds or thousands of tasks, many of them comprising multiple, complex steps. Moreover, this will be manifested through incredibly rapid and powerful feedback loops:

  • Increasingly, sensors will enable machines to effectively “feel and know” the real world
  • Increasingly, AI techniques will unlock ever more “unsupervised” machine learning
  • Increasingly, powerful APIs will turn machine decisions into autonomous “concrete action”
  • Increasingly, cloud services will enable AI systems to share their “own experiences” together
  • Iteratively, this process increases the diversity of AI scenarios and quality of predictions

Thus, as AGI systems become better at sensing, feeling and knowing the real world, they will grow better at identifying patterns and anomalies. Increased success in identifying patterns and anomalies facilitates better decision making for actions pursued.

Feedback from individual actions taken will help all AGI systems to collectively learn from their experiences. Shared learning increases the diversity and quality of scenarios and predictions.

In other words, traces left in the environment by one AGI action stimulate improved performance of the next AGI action by the same or a different agent. As such, these feedback loops support efficient collaboration between extremely simple agents that lack any intelligence or awareness of each other.

Collectively, this is a very powerful scenario-building and predictive capability, one that is certain to grow exponentially toward the creation of a shared knowledge base. More importantly, AGI will be far faster at both scenario building and generating high-probability predictions than any individual human or institution.

Simply put, these feedback loops have the potential to produce complex, seemingly intelligent systems without any planning, control, or even direct communication between the agents. Said differently, AGI is likely to become an autonomous evolving system.

Of course, this type of collective evolving behavior is common in many systems. Ants, for example, interact using pheromones left along their paths, which makes the colony as a whole seem to behave intelligently.

In this respect, data is like pheromones for machines. However, unlike ants, the evolution of self-learning machines will probably give rise to unexpected emergent properties in the AGI ecosystem.

So, eventually, the growth of shared cloud experiences and learning is likely to start resembling a special kind of synergy. The AI community knows this as “stigmergy.” More generally, it is a form of spontaneous self-organization.
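A tiny Python sketch of stigmergy (the classic "double bridge" ant demonstration, offered here only as an analogy for data-as-pheromone): agents never communicate directly, they only read and write a shared trace in the environment, yet the colony collectively converges on the shorter route.

```python
import random

random.seed(2)

lengths = {"short": 1.0, "long": 2.0}      # two routes between nest and food
pheromone = {"short": 1.0, "long": 1.0}    # the shared environmental trace
EVAPORATION = 0.02

for step in range(2000):
    total = pheromone["short"] + pheromone["long"]
    # Each agent's only "coordination" is reading the trace left by earlier agents...
    route = "short" if random.random() < pheromone["short"] / total else "long"
    # ...and writing to it: shorter trips deposit pheromone at a higher rate.
    pheromone[route] += 1.0 / lengths[route]
    for r in pheromone:                     # the environment slowly forgets
        pheromone[r] *= (1.0 - EVAPORATION)

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"share of trace on the short route after 2000 steps: {share:.2f}")
```

Swap "pheromone" for shared cloud data and "route choice" for a model decision, and the analogy to machine stigmergy, with all its potential for unplanned emergent behavior, is direct.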

However defined, this is a mechanism of indirect coordination, through the environment, between agents and their actions that can lead to nonlinear emergent behavior. Indeed, it is akin to what happens in our brain-mind system. In this respect, it is important to note that other evolving systems have repeatedly demonstrated that

evolution didn’t have to discover, painstakingly, all components of some complex…structure of behavior. Aggregates of things interacting in nonlinear ways make for situations pregnant with emergent dynamic possibilities….When evolution takes a really big step, it’s this jump from a collection of individuals at one level forming a single individual at the next level. [Drexler]

For humanity, the species at the apex of biological evolution and self-organized into a technological civilization, the nonlinear emergence of autonomous AGI systems as a new type of evolution in the cosmos seems inevitable. It is humanity’s progeny and prodigy, and thus a simple extension of a larger cosmic evolutionary process.

Illumination Needed

The unspoken reality is that we are in the process of using AI and AGI systems to redesign and restructure the operation, management and decision-making systems for the entirety of our civilization’s infrastructure and social organization. The scale, swiftness and consequences of this enterprise are totally without precedent and anything but trivial.

Unfortunately, trust is in short supply and the pace of advancement is accelerating. We need and deserve more than rosy paternalistic hubris. We need the AI community to be upfront, open and transparent about how AGI is expected to be, simultaneously, the most important and most dangerous invention our species will ever create.

It is obvious that as AI and AGI systems become linked, layered, integrated and aggregated, the potential for and likelihood of black-swan and nonlinear events grow exponentially with each advance and application. In this respect, we cannot assume that major tech companies alone will be up to the task.

[W]e should be skeptical that this group of companies can unilaterally define what values, and what expression of those values, are worth protecting — particularly when most…don’t reflect the country’s diversity of backgrounds, let alone the diversity of opinions….Unfortunately, the larger debate over what about humanity is worth cultivating in the face of technological advances is being obscured…[and] may already be marginalizing large swaths of society…left out of a critical, national conversation on what it means to be human, and what about being human we want to preserve. [Miranda Bogen]

None of this is to suggest it is useful to attack and prosecute every AI design and application (e.g., self-driving vehicles and the trolley problem), as is now done with climate change. Rather, it is for the AI community to openly acknowledge that

  • the complexity of AI systems implies imperfection
  • while AI systems will outperform humans in many tasks, system failures are inevitable
  • with the benefits of AI systems come risks, both short term and in the not-too-distant future
  • AGI can lead to a bad outcome without more collective human involvement in its design and development
  • by working together (i.e., society in the loop) we could create an AGI capable of advancing civilization toward a post-scarcity, self-actualizing world

The fact is, the emergence of AGI looks to be not only the defining technological event of the 21st century, but the defining event for all of humanity.

Beyond the policy and regulatory issues, what we need is a global AGI design effort that has society in the loop and systematically addresses the concerns emanating from worst-case scenarios. Short of that, we, as a society, should aggressively and openly engage in an all-out effort equal to that of the Manhattan Project or the project to put men on the moon. Indeed, President Obama suggested as much today.

If some form of this proactive scenario is not publicly advanced, and the technical and perception problems are not adequately controlled contemporaneously at an early, foundational stage in AI evolution, eventually the backlash against the AI community and consequences for society could be calamitous, if not catastrophic.

As a society, civilization and species we cannot risk getting AGI wrong. If, as AGI evolves, a nonlinear emergent dynamic comes into play, the probability of a bad outcome increases dramatically. So, the AI community should want and welcome us all constantly asking a very basic question:

How can we better design an AGI system that is more compatible with us and less adversarial?

At this point, how all this unfolds and ends is anyone’s guess. Based on the existing AI community’s anxieties, Realpolitik and unquenchable tribalism, my guess is we are unlikely to develop AGI in a way that makes it fully supportive of humanity. Ultimately, that has a high probability of making for a bad day on this blue marble.

If you enjoyed this post, and want to share it, please hit “Recommend” below. Thanks! It helps spread these ideas!

You can find more of my ideas at my Medium publication, A Passion to Evolve, or at my website, dochuston1.com

In any case, may you live long and prosper.
