Doc’s Review of Kevin Kelly’s “The Inevitable”

Doc Huston
A Passion to Evolve
18 min read · Jul 13, 2016

From the Whole Earth Review to Wired magazine to his many books, Kevin Kelly has been at the leading edge in chronicling technology’s impact on society. Like the cosmos itself, this edge expands and accelerates, making the task of intellectually moving the ball forward a serious challenge. Kelly’s past books, “Out of Control” and especially “What Technology Wants,” successfully met the challenge. His new book, “The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future,” is less successful.

The Inevitable has four central threads meant to encompass digital technology’s impact on society over the next 30 years or out to 2046:

Trends — 12 technological forces
Transformation — New digital economy
Trajectory — Evolutionary direction
Treatise — Political change

Trends — 12 digital technological trends and their societal impact

The most obvious thread is rooted in what Kelly calls “liquid and fungible” digital technologies. But the focus is not on specific technologies, rather their societal impact or, as he says, verbs not nouns. Thus, each verb has its own chapter as follows:

  • Becoming — Technological change is accelerating, but our habits hide the impact of compounding incremental changes, which makes us blind to our new Web-type of thinking — part human and part machine.
  • Cognifying — Artificial intelligence comes in many specialized forms that will be added to everything and give us new ways to think and interact, which will accelerate the number and scale of disruptions throughout society.
  • Flowing — Every act, thing, and idea is being dematerialized into cheap, intangible, fungible, tagged and linked bits for copying and ubiquitous on-demand, real-time access, which will solidify a new wealth creating economy.
  • Screening — Screens on every surface will track us for portable access to anything and anyone, thereby changing how and what we think and our personal identities, and exacerbating cultural clashes as all societies are reorganized.
  • Accessing — The cloud is becoming an infinite warehouse and store that expands the benefits of an on-demand economy without the disadvantages of ownership, acts as a platform for decentralized collaboration and services, and is the way we are extending ourselves over space and time.
  • Sharing — Digital tools create decentralized spaces as alternatives to existing state and market coordination and production systems for maximal individual autonomy and group power.
  • Filtering — Infinite options and limited time demands far more layers of real-time filters to better personalize and focus our scarce attention in an economy increasingly driven by content preferences and idiosyncratic differences.
  • Remixing — With access to all media content and the entire history of our lives we can mix, remix, mash-up and merge any of it endlessly to create new products, services, and genres, which creates wealth that further transforms the economy.
  • Interacting — Virtual and augmented reality, sensors and speech recognition will alter how we interface and interact with content and devices, and will change our sense of presence such that anything not interactive will appear to be broken.
  • Tracking — Everything and everyone will be measured and tracked endlessly for better personal and organizational use, but tracking needs to become two-way if we are to limit the power of businesses and governments and create mutual trust.
  • Questioning — Free Internet answers are compressing everything into a viewable space that increasingly synchronizes people globally and will amplify an emerging collective power, which will make good questions the ultimate societal asset.
  • Beginning — As we increasingly link human, machine and nature’s intelligence together, we are developing new ways to think and act that will transform civilization and result in the emergence of a superorganism with a global mind, the “Holos.”

While the details of how these trends unfold and actually function, along with their implications, are interesting, especially for the general public, tech readers are more likely to find the discussion of economic issues the most rewarding.

Transformation — The new digital economy moves us toward a post-scarcity, post-capitalist system

Virtually every chapter references the new digital economy and new forms of wealth creation. In essence, this second thread reflects Marc Andreessen’s observation that software is eating the world. Thus, everything is becoming a liquid, fungible and endless flow of copies emanating from a cloud-based distribution system able to maximize access and decentralization and to increase the scale of on-demand personalization.

All media content, the entire history of our lives and anything that can be dematerialized into bits or have bits embedded will be mixed and remixed endlessly to create new products, services, genres and thus wealth. Existing products and services are being unbundled to exploit inefficiencies and enable their benefits to be remixed for different interests and needs.

As Kelly notes, what is important is that the moment something becomes free and ubiquitous, its position in the economic equation suddenly inverts. Thus, as everything becomes a cloud-based commodity, prices head toward zero, or what is effectively too cheap to meter. In this respect, as with public infrastructure, sharing items dominated by atoms trumps owning them, which further enables usage costs to drop fast.

Of course, easy access to everything makes attention the scarcest and thus most valuable commodity. But, as Kelly says, there is a spectrum of attention — from cheap, low-grade commodity attention (e.g., passive TV viewing) to high-quality attention (e.g., personalized interactive experiences) that cannot be easily copied.

Thus, the real challenge is to sell things with “generative qualities” (i.e., better than free) that people will gladly pay for — immediacy, personalized/customized, interpretive services, trustworthy, convenient universal access, event experiences, artistic patronage, and services to aid in finding things — and that are advanced by new filters and tech tools.

None of this should come as a surprise since capitalism, technology and wealth creation are now effectively synonymous. Thus, Kelly sees the new economy blending market and non-market mechanisms in a way that leads us closer to a post-scarcity, post-capitalist, self-actualizing world.

Trajectory — Aggregating these trends reveals a sustainable evolutionary direction to societal change

While individually none of these trends constitute a certain destiny, Kelly says there is a bias in the nature of technology that moves them all in a certain direction. In the aggregate, this reveals a sustainable directional trajectory that mirrors other evolutionary and nonlinear change processes. Thus, in the digital realm there is a momentum for the current trajectory to continue and therefore to be inevitable.

Consequently, for Kelly, the aggregate momentum of these trends will shape the contours of society and have an increasingly disruptive impact on all aspects of it. He concludes that the result of all this will be the nonlinear emergence of a new global-scale mind, the Holos, and a reorganization of how civilization operates, behaves and acts on a planetary scale.

Anyone unfamiliar with Kelly’s prior books, or inclined to consider trends outside the digital realm, is likely to find the inevitability of his trajectory difficult to accept at face value.

Treatise — A New Political Regime

The fourth thread, the book’s subtext, is the most provocative. Kelly suggests the momentum and impact of these trends, and their aggregate trajectory, result in the emergence of a new political system.

His scenario consists of two core arguments. One is that these technological trends not only change the basic character and economics of our daily activities and behavior but, in the aggregate, incrementally, albeit subliminally, synchronize the expectations of everyone on the planet toward the development of a new global political system.

This synchronized expectation leads into his second argument, whereby there is an actual change in the scale, nature and viability of planetary political organizations and systems. This change moves civilization away from existing (i.e., industrial) organizational and economic models and arrangements and gives birth to a post-scarcity, post-capitalist, self-actualizing civilization with a single global mind. This is heady stuff and certainly a desirable scenario.

Kevin Kelly interview about and review of “The Inevitable”

Along with Paul Saffo, Stewart Brand and Nicholas Negroponte, Kelly is among the foremost visionary thinkers around. Broadly speaking, based on my own macro-evolutionary work (here), there is little doubt that what Kelly describes — the aggregate trajectory and nonlinear emergence of different economic and political systems — is “inevitable.” That said, the devil is in the details and additional comment and consideration are warranted.

  • Trends

Arbitrary trends — In the interview Kelly acknowledged arbitrarily limiting his discussion to certain categories of digital technologies. This creates an artificial, relativistic narrative frame that easily misleads readers as to what is actually being claimed. Kelly acknowledged other technologies and societal trends will also have significant impact and influence the nonlinear emergence of any new system.

Thus, “the inevitable” social impact of digital technologies is more akin to the conjecture of a science fiction writer than nonfiction. And this is where things get rather sticky.

Cognifying — The most obvious trend readers are likely to struggle with is Kelly’s glossy treatment of artificial intelligence (AI). While he acknowledges the weaponizing of AI is a real danger, there is no discussion of the subject. Similarly, he acknowledges concern about AI becoming self-aware, saying caution is prudent and that we must “prevent” it from happening.

Yet, there is no discussion of how emergent self-awareness (i.e., an intelligence explosion) would be prevented, nor of the fact that most of the current literature suggests it is virtually impossible to prevent. Indeed, as Kelly himself says, there is a bias in the nature of technology that moves all technologies in a certain direction.

Additionally, at the end of the book, Kelly breezes through the idea of a singularity, parsing it into two forms. One is called a “hard singularity,” where AI (i.e., artificial super-intelligence or ASI) leaves humanity behind, which he dismisses out of hand without stating his reasoning.

The other form is called a “soft singularity,” whereby humans and AI live in a harmonious, symbiotic relationship and it/they do not enslave us “like evil versions of smart humans.” Why the “soft” form wins, or why AI will not learn from these evil versions of smart humans, begs for clarification.

The conspicuous absence of any discussion as to how or why AI somehow stops evolving toward emergent self-awareness seriously undermines his claims. Kelly is far too smart and knowledgeable for this to have been an unintended omission. Thus, in the interview I asked him to clarify his position, which led him to put forth three main arguments against the emergence of self-aware AI.

  • There will be many species of AI for specific narrow (e.g., vertical) applications, but there will be no artificial general intelligence (AGI) or artificial super intelligence (ASI) for the foreseeable future.

Obviously there will be many narrow, specialized AIs, as is reflected in what we have today. Asked whether it is likely that these specialized vertical AIs will be horizontally networked together to create new organizational layers or levels that consolidate various vertical AI capabilities, he said this was a certainty. Asked if this layering of AIs would continue on to ever more consolidating levels, he again agreed, adding there would be many types of such horizontal consolidating AIs.

Pressed as to whether these additional consolidating horizontal AI levels might ultimately lead to a dangerous or self-aware AI, Kelly set aside self-organizing emergence (not an inconsequential caveat, given that his other books describe it as a common feature in the evolution of other systems) and shifted the discussion to his second argument, engineering.

  • Even if AGI were plausible it would be an impossible engineering task to develop a sustained high-level goal and would require infinite storage and time.

Thus, he says that not only do we lack the knowledge and skill to engineer an AGI, but we also lack the knowledge and skill to develop a sustained goal that could be adversarial to humanity. Yet, this seems contrary to what is already being done and what seems quite plausible. For example:

  • Software engineering is moving in two directions where human knowledge and skills are less consequential and could ultimately elude our understanding: reinforcement learning (i.e., software agents take actions to maximize some cumulative reward or goal; a minimal sketch follows this list), and self-writing software that responds to novel situations or queries on the fly to generate new code de novo (e.g., the new Viv virtual assistant).
  • Hubris has often led to innovations that produce unintended negative consequences (e.g., state-of-the-art gift of Stuxnet malware to every malcontent on the planet to learn from and build upon).
  • Traditional strategic concerns about another country having an AGI breakout that leads to military and/or economic advantage (i.e., “realpolitik”) are already fostering an aggressive global AGI arms race among governments (i.e., military and intelligence agencies).
  • The seemingly benign introduction of some code into open-source software used in a specialized AI, or the tweaking of some malware, could generate an unexpected and unintended behavior or action that leads to an intelligence explosion.
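
To make the first bullet’s “maximize some cumulative reward” concrete, here is a minimal, hypothetical Python sketch (nothing in it comes from Kelly’s book; the payout numbers and exploration rate are arbitrary). An epsilon-greedy agent learns from trial and error which of three actions pays best. The point is that the behavior is never hand-coded as explicit rules; it emerges from a reward signal.

    # Minimal sketch of reinforcement learning's core loop (hypothetical toy
    # example, not from Kelly's book): an agent picks actions to maximize
    # cumulative reward.
    import random

    true_payouts = [0.3, 0.5, 0.8]   # hidden reward probabilities for 3 actions
    estimates = [0.0, 0.0, 0.0]      # agent's learned value estimates
    counts = [0, 0, 0]
    total_reward = 0.0
    epsilon = 0.1                    # exploration rate

    for step in range(10_000):
        # Explore occasionally, otherwise exploit the best-looking action.
        if random.random() < epsilon:
            action = random.randrange(3)
        else:
            action = max(range(3), key=lambda a: estimates[a])
        reward = 1.0 if random.random() < true_payouts[action] else 0.0
        counts[action] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        estimates[action] += (reward - estimates[action]) / counts[action]
        total_reward += reward

    print(f"learned estimates: {[round(e, 2) for e in estimates]}")
    print(f"cumulative reward: {total_reward:.0f}")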

The issue of “infinite storage and time” comes totally out of the blue. If such an issue were real, any intelligence explosion should lead an AGI to access (e.g., hijack) all available storage capacity, which would be problematic if not detrimental to us all.

As to the issue of developing a sustained high-level adversarial goal, Kelly moved the discussion to his third argument, the absence of a metric for intelligence.

  • We lack any metric to judge what would constitute a superior intelligence.

It is true that we lack any such metric, and his idea about developing a taxonomic matrix makes sense. However, whatever the metric, suggesting an intelligent AI could not have a sustained high-level adversarial goal is contrary to the unconscious behavior exhibited throughout biology (e.g., the selfish gene, survival of the ecologically fittest and predator-prey relationships in the food chain) and human behavior (e.g., non-kinship discrimination, sociability, Maslow’s Hierarchy of Needs).

Indeed, Kelly’s book What Technology Wants is a dissertation on how technology has inherent goals, including autonomy. Moreover, any seemingly simple, benign AI goal could have far-reaching, unintended consequences (e.g., a goal to maximize paperclip production could eliminate humans to repurpose resources). Or, if a high-level consolidating horizontal AI is capable of understanding and contextualizing human history, it might see us as a threat to itself (e.g., given our desire to “prevent” self-aware AI).

Further, as Kelly notes, the human brain has 86 billion neurons versus the six trillion transistors in the world today that can be thought of as neurons. Together, these transistors act like a very large, self-healing computer and already significantly exceed our brain’s complexity.

Moreover, unlike the brain, the number of transistors doubles every few years. Thus, according to Kelly, this provides sufficient infrastructure for the emergence of a global mind, the Holos, that lacks conscious self-awareness yet is goal-oriented. Why his argument for a goal-oriented Holos is not equally applicable to AGI is unclear.
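
As a rough check on that scale comparison, the arithmetic is straightforward. The sketch below uses the figures cited above (86 billion neurons, six trillion transistors) and the book’s 30-year horizon; the two-year doubling period is my own assumption, since the text only says “every few years.”

    # Back-of-the-envelope check of the scale comparison above.
    # Assumed figures: 86 billion neurons, 6 trillion transistors,
    # doubling every ~2 years (my assumption, not the book's).
    neurons = 86e9
    transistors = 6e12
    doubling_period_years = 2

    ratio = transistors / neurons
    print(f"transistors already outnumber neurons by ~{ratio:.0f}x")

    # Projected growth over the book's 30-year horizon (out to ~2046).
    doublings = 30 / doubling_period_years
    projected = transistors * 2 ** doublings
    print(f"after 30 years: ~{projected:.1e} transistors "
          f"(~{projected / neurons:,.0f}x the brain's neuron count)")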

Finally, regardless of what is included in any taxonomic matrix of intelligence, it is clear that what led our species to dominate was a type of intelligence. Inasmuch as Kelly says our job is to make intelligent machines, why would we not produce AGI?

Thus, whatever dimensions are used in such a matrix, and wherever human intelligence fits in it — near the top, bottom or middle — we have set the minimum threshold for AI. Once past our threshold, however it occurs, we are at a disadvantage. To wit, the consensus indicates this is likely by mid-century.

Both weaponized and self-aware AIs are vexing, potentially existential problems that must be affirmatively solved. Offering nothing beyond his belief that everything will be fine is unsatisfying and does not suffice.

Interacting — In the interview I noted he has said mainstream virtual reality is a decade away, and that self-driving vehicles and ambient knowledge systems (i.e., virtual assistants) seemed equally likely to be major economic platforms. He agreed, saying he should have said we are entering an era of multiple platforms.

Tracking — As in What Technology Wants, Kelly says that asymmetrical, one-way surveillance must be replaced by two-way “coveillance” (i.e., us watching those who are watching us). Individually and collectively, it is imperative that we be able to monitor and understand how governments and businesses collect our data, its accuracy and usage, and that we be able to hold them accountable for any misuse. True.

The problem here is that there is no evidence of movement in this direction, nor has Kelly outlined how it could come about since he introduced his idea years ago. Further, unlike discussions in his earlier books, Kelly makes virtually no mention of the importance of encryption or the tech industry’s pursuit of it globally.

In the interview Kelly agreed there has been no movement toward coveillance, though he thinks the public appetite for it may be growing. More importantly, he said the failure to institute an effective coveillance regime could blow up and crash the future outlined in his book.

He then said that, inasmuch as others have pointed out this surveillance dystopia before, there was no reason for him to address it. But, given how he himself sees the magnitude of this issue, selling an unalloyed optimistic future seems, at a minimum, a gross disservice to readers. It is akin to being against war and saying you want world peace. That takes us nowhere.

  • Transformation

While Kelly’s case for the new economy is the strongest part of the book, there are a couple of glaring omissions in the transition from the old to the new economy prior to reaching his post-capitalist economy.

While it is true that the digital economy is deflationary and drives down prices, as prices fall toward zero and free, producers and providers will need to rely more on autonomous systems. For example, as Kelly says, the robot takeover will be epic: cheap robots will drive down manufacturing costs so dramatically that transportation will become the biggest cost component of an item. But all those transportation jobs are to be replaced by self-driving (autonomous) vehicles.

Similarly, while content can be remixed endlessly to generate wealth, generally the benefits will not be widespread or consequential. Further, political lobbyists and legal impediments favoring legacy content providers will constrain this market. In other words, as with the “gig” economy, the market’s work-reward ratio no longer functions properly.

The analogy he uses, that new industries and jobs will arise as they did in the transition from our agrarian economy to an industrial one, is not apropos inasmuch as the new economy moves far faster and is far more automated. Indeed, we have probably already passed peak global employment, with less economic opportunity or mobility ahead.

So for the vast majority of people this economic transition is likely to be painfully protracted, generating widespread economic anxieties and probably social unrest. None of this is discussed.

Finally, it is interesting how Kelly clearly demonstrates the four-way ecosystem benefits of platforms to buyers and sellers, advertisers and the platform operators in millisecond ad auctions. Further, while each Google search costs the company three cents, it earns 27 cents from each one. Yet, there is no reference to the data being used by these platforms as constituting our personal property.

Said differently, the new digital economy runs on data about each of us and our use of digital services. Thus, the fuel for the new economy is our personal data, which should make our personal data the property equivalent of land or buildings. As such we should be compensated for its use.

Given the elegant four-way millisecond auctions now in place, and the fact that a platform like Google nets 24 cents per search, it would seem easy to add a fifth party to these ad auctions, us, to receive a share of the revenue generated. There is no discussion of this simple economic property issue.
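
As a back-of-the-envelope illustration of what that fifth party might look like, the per-search arithmetic is easy to sketch. The 3-cent cost and 27-cent revenue figures come from the review above; the 10% user share and the daily search volume are purely my own assumptions for illustration.

    # Hypothetical sketch of the per-search economics described above and of
    # adding a fifth party (the user) to the revenue split. The 3-cent cost and
    # 27-cent revenue are from the review; the 10% user share and the daily
    # search volume are assumed for illustration only.
    cost_per_search = 0.03
    revenue_per_search = 0.27
    net_per_search = revenue_per_search - cost_per_search  # the 24 cents cited above

    user_share = 0.10                      # assumed split, not from the book
    user_payout = net_per_search * user_share
    platform_net = net_per_search - user_payout

    print(f"net per search:    ${net_per_search:.2f}")
    print(f"user payout (10%): ${user_payout:.3f}")
    print(f"platform keeps:    ${platform_net:.3f}")

    # At an assumed 3.5 billion searches per day, even a small per-search
    # payout aggregates quickly.
    searches_per_day = 3.5e9
    print(f"daily user pool:   ${user_payout * searches_per_day:,.0f}")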

  • Trajectory

The basic science Kelly employs is muddled. In the interview it became clear that his frame of reference for the trajectory of his trends and the emergence of a new sociopolitical system was based solely on biology and human history.

So, instead of employing the larger nonlinear evolutionary context offered in What Technology Wants — essential for the type of analysis and forecast he is offering — he misapplies biological analogies and metaphors and the limited context and content of human history to make his case. This helps explain why, when unpacking his book, one can be disappointed with his narrative.

Thus, aside from the shortcomings of his trends as noted, neither the basic trajectory nor the lone emergent outcome can withstand genuine scrutiny. For example, asked about the excess complexity being generated by the proliferation of dangerous new technologies, especially as related to the global arms race, his response was that, while complexity can be good or bad, in biological ecosystems complexity makes them more robust and survivable.

The problem is that biological systems do not consciously develop weapons of mass destruction for mutually assured destruction. Similarly, if the ecosystem analogy were universally true, the biological complexity we experience in old age would be irrelevant to us.

Then I asked about Carl Sagan’s comment: that while the probability of life elsewhere in the cosmos approaches 100%, the likelihood of any civilization surviving its technological epoch is perhaps 0.001%.

Obviously, we are deep into our technological epoch, and his 12 trends are just the tip of the iceberg of what else is inevitable. His response was that so far human technological history has been a 2% net positive and we lack sufficient data points to accept Sagan’s concern. That qualifies as reductio ad absurdum.

Despite the aforementioned deficiencies, Kelly nevertheless channels the trajectory of his trends toward one, single nonlinear emergent outcome scenario to the exclusion of all other possible outcomes. In the interview he said that he is “allergic” to dystopian or “Black Swan” scenarios.

Indeed, in the book he dismisses all dystopian scenarios as unsustainable forms of chaos that inevitably burn themselves out. Yet, even within his limited frame, the oxygenation of the atmosphere and the extinction of the Neanderthals were real dystopian events for the entities in existence at those times, and they did not burn out as he suggests.

Said differently, while nonlinear emergent phenomena do occur in nonhuman systems, even in biological systems the probability of emergence is rare, as evidenced by the fact that 99% of all species that ever existed are now extinct. So, even using Kelly’s biological reference as a frame, the most probable outcome is not the emergence of a better political system but a bad outcome for us on this blue marble. Thus, assuming the current trajectory leads us to only one, albeit seemingly ideal, global-scale mind and political system is a bridge too far.

  • Treatise

Obviously, the political treatise on the self-organizing emergence of a new global-scale system is appealing. While Kelly discusses how digital trends could invert traditional organizational and economic hierarchies in various ways, the assumption that this automatically spills over into the actual political realm, and that it does so globally, requires a huge leap of faith.

In addition to the missing caveat about the consequential absence of a coveillance system, there is no discussion of the visible tendency of various governments to actively seek to Balkanize and fragment the Internet. Yet, Kelly does make numerous vague references to various forms of cultural, economic, legal and political pushback and backlash against just his 12 trends.

More to the point, there is no history of those who hold political power and benefit from existing political systems spontaneously abdicating, which presumably would be necessary for his outcome to materialize 30 years from now. His use of anecdotes, such as the unlikely success of Wikipedia making the impossible possible, is in no way analogous.

The fact is that the trajectory of Kelly’s digital trends lacks a coherent discussion as to why a particular and positive global-scale political system will spontaneously self-organize and emerge. My own work in this area (here), for example, indicates nine possible outcomes, with at least six leading to a negative outcome. Thus, at a minimum, the simple assertion about the aggregate momentum of digital trends creating a trajectory toward the emergent “beginning” of such a positive political system borders on hubris.

Conclusion

Any narrative as ambitious as one called “The Inevitable,” and put forward within the structural confines of a book, will be assailable. That said, whether or not one agrees with the threads Kelly has laid out, there are lots of interesting ideas provided. Moreover, given the accelerating velocity of change in the world, getting a better grasp of these digital trends is worthwhile. In this respect, I found his discussions of generative assets and filtering interesting. (Full disclosure: this is what my company provides.)

Finally, we should all fervently hope his political scenario is right. But, for those of us in the trenches and watching the coming U.S. election and other world events, trying to persuade us to not worry and be happy seems a risky luxury.

In the end, the book is perhaps best described as pop cultural fiction easily mistaken for science fact due to Kelly’s well-deserved reputation for presenting science facts. Nonetheless, “The Inevitable” is a good read on a plane ride or a summer decompression at the beach.

If you enjoyed this post, and want to share the news, please hit “Recommend” below. It really helps spread the word, thanks!

You can learn more about my work at https://medium.com/a-passion-to-evolve or my website http://www.dochuston1.com/. You can also find me on LinkedIn.

In any case, may you live long and prosper.


Doc Huston
A Passion to Evolve

Consultant & Speaker on future nexus of technology-economics-politics, PhD Nested System Evolution, MA Alternative Futures, Patent Holder — dochuston1@gmail.com