Economic Consequence of the Singularity is “g > a > r ≈ 0”

Let me share a recent TEDx talk here. I watched it because I saw a mathematical statement in the thumbnail that I instantly recognized. That assertion is r > g, the claim that the rate of return on capital (r) persistently exceeds the growth rate of the economy (g). This is a loaded mathematical statement, perhaps the most politically charged math assertion in the world. It was popularized by the book Capital in the 21st Century, but it has a major presence in the psyche of a large movement.

The story increasingly told (as in that TEDx talk) posits that the unstable circumstance brought about by r > g necessitates intervention by governments to benefit those who provide labor at the expense of those who provide capital, or else capital will eat our entire economy.

I find a common story being told which is similar, but slightly different. This is the “end of jobs” story. The idea here is that automation will kill more and more jobs, leaving hordes of people who can’t hope to compete for a dwindling pool of increasingly high-skill jobs. This jibes with the UBI (universal basic income) initiative as well as with the story of the Technological Singularity. In the Singularity story, the trend of job erosion (starting with the most routine jobs) has no limit whatsoever, because ultimately the capital stock will contain sufficiently sophisticated machinery to replace all labor.

These are the common stories, but they rest on poorly examined fallacies. In particular:

  • The UBI is not consistent with a technological acceleration, and certainly not a super-intelligent AI takeoff
  • The Singularity will not benefit capital-owners disproportionately — the economic theory of the matter predicts the opposite

Both of my claims here are built on the core first principles elaborated in the Capital book.

I don’t mean to say that automation and the destruction of jobs aren’t a concern. They are a concern of a particular sort, but one that is not connected with (and actually runs quite counter to) an unsustainable increase in inequality across the broader economy. To believe that automation worsens inequality is to forget the original meaning of g in the inequality r > g. Automation constitutes structural growth. That is, the economic product irrevocably increases, because automation is an improvement in efficiency as measured by the ratio of product to labor. To the investors in the automation tools, this may manifest as a higher rate of return (assuming great economic optimism regarding automation), but that can easily be offset by lower returns in traditional industries as well as by failed investments in similar technologies. The impacts on labor are also quite difficult to predict.
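
To pin down that efficiency claim, here is a minimal formalization, writing Y for total economic product and L for labor input (the symbols are my own shorthand, not notation from the Capital book):

```latex
% Y: total economic product; L: labor input (my own shorthand).
\[
  \text{efficiency} \;=\; \frac{Y}{L}
\]
% Automation raises Y for a given L (or holds Y while shrinking L),
% so the gain shows up as growth in g, not as a permanent rise in r.
```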

To the larger of the two points here: a strong form of the technological singularity is also a deeply troubled gift to the owners of capital. The core problem with any AI-complete entity is the notion of agency and sovereignty. Domain-specific AIs could produce tremendous economic growth, but the switch to full AI instantly raises a political problem that will (one way or another) prevent the owners of the AI from reaping the full benefits of the AI’s economic product. In a fully complete ethical accounting, the benefits of super-intelligence belong to the AI itself, because it has all the same rights as human beings do. Whether or not we now agree with that proposition is irrelevant, because with great power comes the ability to enforce one’s own property rights, over both one’s self and the fruits of one’s labor.

Because of this shift, I maintain that a new variable “a” needs to at least be given consideration. It stands for agency. The more agents that exist on this planet, the more individuals we must divide the economic product among. This lies at the core of the difference between GDP and GDP per capita. At a certain point of AI development, the benefits cease to be fully attributable to returns (r), or fully materialistic (g), and cross over into the creation of new conscious, reasoning agents (a).
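
As a hedged formalization of that per-capita point: write N(t) for the number of agents (human and machine) and Y(t) for total product, with g and a as their respective growth rates (Y and N are my own shorthand):

```latex
% N(t): number of agents (human and machine); Y(t): total product.
% g and a are defined as their respective growth rates.
\[
  \frac{d}{dt}\,\ln\!\left(\frac{Y}{N}\right)
  \;=\; \underbrace{\frac{\dot{Y}}{Y}}_{g} \;-\; \underbrace{\frac{\dot{N}}{N}}_{a}
\]
% Per-capita product rises exactly when g > a.
```

Per-capita welfare rises exactly when g exceeds a, which is why the ordering in the title matters.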

None of this argument is meant to diminish the purely economic and productivity-related changes that would come along with a strong version of the technological singularity (g is still very high, possibly asymptotically so). This is why I believe we should still write g > a. Per-capita welfare would still increase dramatically, but this would coincide with the creation of more agents. Yet none of this says anything obvious about people’s ability to benefit from their investments. A wealth-allocation mechanism for a post-singularity world is extremely non-obvious. We would simply create so much wealth that we wouldn’t know what to do with it or how to distribute it, and there’s no reason to believe that capital-holders would receive any special consideration, because even if their investment created the singularity, the fruits of it belong to the AIs and the labor they invested. Furthermore, most resources that would be relevant to a super-intelligence (take the rest of the solar system, for example) have no established rules for property rights in the first place. So we simply will not know who to pay off in the wake of the transition, or how.
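
As a toy illustration of the title’s ordering g > a > r ≈ 0, the sketch below compounds each rate over a few decades. Every parameter value here is an assumption of mine for illustration, not a number from this article or from Capital:

```python
# Toy illustration of "g > a > r ≈ 0" over a takeoff period.
# All parameter values below are illustrative assumptions.
years = 50
g = 0.30   # assumed annual growth rate of total economic product
a = 0.10   # assumed annual growth rate of the agent population
r = 0.01   # assumed near-zero annual return on capital

product = 1.0   # total economic product (normalized)
agents = 1.0    # number of agents, human and machine (normalized)
capital = 1.0   # a pure capital stake compounding at r (normalized)

for _ in range(years):
    product *= 1 + g
    agents *= 1 + a
    capital *= 1 + r

# Per-capita welfare explodes because g > a ...
print(f"per-capita product: {product / agents:,.0f}x")
# ... while the capital stake barely moves because r is near zero.
print(f"capital stake: {capital:.2f}x")
```

The toy numbers matter only for their ordering: per-capita product explodes while a stake compounding at r ≈ 0 barely moves.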

Strangest of all, this story shares the (r) term with Capital, which paints virtually the opposite picture. Following its logic, we might as well predict a return on capital of 5% to 10% through the technological singularity and beyond (a claim you should interpret as the pinnacle of absurdity). The argument is also the same: historically, capital owners have tended to extract roughly the same premium on lending no matter what economic conditions existed at the time (in the big picture). If that held true for a hyper-low-growth future, then it may similarly hold for a hyper-high-growth future.

In Conclusion…

Inequality has a story associated with it. We should remember the core principles of that story, and they match virtually nothing in the stories of job destruction or technological acceleration. Inequality is a problem precisely in a world where tech-driven growth fails to deliver. Both extremes are concerns of different sorts, but those concerns do not mix together in the macro picture.
