AI Isn’t the Problem … Greed Is

Kurt Cagle
May 30, 2018


Let's make this simple. The issue with the future of work is not robots (or artificial intelligences) replacing human beings. It is income security. Most people, when pressed, would be perfectly happy not to have to wake up every day, spend an hour on the road each way fighting traffic, and deal with office politics, rude people, or insane deadlines.

We would love for robots to take over so that we can spend more time on the things that matter: writing that novel, catching our kids' baseball games, walking mountain trails, all the things our culture waves in front of us as our heavenly reward for a lifetime of toil in the trenches to make a microscopic fraction of the 1% of people in this country filthy rich.

If you cannot operate a computer, you will not have a job.

What we fear is that when those robots come in, we won't have the financial means to participate in the economy, because no one is paying us for our labor anymore. Few people, especially in the US, actually own their homes outright. Instead, they lease them, one monthly mortgage payment at a time, until they can pass the debt on to someone else. Break that revenue stream, and the economy collapses. What we fear is being poor, which is worse than being imprisoned, even if the walls of that prison are cubicles within glass towers.

If you cannot operate a computer, you will not have a job. In the rush to impose draconian immigration requirements, the Trump administration created an impossible situation for farmers who could no longer hire migrant workers under the table at well below minimum wage. This hasn't created new jobs for our young folk, who want a real wage and some of the basic guarantees their parents and grandparents had, while those farmers (the ones that haven't already sold out to big agribusiness firms) operate on razor-thin margins.

This is one reason why agriculture is in fact one of the hottest areas for AI research: autonomous tillers, planters, pickers, and harvesters using GPS signals and intelligent sensors. Of course, it's the big agribusiness firms that are doing the investing, buying up farmland at pennies on the dollar as most farmers conclude that they can't make a living anymore without taking on massive amounts of debt.

This is what happens when an economy shrinks. It's easy (and perhaps somewhat legitimate) to point to Silicon Valley as the source of this automation, but in point of fact the problem is systemic. When the cost to automate reliably drops below a certain threshold, it makes more sense to buy into that automation than to use human labor. People's 401(k)s are on the line: if dividends are not paid out, then people heading into retirement cannot retire in the style they have come to expect. This is what capitalism does, and has always done, and automation has been one of its most powerful tools for making sure that dividends are paid out even when a company is no longer actually producing value.

Why is the economy shrinking? Notice how no economist ever talks about economic shrinkage, though you occasionally hear muttering about negative economic growth. Economic shrinkage is a terrifying concept in a capitalist society because it means that resources are becoming harder to acquire. Greed plays a part as well: those at the very top of the pyramid tend to amass more and more wealth, because once a person passes a break-even point they have ever more opportunities to invest surplus wealth. But even there, multi-gazillionaires are not making money as fast as they were a few years ago, at least not in the US.

The economy is shrinking because resources are dwindling, because AI (including robotics) becomes the go-to solution when there are no regulations limiting it, and because AI can perform many tasks orders of magnitude faster than human beings can. We use all kinds of interesting accounting to hide this fact (partly because doing so protects the dividends), but the reality is that automated systems (and some pretty stupid AIs) have been eliminating jobs for far longer than most people realize.

Yes, automation creates jobs as well, but those jobs involve increasingly specialized skills and augmentation. You want to make good money today? Learn how to model three-dimensional objects in formats that can be used by 3D printers. That means becoming proficient with a suite of modeling programs; learning about data formats, interchanges, modeling principles, and the physics of light; having a fast computer; and competing on a global stage. All of this takes time, and even then your sunk costs may be wiped out when a new generation of modeling software takes off, necessitating even more retraining.

You want to be a security guard? Better learn how to operate CCTV camera systems, thermal imaging software, and anomaly detection systems. Most cop cars today have a swivel arm for a hardened laptop positioned over the front middle seat, because most policing today requires proficiency with certain kinds of software. A farmer today has software analyzing crop yields and moisture content, GPS-based autonomous tractors, and control centers rivaling airplane cockpits. A salesperson lives or dies by spreadsheets, and increasingly by BI tools that take those numbers and generate comprehensive dashboards. A lawyer who is not tied into LexisNexis is likely not practicing law.

These are cognitive augmentations. Tomorrow (and increasingly today), the security guard will need to know how to work with flying drones. He won't be piloting them; they are already intelligent enough to handle that task. Rather, he'll be directing them, passing their feeds through AI software that detects differences not only in action but in behavior, logging anomalies for him to review, and acting on the sufficiently severe ones by sending people out.

By 2035, he won’t be there at all, because these actions will have also been automated. Having human security guards on site will be considered an insurance liability, and the only decision makers may be in an office park half a continent away.

It is this arc that will describe many jobs. The drone revolution should be seen as a way of extending sensors beyond human reach, but it also breaks the requirement of proximity. Today a bridge inspector usually needs a cherry picker and has to be on site. Soon, the inspector will be able to do the job from his desk with a drone. Once the drone is trained (and given sufficient fuel), it can follow a specific inspection routine initiated on a schedule. The video feed is sent to a server, where it is decomposed, time-stamped, and correlated, making it possible to see the evolution of problems in four dimensions, anywhere on the bridge. In time, even maintenance is managed by drones, save for the occasional repair that requires specialist engineers. Factor in new metamaterials, and the need for bridge repair crews drops dramatically.

What's worth noting here is that no one thing has effectively obsoleted the need for an army of bridge inspectors and repair people. Drones make it possible for the inspector not only to view inaccessible places but also to fly regular, repeatable trajectories. Specialized micro-cameras and recorders can be used across a spectrum of light and sound to get views of the bridge that human eyes could not (such as UV or ultrasound to determine stress deformations). Wireless tech makes it possible to stream binary data to servers, while AI can be used to compose n-dimensional views of the bridge (including time) across different frequencies and conditions. Visualization tools can perform analytics on the data, and a vast network of hyperlinks makes it available to the proper users. Materials engineering makes reinforcement of structures easy, 3D printing makes it possible to create specialized load-bearing parts, and orchestration software keeps the whole thing moving forward.

This synthesis of different technology "vectors" is important, because it provides a timeline for seeing where augmentation becomes replacement. Most of these technologies are already extant, and drones are becoming one of the key tools that civil engineers use; the biggest limiting factor is fuel. Orchestration is similarly mature. 3D printing still has some way to go before it can be done on the fly, and specialized, commercially available repair bots are probably still a decade in the future. This means that augmentation will become replacement in ten to twelve years' time for the inspector, and in perhaps fifteen to twenty years the repair and maintenance bots will replace the bridge repair engineer.

What about a bridge construction crew? That timeline is a little farther out, because the challenge there will be developing drones capable of heavy lifting. These drones are in essence helicopters, with all the size and mass limitations that flying helicopters within urban areas brings. At the same time, it's notable that other autonomous systems (such as shipping cranes) are already in use to manage the loading and unloading of standard shipping containers with little to no human intervention. When thinking about future work, then, it's important to realize that the latest technology may not ultimately be the one that accelerates the route to replacement.

Such replacement also takes place only after a prolonged period in which a given individual goes from actor to supervisor. Autonomous trucks will still have "engineers" riding shotgun, even if they are not the ones behind the wheel, for at least a decade after they become redundant as drivers. As autonomous trucks are configured to drive in caravans, however, you may end up seeing one engineer for four or five trucks, then one in ten. Eventually, insurance liability will push the number down to a small fraction of the total.

A similar phenomenon occurs in the creative space. Anyone who has watched a superhero movie in the last few years will have noticed how many animation artists are involved when the credits scroll. It is unlikely that these jobs will go away for several decades yet, but there is also a hand-me-down process at work. Algorithms for real-world simulation of physics and rendering start out as software, but over time they get baked into hardware pipelines within graphics processing units (GPUs). This means there is about a five-year lag between an effect appearing on the big screen and the same effect being available to an "amateur" 3D artist or animator on a mid-level hobbyist system, and a ten-year lag before the typical computer can render it "live" in a game.

New jobs are certainly created here, but it's worth noting that 3D rendering requires knowing a fair amount about how the software works, as well as an understanding of things such as specular vs. matte surfaces, shaders, bump maps, rigging, and other aspects of 3D animation. Yet as the tools become more democratized, competition for decent-paying jobs in the field also increases. At the same time, there is a point at which a technology becomes good enough, after which improvements have diminishing returns. While we are still a ways from being there, I expect we're not far from someone saying "I want a female, red-headed elven mage casting spells" and having the renderer create at least a first pass of such a character, animation and details included, almost instantly.

Put another way, in a tech-oriented society there is a distinct upper bound beyond which augmentation becomes replacement. In 1994, if you were reasonably competent with HTML you could name your salary. Today, HTML is taught in grade school and is mostly generated on the back end. Today, you can command a six-figure salary as a data scientist. By 2025, expect analytics and visualization to be a staple of most data architectures, generated automatically as patterns are discerned and written into code, and the need for data scientists will have essentially passed. This means that while technology creates jobs, those jobs have a very limited shelf life, often not lasting even a generation, let alone a lifetime.

It’s worth remembering, however, that the fundamental problem comes down to financial security. And this is where it is necessary for us to ask some hard questions. Wages serve a critical purpose in an economy — they are the primary way of transferring money (as a proxy for value) to workers to allow them to participate in that economy. As the number of available “slots” in the economy decline, the velocity of money also declines.

The velocity of money is an important concept. When a dollar is introduced into the economy, it is effectively spent multiple times, as it gets transferred from buyer to seller in repeated transactions. When the velocity of money is high, that dollar can often generate tens or hundreds of dollars in exchanges before it is finally withdrawn from circulation. When that dollar is in electronic form, it is the identifying token (the dollar's serial number) that is passed from one system to another.

A saved dollar can do the same thing via interest, where it essentially serves as collateral for loans. Loans work primarily because the number of times that the dollar is spent can be used to create leverage, but only if the loans are paid back promptly. When a loan is not paid back, that leverage (those potential dollars) is lost. Saving in a mattress is perhaps the worst thing that can happen to a dollar (at a macro level), because in this case the dollar is taken out of circulation altogether.
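The re-spending dynamic described above can be sketched numerically. Here is a minimal simulation, assuming each recipient re-spends a fixed fraction of whatever they receive; the rates are illustrative assumptions, not measured figures:

```python
def total_spending(initial=1.0, respend_rate=0.9, rounds=1000):
    """Total transaction value one dollar generates when each
    recipient re-spends a fixed fraction of what they receive."""
    total = 0.0
    amount = initial
    for _ in range(rounds):
        total += amount          # the amount changes hands (one transaction)
        amount *= respend_rate   # the recipient re-spends this fraction
    return total

# High velocity: 90% re-spent, so one dollar drives ~$10 of exchanges.
print(round(total_spending(respend_rate=0.9), 2))  # → 10.0
# Low velocity: 50% re-spent, only ~$2 of exchanges.
print(round(total_spending(respend_rate=0.5), 2))  # → 2.0
# The mattress case: nothing is re-spent, the dollar moves exactly once.
print(total_spending(respend_rate=0.0))            # → 1.0
```

The geometric series behind this converges to 1 / (1 - respend_rate), which is why concentrating money with those who re-spend little of it slows the whole system.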

At the micro level, of course, saving in a mattress is a reasonable strategy if you do not trust the banks. (Okay, it's still a bad idea, but in times of economic instability it is not at all uncommon for people to move into alternative currencies or similar portable stores of value, especially when inflation becomes hyperinflation.) This is one of the ironies of economics: what makes sense at the micro level is often counterproductive at the macro level, and vice versa. Real economics exists in the gray area where the one transitions into the other.

When wealth gets concentrated, the velocity of money slows down, because a few large transactions replace large numbers of small ones. This is why an economy where only the very wealthy have disposable income is usually very weak: there are simply too few people spending, and the spending tends to be concentrated in a much smaller orbit.

Now put this together with the jobs situation, and what emerges is both obvious and stark. At the macro level, each job slot lost slows the economy. The economy shrinks, which leads to a push for more efficiency through automation (beneficial at the micro level for the investor class), which then eliminates more job slots, causing the economy to shrink further. The investor, originally seeking a return on investment (greed), is now seeking the return of the investment (fear), so they too stop spending, causing the velocity of money to slow even more.

Eventually, the vicious cycle causes the economy to collapse altogether, because there is no longer a consumer base able to afford what is being produced. The last time this happened (in 2008), it took a massive global infusion of money into banks on the part of governments worldwide, money that was essentially borrowed from the future (which is what credit is). This has been standard practice since the 1970s. The problem with such borrowing is twofold: credit is secured by the sale of resources and by taxation, and credit carries inherent risk. Resources are not infinite, and once sold they cannot be resold. Taxation, improperly administered, also slows the velocity of money. And if the resources and taxation fail to meet the credit requirements, interest rates go up.

So what does this say, long term, about automation, drone tech, and AI? By eliminating jobs, and thus the ability to push money into the system and have it disseminated to the widest possible base, the move toward automation (and the taking of all the profits derived from it) will ultimately make the economy collapse. Investments will not net enough to make them worthwhile, and infusions of capital by governments will not reach enough people to turn the economy around. Ultimately this will cause a major split between the rentier or investment class and the technical class.

Why is this important? Because the technical class has the potential to create alternative economies … and they are doing so now.

Starting a business fifty years ago meant huge barriers to entry that required deep pockets, but typically with a large payoff.
While it is still not easy, starting a business today has a much smaller barrier to entry … but also smaller potential for profits.

One of the most profound changes that AI has wrought, unlike any previous industrial period, is that it has reduced the barrier to entry for people creating businesses. This is the flip side of the destruction of jobs. A person with the right software can start a business to write books, create specialized products, produce an animated movie or video game, establish a newspaper, and so forth (and critically, that software is inexpensive compared to the cost of managing inventories and facilities). As with any business, the earliest to market is usually best positioned to take advantage of a given niche, and as with most businesses, there is usually a rush into that space that can, for a few years, lead to over-competition, though that too tends to even out with time as talent and perseverance win out.

Such entrepreneurship needs to be put into perspective, however. There are many more opportunities to get your books published in 2018 than existed even a decade before, many more opportunities to produce specialty clothes for cosplayers, to get your music heard, your artwork seen, your models purchased.

For creators, this is a boom time, but on a per-work basis these creators are making considerably less. Part of the reason is that publishing (which covers most creative endeavors) was designed to act more like a bank than anything else. Publishers would take a risk on a product by bankrolling it (in effect providing a loan), then handle printing, distribution, and publicity. This encouraged longer works (and ultimately higher prices).

Today, the bulk of the cost in putting together a book is the author’s time, with perhaps a paid editor’s review, an inexpensive cover, and perhaps a couple of hours of formatting. The publishing infrastructure (the AI) is amortized by hundreds of thousands of writers agreeing to a portion of profits with Amazon or similar market platforms.

This has the effect of pushing down prices and encouraging smaller works. The typical novel has gone from 150,000 words to about 75,000, and successful novelists today have adapted by creating engaging short novels and long "short" stories rather than massive tomes and short story collections. This decomposition process is taking place everywhere, as we move to an à la carte mode in which we subscribe to individual channels rather than bundles.

This same pattern is happening everywhere. A typical IT project is no longer done in house. Instead, projects are increasingly put out to bid; a company acts as a general contractor, which then brings in subcontractors. The project lasts until the product is finished, then the company reconstitutes itself. This model is nothing new. What has changed is the size (much smaller), how distributed the company is (far fewer people in shop), and how much these projects rely upon cloud-based architectures such as Google Cloud or Amazon AWS.

In the modern business, there are remarkably few jobs that are not, in some way, moving to the cloud.

By now, some of you may be thinking, "This makes sense for programmers, but I'm not a programmer." It's worth remembering, however, that any time you create or edit a document or spreadsheet, create a model or a computer graphic, design a widget to be 3D printed, or prepare a lesson plan, you are interacting with a database tied into infrastructure, most of which is now migrating to the "cloud". In movie or game production, for example, just about every aspect of the work, down to what food gets ordered and where the scripts and contracts live, goes into a distributed network. Sales are managed through these same networks, and increasingly the auditing process is simply part of the downstream analysis of these systems.

In an ideal setting, this work environment makes it possible both to better assign people's time and resources and to track progress toward a goal rather than simply time in a seat. There is a flip side, however: this finer level of granularity also requires a corresponding revaluation of labor, one that includes management, perhaps with an AI system that determines the optimal value each participant provides via points of ownership.

This is in some respects a radical system, but it may prove a necessary one. In effect, it means that management ends up with a smaller percentage of the pie, and that ultimately everyone who participates in a project receives some royalty from it. This has been difficult to achieve before, because royalties generally have to be accounted for over time, but with a blockchain-style accounting system they can be.

The AI component is also important. In general, twentieth-century capitalism strongly favored sales over technical and support personnel, because salespeople were able to negotiate contract commissions early, for work that still needed to be done by technical, creative, and support staff. As everyone had a vested interest in maximizing their own share, those later in the process typically received a smaller overall payment. By being able to track (and value) effort in comparison with what others are doing, it becomes possible to compensate everyone more accurately, not just the CEO.

Royalties, held in escrow, can then provide a person income even when they are not actively working on a project. The shift from dividend income (which is purely an investment function) to royalty income also reduces the overall influence of investors. This becomes especially important once micro-investing (crowd-funding) becomes common. In effect, it becomes a safeguard against greed.
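A minimal sketch of the royalty model described above, assuming effort can be tracked and normalized into fractional shares. The names, effort figures, and revenue below are illustrative assumptions, not any real accounting system:

```python
def royalty_shares(contributions):
    """Normalize tracked effort (hours, story points, or whatever the
    tracking system records) into fractional royalty shares."""
    total = sum(contributions.values())
    return {person: effort / total for person, effort in contributions.items()}

def distribute(revenue, shares):
    """Pay out one period's revenue according to the fixed shares,
    so contributors keep earning after the active work is done."""
    return {person: round(revenue * share, 2) for person, share in shares.items()}

# Hypothetical project: effort tracked per contributor.
contributions = {"designer": 120, "developer": 200, "writer": 80}
shares = royalty_shares(contributions)
print(distribute(10_000, shares))
# → {'designer': 3000.0, 'developer': 5000.0, 'writer': 2000.0}
```

The point of the sketch is that the split is computed from recorded contribution rather than negotiated up front, which is the shift the paragraph above describes.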

Arguably, while oil will remain part of the equation for a while, investment in oil is becoming risky.
Meanwhile, solar investments, while much smaller, are providing comparable amounts of energy, even if at much lower financial returns. This may be an indication of things to come.

Why would investors accept this? Because the overall return on investment is decreasing long term, the flip side of the barrier to entry becoming lower. Large projects (big real-estate developments, oil rigs, mines, and the like) are comparatively high risk but usually have high returns. Computer AIs and the like generally have much lower barriers to entry, meaning you get more potential investors at a much lower level of investment, which dilutes ownership dramatically. This is true anywhere digital technology reaches, from straight IT to biogenetics and pharmaceuticals to materials engineering.

Most IPOs in the tech sector since about 2008 have been busts, netting oversized profits for the investment banks but often leaving investors underwater, and most larger mergers and acquisitions in this space have proved counterproductive. This suggests that the coming decades will be bad ones for investors, and that in many respects the pendulum that has swung heavily toward investors over the last half century is now beginning to swing back toward creative labor.

One final piece in this puzzle is the concept of a Basic Living Income, or BLI: a basic income provided to each person to cover at least a minimal existence. Resource-based BLIs exist in places like Alaska, where state residents receive a royalty from oil dividends, and it is possible to see a scenario where renewables effectively produce an energy surplus. For that to happen, though, most of the electric grid infrastructure would need to be able to store, transmit, and utilize electric power far more efficiently than is possible today.

Unless things change radically, I think a pure BLI has almost no chance of being implemented nationwide. However, what may be doable is a starter BLI, coupled with "free" education and healthcare. An affordable healthcare system is possible in the United States, but it will take the collapse of the existing patchwork system for that to happen, and realistically it would not be so much free as subsidized. The Affordable Care Act highlighted the danger to the dividend class: under the ACA, it became easier for people to start their own businesses, because they could get insurance without being part of a large business. Without that health care, one medical emergency could be crippling, especially for a family.

Education, similarly, is an area where, because of closed systems and systematic underfunding at the federal level, prices have risen well above the rate of inflation. Businesses benefit far more from having a highly educated workforce, yet as the system exists now, they generally depend upon individual workers to pay for the training to do the jobs that business most needs. By not investing in such training at any level, businesses eventually find themselves unable to get the trained workers they need. This also means that students need to be subsidized not only for the cost of education but for the cost of supporting their families. It is here that BLIs ultimately make the biggest difference.

Ultimately, this all points to the biggest culprit: dividends and excessive compensation. At the moment, dividends are clustered toward investors and senior (sales) management. Compensation, including bonuses and options, is similarly weighted heavily toward the C-suite, despite ample evidence indicating that most C-level positions provide at best only a minor salutary impact upon a company.

Indeed, it makes for an interesting thought experiment to imagine removing the C-level officers of a Fortune 500 company and replacing them with AIs capable of analyzing and ultimately deciding the best course of action to take. An AI that is capable of replacing a skilled physician is certainly capable of replacing a sales manager. After all, the Google Assistant AI is sufficiently capable of convincingly scheduling haircuts or ordering dinner; it is surely able to handle shareholder conference calls and the occasional bit of damage control.

Given that the typical large-company CEO compensation package is in the neighborhood of $20 million annually (about 400 times the average salary), if a company were truly interested in reducing expenses, an AI replacing even half of its senior-level positions could cut costs by a quarter of a billion dollars a year, and would probably perform better than the humans would.
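As a back-of-envelope check of that quarter-billion figure: the CEO package and 400x ratio come from the text, but the senior-position headcount and average compensation below are illustrative assumptions, not data from any real company:

```python
# Figures from the article:
ceo_package = 20_000_000            # "typical large company CEO" package
average_salary = ceo_package / 400  # implies an average salary of $50,000

# Illustrative assumptions (not from the article or any real company):
senior_positions = 100              # assumed senior-level roles at a large firm
avg_senior_comp = 5_000_000         # assumed average total comp per senior role

# Replace half of the senior roles with AIs:
savings = (senior_positions // 2) * avg_senior_comp
print(f"${savings / 1e9:.2f} billion per year")  # → $0.25 billion per year
```

Under these assumptions the arithmetic does land on roughly a quarter of a billion dollars a year; with a smaller senior tier, the savings shrink proportionally.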

Yet for some strange reason you see few managers clamoring for AIs in management.

So, yeah, the AIs are coming, but realistically, what is keeping people from benefiting from these AIs is the greed of those who are taking advantage of an antiquated system to reward themselves far more than they deserve. Given all the things that human beings could do if they weren't hampered by the need to make other people wealthy, doesn't it make sense to frame the discussion in these terms, and to hold accountable those people who have embraced greed to the detriment of everything, and everyone, else?

Kurt Cagle is a writer on artificial intelligence, futurism and the fourth industrial revolution. He lives in Issaquah, WA, where he is writing his latest novel, Storm Crow.
