Man vs. Machine: We’re The People Watching

Kyle Harrison
The Junto

--

*This article was written by Kyle Harrison and Clark Brimhall, as a result of the intellectual rigor of a Junto, with Tim Riser, Erik Hansen, Rachel Wortmann, Garret Bassett, and Abram Campbell.*

We don’t profess to be experts on artificial intelligence, neural nets, or balancing checkbooks (the complicated stuff). But we are everyday people who worry about rent, and friendships, and movie releases, and all the things that people worry about. And this is our reflection on artificial intelligence, and automation generally, and the impact it could have on the world as we know it.

There are a few different camps of thought when it comes to AI, from Bill Gates’s optimistic “making vacations longer” camp to Elon Musk’s “cybernetic hellscape.” Even Stephen Hawking shrugs his shoulders with a “could go either way” mentality. But when we think about automation, we don’t think of the galactic implications before we think of the impact it will have on our jobs and our families.

We live in the United States of America, where we reap the benefits of capitalism. We have plenty of friends eager to tell us why capitalism is the greatest gift ever given to mankind, and why any form of government intervention is only a weight on the potential greatness that capitalism could provide. But the theoretical future of artificial intelligence and automation causes some concerns for us when it comes to depending on unregulated free market efficiencies.

Steve LeVine, of Axios, did a pretty good job laying out our biggest concerns by summarizing a report from the World Economic Forum and the Boston Consulting Group:

“74% of executives say they plan to use artificial intelligence to automate tasks in their workplace in the next three years, and 47% say skills shortages are a key rationale. Yet only 3% intend to significantly increase investments in training in the same time period. These executives say only 26% of their work force is ready to reskill for new jobs, and about 1 in 4 of these business leaders say a key obstacle is that their employees are resistant to such training. But this appears to be false. 67% of workers say they consider it important to develop skills to work with intelligent machines in the next 3 to 5 years.”

And these aren’t just factory workers and truck drivers. “Will Iverson, the chief technology officer at Dev9, a continuous delivery software firm based in the Seattle area, says that software is increasingly being leveraged to design other software, in many cases replacing the role of humans. In other words, software is beginning to write itself.”

A lot of people hear the question, “What happens when robots take everyone’s jobs?” and get the response, “We’ll just teach everyone to code.” And then you’re like, “Right, but…they’re taking those jobs too.”

Of course, this question isn’t new. “What happens when the cotton gin or the machine assembly line takes all of our jobs?” Questions like these have been the panic cry of workers since the start of the industrial revolution. Yes, the cotton gin did replace jobs, but it also created more demand for cheaper cotton goods, expanding the scope of labor and giving different jobs to those who had been replaced. Is the dawn of AI any different? Like the cotton gin, will AI just create more jobs by making products cheaper, spurring more demand, spurring more work? Or is this revolution different? Will there be anywhere for the general worker to go?

This isn’t to say that reskilling isn’t the way to go in the face of automation, but when you ask everyday people like us, blanket statements aren’t exactly enough. “We’ll teach everyone to code,” or “universal basic income, duh,” or “we’ll just go to Mars.” None of those answers are satisfactory when it comes to the question of whether or not we’re employable. And we don’t mean to belittle anyone’s struggle. We’re college graduates with tech-sector jobs, and certainly not the most at-risk employees. But if, and when, workers really have nowhere else to turn for work this time, is the free market going to make sure we all land on our feet? Is the government going to help?

Dan Ariely, in his book Predictably Irrational, pointed out the disconnect between policy, free markets, and practicality.

“Yes, a free market based on supply, demand, and no friction would be the ideal if we were truly rational. Yet we are not rational but irrational; policies should take this important factor into account.”

Whether it’s rationality, preparation, or foresight, government needs to find the balance between enabling freedom and helping people avoid making mistakes: what Richard Thaler, Cass Sunstein, and their witty herd of behavioral economists (Ariely among them) have called “libertarian paternalism.” We need policy that takes into account human idiosyncrasies while keeping in mind the constant risk of ruining the future. Pretty tall order. But that’s one of the issues with our self-governed society. Citizens often scream at their politicians for not making their individual lives better, while the politicians are sometimes (definitely not always) just trying to keep the big picture in mind.

Back in the day, the Ottoman Empire spent some six centuries on top of the world, giving us advances in architecture, poetry, all that jazz. They eventually fell behind and off their pedestal. What caused their fall? Closing themselves off to innovation, even banning the printing press for over two centuries. The Chinese were the same way, with the potential to conquer the known world, inventing moveable type 400 years before Mr. Gutenberg came along. But when they felt that innovation was causing a turning away from their ancient culture, they went as far as “destroying all oceangoing ships and arresting their owners.” If the government decides that automation will be too damaging to our society, it’s not unreasonable to think they might try to inhibit the progress of AI research. And that isn’t the answer that any reasonable person should want.

So what can the government do to prepare for what we consider the inevitable wave of automation and job displacement? From our perspective, there are two big questions that we have yet to hear satisfactory answers to:

Question #1: What parameters do the technologists need to have?

We tell marketers that they can’t lie. We tell doctors that they can’t kill people. Why can’t we tell technologists not to destroy humanity? People, for the longest time, had nothing but positive things to say about Facebook, and its stock price has been pretty solid as a result.

Nowadays, there’s a new bandwagon in town, and it’s rapidly filling up with critics of the world’s largest social media site. Even the company’s founding president, Sean Parker, called it “exactly the kind of thing that a hacker like myself would come up with.” With each like and comment, Facebook is “exploiting” human psychology on purpose to keep users hooked on a “social-validation feedback loop,” Parker said.

Facebook has had some of the most talented programmers in the world working on it, but what it lacked early on was an adequate number of ethicists. Why shouldn’t we learn from that mistake and encourage a thoughtful approach to AI and automation? And sometimes that encouragement could be reinforced with federal regulation. That just raises the question: what kind?

Question #2: What do we need to regulate in order to prepare for the ramifications of AI and automation?

Machines have been trained to play video games, and in a matter of hours can surpass all human capability. This sounds a lot like potentially human-replacing technology. The cotton gin just didn’t have that same ring to it. When we build AI, we have to build it with human well-being in mind. But how do we define human well-being?

For example, is there regulation in place to ensure that an AI’s goals and values are aligned with our own? If you successfully create an AI to eradicate cancer, its parameters need to include the well-being of humans. Otherwise, an AI whose goal is to eradicate cancer may find that the best way to do so is to kill humans in the process. If nothing else, can the government be there to simply caution, “careful what you wish for,” when we create a superintelligence?
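To make that concrete, here’s a minimal sketch in Python of what a misspecified objective looks like. Every strategy, number, and the welfare_floor knob are hypotheticals of our own invention; the point is only that an optimizer which never sees human welfare will happily trade it away.

```python
# A toy sketch of a misspecified objective. All values are hypothetical.
# Each strategy: (name, cancer cases eradicated, human welfare score)
strategies = [
    ("fund drug research",         5_000_000,  1.0),
    ("mandatory early screening",  8_000_000,  0.9),
    ("eliminate all the patients", 18_000_000, -1.0),  # "eradicates" cancer by killing people
]

def best_strategy(strategies, welfare_floor=None):
    """Maximize cases eradicated, optionally subject to a human-welfare constraint."""
    candidates = [s for s in strategies
                  if welfare_floor is None or s[2] >= welfare_floor]
    return max(candidates, key=lambda s: s[1])

print(best_strategy(strategies))                     # unconstrained: picks the catastrophe
print(best_strategy(strategies, welfare_floor=0.0))  # constrained: picks screening
```

Nothing about the catastrophic option looks like failure to the unconstrained optimizer; by its own measure, it’s the best possible outcome. That’s the whole problem.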

Amidst the Twitterverse of AI conversants, there’s a common thought experiment about paper clips and AIs. In summary, the idea is that if you programmed an AI to make paper clips and to keep improving its ability to make paper clips, it might continue to get better and better until it figures out how to shape atoms into paper clips. And you know what’s made of atoms? Everything. So the AI turns the world into one big paper clip. In the immortal words of Eliezer Yudkowsky, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

So the gut reaction of government regulators? No efficiency-maximizing paperclip machines. Done. Time to run for re-election.

But what if the processes involved in building that paperclip machine, if controlled and regulated, could have led to the cure for cancer? All of a sudden, we’re the failing Ottoman Empire or the ancient Chinese, cutting ourselves off from innovation.

In the surprisingly insightful book The Elements of Journalism, Bill Kovach and Tom Rosenstiel make the point that “technology did not create the attitudes of those who participate. Machines do not change human nature.” So we’re not here to say we have the sweeping solution to this concern, but we’re pretty sure that trying to slow innovation isn’t the answer. Instead, policy designed with human nature in mind is the key.

The obvious answer feels like the reskilling that everyone is talking about. “Overall, the scale of re-skilling suggests that we need a skilling revolution.” That’s how Oliver Cann, a WEF spokesman, put it.

Logically, we came to the conclusion that, if you’re a responsible government, you take the research being done by McKinsey and all these economic think tanks, and you ask, “Okay, what are the 10 or so jobs most at risk of being automated out of existence?” A bunch of Oxford academics and others have already laid that groundwork.

For the sake of simplicity, let’s take truck drivers as an example. There are roughly 3.5M truck drivers in the U.S. Suppose the government said, “Okay, we’re going to take $10K and give it to each truck driver who wants it, to use for certified retraining programs.” And let’s assume that only 20% of the truck-driving workforce really takes them up on it. To serve just one small sliver of one of many at-risk industries, it’s a $7B program.
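For anyone who wants to check our napkin, here’s that arithmetic as a few lines of Python. The workforce figure is the commonly cited estimate; the grant size and the 20% uptake rate are our own assumptions from the example above.

```python
# Back-of-envelope cost of the hypothetical retraining grant above.
truck_drivers = 3_500_000   # commonly cited U.S. truck-driver count
grant_per_driver = 10_000   # hypothetical grant, in dollars
uptake_rate = 0.20          # assume only 20% opt in

participants = int(truck_drivers * uptake_rate)
total_cost = participants * grant_per_driver

print(f"{participants:,} participants, ${total_cost / 1e9:.1f}B total")
# -> 700,000 participants, $7.0B total
```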

What if, instead, we could help coordinate efforts between companies like Knight-Swift and J.B. Hunt, two of the largest employers of truck drivers? What if the government incentivized those companies by saying any certified spending on educational programs could be tax deductible?

We’re not the first people to say “I feel like we ought to take a look at this whole ‘robots-taking-our-jobs’ thing.” It’s been a concern since somebody at Ford invented the word automation.

Pedro Nicolaci da Costa laid it out like this:

“Enabling [retraining] on a large scale requires a government willing to invest in education infrastructure and a safety net for workers as they make the change as well as businesses willing to shoulder some of the cost. In the US, despite a fair amount of lip service paid to the subject, that’s not guaranteed. Where it is available, it’s in the form of piecemeal arrangements.”

So just because a lot of people are talking about it doesn’t mean we’re any closer to a solution. And as we’ve scoured the ever-so-lovely comments sections of the internet, the same complaint comes up again and again: “this is just what happened in the industrial revolution, and there’s nothing to be afraid of.”

Right. But let’s look at the arithmetic. Say the industrial revolution took 5 people producing 100 units. We often think about how that revolution turned their job into one person, with the help of a machine, producing 100 units. But then they kept all 5 people, each with their own machine, all together producing 500 units. Mechanization enabled a massive expansion of productivity, and with it, a massive expansion of consumption. No big deal, and a crap ton more scarves.

The underlying assumption with that argument, though, is that we can continue to expand our consumption. But can we? Over and over again, people have pointed out how drastically unsustainable our current rates of consumption really are. So is the solution really that all the robots will make all of the things and we’ll just eat it all, and who needs money or jobs?
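Here’s a toy sketch of that conditional, using the scarf numbers above. Whether mechanization displaces workers depends entirely on how far demand actually expands; all of the figures are illustrative, not historical.

```python
# Toy model: workers needed = demand / output-per-worker, rounded up.
import math

output_per_worker_manual = 20    # 5 workers hand-produce 100 units
output_per_worker_machine = 100  # one worker + machine produces 100 units

def workers_needed(demand, output_per_worker):
    return math.ceil(demand / output_per_worker)

print(f"before machines: {workers_needed(100, output_per_worker_manual)} workers for 100 units")

for demand in (100, 200, 500):   # units consumers will actually buy
    n = workers_needed(demand, output_per_worker_machine)
    print(f"demand {demand:>3} units -> workers needed: {n}")

# demand 100 -> 1 (four of five workers displaced)
# demand 500 -> 5 (everyone keeps a job, but only if consumption expands 5x)
```

The happy industrial-revolution story is the bottom row. If consumption can’t keep expanding, we’re stuck in the top one.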

All we want is to talk about it in a more actionable way. People are calling for a training revolution. So what are we doing to spark that revolution? Who’s going to pay for it? What are people going to be trained to do? Is Universal Basic Income worth looking into? What is “meaningful work?” What are the implications for currency under such a circumstance? Do we print more money to pay for that kind of program? Do we pay people at all?

What we need is an honest evaluation of the future of work. What we don’t need is hollow promises to revive the coal industry, a move that is not only ineffective but actually detrimental, since false hope encourages coal miners to reject retraining.

Now, as we said at the beginning, we don’t have PhDs in economics. We’re not captains of industry, masters of our domain. We struggle with the toilet locks as much as our toddlers do. What we wish the internet could be is a place where a trained economist could critique our ideas, where we could engage in a meaningful dialogue. What we’ll likely get, instead, is some overly aggressive professional gamer in Atlanta telling us why we’re stupid.

In place of a sweeping solution, we have an honest question. What should we do next?
