Is Technology Stripping Us Of Our Humanity?

The robots we may need to fear could be ourselves

Ken Grady
In Libris Iuris
20 min read · Oct 15, 2019

RE-ENGINEERING HUMANITY
By Brett Frischmann and Evan Selinger
430 pp. Cambridge University Press
$17.94 Paperback

I carry my smartphone with me all day. I use it to check the weather, share thoughts with family members (text messages), communicate with professional colleagues (email and Twitter), stay up to date on the news, and answer questions (ahem, do research). It has become, one could argue, an integral part of my daily life and maybe an extension of me. But is it a tool I use to make my life easier or is it a demonic device that others use to engineer my life? Are all the apps on my smartphone controlling me more than I control them?

The role of technology in life evokes strong emotions in people. Try to pry a smartphone from the hands of a teenager and you may find yourself reconsidering the joys of parenthood. Bots, apps, automated this and that, and high-tech everything. For every good story about technology helping someone, there is a bad story about how technology destroyed a life.

We do trust (mostly) and defer to technology. Technology is all around us, and without technology — for all its faults — we would be suffering. Medieval fairs may be fun, but medieval times were not. But there is technology and there is technology. The type that seems to inspire the most awe and create the most fear, and the focus of Brett Frischmann and Evan Selinger’s book, Re-engineering Humanity, is the technology individuals use that is interconnected and controlled by others. That describes an ever-growing part of our world.

Technology Versus Humanity

Brett Frischmann is The Charles Widger Endowed University Professor in Law, Business, and Economics at Villanova University. His co-author, Evan Selinger, is Professor of Philosophy at the Rochester Institute of Technology. He also is the Head of Research Communications, Community, and Ethics at the Institute’s Center for Media, Arts, Games, Interaction, and Creativity. Their book explores and expands upon a theme that has bounced through literature since the early days of the Industrial Age (and probably since the beginning of technology). It is a theme we find so intriguing it even pops up in movies.

In the Matrix trilogy, Councillor Hamann and Neo discuss the relationship of humans to machines while on the engineering level of Zion:

Councillor Hamann: Almost no one comes down here, unless, of course, there’s a problem. That’s how it is with people — nobody cares how it works as long as it works. I like it down here. I like to be reminded this city survives because of these machines. These machines are keeping us alive, while other machines are coming to kill us. Interesting, isn’t it? Power to give life, and the power to end it.

Neo: We have the same power.

Councillor Hamann: I suppose we do, but down here sometimes I think about all those people still plugged into the Matrix and when I look at these machines, I… I can’t help thinking that in a way, we are plugged into them.

Neo: But we control these machines, they don’t control us.

Councillor Hamann: Of course not, how could they? The idea’s pure nonsense, but… it does make one wonder just… what is control?

Neo: If we wanted, we could shut these machines down.

Councillor Hamann: Of course… that’s it. You hit it! That’s control, isn’t it? If we wanted, we could smash them to bits. Although if we did, we’d have to consider what would happen to our lights, our heat, our air.

Neo: So we need machines and they need us. Is that your point, Councillor?

Councillor Hamann: No, no point. Old men like me don’t bother with making points. There’s no point.

Neo: Is that why there are no young men on the Council?

Councillor Hamann: Good point.

But there is a point. We need technology — 7.7 billion people cannot feed, clothe, and house themselves without it. We need interconnected technology. Imagine air travel (approximately 1 million people are in the skies at any moment) without an air traffic control system that is interconnected. And we need technology that is controlled by others. My smartphone may raise many risks, but someone needs to control the phone system. Frischmann and Selinger are not opposed to all technology, not even all interconnected tech or tech controlled by others. They raise the alarm, however, when that tech strips us of our humanity.

Their term for the effect technology has on humans is “techno-social engineering.” In theory, techno-social engineering could be neutral — changing behavior is neither good nor bad. But as Frischmann and Selinger see it, techno-social engineering has been used, and increasingly is used, in ways that “devalue and diminish human autonomy and sociality as we become accustomed to being nudged, conditioned, and more broadly engineered to behave like simple stimulus-response machines.” Technologists are turning us into Pavlov’s dogs.

A Debate Since The Dawn Of The Industrial Age

The Press, a Christchurch, New Zealand newspaper, published an article in 1863 titled Darwin among the Machines. The author was Samuel Butler, an Englishman who had fled his home country to escape his oppressive father. Butler later became known for his Victorian satire novel Erewhon. In Erewhon, Butler incorporated the article as one of three chapters collectively called “The Book of the Machines”. In the article and book, Butler posits the possibility that machines are evolving and one day may replace humans:

We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.

Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.

George Dyson, the non-fiction technology author, wrote a 1997 book titled Darwin among the Machines: The Evolution of Global Intelligence. In his book, Dyson put a twist on Butler’s idea:

Everything that human beings are doing to make it easier to operate computer networks is at the same time, but for different reasons, making it easier for computer networks to operate human beings … Darwinian evolution, in one of those paradoxes with which life abounds, may be a victim of its own success, unable to keep up with non-Darwinian processes that it has spawned.

This question of how humans and technology should co-exist can be traced back to the beginning of the Industrial Age. In the early 1800s, the Luddites were concerned that automated looms allowed factory owners to replace skilled craftsmen with unskilled labor. As Jon Katz describes them in his Wired magazine article “Return of the Luddites”:

Big mills and factories meant an end to social custom and community, to personal status and individual freedom. Having worked independently on their own farms, they would be forced to use complex and dangerous machines in noisy, smelly factories for long hours, seven days a week, for slave wages. Their harvest and agricultural rituals, practiced for centuries, would perish. Fathers could no longer be with their wives and children. This new kind of labor changed notions of time and introduced concepts like work schedules and hourly wages.

It seems that technology has attacked our humanity for a long time. If so, then what new risks do Frischmann and Selinger fear from the interconnected tech controlled by others? The answer is bound to the concept of deskilling.

Deskilling Humans

You buy a fitness device to wear on your wrist, because you know you need to do more walking than sitting (the new smoking). You spend a lot of time in front of a computer, so you want an aid to get you up and moving and to track your progress. It prompts you with a beep to walk around every 60 minutes. It counts your steps, tracks your heart rate, and tells you how many calories you have burned each day. You wear it to bed, because getting a good night’s rest also is important to your health. Do you control the device, or does it control you?

You start changing your behavior to “please” the device. You walk more to get the necessary steps in each day. Some days, you feel like a squirrel in a cage as you try to find ways to get the requisite number of steps. The device says your sleep is too restless, so you take melatonin pills before you go to bed. You upload the data to a community site so you can compare your patterns to those of others wearing the device. Who is in control, you or the device?

Your new car comes with a fancy screen and a navigation assist function. You kept a stash of maps in the glove compartment of your old car. Now, those maps are gone. Whenever you need to go to a new store, doctor’s office, or restaurant, you just punch in the address and the navigation system takes over. Have you become dependent on the navigation system? Could you navigate without it?

The seemingly ubiquitous computer feels almost like an extension of your brain. To keep track of where everyone is and how to contact them, you use a contacts program and LinkedIn. Thank-you notes, those things your mom made you write the day after your birthday, have now become emails. Your parents kept a budget in the kitchen drawer; you use a spreadsheet program. When you had a question as a child, your dad steered you to the encyclopedia, a nicely bound set of books prominently displayed on a bookshelf in the living room. You learned to use the table of contents and the index. You tell your children to fire up Google.

Each of these computer programs at some point required you to enter into a contract with the vendor. That contract, of course, was online. A short blurb let you know you couldn’t proceed until you clicked “I Agree,” which you promptly did. Indeed, today you expect that step as part of any new software, and you immediately click the “I Agree” button after a quick glance at the blurb. What happened to “read the document and know what you are signing”?

The authors take stories like these, some amplified, some real, and some hypothetical, and use them to weave a theme. The more interconnected technology controlled by others expands, the more it causes us to lose our humanity. We lose skills and knowledge as we let technology take over. Exercise has become a response to a stimulus. Once, most of us could navigate by map; now most of us are lost without Google Maps. We don’t read, evaluate, or negotiate contracts; we click a button in response to a stimulus. As technology evolves, it reprograms us into less-capable, more drone-like, weak facsimiles of our former selves. We are being deskilled.

Taylorism Made Us Do It

Frischmann and Selinger say that much of their work is influenced by Joseph Weizenbaum. Weizenbaum was a Jewish refugee from Nazi Germany who studied mathematics at Wayne State University and eventually held positions at MIT, Harvard, and Stanford. He focused on computer science and is considered one of the early pioneers in artificial intelligence. But Weizenbaum eventually soured on computers, putting forth a dark and cautionary view of their impact on society in his 1976 book Computer Power and Human Reason: From Judgment to Calculation. Weizenbaum saw computers as capable of computation but not of “human” traits such as judgment, compassion, and wisdom.

According to Frischmann and Selinger, the real antagonist of their story is Frederick W. Taylor. “We consider Taylorism to be one of the building block philosophies that today supports widespread techno-social engineering of humans.” Taylor espoused a view of management that pitted manager against worker. Workers, he believed, were naturally inclined to sloth and laziness (something he called “soldiering”). It was management’s job to counteract workers’ inefficiency. The way to do that was through rigorous time-and-motion studies. One could determine the “one best way” to perform each job and each task. Workers should then be held to that standard. The result would be an efficient business.

Taylor lived from 1856 to 1915, and his brand of management died out soon after him. Even while Taylor was alive, many of his friends and disciples veered away from his harsh style of scientific management. They recognized that it embodied some useful ideas on efficiency but fell far short when measured on human relations. Pinning techno-social engineering on Taylorism is a bit much, given the changes that have happened in the almost 100 years since it died out. There is, however, a kernel there worth exploring.

Vestiges Of Taylorism

While Taylorism was short-lived, some concepts that people incorrectly associate with it still exist. A few of those concepts preceded Taylor, a few arose after his death, and many are at odds with Taylorism. For example, the idea of time driving work arose with the dawn of the Industrial Age, decades before Taylor. Once there were machines, people had to be “at work” to run the machines. They could not choose to do chores during the day and make shoes at night, sitting by the fire. Thus began the tyranny of time that is still with us today. Frischmann and Selinger equate time efficiency with Taylorism, but the idea that time is limited and should not be wasted long preceded Taylor.

Standardization — the same input leading to the same output — is another theme Frischmann and Selinger use to elucidate the evils of techno-social engineering. Taylor was not a stickler for this type of quality. That concept grew out of Fordism and, later, W. Edwards Deming’s work. Ford needed cars that could be repaired, which required interchangeable parts. Deming saw poor quality as waste — why make something only to throw it away? Extending those ideas, why have workers waste time or make useless parts?

Following World War II, the Toyota Motor Company developed the Toyota Production System. Most people know it by the generic term “lean thinking”. Lean thinking focuses on removing waste. That increases efficiency, which causes many people to believe lean thinking is a direct descendant of Taylorism (the same is true with various other operational excellence methodologies). But that is incorrect and a simple example shows why.

Taylor believed the route to efficiency was through time-and-motion studies and reducing the time per task. In lean thinking, the focus is on waste, which means you might increase the time for a task if doing so reduces waste overall. In Taylorism, if it took one minute to make a unit and you only had 60 units to make that day, you still would spend only one minute on each unit (the one best way). In lean thinking, employing a concept called takt time, you would take the workday (seven hours) and divide it into 60 even periods of seven minutes each. Production would then be spread over the seven hours using seven minutes per unit. Instead of, say, five workers to meet the one-minute-per-unit time, you might use only two workers to meet the seven-minutes-per-unit time.
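To make the arithmetic concrete, here is a minimal sketch of the takt-time calculation using the numbers from the example above. The function name and code are mine, for illustration only; they come from neither the book nor any lean-manufacturing library.

```python
# Takt time paces production to match demand; Taylorism instead
# minimizes the time spent on each task.

def takt_time_minutes(available_minutes, units_demanded):
    """Available working time divided by units demanded."""
    return available_minutes / units_demanded

workday_minutes = 7 * 60   # the seven-hour workday from the example: 420 minutes
demand = 60                # units needed that day

print(takt_time_minutes(workday_minutes, demand))  # 7.0 minutes per unit
# Taylor's "one best way" would instead make each unit in one minute,
# finishing in an hour and idling the line for the rest of the day.
```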

The concepts of takt time and increasing the time to do something as a means to reduce overall waste would have been completely alien to Taylor. Even more absurd to Taylor would have been the idea, foundational to lean thinking, that a worker’s time is valuable. If a manager makes a worker do things that are wasteful, the manager is wasting the worker’s time and talents. That is bad management and bad for society. We only have a few years to enjoy our existence; why spend them on wasteful activities? Note that wasteful is not the same as spending a day in the park or reading a book. Wasteful means something that does not add value, and both of those activities (and many more) add value to the person enjoying them. We have evolved beyond the simple Taylorian notion of “less time is always more efficient” that Frischmann and Selinger use in their analysis.

Even Frischmann and Selinger acknowledge they have trouble at times finding support for the Taylorism explanation of techno-social engineering. Another supposed example of Taylorism carried through to today, briefly mentioned in the book, illustrates the dilemma of Taylorism stretched too far. Reginald Heber Smith graduated from Harvard Law School in 1914 and became in-house counsel at the Boston Legal Aid Society. There, he found the Society struggling to handle the volume of client needs. He adopted Taylor’s basic idea of measuring the time it took to perform each task. How much time did it take to process each client’s problem? Many of those problems were standard, and Smith wanted to reduce the time per problem. He was very successful, significantly reducing the time per matter and increasing the volume of matters the Legal Aid Society handled.

Smith eventually moved to the Boston law firm Hale and Dorr (now merged with Wilmer, Cutler & Pickering to form WilmerHale) and brought the idea of timekeeping with him. Smith’s notion was that lawyers who kept time would learn how long it took to perform various tasks. The firm could study that data, standardize tasks, and reduce the time spent on each, thereby becoming more efficient. At the time, lawyers did not bill by the hour; they billed a flat fee per matter, so efficiency made sense for the law firm. That, of course, is not how history played out.

Time sheets, instead of becoming efficiency tools, became weaponized instruments to generate revenue. The billable hour did not make the legal industry more efficient; it drove inefficiency. Today, the biggest challenge to efficiency in the legal industry is the billable hour, which has become the drug addicting lawyers to wasteful practices. If Taylor were to see how his idea became perverted in the legal industry, he would be appalled — timekeeping in legal services is not Taylorism; it is anti-Taylorism.

Frischmann and Selinger correctly point out that vestiges of Taylor’s ideas are embedded in society. Those vestiges continue to influence us in subtle ways and have had a significant impact over the decades on how and why we do things. But in the past decade, a more powerful force has grown, one that will likely eclipse the vestiges of Taylorism. Today, artificial intelligence is driving much of the techno-social engineering Frischmann and Selinger fear.

When Machines Can “Write”

Writing is a distinctly human thing. As we consider what defines our humanity, writing must be near the top. Yet today, it is becoming harder — in specific, narrow situations — to distinguish human-generated text from machine-generated text. OpenAI, a not-for-profit research venture formed and funded by Silicon Valley luminaries including Elon Musk, Reid Hoffman, and Peter Thiel, claims to have software that can mimic human writing well enough to make it dangerous to release to the public.

When I use my smartphone to send a text to my children, the software prompts me with suggested words based on the words or letters I have typed. Sometimes, the software accurately predicts the word I am about to type. Sometimes it predicts a better word, which I accept in place of what I was going to type. This, to me, seems a very powerful form of techno-social engineering (though it is not “writing”).

Those who use Google’s Gmail are familiar with its Smart Reply feature. When you get an email, Smart Reply offers suggestions such as “Got it!” that you can use to reply (LinkedIn has a similar feature for Messages). Instead of composing a response, the email recipient hits a key and lets the software do the work.

Using the slippery slope meme favored by Frischmann and Selinger, we could say Smart Reply led us to Google’s Smart Compose. Now, if you start typing a reply to an email in Gmail, Smart Compose will attempt to complete your sentences for you. Instead of offering canned phrases like “Got it!”, Gmail will “read” what you have typed, compare it to the millions of email sentences it has read, and, using predictive-text techniques, propose the words to complete your sentence. Hit a key to accept.

Smart Reply and Smart Compose are not, of course, writing in the same sense that a human writes. The software has no idea what the words mean. It is simply doing a reasonably accurate job of prediction. Humans do this all the time. Two people who know each other well are said to complete each other’s sentences. The difference between the computer and the humans is that the humans know what the words mean.
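Google has not published Smart Compose’s internals, but the core idea, predicting the next word from the words typed so far, can be sketched with a toy bigram model. Everything below, including the three-sentence “corpus,” is an invented illustration, not Google’s system, which uses large neural language models trained on vastly more text.

```python
from collections import Counter, defaultdict

# Toy bigram predictor: count, for each word, which word most often
# follows it in the corpus, then suggest that word. Corpus is invented.
corpus = [
    "thanks for the update",
    "thanks for the quick reply",
    "see you at the meeting",
]

following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def suggest(word):
    """Return the most frequent continuation seen after `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("thanks"))  # -> 'for'
print(suggest("for"))     # -> 'the'
```

The sketch knows nothing about meaning; it only counts co-occurrences, which is exactly the point about prediction without understanding.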

Email is a giant time-waster and Smart Compose can save time, so we could pin the blame on Taylor. But something deeper than efficiency is at work here. Through AI, we are outsourcing our humanity to machines (techno-social engineering) for reasons beyond saving a few minutes each day. We are shading towards a world where computers are being asked to sit at the table with humans — even though the computers are not the equivalent of humans.

John Seabrook, in his article “The Next Word,” published in The New Yorker, lays out nicely how AI is rapidly advancing in its ability to mimic human writing without understanding what it has written. Nevertheless, because the writing sounds so convincing, it could present real problems. Consider this quote from Seabrook’s article:

The results of the first year of this work [referring to OpenAI’s work] are promising, but the big issues are about to be addressed. I asked [Dario] Amodei [OpenAI’s director of research] if we should be worried about A.I. surpassing humans in an array of specialized fields. “No, I think we can understand that it’s not going to be a society where people are robots,” he said. The safety of any new technology often hinges on how it’s regulated. If machines can learn to think for themselves, that might be a concern. But if we really want to replicate human intelligence — as most of us want to — there are several directions that researchers might explore.

Sounds sensible, right? Gotcha. That paragraph was written by GPT-2, OpenAI’s software. GPT-2 is another flavor, perhaps more advanced, of the kind of software used by Narrative Science.

Narrative Science provides software to newspapers and online services that want to convert data-heavy material to narrative stories. Think of all the local sports games played by schools in your area. Newspapers can no longer afford staff writers to cover such events. Narrative Science’s software takes the raw data from the game and turns it into a short article mimicking the style of a sports writer. What about the earnings releases from companies? Again, Narrative Science can turn that data into a pithy article that sounds like it was written by a junior journalist on the financial desk.
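Narrative Science’s software is proprietary, but the simplest version of the idea is template-based text generation: pour structured data into sentence templates. The game data, field names, and wording below are all invented for illustration; real systems choose among many templates and vary the phrasing.

```python
# Turn a (made-up) box score into a sentence of sports copy by
# filling a template. This shows only the core move of data-to-text.

game = {
    "home": "Central High", "away": "Riverside",
    "home_score": 28, "away_score": 21,
    "star": "J. Alvarez", "stat_line": "three touchdowns",
}

def recap(g):
    home_won = g["home_score"] > g["away_score"]
    winner, loser = (g["home"], g["away"]) if home_won else (g["away"], g["home"])
    high = max(g["home_score"], g["away_score"])
    low = min(g["home_score"], g["away_score"])
    return (f"{winner} defeated {loser} {high}-{low} on Friday night, "
            f"led by {g['star']} with {g['stat_line']}.")

print(recap(game))
# -> Central High defeated Riverside 28-21 on Friday night,
#    led by J. Alvarez with three touchdowns.
```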

Going back to Seabrook’s article, we see the dehumanization theme raised with reference to A.I. writing:

A long time ago, the whole world could have said that it lived in a golden age of machines that created wealth and kept peace. But then the world was bound to pass from the golden age to the gilded age, to the world of machine superpowers and capitalism, to the one of savage inequality and corporatism. The more machines rely on language, the more power they have to distort the discourse, and the more that ordinary people are at risk of being put in a dehumanized social category.

Gotcha again. GPT-2 wrote that thought, which sounds eerily like what Frischmann and Selinger argue. Taylor was an amateur compared to modern computers.

The Point Is?

Assume that technology begets techno-social engineering, that humanity is changing, deskilling in some areas, losing autonomy in others, and that, unchecked, these trends will accelerate. What now? What should we do? Frischmann and Selinger have three suggestions, which I (very liberally) paraphrase here:

  1. Don’t let technologists decide for us.
  2. Create firewalls — gaps in the systems so that techno-social engineering cannot move seamlessly from small beginnings to large ends.
  3. Don’t use the efficiency argument to engineer out our humanity.

The first suggestion attacks the sin of abdication. It is all too easy to defer to technologists, who claim they have the high ground making decisions based on logic, math, and science. It can be seductive to see how far you can push technology, what you can make it do, and where you can take it. But in the words of Ian Malcolm (played by Jeff Goldblum), the mathematician in the movie Jurassic Park (based on the Michael Crichton novel): “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” We all need to ask if we should, and those of us outside STEM should not abdicate that role to the technologists.

The second suggestion is a bit more difficult. Introducing firewalls (air gaps) in systems sounds sensible but is difficult to implement. The why lies in “who decides,” an issue I address below.

The third suggestion also raises the “who decides” question. One person’s humanity is another person’s waste of human resources. For example, Frischmann and Selinger use GPS systems as an example of humans losing the skill to navigate by relying on technology. But many would question whether navigation is an important skill for humans to have.

Ultimately, the “who decides” question looms large in all of the techno-social engineering challenges. The answer in a world of 7.7 billion people and growing is quite simple: no one. Technology is a genie that can never be put back in the bottle. It has advanced to the point where no person or group controls it.

We talk about passing laws to regulate A.I. in the United States and Europe, but China, Russia, and other countries have too much at stake to follow our lead (assuming we eventually get to regulatory structures). Case in point: CRISPR, the revolutionary gene-editing technology, had not been used to modify humans — until a scientist in China decided (whether on his own or with state sanction) to cross that line. Unlike many dangerous technologies in the past, some of today’s riskiest technologies (e.g., AI) can be developed in private with minimal resources. Control is an illusion.

We also must consider the hubris of humans. Going back to the movie Jurassic Park, dinosaurs supposedly cannot reproduce in the wild, because the geneticists engineered them all to be females. Ian Malcolm notes the insufficiency of this approach, because “life, uh, finds a way”. Sure enough, some of the dinosaurs spontaneously change from female to male (courtesy of frog DNA used to complete the dinosaur DNA strands) and reproduction in the wild takes place. History shows us we are never as smart as we think we are.

The Future Is Still Ours To Write

Re-engineering Humanity is an important addition to the growing body of scholarship asking, “Is technology taking us in the right direction?” To some (e.g., Nick Bostrom, author of Superintelligence), we are now in an existential battle for the survival of humans. At the other extreme, we have Ray Kurzweil, a director of engineering at Google, who believes the day will come soon when humans and AI-powered robots will merge into one — the singularity. Sandwiched between are the rest of us who must grapple with technology each day.

At one time, most of us believed in the idiom from The Cobbler of Preston by Christopher Bullock (1716):

’Tis impossible to be sure of any thing but Death and Taxes.

We now know that many elites have found ways to avoid taxes and, if Kurzweil is correct or if developments in biology pan out, avoiding death may be possible. Certainly, there is nothing inevitable about the future of technology. But to re-direct the path of technology, we must resolve, to adapt another idiom, that technology is too important to leave to the technologists.

Whether the ideas Frischmann and Selinger provide are up to the task is beyond the scope of this review; it is something you should decide based on the arguments they lay out. The ideas are interesting. More important than obsessing over their impact is the need to get on with re-directing the future. We can always course-correct along the way.

Whatever the cause (Taylorism, AI, some combination, or something else), it is clear that technology is having a huge impact on our society and altering our behavior. Techno-social engineering has the potential to re-write what it means to be human. We may choose to let it do so, but we should not default to letting it do so.

Ken Grady is an author writing about innovation, leadership, and the future of the legal industry. He has been featured as a Top Writer on Medium in Artificial Intelligence, Innovation, and Leadership. He is an Adjunct Professor and Research Fellow at Michigan State University College of Law, where his current research focuses on the digital transformation of law and the legal industry. He is on the Advisory Boards for Elevate Services, MDR Lab, and LARI, Ltd. You can follow him on Twitter, connect with him on LinkedIn, and follow him on Facebook.
