The Future of Jobs: Part II
“For generations, they’ve been built up to worship competition and the market, productivity and economic usefulness, and the envy of their fellow men — and boom! It’s all yanked out from under them. They can’t participate, can’t be useful anymore. Their whole culture’s been shot to hell.”
Player Piano, Kurt Vonnegut
Part I of this blog series ended with a mention of Kurt Vonnegut’s dystopian future in his book Player Piano, so I thought it fitting to start this second part with a quote from the same book, in which Rev. James J. Lasher, the Army Corps Chaplain, explains to the engineers Dr. Paul Proteus and Ed Finnerty that people need to feel they are participating in the entire production and value chain, rather than simply being bystanders.
It seems to me that a predominant argument in the media and in many discussion forums today is that technology will ultimately take us toward such a future, one in which machines carry out the entirety of human tasks. And there may be reason for concern: as we reviewed in Part I of this blog, automation has already displaced millions of jobs across several industries, and its use will likely only increase as we proceed into the so-called Fourth Industrial Revolution. But how will technology, and this potential symbiosis between cyber and physical systems, affect the structure of the labor market?
Nicholas Davis, author and member of the Executive Committee of the World Economic Forum, points out that labor displacement has been a recurring theme across successive industrial revolutions: in the middle of the second industrial revolution (the advent of electricity and mass production), around 1900, 41% of the US workforce was employed in agriculture. By the start of the third industrial revolution (IT and digital communications) in 1970, that share had dropped to 4%; today it stands at 2%.
What is unprecedented in the current so-called “revolution” is that the labor shift could take place within the span of the next 20 years, much faster than ever before. Some estimates put the share of US jobs at risk from automation and digitization at roughly 50%. So what do experts propose as a solution or a mitigating strategy? Two main strategies are being discussed today:
“The first is to invest in building and developing skills linked to science, technology and design so that we equip the world with people able to work alongside ever-smarter machines, thus being augmented rather than replaced by technology.
The second strategy is to focus more on those qualities that make us uniquely human rather than machines — in particular traits such as empathy, inspiration, belonging, creativity and sensitivity. In this way we can reinforce and highlight essential sources of the value created by and within communities that is often completely overlooked in economic measurement — the act of caring for one another.”
- Nicholas Davis, Head of Society and Innovation, Member of the Executive Committee of the World Economic Forum
But more specifically, what types of jobs are envisioned in “working alongside smart machines”? This is a question that a recent MIT Sloan article tried to answer by defining three categories: a) trainers; b) explainers; and c) sustainers. Let’s delve into each one separately:
- Trainers: humans are still required today, and will likely continue to be required, to teach AI systems to make fewer errors in understanding natural language, empathy and depth. Over time, these systems will get better at sounding “human” and the need for trainers will diminish, but new systems will keep emerging, and the human-machine interaction they require is forecast to grow.
- Explainers: consultants or auditors of AI systems or algorithms that recommend actions which go against the grain of conventional wisdom. A case in point is the new EU law, the General Data Protection Regulation, due to come into effect in 2018, which will create a “right to explanation” allowing consumers to question any decision made purely on an algorithmic basis (e.g., credit approvals, insurance coverage, etc.).
- Sustainers: akin to systems engineers, they monitor AI systems to ensure they are operating as they should and that any shortcomings, outages, etc. are addressed urgently. The authors of the MIT Sloan article even forecast the emergence of the “ethics compliance manager — a sort of ombudsman for upholding norms of human values and morals” within AI systems.
This thesis of “retooling” human capabilities, skills and knowledge seems to be in line with past experience. The launch of Edison’s incandescent light bulb in 1879 probably displaced those in the gas lamp business, but there is no doubt that electricity, as a new General Purpose Technology, and all the constituents of its product system (like the light bulb!) have created a vast number of jobs (e.g. General Electric still employed over 333,000 staff as of December 2015, to say nothing of all the other producers and companies in the electric energy space).
In fact, in some of the sectors most affected by technology (retail and transportation, for instance) there seems to be a resurgence of the “human touch” of sorts. Amazon launched its Amazon Go stores for groceries in response to studies showing that the penetration and growth of online grocery shopping has been consistently low, which suggests that people still want to go to physical stores and pick out their groceries. The experience of shopping in an Amazon Go store is itself a symbiotic relation between the physical and the cybernetic world: customers walk in, scan their phones (thus logging in with their accounts), and then every move and every product picked up is recorded. With this approach, Amazon has the chance to change retail with automation and customer data-mining technologies borrowed from e-commerce while still providing the human touch, or the ability to speak with a human representative if needed.
Another “unusual suspect” that has recently changed tack and is now looking to expand its “human touchpoints” with customers and drivers is Uber. In response to myriad complaints from drivers that they spend months without the ability to speak to a representative (most communication takes place solely via e-mail), it is staffing more than 200 physical drop-in centers (called “green light hubs”) and introducing new dedicated phone lines. There is a very sound economic rationale for Uber doing this: according to current and former employees as well as investors, Uber sees its broken relationship with drivers as an existential threat to the business, because self-driving fleets won’t be ready anytime soon. As investors in Careem, the regional ride-hailing company that launched from the start with dedicated call centers for both customers and drivers, we see this as a big proof point that validates Careem’s strategy!
But there is no arguing with the fact that technology has had a transformative effect on the sectors it has touched most (transportation, financial services, retail, manufacturing): prices for most goods and services have plummeted. In the sectors largely unaffected by technology (education and health care), by contrast, prices have continued to spiral out of control. In a stark comment on one of his podcasts, Marc Andreessen of A16Z stated, “[If the prices in these two sectors continue unabated] We’re all going to be employed in healthcare and education!”
The question here is: where is the opportunity in slower-productivity-growth sectors such as health care and education? Opportunity is what will drive the availability of new jobs. Potentially, as other technologies gain ground, quantum computing for instance, more jobs will be created in areas that until today have been off limits: a deeper understanding of molecular systems, cosmological systems, energy systems and other systems that we have been unable to investigate, understand and develop with classical computers because we are reaching the limits of supercomputers. Quantum computing, in theory, uses the laws of quantum mechanics to encode information on “qubits” (which, unlike the bits of classical systems, serve as both the logical and the physical element for carrying information), allowing an exponential amount of information to be processed as more qubits are put together (again unlike classical systems, where additional transistors only marginally improve the efficiency and productivity of the chips). To learn more about this, I recommend listening to Chad Rigetti of Rigetti Computing in the A16Z podcast on the same topic.
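To make that “exponential” point concrete, here is a minimal back-of-the-envelope sketch in Python (purely illustrative, not taken from the podcast or from any quantum library): describing an n-qubit state classically requires tracking 2^n complex amplitudes, so each added qubit doubles the bookkeeping, whereas each added classical bit adds just one more bit.

```python
# Illustrative only: the classical bookkeeping needed to describe an n-qubit state
# grows as 2**n amplitudes, i.e. it doubles with every qubit added.
for n in (1, 2, 10, 30, 50):
    amplitudes = 2 ** n  # number of complex amplitudes in an n-qubit state vector
    print(f"{n:>2} qubits -> {amplitudes:,} amplitudes to track classically")
```

At 50 qubits that is already more than a quadrillion amplitudes, roughly where simulating such systems on classical supercomputers starts to become impractical; hence the interest in building the quantum hardware itself.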
In a similar vein, Thomas Davenport and Julia Kirby, authors of Only Humans Need Apply, discuss how Francisco Kitaura, an astrophysicist in Germany, is using AI to research the 95% of the universe that is made up of dark matter and dark energy (versus the 5% of “normal matter” made up of visible stars, gases and dust), a computational task that until today has been out of reach. The authors’ answer to the impending automation risk is to “augment”: in their own definition, “augmentation exists when the human worker is able to create more value by virtue of having the machine’s help, and to personally reap greater gains by doing so”. The authors suggest five ways to “augment” human capabilities by using smart machines:
- Stepping Up: making the big-picture insights and decisions that are too unstructured for computers or robots. An example here is the insurance professional who is able to make general policy-adjustment recommendations based on reams of data and analysis covering not only micro-factors (such as individual medical claims) but also macro-factors such as environmental and technological change.
- Stepping Aside: non-decision-oriented work that computers aren’t (yet!) good at, such as selling, motivating people or describing the decisions that computers have made. For instance, the financial advisor who must build rapport and engage empathetically to understand the subtle nuances of “customer preferences for financial products which tend to be inherently fluid and qualitative”.
- Stepping In: engaging with computer systems’ automated decisions to understand, monitor and improve those systems. This was the case for the head of Risk Management at Washington Mutual (WaMu), who, working with the bank’s risk models between 2005 and 2007, tried to persuade senior management of the increased risk profile the bank was taking on. Unfortunately, his advice fell on deaf ears and the bank went into receivership by late 2008!
- Stepping Narrowly: a specialty area within your profession (or within a completely new profession; remember the need for a great capacity to learn and adapt!) that is so narrow that no one is attempting to automate it (and it may not be economical to do so). For instance, in education, teachers or institutions that focus on students with special needs: there is a whole host of learning disabilities that manifest in only tens of cases per million, and these may never warrant large-scale automated education systems.
- Stepping Forward: developing the new systems and technology that support intelligent decisions and actions, either within a large corporation or at outside vendors such as the large systems/consulting firms like Accenture, Wipro, etc. Indeed, some of the people who develop these kinds of systems, by not only spotting the need for a new solution but also having the ability to market and sell it, are the entrepreneurs we seek and believe will form the next wave of enterprise-focused entrepreneurs in our region.
As we can see from the list and examples above, there is some overlap with the jobs forecast in the MIT Sloan article, which may indicate that some form of agreement is starting to emerge among experts in this field on which activities and tasks we should specialize in to prepare for the future labor market.
In essence, we need to focus on the activities that make us “more human”: creativity, empathy, diplomacy, ambition, passion, intuition and humor. Tying back to the first part of this blog, I don’t think the second option for mitigating the risk of automation, the so-called Universal Basic Income, is a plausible answer: most governments don’t have a stellar record of addressing long-run problems quickly and effectively, so I wouldn’t hold my breath (I would argue that the social welfare system is essentially broken/bankrupt in Europe, for instance). And also because, as my 12-year-old son points out in the many conversations we’ve had on this subject, not everyone will be a poet and/or songwriter!
As the authors of the book mentioned above argue, winning in the age of smart machines will take many forms, because it will involve working with them in many configurations.