Human + Machine

Caitlin French
Published in The Startup
13 min read · Sep 10, 2020

1. Intro

Human + Machine is written by Paul Daugherty (Accenture Chief Technology Officer) and James Wilson (Accenture Managing Director of IT and Business Research).

Dispelling the common binary narrative of human versus machine, each “fighting for the other’s jobs”, Daugherty and Wilson argue in favour of a future symbiosis between human and machine, in what they call “the third wave of business transformation”. Wave 1 involved standardised processes — think Henry Ford. Wave 2 saw automation — the IT of the 1970s to 90s. Wave 3 will involve “adaptive processes… driven by real-time data rather than by a prior sequence of steps”.

This “third wave” will allow for “machines … doing what they do best: performing repetitive tasks, analysing huge data sets, and handling routine cases. And humans … doing what they do best: resolving ambiguous information, exercising judgment in difficult cases, and dealing with dissatisfied customers.” They go further in discussing the “missing middle” — human and machine hybrid activities — where “humans complement machines”, and “AI gives humans superpowers”, as well as making a business case for AI: “AI isn’t your typical capital investment; its value actually increases over time and it, in turn, improves the value of people as well.” And unexpectedly, “the problem right now isn’t so much that robots are replacing jobs; it’s that workers aren’t prepared with the right skills necessary for jobs that are evolving fast due to new technologies, such as AI.” Future issues will actually surround the constant re-skilling and learning of the workforce.

AeroFarms, Accenture’s Precision Agriculture Service — an example of using sensor data to improve crop yield and reduce waste

2. AI — An Overview

First envisioned in the 1950s, Artificial Intelligence (AI) and Machine Learning (ML) allow computers to see patterns in data and produce outputs without every step of the process being hard-coded. To distinguish the two, AI is the broader concept of machines carrying out ‘intelligent’ tasks, and ML is an application of AI where machines can be provided with data (e.g. an individual’s health data), learn for themselves, and produce an output (e.g. a recommended course of treatment for an individual).

After periods when funding dried up, AI has been gaining popularity since around the 2000s and is becoming ever more visible in business and everyday life today: you wake up to an alarm set by Alexa, who tells you your schedule for the day. You open your computer, likely made by smart robots in a factory. You log in, making use of its facial recognition system. You procrastinate from work by scrolling a curated Facebook feed designed to keep your attention, or on Netflix, where you’re recommended TV shows and movies to cater to your viewing tastes. It’s also likely that your work software implements many kinds of AI and ML.

These algorithms work with training and test data. For instance, a CV-checking model could be trained on past hiring data, putting through to the next stage people who demonstrate the qualities described in the job description and rejecting those who do not. The model is then fed new test data (CVs it hasn’t seen before) so someone can check whether the predicted outcomes are desirable. Once the model has been tweaked, it is rolled out for use on real applicants’ CVs. The issue then is making sure the training data is not biased — take the case from 2014 where Amazon’s trial recruitment algorithm was trained on data in which a majority of those hired were men, resulting in female applicants being penalised; the algorithm was soon scrapped.
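For concreteness, here is a minimal sketch of how such training and test data might look in code; the features, figures, and model choice below are invented for illustration and are not from the book:

```python
# Hypothetical sketch: training and testing a simple CV-screening classifier.
# The features, numbers, and labels below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Each row is a past applicant: [years_experience, num_matching_skills, has_degree]
X = np.array([
    [1, 2, 0], [5, 6, 1], [3, 4, 1], [0, 1, 0],
    [7, 5, 1], [2, 2, 0], [4, 6, 1], [1, 1, 0],
])
y = np.array([0, 1, 1, 0, 1, 0, 1, 0])  # 1 = progressed to next stage, 0 = rejected

# Hold some data back so the model can be checked on CVs it has never seen
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

model = LogisticRegression()
model.fit(X_train, y_train)                 # learn from past hiring decisions
predictions = model.predict(X_test)         # predict outcomes for unseen CVs
print(accuracy_score(y_test, predictions))  # check whether predictions look sensible
```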

Audi Robotic Telepresence (ART) — An expert technician remotely controls this robot, which works alongside an on-site technician to make repairs

There are different branches of ML too:

  • Supervised learning: labelled training data. The algorithm learns the rules connecting inputs to a given output and uses those rules to make predictions, e.g. using records of which past applicants were hired (each CV carries a yes/no label for whether the applicant was hired) to decide whether to hire a new applicant
  • Unsupervised learning: unlabelled training data. The algorithm must find structures and patterns in the inputs on its own, e.g. clustering types of customer based on demographic information and spending habits; there are no labels here because the customer segments are not yet known (a minimal clustering sketch appears after this list)
  • Reinforcement learning: training an algorithm towards a specific goal. Each move the algorithm makes towards the goal is either rewarded or punished, and this feedback allows the algorithm to build the most efficient path to the goal, e.g. a robot arm picking up a component on a production line, or an AI learning to win the game Go
  • Neural network: a system of connected nodes/neurons whose connections strengthen the more often they fire, similar to the human brain, e.g. a neural network that learns to make stock market predictions from past data
  • Deep learning: more complex neural networks, such as deep neural networks (DNN), recurrent neural networks (RNN), and feedforward neural networks (FNN)
  • Natural Language Processing (NLP): computers processing human language e.g. speech recognition, translation, sentiment analysis
Quid — using NLP to create data visualisations of large bodies of text, from patents to news reports
  • Computer vision: teaching computers to identify, categorise, and understand the content within images and video e.g. facial recognition systems
  • Audio and signal processing e.g. text-to-speech, audio transcription, voice control, closed-captioning
  • Intelligent agents e.g. collaborative robotics (cobots), voice agents like Alexa, Cortana, Siri
  • Recommender systems: making suggestions based on subtle patterns over time e.g. targeted advertising, recommendations on Amazon/Netflix
  • Augmented reality (AR) and Virtual Reality (VR) e.g. flight simulation training
IntelligentX Brewing Company — there’s even a company that has made the first beer brewed by AI, using feedback gathered via Facebook Messenger
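To make the supervised/unsupervised distinction above more concrete, here is the minimal clustering sketch referenced in the list: grouping customers into segments with k-means, with no labels provided. The customer data and number of segments are invented for illustration:

```python
# Hypothetical sketch: unsupervised clustering of customers with k-means.
# The spending data and number of segments are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row is a customer: [annual_spend, visits_per_month] (no labels provided)
customers = np.array([
    [200, 1], [250, 2], [230, 1],       # low spend, infrequent visitors
    [1500, 8], [1600, 10], [1400, 9],   # high spend, frequent visitors
])

# The algorithm discovers the groupings itself
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)           # which segment each customer falls into
print(kmeans.cluster_centers_)  # the "typical" customer in each segment
```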

3. Human + Machine Diagram

Human + Machine

4. MELDS Framework

A key focus of the book is the MELDS framework, with 5 key principles required to become an AI-fuelled business:

  • Mindset: reimagining work around the missing middle
  • Experimentation: seeking opportunities to test and refine AI systems
  • Leadership: commitment to responsible use of AI
  • Data: building a data supply chain to fuel intelligent systems
  • Skills: developing the skills necessary for reimagining processes in the missing middle

5. Responsible AI

The Leadership part of MELDS calls for Responsible AI, an important consideration as AI gains increasing prominence in today’s society, for instance in cases where:

  • An algorithm is making a decision and choosing which factors to optimise, e.g. deciding a student’s A-level grade with the goal of reducing grade inflation versus the goal of individual fairness, or deciding whether to grant an individual a loan with fairness versus profit in mind
  • There are already systemic issues in society, e.g. when an algorithm is trained on biased hiring data that favours certain demographics: white, male, heterosexual. Or when an algorithm used to predict defendants’ future criminal behaviour is biased against black defendants
  • The stakes are high e.g. self-driving cars where people’s lives are at stake, political advertising on social media

When designing responsible AI, we should address a number of legal, ethical, and moral issues, ensuring there is/are:

  • Trust, transparency & responsibility — consideration of societal impact, and clearly stated goals and processes of the algorithms
  • Fair & ethical design standards
  • Regulation & GDPR compliance — defined standards for the provenance, use and security of data sets, as well as data privacy
  • Human checking, testing and auditing — of systems before and after deployment
Wings for Aid — Remotely Piloted Aircraft Systems that deliver humanitarian goods to people isolated by natural disasters and man-made crises

Trust, transparency & responsibility

Any AI or tech system being implemented requires public trust, whether that is trust that a self-driving car will keep us safe, or trust that a test-and-trace app is worth downloading, using, and following instructions from. The problem is that tech projects don’t always go to plan, and, as with any process, mistakes can be made — as we know, an A-level algorithm can give you the wrong grade. What’s important for ensuring public trust is being transparent about how an algorithm works, as well as allowing for appeals and changes to decisions made by computers.

With AI also comes the issue of where responsibility lies. Although on the whole self-driving cars should reduce the number of road fatalities, if a self-driving car collides with a pedestrian, who is legally responsible: the algorithm designers or the person behind the wheel? In terms of public attitudes, Gill Pratt, chief executive of the Toyota Research Institute, told lawmakers on Capitol Hill in 2017 that people are more inclined to forgive mistakes made by humans than those made by machines. This desire to put trust in humans rather than machines (even when machines statistically perform better) has been termed “algorithmic aversion”, and happens, for instance, when people would rather rely on the decisions of a human doctor than trust an AI.

Regarding responsibility, the term moral crumple zones has also been coined: “[T]he human in a highly complex and automated system may become simply a component — accidentally or intentionally — that bears the brunt of the moral and legal responsibilities when the overall system malfunctions.” This could be as serious as facing charges of vehicular manslaughter, down to an Uber driver getting bad feedback from a customer because their app malfunctioned and directed them to the wrong place — the human becomes a “liability sponge”. “While the crumple zone in a car is meant to protect the human driver, the moral crumple zone protects the integrity of the technological system, itself.”

In 2016, Tesla announced that every new vehicle would have the hardware to drive autonomously, but that at first it would test drivers against software simulations running in the background. Tesla drivers are teaching the fleet of cars how to drive

Fair & ethical design standards

Firms should do all they can to eliminate bias from their AI systems, whether that be in the training data, the algorithm itself, or the outcome results/decisions. Examples include HR-system AI which looks over CVs or video interviews, which could have a white-male bias. Or software used to predict defendants’ future criminal behaviour, which could be biased against black defendants.

However, just because AI has the potential to be biased doesn’t mean it is not worth using. Humans have their own biases, whether that be the interviewer for a job or the jury in a court. When created responsibly and tested thoroughly, AI has the potential to reduce this human bias, while also bringing cost and time savings: a healthcare AI can allow doctors to treat more patients, and a CV-checking AI can scan many more CVs in a fraction of the time.

To address data biases, Google has launched the PAIR (People + AI Research) initiative, with open source tools to investigate bias. Some of the ideas they discuss in their Guidebook include:

  • Being clear about the goal of the algorithm. For instance, when picking a threshold for granting a loan or insurance, considering goals such as being group-unaware (even if there are group differences, e.g. women paying less for life insurance than men because they tend to live longer), enforcing demographic parity, or providing equal opportunity (a small numeric sketch of these fairness checks follows this list)
  • Providing the right level of explanation to users so they build clear mental models of the system’s capabilities and limits. Sometimes an explainable model beats a complex model with better accuracy
  • Showing a confidence level for the model’s output, which could be a numeric value or a list of potential outputs. As AI outputs are based on statistics and probability, a user shouldn’t trust the system completely and should know when to exercise their own judgment
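As a rough illustration of the threshold discussion in the first bullet, the sketch below compares approval rates (demographic parity) and true-positive rates (equal opportunity) across two groups for a chosen loan threshold. The scores, groups, and outcomes are invented, and this is not PAIR’s own code:

```python
# Hypothetical sketch: checking a loan-approval threshold for group fairness.
# The scores, group memberships, and repayment outcomes are invented.
import numpy as np

scores = np.array([0.2, 0.4, 0.55, 0.7, 0.9, 0.3, 0.5, 0.65, 0.8, 0.95])  # model scores
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])     # demographic group
repaid = np.array([0, 0, 1, 1, 1, 0, 1, 1, 1, 1])                        # would have repaid

threshold = 0.6
approved = scores >= threshold

for g in ["A", "B"]:
    members = group == g
    # Demographic parity: approval rates should be similar across groups
    approval_rate = approved[members].mean()
    # Equal opportunity: among those who would repay, approval rates should be similar
    true_positive_rate = approved[members & (repaid == 1)].mean()
    print(f"group {g}: approval rate = {approval_rate:.2f}, "
          f"true-positive rate = {true_positive_rate:.2f}")
```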

AI models are also often described as “black box” systems, where we don’t fully know what’s going on behind the scenes or what the computer treats as the important decision-making criteria. Take the example of an image recognition system that learns to “cheat” by looking for a copyright tag in the corner of all the correct images rather than at the content of the images themselves. Hence the need for Explainable AI (XAI), where thoroughly tested systems let us see how changes in the input affect the output decision or classification, and for “human-in-the-loop” AI, where humans review AI decisions. IBM has also developed an open source toolkit, AI Fairness 360, for this purpose.
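One simple way to probe such a black-box system in the spirit of XAI is to perturb one input at a time and measure how much the output shifts; the sketch below scrambles each feature in turn to do this. The data and model are invented for illustration, and this is neither the book’s method nor IBM’s toolkit:

```python
# Hypothetical sketch: probing a black-box model by scrambling one input feature
# at a time and measuring how much its predictions move. Data/model are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three made-up input features
y = (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)  # outcome driven mostly by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.predict_proba(X)[:, 1]

for i in range(X.shape[1]):
    X_perturbed = X.copy()
    X_perturbed[:, i] = rng.permutation(X_perturbed[:, i])  # scramble one feature
    shifted = model.predict_proba(X_perturbed)[:, 1]
    # A large shift suggests the model leans heavily on this feature
    print(f"feature {i}: mean prediction shift = {np.abs(shifted - baseline).mean():.3f}")
```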

HireVue — Video interviews for recruitment

Regulation & GDPR compliance

Should it be down to government regulation to keep tech companies in check? If so, can regulation even keep up with such a fast-moving and complex field? And what happens when there are conflicts of interest, as in the FBI-Apple encryption dispute, where Apple refused to unlock a terrorist’s phone? Or when social media platforms have been used for election meddling?

Although Institutional Review Boards (IRBs) are required of research affiliated with universities, they’re not really present in the commercial world. Companies often take it upon themselves to develop their own rules regarding their research ethics committees. Companies need to think about the wider impact of their technology, and there is a place for considering ethical design standards, such as those proposed by the IEEE, with the general principles of human rights, well-being, accountability, transparency, and awareness of misuse.

Fortescue Metals Group use drones to improve safety and productivity in mining

Human checking, testing and auditing

Testing is vital for AI systems that make decisions or interact with humans. For many companies, brand image and perception are tied to their AI agents: interactions with Amazon’s Alexa shape your perception of Amazon as a company, meaning the company’s reputation is at stake. One example of this human-AI relationship turning sour is Microsoft’s chatbot Tay, which in 2016 was trained on Twitter interactions and consequently tweeted vulgar, racist, and sexist language. In an ideal world, Microsoft would have anticipated these issues and implemented “guardrails”, such as keyword/content filters or a sentiment monitor, to prevent this from happening.
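As a minimal sketch of the kind of keyword “guardrail” described above, the snippet below filters a chatbot’s candidate reply against a blocked-term list before it reaches the user; the term list and fallback message are placeholders, not Microsoft’s actual approach:

```python
# Hypothetical sketch of a simple chatbot "guardrail": block candidate replies
# containing flagged terms before they reach the user. The term list and
# fallback message are placeholders for illustration only.
BLOCKED_TERMS = {"offensive_phrase", "slur_placeholder"}

def guardrail(candidate_reply: str) -> str:
    """Return the candidate reply only if it passes a basic content filter."""
    lowered = candidate_reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Sorry, I can't respond to that."  # safe fallback response
    return candidate_reply

print(guardrail("Hello! How can I help today?"))
print(guardrail("Here is an OFFENSIVE_PHRASE example"))
```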

Other suggested checking methods include letting humans second-guess AI: trusting that workers have judgment and often understand context better than machines, for instance using an AI system to inform hospital bed allocation but letting humans have the final say.

Amazon Go — a store where cameras monitor the items you pick, sensors talk to your phone and automatically charge your account

6. Fusion Skills

The “missing middle” embodies 8 “fusion skills” which combine human and machine capabilities.

1. Re-humanising time: Reimagining business processes to amplify the time available for distinctly human tasks like interpersonal interactions, creativity, and decision-making e.g. Chatbots like Microsoft’s Cortana, IBM’s Watson or IPsoft’s Amelia, which handle simple requests

2. Responsible normalising: The act of responsibly shaping the purpose and perception of human-machine interaction as it relates to individuals, businesses, and society e.g. the A-level grading algorithm

3. Judgment integration: The judgment-based ability to decide a course of action when a machine is uncertain about what to do. This informs employees where to set up guardrails, investigate anomalies, or avoid putting a model into customer settings e.g. Driverless cars, restricting the language/tone a chatbot/agent can use

4. Intelligent interrogation: Knowing how best to ask an AI agent questions to get the insights you need e.g. price optimisation in a store

5. Bot-based empowerment: Working well with AI to extend your capabilities e.g. financial crime experts receiving help from AI for fraud detection, or using network analysis for anti-money-laundering (AML)

6. Holistic melding: Developing mental models of AI agents that improve collaborative outcomes e.g. Robotic surgery

7. Reciprocal apprenticing: Two-way teaching and learning from people to AI and AI to people e.g. Training an AI system to recognise faults on a production line. The AI is then used to train new employees

8. Relentless reimagining: The rigorous discipline of creating new processes and business models from scratch, rather than simply automating old processes e.g. AI for predictive maintenance in cars, or to aid product development

Amazon robots, after the company’s 2012 acquisition of Kiva Systems

7. New Roles Created

Returning to the idea of the “missing middle”, it features 6 roles, with the first 3 seeing humans training ML models, explaining algorithm outputs, and sustaining the machines in a responsible manner. The latter 3 see machines amplifying human insight with advanced data analysis, interacting with us at scale through novel interfaces, and embodying human-like traits (for instance with AI chatbots or agents, like Alexa, Cortana or Siri).

These missing-middle roles could create new kinds of jobs of the future, for example:

  • Trainer: providing the training data e.g. language translators for NLP, empathy/human behaviour training
  • Explainer: explaining complex black-box systems to non-technical people
  • Data supply-chain officer
  • Data hygienist: for data cleansing, quality-checking, and pre-processing
  • Algorithm forensics analyst: investigating when AI outputs differ from what is intended
  • Context designer: balancing the technology with the business context
  • AI safety engineer: anticipating unintended consequences and mitigating against them
  • Ethics compliance manager
  • Automation ethicist
  • Machine relations manager: like HR managers, “except they will oversee AI systems instead of human workers. They will be responsible for regularly conducting performance reviews of a company’s AI systems. They will promote those systems that perform well, replicating variants and deploying them to other parts of the organisation. Those systems with poor performance will be demoted and possibly decommissioned.”
Einstein app from customer relationship management (CRM) vendor Salesforce — making use of computer vision, deep learning, and NLP for targeted advertising or customised recommendations

8. Conclusion

AI can have far-reaching impacts, working in collaboration with humans through the 8 “fusion skills” of the “missing middle”, with a number of new jobs created along the way. Businesses of the future should apply the MELDS (Mindset, Experimentation, Leadership, Data, Skills) framework, and will need to address big questions: How do we ensure public trust in AI, and that companies design responsible AI? What design standards should we use to mitigate bias, and what do we do when AI outputs go wrong? How do we deal with the mass skilling and re-skilling of the workforce of the future? Governments, businesses, and wider society will shape the direction of AI and the Human + Machine relationship. What kind of future do we want to build?

9. Further Reading

Reports by Accenture, McKinsey, BCG, IBM (whose open source AI Fairness 360 toolkit was mentioned above), Cognizant, the World Economic Forum Centre for the Fourth Industrial Revolution, the AI Now Institute, the Partnership on AI, and AI for Good. Also look into Y Combinator, a startup accelerator funding many innovative tech startups. Y Combinator’s former president Sam Altman has taken over as CEO of OpenAI, an AI research lab aiming to maximise AI’s benefits to humanity and limit its risks, and considered a competitor to Google’s DeepMind.

Caitlin French

Wharton MBA | Oxford Physics | McKinsey | Climate Tech & Entrepreneurship