Robots replacing Nurses? Not in our Lifetime

Rather than replacing human experts in healthcare, software will play an augmentative role. I recently gave a keynote at a healthcare conference on AI/ML making exactly that case: robots (software) will not replace nurses in our lifetime. Here I lay out my arguments.

Watch the talk here.

“In our lifetime” is a fairly long period of time when it comes to software technology. This is a contrarian view; recent headlines are on the opposite side of this prediction…

“Artificial Intelligence will replace half of all jobs in the next decade” — CNBC
“Humans Need Not Apply” — over 9M views on YouTube
“Robots could steal 40% of all U.S. jobs by 2030” — Fortune

And fairly intelligent people are on this bandwagon…

Elon Musk: “Robots will take your jobs” — CNBC
“Stephen Hawking warns artificial intelligence could end mankind” — BBC
“Kurzweil Claims That the Singularity Will Happen by 2045” — Futurism

Kurzweil went on to pronounce: “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”

There are a number of problems with the first sentence here.

First, the Turing test was passed in 2014 by a chatbot named ‘Eugene’. Second, Turing himself thought that the question of whether machines can think is itself “too meaningless” to deserve discussion. I met with the team that built ‘Eugene’ in 2016. They used a brute-force mechanism, essentially a huge hierarchical tree structure that drives responses based on input. If that is ‘intelligence’, then my Xfinity box at home is a polymath.

Singularity

As far as the ‘singularity’ is concerned, Swedish philosopher Nick Bostrom provides a crisp definition of what enables it. Bostrom’s book Superintelligence states that the entire thesis (for the singularity) hinges on one relatively straightforward assumption: that software will be able to evolve itself.

The idea is quite simple, actually: a computer program that is able to make itself more advanced spirals out of control and “takes over.” It makes many copies of itself, propagates across the Internet, and so on. The philosophical conundrum posed by Bostrom and others concerns the ways in which humanity might be able to “control” such a thing while that is still possible.

Indeed this cornerstone idea makes logical sense, even to non-coders. So long as humans are the only ones creating code, they retain control. Code that does not create itself cannot evolve without human involvement.

If you prefer not to read the entire book, watch here beginning at 43:25 (the verbatim transcript follows).

“And at some point, presumably in this whole-brain emulation, at some point probably fairly soon after that point, you will have synthetic AIs that are more optimized than whatever sort of structures biology came up with. So there’s a chapter in the book about that. But the bulk of the book is — so all the stuff that I talked about, like how far we are from it and stuff like that, there’s one chapter about that in the beginning. Maybe the second chapter has something about different pathways.”

“But the bulk of the book is really about the question of, if and when we do reach the ability to create human-level machine intelligence — so machines that are as good as we are in computer science, so they can start to improve themselves — what happens then?”

The bulk of the book, he admits, and the entire premise of AI as an existential threat to humanity, rests on one thing: code evolving itself.

There’s only one problem: code evolving itself isn’t happening anywhere in software; it isn’t a concept in computer science.

Thinking Machines

The most watched video on the topic of robots taking over jobs is “Humans Need Not Apply”, with over 9M views on YouTube.

The premise of this video can be distilled to a single prediction: “just as horses were replaced by automobiles, so too will humans be replaced by thinking machines.”


Here’s the problem with this statement: we could build a mechanical horse if that were a useful thing (it isn’t). We understand horse anatomy sufficiently to at least approach the problem. But “thinking machine” is problematic. We understand very little about how the human brain thinks. We cannot build a machine to perform a function we do not comprehend; we cannot automate what is not thoroughly understood.

Bee Brain

But wait — humans are complex, what about simpler animals with intelligence?

A honey bee has about 10⁶ neurons, orders of magnitude fewer than you or me. Yet the bee is intelligent: it can navigate three-dimensional space, find flowers, communicate the position of pollen to others, and live in a complex social structure.

The punchline: we don’t understand how the brain of a bee works. Not even a little. And there are no laws prohibiting the dissection of bee brains or experimentation on bees. We just don’t know.

So nobody is building a “thinking machine” without understanding how brains think, unless the definition of ‘thinking’ is something else. And if we don’t even know how a bee brain works, how could we possibly build a machine that “thinks” on the level of a human being?

Magic

One reason so much of what’s written about artificial intelligence is appealing is the effect of anthropomorphism: we can’t help but assume that something is intelligent if it demonstrates even a glimpse of intelligence.

And this can get out of control; take the ‘Sophia’ humanoid robot, for example. ‘Sophia’ is nothing more than a humanoid robot puppeteered by a person who types in the words that come out of her speakers. Software drives the movements of her head, eyes and mouth, but she appears to be real.

So what is this thing we refer to as ‘artificial intelligence’, and what role can it play in augmenting human experts?

Hans

To explore this we should first return to the horse. In the early 20th century, in Germany, there was a horse named ‘Clever Hans’. He amazed audiences by correctly answering simple math problems like 5+3 or 9−5. This is impressive for a horse; surely it must be intelligent!

A scientific team assembled to study the phenomenon concluded that the horse was picking up subtle cues from the audience: as he tapped his hoof, the audience’s reactions told him when he had reached the correct answer and should stop.

The horse understood nothing about mathematics; it lacked any understanding of what it meant to perform addition or subtraction. It was trained to use audience cues to arrive at a reasonable answer.

Knowledge Work

So-called “artificial intelligence” is software that has been trained to perform specific (often very narrow) ‘knowledge work’. The training data is treated in such a way as to extract these cues for prediction, classification, etc. This is useful work.

Take a text classification engine, for example, used to find documents about some identified topic. The software is effective and does productive work; however, it understands nothing about human language. The text classifier looks for patterns in order to guess what the words of a sentence are associated with, much the way a parrot ‘understands’ language.

Remove the training data or point it to a different kind of work and the text classification software is as inept as Clever Hans.
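
To make the analogy concrete, here is a minimal, hypothetical sketch of such a narrow classifier; the library choice (scikit-learn) and the handful of training sentences are my own, purely for illustration. It learns word-frequency patterns from labeled examples and nothing else, so it answers confidently even when the ‘cues’ it learned no longer apply.

    # A minimal, hypothetical text classifier: it learns word-frequency patterns
    # from labeled examples and nothing else (no grammar, no meaning).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny invented training set: sentences labeled with a topic.
    train_texts = [
        "take this medication twice daily with food",
        "your prescription refill is ready at the pharmacy",
        "the invoice is due at the end of the month",
        "please pay the outstanding balance on your account",
    ]
    train_labels = ["medication", "medication", "billing", "billing"]

    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(train_texts, train_labels)

    # In-domain input: the learned word patterns line up with the training data.
    print(model.predict(["when should I take my medication?"]))

    # Out-of-domain input: the model still answers, but the cues it learned
    # simply do not apply here. Clever Hans without the audience.
    print(model.predict(["my horse can do arithmetic"]))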

AI and human experts working together

The ultimate role of software “AI” is to augment human experts. In the case of nurses, software can augment parts of a conversation with the patient.

In “The Future of Messaging Apps” I referred to this as ‘human augmented automatons’: chat-bots with human guidance. Why should a nurse spend time sending a link when the intent of the request “can you send me info on lipitor?” is readily discerned?

Healthcare organizations have no tolerance for the kind of response many chat-bots have become famous for. A conversation about health is contextual and collaborative, and it carries zero tolerance for absurdity.

The healthcare setting is not a place for “AI only” responses.

Healthcare messaging solutions striving for efficiency are unwilling to take risks in doing so; thus there needs to be a way to blend automated and human responses with high integrity.

The Innovator’s Prescription

Clayton Christensen’s book “The Innovator’s Prescription” can be summarized as follows: by capturing institutional knowledge and decentralizing it towards the patient, you achieve efficiencies, lower costs and improve care. Here’s an excellent video on this subject, for those who want more detail.

A beautiful example of institutional knowledge is a quick screener, i.e. an assessment: a series of interconnected questions given to the patient, collecting data from which decisions can be made. Often such screenings are periodic, and the data must be looked at longitudinally.

It turns out that in most cases a machine is better (more consistent, more accurate, never forgetful) at administering such surveys than a healthcare professional, for whom this is time-consuming work.

conclusion of an automated quick screener in Vela
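
As an illustration only, and not Vela’s implementation, such a screener can be represented as a small graph of interconnected questions: each answer determines the next question, and the collected responses are stored for longitudinal review. The question IDs and wording below are invented.

    # Hypothetical sketch: a quick screener as a graph of interconnected questions.
    SCREENER = {
        "q1": {"text": "In the last two weeks, have you felt short of breath?",
               "next": {"yes": "q2", "no": "q3"}},
        "q2": {"text": "Does the shortness of breath occur at rest?",
               "next": {"yes": "flag_clinician", "no": "q3"}},
        "q3": {"text": "Have you taken your medication every day this week?",
               "next": {"yes": None, "no": "flag_clinician"}},
        "flag_clinician": {"text": "Thank you. A nurse will follow up with you.",
                           "next": None},
    }

    def run_screener(answers):
        """Walk the question graph using pre-supplied answers; return the transcript."""
        transcript, node = [], "q1"
        while node is not None:
            question = SCREENER[node]
            answer = answers.get(node)   # in practice, asked via secure messaging
            transcript.append((node, question["text"], answer))
            nxt = question["next"]
            node = nxt.get(answer) if isinstance(nxt, dict) else None
        return transcript

    # Example run with canned answers; real responses would be stored per patient, per date.
    for step in run_screener({"q1": "yes", "q2": "no", "q3": "yes"}):
        print(step)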

We can achieve efficiency by using secure messaging and automating certain ongoing conversational exchanges with patients.

Not all elements of a conversation in health care are formulaic and deterministic.

Be Probabilistic About It

A reasonable chat-bot framework should produce a probability value for each prediction. For example:

data: “what should I do about high blood pressure?”
probability: 0.861 intent: BP_info

Using one of a number of approaches to intent classification, a probability is produced for the sentence entered by the user against a given intent; the entry is then matched, with that level of confidence, to some useful response. Vela uses a neural network classifier to generate the intent probability.
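
For illustration only, and not Vela’s actual model, here is a minimal sketch of the idea: a small neural-network classifier (scikit-learn’s MLPClassifier over bag-of-words features, with invented utterances and intent names) whose predict_proba output is the probability used for the decisions below.

    # Sketch of intent classification with a probability score (toy example).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # Invented training utterances mapped to intents.
    utterances = [
        "what should I do about high blood pressure",
        "my blood pressure reading seems high",
        "can you send me info on lipitor",
        "what are the side effects of this statin",
        "I need to reschedule my appointment",
        "can we move my visit to next week",
    ]
    intents = ["BP_info", "BP_info", "med_info", "med_info", "scheduling", "scheduling"]

    clf = make_pipeline(
        TfidfVectorizer(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    clf.fit(utterances, intents)

    message = "what should I do about high blood pressure?"
    probs = clf.predict_proba([message])[0]
    best = probs.argmax()
    # Prints the top intent label and its probability score.
    print(clf.classes_[best], round(float(probs[best]), 3))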

The key to human-augmented response is the intent classification probability. Let’s look at a schematic showing the response process:

Noteworthy aspects of this flow (a code sketch follows the list):

  • a probability threshold can be set to whatever is appropriate, depending on the setting (e.g. off-hours responses)
  • when the probability is above the threshold, the human expert need not be involved
  • when the probability is below the threshold, the message is routed to the human expert, and that response provides additional machine learning
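
A rough sketch of that routing logic, with invented threshold values and setting names, might look like this:

    # Hypothetical routing logic for blended (human + automated) responses.
    THRESHOLDS = {
        "business_hours": 0.90,    # conservative: route most messages to a nurse
        "after_hours_info": 0.75,  # informational-only setting tolerates more automation
    }

    def route(message, setting, classify, canned_responses):
        """classify(message) -> (intent, probability); decide who answers and with what."""
        intent, prob = classify(message)
        if prob >= THRESHOLDS[setting]:
            # Above threshold: send the automated response, no human involvement needed.
            return {"responder": "bot", "intent": intent, "text": canned_responses[intent]}
        # Below threshold: hand off to the human expert; their confirmation or
        # override is captured as new training data (see the next section).
        return {"responder": "nurse", "intent": intent, "suggested": canned_responses.get(intent)}

    # Example use with a stub classifier standing in for the intent model.
    decision = route(
        "can you send me info on lipitor?", "after_hours_info",
        classify=lambda m: ("med_info", 0.93),
        canned_responses={"med_info": "Here is a link to Lipitor information."},
    )
    print(decision["responder"], decision.get("text"))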

Stronger Responses With Practice

As the system generates suggestions below the threshold, each human confirmation or override becomes training data from which the system learns. The result is a further increase in efficiency without sacrificing quality.

Humans in the loop provide data for machine learning.
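
One simple way to capture that loop, purely as an illustration, is to log every confirmation or override as a labeled example for the next retraining run:

    # Sketch of the human-in-the-loop feedback step: every below-threshold
    # suggestion the nurse confirms or overrides becomes a new labeled example.
    import csv, datetime

    def record_feedback(message, suggested_intent, nurse_intent, path="feedback.csv"):
        """Append one human-reviewed example; nurse_intent is the ground-truth label."""
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.datetime.now().isoformat(timespec="seconds"),
                message,
                suggested_intent,                   # what the classifier guessed
                nurse_intent,                       # what the nurse actually chose
                suggested_intent == nurse_intent,   # confirmation vs. override
            ])

    # A confirmation and an override, both usable as training data on the next retrain.
    record_feedback("can you send me info on lipitor?", "med_info", "med_info")
    record_feedback("my chest feels tight", "BP_info", "escalate_to_clinician")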

Using this process flow, a healthcare organization can set an artificially high threshold at first and review the probability scores for each suggestion over a period of time. In this way a threshold can be chosen that provides an acceptable tolerance, and it can be adjusted for different settings:

  • after-hours (no human experts available except for emergencies)
  • informational (only non-patient related informational inquiries)
  • specific task-oriented (proactively conducting an assessment)

The Best Of Both Worlds

By combining (‘blending’) human expert messaging with automated response we improve efficiency without sacrificing quality. In a setting like health care it’s crucial to be able to drive quality of care while at the same time driving efficiencies. This is what the Vela.care team has been focused on.

We should not spend time worrying about nurses losing their jobs; instead, our attention needs to be on augmenting their work and providing them leverage. Case loads are increasing and we have a nursing shortage in the U.S.

This isn’t futuristic and it isn’t science fiction; it’s upon us now, and it is real.

Watch the talk here.