Neuroprosthetics in the Reel World: The LUKE Arm
How far are we from engineering something straight out of Star Wars?
[Excerpt from Star Wars: Episode V — The Empire Strikes Back, screenplay by Leigh Brackett and Lawrence Kasdan; Story by George Lucas]
At that instant, Vader’s sword comes down across Luke’s right forearm, cutting off his hand and sending his sword flying. In great pain, Luke squeezes his forearm under his left armpit and moves back along the gantry to its extreme end. Vader follows. The wind subsides. Luke holds on. There is nowhere else to go.
VADER: …join me and I will complete your training. With our combined strength, we can end this destructive conflict and bring order to the galaxy.
LUKE: I’ll never join you!
VADER: If you only knew the power of the dark side. Obi-Wan never told you what happened to your father.
LUKE: He told me enough! It was you who killed him.
VADER: No. I am your father.
At Neurotech@Berkeley, what catches our attention aren’t Luke’s attempts to replace his father — 40-year-old spoiler alert: he does get a “new” father, so to speak — but the arm he receives at the end of the movie. It’s a remarkable neuroprosthetic: fully under neural control, deft enough to handle a lightsaber, and, most importantly, reflexively responsive to touch stimuli. Incombustible metallic bones aside, it’s indistinguishable from the real thing.
It’s also the dream of many neurotechnologists. How far are we from engineering something straight out of Star Wars? We’ll tackle this question in two main parts: the first an overview of neuroprosthetic devices in general, and the second coverage of a recent breakthrough in the field.
PART I: Neuroprosthetics
We focus our view on brain-controlled artificial limbs, or motor neuroprosthetics, especially those that aid amputees and paralyzed patients. Generally, we can break down their function into three phases:
1.1: Record and decipher brain activity before, during, and after a voluntary movement.
Our brains are networks of neurons, nerve cells that transmit information using electrical and chemical signals. Let’s narrow our view to the part of the brain responsible for planning, control, and execution of voluntary motion — the motor cortex. To record electrical signals from the motor cortex, we’re aware of three main approaches. The first two are electroencephalography (EEG), in which a series of electrodes is placed on the subject’s scalp, and electrocorticography (ECoG), in which electrodes are surgically placed directly on the surface of the brain.
The third, electromyography (EMG), measures electrical pulses not from the brain but from associated muscle groups. We single it out because it has received a lot of attention lately: interested readers might look into CTRL-Labs, an EMG startup recently acquired by Facebook for a sum reported to approach $1 billion. After recording via EEG, ECoG, or EMG, decomposing these signals into their constituent frequencies yields useful data: the alpha, beta, and other patterns we commonly call “brain waves.” We are now ready for Phase 2.
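To make the frequency decomposition concrete, here is a minimal sketch of how a recorded voltage trace can be split into band powers using the FFT. The sampling rate, band boundaries, and synthetic signal are all illustrative assumptions, not parameters of any real recording system:

```python
import numpy as np

FS = 1000  # assumed sampling rate in Hz
BANDS = {"alpha": (8, 13), "beta": (13, 30)}  # canonical EEG band boundaries

def band_powers(signal, fs=FS):
    """Decompose a 1-D voltage trace into per-band power via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic example: a 10 Hz (alpha-range) oscillation plus a little noise.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1.0 / FS)
trace = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
powers = band_powers(trace)
print(max(powers, key=powers.get))  # the alpha band dominates
```

A real pipeline would use windowed spectral estimates rather than one raw FFT, but the idea — signal in, band powers out — is the same.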
1.2: Relay deciphered signals to an external computer and process them.
Since each neuroprosthetic is unique to the user, the computer in question will have to continuously learn the neural mechanism as it relates to the user’s desired movements. In the case of an artificial arm, for example: what kinds of frequencies can be associated with a certain kind of arm movement?
This must suit the patient’s needs and work with great efficiency in real time. We can design a program that takes data, makes guesses that extrapolate on that data, evaluates those guesses’ accuracy, and takes that accuracy into account when extrapolating next. This enables it to continuously learn, unlearn, and relearn based on external stimuli and with minimal human intervention.
This kind of algorithm — one that accepts certain things as facts and builds on those facts to form an evaluation framework — is essentially a limited artificial mind that allows neuroprosthetic devices to be “trained” on a particular user. For recognition’s sake, many such programs are called neural networks, which are themselves types of machine learning algorithms. Once satisfied with the quality of our algorithm, we can move on to the final step: Phase 3.
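The “guess, evaluate, adjust” loop described above can be sketched in a few lines. This is a toy perceptron-style learner, not the algorithm any real prosthetic uses; the two-number “band power” features and the open/close labels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decoder: map 2-D features (say, alpha and beta band power)
# to an intended movement, 0 = "open hand", 1 = "close hand".
weights = np.zeros(3)  # two feature weights plus a bias term

def predict(features):
    x = np.append(features, 1.0)        # append bias input
    return 1 if weights @ x > 0 else 0

def update(features, label, lr=0.1):
    """Nudge the weights only when the current guess is wrong."""
    global weights
    x = np.append(features, 1.0)
    error = label - predict(features)
    weights += lr * error * x

# Simulated training data: "close" intents show high beta, "open" high alpha.
for _ in range(200):
    label = int(rng.integers(0, 2))
    features = np.array([1.0 - label, float(label)]) + 0.1 * rng.standard_normal(2)
    update(features, label)

print(predict(np.array([0.1, 0.9])))  # high beta → "close hand"
```

The point is the structure, not the model: data comes in, a guess is made, the guess is scored, and the internal parameters shift accordingly — with no human in the loop.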
1.3: Create a three-pronged communication network between human (the brain), computer, and the machine (the prosthetic device).
Once the computer has laid the foundations for the prosthetic device to function, it relays the relevant information to the device. These data — a systematic set of instructions that the prosthetic device uses to move — are called control commands. This can pose a serious engineering challenge: how would you describe, for example, all of the motions needed to hold a cup? We might say “form a mirrored C-like shape with the thumb and curl fingers around the cup.” But the control commands must model every subpart of the C-shape and the curl, most of which are subconscious.
This is where our algorithm comes in: the computer tells the device to execute the same motion repeatedly and stores data on the relative successes and failures. This data is called feedback data, as it provides information on how effective the system is and allows the algorithm to make alterations. We repeat this process until the user can effectively perform the action with a negligible failure rate. At that point, when the prosthetic hand’s sensors encounter an object, the algorithm automatically closes the fingers around it.
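The repeat-and-refine training loop above can be sketched as a closed feedback loop. Everything here is hypothetical — the grasp simulator, the force thresholds, and the adjustment rule are stand-ins for the prosthetic’s sensors and control commands:

```python
# Hypothetical physical limits for one object: too little force drops it,
# too much crushes it. These numbers are illustrative only.
DROP_BELOW, CRUSH_ABOVE = 3.0, 7.0

def attempt_grasp(force):
    """Stand-in for the prosthetic's sensors reporting a trial's outcome."""
    if force < DROP_BELOW:
        return "dropped"
    if force > CRUSH_ABOVE:
        return "crushed"
    return "held"

force, step = 10.0, 1.0
history = []  # the "feedback data" the text describes
while (outcome := attempt_grasp(force)) != "held":
    history.append((force, outcome))
    # Adjust the control command based on what went wrong last time.
    force += step if outcome == "dropped" else -step

print(f"converged at force {force} after {len(history)} failures")
```

Each failed attempt becomes feedback data, and the command is adjusted until the failure rate drops to zero — exactly the cycle the text describes, minus the real-world complexity.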
While these three phases provide a general model for neuroprosthetics that move by brain-command, one of the things that has been hard to overcome is how to incorporate the sense of touch into prosthetic limbs. This is where we return to Star Wars.
PART II: The LUKE Arm
Developed by DEKA Integrated Solutions Corporation and the University of Utah with funding from DARPA (the Defense Advanced Research Projects Agency), the LUKE arm is a robotic prosthetic built from metal and motors, with silicone “skin” covering the hand. It is powered by an external battery and connected to an external computer.
The original LUKE arm, approved by the FDA for commercial use in 2014, was the first computer-driven prosthetic arm able to perform multiple movements at the same time. With up to ten powered joints and multiple grip patterns, it was designed for a large range of motion. It is also currently the only commercially available neuroprosthetic limb with a powered shoulder.
The new LUKE arm, however, not only mimics the signals that the brain sends to the hand but also the signals that are sent from the hand back to the brain. In this way, the newly upgraded LUKE arm can restore amputees’ sense of touch and in turn, their ability to grasp delicate objects.
2.1: The Three Phases as Seen in the LUKE Arm
The LUKE arm exhibits the three phases of our general neuroprosthetic model, but it also involves a feedback mechanism to the brain.
As noted before, the first phase is to record and decipher brain activity before, during, and after the execution of voluntary motion. The LUKE arm uses EMG because it provides larger, more easily detectable amplitudes, is noninvasive, and has fewer problems with compromised sensory feedback.
In particular, LUKE gathers these signals through the Utah Slanted Electrode Array (USEA), a bundle of 100 electrodes attached to the nerves in the upper arm, above the amputation site.
Phase 2 and Phase 3 work as before: the USEA relays signals to a computer, which interprets them and converts nerve signals into digital signals that command the prosthetic arm. It is after Phase 3 that the LUKE arm becomes very exciting:
2.2: Feedback to the Brain
So far, we’ve only talked about how nerve signals can be used to control the neuroprosthetic. But with the LUKE arm, this process is a two-way street — when the hand portion of the LUKE arm comes in contact with an object, some of the 119 contact sensors on the hand trigger stimulation through implanted microelectrodes, which send signals to the surviving arm nerves. This mimics the natural sense of touch, in which the limb relays signals back to the brain.
A small experiment: take any small object that’s lying around you, grasp it, and pick it up. Notice how effortless it was to avoid dropping it or crushing it. Now imagine a scenario where your only sense of the object was from seeing it — one where you felt absolutely nothing between your fingers. It would be very difficult to avoid applying too much pressure, crushing the object, or applying too little pressure and failing to pick it up.
The sense of touch is enormously important in manipulating objects. With the LUKE arm’s feedback mechanism, amputees can grasp and manipulate objects with dexterity approaching that of an actual arm. To reproduce touch even more faithfully, the University of Utah team developed a mathematical model of how the human body naturally delivers these signal patterns.
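One well-known property of natural touch nerves is that they fire hardest when contact *changes* — at the moment of grip and release — rather than during a steady hold. A toy mapping from sensor pressure to stimulation pulse rate can illustrate the flavor of such a model; the constants and the function itself are invented for illustration and are not the published Utah model:

```python
def stim_rate(pressure, prev_pressure, k_static=20.0, k_dynamic=200.0):
    """Toy biomimetic mapping: stimulation pulse rate (Hz) from sensor pressure.

    Combines a static term (how hard the contact is) with a change-sensitive
    term (how fast the contact is changing). All constants are illustrative.
    """
    change = abs(pressure - prev_pressure)
    return k_static * pressure + k_dynamic * change

# A pressure trace: contact begins, holds steady, then releases.
trace = [0.0, 0.5, 0.5, 0.5, 0.0]
rates = [stim_rate(p, q) for q, p in zip(trace, trace[1:])]
print(rates)  # the onset and release spikes dominate the steady hold
```

Sending a flat, pressure-proportional pulse train would feel unnatural; weighting the transients is what pushes artificial feedback closer to how real nerves encode touch.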
With the LUKE arm, we have our answer. As an early version of Luke Skywalker’s artificial arm, it suggests we are very close indeed to engineering something straight out of Star Wars. This, of course, only generates more questions. It will not be long before the functionality of neuroprosthetics matches or surpasses the real thing. At what point will neuroprosthetics become neuroenhancement? Is an artificial arm still a human arm? Facetiously, when will there be lightsabers to deftly manipulate?
Thanks for reading, and may the Force be with you.
This article was co-authored by Shreyash Iyengar and Josephine Tai.
Shreyash studies Electrical Engineering and Computer Science at UC Berkeley.
Josephine studies Molecular and Cell Biology at UC Berkeley.
This article was edited by Christopher Zou, an undergraduate student at UC Berkeley who studies Neurobiology and Computer Science.
For a list of sources, please contact Neurotech@Berkeley at firstname.lastname@example.org.