Neuroprosthetics in the Reel World: The LUKE Arm

How far are we from engineering something straight out of Star Wars?

Oct 11
Design by Albert Yeung, Neurotech@Berkeley.

[Excerpt from Star Wars: Episode V — The Empire Strikes Back, screenplay by Leigh Brackett and Lawrence Kasdan; Story by George Lucas]

At that instant, Vader’s sword comes down across Luke’s right forearm, cutting off his hand and sending his sword flying. In great pain, Luke squeezes his forearm under his left armpit and moves back along the gantry to its extreme end. Vader follows. The wind subsides. Luke holds on. There is nowhere else to go.

VADER: …join me and I will complete your training. With our combined strength, we can end this destructive conflict and bring order to the galaxy.

LUKE: I’ll never join you!

VADER: If you only knew the power of the dark side. Obi-Wan never told you what happened to your father.

LUKE: He told me enough! It was you who killed him.

VADER: No. I am your father.

At Neurotech@Berkeley, what catches our attention isn’t Luke’s attempt to replace his father — 40-year-old spoiler alert: he does get a “new” father, so to speak — but the arm he receives at the end of the movie. It’s a remarkable neuroprosthetic: fully under neural control, deft enough to handle a lightsaber, and, most importantly, reflexively responsive to touch stimuli. Incombustible metallic bones aside, it’s indistinguishable from the real thing.

It’s also the dream of many neurotechnologists. How far are we from engineering something straight out of Star Wars? We’ll discuss this question in two main parts: the first, a primer on neuroprosthetic devices in general; the second, coverage of a recent breakthrough in the field.

PART I: Neuroprosthetics

1.1: Record and decipher brain activity before, during, and after a voluntary movement.

Three recording techniques dominate the field. The first two, electroencephalography (EEG) and electrocorticography (ECoG), measure electrical activity from the scalp and from the surface of the cortex, respectively. The third, electromyography (EMG), measures electrical pulses not from the brain but from associated muscle groups. We single it out because it has received a lot of attention lately: interested readers might look into CTRL-Labs, an EMG startup recently acquired by Facebook for the tidy sum of $1 billion. Whether the source is EEG, ECoG, or EMG, decomposing these signals into their constituent frequencies yields useful data: the alpha, beta, and other patterns we commonly call “brain waves.” We are now ready for Phase 2.
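As a toy illustration of that last step, here is a minimal band-power computation on a synthetic one-second recording. The sampling rate, band edges, and signal are all invented for the example — real pipelines use longer recordings and more robust spectral estimators:

```python
import numpy as np

def band_power(signal, fs, band):
    """Average spectral power of `signal` within `band` (lo, hi) in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    lo, hi = band
    return power[(freqs >= lo) & (freqs < hi)].mean()

# Synthetic 1-second recording: a 10 Hz (alpha-band) rhythm plus noise.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(seed=0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

alpha = band_power(x, fs, (8, 13))   # the injected rhythm lives here
beta = band_power(x, fs, (13, 30))
print(alpha > beta)  # → True
```

Comparing band powers like this is the simplest way a decoder can tell which rhythms dominate a window of activity.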

1.2: Relay deciphered signals to an external computer and process them.

This must suit the patient’s needs and work with great efficiency in real time. We can design a program that takes data, makes guesses that extrapolate on that data, evaluates those guesses’ accuracy, and takes that accuracy into account when extrapolating next. This enables it to continuously learn, unlearn, and relearn based on external stimuli and with minimal human intervention.

This kind of algorithm — one that accepts certain things as facts and builds on those facts to form an evaluation framework — is essentially a limited artificial mind that allows neuroprosthetic devices to be “trained” on a particular user. You may recognize the name for many such programs: neural networks, themselves a family of machine learning algorithms. Once satisfied with the quality of our algorithm, we can move on to the final step: Phase 3.
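The guess-evaluate-adjust loop described above can be sketched in a few lines. Everything here is a toy stand-in — the two-feature “signals,” labels, and learning rate are invented, and a real decoder is far richer — but the predict/compare/correct cycle is the same:

```python
import numpy as np

class OnlineDecoder:
    """Toy linear decoder trained with the perceptron rule:
    guess, compare the guess with reality, correct the weights."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x):
        return 1 if x @ self.w > 0 else 0      # guess: "grasp" or "rest"

    def observe(self, x, label):
        error = label - self.predict(x)        # 0 when the guess was right
        self.w += self.lr * error * x          # nudge weights toward the truth

# Linearly separable toy data: label is 1 when the first feature dominates.
rng = np.random.default_rng(seed=1)
X = rng.standard_normal((500, 2))
y = (X[:, 0] > X[:, 1]).astype(int)

decoder = OnlineDecoder(n_features=2)
for xi, yi in zip(X, y):
    decoder.observe(xi, yi)                    # learn from each new sample

acc = np.mean([decoder.predict(xi) == yi for xi, yi in zip(X, y)])
print(acc)
```

Because the decoder updates after every sample, it keeps adapting as new data arrives — the “learn, unlearn, and relearn” behavior described above, with minimal human intervention.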

1.3: Create a three-pronged communication network between human (the brain), computer, and the machine (the prosthetic device).

This is where our algorithm comes in: the computer tells the device to execute the same motion repeatedly and stores data on the relative successes and failures. This data is called feedback data, as it provides information on how effective the system is and allows the algorithm to make alterations. We repeat this process until the user can effectively perform the action with a negligible failure rate. At that point, when the prosthetic hand’s sensors encounter an object, the algorithm automatically closes the fingers around it.
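A cartoon of that repeat-measure-adjust loop, using an entirely made-up toy system whose error rate shrinks after each correction (the window size, tolerance, and decay rate are all illustrative assumptions):

```python
import random

def train_until_reliable(execute, adjust, max_trials=1000, window=50, tol=0.02):
    """Repeat a motion; after every failure, let the algorithm adjust itself.
    Stop once the failure rate over the last `window` attempts drops to `tol`."""
    recent = []
    for _ in range(max_trials):
        ok = execute()
        if not ok:
            adjust()                           # feedback: correct the parameters
        recent = (recent + [ok])[-window:]
        if len(recent) == window and recent.count(False) / window <= tol:
            return True                        # negligible failure rate reached
    return False

class ToyGrip:
    """Stand-in for the real system: each adjustment shrinks its error rate."""
    def __init__(self):
        self.error = 0.5
    def execute(self):
        return random.random() > self.error
    def adjust(self):
        self.error *= 0.8

random.seed(0)
grip = ToyGrip()
result = train_until_reliable(grip.execute, grip.adjust)
print(result)
```

The feedback data — the record of successes and failures — is exactly what drives the `adjust` step; without it, the loop could never converge.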

Illustration by Amy Wang.

While these three phases provide a general model for neuroprosthetics that move on brain command, one challenge has proven especially hard to overcome: incorporating the sense of touch into prosthetic limbs. This is where we return to Star Wars.

PART II: The LUKE Arm


Illustration by Amy Wang.

Developed by DEKA Integrated Solutions Corporation and the University of Utah with funding from DARPA (Defense Advanced Research Projects Agency), the LUKE arm is a robotic prosthetic arm built from metal motors, with a silicone “skin” covering the hand, as shown in the picture above. It is powered by an external battery and connected to an external computer.

The original LUKE arm, approved by the FDA for commercial use in 2014, was the first computer-driven prosthetic arm that could perform different movements at the same time. With up to ten powered joints and multiple grip patterns, it was designed for a large range of motion. It is also currently the only commercially available neuroprosthetic limb with a powered shoulder.

The new LUKE arm, however, mimics not only the signals that the brain sends to the hand but also the signals the hand sends back to the brain. In this way, the newly upgraded LUKE arm can restore amputees’ sense of touch and, in turn, their ability to grasp delicate objects.

2.1: The Three Phases as Seen in the LUKE Arm

As noted before, the first phase is to record and decipher brain activity before, during, and after the execution of voluntary motion. The LUKE arm uses EMG because it provides larger, more easily detectable amplitudes, is noninvasive, and has fewer problems with compromised sensory feedback.

In particular, LUKE gathers muscle feedback through the Utah Slanted Electrode Array, or USEA, which relies on a bundle of 100 electrodes implanted into the nerves of the upper arm, above the amputation site.

Illustration by Amy Wang.

Phase 2 and Phase 3 work as before: the USEA relays signals to a computer, which interprets them and converts nerve signals into digital signals that command the prosthetic arm. It is after Phase 3 that the LUKE arm becomes very exciting:
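As a cartoon of that conversion step, the sketch below maps one window of (simulated) muscle activity to a discrete command. The rectify-smooth-threshold scheme and every number in it are illustrative assumptions, not the LUKE arm’s actual pipeline:

```python
import numpy as np

def emg_to_command(window, threshold=0.3):
    """Rectify one window of EMG samples, smooth it into an envelope,
    then threshold the envelope to choose between two grip commands."""
    envelope = np.convolve(np.abs(window), np.ones(16) / 16, mode="same")
    return "close" if envelope.max() > threshold else "open"

rng = np.random.default_rng(seed=2)
rest = 0.05 * rng.standard_normal(128)   # quiet muscle
flex = 0.8 * rng.standard_normal(128)    # strong contraction

print(emg_to_command(rest))   # → open
print(emg_to_command(flex))   # → close
```

Real decoders classify many gestures, not two, but the principle is the same: a digital summary of nerve or muscle activity becomes a motor command.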

2.2: Feedback to the Brain

A small experiment: take any small object that’s lying around you, grasp it, and pick it up. Notice how effortless it was to avoid dropping it or crushing it. Now imagine a scenario where your only sense of the object was from seeing it — one where you felt absolutely nothing between your fingers. It would be very difficult to avoid applying too much pressure, crushing the object, or applying too little pressure and failing to pick it up.

The sense of touch is enormously important in manipulating objects. With the LUKE arm’s feedback mechanism, amputees can grasp and manipulate objects with dexterity approaching that of an intact arm. To reproduce touch even more faithfully, the University of Utah has developed a mathematical model of how the human body naturally delivers these signal patterns.
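To see why touch feedback matters for grip, here is a minimal proportional controller closing a loop between a (simulated) force sensor and a motor command. The gain, tolerance, and linear sensor model are all invented for illustration — without the sensor reading, the loop would have nothing to correct against, which is exactly the sighted-but-numb scenario above:

```python
def settle_grip(target, read_force, gain=0.5, steps=100, tol=0.01):
    """Tighten or loosen the grip until the sensed force matches the target."""
    command = 0.0
    for _ in range(steps):
        force = read_force(command)
        error = target - force
        if abs(error) < tol:
            break                          # gentle but firm: close enough
        command += gain * error            # correct in proportion to the error
    return command

# Toy sensor: sensed force is a fixed fraction of the motor command.
sensor = lambda cmd: 0.8 * cmd

cmd = settle_grip(0.4, sensor)
print(0.8 * cmd)   # sensed force ≈ target
```

The loop settles within a handful of iterations; touch feedback turns grasping from open-loop guesswork into a self-correcting process.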


Thanks for reading, and may the Force be with you.

This article was co-authored by Shreyash Iyengar and Josephine Tai.

Shreyash studies Electrical Engineering and Computer Science at UC Berkeley.

Josephine studies Molecular and Cell Biology at UC Berkeley.

This article was edited by Christopher Zou, an undergraduate student at UC Berkeley who studies Neurobiology and Computer Science.

For a list of sources, please contact Neurotech@Berkeley at


Writers, consultants, engineers, and designers working toward advancing neurotechnology for the benefit of humanity.


Written by

We write on psychology, ethics, neuroscience, and the newest in neural engineering. @UC Berkeley


