Paralysis is no match for Guitar Hero

In 2010, Ian Burkhart suffered a broken neck in a diving accident, and was left with a C5 spinal cord injury that resulted in paralysis from the elbows down.

Four years later, his skull was opened and a neural implant was inserted. Now, Ian can play Guitar Hero.

Ladan Jiracek, host of the Neural Implant Podcast, talked with Ian at the 2017 Society for Neuroscience conference. Together, they explored Ian’s experience and the psychology of having a neural implant.

TL;DR: in this post, I’ll discuss how a neural implant has helped a patient with quadriplegia play Guitar Hero, then share my thoughts on this incredible feat!

The Episode

In this episode of the Neural Implant Podcast, Ladan and Ian begin by discussing Ian’s incredible claim to fame: a brain-computer interface has enabled him to play Guitar Hero, despite his quadriplegia. This process has taken years of training, and requires both a cortical implant (i.e., a Utah array implanted in Ian’s left motor cortex) and a custom-built array of 130 electrodes wrapped around his right forearm.

The ground-breaking work was part of a study led by Dr. Chad Bouton of the Feinstein Institute, and its success was the result of a large collaborative effort between a multidisciplinary set of engineers, researchers, and surgeons.

Image: The Feinstein Institute—Dr. Chad Bouton!

When Ian was first approached about participating in the study, he faced a difficult decision despite the world-class researchers involved: surgeries, especially brain surgeries, are risky. There was no guarantee that the implant-enabled system would work, and it would take many months of training before Ian might see therapeutic improvements. Ian could have continued living his life, and making the most of it, without the risk.

Ian chose to go under the knife; his nerves were calmed, in part, by meeting with the neurosurgeon who would be performing the operation. This surgeon routinely inserts electrodes for deep-brain stimulation (a method of alleviating symptoms of Parkinson’s disease involving an invasive neural implant), and these electrodes travel far into the brain; a surgery that “only” involves inserting an electrode array into the surface of the cortex would be a proverbial piece of cake!

So, Ian gave it the good ol’ thumbs-up. Since receiving the surgery, he has spent 3.5 years working with the research team to improve the neural implant system. Although his training regimen of three days per week, for 3–4 hours a day, has been exhausting, Ian has made tremendous progress. He can perform everyday tasks (e.g. picking up a toothbrush), he can play Guitar Hero (!), and it’s now second nature for him to perform simple actions like opening/closing his hand just by imagining it.

Right now, Ian can’t take this setup out of the lab because there are still miniaturization and regulatory hurdles. But he feels strongly that the additional mobility given to him by the neural implant could significantly improve his daily life, and the lives of many other patients in similar positions.

The Literature

Here, we’ll get a little bit technical and talk about a 2016 paper that reported on Ian’s progress and the amazing work of the team behind the implant. Lead-authored by Dr. Bouton, the paper titled Restoring cortical control of functional movement in a human with quadriplegia dives into the details about this system for returning motor control to paralysis patients.


Before the scientists could implant the electrode array, they had to determine precisely where to insert it. It’s known that signals to control motor output (i.e., moving some part of your body) generally occur in the primary motor cortex (intentions to move, more or less, show up in the premotor cortex). Sure enough, when the researchers placed Ian in an MRI scanner and asked him to repeatedly imagine performing certain hand motions with his right hand (remember: he’s paralyzed, so he wasn’t actually performing these motions), they saw activation in a particular region of the left primary motor cortex, which is often abbreviated “left M1”. (One additional note: the mammalian brain uses contralateral representations…in other words, if I’m moving my right arm, the left hemisphere of my brain is mainly responsible for controlling it.)

With this assurance, Ian went under the knife. During surgery, the surgeon decided the precise placement of the electrode array, determined in part by the knowledge gained from the MRI scanner, and in part based on the surgeon’s visual identification of a location that wouldn’t damage major cerebral arteries. In the image below, the red regions indicate those which were activated while Ian imagined movement; the small green region is the footprint of the electrode array; the yellow region represents the overlap between the red and green regions.

Image: Bouton et al., 2016

As mentioned above, the implanted device was a Utah Array, manufactured by Blackrock Microsystems; this device is the current standard for clinical implantation in humans (a jargon explanation: “clinical” means “used to treat people”).


The set-up

Ian’s neural interface has two major components: the electrode array in his left M1, and an array of 130 electrodes wrapped around his right forearm.

When Bouton and his colleagues are recording from Ian’s brain, they connect a wire to a piece of hardware protruding from his skull. This wire, in turn, connects to a piece of hardware referred to as the headstage, which does initial amplification of the analog signal; the headstage then connects to another amplifier and signal processor called a NeuroPort, which in turn plugs into a computer. See the Blackrock website for details on a generic Utah Array + NeuroPort setup.

Image: Blackrock Microsystems

The second major component of Ian’s system is an array of 130 surface (non-implanted) electrodes wrapped around his right forearm. These electrodes can run different electrical stimulation patterns through his forearm muscles, causing them to contract in a variety of ways; each stimulation pattern produces a different hand movement.
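Putting the two components together, the system forms a loop: decode a movement intent from cortical activity, then trigger the matching forearm stimulation pattern. Here’s a minimal, purely illustrative sketch of that loop — the movement names, the toy nearest-centroid “decoder,” and the stimulation-pattern format are all my own assumptions, not the study’s actual algorithms (Bouton et al. used machine-learning decoders over many cortical channels).

```python
# Illustrative sketch of the decode-then-stimulate loop described above.
# All names and the nearest-centroid classifier are hypothetical.

MOVEMENTS = ["wrist_flexion", "wrist_extension", "thumb_flexion",
             "thumb_extension", "hand_open", "hand_close"]

# One (toy) stimulation pattern per movement: which forearm electrodes to
# drive, and at what pulse rate. Real patterns span 130 electrodes.
STIM_PATTERNS = {m: {"electrodes": [i], "pulse_hz": 50}
                 for i, m in enumerate(MOVEMENTS)}

def decode(features, centroids):
    """Classify a neural feature vector by nearest centroid (a stand-in
    for the study's real decoder)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda m: sq_dist(features, centroids[m]))

def closed_loop_step(features, centroids):
    """One cycle: decode intent, look up the stimulation pattern."""
    movement = decode(features, centroids)
    return movement, STIM_PATTERNS[movement]
```

In the real system, the feature vector would be computed from the Utah Array recordings many times per second, and the selected pattern would be delivered continuously to the forearm sleeve.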

Image: Bouton et al., 2016


For the last 3.5 years, Ian has trained to perform six separate movements with this neural implant. The training paradigms weren’t fully specified in this paper; in general, however, the setup entails Ian sitting in front of a monitor that displays virtual hand movements, and his goal is to mimic the hand movements shown on screen.

Image: Bouton et al., 2016

In the podcast, Ian mentions that the research team leverages mirror neurons by having a researcher sit next to Ian and make the movements he’s supposed to make. For the uninitiated (i.e., me until very recently), mirror neurons are neurons in the premotor cortex that respond when someone else makes a movement, in addition to when you make that movement. See this review for more detail. The Bouton et al. paper does note, however, that mirror neurons are not required for Ian to actually execute the movements, since he can be given textual or auditory cues and still perform the tasks.


The team set up a camera overhead to record Ian’s hand movements. To track his progress over time, they devised a benchmark task in which Ian has to perform 5 blocks of the 6 hand movements, with each movement being presented in random order. Three measures of effectiveness are employed.

  1. Based on video recordings, the researchers look at the percentage of video frames during which Ian’s hand movements match the cued hand movements; this is referred to as Accuracy. Accuracy can be decomposed into Sensitivity and Specificity: the former is the frame-matching accuracy during the periods when the visual cues were displayed, and the latter is the frame-matching accuracy outside of those cue periods.
  2. The researchers utilized a “functional movement task” to determine how well Ian would be able to perform a real-world movement. The task they chose was for Ian to pick up a bottle, pour dice from that bottle into a jar, then pick up a stick to stir the contents of the jar, then set the stick back down. Equivalent, it seems, to pouring cream into coffee.
  3. A psychiatrist performed a standardized Manual Muscle Test (MMT), known as the Graded and Redefined Assessment of Strength, Sensibility, and Prehension (GRASSP). In general, this test measures upper limb sensorimotor function. Five domains of movement are examined:
  • Strength
  • Dorsal (back) sensation
  • Ventral (front) sensation
  • Gross grasping ability, aka “qualitative prehension” (something like, “qualitatively, how good are you at picking things up?”),
  • Prehensile skills (something like, “quantitatively, how good are you at picking things up?”).


Ian and the research team achieved the following results:

Overall accuracy: The range was 93.1 +/- 0.5% (p < 0.01) for wrist flexion…to 97.3 +/- 0.3% (p < 0.01) for thumb flexion (wrist flexion and thumb flexion are two of the six movements Ian has learned to perform for this assessment)

Sensitivity (accuracy during the presentation of the cue): The range was 32.9 +/- 3.8% for thumb extension…to 81.9 +/- 2.6% for wrist extension (note the big difference in success between thumb extension and wrist extension — this is total speculation, but maybe this is because thumb extension requires finer-scale muscle movements than wrist extension. This hypothesis was inspired by my very scientific methodology of moving my thumb, and then my wrist, and then my thumb again. Did I mention this is speculative?)

Specificity (accuracy when the cue wasn’t presented): The range was 94.8 +/- 0.5% for wrist flexion…to 99.8 +/- 1.0% for thumb flexion


  • Ian’s apparent strength improved from that associated with a C6 spinal cord injury to that associated with a C7-C8 spinal cord injury
  • Ian’s gross grasping ability improved from that associated with a C7-C8 spinal cord injury to that associated with a C8-T1 level spinal cord injury
  • Ian’s prehensile skills improved from those associated with a C5 spinal cord injury to those associated with a C6 level spinal cord injury
  • A quick note on interpretation: C6, C7, C8, T1, etc. are all levels of the spinal cord, and each level of the spinal cord sends out different efferent/motor nerves to the body via its ventral roots, and receives different afferent/sensory nerves via its dorsal roots. If the spinal cord is injured at a specific location, it means that sensory information from below the injury can’t travel up to the brain, and motor information coming down from the brain can’t travel past the spinal cord lesion (lesion is just a fancy term for “injury” or “tumor” or “something that’s causing a disruption in the normal functioning of the nearby cells”). That’s why it’s such a meaningful improvement that, for example, Ian’s strength improved from a C6 injury to a C7-C8 injury — it’s as if the injury is lower on his spine, and there are fewer spinal cord segments that can’t communicate with his brain!

Functional movement task (similar to pouring cream into a cup of coffee and stirring): Ian was able to complete this task 3/5 times, taking 42 +/- 10 s each time.

As we see from the quantitative results, this study describes a neural interfacing system that offers tremendous opportunity to improve the quality of life for a patient with quadriplegia. If you have an opportunity to listen to this episode of the podcast, you’ll hear Ian testify as much!


What’s it like to have a neural implant?

Obviously, there aren’t many people who can answer this question in the way that Ian can.

Brief pause for a runaway tangent: here, I’m considering cortical implants, as opposed to non-cortical implants. Many people have non-cortical neural implants — the most prevalent are cochlear implants, which have so far helped ~400,000 patients with lost or diminished hearing. By way of brief overview, the cochlea is a small spiraled structure deep within the ear that turns sound into neural signals. Cochlear implants contain electrodes that get embedded into the cochlea, then stimulate the auditory nerve based on sound picked up by a microphone (instead of the damaged organic human ear). The auditory nerve carries neural signals about sound from the inner ear to the cochlear nucleus in the brainstem, then eventually to the thalamus, then to the auditory cortex. Sorry, that was a lot, I just find sensory pathways particularly awesome.

Returning from our runaway tangent…Ladan, the host of the Neural Implant Podcast, has interviewed one of the other rare humans who’s had the opportunity to use an experimental nervous system implant, Dr. Kevin Warwick. Dr. Warwick was the first human cyborg, and although his implant was in his arm rather than his cortex, he used the implant for something the cortex usually does: communicating with another person! Dr. Warwick and his wife both had electrodes implanted in their arms, and when one squeezed their forearm muscle, the other’s arm would feel it. This is simple, but it’s profound (profound because any kind of nervous-system-to-nervous-system communication seems pretty profound to me).

In my view, the fact that so few people have experience with advanced neural implants is a significant issue. I think that the path to ethical BMI involves a very gradual rollout that includes extensive interviews with patients about their experiences using neural interfaces. This is similar in sentiment to how user experience design is undertaken in software development: build a prototype, give it to some users, get buckets of feedback from them, and make modifications to the product accordingly — all before opening up the product to the public. It’s relevant to note that Ian feels as though he’s become a part of the research team, as opposed to just a patient; this is an embodiment of the UX design approach, where the user plays an integral role in product development (although it admittedly feels painfully trivializing to use e.g. iOS app development terminology to describe a subject as weighty as putting technology into someone’s brain for therapeutic purposes).

Perhaps one step forward in our society’s journey towards ethical BMI introduction is to create a regulatory/government-facilitated roadmap for rolling out new interfaces. This roadmap could, in part, specify protocols for interviewing users at each stage, with the protocols based on existing user research methodologies in the technology field. If we’re going to be changing the experience of being a person, we damned well better ask people what that new experience is like, and perhaps even build regulation around that inquiry process.

The earliest and most vocal proponents of neural interfaces will be…patients using neural interfaces.

Maybe this feels obvious, but I think it’s important nonetheless. In his interview with Ladan, Matt Angle from Paradromics hypothesizes that once millions of people are using BMIs for therapeutic purposes, the public will start really paying attention, and start to ask questions about enhancement/augmentation. I think the mechanism by which this hypothesis will turn into a verified thesis is that of enthusiastic testimonials, akin to “oh my gosh, this cataract surgery was amazing!”.

Unlike the development of other massively-influential technologies like the smartphone, BMI development will be driven by truly dire need. The improvement to a human life a BMI could bring is, to use a linguistically demonstrative but non-mathematical back-of-the-napkin calculation, at least an order of magnitude greater than the improvement a smartphone brings. Therefore, there will be significant market demand — and my best guess is that advocacy for neural implants within the public domain won’t come from organizations or politicians as much as it will come from the beneficiaries of neural implants: patients. I’m not sure if there’s precedent for a technology that has the data/security/post-humanity implications of e.g. the internet or Facebook, and the human life implications of medical technology.

That’s why I find this notion of patient-as-proponent to be significant: I don’t think it’s ever happened before. If you have thoughts on how this might play out (or if you think I’m entirely wrong, and clearly something else is going to happen instead), please write me!

Learning to use a BMI is exhausting.

In the interview, Ian comments that his training can be both boring and exhausting, a 10/10 combination if there ever were one. Broadly, this is due to the fact that in order to control a closed-loop neural implant system (closed loop = motor control + some kind of visual, somatosensory, or neural feedback), your brain has to undergo plasticity. That takes time, and energy.

Ian remarks that playing Guitar Hero was a far more enjoyable training task than, e.g., the GRASSP assessment. This makes me think that, for widely usable BMIs (therapeutic or otherwise), there will need to be a fun on-boarding process. Otherwise, patients won’t want to use the technology and therefore outcomes won’t be maximized. I see this as another scenario where commercialized/large-scale BMIs could take cues from technological product development.

In order for BMIs to advance, people need to make personal decisions to take a risk.

To directly quote Ian: “Research isn’t going anywhere unless you have individuals that are willing to do something that might be seen as risky.” This starkly contrasts with other domains of research, such as computer science. For example, there’s currently a raging debate about the ethics and safety of artificial intelligence; but, regardless of public discourse, large companies like Google and Facebook are eagerly assembling world-class teams of AI researchers whose progress is limited mostly by the rapidity with which their fingers click-clack on a keyboard, and by how many processors they can use to train their algorithms. BMI development, on the other hand, can’t go anywhere unless people want it — it requires consent. The basic research here requires consent, and this might be an automatically-enforced method of democratic conversation…the amount of progress made in human BMI research will be proportional to the human demand for it. (An immediate objection to this point: “Oh, but there will be demand amongst the socioeconomically advantaged…and that will just create a bigger socioeconomic rift when the already-rich-and-powerful get augmented with neurotechnology, but everyone else doesn’t.” Really valid objection, and my tentative response to myself here is that: 1) for the foreseeable future, most BMI research will be therapeutic and not augmentative; 2) therapeutic BMI applications can be covered by insurance, which drastically reduces the financial burden of BMIs and therefore reduces the potential for socioeconomically-stratified integration of BMIs into our society, as well).

In Conclusion

Yikes, that was a long one. In this post, I discussed Ladan’s super interesting interview with Ian Burkhart, an inspiring individual who’s made enormous progress in overcoming his quadriplegia with the assistance of a neural implant and the incredible team behind it. We dove in depth into a paper describing Ian and his implant system, and I then offered up some thoughts seeded by the interview.

One of my primary goals with The Substrate is to encourage an ethics-first conversation about brain-computer interfaces. To that end: comment, write me at, or find me on Twitter (@averybedows). I will respond! (seriously, I will).

Until next time,