Thad Starner—Pioneer of Wearable Computing

Yewon Park
Published in Digital Shroud
Apr 27, 2020

Thad Eugene Starner is a pioneer of wearable computing and the founder of the Contextual Computing Group at Georgia Tech's College of Computing. He graduated from the Massachusetts Institute of Technology in 1991 with a B.S. in Brain and Cognitive Sciences and a B.S. in Computer Science, then earned an M.S. and a Ph.D. in Media Arts and Sciences from the MIT Media Laboratory. His doctoral thesis, "Wearable Computing and Contextual Awareness," dealt with pattern recognition and how wearable computing can be put to a wide range of uses.

He has worn his own customized wearable computer since 1993 and has long advocated for everyday-use, continuous-access computing. While Starner is best known as a strong advocate for wearable computing, he also helped create one of the earliest handwriting recognition systems as an associate scientist with BBN's Speech Systems Group. The system applied pattern recognition to the on-line cursive handwriting of six subjects transcribing text, using a 26,000-word lexicon and 86 symbols. It could be adapted for use in a PDA or other tablet-based system, and it showed that methods from speech recognition carry over to handwriting. The work was published as "On-line cursive handwriting recognition using speech recognition methods."
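The article does not reproduce the paper's method, but the core idea of borrowing speech recognition machinery for handwriting can be sketched: model each vocabulary item as a hidden Markov model over pen-stroke features and recognize a new stroke by asking which model explains it best. The sketch below is a toy illustration using the hmmlearn library with synthetic features; the feature layout, state counts, and whole-word (rather than letter-level) models are assumptions made for brevity, not the paper's actual configuration.

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn


def train_word_models(training_data, n_states=4):
    """Fit one Gaussian HMM per word on pen-stroke feature sequences.

    training_data maps word -> list of (T, n_features) arrays, e.g.
    per-sample pen position and velocity. Purely illustrative.
    """
    models = {}
    for word, sequences in training_data.items():
        X = np.vstack(sequences)               # concatenate all sequences
        lengths = [len(s) for s in sequences]  # remember their boundaries
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[word] = m
    return models


def recognize(models, stroke_features):
    """Pick the word whose HMM assigns the stroke the highest likelihood."""
    return max(models, key=lambda w: models[w].score(stroke_features))


# Toy usage: random "strokes" stand in for real pen data.
rng = np.random.default_rng(0)
data = {word: [rng.normal(i, 1.0, size=(30, 3)) for _ in range(5)]
        for i, word in enumerate(["cat", "dog"])}
models = train_word_models(data)
print(recognize(models, rng.normal(0.0, 1.0, size=(30, 3))))  # likely "cat"
```

A system covering a 26,000-word lexicon would likely compose shared letter-level models into word models rather than training one HMM per word, which is exactly the kind of structure large-vocabulary speech recognizers use.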

Starner was a co-founder of the IEEE International Symposium on Wearable Computers (ISWC) and the first member of the MIT Wearable Computing Project, where he customized his own wearable computer system. He designed the hardware based on an original wearable by Doug Platt. It initially consisted of a head-up display and a one-handed chording keyboard called the Twiddler, which allows him to type about a hundred and thirty words per minute. The display seen through the glass is opaque, but because the human visual system merges the images from the two eyes, it appears to him as though he can see through it.

Starner’s First Wearable Computer

It lets him look at the person right in front of him and the computer screen at the same time. By 2008, the system had evolved to include a head-up display with 640x480 resolution, a Twiddler, and an OQO Model 1 ultra-mobile PC with a 1 GHz processor, 512 MB of RAM, a 30 GB hard disk, built-in Wi-Fi, and a mobile phone providing cellular Internet access. The latest iteration was Google Glass, which he helped develop as a technical lead while also teaching full-time at Georgia Tech.

Starner's augmented-reality glasses led him to form "intellectual collectives"; his publication "Wearable computing device capable of responding intelligently to surroundings" discusses this in more depth. He could access the Internet while talking to others and walking around, taking notes on a conversation in real time, and he could even hold two conversations at once: one online and one face-to-face. When he lectures or takes part in a discussion, he lets students attend remotely through his wearable's camera and microphone feeds; they can then text him ideas and answers that only he can see, and he moderates their input and shares the best contributions with his audience.

Many communities are benefiting from Google Glass as well. In a pilot study at the Stanford University School of Medicine, children with autism improved their social skills using a smartphone app paired with Google Glass that helped them understand the emotions conveyed in people's facial expressions. In addition to augmenting the outside world, Starner has said that he has a speech impediment, and that having a computer with him at all times has helped him speak more clearly when prompted by it.

Google Glass was used to help children with autism

In addition, Starner developed Passive Haptic Learning and Passive Haptic Rehabilitation, which allow wearable computer users to learn complex manual skills with little or no attention devoted to the learning.

You can learn more about it in this video:

The video shows how skills you have no prior experience with can be conditioned, without paying attention, simply by wearing a computerized glove called the "Mobile Music Touch," described in his publication "Mobile Music Touch: mobile tactile stimulation for passive learning." A vibration motor sits over each finger of the glove, and as a melody plays, the glove taps the finger that should press the corresponding key along with each note. Over the course of an hour or so you actually pick up the piano melody, because you have been conditioned by the finger taps that accompany the notes; this showed that muscle memory can be trained with very little attention.
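To make the mechanism concrete, here is a minimal sketch of the tap-scheduling loop such a glove needs, assuming a hypothetical vibrate() motor driver and an invented note-to-finger mapping; the real Mobile Music Touch firmware and its timings are not described in the article.

```python
import time

# Hypothetical melody as (note, finger index 0-4, duration in seconds).
# The note-to-finger mapping is an invented example, not the real system's.
MELODY = [
    ("C4", 0, 0.5), ("D4", 1, 0.5), ("E4", 2, 0.5),
    ("F4", 3, 0.5), ("G4", 4, 1.0),
]


def vibrate(finger: int, duration: float) -> None:
    """Stand-in for driving one finger's vibration motor (e.g. over GPIO)."""
    print(f"tap finger {finger} for {duration:.2f}s")
    time.sleep(duration)


def passive_practice_loop(melody, repetitions=3):
    """Replay the melody's finger taps so the wearer is conditioned passively.

    In the real glove the taps are synchronized with audio playback of the
    song while the wearer does something else; here print/sleep mark timing.
    """
    for _ in range(repetitions):
        for _note, finger, duration in melody:
            vibrate(finger, duration)  # tap the finger that plays this note
        time.sleep(1.0)                # brief pause between repetitions


if __name__ == "__main__":
    passive_practice_loop(MELODY)
```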

Picture of the Mobile Music Touch Glove

Moreover, his preliminary studies with people with partial spinal cord injury and stroke suggest that the same system might be used for hand rehabilitation. Rehabilitation researcher Debbie Backus asked whether the system could help patients with tetraplegia at a rehabilitation center, people who retain only some sensation and motor control after a severe spinal cord injury. Across multiple trials, patients who wore the gloves showed a greater increase in sensation over the eight-week trial than patients who did not, along with a significant increase in their ability to move objects around. Since this research is still relatively new, he has been looking for ways to apply it to other fields, especially for rehabilitation.

Furthermore, one of his most interesting research efforts focuses on applying wearable computing to American Sign Language (ASL). One project set out to distinguish individual ASL signs and phrases directly from the motor cortex using fMRI, and its applications extend to several areas that could help the community. One potential application is an interface for people who have amyotrophic lateral sclerosis (ALS): movements attempted by individuals with ALS generate brain signals similar to those produced by actual movements in neurotypical people.
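The article does not detail the fMRI analysis, but the general decoding recipe, often called multi-voxel pattern analysis, is to train a classifier on per-trial voxel activity and test whether it can tell the signs apart. The sketch below uses scikit-learn on synthetic data; the sign labels, trial counts, and voxel features are all invented for illustration and are not Starner's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in for per-trial motor-cortex activity: one vector of
# voxel activations per signing trial. Real inputs would come from
# preprocessed fMRI volumes.
rng = np.random.default_rng(42)
n_trials_per_sign, n_voxels = 40, 500
signs = ["HELLO", "THANK-YOU", "HELP"]  # invented example labels

X = np.vstack([
    rng.normal(loc=0.3 * i, scale=1.0, size=(n_trials_per_sign, n_voxels))
    for i in range(len(signs))
])
y = np.repeat(signs, n_trials_per_sign)

# A linear classifier over voxel patterns is the standard MVPA-style decoder.
decoder = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(decoder, X, y, cv=5)
print(f"cross-validated sign-decoding accuracy: {scores.mean():.2f}")
```

Above-chance accuracy on held-out trials is the usual evidence that sign identity is decodable from the measured activity.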

Starner testing feature extraction and hand ambiguity for American Sign Language

His hope is therefore to teach sign language to people with ALS before their nerve cells break down completely and muscle function is lost. He has also worked to build a bridge between the deaf and hearing communities that facilitates communication between the two. For example, his team developed a game for deaf children called CopyCat, designed to improve their memory skills by letting them practice sentence construction in American Sign Language; the team is working to improve it with a much larger vocabulary, longer sentences, and hand-shape features to handle that vocabulary.

This video further explains and demonstrates CopyCat:

In short, as a strong advocate for wearable computing, Thad Starner has helped and contributed to the community through accomplishments that bring computers closer to the individuals who use them, across a wide range of technologies.
