1960s-2010s: Humanistic Intelligence and History of Wearable Computing

Synced · Published in SyncedReview · May 17, 2017

Sandy Pentland’s 1998 Scientific American article “Wearable Computing” references Marshall McLuhan’s oft-quoted phrase “the medium is the message.” Pentland observed that while wearables function in the same fashion as most information and communication technology, they “are more personal […] because they are a constant part of one’s physical presence.”

Today people are largely represented by their mobile phones, which collect, store and share data on behalf of their owners. In the future, the cellphone as a medium may be replaced by wearables or other forms of brain-computer interface. In the context of wearable computing, the human body is becoming the new medium that communicates new messages.

Short History of Wearable Computing

The Dark Ages: Claude Shannon and Ed Thorp, the Gamblers.

Wearable computing is a subfield of human-computer interaction. HCI strives to engineer the relationship between humans and machines — an obvious HCI application is using a mouse and keyboard to operate a PC. One of the earliest ideas in HCI, however, involved having humans actually wear a gadget.

In 1961, Ed Thorp and Claude Shannon built a cigarette-pack-sized analog device intended to improve the odds at roulette by timing the release of the ball and calculating its probable final landing spot. In clandestine Las Vegas field tests, the device improved expected returns on bets by 44%. A decade later at Caltech, Alan Lewis built a digital computer hidden in a camera case to tackle the same task. Similar to Thorp and Shannon’s approach, it involved a data taker feeding observations to a computer and a radio link to the player at the table. After performing real-time calculations, the computer would relay predictions to the player through an earpiece. Eudaemonic Enterprises built upon these ideas in 1978, stuffing the entire transmission system into a shoe controlled by toe movements.
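For intuition, here is a minimal, hypothetical Python sketch of timing-based prediction in the spirit of these devices. The constant-deceleration model and every constant in it (pocket count aside) are illustrative assumptions, not Thorp and Shannon’s actual method, which also had to account for the spinning rotor and the ball’s fall.

```python
# Hypothetical illustration only: NOT Thorp and Shannon's actual algorithm.
# Predict where a decelerating roulette ball will land from two timed passes
# of the ball over a fixed reference mark, assuming constant deceleration.

POCKETS = 38        # pockets on an American wheel
DECEL = 0.5         # assumed ball deceleration, in revolutions/s^2 (made up)
DROP_SPEED = 0.5    # assumed speed at which the ball leaves the track, rev/s

def predict_pocket(t1: float, t2: float) -> int:
    """Estimate the landing pocket from two successive mark crossings
    at times t1 and t2 (seconds), measured with a hidden clicker."""
    period = t2 - t1                     # time for one revolution
    speed = 1.0 / period                 # current speed, rev/s
    # Time until the ball slows to DROP_SPEED under constant deceleration:
    t_drop = (speed - DROP_SPEED) / DECEL
    # Revolutions covered in that time: integral of v(t) = speed - DECEL*t
    revs_left = speed * t_drop - 0.5 * DECEL * t_drop ** 2
    # The fractional revolution remaining gives the angular offset from the
    # mark, which maps onto one of the numbered pockets.
    return int((revs_left % 1.0) * POCKETS)

# Example: the ball crosses the mark at t = 0.00 s and again at t = 0.40 s.
print(predict_pocket(0.00, 0.40))   # -> a pocket index in [0, 37]
```

Even this toy version shows why the scheme needed a wearable: the two clicks must be captured and processed in the seconds before betting closes, which is exactly what the radio link and earpiece provided.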

Thus the development of wearable computing began with quirky inventors going undercover at gambling tables. It would be years before such research left the casino and moved into more conventional environments.

Research Renaissance: MIT Wearable Computing Project.

Wearable computing surfaced as a research paradigm in the early 1990s, drawing interested researchers including Steve Mann, Thad Starner, Alex P. Pentland and others to MIT. The group initiated their wearable computing project under the MIT Media Lab.

Steve Mann and the WearComp Project on the cover of Linux Journal

“In the early days, a lot of it was gimmickry,” says Steve Mann, whose research took many different devices in many different directions. “I was inventing smart shoes, smart watches, smart eyeglasses, and implantable chips,” he recalls.

Indeed, the 1990s wearables gang had a broad range of interests, some of them far-fetched at times. Gerald Q. “Chip” Maguire Jr., a professor in Sweden, was considering neural interfaces, while others engineered fabrics, tactile displays, spatialized audio, HDR imaging and other wearable intelligent signal processors. Pentland wrote:

“It is too early to tell which approaches to wearable design will prove popular. Some people, for example, may be comfortable with head-mounted video displays […] The Media Lab, however, is not taking a passive attitude toward this issue.”

All of this happened against the backdrop of the AI winter, when funding for machine intelligence was in short supply. In 1994 the US Defense Advanced Research Projects Agency (DARPA) turned its attention to robotics applications for military use and funded the new “Smart Modules Program” to develop wearable and portable computers. In 1996, DARPA organized the “Wearables in 2005” conference, which kickstarted the ensuing military and commercial use of wearables.

Large corporations hopped on board as well. Boeing registered 204 people for a follow-up conference, while IBM, Toshiba, Motorola, and start-ups such as MicroOptical in Boston and the Flexible PC Company in Northfield also contributed to hardware development.

At the centre of the action were Carnegie Mellon, MIT, and Georgia Tech. These three universities co-hosted the first IEEE International Symposium on Wearable Computers (ISWC) in Cambridge, Massachusetts, in 1997.

New Frontiers: Wearable Computing and Humanistic Intelligence.

Today, personal computers and mobile devices are in their heyday. Researchers are flocking to standalone AI, focusing on how to automate self-learning intelligent systems. The interfaces for wearables, meanwhile, are evolving from smart screens to gesture commands, like those often seen in AR and VR commercials.

The early researchers at MIT went on to pursue different applications for wearables. Thad Starner led the technical team of Google Glass, probably the best-known wearable product on the market.

Steve Mann headed to the University of Toronto and founded the Humanistic Intelligence Lab (formerly the EyeTap Personal Imaging Lab), where he broadened and refined the concept of humanistic intelligence. One of its key conceptual frameworks is “mediated reality,” which helps us understand the interrelations between VR and AR. In mixed reality (MR), the system supports both virtual and augmented realities, and people can select, filter, and even delete information intake through wearable processing devices. Reality is by default “mediated.”
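To make the idea concrete, here is a minimal, hypothetical Python sketch of a mediation pipeline, assuming only that a wearable sits between the world and the eye. The function names and the toy frame representation are illustrative, not Mann’s EyeTap design.

```python
from typing import Callable, List, Sequence

Frame = List[List[int]]  # toy grayscale camera frame: rows of pixel values

def mediate(frame: Frame,
            augment: Sequence[Callable[[Frame], Frame]] = (),
            diminish: Sequence[Callable[[Frame], Frame]] = ()) -> Frame:
    """Pass the real-world view through the wearable: first add virtual
    content, then filter or delete real content, and return what the
    wearer actually sees."""
    for f in augment:       # AR-style additions
        frame = f(frame)
    for f in diminish:      # mediated-reality removals and filters
        frame = f(frame)
    return frame

# Example with a 2x3 frame: brighten the scene (augment) and black out the
# first two columns, e.g. to hide an unwanted billboard (diminish).
brighten = lambda fr: [[min(255, p + 40) for p in row] for row in fr]
blackout = lambda fr: [[0 if c < 2 else p for c, p in enumerate(row)]
                       for row in fr]

view = [[100, 120, 140], [90, 110, 130]]
print(mediate(view, [brighten], [blackout]))  # -> [[0, 0, 180], [0, 0, 170]]
```

The point of the structure is that addition and deletion are symmetric operations on the incoming view: VR replaces the frame wholesale, AR only augments, and mediated reality can do either, which is why Mann treats it as the more general framework.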

Humanistic Intelligence (HI) in Relation to Wearable Computing, Human-Computer Interaction (HCI), and Brain-Computer Interface (BCI)

In 2013, Mann joined Meta, an AR hardware start-up that raised $73 million in funding, as the company’s Chief Scientist. He also continued to work on various gadgets on the side, an extension of his career-long preoccupation with mechanics and design. These inventions include the Sequential Wave Imprinting Machine (S.W.I.M.), integral kinematics, and the hydraulophone, to name a few. Mann is trying to incorporate the S.W.I.M. into his new research project at Stanford University, where he uses an array of lights to “print” real-time electromagnetic and ultrasound waves for surgical applications, replacing devices such as the oscilloscope.

“In general, [research] has its ups and downs. It has oscillatory behaviour like any other system, full of excitement and hype, and disappointment and more hype. It is just like the wave of a sequential wave imprinting machine: cos(ωt) plus some phase shift, maybe. We have this phenomenology in just about any discipline,” says Mann.

Steve Mann is now a visiting full professor at Stanford University, working with Michel Kliot, Clinical Professor and Director of the Peripheral Nerve Centre.

Were he alive today, Marshall McLuhan might be surprised to find humans themselves becoming the new medium for the new messages they receive. As humans add more “plug-ins” to their clothing, bodies and minds, the world as it is perceived becomes a different place. Mann wants to make sure these things are well-engineered. He also believes that doing so requires not just STEM education, but DAST education, focusing on design, art, science and technology at once.

Author: Meghan Han | Infographic: Meghan Han, Hideaki Ishii | Localized by Synced Global Team: Michael Sarazen
