Google Glasses: A Use Case For Future Wearable Technology

Lupo Benatti
7 min read · Dec 7, 2019

This year we have witnessed the rise of several startups focusing on AR glasses and smart glasses[i], including North Focals, “which bring minimalistic smartphone notifications into the field of view plus Alexa voice controls”[ii], and Vuzix Blade, which shows real-time language transcription on your lenses.[iii] Do they have what it takes to succeed? Is the market ready? To better answer these questions, we’ll look back at where it all started.

Source: Google Glass

In 2013, Google launched its latest invention, the Google Glasses — a small, lightweight wearable computer with a transparent display for hands-free work[iv]. This new wearable technology provided an augmented reality Head Mounted Display (HMD) that included a half-inch display, a camera, a speaker and microphone, Wi-Fi, and Bluetooth. However, as with many fashion electronics, the Google Glasses (aka the Glass) lacked a clear value proposition as well as a defined target customer segment.

On the one hand, research by GfK Global found that the Glass appealed to millennials, who showed purchase intent for wearable technologies[v]. On the other hand, the $1,500 price tag created the impression that the Glass was oriented towards wealthy techies and suited to a more mature target audience.

The Glass relied on enhancing the smartphone digital experience through hands-free voice and image recognition, analyzing data to provide useful insights on the go. On the surface, the idea of moving the efficiency of smartphones into a non-invasive eyewear technology was a bold vision. Nonetheless, the promise of “accessing your digital life without checking your phone”[vi] didn’t seem to suffice when evaluating the purchase of such an expensive product. “The team that built Glass neglected to understand its target customer and define why people need it[vii]”. In fact, given the wide scope of applications, the target market was divided into multiple customer segments, leaving a question mark over which specific customer needs the Glass addressed.

Data Collection

The main underlying promise was the user’s digital ubiquity. The combination of multiple sensors and connections created immense potential for data collection[viii]. Data was captured through the camera (and/or the microphone) placed next to the lenses and sent directly to the cloud to be analyzed. The computer would first perceive the surroundings as a series of pixels or as a vector image, and then organize the data through classification and feature extraction.[ix] By collecting a huge amount of data from multiple sources, the Glass had the opportunity to gather almost every piece of information about someone’s life. To some extent, this data-gathering process already existed in our smartphones, which called the actual need for the Glass into question. Nonetheless, the added value of the Glass was the combination of other technologies such as wearable computing, ambient intelligence, smart clothing, EyeTap technology, smart grid technology, augmented reality, 4G, and Android.[x]
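To make the classification-and-feature-extraction step concrete, here is a minimal sketch in Python. The features (mean brightness, a crude edge count), the class labels, and the centroids are illustrative assumptions, not Google’s actual pipeline:

```python
# Hypothetical sketch: reduce a pixel grid to features, then classify
# by nearest centroid. Not Glass's real (proprietary) vision pipeline.

def extract_features(pixels):
    """Reduce a grayscale pixel grid to a small feature vector:
    mean brightness and a crude horizontal-edge count."""
    flat = [p for row in pixels for p in row]
    mean_brightness = sum(flat) / len(flat)
    edges = sum(
        1
        for row in pixels
        for a, b in zip(row, row[1:])
        if abs(a - b) > 50  # a large jump between neighbors counts as an edge
    )
    return (mean_brightness, edges)

def classify(features, centroids):
    """Assign the label whose feature centroid is closest (squared distance)."""
    def dist(f, c):
        return sum((x - y) ** 2 for x, y in zip(f, c))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Centroids a trained model might have learned (invented values).
CENTROIDS = {"sky": (200.0, 0.0), "text": (120.0, 6.0)}

scene = [[210, 205, 200], [198, 202, 207], [203, 201, 199]]  # bright, uniform
print(classify(extract_features(scene), CENTROIDS))  # -> sky
```

A real recognizer would learn far richer features from labeled training images, but the shape of the process — raw pixels in, compact features out, label from a trained model — is the one the article describes.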

Data Processing

By taking in the user’s surroundings through several input devices, and thanks to a constant network connection, the camera could digitize the reflected image of each scene and send it to a computer, which then processed the data. First, the Glass linked to the Web through tethering, that is, by sharing the smartphone’s Internet connection via hotspot.[xi] Second, the computer simplified the image by extracting the important information and leaving out the rest to build an image classification model.[xii] Third, it analyzed the geometrically encoded images by feeding the data to a trained predictive model, forming constructs depicting physical features and objects. Once the computer had processed the image, it would send the output to a projector attached to the glasses. “The projector sends the image to the other side of the beam splitter so that this computer-generated image is reflected into the eye to be superimposed on the original scene.”[xiii] This process focused on taking in visual data, processing it, and providing an output in the form of computer-mediated reality, commonly known as augmented reality,[xiv] allowing the Glass to overlay computer-generated data on top of the normal world the user would perceive.
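The capture → tethered upload → classification → overlay round trip above can be sketched as a chain of stages. All names and stand-ins here are assumptions for illustration; Glass’s real pipeline was proprietary:

```python
# Hypothetical sketch of the round trip: digitize a frame, ship it to a
# remote model over the tethered connection, render an AR overlay.

def capture_frame(camera):
    """Digitize the reflected scene into raw pixel data."""
    return camera()

def cloud_process(frame, model):
    """Tethered step: the frame travels over the phone's connection and a
    trained predictive model returns labels for recognized objects."""
    return model(frame)

def render_overlay(scene_labels):
    """Build the computer-generated layer the projector superimposes
    on the wearer's view of the original scene."""
    return [f"[AR] {label}" for label in scene_labels]

# Wire the stages together with invented stand-ins for camera and model.
fake_camera = lambda: [12, 240, 12, 240]
fake_model = lambda frame: ["street sign"] if max(frame) > 200 else []

overlay = render_overlay(cloud_process(capture_frame(fake_camera), fake_model))
print(overlay)  # -> ['[AR] street sign']
```

Splitting the work this way mirrors the article’s point: the headset only captures and displays, while the heavy recognition runs remotely — which is exactly why the tethered link becomes the bottleneck discussed below.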

The Glass Advantage

The Glass offered additional services activated through other inputs, including voice controls and external notifications. The features included live video chat, mobile maps, voice-activated commands, and the opportunity to receive and interact with your smartphone’s notifications while staying hands-free.[xv]

Sources and Challenges of Data Collection and Processing

The major source of data collection was the built-in camera, which could absorb data from the moment the Glass was worn. This was an extraordinary improvement in data gathering, in both quality and quantity, since the world is largely made of “interactive information that is largely visual in nature”.[xvi] Processing the data fast enough for dynamic augmented reality became accessible thanks to 4G technology, which raises another question about the future of wearable technology: what could be developed today with the rapid rise of 5G? How could the process be enhanced?

On the downside, the Glass faced a couple of major challenges. Despite the huge potential for data collection, the efficiency of the image recognition model could have been hindered by the hardware’s processing power and by the cleansing of input data, an essential step in correctly classifying the true values.[xvii]

Academic researchers “have been able to quantify delay (latency) characteristics, reliability, and performance of the device, which depends on the number of messages and the size of exchanged images.” They thus proved that the Glass could reach saturation points and therefore encounter connection losses.[xviii] Furthermore, voice recognition models are still not optimized for flawless performance, which hampers both the collection and the processing of the data.
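The saturation effect the researchers describe can be illustrated with back-of-the-envelope arithmetic: per-frame latency grows with image size, and the link saturates once the required throughput exceeds capacity. All numbers below are invented for illustration, not measured Glass figures:

```python
# Illustrative latency/saturation arithmetic (assumed, not measured, values).

def round_trip_ms(image_kb, bandwidth_mbps, base_latency_ms=80.0):
    """Base network latency plus upload time for one image of image_kb KB."""
    transfer_ms = (image_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return base_latency_ms + transfer_ms

def is_saturated(images_per_s, image_kb, bandwidth_mbps):
    """The link saturates when required throughput exceeds link capacity."""
    required_mbps = images_per_s * image_kb * 8 / 1000
    return required_mbps > bandwidth_mbps

# One 200 KB frame over an assumed 4 Mbps tethered link:
print(round(round_trip_ms(200, 4)))  # -> 480 (ms per frame)
print(is_saturated(5, 200, 4))       # -> True (8 Mbps needed > 4 Mbps link)
```

Even under these generous assumptions, five frames per second already overwhelm the link — which makes plausible both the saturation the researchers measured and the article’s question about what 5G’s extra headroom would enable.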

Another threat, regarding both data collection and user safety, was the constant interaction between the user and the Glass. The glasses could seriously obstruct the user’s attention while walking or driving by instantly showing unrequested notifications, such as a pop-up message or the result of a misunderstood voice command. Was the Glass’s benefit worth the distraction?

Privacy, The Main Challenge

The privacy breach presented the main challenge for the Glass. Not only was the ubiquitous collection of data challenged, but specifically the idea that at any point in time a Glass user could take an unauthorized photo or video of other people. In fact, people were quick to complain about security measures and the potential privacy breach that the Glass presented.

Tensions arose and the term “Glasshole” was coined to categorize the first Glass users.

Google, which already has access to all our searches, attempted to provide users with extra benefits through the Glass in exchange for ubiquitous data. The fine line lies in the definition of privacy, which for Google means “what you’ve agreed to”. Those who wear the Glass might agree to share their information with Google, but what about everyone else?

Non-users, on the other hand, worried that every Glass user could be filming everything and uploading it to Google’s servers, thus breaching their privacy, since none of them had agreed that Google could aggregate, sift, and profit from their data.[xix]

The impact on the market and innovation performance

Mostly due to a widely shared lack of societal consensus over the ultimate purpose of the Google Glasses, Alphabet had to halt sales and withdraw the product in 2015.

The setback raised public awareness of privacy concerns and of Google’s unregulated access to its users’ data. Market-wise, compared to its wearable-technology rival Apple, Google took a strong hit to its IoT reputation due to the intense marketing campaign and the poorly performing results. Google attempted to capitalize on the initial hype around the product’s potential, without success.

Among the KPIs that Google could have possibly adopted, four pillars emerged:

1) the number of Glass units sold and their growth rate; 2) adoption of a dedicated Glass app (assuming an application was developed to better manage the wearable); 3) the number of live recordings made with the Glass and stored or shared; 4) which features were used most and least.[xx]

Google’s vision of digital ubiquity and connectivity had to wait, and it eventually prevailed in new, more settled forms, like Google Home and Google Nest. It was an important lesson for future innovations at Google, which will now remember that, "in an exploding digital market, emerging technologies need to have clear value to their users by solving clear problems."[xxi]

The Glass didn’t disappear completely. In 2017, Google pivoted from its original idea and relaunched the product as the Glass Enterprise Edition[xxii], this time with a different target customer and a new value proposition aimed at farms and factories. With the new target group, the product is now gaining traction, but will customers be ready to embrace this new wave of eyewear technology? Hopefully, 2020 will give us an answer.

Sources:

[i] https://www.wareable.com/ar/the-best-smartglasses-google-glass-and-the-rest

[ii] https://www.wareable.com/ar/future-of-ar-smartglasses-7677

[iii] https://www.vuzix.com/Blog/72

[iv] https://www.google.com/glass/start/

[v] https://www.campaignlive.co.uk/article/wearable-tech-google-glass-too-expensive-target-audience/1219009

[vi] https://www.bynorth.com/focals

[vii] https://www.mediapost.com/publications/article/244524/google-glass-and-market-research-a-cautionary-tal.html

[viii] https://hbr.org/2014/11/digital-ubiquity-how-connections-sensors-and-data-are-revolutionizing-business

[ix] https://marutitech.com/working-image-recognition/

[x] https://www.slideshare.net/SaiCharan41/google-glass-ppt-presentation

[xi] https://www.washingtonpost.com/news/the-switch/wp/2013/07/30/were-using-a-ton-of-mobile-data-with-google-glass-were-about-to-use-a-whole-lot-more/

[xii] https://marutitech.com/working-image-recognition/

[xiii] https://viscircle.de/einsteigerguide-ein-kurzer-ueberblick-ueber-eye-taps/?lang=en

[xiv] https://wikivisually.com/wiki/EyeTap

[xv] https://www.slideshare.net/SaiCharan41/google-glass-ppt-presentation

[xvi] https://en.wikipedia.org/wiki/EyeTap

[xvii] https://marutitech.com/working-image-recognition/

[xviii] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5191122/

[xix] https://www.theguardian.com/technology/2013/mar/06/google-glass-threat-to-our-privacy

[xx] https://www.quora.com/As-the-PM-for-Google-Glass-Enterprise-Edition-which-metrics-would-you-track- How-do-you-know-if-the-product-is-successful

[xxi] https://medium.com/nyc-design/the-assumptions-that-led-to-failures-of-google-glass-8b40a07cfa1e

[xxii] https://nymag.com/intelligencer/2017/07/the-rebirth-of-google-glass-on-the-factory-floor.html
