A HUD for Reality
In the Sherlock episode "His Last Vow," the villain Charles Augustus Magnussen wears glasses which seemingly superimpose the intimate vulnerabilities of everyone he meets onto his vision. Although it turns out to be a clever trick of his elaborate mind palace and not related to any device at all, it nevertheless leaves the audience wondering…
What kinds of (consumer) experiences would benefit from a heads-up display on reality?
1. What details do I wish to know about another person’s life? Superimposing social media onto a person could be helpful. It might let me inquire more deeply into a friend’s life: what they care about, what they are thinking about. For people whose first names and details I’ve forgotten, it might act as a helpful reminder. It would be especially valuable for anyone with memory loss.
2. Can the system record my life as well as display previous details? It could act as a transcript of the words I’ve said and heard, the things I’ve seen, the places I’ve been and the things I’ve experienced. It could assist in my learning process by matching my experiences to the appropriate resources that can improve my life. My own personal tutor. Of course, this also has obvious advertising benefits, as we shall see with Snap’s Spectacles.
3. Is it possible to catalog all the products that I come across in life? To add information that informs my purchase decisions, like carbon footprint, supply-chain labor practices, or nutritional composition? A massive amount of data harmonization would need to occur to make this information accessible. That task becomes somewhat easier if retailers and/or producers see a benefit to themselves in participating. Or it could be crowd-sourced. Right now fashion and apparel identification is being integrated into photo apps. See Hook and Pinterest.
4. What sports would benefit from this kind of interface? Cycling, racing, running, boating… anything where one wants to keep a pace and that doesn’t entail lots of jostling motion. Driving would benefit from real-time route finding and course correction, as well as other performance metrics which would otherwise be displayed on the dashboard. Provided that we don’t all have self-driving cars first. Controlling drones is another example… though drone control is better suited to VR than AR.
5. What is the range of surveillance-state applications? Security, policing, protection. They all seem to want to know more about a person in order to assess the threat level of their environment. Observable details like heart rate and pupil dilation can be amplified and brought to conscious attention, à la the sci-fi show Continuum. Of course, recording is already being done by bodycams for future analysis and evidence. But with identity recognition, known details could be added in real time as well… such as arrest warrants.
6. How can the visual limitations of an environment be augmented? Firefighters can use sensors tuned to a range of the EM spectrum to which smoke is transparent, creating a clear picture of the environment. People with color blindness can have corrected colors added to their field of view by their glasses. People can see at night. Inspectors can see gas leaks. A whole range of phenomena which are invisible to humans can be rendered visible.
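The color-correction idea above is already a well-known algorithm, usually called daltonization: simulate what a color-deficient viewer sees, compute the color information that was lost, and push that error into channels the viewer can still distinguish. Here is a minimal sketch in Python; the matrices are illustrative approximations for protanopia, not calibrated values a real pair of glasses would ship with.

```python
import numpy as np

# Approximate protanopia simulation matrix in RGB space.
# Illustrative values only; a real product would use calibrated
# matrices derived from cone-response models.
SIM_PROTAN = np.array([
    [0.567, 0.433, 0.000],
    [0.558, 0.442, 0.000],
    [0.000, 0.242, 0.758],
])

# Error-redistribution matrix: shift the lost red/green contrast
# into the green and blue channels, which remain distinguishable.
SHIFT = np.array([
    [0.0, 0.0, 0.0],
    [0.7, 1.0, 0.0],
    [0.7, 0.0, 1.0],
])

def daltonize(rgb):
    """rgb: float array of shape (..., 3) with values in [0, 1].
    Returns colors corrected for a protanopic viewer."""
    simulated = rgb @ SIM_PROTAN.T   # what the viewer would perceive
    error = rgb - simulated          # information lost to the deficiency
    corrected = rgb + error @ SHIFT.T
    return np.clip(corrected, 0.0, 1.0)
```

Note the useful property that neutral grays pass through unchanged (each simulation row sums to 1, so the error is zero), while saturated reds pick up a blue shift that restores some of the lost contrast.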
7. What use would the ability to identify objects in my surroundings be? The catalogs of information from ornithology, entomology and dendrology would all apply to a wilderness hike. I could distinguish edible from inedible plants, or identify the types of grubs in my lawn and match them with the right pesticide. Sky watching with AR allows me to identify constellations, satellites and planets. Artwork in museums would no longer need a plaque to describe it.
8. What public databases could be used to show me place details? I could see house pricing / assessment details on a drive through a neighborhood. Like having Zillow on your head while you drive. I could see the police report history of a street corner. I could have city names superimposed over my view across the river, from the top of a skyscraper or from an airplane. Or building street numbers could appear when I’m trying to find a destination.
9. What are the cross-linguistic opportunities? Reading signs, billboards and documents in my own language is something already possible with Google Translate. Having someone’s speech instantly transcribed and translated into my language would make conversations like watching a subtitled movie. Currency conversion for the prices of products I see, and callouts for the expressions and body language specific to a culture, would help me in my travels.
10. Do I wish to be able to read people better? A Sherlock Holmes type of deduction about people based upon the details I see makes for a good party trick. Transcribing the emotional content of people’s faces, comportment and body language would benefit people with autism. And the micro-expressions of public speakers could be used to rate their truthiness, like a real-life Lie to Me. Of course, salesmen and customer support staff would also benefit from greater emotional awareness… as well as from your purchase history.
These are just a few of the possible applications which a more intelligent field of view might provide. And as smartglasses tech advances to the point where some of this actually becomes possible to display in real time, another key question will be whether any of it is necessary when we all already carry an even more powerful device, with a screen and a camera, in our pocket.