Creative Direction

The process has begun. My first step is to look back at how I approached my rather successful previous project on Bang & Olufsen. For that project I wrote a logline, did a couple of creative exercises to get my mind going, and developed a thorough project schedule to keep me on track. This time, because I have more time to produce the project, I have been much more contemplative and have yet to produce anything substantial.
The Process
The above mock-up is what first came to mind when I thought about what an entirely Augmented Reality (AR) desktop experience could be like. In the image I have added a holographic internet browser (the cool, new, still-in-beta Opera Neon browser) hanging in mid-air, with a slight hint of transparency and backlighting to draw the eye towards the hologram. My further thinking was that the glass surface on the table would be kept entirely free to make space for holograms of things like the user’s folders, bookmarks and incoming notifications, while the far edge would be equipped with a horizontal bar reminiscent of the 2016 MacBook Pro’s Touch Bar. This bar, however, would also use holography to display the operating system’s main categories as 3D icons or text links, or to change dynamically based on voice commands. I will probably end up doing several versions of the same concept illustration in order to examine different means of user input.

The Touch Table
Microsoft actually developed a touch-screen table that worked just like any smartphone screen, but it unfortunately never saw any remarkable market penetration (possibly due to its high price). The HoloLens headset is, however, capable of turning any surface into a screen, so perhaps there is also a way of making such interfaces compatible with hand gestures?

Hand Tracking
Ever seen the Spielberg movie ‘Minority Report’ (2002)? It used some futuristic technology to sell its premise, but what few people seem to know is that the motion control Tom Cruise used in the movie was in fact very real technology developed by Oblong Industries. Seeing as the HoloLens can understand gestures made in front of its camera, I don’t see why this sort of technology couldn’t be adapted for an AR interaction scheme, perhaps combined with the touch table above. Only there would be no touch screen built into the table: the touch element would instead be simulated by 3D projections, which the AR device could then interact with, making the whole AR-screens-everywhere concept possible without requiring a display built into every single surface.

Voice Control
Another intriguing approach to human-computer interaction (HCI) is voice control. It seems like every major technology company is working on a virtual assistant, Amazon’s Alexa being particularly popular, not least with third-party companies. Furthermore, movies like Spike Jonze's 'Her' (2013) have explored the uncertain territory of a virtual assistant that becomes so human-like that a person might actually start behaving as if they have a relationship with their operating system. It is of course a fictional take on where the future could take us, but nevertheless relevant to keep in mind.
My personal interest in voice control is as a means of hiding clutter from the PC experience: instead of managing archives of files, hunting for a missing program shortcut, or spending the few clicks and seconds it takes to navigate the Start menu and launch an app, you would simply ask your computer for what you need. It is all about incrementally improving HCI, even if only by a small amount.

The Road from Here
So, as discussed above, I have many thoughts on how to go about designing a conceptual illustration of an AR interface, and, as my tutors advised me upon seeing my mock-up, it is perhaps smarter to aim for a narrower approach instead of rebuilding an operating system from the ground up. In other words, it might be a better idea for me as a student to focus on a single element of the UI/UX instead of the entirety of it (even if I would like to design the entire OS). I found the advice quite helpful, and we had a great talk about the direction based on it. We all agreed that, as a practitioner, communicating the process of designing something matters far more than having a technically impressive product.

With the above in mind I am now working on developing a concise plan for how to approach the challenge of designing part of the user experience. I am more or less settled on the idea of analysing the HoloLens and Windows Holographic as a platform, because I see this device as the most future-proof one compared to the Meta 2 headset or the still undisclosed device from Magic Leap. Microsoft simply has more resources, a world-leading operating system and many developers currently working on native AR content. Furthermore, it is very easy for developers to start creating apps and content for the HoloLens because companies like Unity and Vuforia are partnering to make their platforms compatible with the device. And Unity just so happens to use Microsoft’s own programming language, C#, making the free game engine a perfect fit for developers.
To meet a high standard I aim to keep a tight schedule via a Gantt chart, which I will update daily. To further ensure that I can reach the end goal, I have developed the road map below, cutting my project brief into bite-size pieces and noting the reward for completing each task:

And with that covered I now have not only a timetable and a road map to get me across the finish line, but also a concise current goal: identify something in Windows Holographic to either add, because it is missing, or enhance, because it currently falls short.
My next post will probably be an analysis of the Holographic UI based on what material I can get my hands on.

