Why did design cross the flat road?
To get to a mixed reality.
Skeuomorphism is dead. It’s been dead for so long that we don’t really notice its absence any more. That said, I can still remember the Battle of Gloss, the Texture Front, and many others, as people wrestled with the Allied assault led by both Google’s Material Design and Apple’s iOS 7 guidelines.
Of course, the third ally was already there and just welcoming its compatriots into the fray: Windows had gone flat in Windows 8 with its Metro design language. There was a lot of rejoicing about this stylistic victory among designers, as the clean, sleek, and flat lines and shapes replaced the funny, endearing, and sometimes realistic counterparts.
However, a lot of others were up in arms. Amongst them were some of the biggest Apple fans, who kept pointing towards Apple’s older, more humanistic guidelines and, with them, the need for skeuomorphism. If the S-word is still not ringing any bells, it’s best explained as a design style that tries to mimic the activity and/or use associated with the subject in question. Apple had some great examples: a writing app with the icon of a bottle of ink, a calendar app copying the look of torn-off pages, buttons shaded and raised to imply their button-ness, that kind of stuff.
It makes things easy to anticipate, clear in their use, and… somewhat ugly and childish to certain designers. Myself included, by the way. I was one of the people who joined this Allied front against skeuomorphism almost instantly. But when poring over those famous Apple guidelines, I did start to question myself, and the need for the change entirely. The old concept seemed correct: skeuomorphic design actually helps with making sense of software. I might like all those thin lines and light cards, but they were making software harder to use. Why on Earth was this change happening?
Some ideas were floated; arguably the most convincing one was that we “got used to using our current interfaces” and “we did not need skeuomorphic design any more”: the training wheels could finally come off. That made sense. After all, we are still shackled to older skeuomorphic icons like the floppy disk to denote saving a document, so yeah, please get rid of stuff that nobody actually wants to remember. Yet, even then, wouldn’t it make sense to help people regardless of their skill level?
The three emperors’ new clothes
Floris and I started looking at these interfaces a bit more closely. Why use Material? Why use Metro? Why follow iOS 7?
Windows actually had a darned good reason to: tablets. Or more specifically, the Surface tablets. These PC-replacement devices needed to run full-fat Windows. Disregarding the Surface vs. Surface RT mess, the design language was rendered to make the most of the screen space: large icons instead of tiny lists, consolidated menus, and trimmed window-chrome to prevent accidental activation of actions. It also famously applied this to its smaller Windows Phone line, and while the smartphones never sold as well as hoped, the interface was a proper competitor to iOS and Android and scaled very well.
So a design language geared towards touch and differing screen sizes. Check.
Google went with cards as the core concept within Material. Everything was a card. It’s probably the most skeuomorphic element you can think of without actually being it. After all, a rectangle is beautifully minimalistic and inviting, like a real piece of paper. It also used depth and shadows to add a z-index: to dictate which item was on top of another and, through that, denote importance. Taking out superfluous window-chrome also helped point towards the most important actors on screen, while the cards contained and displayed content in an orderly fashion.
So a design language geared towards clarity and the importance of actions. Check.
Apple went with bright colours to denote interactive elements and texts. This created some confusion as to whether certain elements were links or buttons (making clear exactly why skeuomorphism was important to start with). Curiously, it also started blurring a lot of backgrounds, while using UI zooms alongside pans to denote position. It added an even more curious 3D tilt effect to its iOS wallpapers, mimicking depth (in a flat design). In the wake of the Apple Watch launch and moving towards iOS 10, another change was added to all of this: text was made bolder and took center stage as clear labels. It enhanced clarity, but — tellingly — also made text more readable on smaller screens, or rather, at smaller resolutions.
So a design language geared (and tweaked) towards three-dimensional projections and a greater dependency on text at varying resolutions. Check.
By your powers combined
OK, so you’ve read the headline and know what I’m getting at: mixed reality. All three of these design languages are made to facilitate a spatial OS within mixed reality. Not virtual reality like with the Oculus Rift and HTC VIVE headsets, not augmented reality like with Google Glass (though both will certainly tag along), but mixed reality. The stuff of Microsoft HoloLens and Intel’s Alloy. Just last week Microsoft actually announced the Windows Holographic Suite, an actual spatial OS, making that already a real thing.
But why is it so important to ditch skeuomorphism for mixed reality?
This is where the new elements of all three platforms come into play. Mixed reality is, of course, the real world overlaid with virtual data and/or visuals. It’s not a fully virtual world as in VR, and it’s not merely a single layer of information as in AR. This is truly mixed, with data following real-world shapes and objects, while actions in the real world manipulate the virtual.
Controlling and acting in a mixed reality is going to be highly contextual and performed by hands and speech. Why? Because they form the most natural interface. The mouse was kind of a crutch. You couldn’t interact directly with the screen, so a layer of software and hardware was put in between to facilitate translation. The iPhone burst the last shackle on the manipulation front and opened up a more logical one: touch. Or rather just doing what you want with your hands and fingers. Now you can directly interact with the screen.
“Yeah, but that still doesn’t explain the mixed reality thing?” Correct, but what happens when you take the screen out of that equation?
And that’s when it all slots together. The screen, I am sad to say, is also a crutch. It is merely a window, a limiting frame. We’ve grown comfortable with its limited view and have been steadily expanding its size, but the future holds no more size, just everything you can see. It is in need of a design language that is spatial.
Just look around you for a moment. Imagine you are viewing this through your “mixed reality smartphone device” (or whatever we will use eventually) and you get a call. What would happen? Would it project a phone for you on a table, to pick up and hold like a phone? In essence, be skeuomorphic? As enticing as that may sound, it’ll become real bothersome after, I think, the second call. You’ve just replaced a real phone with a virtual one and are cluttering the place with unnecessary virtual objects. Instead you’ll most likely get a notification. That notification is at a different information level, however, and needs to stand out from what you are looking at, which is the real world and contextual projections.
How can you enhance the contrast between the real world and the virtual? Remove any trace of the real world. Get rid of skeuomorphism while retaining all the elements of UI. “Getting used to interfaces,” indeed. We needed to be prepared for one that would function more clearly in mixed, projected views.
This is where Tom Cruise peeks around the corner to perform his Minority Report jazz hands bit. We’re not going to need his gloves or anything (or Tom himself, for that matter), but we are going to need the interface to conform to touch and gestures, to be clear and in high contrast, to obviously stack or blur information and denote importance, and to be able to be projected into a 3D space and remain legible. Text input? That’s going to be Siri, Cortana, or whatever-Google-is-eventually-going-to-call-their-agent, whispering in your ear: “waiting for dictation”. At least, until we get our brains hooked up directly.
All three — Google, Microsoft and Apple — have been moving in this direction. Some of them have been lauded as being at the forefront of VR, AR and MR, and some have been blasted for (publicly) not doing anything at all! But their design languages make it clear exactly what they’ve been cooking up. These design languages were never meant to enhance your smartphone; they were meant to be merely compatible with it. To provide a solid basis for a mixed reality interface to work, without ever having to shock people into completely relearning the tools of the trade; to prepare for a spatial OS.
Of course, this isn’t a guarantee that the design languages as they are now are going to be used directly, but they will evolve to incorporate more and more elements to enhance VR, AR and ultimately MR experiences.
So next time one of these three updates its design language, look very closely. You might be able to stare directly into Tom’s eyes.