Responsive design quickly became the biggest thing on the web since sliced bread. Everyone who’s anyone has a responsive website now, and the movement also revived business for existing projects that suddenly needed to “get with the times.” Why did this become so popular, so quickly? With the advent of mobile devices and screen sizes galore, the responsive web became the creative solution to the multitude of challenges this new technology presented. But the responsive web is just the first step in this evolution of adaptable design and user experience, and new concepts built on that breakthrough keep appearing online and in the products we use every day. It’s not enough to just adjust to a screen’s width or height; these devices pay attention to our every move and interaction, and are constantly aware of what’s going on around us. I am fascinated by this trend of proactive and context-aware user interfaces. Reducing friction, predicting user intent and changing the experience based on environment are the variables that shape the evolution of responsive design. Think of the automatic door at your local mall or grocery store.
It’s typical for user interfaces to react to a click or other explicit input, but the delight comes when you take it a step further and anticipate or harness the user’s action. Creating a proactive UI doesn’t need to be complex; let’s take a look at the Headroom widget. Headroom adds basic smarts to the typical top menu bar, sliding it out of view when you scroll down and sliding it back when you scroll up.
The concept, while simple to achieve, is more interesting than the execution. This widget solves for small screens and limited vertical space, but it also pays attention to the user’s focus. When the user scrolls down, they are actively scanning the content; the intent dictates the UI. It’s not until the user scrolls back up that we anticipate the intent to click a menu item and proactively display it, regardless of where they are on the page.
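The core of this behavior fits in a few lines. Here’s a minimal sketch of Headroom-style show/hide logic (the helper name is mine, not the widget’s API): track the last scroll position and let the direction of travel decide whether the bar should be visible.

```javascript
// Decide whether the top bar should show or hide based on scroll direction.
// A small tolerance ignores tiny jitters so the bar doesn't flicker.
function createScrollDirectionTracker(tolerance = 5) {
  let lastY = 0;
  return function onScroll(currentY) {
    const delta = currentY - lastY;
    lastY = currentY;
    if (Math.abs(delta) < tolerance) return "none"; // too small to act on
    return delta > 0 ? "hide" : "show"; // down = reading, up = wants the menu
  };
}

// In a browser you would wire this to the scroll event, e.g.:
// const track = createScrollDirectionTracker();
// window.addEventListener("scroll", () => {
//   navEl.classList.toggle("nav--hidden", track(window.scrollY) === "hide");
// });
```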
Let’s take this proactive concept a step further and anticipate what the user is attempting to click. The Aim plugin was created in an effort to predict what the user is going to click on your website and trigger the event before it happens.
Aim was originally inspired by Amazon’s mega drop-down menu, which solves a very important context problem that comes with this type of UI. Without Aim, when you try to move your mouse from the main menu to the submenu, the submenu disappears out from under you like some sort of sick, unwinnable game of whack-a-mole.
Relying on the :hover pseudo-class alone won’t help you when other menu items obstruct the mouse’s path. The key is to pay attention to user intent and present the content they mean to interact with, rather than the content they may accidentally trigger.
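The geometry behind this kind of menu is worth seeing. Here’s a rough sketch of the trajectory test that menus like Amazon’s are reported to use (the function names and the assumption that the submenu sits to the right are mine): keep the submenu open while the cursor is moving inside the triangle formed by its previous position and the submenu’s near corners.

```javascript
// 2D cross product: which side of the ray o->a does point b fall on?
function cross(o, a, b) {
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// True if the cursor moved from `prev` to `curr` while staying between the
// rays from `prev` to the submenu's top-left and bottom-left corners,
// i.e. the user appears to be aiming at the submenu.
function movingTowardSubmenu(prev, curr, submenu) {
  const upper = { x: submenu.left, y: submenu.top };
  const lower = { x: submenu.left, y: submenu.bottom };
  return cross(prev, upper, curr) >= 0 && cross(prev, lower, curr) <= 0;
}
```

While `movingTowardSubmenu` returns true, you would simply delay the mouseleave handler that closes the submenu.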
The Trial plugin is yet another solution designed to predict user input. The cursor’s proximity to your call-to-action or interactive input is another strong signal of intent; you can draw attention to your button by making it brighter, or scaling it up, as the mouse gets closer.
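A proximity effect like this boils down to mapping distance onto a style value. Here’s a small sketch (not the Trial plugin’s actual API; the names and the 1.0–1.25 scale range are my own choices) that converts cursor distance into a CSS scale factor:

```javascript
// Return a CSS scale factor that grows from 1.0 (far away) to
// 1 + maxBoost (cursor directly over the target).
function proximityScale(cursor, target, maxDistance = 300, maxBoost = 0.25) {
  const dist = Math.hypot(cursor.x - target.x, cursor.y - target.y);
  const closeness = Math.max(0, 1 - dist / maxDistance); // 1 near, 0 far
  return 1 + maxBoost * closeness;
}

// Browser wiring would look something like:
// document.addEventListener("mousemove", (e) => {
//   const s = proximityScale({ x: e.clientX, y: e.clientY }, buttonCenter);
//   button.style.transform = `scale(${s})`;
// });
```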
How about instead of detecting what the user will click on your website, we detect whether they’re about to leave it? How can you keep their attention with one last Hail Mary to pique their interest, or a last-ditch effort to capture the lead? Nudgr has developed a machine-learning engine that not only captures mouse movements, but weighs a number of variables to more accurately detect the user’s exit intent.
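Nudgr’s engine uses machine learning, but the classic exit-intent heuristic behind many of these tools is much simpler, and worth sketching (this is my illustration of the common pattern, not Nudgr’s implementation): a cursor moving quickly upward into the top edge of the viewport is usually headed for the tabs or the close button.

```javascript
// Heuristic exit-intent check: the pointer is inside the top edge zone
// and moving upward faster than a minimum speed (px per ms).
function isExitIntent(pointerY, velocityY, edgeZone = 20, minSpeed = 0.5) {
  return pointerY <= edgeZone && velocityY <= -minSpeed;
}

// In a browser you would compute velocityY from successive mousemove
// events and fire your "wait, don't go!" modal once this returns true.
```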
Now, you’ve probably already said or thought it a couple of times: “these are all mouse-based UIs, you can’t detect these things on a mobile device.” As a southpaw, I’ve often needed to adapt to the right-handed world, but sometimes left-handed people just want to be left-handed. Why not do our best to accommodate this unique trait? How about detecting natural gestures and using them to make some assumptions?
The natural scroll arc of the left thumb mirrors that of the right; by using gesture recognition we can capture the gesture pattern and subtly adjust the UI to fit, or ask the user if they’d like to set a left-handed preference. Now that swipe-to-enter control or button can be placed in a more natural position for the thumb.
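One cheap way to read that arc is to look at which way a swipe bows. Here’s a sketch of the idea (entirely my own illustration, not a production gesture recognizer; in particular, the mapping of curvature sign to hand is an assumption you would calibrate against real gesture data):

```javascript
// Guess handedness from three sampled points of a vertical swipe by
// checking which side of the straight start->end line the middle bows to.
function guessHandedness(start, mid, end) {
  const curl =
    (mid.x - start.x) * (end.y - start.y) -
    (mid.y - start.y) * (end.x - start.x);
  if (curl === 0) return "unknown"; // perfectly straight swipe
  // ASSUMPTION: which sign means which hand must be tuned with real data.
  return curl > 0 ? "left" : "right";
}
```

In practice you would sample touchmove points, classify many swipes, and only prompt for a left-handed preference once the guesses agree consistently.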
Microsoft has done a great amount of research on pre-touch interfaces that can detect which hand (or whether multiple hands) is holding the device, anticipate where the user is going to touch the screen, and proactively display the necessary UI.
Recently Apple filed for a patent to detect handedness based on the device hardware as well. It’s nice to see that all 10% of us southpaws are getting some user empathy.
How about a click-free world? Really, could there be such a thing? Who says you need to click a button in order to see what’s behind it? It’s an interesting concept, and one the folks at Don’t Click It are tackling with their research on click-less interactions.
OK, so killing the click altogether may be a little extreme, although the site does feel like it’s almost reading my mind, and it feels very proactive, like someone opening the door for you. It reminds me of the experience Tesla is creating with their cars: you walk up to the car, it senses you, and before you know it the door is opening for you, then closing behind you.
So what’s an easy entry into the click-less world? In particular, I love the idea of not having to use the mouse wheel, click or even swipe to scroll down the page or pan through content. jQuery Scroller, or this nifty little fiddle, does just that. I’m not saying I’m lazy, but if all I have to do is move my wrist slightly to see 10 pages of content, that’s definitely less friction than the alternative of scrolling more slowly with my finger over and over.
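Cursor-position scrolling is really just a mapping from where the pointer sits to how fast the page should move. A minimal sketch (my own naming and speed curve, not the plugin’s code):

```javascript
// Map the cursor's vertical position to a scroll velocity in px per frame:
// top of the viewport scrolls up at -maxSpeed, center holds still,
// bottom scrolls down at +maxSpeed.
function scrollVelocity(cursorY, viewportHeight, maxSpeed = 30) {
  const offset = (cursorY / viewportHeight) * 2 - 1; // -1 .. 1 around center
  return offset * maxSpeed;
}

// Browser wiring would run this every animation frame:
// requestAnimationFrame(function step() {
//   window.scrollBy(0, scrollVelocity(mouseY, window.innerHeight));
//   requestAnimationFrame(step);
// });
```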
The same concept also works for mobile, where your content turns from 10 pages into 40 on the small screen and scrolling turns into 40 swipes or taps. Phones are getting bigger and information is increasing, so we could definitely look at using the accelerometer; at least, this is what the maker of TiltScroll has done in an effort to enable our laziness.
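Tilt scrolling can be sketched the same way, with one extra wrinkle: a dead zone around the natural holding angle so the page holds still until you deliberately tilt. This is my illustration of the technique, not TiltScroll’s source; the rest angle and gain are made-up defaults you would tune.

```javascript
// Convert the device's front-back tilt (the deviceorientation event's
// `beta` angle, in degrees) into a scroll delta in px per frame.
function tiltToScroll(betaDegrees, restAngle = 40, deadZone = 5, gain = 2) {
  const delta = betaDegrees - restAngle; // how far past the holding angle
  if (Math.abs(delta) <= deadZone) return 0; // dead zone: no accidental drift
  return (delta - Math.sign(delta) * deadZone) * gain;
}

// window.addEventListener("deviceorientation", (e) => {
//   window.scrollBy(0, tiltToScroll(e.beta));
// });
```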
This type of UI may work in some cases, but what if you suddenly need to put your phone down? That may send your page scroll into oblivion. So how about we reduce that massive number of swipes to just one, with a virtual joystick cleverly named Nipple.js for its likeness to the human body part of the same name? A simple drag of the joystick and voilà, you’re sailing through your page with barely any effort at all.
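Turning a joystick reading into page movement is a one-liner of trigonometry. Nipple.js reports each move as an angle and a force; here’s a hedged sketch of how you might project that onto the scroll axis (the function and the maxSpeed value are mine):

```javascript
// Project the joystick vector onto the vertical axis. Screen y grows
// downward, so pushing the stick up (angle PI/2) should scroll up
// (a negative delta). Force is clamped to 0..1.
function joystickToScroll(angleRadians, force, maxSpeed = 40) {
  const clamped = Math.min(Math.max(force, 0), 1);
  return -Math.sin(angleRadians) * clamped * maxSpeed;
}

// With nipplejs, wiring might look like:
// joystick.on("move", (evt, data) => {
//   window.scrollBy(0, joystickToScroll(data.angle.radian, data.force));
// });
```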
This joystick component makes for a nice segue into fluid touch gestures. Made popular by Ralph Thomas and endorsed by the mobile guru himself, Luke Wroblewski, these fluid gestures may help solve the tap overload we see in mobile UIs and start to look proactively at how users naturally move around a touch screen. With one fluid motion, you can select an email recipient and bring up a new message pane without ever lifting your finger. Now that’s what I’m talking about!
Proactive UI is on the rise; with patents, plugins and explorations of all kinds uncovering new ways to read your mind, it’s exciting to see what’s next.
Context-Aware UI
In an attempt to make our world and the IoT more ubiquitous and aware, we capture the context of both the available information and the variables of the environment to craft the ideal user experience. Let’s start with a basic context-aware scenario where a user wants to search for specific content. If your website covers a wide range of verticals, categories and topics, then it’s important to show all relevant info based on the user’s interest. Myspace (yes, I said Myspace) actually does this very well with its federated search results. When you visit the site and just start typing, you get results under every category in real time.
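The grouping behind federated search is straightforward to sketch. This is my own toy illustration of the pattern, not Myspace’s implementation: filter everything that matches the query, bucket the hits by category, and cap each bucket so every vertical stays visible.

```javascript
// Group matching items by category, keeping at most `perCategory` titles
// per bucket so no single vertical drowns out the others.
function federatedResults(query, items, perCategory = 3) {
  const q = query.trim().toLowerCase();
  const grouped = {};
  for (const item of items) {
    if (!item.title.toLowerCase().includes(q)) continue;
    grouped[item.category] = grouped[item.category] || [];
    if (grouped[item.category].length < perCategory) {
      grouped[item.category].push(item.title);
    }
  }
  return grouped;
}
```

In a real UI you would debounce the keystrokes and render one result panel per category as the user types.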
Now, taking it a step further, let’s understand the context of the user. Our devices, whether desktop or mobile, can literally listen to us. If I want, I should be able to talk to my phone, tablet or even my house and have it listen, because we’re just bossy like that. Enter the technology used to create Siri, Cortana and Alexa… these ladies pack a big punch when it comes to context awareness.
You don’t need a massive team of engineers to leverage this type of technology to make your interfaces easier to interact with. Try out the popular voice-control libraries and see how you can speak a command to submit a search, pause your music or navigate to the next story in your favorite blog. Punchcut.com has also done some significant research in the field of voice interfaces (VI).
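At their core, these libraries turn a transcript into an action by matching command phrases. Here’s a minimal sketch of that dispatch step (my own helper, not any particular library’s API), with a comment showing how Chrome’s Web Speech API could feed it:

```javascript
// Match a spoken transcript against a table of command phrases and run
// the first matching action, passing along whatever follows the phrase.
function matchCommand(transcript, commands) {
  const text = transcript.trim().toLowerCase();
  for (const [phrase, action] of Object.entries(commands)) {
    if (text.startsWith(phrase)) return action(text.slice(phrase.length).trim());
  }
  return null; // nothing recognized
}

// Browser wiring via the Web Speech API (Chrome prefixes it):
// const rec = new webkitSpeechRecognition();
// rec.onresult = (e) => matchCommand(e.results[0][0].transcript, commands);
// rec.start();
```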
Let’s switch gears for a second and see if we can solve a problem for all the nomophobes out there. God forbid we ever have to deal with the complete and utter torture of a dead battery. If we use our powers of context, we can detect when the battery on a device is low and give some options to squeeze the last drop out of these suckers. Studies have found that dark or predominantly black interfaces can prolong battery life on AMOLED displays by up to 40%.
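The browser’s Battery Status API (where supported) makes this easy to try. Here’s a sketch: a tiny theme-picking helper of my own design, with commented wiring to the real `navigator.getBattery()` call; the 20% threshold is an arbitrary choice.

```javascript
// Switch to a dark theme once the battery is low and not charging.
// The 20% threshold is an assumption, not a standard.
function pickTheme(level, charging, threshold = 0.2) {
  return !charging && level <= threshold ? "dark" : "default";
}

// Battery Status API wiring (support varies by browser):
// navigator.getBattery().then((battery) => {
//   const apply = () =>
//     (document.body.dataset.theme = pickTheme(battery.level, battery.charging));
//   apply();
//   battery.addEventListener("levelchange", apply);
//   battery.addEventListener("chargingchange", apply);
// });
```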
The same can be said for squeezing the last drop out of the battery in something as major as your car. We’re quickly entering a world of nomophobia in our vehicles, with a new ailment known as range anxiety, so let’s do what we can to proactively save the day. We can capture your current location, look through all the possible destinations and serve up the magic. Picture this scenario: you’re driving along in your electric car, you’ve been doing some serious car karaoke, and your available battery is starting to deplete because you weren’t paying attention. Your Tesla can monitor all the variables and say, “hey dude(tte), you have two options: head to the nearest Supercharger, or get back to your house before you hit the point of no return.” This sort of context-aware UI can literally save your day.
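Stripped of the telemetry, the “point of no return” check is a simple comparison. A toy sketch of the idea (my own illustration; a real car would factor in terrain, traffic, climate control and more):

```javascript
// Keep only the destinations reachable with a safety reserve to spare,
// sorted nearest-first so the UI can lead with the easiest save.
function reachableDestinations(remainingKm, destinations, reserveKm = 10) {
  return destinations
    .filter((d) => d.distanceKm + reserveKm <= remainingKm)
    .sort((a, b) => a.distanceKm - b.distanceKm);
}
```

When the list shrinks to one or two options, that’s exactly the moment the car should speak up.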
What are your thoughts on this evolution of responsiveness? Do you have a favorite feature that blew your mind, or somehow became your latest obsession because it just “got you”? Please share your articles, stories or ideas on the topic. Like I said, I’m fascinated by this subject and I’d love to hear your perspectives.
Who is this guy?
Practicing design since 2001 and ever learning, Brandon is currently the Director of Digital and Creative Services at CKR Interactive, one of the nation’s largest employment marketing agencies, where he has worked since 2007. He is responsible for providing leadership and innovative, strategic solutions to meet the needs of a clientele spanning a diverse range of industries.
Before working at CKR Interactive, Brandon held a position as Multimedia Designer at Motorola Mobility (Formerly Netopia, Inc.) and led the design team for website fulfillment for their partner network. He was also instrumental in the enhancement and creative development of their proprietary web development platform NXG™ and was in charge of in-house and partner design-team training of the platform and demonstrating product releases.
Never one to shy away from a challenge, Brandon is comfortable making the hard decisions. He has a passion for design, and solving interesting problems within a collaborative team environment. Autonomous, competitive, curious and analytical with a tenacious work ethic, he considers the world of design a labor of love.