Why Your Car’s UI Sucks - Part 2 - Potential Guidelines and Solutions

Vehicle navigation and entertainment system UIs are a dog’s breakfast. Here are a few thoughts on cleaning up the mess.

Automobiles suffer from scattering the same UI elements across the vehicle’s control areas: touchscreen options are replicated on the dash (climate and audio controls) and on the steering wheel. This is largely a matter of convenience for automakers, so that if a buyer upgrades the navigation or entertainment package, the option is simply slapped in at the factory. It also contributes to the problem of controls not operating consistently across those scattered locations.

Additionally, these different physical UI elements have varying lifespans. Automakers have to contend with a product that needs to function for a minimum of five years, and will most likely have a lifespan of 10–20 years. At that point the touchscreen will likely be marred to hell, but those buttons on the wheel and dash will still be working. So, if specific settings can only be accessed through the touchscreen, that may become a long-term ownership issue.

From this we can cull a guideline of Consistency: all functionality should be accessible from all UI locations, in a similar manner. This also helps address the need for actual physical Durability in the interface, which is seldom a concern in software UI design. Durability needs to be a guideline too: the controls must be physically robust enough to survive the vehicle’s lifetime. Automakers are aware that a used vehicle says as much about a brand as a new one, if not more.

It’s easy to call for Consistency, but another thing entirely to implement it. Currently, vehicles present a dog’s-breakfast mix of touchscreens, touchpads, dials, rollers and buttons, each accessing varying functionality. This fragmentation of input technologies makes it hard to apply the same UI paradigm across control areas. Here’s the important takeaway: all those input devices are special cases of a gesture interface. Left or right, up or down, scroll, press: those actions are all gestural; they just happen to interact with a physical element or surface. That becomes an important thought for Consistency, because it lets us look at all the control areas in a unified manner. Yes, there’s a language shift there: control areas, not surfaces.

One thing automakers haven’t been is forward-looking or predictive of new interface technologies. Mercedes may bundle Night View Assist (or night vision to the rest of us) into its latest offerings, but how you get there isn’t nearly as innovative. Automotive companies need to predict now which user interface technologies will be commonplace within the next five years, if only because you can’t upgrade physical systems.

So where should automakers be looking? Google Glass offers a paradigm for heads-up display and interaction based on cards. Gesture-based interfaces have arrived. Of note, the Samsung Galaxy S4 features Air View for accessing detailed information “behind” a UI element, like the e-mail behind a header (essentially a hover gesture), and Air Gesture, letting you flip between tracks or photos, or answer calls, with a bit of hand waving. Apple is filing a number of similar patents, which is important because cellphones sit at the leading edge of consumer technology, with a ubiquity that educates the masses in the usage of new technologies. To be even more predictive, automakers should be looking at patent trends in UI. So a looser guideline then: be Predictive, or even better, Upgradeable.

One interesting element of these technologies that’s valuable for automakers to draw on is that they are minimally intrusive to the driving experience. Heads-up displays are well executed by BMW, especially its turn-by-turn navigation, which is just a special case of what Google Glass offers. Ford has added a gestural control allowing you to open the Escape’s rear hatch with a mimed kick: a very special case. Our next guideline, then: be Minimally Intrusive, and that needs to apply to input as well as display.

Voice control already plays an increasingly important role in automotive UI, because keyboard-style touchscreen entry of addresses for navigation, or songs for selection, is simply distractingly unsafe design. Think driving (a large-target motion) while undertaking the small-target motions involved in hitting keys; though a keyboard with only four target zones, like SnapKeys (http://www.snapkeys.com/en/), offers an alternate solution. Where touch is used, then, hotspots need to be large and easily targeted by the driver.

One final thought: a vehicle’s navigation and entertainment functionality should gracefully degrade on the loss of internet access. UI elements like voice control cannot fail or degrade due to a lack of connectivity, which is why the stopgap of having your vehicle use your cellphone as its media and navigation center is a failing strategy. It also ignores the longevity of a vehicle compared to the lifespan of a given smartphone operating system or docking connector. The joy of something the size of a car is that it has a lot more space available to house the required computing power than a cellphone.

So, what would our vehicle’s navigation and entertainment UI look like taking these considerations into account, or at least my vision of it? Remarkably slim, and cheaper to produce overall.

Let’s draw heavily on Microsoft’s SYNC, because this is a company that has all the resources, but has failed to meld them together. SYNC is also used by Ford, which is in the middle of a renaissance of quality and so obviously has an interest in improvement. To start, let’s banish the touchscreen and replace it with a heads-up display centered in the windscreen, shared by the driver and passenger. For our gestural interface, let’s use Microsoft’s Kinect, which has a readily available SDK and hardware; similar systems are already out there, mounted on the rear-view mirror of vehicles. Using Kinect, the system could detect whether there is a driver alone, or a driver and passenger, sizing and positioning the heads-up display appropriately. One more guideline: the UI should be Adaptive. For the moment I’m going to assume the driver and passenger aren’t going to be hand talkers; we’re looking at broad strokes here.

On the Heads Up Display (HUD), we’d have major areas of functionality displayed as cards. These cards could be swiped between: Navigation, Phone, Environment, Radio, Music, Vehicle and Service Info, Overall Vehicle Entertainment Settings. Push into a card, and it reveals the next level of menu; say, Radio presets to flick between, with the last two cards being station selection for non-presets.

Being an adaptive UI, the cards would be ordered based on usage, potentially with different counts depending on the presence of a passenger or a single driver. In keeping with our minimally intrusive maxim, menus should be kept as shallow as possible; having to dig deep into a system while driving is a fail. Selecting between cards amounts to left or right flicks. Selecting a card could be accompanied by a push gesture, driving into the menu. Scrolls, by an up or down flick. Precise selection, say of a radio station, could be accomplished by a stationary select area of the scroll (think iPhone date rollers) and a push gesture. Let’s assume the hand waving doesn’t need to be an exaggerated motion, since we are in a small, consistent space for the system to identify gestures. Conveniently, we’ve accomplished Durability by minimizing actual push-the-button physical interaction.
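To make the card model concrete, here’s a minimal sketch of how the flick/push gestures and usage-based ordering could fit together. All the names here (Gesture, CardDeck, the card labels) are hypothetical illustrations, not any real automotive or Kinect API:

```python
from enum import Enum, auto

class Gesture(Enum):
    """The five basic gestures every control area shares."""
    FLICK_LEFT = auto()
    FLICK_RIGHT = auto()
    SCROLL_UP = auto()
    SCROLL_DOWN = auto()
    PUSH = auto()

class CardDeck:
    """Top-level HUD cards, reordered by usage so frequent ones surface first."""
    def __init__(self, cards):
        self.cards = list(cards)
        self.usage = {c: 0 for c in cards}
        self.index = 0

    @property
    def current(self):
        return self.cards[self.index]

    def handle(self, gesture):
        # Flicks move between cards; a push "drives into" the current card
        # and records the selection so the deck can adapt over time.
        if gesture is Gesture.FLICK_RIGHT:
            self.index = (self.index + 1) % len(self.cards)
        elif gesture is Gesture.FLICK_LEFT:
            self.index = (self.index - 1) % len(self.cards)
        elif gesture is Gesture.PUSH:
            self.usage[self.current] += 1
        return self.current

    def reorder_by_usage(self):
        # Adaptive ordering: most-used cards come first.
        self.cards.sort(key=lambda c: -self.usage[c])
        self.index = 0
```

A deck built as `CardDeck(["Navigation", "Phone", "Radio"])` starts on Navigation; a right flick lands on Phone, a push selects it, and after `reorder_by_usage()` Phone surfaces first. The point of the sketch is that the whole interaction reduces to five gesture events, whatever hardware emits them.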

That’s the premium system, obviously. For the base model, we take the general case of the gestures and apply it to steering wheel and center console controls. A left button, a right button, an up-and-down roller (or up and down buttons) in the middle, and a select button (our push gesture) would suffice. In this way we have two durable backups to the gesture interface that are also gesturally consistent. We also have four hotspots with which to use a SnapKeys-style system for navigation address entry as a backup to voice control.
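The base model’s consistency with the premium system can be sketched as a simple translation layer: each physical control emits the same abstract gesture event the camera-based system would. The button identifiers below are hypothetical, chosen only to illustrate the mapping:

```python
# Hypothetical mapping from base-model physical controls to the shared
# gesture vocabulary; the UI layer above never knows which hardware fired.
BUTTON_TO_GESTURE = {
    "wheel_left":   "flick_left",
    "wheel_right":  "flick_right",
    "roller_up":    "scroll_up",
    "roller_down":  "scroll_down",
    "wheel_select": "push",
}

def translate(button_id: str) -> str:
    """Normalize a physical control press into a gesture event name."""
    return BUTTON_TO_GESTURE[button_id]
```

Because the UI consumes only the five gesture names, the steering wheel, the console controls and the camera are interchangeable front ends to one interface.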

One point in favour of this very basic gestural interface is that it is upgradeable. As long as the basic gestures (scroll up, scroll down, flick left, flick right, push in) remain consistent in the system, the UI is upgradable at time of service.

What about the speedometer and tachometer? Some design elements are best left alone; those gauges have been evolving since Carl Benz decided looking at a horse’s ass was passé in 1886. Eventually, just as the speedo and tach found their perfect placement, navigation and entertainment systems will find theirs. The guidelines proposed here might not be the perfect path, but they offer an option to guide that evolution.


Will the Oil Sands’ growing Social Media Spill be Environmentalists’ Gain?

The Albertan Oil Sands’ rocky time with coverage generated by opponents like Neil Young and Robert Redford may be nothing compared to the social media coverage spilling from its workers on site. Can big oil use the tools available to mop up the spill?

The media spill over the Albertan Oil Sands generated by Canadian rocker Neil Young and Keystone XL naysayer Robert Redford isn’t the only mess oil companies need to clean up. Social media coming out of the Oil Sands (or, if you’re old school and haven’t taken to the rebranding, the Tar Sands) tells an increasingly off-message story. It’s a series of public updates that, without intervention, gives fuel to those who oppose the ecologically contentious extraction and processing of tar-like bitumen into oil.

Using EchoSec, a geospatial social media search platform, it’s easy for anyone and any organization to examine the social media coming out of the Fort McMurray area. What one sees is not the clean, safe development of a natural resource polished for public consumption by oil sands supporters. Instead, the raw social media stream posted by oil sands workers is unaccompanied by the traditional “clean and safe oil” government and corporate spin. Call the stream “social media bitumen” then: unrefined, un-messaged, freely posted public information awaiting processing.

Many of these are not new images. Aerial views of sweeping tracts of devastation are familiar, but social media offers a more intimate view of the oil sands.

For oil sands opposition, digging through the usual social media mix of selfies and over-tagged Instagrams could yield valuable information.

For others this view provides insight into the muck, grime and labour of day to day work in the Fort McMurray area. There is a certain honour to it — it’s honest work done by people aiming to make their lives and those of their families better.

Where the record starts to stray from family-friendly, safe oil is in the growing record of on-site incidents.


Some updates could be construed as sensitive to the security of oil sands operations: as the contention over the Oil Sands and the Keystone XL Pipeline grows, so does the probability of action against them on the part of more extreme eco-activist groups or individuals.


Such individuals are being handed a significant amount of identity information publicly posted online by workers in social media updates. Along with backtracking workers’ timelines, this could handily facilitate the creation of access credentials. Or, in a more traditional criminal application, this information could simply give identity thieves a running start.

In a very basic experiment, a partially obscured face pic from a worker’s Twitter account was searched for using Google Image Search and the worker’s username. The worker’s image was located at the top of the search results. Using Google’s ability to search for similar images effectively provided facial recognition, and yielded a full face shot and links to the worker’s Facebook account, revealing the individual’s full name and allowing for the mining of other personal pages. Using open and available tools, it’s relatively trivial to establish the work patterns, routes, and background of some workers.

Beyond the security concerns social media poses, it also gives potential workers a particular sense of what working in the oil sands is like, with companies like Suncor, Canadian Natural Resources Ltd. or Syncrude at center stage. At the very least, elements of the stream make recruitment more difficult. They also provide insight into on-the-job behaviour and safety concerns that could be construed as compromising the work and/or workers.

Workers, of course, need to be aware that social media doesn’t just let them voice thoughts on their employers; it also potentially allows more technologically savvy employers to look in on them. Even with the highly redacted EchoSec demo, oil companies like Syncrude Canada Ltd. could be actively moving to mitigate work hazards, or outright avoidance of duties, based on publicly posted information.

For oil companies, social media represents the new panopticon. Those individuals and organizations concerned with the environment can easily look inwards on a stream of images revealing the industry’s dirtier moments. Workers themselves are proving to be unwitting watchmen within, providing the raw materials for those watching. The question is: will oil companies, and those sympathetic to them, be able to use the tools available to mop up the spill?


Note: All the social media content in this post was publicly posted and openly available. I’ve made my best effort to avoid identifying specific individuals in order to protect them from repercussions.
