
Near Future Web Device Design Challenges

While much of the dust has settled, there is more on the horizon.

Smartphones are immensely popular: more than two-thirds of Americans own one, and more iPhones are sold globally each day (roughly 378k) than babies are born (roughly 371k), even though iOS represents only about 14% of the global market. These are staggering numbers, especially when we recall that the iPhone debuted in 2007.

As web designers and developers, we have for years focused a great deal of learning and effort on addressing various screen sizes and device cases: Progressive Enhancement vs. Graceful Degradation, static vs. fluid widths, responsive and adaptive layouts, Flexbox, JS grids, CSS Grid, frameworks, touch, peripherals, etc.

All of this has been valuable and necessary to grow our collective understanding of the web medium, and while much of the dust has settled, there is more on the horizon.

Web devices in flux

In our web designs, we have targeted “mobile devices” (which I refer to as “handheld” and wrote about in a previous article), tablets, desktops, laptops, phablets, ad nauseam.

While we treated iOS and Android devices as touch-capable, we simultaneously discounted (or simply ignored) the possibility that laptops would one day support touch (e.g. the Microsoft Surface). Some devices we previously called laptops can now be used as tablets via removable screens, and increasingly our “desktop” devices support touch interfaces as well.

I feel that, in general, we’re wrapping our arms around all of that pretty elegantly, with designs that commonly consider various screen sizes (irrespective of device), screen orientations, and interfaces (touch vs. peripheral, sometimes concurrently). But what might be next?
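As a concrete example of how we handle that last point today: CSS media features like `pointer` and `hover` let us detect the interaction model directly instead of guessing it from screen size. A minimal sketch — the mode names here are my own illustrative taxonomy, not a standard:

```typescript
// Sketch: classify the active interaction mode from the CSS `pointer`
// and `hover` media features (as exposed by window.matchMedia in a
// browser). The mode names are illustrative assumptions, not a standard.
type PointerFeature = "coarse" | "fine" | "none";

function interactionMode(pointer: PointerFeature, hover: boolean): string {
  if (pointer === "none") return "keyboard-or-remote"; // e.g. TV remotes
  if (pointer === "coarse") {
    return hover ? "remote-pointer" : "touch"; // fingers rarely "hover"
  }
  return hover ? "mouse" : "stylus";
}

// In a browser, the inputs would come from media queries, e.g.:
// const pointer = matchMedia("(pointer: coarse)").matches ? "coarse" : "fine";
// const hover = matchMedia("(hover: hover)").matches;
```

Because devices can expose several input mechanisms at once, the spec also provides `any-pointer`/`any-hover` for querying all of them rather than just the primary one.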


Smarter Touch/Physical Gesture TVs

Even if your “TV” display isn’t internet connected, there are countless ways to utilize your largest screen for computing: game consoles, computers, and plug-in peripherals like Chromecast, Fire Stick, Apple TV, Roku, etc. Increasingly, our TVs themselves will have these capabilities built in and could well support touch. We can already use our smartphones to control various apps or devices on our TVs, or even control them with physical gestures.

Shared Displays

I’d bet most of us regularly use multiple displays with a single device. Right now, I’m writing this on a 24" display attached to my MacBook, which itself is displaying related research. But how often might we expect multiple devices sharing a single display (e.g. mirroring your smartphone on your desktop or TV, split screening, etc.), or a single display “stretched” across multiple devices? What design challenges might these scenarios bring (e.g. proximity of devices to one another, or multiple simultaneous device interfaces during a single app session)?
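The web platform is already inching toward this: the Presentation API lets a page request that a URL be shown on a secondary display (e.g. casting to a TV). A hedged sketch of feature-detecting it — the fallback message and function name are my own; only `navigator.presentation` and `PresentationRequest` are real platform API, and browser support remains limited:

```typescript
// Sketch: feature-detect the Presentation API, which lets a page ask
// the browser to render a URL on a second screen (e.g. a smart TV).
// Function name and returned strings are illustrative, not platform API.
function secondaryDisplaySupport(): string {
  const nav = (globalThis as any).navigator;
  const hasApi =
    nav != null &&
    "presentation" in nav &&
    typeof (globalThis as any).PresentationRequest === "function";
  return hasApi
    ? "Presentation API available: can request a second display"
    : "no second-display API: fall back to a single-screen layout";
}

// In a supporting browser, usage would look roughly like:
// const request = new PresentationRequest(["/tv-view.html"]);
// request.start().then(connection => connection.send("hello, big screen"));
```

Designing for this means treating the second screen as a separate, concurrently active interface — exactly the multi-interface challenge raised above.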

One display stretched over multiple iPhone devices

I previously talked about a distinction between “mobile” and “handheld” — my big distinction is that “mobile” refers to a use case, not a device class. From a user experience point of view, irrespective of device, a user might be using their device while physically moving around the city (the mobile use case), or while stationary at home on the couch, binge-watching Netflix via their smart TV. The mobile use case presents different UX challenges to solve for than the stationary one does; for example, you probably wouldn’t whip out your iPad at airport security to scan your boarding pass — that’s a distinctly “mobile” use case. It’s a subtle but, I think, important distinction.

I certainly would never use my desktop device in line at Starbucks to pay for my espresso, right? Maybe soon we will…

Multiple Use Case Devices

We’ve been on the cusp of commercially viable roll-up displays for several years; it’s just a matter of time before they become mainstream:

In the coming years we’ll be designing for roll-up (maybe foldable), touch-interface computing devices. It’s easy to envision such a device being used at various screen sizes: rolled out a third of the way for reading news on your commute, or folded in half so you can write an email on one side while your kids watch Frozen on the other.

All replaced by a 6 oz device in our pocket.

These new devices could easily end up being our primary devices for most of the tasks we currently spread across several devices. Imagine the following scenario:

You’re at work using your roll-up computing device with a wireless keyboard, mouse, and second monitor. You roll it up and take it with you to a meeting, where you unroll it on the conference table to take notes via its soft keys. When it’s time to go home, you tuck it into your coat pocket and board a packed commuter train, where you unroll it just enough to read the day’s news in one hand before reaching your station and tapping it to buy flowers while it’s again completely rolled.

In the above (near future?) scenario, we have used a single device across multiple use cases, at several screen sizes, and with varied interfaces — and that doesn’t even touch the multi-user/shared-display cases or the physical-gesture/remote-touch interfaces I mentioned earlier.
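If such devices arrive, our layouts will need to re-evaluate their mode continuously as the screen is physically resized, not just on page load. A toy sketch of that idea — the breakpoints and mode names are invented for illustration, and `applyLayout` is a placeholder:

```typescript
// Toy sketch: pick a layout mode as a roll-up screen changes physical
// width. Breakpoints and mode names are invented for illustration.
type LayoutMode = "glanceable" | "reader" | "split" | "desktop";

function layoutModeFor(widthPx: number): LayoutMode {
  if (widthPx < 200) return "glanceable"; // fully rolled: payments, alerts
  if (widthPx < 600) return "reader";     // one-handed news on the train
  if (widthPx < 1200) return "split";     // email beside a movie
  return "desktop";                       // fully unrolled at a desk
}

// A browser would re-run this whenever the viewport changes
// (applyLayout is a hypothetical placeholder):
// window.addEventListener("resize", () =>
//   applyLayout(layoutModeFor(window.innerWidth)));
```

The point isn’t these particular numbers; it’s that “which device is this?” stops being a meaningful question when one device spans the whole range within a single session.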

Are We Ready?

Frankly, we’ll just have to make ourselves ready — which is entirely my point here. Some of the web products we’re building today may not launch for another year or two, and whatever we’re creating should certainly be expected to last. So let’s ask ourselves: what devices or use cases that don’t yet exist should we plan for?
