Reconsidering the Hardware Kindle Interface
Small tweaks to incentivize engaging with a book’s content.
I’ve been using Kindles on and off ever since they launched. Our relationship has been contentious but I’ve always been seduced or re-seduced by their potential. At their best, they are beautiful devices. At their worst, infuriating. They are always so close to being better than they are.
Initially they didn’t have touch screens, but Kindle.app on iOS did. The iOS app worked in its own funny way, adopting its own interaction model. An analog to that model found its way to hardware Kindles. I think this was a mistake.
What is the iOS Kindle interaction model? It is the “hidden spaces” model: all active interface elements are invisible. And this model is supremely user antagonistic.
There are no affordances to the taps. No edges to the active areas. Nothing to hint at what might happen. This creates what I call a “brittle” interface — where one wrong tap sends you careening in an unknown direction, without knowing why or how you got there.
Tap most of the screen to go forward a page. Tap the left edge to go back. Tap the top-ish area to open the menu. Tap yet another secret top-right area to bookmark. This model co-opts the physical space of the page to do too much.
The problem is that the text is also an interface element. But it’s a lower-level element, activated only through a longer tap. In essence, the Kindle hardware and software team has decided to “function stack” multiple layers of interface onto the same plane.
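The stacked zones described above can be sketched as a single dispatch function. This is a hypothetical reconstruction for illustration only — the zone boundaries and the long-press threshold are invented, not Amazon’s actual values:

```python
# Hypothetical sketch of the current "hidden spaces" tap dispatch.
# All thresholds and zone sizes here are invented for illustration.

def dispatch_tap(x: float, y: float, duration_ms: int,
                 width: float, height: float) -> str:
    """Map a raw tap to an action, current-Kindle style."""
    if duration_ms > 500:              # only a long press reaches the text itself
        return "select_text"
    if y < height * 0.15:              # invisible top strip
        if x > width * 0.85:
            return "bookmark"          # secret top-right corner
        return "open_menu"
    if x < width * 0.15:               # invisible left edge
        return "page_back"
    return "page_forward"              # everything else turns the page
```

Note that every tap must land somewhere: a miss by a few pixels silently becomes a different action, which is the brittleness in code form.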
And so this model has never felt right.
When is a generic hardware bucket great? When the objects placed into it are unpredictable. And more so when the purpose of the objects is unpredictable. Hardware buttons inextricably tie you to a specific interaction model. So for the iPhone to be a flexible container into which anything can be poured, it makes the most sense to have (almost) no hardware controls.
But the hardware Kindle? Oh, what a wonderful gift for Amazon designers. The Kindle is predictable! We know what we’re getting on almost every page. And the actions of the user are so strictly defined — turn page, highlight, go back to library — that you can build in hardware buttons to do a lot of heavy lifting. And yet! Amazon seems to ignore (to greater or lesser degrees, depending on the device) how predictable a hardware Kindle is.
My ideal Kindle interaction model isn’t too different from the current one. But the sanding away of small frictions pays huge dividends in interfaces. As a user performs the same actions hundreds or thousands of times, they feel those marginal changes. And even if they don’t understand them, you must believe they appreciate them.
My ideal Kindle model is as follows, with hardware buttons for:
- Page forward
- Page back
What does this get us?
It means we can now assume that — when inside a book — any tap on the screen is explicitly to interact with content: text or images within the text. This makes the content a first-class object in the interaction model. Right now it’s secondary, engaged only if you tap and hold long enough on the screen. Otherwise, page turns and menu invocations take precedence.
What benefit comes of making the content of the book a first-class object? It removes the brittleness of the current interaction model. Currently — when you tap — you might invoke a menu, a page turn, a bookmark, or a highlight. Meta actions sit on a layer above content interactions. But a Kindle is just a content container. And so this feels upside down.
Touchscreens work best when they allow direct and explicit engagement with the objects on the screen.
If the content of the book were the only screen object, a tap on a word would instantly bring up the dictionary. A drag would highlight. A single tap on an image would zoom in. Suddenly the text is alive and present. Your interaction with it? Thoughtless. Confident. No false taps. No accidental page turns. No accidental bookmarks. This also simplifies the logic of the touch engine watching for taps in the background, making these interactions faster and the programmatic logic simpler.
When content becomes the first-class object, every interaction is suddenly bounded and clear. Want the menu? Press the (currently non-existent) menu button towards the top of the Kindle. Want to turn the page? Press the page turn button. Want to interact with the text? Touch it. Nothing is “hidden.” There is no need to discover interactions. And because each interaction is clear, it invites more exploration and play without worrying about losing your place.
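Under this proposed model, the dispatcher collapses: hardware buttons handle navigation and meta actions, so every screen touch addresses content. Again a hypothetical sketch — the button and gesture names are illustrative, not any real Kindle API:

```python
# Hypothetical sketch of the proposed model: physical buttons for
# navigation, touches only ever address content. Names are illustrative.

def on_button(button: str) -> str:
    """Navigation and meta actions live on hardware buttons."""
    return {"forward": "page_forward",
            "back": "page_back",
            "menu": "open_menu"}[button]

def on_touch(gesture: str) -> str:
    """Every touch targets content: no zones, no timers, no false taps."""
    return {"tap_word": "show_dictionary",
            "drag": "highlight",
            "tap_image": "zoom_image"}.get(gesture, "ignore")
```

With no overlapping tap zones, an unrecognized touch can simply be ignored rather than guessed at — the “one wrong tap sends you careening” failure mode disappears.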
The design of hardware Kindles has confounded me for years. They nailed it with the slate keyboard edition. Since then it feels like a cascade of misguided optimizations — thinner (and yet wider: the Oasis fits into almost no pocket and can’t easily be held in one hand), fake buttons (the weird non-buttons of the old Voyage), beautiful leather cases with too-weak magnets that don’t stay attached, etcetera.
Meanwhile, the interaction model is still as brittle and undiscoverable as it’s ever been.
A hardware Kindle is essentially a single-purpose device. With worldwide 3G connectivity it verges on magical. Yet Amazon does its best to limit the connectivity — pushing it to all but the most expensive tier — and complicate the interface on what should be one of the simplest, most intuitive electronic devices on the market.
No, the current Kindle isn’t that complicated. But it’s one degree more complicated than it need be.
I’ve been using Kindles on and off since they launched. I’d love to see the next software and hardware iteration finally embrace what they are: Containers for books.
A footnote on the iOS app: the hidden spaces there are also unnecessary. iOS handles swipes so beautifully that swipes should be for page turns, taps for interacting with content, and pinches for moving out into a more macro view.