Abstract approachability

OCTOBER 28TH, 2016 — POST 298

Daniel Holliday

So there’s a new MacBook Pro. A physical redesign of models last overhauled in 2012 and 2013, what Apple showed on stage yesterday with its new line of MacBook Pros will almost certainly be what we’re living with for the next 3, 4, or 5 years (and perhaps longer due to the slowing down of the PC upgrade cycle). There’ll be refreshes along the way: processors will improve with Intel, stock configurations should get better, and battery life could stretch with developments in battery technology. But for the most part, what we saw yesterday of Apple’s flagship laptop line is what we’ve got for the next while at least.

The body has been slimmed, the Force Touch trackpad is now huge, the keyboard uses the same shallow butterfly switches as the 12” MacBook, it offers Thunderbolt 3 over four USB-C ports, and the screen is brighter and poppier than the previous Retina build. For the most part, these are all consequential upgrades and — as a hunk of metal and glass to fill with steadily upgraded components over the next 4 years — they’re both expected and exactly what was needed. But you won’t hear about these things more than in passing, as context built on the way to the big story: the Touch Bar.

Some will want to draw the “Apple is an input method innovator” line — from iPod click wheel through to iPhone multitouch to Apple Watch digital crown — when talking about the Touch Bar: a touch LCD strip that sits above the keyboard in the real estate once afforded to physical function keys. Where we once had keys for brightness control, volume (in their default settings), and the legacy F1-F12 with the aid of the ‘Fn’ modifier, the Touch Bar has been designed to offer context-sensitive input arrays. Inside Microsoft Word? The Touch Bar will have toggles for bold and italic. Inside Safari? You’ll have open tabs and favourites a single touch away, rendered in a colourful tile on the strip of touchscreen above your keyboard. In Photos? You’ll be able to use the strip to rotate photos just by operating a slider!

This last feature should prompt pause. Wasn’t one of the features of the class-leading multitouch on the class-leading trackpad that’s been in MacBooks for ages meant to be exactly this? Who could forget all those MacBook ads featuring two fingers “twisting” on the trackpad to rotate photos? But now you have another way: sliding an index finger along the top of your keyboard. Bold and italics were never that hard to find, sitting predictably at the top of the editor (and trivially accessible with Cmd+B and Cmd+I respectively). And did we forget that we actually have a mouse when it comes to switching tabs? I’m having a legitimately hard time understanding what this thing is for when I — and no doubt countless others — long ago took my eyes off my fingers when working on a laptop.

But that’s because the Touch Bar isn’t for me. Nor, I would argue, is it for anyone whom one might intuitively expect to be a candidate for a computer with a “Pro” moniker and one of the highest price tags in the game. No. The Touch Bar is a symptom of a trend towards approachable interface design. To be clear, it is an input method on the order of the keyboard or trackpad, a way in which a user is more “easily” able to interface with the computer. It doesn’t, as I had hoped, provide anything new. It’s not a notification ticker (imagine getting the first line of an email — with the ability to mark read, archive, delete — along it instead of those annoying top-corner notifications). It’s not a clipboard manager (think if your 20 most recent copies were viewable — and swipe-through-able to make “active” — on the Touch Bar). And there are no micro apps for it (it’s just begging to have a single-line command-line emulator written for it, for running build scripts or manipulating files in your working directory). It’s unclear whether the API developers have access to would even allow things like these to be built, but the paradigm has been set by Apple and will be present system-wide. The paradigm — plain and simple — is to remove one or more layers of abstraction between you and the things you already had (albeit somewhat abstract) access to.
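
For a sense of what that surface looks like from the developer’s side, here is a minimal sketch of a custom Touch Bar item, assuming the AppKit NSTouchBar API Apple has published for developers. The class name, the identifier string, the button title, and the rotateTapped action are hypothetical placeholders; a real clipboard manager or command strip would obviously need far more than a single button.

```swift
import Cocoa

// Hypothetical identifier for our one custom item.
extension NSTouchBarItem.Identifier {
    static let rotatePhoto = NSTouchBarItem.Identifier("com.example.rotatePhoto")
}

class PhotoViewController: NSViewController, NSTouchBarDelegate {

    // AppKit asks the responder chain for a Touch Bar when this view has focus.
    override func makeTouchBar() -> NSTouchBar? {
        let bar = NSTouchBar()
        bar.delegate = self
        bar.defaultItemIdentifiers = [.rotatePhoto]
        return bar
    }

    // Build each item on demand: here, a plain button that triggers a rotation.
    func touchBar(_ touchBar: NSTouchBar,
                  makeItemForIdentifier identifier: NSTouchBarItem.Identifier) -> NSTouchBarItem? {
        guard identifier == .rotatePhoto else { return nil }
        let item = NSCustomTouchBarItem(identifier: identifier)
        item.view = NSButton(title: "Rotate",
                             target: self,
                             action: #selector(rotateTapped))
        return item
    }

    @objc private func rotateTapped() {
        // Rotate the currently selected photo (implementation elided).
    }
}
```

The contract is simple: declare item identifiers, hand back views on demand, and the strip renders whatever context the app decides is worth surfacing.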

This is in line with the march of input methods — every one becoming more human in its operation. The search for the next leap — whether in voice, motion, or neural lace — is motivated by the tenet of approachability. A computer ought to — through increased processor speeds and machine learning — be able to imitate an understanding of more and more human gestures. Siri is “dumb” because she doesn’t understand me (even though I have about zero capacity to understand her, she’s the dumb one). But this imitation of understanding sets up a widening buffer between us and computers, creates a “cushion of posture” that increasingly hides from the user how a computer is actually doing the things we ask of it. We no longer have reams and reams of MS-DOS commands to memorise, or documentation sitting alongside our computers as we work. We use GUIs and pointing devices that bring a more human layout to information. Trackpad gestures emulate human interactions, like swiping over to another space or pinching to zoom. But as our interface with a computer becomes less abstract for us, the cushion of posture — the work a computer has to do to imitate that it understands us — becomes fatter, and things become more abstract for the computer.

Essentially, we’re in a constant process of lengthening the strings of the marionette. For most users, the puppet at the end of those strings is invisible given their length. Computer literacy is not really computer literacy anymore but rather process memory: we understand less and less about the computer as we force it to understand more and more about us. Tech’s utopian streak states this as a goal: the lower the barrier to entry falls, the longer the marionette’s strings grow, and the better a computer is able to fake an understanding of its users, the more people are able to bring computers into their lives. But are we actively designing away a computer literacy that would seem critical to a future that is increasingly computational?

If you enjoyed this, please take the time to recommend, respond, and share this piece wherever you think people will enjoy it. All of these actions not only help this piece to be read but also let me know what kinds of things to focus on in my daily writing.

Thanks, I really appreciate it.
