Apple Pencil: Redrawing the post-PC battle lines

Another input method finally comes into its own


Some seem to be viewing the Apple Pencil as an enterprise-focused capitulation: “Businesses want to write on stuff, so we better make a product that does this.” At the very least, I’d say that Apple’s position is “Businesses want to write on stuff, so we’ll make a product that does this better.” And before we start pointing out how the Surface did active pens first, let’s remember the trouble Windows Mobile had even though its devices were indeed among the first smartphones. Windows 10 certainly puts Microsoft in a better position than it was in when the iPhone came out, but Microsoft is only just now starting to pick up steam after the onslaught of Apple’s touch-native operating system and devices. Now the iPad Pro has drawn new battle lines by making iOS a much better experience when using a stylus. Windows Mobile came up short the last go-round by focusing on function over experience, and that appears to be the same problem the Surface has when compared to the iPad Pro. We’ll know for sure in November.

Apple didn’t make a stylus now just because businesses want one. I assert that Apple knew that the general usage of digital writing devices was an inevitability. Just as the original Mouse interaction model and, more recently, the Touch interaction model have fundamentally changed computing, using a writing implement is going to do the same. Touch hasn’t replaced mice, and neither adequately substitutes for the other. So I don’t think the Pencil will change everything for everyone, but eventually everyone will expect to poke at things with their fingers or with a Pencil and have the digital device understand and respond naturally. It’s easy to see the longing for this in the kajillion styli made for the iPad, in addition to the necessity of these kinds of tools for artistic, engineering, and creative folks. Apple just wasn’t willing to make using a writing implement part of the experience if they couldn’t make that experience fundamentally better than anything that’s come before. Well, that time seems to have come!

Let’s talk about touch as an interaction method, and use that to help us understand drawing tablets and styli. Fewer people have experience with active styli because they’ve just been too hard to use until now, but there are many parallels.

The Drawing Tablet

My first drawing tablet, the Wacom Graphire

It’s been a long time since I first used a Wacom tablet. It sent position and pressure information to my Windows 95 computer over a serial port. The active area of the digitizer was the size of a 3x5 card. The computer had 32 megabytes of RAM, ran Windows 95 with Plus!, had 2 GB of hard drive space, and had a 233 MHz Pentium II processor. It was a powerhouse!

I loved drawing on my computer. I first used Photoshop 4.0 on that machine. Eventually my enthusiasm for drawing tablets faded, though I bought several more, including an Intuos 2 with an 11-inch drawing area that seemed truly massive in the time before the iPad. I used paper less and less. I absolutely love the visceral feel of drawing on paper, but the lack of flexibility would drive me to digital sketching, and then I’d have to deal with how disconnected I felt from the lines I was making. I’ve since used a Surface Pro 2, and I own a Surface Pro 3. It has a “really good” digitizer pen, at least compared to what I’d used before. It doesn’t have all of the features of a Cintiq, but neither the Surface Pro 3 nor the Cintiq has the same feedback as drawing on paper.

I still sketch, take notes, and brainstorm on paper for my programming work. I’ve used OneNote for the flexibility of mixing images, text, hyperlinks, and tables in my notes and sketches, but its interface for navigating the “notebooks” and “pages” is just nowhere near as good as flipping through a legal pad, a spiral-bound notebook, or loose-leaf paper.

The iOS app Paper is made for the iPad and has an interface that makes it easy to navigate and pick pages and notebooks. Unfortunately, the actual experience of drawing is floaty, disconnected, and wholly unsatisfying. They’ve done a tremendous, amazing, wonderful job with their application, but the iPad’s hardware simply wasn’t made to intelligently interpret the fine-grained control that a human has over a writing utensil. It was made to accurately interpret touches by fingers, with large contact areas.

The iPad does feel magical. You poke the thing you want. This is a huge step forward for usability. It provides far more feedback than the multiple layers of indirection created by using a mouse, trackpad, trackball, or desktop drawing tablet. Those control interfaces are all useful, but the feedback provided by a touch-based tablet means that even people with little computing experience can pick up an iPad or iPhone and immediately feel they can master it. They point at the thing they want, and it happens!

The Uncanny Valley

Beyond the basic interaction of poking at things, there is almost no accuracy. Even the Surface Pro 3, with its system-wide stylus support, rarely knows what to do with a stylus besides making it act like a mouse, sort of. There’s “hovering” to point like a mouse, and a “click” if you tap the stylus on a point. Why are there 256+ levels of pressure sensitivity if the input is always treated as either on or off, like clicking a mouse? 99% of applications don’t interpret pressure data. The applications that do are usually drawing or art applications, and even these have a “canvas” area that is the only part of the application that understands pressure; the rest of the app, like the brush controls, selection tools, and color picker buttons, throws the pressure data away.

It’s an uncanny valley problem. Let’s use Siri as an example: when you first talk to her, she seems to understand your natural human language. You are delighted! You ask her more questions. More complex questions. After a couple of queries you realize that she just has the appearance of understanding, not actual understanding. Similarly, when you start using an iPad or an iPhone, you start to feel it: this piece of glass understands me!

Then you try to draw or write something. And it just… sucks. You don’t know how to describe it, but it’s disappointing. You realize that it just had the appearance of understanding your physical interaction with it. It only understands fingers, and only in this very limited way. Touching it with different objects, or trying to draw on it — even with tools designed to speak its language — fails to communicate anything but the most general nature of your input. I have owned multiple iPads, but I haven’t kept them, because they feel fundamentally disabled. They just don’t understand me! They can understand a tap, a poke, a swipe. But they cannot understand the precision of a line, the sweep of a curve or the elegance of flowing script. I can touch an iPad, and I want to draw on it, but it just doesn’t work right.

So we feel utterly disconnected from the operating system when we use it with an active stylus. Our fingers can feel that we are sending more information to the application, but it is not interpreted or used. Ask people why they don’t like using a stylus, and they probably won’t be able to articulate this. But just because they can’t describe it doesn’t mean that it isn’t the case. And the reason they stop trying to use a stylus is the same reason that some people have given up on using Siri: it’s awesome when it works, but it’s hard to predict when it will work. Active styli like those used with the Surface Pro 3 and Wacom’s Cintiq tablet monitors are much more precise instruments than a finger, but I believe the same thing that made people hate touch screens before the iPhone is what keeps people from getting excited about styli today: the technology works, but software largely throws away or mistranslates the information the device is sending.

Touch goes mainstream

Why did we start to demand touch screens when, 15 years ago, they were maligned? Because we started to make operating systems and applications that were designed with the interpretation of fingers on glass as the primary interaction method. Previously, touch screens tried to translate finger location data into mouse pointer movements, two fundamentally different ways to interact. A finger is not a mouse! The simplicity of touching the button you want is much easier to grasp than moving a separate piece of hardware an arbitrary distance (based on the mouse sensitivity and acceleration) and then clicking a mouse button. Just as a finger is not a mouse, a stylus is not a mouse. Despite this obviousness, the best almost every current app on Windows, Mac OS, or Linux can do is treat the stylus as a mouse, due to all the legacy programs that rely on mouse pointer information. This sucks, the same way that touch screens that translate finger touches and swipes into mouse movements suck.

People like pens and pencils. They are precise. A pencil doesn’t throw away your input. Varying the pressure always has a varying effect: the pencil may tear or poke through the paper, or it may just make a stronger line. You can vary the angle of your stroke to change the line width, and if you press too hard, the pencil tip will break. All of these help us to feel connected to the purpose and output of the pencil.

Making digital drawing natural

People have talked about styli with the same derision they aimed at touch screens before the iPhone and iPad. Those who speak of them positively are willing to see past their current limitations to the power a digital canvas holds. Most people aren’t able to overcome that hurdle, because the OS and the developer SDKs aren’t fundamentally designed to take advantage of the extra information a stylus can communicate, and because there’s been lots of input lag. If I am right, then the iPad Pro is aimed at addressing these problems. iOS is well suited for this kind of interpretation, especially since pressure is going to be a standard piece of information available through easy-to-use APIs, even on the iPhone.
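To make that concrete, here is a minimal Swift sketch of what “pressure as a standard piece of information” can look like at the API level. It assumes the UITouch force and tilt properties Apple has described for iOS 9 and the Pencil; the view class and the logging are purely illustrative, not code from any real app.

```swift
import UIKit

// A minimal sketch, assuming the UITouch force/tilt APIs described for iOS 9
// and the Apple Pencil. The class itself is illustrative.
class SketchView: UIView {

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }

        // Position: the only thing a mouse-style translation would keep.
        let point = touch.location(in: self)

        // Pressure: reported as `force`, normalized against `maximumPossibleForce`.
        // Hardware without pressure sensing simply reports zero.
        let pressure = touch.maximumPossibleForce > 0
            ? touch.force / touch.maximumPossibleForce
            : 0

        // How the Pencil is being held: tilt toward the screen, and the
        // direction the tip is pointing.
        let tilt = touch.altitudeAngle
        let heading = touch.azimuthAngle(in: self)

        print("point \(point), pressure \(pressure), tilt \(tilt), heading \(heading)")
    }
}
```

A mouse event carries almost none of that; an app that wants to feel like paper needs every bit of it.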

Everyone knows how to use a pencil, crayon, or pen, just as they know how to point at and touch the things they want, which means writing implements were never going to go away. They are too natural, too precise, and too helpful for creating and communicating critical kinds of information. But we needed hardware, an OS, an app SDK, and apps that are built with pressure (and precision) in mind to make them feel right in a digital setting. This could allow for much better, much faster, much more satisfying workflows in our applications. It requires new interaction design, as the Mouse and Touch did, but I think it will ultimately be a fundamental part of what it means to use a computer. Notice how the iPad Pro uses the same subsystem to understand Touch as it does to understand the Pencil. Apple doesn’t have to build two separate digitizer layers, one for touch and one for styli. That means it won’t have to add a whole new component to the other iPads to make them Pencil-compatible; it will just update the touch subsystem. Sure, Apple will pay more for an improved component, but the improvement benefits everyone, even people who never use a Pencil. The same goes for the iPhone: bringing this ability there won’t mean adding a whole extra component that most people never use, just improving the same component that also understands the Pencil.
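As a sketch of what “built with pressure (and precision) in mind” can mean at the app level, here is one way a drawing view might treat finger and Pencil input differently and let pressure drive stroke weight. The touch-type check and force properties are the UIKit pieces described for iOS 9; the class, the width mapping, and the finger/stylus split are my own illustration, not any shipping app’s code.

```swift
import UIKit

// Illustrative only: one way an app could use the same touch pipeline for
// fingers and the Pencil while giving the Pencil's extra precision a meaning.
class CanvasView: UIView {
    private var segments: [UIBezierPath] = []   // finished stroke segments
    private var lastPoint: CGPoint?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, touch.type == .stylus else { return }
        lastPoint = touch.location(in: self)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }

        // Fingers keep their familiar roles (panning, tapping); only the
        // stylus draws in this sketch.
        guard touch.type == .stylus, let start = lastPoint else { return }

        let end = touch.location(in: self)

        // Map normalized pressure onto line weight: a light touch draws a
        // hairline, a hard press draws a thick stroke.
        let pressure = touch.maximumPossibleForce > 0
            ? touch.force / touch.maximumPossibleForce
            : 0

        let segment = UIBezierPath()
        segment.move(to: start)
        segment.addLine(to: end)
        segment.lineWidth = 1 + pressure * 9    // 1pt to 10pt, purely illustrative
        segments.append(segment)

        lastPoint = end
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        UIColor.black.setStroke()
        for segment in segments {
            segment.stroke()
        }
    }
}
```

Nothing about this is exotic; it is simply an app keeping the information the hardware was already capable of sending instead of throwing it away.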

Touch is the most natural interaction method in the world. The next most natural is the Pencil.