Sketch is not a UX design tool

The popular design app for Mac is a refined evolution of the “pen-and-paper simulator”, but does not attempt to model user experiences. Even the people who built an entire React renderer just to be able to use Sketch for UX design are looking to switch away from it. The future is somewhere else.

Drawing vector graphics was one of the very first professional applications for personal computers. In those Wild West days of the late 1970s, as soon as there was a microprocessor and a display device of even bearable quality, you would also find the brave hacker trying to somehow squeeze linear algebra algorithms and detailed model data into as little as 64K of memory. Some of them succeeded, a few even made billions — my favorite startup book The Autodesk File is the unedited account of the meteoric rise of one such company.

The computer I’m typing this on has an incredible 262,144 times as much memory as those systems, but vector graphics software looks surprisingly similar. Those original drawing programs were quite literally “pen-and-paper simulators” because their output device was a plotter. If you’re of the Snapchat generation, the easiest way to explain this device is with a picture:

A Roland DXY-100 plotter from 1982

The plotter is quite simply a very primitive robot that moves a pen on paper. High-quality laser and inkjet printers were not yet available on the general market, so a plotter was the only option for getting your vector drawings out of the computer and onto a permanent medium. (You’d really want to do this. The diskettes of the era were certainly not a place to store any valuable data long-term...)

The fundamental design of vector graphics applications derives from the plotter. Your on-screen workspace is an artboard — that literal piece of paper. The pen tool represents an actual pen. The default render operation for paths is “stroke” because that’s what a pen does. Fills and gradients came later (after Adobe invented and standardized PostScript). Illustrator built on PostScript’s imaging model, but did nothing to change the paper-centric core workflow.
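The lineage is easy to see in the plotter’s own command language. As a toy illustration (a sketch assuming HPGL, the classic pen-plotter command language; the point names and shapes are invented for the example), here is all a “path” ever was: lift the pen, move to the start, drop the pen, drag it through each point.

```javascript
// Toy HPGL generator: a path is just a list of points, and the only render
// operation is "stroke" -- because dragging a pen is all the hardware can do.
function strokePath(points) {
  const [start, ...rest] = points;
  const cmds = [
    "SP1;",                                // select pen 1
    `PU${start.x},${start.y};`,            // pen up, move to the start
    ...rest.map(p => `PD${p.x},${p.y};`),  // pen down, draw to each point
    "PU;",                                 // lift the pen when done
  ];
  return cmds.join("");
}

// A 100x100 square, the way a 1980 drawing program would model it:
const square = [
  { x: 0, y: 0 }, { x: 100, y: 0 }, { x: 100, y: 100 },
  { x: 0, y: 100 }, { x: 0, y: 0 },
];
console.log(strokePath(square));
```

No fills, no gradients, no semantics: just pen movements. That is the data model today’s artboards and paths still carry underneath.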

All this seems obvious. How could it be otherwise? For one possible answer, we can look at an alternate evolutionary line of computer design tools which was not bound by the same pen-and-paper constraints: 3D modeling and rendering.

By and large, 3D is an “authentically digital” form of expression. There are CAD tools for industrial design that do model behaviors of the physical world, but as a baseline, modeling and animation tools don’t even pretend to represent anything physical. You have an infinite workspace and an infinite number of ways to combine elements. For producing images out of models, you use shaders — a genuinely digital concept derived from programming rather than from any specific physical model. (Shaders can be used to simulate physical surface materials, but also purely invented visual models — this is a good example of a major conceptual leap beyond merely simulating pen and paper.)
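To make the contrast concrete, here is a minimal sketch of the shader idea as a plain function (no graphics API assumed; the stripe pattern is invented for the example). A shader is just a function from surface coordinates to a color:

```javascript
// A toy "fragment shader": a pure function of (u, v) surface coordinates.
// It derives from programming, not from any physical drawing instrument --
// the stripes exist procedurally, at any resolution, with no pen strokes.
function stripeShader(u, v) {
  // ten vertical bands across the 0..1 surface
  const band = Math.floor(u * 10);
  return band % 2 === 0 ? [255, 255, 255] : [20, 20, 60];
}

// Evaluate it anywhere on the surface -- resolution-independent by construction.
console.log(stripeShader(0.05, 0.5)); // inside the first band
console.log(stripeShader(0.15, 0.5)); // inside the second band
```

Nothing in that function corresponds to a pen, a paper size, or an artboard; it is a model of appearance, not a simulation of a drawing tool.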

So here we are, 30 years after Illustrator first shipped, and very little has changed in this corner of the design tool space. Illustrator itself has naturally accumulated baggage over time, and recent competitors like Sketch and Figma have found success by offering a streamlined take on the same functionality… But the underlying model itself has remained the same. You start a Sketch document by selecting an artboard size, then place elements inside it; everything is built from paths which have no more intelligence than the ones used to model those plotter drawings in 1980. It’s still a pen-and-paper simulator.

A recent open source project highlights this problem. Last month, Airbnb released an interesting tool named React-sketchapp. It is a library that converts the output of React JavaScript code to Sketch documents. As news of this release spread on social media, there was widespread excitement… (“React” and “Sketch” are catnip for UI hackers today.) But there was also widespread confusion: how exactly does one use this library?

On the Hacker News discussion thread, the library’s developer Jon Gold helpfully explains its use case at Airbnb. After visual components have been implemented in code, the team can use React-sketchapp to create Sketch files containing rendered vector graphics that match the code. At a large company like Airbnb, this keeps the code as the “source of truth” and ensures designers are always working with the latest versions of the visual components.

There is an elephant in this room: why does the code need to be converted into a flat “pen-and-paper simulator” type of document so that designers can work on it? Imagine if this library were called “React-plotter”: it would produce plotter-compatible files from code, so that designers get a paper printout of the UI components and can use scissors and glue to make their designs… That sounds kind of backwards, doesn’t it? Yet the reality of flattening live components into lifeless Sketch documents is fundamentally the same.
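To see what the flattening costs, consider a toy model (hypothetical data structures, not react-sketchapp’s actual output): converting a live component tree into a static drawing keeps the geometry and styling, but the behavior has nowhere to go.

```javascript
// Hypothetical illustration: "flatten" a live component tree into the kind of
// static shape list a pen-and-paper document model can hold. Geometry and fill
// survive; props, state, and event handlers are simply dropped on the floor.
function flatten(node, offsetX = 0, offsetY = 0, out = []) {
  const x = offsetX + node.x;
  const y = offsetY + node.y;
  out.push({ type: "rect", x, y, width: node.width, height: node.height, fill: node.fill });
  for (const child of node.children || []) {
    flatten(child, x, y, out); // children are positioned relative to the parent
  }
  return out;
}

// A live "button" component: it knows how to respond to a tap...
const button = {
  x: 10, y: 10, width: 120, height: 40, fill: "#ff5a5f",
  onPress: () => console.log("tapped"), // lost in translation
  children: [{ x: 12, y: 10, width: 96, height: 20, fill: "#ffffff" }],
};

// ...but its flattened form is just two lifeless rectangles.
console.log(flatten(button));
```

Everything a designer would want to iterate on as behavior (the tap handler, the component’s props) has been stripped away, exactly as a plotter printout strips away the running program.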

I’m not criticizing React-sketchapp in any way. It’s a very clever, beautifully implemented library that solves an acute problem at Airbnb’s scale. But the author himself recognizes the problem I’ve pointed out. In the Hacker News thread linked above, he writes:

[…] This is a baby step to get us to the point where component-centric tools like Deco and Subform are good enough to realistically switch our design team to.

The people who built an entire React renderer just to be able to use Sketch for UX design are looking to switch away from it. The future is somewhere else.

The next generation of tools can’t be pen-and-paper simulators anymore. Artboards, layers and paths have had a great 40-year run, but they are not suitable metaphors for designing software. You don’t see architects planning entire buildings using Lego blocks… So why are designers in the software industry still stuck with tools that have the same (non-existent) level of modeling intelligence about user interfaces that Legos have about buildings?

I believe that the evolution of CAD tools can provide some very valuable ideas for the software industry as well. In an upcoming post, I’ll explore the state of modeling in CAD and how some of the concepts might apply to software development. One core lesson is this: UI designers will need to start thinking on a higher level — but that also means giving up some control. The CAD tools used by architects don’t focus their attention on picking precise colors of curtains and tablecloths; yet in the UX world, that degree of fussy control over details is often seen as a core focus for design work. (Not to say that attention to detail isn’t important — it is! But the purpose and form of the building has to come way before the texture of the curtains.)

For now, I’ll leave you with some links to new tools that attempt to model software UX in a designer-friendly way, including the two mentioned in the above quote from Jon Gold:

Deco
https://www.decoide.org

A component-centric IDE for React Native.

This is still a programmer’s tool. Deco doesn’t have a design UI for creating layouts, so you need to know JavaScript and React Native to make anything.

The company was recently acquired by Airbnb and development of the tool has been discontinued, but the entire Deco IDE is open source so anyone can continue work on it.

Subform
https://subformapp.com

A new design tool that points the way to the CAD-like future I’m hoping to see.

Subform is not yet available to anyone except beta testers, so it’s impossible to judge how well it actually fulfills that promise.

Another downside of Subform is that it doesn’t provide a bridge to developers. Designs are still static entities separate from code, so implementing them will require manual programming labor just as before. There also doesn’t seem to be any way to bring components implemented in code back into the design environment (similar to what React-sketchapp does for Sketch).

React Studio
https://reactstudio.com

A front-end design tool for web apps.

React Studio produces complete ReactJS projects (using Facebook’s “create-react-app” toolchain).

This code generation is the unique selling point: you don’t need to be a programmer to use React Studio, but the software also provides many hooks for developers to customize the code output through scripts and plugins. The core of React Studio is called the “Design Compiler”. Using plugins, you can also bring manually coded or modified React components into the design environment, so it’s a two-way link.

The downside is that there is a substantial learning curve: many of the concepts are foreign to designers with a traditional graphics background.

Native Studio by Neonto
neonto.com/nativestudio

A sister product to React Studio. It uses the same Design Compiler approach, but outputs Xcode and Android Studio projects instead.

Native Studio offers something called “frameworkless cross-platform mobile development”. Quite simply, it means that the tool produces native code for each platform; there is no intermediate framework or library. Traditional cross-platform solutions like Xamarin and React Native use a runtime library to translate the cross-platform code into native concepts. The Native Studio approach instead produces native code at build time. It’s a very different way of doing cross-platform development, and it requires a degree of collaboration and trust between designers and developers to be truly effective.

Any other software you think should be mentioned here? Let me know in the comments!

(You can also follow me on Twitter, if you absolutely insist!)
