Quick Draw

How a custom drawing engine transformed Adobe Comp CC’s UI from static taps to dynamic gestures.

jordan kushins
Adobe Comp CC
4 min read · May 10, 2016


“Our vision for Comp has always been about speed over precision.” — Mathieu Badimon

Designer Mathieu Badimon joined the Comp team during the initial rounds of user testing on the first prototypes — back when tapping icons dropped pre-determined shapes on the screen — and watched as newbies to the app tried to figure it out. “They had a tough time navigating around, but understood the concept and were interested in using it.”

He brought friend and former colleague Phil Baudoin — an ace prototyping engineer — on board to brainstorm ways to streamline the interactions and interface, while Comp itself was still very much a work in progress. “We made the concerted decision to have a generalist app,” he says. “It’s not for a specific type of design. We knew we would include more features for higher precision later, but we wanted to establish certain guidelines from the start. Even now, though, there are a lot of unknowns; we’re still working in uncharted territory.”

“We knew gestures would look cool, but we weren’t convinced they would be useful or practical.” — Phil Baudoin

The idea of incorporating gestures into Comp had been previously tossed around, but no one knew whether they would actually work within the context of the app. Baudoin’s first order of business was creating a workable set of gestures to explore possibilities that could — potentially — modify or completely eliminate the need for users to tap on that menu of primitive shapes.

Baudoin took cues from how people express their ideas with traditional tools rather than with touchscreen software. “If someone is sketching a wireframe on a piece of paper, what are they going to draw? A couple of lines for text; an X in a box where an image would go,” he says. Those basic forms — along with squares, circles, and horizontal lines — became the foundation for the initial iteration.
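To give a sense of that shorthand, here is a minimal sketch of a gesture-to-element lookup in Python. The gesture names and element labels are assumptions made for the example, not Comp's actual code (Comp itself is a native iOS app):

```python
# Illustrative only: these names are assumptions for the example,
# not Adobe Comp's actual implementation.
from enum import Enum

class Gesture(Enum):
    RECTANGLE = "rectangle"
    X_IN_BOX = "x_in_box"
    HORIZONTAL_LINES = "horizontal_lines"
    CIRCLE = "circle"
    SINGLE_LINE = "single_line"

# Each recognized sketch stands in for a design element,
# mirroring the paper-wireframe conventions described above.
GESTURE_TO_ELEMENT = {
    Gesture.RECTANGLE: "shape: rectangle",
    Gesture.X_IN_BOX: "image placeholder",
    Gesture.HORIZONTAL_LINES: "text block",
    Gesture.CIRCLE: "shape: ellipse",
    Gesture.SINGLE_LINE: "divider rule",
}

def element_for(gesture: Gesture) -> str:
    """Look up the design element a recognized gesture produces."""
    return GESTURE_TO_ELEMENT[gesture]
```

The interesting work, of course, is upstream of this table: deciding which gesture a messy finger-drawn stroke actually is.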

Rather than creating visual mockups to imagine these dynamic concepts, Baudoin skipped straight to code, and within a week or so, they had a new working prototype. “It was one of those things where we were both surprised at how well it worked,” Badimon says. “It was totally natural. From there, it was like: We need those. To get things right we have to have gestures.”

Baudoin continued to develop the drawing engine, coming up with a series of gestures in the process. “We brainstormed dozens of different gestures that could be used to generate various design elements or different layout commands,” he says. “It became a kind of alphabet, with grammar that we could build upon.” Of course, not everything made the cut. Some experiments with text placement and alignment — say, tap to the left or right of a box to justify words to one side or the other — proved too clunky to keep.

“It was pretty easy to get gesture recognition working about two-thirds of the time. Getting it to work ninety- to ninety-five-percent of the time was way more difficult.” — Phil Baudoin

Ask ten people to draw a square, and you’ll get ten different interpretations of a very simple shape. Baudoin and Badimon had to account for the fact that everyone is going to draw the same things differently, with their various finger sizes and personal styles, on a range of devices.

In early builds, a special button was included; if a tester drew something that wasn’t recognized, they’d push it, and that sketch was sent straight to Baudoin to process. After a while, he realized he needed more data to get things right. So a switch was made, and everything that everyone drew — like, tens of thousands of drawings — was submitted. “If you’re doing machine learning or pattern recognition, you want to get as much data as possible,” Baudoin says. “That huge inventory was invaluable to help me write algorithms to process the information and perfect the system.”
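To illustrate what stroke recognition can look like at its simplest, here is a minimal template matcher in the spirit of the classic "$1 unistroke recognizer": resample each stroke to a fixed number of points, normalize position and scale, and pick the template with the smallest average point-to-point distance. This is a sketch under those assumptions, not Comp's actual algorithm:

```python
# Minimal unistroke template matcher (in the spirit of the $1 recognizer).
# Purely illustrative; not Adobe Comp's actual algorithm.
import math

N = 32  # points per normalized stroke

def resample(points, n=N):
    """Resample a stroke to n evenly spaced points along its path."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval = total / (n - 1)
    out = [points[0]]
    pts = list(points)
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval and d > 0:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # q becomes the new "previous" point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # floating-point rounding can leave us short
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Center the stroke on its centroid and scale to a unit box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = (max(p[0] for p in pts) - min(p[0] for p in pts)) or 1.0
    h = (max(p[1] for p in pts) - min(p[1] for p in pts)) or 1.0
    s = max(w, h)
    return [(x / s, y / s) for x, y in pts]

def distance(a, b):
    """Average point-to-point distance between two normalized strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """Return the name of the closest-matching template stroke."""
    probe = normalize(resample(stroke))
    return min(templates,
               key=lambda name: distance(probe, normalize(resample(templates[name]))))
```

A matcher like this hits the wall Baudoin describes: it works fine on clean strokes, but pushing accuracy from two-thirds toward ninety-five percent takes exactly the kind of large, messy, real-world dataset the team collected.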

“We’re always working to make things faster and easier.” — Phil Baudoin

Even at the official launch last year, Comp had two modes — “normal” and “drawing” — and the user had to switch back and forth between them; it was clear there was room for improvement. “Khoi was challenging us to merge these two modes into one cohesive experience,” Badimon says. In Comp 2.0, released last October, they were able to eliminate the original mode and go gestures all the way. “That second phase felt fantastic,” Badimon says. “It just felt right.”

We’re cataloguing all of Comp’s gestures and more over on our growing visual gallery — check it out for how-tos and at-a-glance guidance on navigating the app.

Comp is free (!) for Creative Cloud subscribers* — download it now for your iPhone or iPad. Not yet a Creative Cloud member? Sign up for a FREE Adobe ID today and try out the latest CC apps too.

*Android users, hold tight — we expect to have a beta release for you this summer.
