What is prototyping? Common misconceptions and some clarity.

Y. A.
8 min read · Mar 16, 2024


On the highest performing teams at tech companies, prototyping is not a nice-to-have but an expectation. Most of these teams have bespoke internal tools just for prototyping, though only some make them public (e.g., Origami, from Meta). Prototyping is hard and so, naturally, few designers can do it. What's more, most products are low-interactivity and don't require much interaction work. Because of this, I think there's some lack of clarity about what prototyping actually is, and the term is often used in ways that mean different things to different people.

What does “prototyping” mean?

"Prototyping" usually just means anything that demonstrates a complete user flow in a way that simulates the actual usage of the software in some capacity. So, instead of static screens lined up side by side, a prototype shows some linear progression of a user going from point A to point B in a flow. This is usually done to explain to XFN partners how some software should work, to pitch a new idea, and so on.

This can range from simple clickable prototypes you can make in Figma (or less popular tools like Adobe XD and Axure; not sure if either is still around), through highly detailed prototypes in Origami, all the way to just straight up building the interface in SwiftUI or JS yourself.

What does prototyping mean to most people?

For most people, prototyping means a "clickable" prototype, like what you can make in Figma (or, extremely rarely, XD/Axure). These prototypes rely on simple conditionals and triggers, and they mainly do things like load another view, or hot-swap some portion of a view based on a simple trigger (a user taps on some specific element, etc.).

Figma has an additional feature called "Smart Animate" that supports simple lerps (linear interpolations), letting users transition properties of an object on the canvas (e.g., a button transitions linearly from gray to blue when a field is filled). This is very basic and, for most people and products, is just fine, as most products are low-interaction and very simple.
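Under the hood, a lerp is just a weighted blend between two values. Here's a minimal sketch in TypeScript of what a Smart-Animate-style color transition amounts to (the per-channel RGB blend is a simplification; I don't know what color space or easing Figma actually uses):

```typescript
// Linear interpolation: blend from `start` to `end` by `t` in [0, 1].
function lerp(start: number, end: number, t: number): number {
  return start + (end - start) * t;
}

// A gray-to-blue button transition, driven by how "complete" the trigger is.
type RGB = { r: number; g: number; b: number };

function lerpColor(from: RGB, to: RGB, t: number): RGB {
  return {
    r: Math.round(lerp(from.r, to.r, t)),
    g: Math.round(lerp(from.g, to.g, t)),
    b: Math.round(lerp(from.b, to.b, t)),
  };
}

const gray: RGB = { r: 128, g: 128, b: 128 };
const blue: RGB = { r: 0, g: 122, b: 255 };
console.log(lerpColor(gray, blue, 0.5)); // halfway between gray and blue
```

That's the entire trick: every "smart" transition is some property being lerped from one keyframe's value to another's.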

This is a pretty simple, clickable prototype made in Figma. This is the kind of thing you can also make in Sketch/XD/Axure, or whatever simple prototyping tool of your choice.

What does prototyping mean to other people?

For others, prototyping means building a truer-to-life interface simulation, either in something like Origami or, in the truest-to-life, something like SwiftUI or JS. This is not just for funsies (but it is funsies) — this is because the product necessitates this.

Products like Medium, Slack, or Airbnb are very simple, on the whole. They are text editors, simple messaging interfaces, and hotel-booking software that don't require a ton of interaction work: mainly simple taps, maybe some drags (e.g., duration pickers for dates), and other simple interaction models like this.

But consider your phone: it has far more complex interactions. It doesn't have a physical home button, so the way you get to home is by swiping from the bottom. It doesn't have a "task manager" app, so the way you manage running processes on your device is by swiping from the bottom while holding. You'll notice it responds to the exact acceleration of the user's finger: if you pull from the bottom very slowly, the task manager animates in just as slowly, until you feel some haptic feedback that tells you you're in task manager mode.

Instead of a standalone “task manager” app, managing processes on your device is now behind a gesture, which impacts how the phone is navigated at a core level. The haptic and gestural feedback when going into this experience and closing processes, alike, “feel” right.
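The key difference from a clickable prototype is that the animation's progress is driven directly by the finger's position rather than played on a fixed timeline. A conceptual sketch of that logic (every name and threshold here is illustrative; this is the shape of the behavior, not Apple's implementation):

```typescript
// The transition tracks the finger's y-position directly, instead of
// playing a canned, fixed-duration animation on a tap.
// All constants below are made up for illustration.

const SCREEN_HEIGHT = 844;           // e.g. an iPhone's point height
const TASK_MANAGER_THRESHOLD = 0.35; // fraction of the screen swiped up

// Map the finger's current y-position to animation progress in [0, 1].
function gestureProgress(fingerY: number): number {
  const traveled = SCREEN_HEIGHT - fingerY; // distance swiped from bottom
  return Math.min(Math.max(traveled / SCREEN_HEIGHT, 0), 1);
}

// Fire the haptic exactly once, when progress crosses the threshold.
function shouldFireHaptic(prev: number, next: number): boolean {
  return prev < TASK_MANAGER_THRESHOLD && next >= TASK_MANAGER_THRESHOLD;
}
```

Pull slowly and progress advances slowly; pull fast and it keeps up. That one-to-one mapping is what makes the gesture "feel" right, and it's exactly what tap-trigger tools can't express.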

Another example: NameDrop on iOS. You can hold your phones together and swap contact info. As you hold your phones together, a continuous wave appears that goes from phone to phone, and shows you when the process is complete. It’s beautiful, but it also functions as a loading state that clarifies what is happening across your devices and gives user feedback for when the process is under way, and when it completes. This is not just for fun — it’s not just skin deep. You can imagine a loader appearing abruptly — this can feel broken and stilted. A wave moving across your devices communicates clear progression and process status. When you’re standing around awkwardly trying to get someone’s number, this makes it way less awkward. (It is also not a minor detail that it’s beautiful — as it should be!)

NameDrop at work!

There are other interactions on your phone, but consider that the examples above are (1) core to how some functionality on your phone works (read: not decorative); (2) more complex than simple tap triggers (compare "swipe up + long press + specific y position → trigger fires" with "tapped a button → trigger fires"); and (3) deeply gestural, with substantial consequence for the core structure, or "information architecture" (as some call it), of the device, from digital to physical. Removing the physical home button in favor of a swipe gesture means the hardware is totally redesigned, and fundamental aspects of navigation on your phone have to be completely redesigned as well.

Engineers at Apple are not the ones building and simulating these features and their interaction models — designers are.

Is prototyping making animations (e.g., microinteractions)?

For people who do not prototype, this is a common view. But, as shown above, it isn't the case. Making beautiful and refined work is an important part of design's responsibilities on the highest performing teams in the world but, again, animations are just one part of prototyping. Animations have to feel high quality in order to create clear user feedback: pulling from the bottom of your phone to return home isn't abrupt and jarring, it accelerates based on the user's y position; the slow shader effect that travels from phone to phone during a NameDrop makes the process feel clearer and more intuitive.

All of these decisions add up to an experience that feels reasonable and clear, and also impact significantly how you use your phone. You’ll notice that fun checkbox animations (classic example of a microinteraction) are atypical in iOS or Android — decoration for the sake of it is not common because it can sometimes distract from quick work and create too much visual noise.

Prototyping is more than microinteraction work; in fact, microinteraction work tends to be uncommon among the most skilled prototypers. Prototyping, in its truest sense, is mainly used when you work on a highly gestural piece of software that is too difficult to explain in static images. These gestures are not fun, optional things: they are core to how the software works (again, consider Tinder's left/right swipe gesture. It's fun, sure, but it has fundamental consequences for how the app is used and how dating on these apps works today).

This kind of complex prototyping (complex triggers and all) borrows a concept from engineering: signal processing. It involves taking signals that fire from various triggers you're watching, then synthesizing them through logic gates that you use to trigger other things.

You can see an example of some fairly complicated signal processing here. I’m watching for an element (“Card 1”) to emit a pulse when “Downed,” and also emit a pulse when it isn’t — synthesized together into a logic gate that emits yet another pulse, and then synthesized with a “Drag” watcher, which I use to pull the dimensions from a 2D translation.
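In code terms, a patch graph like that boils down to boolean signals flowing through gates, with one watcher's output feeding another's input. A rough TypeScript sketch of the "Downed + Drag → position" wiring described above (the names are mine, not Origami's actual patches):

```typescript
// Rough model of Origami-style signal processing: watchers emit boolean
// signals every frame, and gates synthesize them into new signals.
// Names and structure here are illustrative, not Origami's API.

type Signal = boolean;

const and = (a: Signal, b: Signal): Signal => a && b;
const not = (a: Signal): Signal => !a;

interface DragState {
  dragging: Signal;
  translation: { x: number; y: number }; // 2D translation from the Drag watcher
}

// "Card 1 is downed AND being dragged" → drive the card's position
// from the drag's 2D translation; otherwise the card rests at origin.
function cardPosition(downed: Signal, drag: DragState) {
  const active = and(downed, drag.dragging);
  return active ? drag.translation : { x: 0, y: 0 };
}
```

Each frame, the watchers re-emit, the gates re-evaluate, and whatever is downstream updates. That continuous re-evaluation is what separates this from a Figma-style "on tap, go to frame 2."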

Or, consider TikTok: gone are the days of simple taps to bring up a video to watch; now, watching on TikTok is all behind a flick gesture. This decision to make watching more gestural had a substantial, core impact on its adoption and usage, making it one of the few breakout apps in an era when breakouts have become vanishingly rare. It's a mistake to believe that things that are fun, refined, elegant, or beautiful are "frivolous," or that this is all a "function of visual design" or "production design." A simple gesture completely transformed the watching experience on TikTok. Simplicity that hides complexity is deceptive; it can be lost on those with little prototyping experience, too.

It’s a fun experience that reduces all the friction behind what to watch. It’s as simple as changing the channel back in days of yore!

TLDR

For most, "prototyping" means simple conditionals set up in Figma/XD/Sketch/Axure. In Figma, if statements happen at the artboard level; in Axure, you can also have if statements reload just parts of an artboard. Either way, the mechanics and limitations are essentially the same, and neither provides complex conditionals allowing for things like Tinder card drags, or animations running asynchronously alongside other triggered elements (e.g., something continuously rotates while a user drags something else). This is just fine for most software, even Medium: it's a simple web/native app that mainly works with simple tap targets.

Even something like a drag gesture for repositioning in a user's story, while a talking head bounces (demonstrating that a user is talking!), can't be done in something like Figma/XD/Axure. The dragging happens dynamically while a random seed feeds into the scale of the avatar, and the user can tap to completely swap the orientation.
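What makes this impossible in clickable tools is that several independent drivers update the same element simultaneously, every frame. Roughly sketched below, with the drag driving position while a seeded noise value drives scale (nothing here is a real framework API; all values are illustrative):

```typescript
// Two independent drivers updating one avatar each frame: a drag gesture
// sets position while a seeded pseudo-random value sets scale.
// This is the shape of the logic, not any particular framework's API.

interface Avatar {
  x: number;
  y: number;
  scale: number;
}

// A seeded pseudo-random source, standing in for a talking/audio signal.
function seededNoise(seed: number, frame: number): number {
  const v = Math.sin(seed * 1000 + frame) * 10000;
  return v - Math.floor(v); // deterministic value in [0, 1)
}

function updateAvatar(dragX: number, dragY: number, seed: number, frame: number): Avatar {
  return {
    x: dragX,                                  // driven by the drag...
    y: dragY,
    scale: 1 + 0.2 * seededNoise(seed, frame), // ...while this runs concurrently
  };
}
```

A clickable prototype has one trigger firing one response; here, two unrelated inputs write to the same view at once, which is exactly the "asynchronously running animations alongside other triggered elements" case above.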

But, as you go deeper into your career and work on highly gestural software (e.g., Instagram stories, reels, iOS, macOS, etc.), you can’t get away with simple, clickable prototypes anymore.

Notice that the position, rotation, and scale attributes change based on the user’s drag position. Also notice that hint text is toggling while the mic button pop animates in. None of this is possible in Figma/Axure/etc.

In my own work, I’ve found that it’s impossible not to prototype, given that I work on very heavily gestural products, and I tend to select for products that require this kind of work. The “information architecture” and other core aspects of this kind of software are inextricable from interaction work — there are no “Motion Designers” I can rely on to do this work for me, it’s all me, as it’s core to the feature’s functionality.

This is the same feature, but the “information architecture” of this view is completely impacted by the difference of a single interaction model (Tinder swipe versus TikTok swipe).

At some point, you need to start crossing into engineering territory — whether you like it or not! As a designer friend over at Apple Maps said: “the job of the designer doesn’t end in [Figma].”

Making audio consumption experiences better involves things like on-the-fly subtitles. Hard to communicate how important that is to the listening experience without showing it. Also in this prototype is a bouncing head that makes clear to the user who’s talking — this is in sync with audio spikes, and is running while the subtitles reveal themselves. Not shown here is the ability to scroll this while this all happens. Impossible to do in Figma/etc.

FAQs

Is Figma/Axure/XD/Sketch prototyping an example of prototyping?

Mentioned earlier! Yes, but the simplest possible kind of prototyping. As mentioned, this will do for most software products, as most software is not gestural/interaction-heavy, and that's totally fine. This is especially true if you mainly work in e-commerce, web apps, etc. The more you get into mobile territory, and the more you get into entertainment/social, the more you lose the ability to use simple prototypes.

Is Figma/Axure/XD/Sketch prototyping an example of programming?

No! They do offer conditionals (e.g., if statements and some stored values), and this is something all programming languages allow for, but conditionals alone are not what establishes something as programming.

Where can I learn to prototype?

Most prototyping tools at the highest performing teams are internal and, therefore, not externally available. But Meta externalized theirs (Origami). You can also just go straight to rolling your own prototypes in SwiftUI or JS, if you like. This is probably better long-term.

Should my company be encouraging prototyping?

For most companies, the answer is likely no. But some exceptions can be made under these conditions:

  • You want to cultivate a “high-design” design brand
  • Your software is very gestural and difficult to get right without simulating complex gestures
  • You generally care about the small stuff

If you work on a webapp, or e-commerce, etc., you won’t really need this, and it’s probably overkill unless something in the above list is true.
