Criss Moldovan
May 4, 2018

Hello world,

quick questions:

Aren’t (Web / Mobile) Designers and Developers supposed to speak the same language?

Aren’t we all in pursuit of apps / sites that look pretty and work smoothly?

Can we make the whole mobile / web building process a bit more humane?

There’s clearly a gap between designers and developers. This seems all too familiar, and to my surprise it seems we’ve been mostly trying to navigate around it instead of facing it head on.

Can we right this wrong?

About a year back, while working on a mobile app, I got stuck in a never-ending ping-pong with my designer, mainly tweaking how the app looked and behaved. Most of these changes concerned only positioning, colors, spacing and so on. Instead of spending my time implementing the real, value-bringing guts of the app, about 60% of it went into work irrelevant to the business logic, aka “pixel pushing”. Essentially, every “action” the designer performed in Sketch, I had to redo in code. That looked to me like a completely inefficient process and a paved road to frustration. And a missed deadline. Or two :)

My first thought was “I’m probably not using the right tools… there must be a way…”

Not being a designer myself, and fairly unfamiliar with Sketch, I started to look around for some solutions that could help, but all I could find were plugins or apps that were doing part of the job. So, why bother using something that would only add more complexity to an already inefficient process?

Hang on, so… what are we actually looking for?

In a nutshell this is what I would like:

  1. The designer should be able to “preview” the designs on his device / browser in real time. Not simulated, though: he should see the preview as the real deal, as if a developer had implemented it for him.
  2. When the designer draws a button, the developer should get a “Button” code snippet ready to be used in the app / site.
  3. The developer should be able to choose the snippet’s language and dialect or coding style.

It looks like there are some excellent tools to cover parts of the requirements, and they fall within two categories: prototyping tools and target specific “helpers”.

While tools such as InVision, and lately Sketch’s own built-in prototyping system, do a great job for prototyping, they only go as far as “prototyping”. They are mostly approximations or simulations of the end result, and they stop being useful right there. There’s not much more you can do with your prototypes once you have built them (apart from contemplating their beauty). They cannot be reused or further expanded toward a “real”, production-ready result. Hence you still need a developer to code the UI, so back to square one.

On the other hand, target-specific tools, such as Zeplin, do a great deal to help the developer “pick” styling information, but they rarely give the full context. Plus, they are exactly what their name says, target-specific, with hardly any configuration possible.

Another approach worth mentioning has been proposed by the team at Anima, through their Launchpad plugin for Sketch, which exports plain HTML/CSS.

While there are plenty of tools that will get you a fair way toward the goal, the designer still needs a developer in order to experience his designs in the “native world”, and a lot of manual tweaking is needed afterwards.

Then, the thought:

What would it take to capture the designer’s “input” and translate it to code, in real-time?

And this was the seed thought that brought together a bunch of techies whom I met at the 2017 JSHeroes conference in Cluj, Romania.


The shortest path we could imagine for tackling this challenge was to define the layout representation in a code-agnostic format from which, through a parser of some sort, we could generate the code.

Given that we are describing a web / mobile document’s structure, a VDOM-like model would have been the natural initial choice, coupled with some concepts found in ASTs (abstract syntax trees). We finally opted for a custom, stripped-down structure inspired by both. JSON appeared to be the format of choice for the task at the time, given that:

  • it is human-readable
  • it is supported by virtually any programming language
  • it is easy to store and transfer

JSON Intermediary Representation (JSON IR)

The proposed JSON IR consists of a tree-like structure of nodes which we’ll call elements. Each element has at least a type attribute, and several other optional attributes such as style, children, and name.

Without going into too many details, as they are to be covered in a future article, a few notes about each of the above attributes:

  • type: a string that defines the nature of the element. Regardless of whether we speak about web or mobile design & development, the building blocks are roughly the same: views, texts, images, inputs, and buttons that are eventually aggregated into more complex elements. We’d go as far as to say these building blocks are here to stay even as we move into the AR / MR world, so a generic, descriptive naming convention can be agreed upon and translated via a mapping to any target.
  • style: a JSS object. JSS has been chosen as it covers all web styling properties and can be translated to other formats, such as CSS, React styling objects, React Native StyleSheet objects, etc.
  • children: an array of elements, or a string (in the case of a simple label, for example)
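To make the idea of a type-to-target mapping concrete, here is a minimal sketch. The tag tables and the `mapType` helper below are illustrative assumptions, not the actual mapping our tools use:

```javascript
// Hypothetical mapping from generic JSON IR element types to
// target-specific primitives (illustrative values only).
const TYPE_MAPPINGS = {
  html: { view: 'div', text: 'span', image: 'img', button: 'button', input: 'input' },
  'react-native': { view: 'View', text: 'Text', image: 'Image', button: 'Button', input: 'TextInput' },
};

// Resolve a generic element type for a given target platform.
function mapType(target, type) {
  const mapping = TYPE_MAPPINGS[target];
  if (!mapping || !mapping[type]) {
    throw new Error(`No mapping for type "${type}" on target "${target}"`);
  }
  return mapping[type];
}
```

Because the IR only speaks in generic types, adding a new target is, conceptually, a matter of supplying a new mapping table.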

To make all this more visual, let’s take an example. The following image shows a basic UI made of a box that contains a label and an image:

Example of a box, a label, and an image

Its equivalent JSON IR would look like this:
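A representative JSON IR for the box, label, and image above could look like the following; the concrete values, and any attribute beyond type, name, style, and children (e.g. the image’s source), are illustrative assumptions:

```json
{
  "type": "view",
  "name": "Box",
  "style": { "width": 300, "height": 200, "backgroundColor": "#EEEEEE" },
  "children": [
    {
      "type": "text",
      "name": "Label",
      "style": { "fontSize": 16, "color": "#333333" },
      "children": "Hello world"
    },
    {
      "type": "image",
      "name": "Image",
      "style": { "width": 120, "height": 80 },
      "source": "https://example.com/image.png"
    }
  ]
}
```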

With our representation layer defined, we can now focus on actually capturing the visual information from the designer’s environment.

The Sketch Plugin (or the design source)

Sketch’s open file format allows us to read its content, but that alone does not fulfill our real-time requirement. However, Sketch allows a plugin to subscribe to events happening within its workspace and to read the properties of the Sketch objects, which we then convert to our JSON IR.
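As a rough illustration of that conversion step, here is a sketch of turning a layer into an IR element. The input shape below is a simplified stand-in, not the real Sketch plugin API, which exposes much richer objects:

```javascript
// Convert a (simplified) Sketch layer into a JSON IR element.
// The `layer` shape is a stand-in for illustration purposes only.
function layerToElement(layer) {
  const element = {
    type: layer.class === 'MSTextLayer' ? 'text' : 'view',
    name: layer.name,
    style: {
      left: layer.frame.x,
      top: layer.frame.y,
      width: layer.frame.width,
      height: layer.frame.height,
    },
  };
  if (element.type === 'text') {
    element.children = layer.stringValue; // the label's text content
  } else if (layer.layers) {
    element.children = layer.layers.map(layerToElement); // recurse into children
  }
  return element;
}
```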

So far, we have a design source (Sketch) and a real-time intermediary representation (our JSON IR). Next, we implemented a local server that acts as a storage and relay system for the JSON IR generated by the Sketch plugin.

Code Generation and Live Previewer

Once our JSON IR is persisted, we can pass it over to a code generation library or a live previewer. We have targeted React and React Native at this stage, given that they use a component-based architecture and cover two important “targets”, web and mobile, so we have built a generator and a previewer for each.
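As a toy illustration of what a generation step does, here is a naive string-based emitter for the web target; the real library is considerably more involved, and the tag table is an assumption:

```javascript
// Naive JSON IR -> JSX-string emitter (illustration only).
// Maps generic element types onto plain HTML tags for the web target.
const TAGS = { view: 'div', text: 'span', image: 'img' };

function generateJSX(element, indent = '') {
  const tag = TAGS[element.type] || 'div';
  const styleAttr = element.style ? ` style={${JSON.stringify(element.style)}}` : '';
  if (typeof element.children === 'string') {
    // Leaf text node: render the label inline.
    return `${indent}<${tag}${styleAttr}>${element.children}</${tag}>`;
  }
  if (Array.isArray(element.children) && element.children.length > 0) {
    // Container node: recurse, indenting children one level deeper.
    const inner = element.children
      .map((child) => generateJSX(child, indent + '  '))
      .join('\n');
    return `${indent}<${tag}${styleAttr}>\n${inner}\n${indent}</${tag}>`;
  }
  // Childless node (e.g. an image): emit a self-closing tag.
  return `${indent}<${tag}${styleAttr} />`;
}
```

The same IR tree fed to a different emitter (with a different tag table and styling translation) yields React Native instead of web markup, which is the whole point of keeping the representation code-agnostic.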

Below you can see a very basic flow diagram describing the JSON IR flow toward a code generator and a live previewer, with their respective outputs:

We’ll soon publish more information about how they work but, for now, let’s look at how the generated React code would look for our JSON IR:
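The originally embedded snippet is not reproduced here; generated React output for the box / label / image example might look roughly like this (component name, style values, and asset URL are illustrative assumptions):

```jsx
import React from 'react';

const styles = {
  box: { position: 'absolute', width: 300, height: 200, backgroundColor: '#EEEEEE' },
  label: { position: 'absolute', fontSize: 16, color: '#333333' },
  image: { position: 'absolute', width: 120, height: 80 },
};

const Box = () => (
  <div style={styles.box}>
    <span style={styles.label}>Hello world</span>
    <img style={styles.image} src="https://example.com/image.png" alt="" />
  </div>
);

export default Box;
```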

Alternatively, the React Native code would look like this:
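Again, the original embed is missing; a hypothetical React Native equivalent of the same IR, under the same illustrative assumptions, could be:

```jsx
import React from 'react';
import { View, Text, Image, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  box: { position: 'absolute', width: 300, height: 200, backgroundColor: '#EEEEEE' },
  label: { position: 'absolute', fontSize: 16, color: '#333333' },
  image: { position: 'absolute', width: 120, height: 80 },
});

const Box = () => (
  <View style={styles.box}>
    <Text style={styles.label}>Hello world</Text>
    <Image style={styles.image} source={{ uri: 'https://example.com/image.png' }} />
  </View>
);

export default Box;
```

Note how only the primitives and the styling container change; the structure mirrors the web version one-to-one.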

NOTE: in this example the positioning is deliberately set to absolute for the sake of simplicity. We’ll cover this topic in a future article given the complexity of the subject. Meanwhile, you can take a look at Karl’s article about Figma to React code generation.

Sneak Peek

Here’s a little demo of how this all works. In the video you can see our early stage Sketch plugin in action and how the design-source > intermediary representation > code generation > live preview flow feels.

The video is a simulation of a designer’s experience building the JSHeroes website’s menu.

On the screen:

  • on the left: our dashboard, showing the JSON IR and the generated React code side by side
  • in the top-right corner: a web live previewer
  • in the lower right: Sketch, with the Teleport plugin loaded

Wrap up

So far, we confirmed there’s a viable technological path for building real-time design-to-code experiences through which the design source and the target code can be completely decoupled.

So far, we’re able to generate React, React Native, Vue, HTML/CSS and AngularJS code.

Sure, the holy grail would be a bidirectional approach, where the JSON IR could also be generated by interpreting source code; hence we love Jon Gold’s approach in React Sketch App. We highly recommend checking out the project.

We also aim to open-source these tools ASAP via GitHub, at https://github.com/teleporthq, so stay tuned.

We’re looking forward to your thoughts and feedback, so please get in touch with us via Twitter. :)

Cheers!

P.S.: if you’re interested in learning more about our journey, you can sign up for our newsletter here.

teleporthq.io

teleporthq.io is a collaboration platform for designers and developers with design-to-code real-time capabilities

Thanks to Timofte Nicu, Paul Brie, and Alex Pausan
