Coded Illustrations

zach lieberman
MIT MEDIA LAB
9 min read · Mar 18, 2021

One of the more exciting calls I get as an artist working in the field of new media is for editorial illustration work. It’s rare. I mostly do generative design as a commercial practice. I don’t have a background in illustration or even a portfolio of this kind of work, so I am always excited when an art director looks at my mostly abstract and gestural animations and inquires about a possible connection. I really enjoy brainstorming and thinking about what images or animation would help bring an article to life.

I am a teacher — at MIT Media Lab, where I help run the Future Sketches group, and at the School for Poetic Computation, and one of the things I teach is a practice of computational sketching. I love to see students engaging with code — seeing computation as a malleable medium they can use for art and design — but one thing I always see students asking is, “how can I use this professionally?” It’s nice to teach young people how to make art and poetry, but also there’s a need to show the practical side. I am always thankful when I can show how you can use this medium to buy bread.

One of the biggest challenges for code-based artists is figuring out how to interface with traditional workflows. How can we export images and videos at resolutions and in formats that work? In addition, we are building tools as we build our art. This is both a gift and a curse: a gift in that we can often do things that are hard or impossible with traditional tools, but a curse in that the tool-building part of our work can be really time consuming. Imagine if every time you went to cook a meal you also had to construct the pots and pans for cooking.

To help explain how I’ve used code for illustration work, I want to discuss several projects and give a bit of an aesthetic and technical breakdown.

This week I did the cover for The New York Times Magazine. To be totally honest, doing the cover art for the magazine has always been a dream of mine, ever since I saw the John Maeda NEW cover.

John Maeda’s NEW cover (1999)

The cover I did this week was for an article about Clearview AI and face recognition, written by Kashmir Hill. The art director, Annie Jen, asked me to send some ideas, and I responded with a keynote deck of animations around fragmentary faces — it feels like these companies are building unsettling portraits of us made from disparate pieces. One idea we both gravitated to was a sketch I made in 2016 where you see the face split across 68 windows. This sketch was inspired by Raven Kwok, who has made beautiful sketches involving multiple animating windows.

https://www.instagram.com/p/BFA8AJapNve/

Typical face tracking software gives you a number of feature points — essentially landmarks that tell you where someone’s nose, mouth, eyes, and chin are.

For this sketch I took those 68 points and used the data to control 68 actual windows, and the visual form I drew was based on my webcam. My 6-year-old found it a lot of fun:

fun times!
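The core idea is simple to sketch out. Here is a minimal, illustrative Python version (the real sketch was an openFrameworks app, and the landmark data here is made up — a tracker supplying the standard 68-point model would provide it per frame):

```python
# Illustrative sketch: map normalized face landmarks (0..1) to small
# window rectangles on a screen, one window per landmark. The landmark
# values below are made up; a real face tracker would supply 68 of
# them for every frame.

SCREEN_W, SCREEN_H = 1920, 1080
WINDOW_SIZE = 120  # width/height of each small window, in pixels

def landmark_to_window(pt):
    """Center a WINDOW_SIZE square on a normalized (x, y) landmark."""
    x, y = pt
    cx, cy = x * SCREEN_W, y * SCREEN_H
    return (int(cx - WINDOW_SIZE / 2), int(cy - WINDOW_SIZE / 2),
            WINDOW_SIZE, WINDOW_SIZE)

# Two stand-in landmarks: roughly the nose tip and a mouth corner.
landmarks = [(0.5, 0.55), (0.42, 0.7)]
windows = [landmark_to_window(p) for p in landmarks]
```

In the actual sketch each of these rectangles was a real OS window, repositioned every frame as the tracking data updated.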

As we aligned on the sketch as a potential approach, the next step for us was to figure out how to use it. The Times commissioned photo and film work, and I spent a Friday reviewing the footage and giving notes about camera position and lighting. Face tracking is notoriously fickle (turn too far to the right or left and you lose the face), so it was helpful to give feedback on the shoot. I built an openFrameworks app that allowed me to drag and drop video and images, and helped visualize what the face might look like on the cover.

the app allowed control over square size and image scale

A stylistic decision was made to drop the window “chrome” but keep the vibe of OS X windows by adding a drop shadow. Transparency and shadows can be a bit tricky in OpenGL contexts. I recently spent days trying to get transparent video out for a client (which was brutal!). I also spent some time creating drop shadows in OF for this project, which was a fun challenge (I leaned heavily on this addon).

some tests of drop shadow, which is not easy in OpenGL! I wish I had canvas

In the end, my solution, which proved very flexible, was to generate each window as its own PNG and then, using a tool called ImageMagick, stitch them together into a PSD, where drop shadows could be easily applied. For someone who typically works with realtime software, it was a time-consuming process, but it led to really flexible artwork that could be adjusted and manipulated by the art direction team at the Times.

Export individual PNGs, stitch into a PSD, and add drop shadows in Photoshop
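A rough sketch of what that stitching step can look like, assuming filenames and settings of my own invention (not the actual pipeline): ImageMagick writes one PSD layer per input image, and a common recipe prepends a flattened clone of all layers so Photoshop gets the composite it expects as the first layer.

```python
# Illustrative sketch: assemble per-window PNGs into a layered PSD
# with ImageMagick. Filenames here are made up. The parenthesized
# sub-command clones every layer, flattens the clones, and inserts
# the result at position 0 as the PSD's composite layer.

import subprocess

png_layers = [f"window_{i:02d}.png" for i in range(68)]

cmd = (["magick"] + png_layers
       + ["(", "-clone", "0--1", "-background", "none", "-flatten", ")",
          "-insert", "0", "cover_windows.psd"])

# subprocess.run(cmd, check=True)  # uncomment with ImageMagick installed
```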

And this is how the work appears in the magazine:

In addition to the cover, opener, and secondary artworks, we also discussed making an interactive for the web. I don’t do a ton of web programming, so I offered to help with data, and was excited to be reunited with Jacky Myint, who I went to graduate school with at Parsons, and Kate LaRue, who commissioned earlier work from my partner Molmol and me for 538. I exported JSON data with tracking and normalized face positions, built an OF app to show how this data could be used, and Jacky went to town. It’s always cool to see your work get translated from one language to another.

We really wanted to see these squares in motion, like my original sketch. The JavaScript code plays the movie, parses the saved JSON data, and controls the scale (how zoomed in or out the rectangles are) based on your scrolling position, so you zoom into or out of faces as you read the text. What’s different from print is you can see the shakiness of the tracking and how the movement can abstract the face. Here’s a link to the article.
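To give a sense of the handoff, here is a small sketch of what that exported data and the scroll-to-scale mapping could look like. The field names and scale range are my own stand-ins, not the actual format (and the real scroll code was JavaScript):

```python
# Illustrative sketch: per-frame JSON with normalized face positions,
# plus a function mapping a 0..1 scroll position to a zoom scale.
# Field names and the scale range are assumptions for illustration.

import json

exported = json.dumps({
    "frames": [
        {"time": 0.0,   "landmarks": [[0.5, 0.55], [0.42, 0.7]]},
        {"time": 0.033, "landmarks": [[0.51, 0.55], [0.43, 0.71]]},
    ]
})

data = json.loads(exported)

def scroll_to_scale(scroll, min_scale=0.25, max_scale=4.0):
    """Linearly interpolate a clamped 0..1 scroll position to a zoom scale."""
    scroll = max(0.0, min(1.0, scroll))
    return min_scale + (max_scale - min_scale) * scroll

# Halfway down the article, the rectangles sit at a middling zoom.
scale = scroll_to_scale(0.5)
```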

Another New York Times project that I’m incredibly proud of is artwork I made for an article on opioid addiction written by Shreeya Sinha. I was approached by Rumsey Taylor about helping with illustration work and invited him to come by my studio. He explained that the Times is interested in doing more “explainer” / “wikipedia”-style articles that they can link back to when there’s a new story around a given topic. In this case, the explainer was about the different stages of addiction and what your body feels like as it goes through them. We looked at a variety of animations and he was quite excited about all of my body-based sketches.

An idea was hatched to commission a dancer, Bailey Anglin, to interpret through dance the quotes from this article; she was filmed by Leslye Davis. Shreeya Sinha and Jennifer Harlan interviewed a dozen former opioid users and their families, and it’s their words and stories that inspired the movement.

I then wrote software to analyze the movement, and to visualize and transform the body.

It was extremely important to me that the visual forms really respond to the language and words from the interviews. It’s easy to make pop-y, cool, weird things with the body, but could I match the quality of the movements and express what it feels like to relapse or take medication? The page itself is a scrolling article, and as you move through the different stages of addiction you see visual forms to represent the stages.

Here are all the animations I made — I do encourage you to see this in the context of the article, but in case it’s helpful I’ve stitched them all together here:

To create this I made an openFrameworks app to track the movement of the dancer. I exported high resolution images which were stitched together to make videos.
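The frames-to-video step can be done with a tool like ffmpeg. Here is an illustrative sketch with made-up paths and settings, not the actual pipeline:

```python
# Illustrative sketch: stitch numbered high-resolution frames into a
# video with ffmpeg. The frame pattern, framerate, codec choice, and
# output name are all assumptions for illustration.

import subprocess

frame_pattern = "frames/frame_%05d.png"
fps = 60

cmd = ["ffmpeg", "-framerate", str(fps), "-i", frame_pattern,
       "-c:v", "libx264", "-pix_fmt", "yuv420p", "dancer.mp4"]

# subprocess.run(cmd, check=True)  # uncomment with ffmpeg installed
```

Rendering offline like this, rather than capturing the realtime app, is what makes arbitrarily high resolutions possible for print and video deliverables.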

One thing I love about this project is that we also did a two-page spread in the paper. I was so happy to run around Brooklyn buying every copy I could find.

If it’s helpful, the Times did a nice write-up on the process.

In general I’m really excited about the field of creative code and illustration, and happy to see more art directors reaching out to our community for work. Beyond the John Maeda cover I referenced earlier, I want to give a shout out to some artists working in this space — Adam Ferris and Yoshi Sodeoka come to mind. If you are working with code + illustration, let me know, I’d love to assemble a list — my sincere hope is that in addition to traditional physical and digital approaches, we see more illustration and editorial work from the p5, p5js, OF and other creative coding communities.

Additionally, I just want to quickly thank Annie Jen, Rumsey Taylor and the larger New York Times team for the invitations to be involved in this kind of work.
