#Art4GlobalGoals’ digital paintbrush: A technical breakdown

polyclick
Published in DENKWERK STORIES · 10 min read · Jun 19, 2018

In my latest project with denkwerk, I helped create an interactive website for the #Art4GlobalGoals awareness campaign and travelling exhibition featuring the famous German artist Leon Löwentraut. Not only can site visitors learn about the 17 Global Goals and explore Leon’s artistic interpretations of them, but they can physically strike through each issue with a digital brush, thereby “signing” an online petition and joining a global movement to raise awareness about the most pressing global issues.

Brushing through an issue, revealing the underlying art piece.

Great cause, but how do you translate this symbolic movement into digital space? Challenge accepted! With the encouragement of the denkwerk designers, I set out to push the boundaries of creative coding and create the most realistic and natural looking brushstroke possible — one worthy of such a powerful statement. From a technical standpoint, this meant focusing on three essential factors: performance, variation and strikethrough detection.

Performance & variation in interactive experiences

From the beginning, I knew that keeping things performant, and thus responsive, was going to be a tricky but essential part of creating this website element. It’s a recurring theme in interactive experiences which heavily rely on displaying visual feedback to the end user. Most users crave instant gratification and feel disconnected from the experience if they don’t get immediate visual feedback on what they’re doing. So, if a user swiftly brushes across the screen, it’s our job to make the virtual brush duplicate this movement and produce visual results in milliseconds. I’ll explain how we were able to boost performance by choosing the right framework for the job later in the article.

The other key factor is variation. This is a common issue when creating interactive experiences that mimic some kind of human behavior. Imagine picking up a real brush and painting on a blank canvas. You know that what you paint will be different from what someone else paints — even if it is the same theme. This is what makes our task so difficult: it’s nearly impossible to predict brushstroke outcomes because of the sheer volume of variations. The length of each individual bristle, the amount of paint applied to the brush, the amount of pressure applied, the brushing velocity — all of these play an important role in creating that naturally imperfect look we appreciate in art.

So, my main focus for this project was isolating the brushing experience and building an initial prototype that pushes the limits of graphics programming on the web — while keeping these two essential factors in mind.

Let me take you through a technical breakdown of how I built the entire interaction in a few days.

Spray and pray?

Before we really dive into the first challenge, let’s take a trip down memory lane. This isn’t my first time creating this kind of interactive experience; I built a few prototypes in Processing where users could spray around shapes and graphics on a canvas to create their own works of art. The algorithm for this is very easy. You basically just spawn random shapes and objects underneath the mouse cursor at a steady, regular interval.

Initially, I thought I’d be able to use what I’d already made to create the brushstroke by porting the code from Processing (Java) to JavaScript. But after fooling around with the code a bit, I realized I’d have to build a more elaborate solution for this project. You see, our main artist, Leon Löwentraut, uses a combination of brushstrokes and paint tubes in his beautiful art pieces, not spray cans. So, back to square one.

I was really unsure about how to proceed until I passed a designer colleague’s desk and was suddenly inspired. She was working on a design in Photoshop using the built-in brushes the software offers. I immediately opened up Photoshop and started investigating how they’d programmed the brushing mechanics. I was looking for small clues as to how they made the brushing engine by examining each individual step when dragging my cursor across the screen.

I noticed that the brush didn’t really change over time and realized I had approached the entire situation in the wrong way. You can’t just spray shapes onto the screen and expect them to look as if they were created with a brush. Now, what Photoshop does is very simple: it repeatedly stamps a fixed textured pattern over the path of the dragged brush stroke. I later called this “stamping” because it literally does what the word says: stamps an image with bristle-spots between two mouse positions.

Stamping between two consecutive mouse positions.

This was the first break-through that brought me a little bit closer to making the painting action look and feel more realistic.

Here’s how it works on a technical level: as the cursor moves over the screen, we receive repeated updates about every new cursor position. For each new cursor position, I calculate the distance between the previous cursor position and the current one. Next, I stamp the fixed pattern by interpolating the position of the stamp from the previous cursor position until I reach the new cursor position. After this, I stamp the pattern onto the screen by copying the pattern’s pixels onto the final brush layer for each interpolated position.

I like simple ideas that have a major visual impact. Excited about my discovery, I immediately got to coding and quickly realized that the math was going to be super easy. Several minutes and a few simple lines of code later, I had a basic working idea. If this all seems too good to be true, it’s because it is. As I continued to work, the first challenge began to rear its ugly head: the performance started to bottleneck. But why? Well, as we learned from my previous blog post: if you try to push around tons of pixels in a very short amount of time, your computer will start to complain about not being able to handle the huge volume of intense actions. This “system overload” results in a massive frame rate drop, disconnecting the user from the experience because all the drawing actions take ages to complete.

WebGL to the rescue!

Lucky for me, I could use the same solution as with the previous project. Instead of doing loads of pixel operations on the processor (CPU), we needed to offload them to the graphics card (GPU). Once again, we turned to the graphics programmer’s best friend: WebGL. By using this API, we can tell the browser which operations should be executed by a specific processor.

Work smarter, not harder, right? Instead of writing some raw WebGL API code which is super boring, low-level and super hard to get a quick grasp on, I prefer to use a nice framework that does all the hard stuff for me in the background. This allows me to focus on the creative process of making the brushing experience instead of the nitty-gritty details of the complex WebGL API.

I’m a big fan of choosing the right tool for the job, so I went online and started investigating different WebGL frameworks. The one that proved itself to be the best candidate was Pixi.js. It offers access to the WebGL API from a high-level perspective but keeps its focus on displaying 2D graphics, applications and games. And after my slight setback earlier, I was happy to learn that setting up a Pixi.js project was super easy and straightforward. I could basically copy/paste my initial code, adapt it to fit the framework and enjoy the result: silky-smooth brushing!

Here’s the code snippet, executed 60 times per second in the paint method:
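As a minimal sketch of what that paint step does, in plain JavaScript (function and variable names are mine; in the project, each computed position is rendered by blitting the brush texture onto the brush layer with Pixi.js):

```javascript
// Sketch of the per-frame stamping step. Given the previous and current
// cursor positions, return the list of equally spaced stamp positions
// between them, so stamp density stays independent of cursor speed.
function computeStampPositions(prev, curr, spacing) {
  const dx = curr.x - prev.x;
  const dy = curr.y - prev.y;
  const distance = Math.sqrt(dx * dx + dy * dy);
  const positions = [];
  // Walk from the previous position towards the current one in fixed steps.
  for (let d = 0; d <= distance; d += spacing) {
    const t = distance === 0 ? 0 : d / distance;
    positions.push({ x: prev.x + dx * t, y: prev.y + dy * t });
  }
  return positions;
}

// In the real paint method, each position would then be stamped onto the
// brush layer, e.g. by rendering a textured sprite into a render texture.
const stamps = computeStampPositions({ x: 0, y: 0 }, { x: 10, y: 0 }, 2);
// stamps: (0,0), (2,0), (4,0), (6,0), (8,0), (10,0)
```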

Spicing things up: adding texture

Satisfied with my break-through, it was time to add a little “spice” to the overall experience, i.e. more customization and a more realistic brushstroke. Here, I came up with the idea of adding a supply level to the brushstroke movement which would allow users to see how much paint is left on the brush. When a user starts brushing, the supply level is at 100% (a wet brush); then, over the course of a predefined distance, the brush dries up linearly until it reaches 0% (a dried-up brush).

The predefined distance is calculated dynamically in relation to the size of the screen diagonal, so that the distance a visitor has to travel is relative to the screen size of his/her device. I chose 45% of the diagonal so that a fully-completed brush stroke would span about half of the screen. Proportionally speaking, this looked and felt the best.

Draining supply level indicator.
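The drain itself is simple math. Here is a minimal sketch, assuming the 45% diagonal factor from above (the function name and signature are illustrative, not from the project source):

```javascript
// Sketch of the supply level: drains linearly from 1 (wet brush, 100%)
// to 0 (dried-up brush, 0%) over 45% of the screen diagonal.
function supplyLevel(traveledDistance, screenWidth, screenHeight) {
  const diagonal = Math.sqrt(screenWidth ** 2 + screenHeight ** 2);
  const maxDistance = diagonal * 0.45; // brush dries up after ~45% of the diagonal
  return Math.max(0, 1 - traveledDistance / maxDistance);
}

supplyLevel(0, 1920, 1080);     // 1 (fresh brush)
supplyLevel(99999, 1920, 1080); // 0 (fully drained)
```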

Where’s the spice, you ask? Ok, so the supply level indicator doesn’t really change anything visually, but it does have an important functional application: we can use it as a hook for dynamically changing other parameters over time because it gradually goes from 100% to 0%.

For example, something linked to the supply level that I was able to change was the number of bristles that touch the surface at a specific time. Initially, when the brush comes in contact with a surface, all bristles touch it (a full brush), and as the painter continues through the motion, the brush loses paint supply. For a more realistic effect, I gradually removed more and more bristle layers, which thinned out the brush visually.

Varying bristle count touching the surface over time.
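This thinning can be sketched as a simple mapping from the supply level to a layer count (the total layer count and the linear mapping are my assumptions for illustration):

```javascript
// Sketch: map the draining supply level (1..0) to the number of bristle
// layers still touching the surface. A full, wet brush presses every
// layer onto the canvas; as the supply drains, layers are removed one by
// one, visually thinning the stroke.
function activeBristleLayers(supply, totalLayers) {
  // Keep at least one layer so the stroke never disappears entirely.
  return Math.max(1, Math.round(supply * totalLayers));
}

activeBristleLayers(1.0, 8); // 8 (full brush)
activeBristleLayers(0.5, 8); // 4
activeBristleLayers(0.0, 8); // 1 (almost dried up)
```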

Another small tweak was scaling the brush down over the course of the traveled path. When a real brush touches a canvas, it is pretty wide because all the bristles fight for a place on the surface, pushing each other outwards. Then, as the supply level drains and the painter begins lifting the brush from the surface, the stroke gradually narrows until it reaches its final size, which is smaller than when it first hit the surface.

To add a bit more texture and irregularity, I implemented a small check that randomly selects a percentage between 0% and 100% and then completely skips the rendering in 25% of the cases. This irregular stamping creates a nice intermittent pattern of imperfection.
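As a sketch, with the random source injected so the 25% skip is easy to test (the names are mine, not from the project source):

```javascript
// Sketch of the irregular stamping check: roll a random percentage and
// skip rendering the stamp for rolls below 25%.
function shouldRenderStamp(random = Math.random) {
  return random() >= 0.25;
}

shouldRenderStamp(() => 0.1); // false -> stamp skipped
shouldRenderStamp(() => 0.9); // true  -> stamp drawn
```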

Final step: strikethrough detection

After building a digital brush that looked and felt like a real paintbrush, there was still one more task to complete the experience: detecting when a user brushes through an issue. Now, there are tons of ways to do this. Some of these might require hours and hours of programming, but why waste time when we can find a solution applicable to 95% of all cases?

Structurally, each issue on the Art4GlobalGoals website has a clearly defined box, typically called a “bounding box”, surrounding it. We can use an issue’s bounding box to test whether specific points fall inside or outside of it. Then, if a specific threshold percentage of points falls within the box, the program signals a successful strikethrough.

Bounding box used for hit detection.

At first, I tried to use the individual measured mouse positions to detect whether these points fell inside or outside the bounding box. However, the detection ended up wonky due to the low resolution of the points. This problem became even more obvious when the user brushed at variable speeds (e.g. starting slow and speeding up towards the end). For a successful strikethrough, I had to somehow get a list of consistent, equally-distanced points spanning the path from beginning to end. I then realized that I could use the coordinates of the stamped points instead of the irregular mouse positions.

This break-through immediately led to great results. After tweaking several individual parameters, I ended up choosing 20% as the minimum threshold of points that had to fall within the bounding box. Using the draining supply level, I decided to check if the user traveled a set minimum distance to filter out some edge cases. The user had to brush for at least 25% of the supply level to trigger the detection logic.

Equally-distanced points spanning the full path. Hit detection points highlighted.
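Putting the thresholds together, the detection could be sketched like this (the function and parameter names are mine; the 20% point threshold and the 25% minimum supply usage come from the text above):

```javascript
// Sketch of the strikethrough check against an axis-aligned bounding box.
function isStrikethrough(stampPoints, box, supplyUsed) {
  // Ignore very short strokes: the user must have used at least 25%
  // of the paint supply before the detection logic kicks in.
  if (supplyUsed < 0.25) return false;

  // Count the equally-distanced stamp points that fall inside the box.
  const inside = stampPoints.filter(
    (p) => p.x >= box.x && p.x <= box.x + box.width &&
           p.y >= box.y && p.y <= box.y + box.height
  ).length;

  // Signal success when at least 20% of the points lie within the box.
  return inside / stampPoints.length >= 0.2;
}

const box = { x: 100, y: 100, width: 200, height: 50 };
const points = [
  { x: 50, y: 120 }, { x: 120, y: 120 }, { x: 200, y: 120 },
  { x: 280, y: 120 }, { x: 350, y: 120 },
];
isStrikethrough(points, box, 0.6); // true: 3 of 5 points (60%) are inside
```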

Conclusion

Simulating something with a natural look and feel is always a big challenge in creative coding. This is especially true when performance is crucial to the overall experience. I could’ve pushed it a lot further by adding more irregularity, e.g. random brush splats that drop onto the surface at the start and end of the brush stroke, or a Perlin noise function to add a more organic feel. But in the end, it’s important to find a good balance between functionality, performance and the main focus. Implementing more randomness drastically dropped the performance and took away from the concept of brushing swiftly through each issue to reveal the individual Global Goal.

Visit the project website at https://art4globalgoals.com


polyclick

Cologne-based audiovisual artist attempting to connect the physical and digital through creative coding. Won some awards.