Fluid Dynamics 101: Byte by byte

This year I participated in the PC 1k intro category at the Assembly 2019 demoscene competition with my entry “Fluid Dynamics 101” and placed 3rd (check out 1st, 2nd and 4th). This is a byte-by-byte analysis of the entry. The rules of the competition are that you must create an executable or a website that is at most 1024 bytes in size. That is an extremely limited amount of space in which to produce audio and visuals. This first paragraph is already 515 bytes, or characters, long, so the entire entry is only about twice that.

PNG compression and bootstrapping

To get a bit more than 1024 characters of HTML and JavaScript to work with, we can compress the JavaScript into a PNG image, which we then let the browser decode and execute. Here is the final HTML file: it contains the PNG image data at the beginning, and the HTML and JavaScript required to decode and execute it at the end. The browser ignores the PNG data while reading the file as HTML, since it doesn’t know what to do with it, and simply executes the HTML when it reaches it. When the same file is read as a PNG image, the header tells the decoder how many bytes to read; the HTML part lies outside that range, so it is ignored in turn, and the file works just fine as a PNG image.
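To see why the trailing HTML is invisible to a PNG decoder, it helps to know that a PNG file is an 8-byte signature followed by length-prefixed chunks, and that decoders stop at the IEND chunk. A sketch of walking the chunks to find that boundary (this is illustrative code, not part of the entry):

```javascript
// A PNG is a signature plus length-prefixed chunks; a decoder stops at IEND,
// so anything placed after it (here: the HTML bootstrap) is never touched.
function pngEnd(bytes) {
  let i = 8; // skip the 8-byte PNG signature
  for (;;) {
    // each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
    const len =
      (bytes[i] << 24) | (bytes[i + 1] << 16) | (bytes[i + 2] << 8) | bytes[i + 3];
    const type = String.fromCharCode(bytes[i + 4], bytes[i + 5], bytes[i + 6], bytes[i + 7]);
    i += 12 + len; // length + type + data + CRC
    if (type === "IEND") return i; // HTML can safely follow from this offset on
  }
}
```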


I’m using a slightly adapted version of the compilation pipeline seen in Core Critical. Instead of relying on JSEXE to do the PNGification, I opted to do it myself, inspired by p01’s write-up on BLCK4777. Better-optimized bootstrapping code saved me 3 bytes over JSEXE’s version. Another 3 bytes were saved by better PNG compression using a combination of PNGOUT, ZopfliPNG and PNGWolf. So, in total this approach saved 6 bytes. The code above should run in any modern browser. You could save 4 more bytes by removing the IDAT checksum, and Chrome would still happily run it, but Firefox won’t.


Let’s open up the HTML part first. “<canvas id=s>” is used later on, within the compressed JavaScript, to create a WebGL context, which allows us to run the fluid simulation and rendering on the GPU. Fluid simulation is heavy, and the CPU wouldn’t be fast enough to run it at 1080p at 60fps. The rest of the HTML and JavaScript first loads the file as a PNG image into a second canvas, then reads the image back as characters into a variable named “_”, and finally evaluates it as JavaScript.
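The read-back step can be sketched like this: once the browser has decoded the PNG onto a canvas, getImageData() hands back RGBA bytes, and one common variant of the trick recovers one character per pixel from the red channel. This is a sketch of that scheme, not the entry's exact bytes:

```javascript
// rgba is the flat byte array from ctx.getImageData(...).data.
// One source character was packed per pixel; only the red channel
// (every 4th byte) carries payload, and a zero byte marks the end.
function pixelsToSource(rgba) {
  let _ = "";
  for (let i = 0; rgba[i]; i += 4)      // stop at the first zero red byte
    _ += String.fromCharCode(rgba[i]);  // red channel back to a character
  return _; // in the entry, this string is then passed to eval()
}
```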

Rendering pipeline and Speech Synthesis

Now let’s take a look at what is hidden within that compressed section. Below you can see it after decompression. Because it has been minified, with all line breaks, spaces and such removed, it is very hard to read.

To make it more readable, I’ve added indentation, line breaks and comments to the code. If you removed those, it would be identical to the code above.

Audio and JavaScript are a very tough combination for a 1k intro, because there aren’t any easy ways to access audio APIs in the browser. The alternatives are basically writing a WAVE file and playing it with an Audio element, or using the Web Audio API. Both of these take over 100 bytes to produce even the tiniest beep. Unfortunately, setting up the multiple textures and the fluid simulation steps takes so much code that there is simply no room for that many bytes of audio. The SpeechSynthesis API, on the other hand, takes only 30 bytes compressed, so it was a perfect fit here.
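The SpeechSynthesis call pattern is short because both globals are built into the browser. A minimal sketch (wrapped in a function for clarity; the spoken text and the function name are mine, not the entry's):

```javascript
// Smallest practical way to make sound in a browser without Web Audio:
// hand a string to the built-in speech synthesizer.
function say(text) {
  // SpeechSynthesisUtterance and speechSynthesis are standard browser globals
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// say("fluid dynamics"); // a size-optimized intro would inline all of this
```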

I actually had the entry ready and packed using a setInterval() loop in JavaScript. However, this caused micro-stuttering every now and then, because the screen refresh rate doesn’t exactly match the simulation speed. After noticing the stuttering I couldn’t unsee it and just had to fix it. Switching to the requestAnimationFrame() approach, which is the correct way to do it, cost me an extra 10 bytes and took another 4 hours of optimization to fit into 1024 bytes. The end result is a smooth, frame-perfect fluid simulation that makes me very happy.
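The difference between the two approaches can be sketched as follows: setInterval(step, 16) fires on its own clock and drifts against the display's refresh, while requestAnimationFrame() ties each simulation step to an actual frame. The function names here are illustrative:

```javascript
// Drives one simulation step per displayed frame, in sync with vsync.
// With setInterval(step, 16) the timer and the display slowly drift apart,
// occasionally skipping or doubling a step -- the micro-stutter described above.
function loop(step) {
  function frame() {
    step();                       // advance the fluid simulation one tick
    requestAnimationFrame(frame); // schedule the next tick on the next frame
  }
  requestAnimationFrame(frame);
}
```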

Fluid simulation

The fluid simulation is nearly a carbon copy of the one presented in GPU Gems Chapter 38, “Fast Fluid Dynamics Simulation on the GPU”. To save space, some steps were merged together. I also dropped the boundary conditions, which creates somewhat weird interactions at the edges, but it’s a 1k; we aren’t looking for perfect realism here.

Space-wise, we can’t really afford multiple textures to store velocity, pressure and divergence, so the whole simulation state is stored in a single texture. We read from texture A and write to texture B, then swap them after each draw call, except when rendering, since we don’t want to save that output, just display it. The red and green channels store the X and Y velocity, blue stores pressure, and alpha stores the smoke that is used for rendering.
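The read/write swap is the classic ping-pong pattern; a minimal sketch of the bookkeeping (names like makeSwapper are mine, and the real entry would inline this rather than spend bytes on a helper):

```javascript
// Ping-pong between two state textures: every simulation pass samples from
// `read` and renders into `write`, then the roles flip. The final display
// pass samples `read` but does NOT swap, since its output isn't state.
// Channel layout per texel: R,G = velocity x,y; B = pressure; A = smoke.
function makeSwapper(texA, texB) {
  let read = texA, write = texB;
  return {
    get read() { return read; },
    get write() { return write; },
    swap() { [read, write] = [write, read]; }, // call after each sim pass
  };
}
```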

Future work

That’s it: fluid simulation and speech-synthesized audio in 1024 bytes. I think there isn’t too much room left for creativity with this tech in 1k of space. I don’t know how much space the setup of textures and shaders takes in native code, but at least sound is much easier to access there, so perhaps something can be done on that side. In a browser you could omit the audio to make the visuals slightly better, but silent intros aren’t very captivating, and an extra 30 bytes isn’t much to work with.

That said, in 4k, 8k or 64k intros I think you could use both fluid simulation and speech synthesis to great effect. I haven’t seen many intros using them, and as demonstrated here, neither takes that much space. Additionally, the SpeechSynthesis API allows altering the sound with SSML tags, so you could make it sing something rather than just read the source code in a monotone voice.
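SSML support varies a lot between speech engines; the portable knobs in the API itself are the utterance's pitch and rate properties, which is one way to approximate "singing" without SSML. A sketch (function name and note values are mine):

```javascript
// Speak each word at its own pitch. pitch ranges 0-2 (default 1) in the
// Web Speech API; utterances are queued and spoken in order.
function sing(words, pitches) {
  words.forEach((w, i) => {
    const u = new SpeechSynthesisUtterance(w);
    u.pitch = pitches[i % pitches.length]; // cycle through the "melody"
    speechSynthesis.speak(u);
  });
}

// sing(["fluid", "dy", "na", "mics"], [1.0, 1.3, 1.6, 1.3]);
```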


Written by Bercon
