Lost in Parallel Universe
Long story short: my ArtBlocks application, Lost in Parallel Universe, got rejected. You can find my project here. I have to say I was a bit gutted. I would like to share my mistakes here so that other artists can avoid repeating them. I am also going to share some cool little tricks and ideas I used in this project. I hope you will enjoy it.
I always feel skeptical when it comes to art. It is a very strange industry, and art can be very subjective. When I decided to go for it, I knew there was no guarantee that my project would get accepted. On ArtBlocks, a Curation Board does the gatekeeping to make sure the quality of submitted artwork is up to a certain standard. So I did some basic groundwork: I screened the approved projects (Curated and Factory) on ArtBlocks and compared them with the project I was working on, and I was confident that my project would at least be accepted for release as a Factory project on the platform. When the application opened, I submitted my project on the first day. After almost 2 months of waiting, here is the first response:
Our review committee has evaluated your submission and, unfortunately, we have decided not to move forward with this project on Art Blocks. We’ve seen a huge increase in the quality and quantity of submissions, and although we think your work is strong, your project could be pushed a little farther. In Lost In Parallel Universe, we felt that the subject was distracting the beauty of the background. I wonder what it might look like if you deconstructed the elements a little.
We really appreciate taking the time to apply and create new work. Please keep at it and reapply in the future! You may continue work on this project or submit a new project. Applications will remain open, so take your time in your artistic process.
Okay, fair enough. As usual, I agreed that there were always things I could keep pushing. But at this point I had already invested 2–3 months into the project, so I wanted a bit more feedback to improve it and resubmit, or to create a better one next time. In my head, I believed it was not just about the quality of my project, so I asked whether the style of my artwork was simply not what ArtBlocks was looking for. Here is the reply:
Yes exactly, we want a collection that reads more like a series of artworks. The work you submitted is giving a 3D virtual environment feel. We are looking for projects that express variety, rarity, strong color palettes, and composition.
It turns out I got it all wrong from the beginning: The Definition of Artwork.
It looks like ArtBlocks was looking for a certain style of project. In fact, I can still find many artworks similar to mine on their platform at the time of writing this article. Did they change direction in favour of those traditional fine art styles? I can't say for sure. But this second email definitely made it clear that their feedback in the first email was completely irrelevant.
I am not putting blame on ArtBlocks. It is their platform, and they have every right to choose what kind of work is shown on their site. In my opinion, though, they should have communicated better upfront and been transparent about what they were looking for, especially since a lot of artists like myself went for a completely opposite art direction.
But anyway, here is a summary of the mistakes that led to the failure of my application:
- I didn't do enough research on who was going to judge my project. It was an amateur mistake. When I started the project, I thought I would just make the project I liked and keep pushing its quality, and then I would stand a chance. That was completely wrong. You will never be able to sell a pair of shoes to someone asking for a pair of gloves, no matter how good a shoemaker you are. Know your target audience, ALWAYS. Here is the official list of their Curation Board. Stop writing your code and learn what your audience wants. It is not about making art that you like; it is about making commercial art that sells.
- I naively thought that making a real-time visual project would give my application an advantage. Sacrificing image quality to get real-time effects such as smoke simulation running at 60fps was a stupid idea. That kind of visual quality may look great in my line of work in the web advertising industry, but it won't cut it in front of a Curation Board that values image quality over all the smart tricks I used to get my project running at 60fps. I was convinced that when the art committee saw a fully interactive experience, they would be impressed. So, within those 2–3 months of production time, I kept working on performance optimisation and adding features such as accelerometer navigation for mobile devices, totally detached from the reality that the members of the Curation Board are not people working in our industry who would appreciate those sorts of details. So, if you build 3D scene graphs like I do, forget about real-time rendering; go for an offline rendering approach such as this in-browser path tracing test that I made recently.
- I wasted a lot of time on optimisation and on adding details and features. After just 2 weeks of development, my project was already at roughly 80% of what I submitted in the final version. If what I did in that early stage didn't fit what the Curation Board wanted, I don't think the extra 20% would make any difference. Instead, I should have explored different art styles and produced as many diverse projects as possible, hoping one of them would be chosen by the Curation Board.
In conclusion: know your audience and try to look at your work from another's perspective. Otherwise, your art won't sell even if you are Vincent van Gogh.
Clarification!! My application was actually rejected in the first screening by their Art Committee; it was never screened by their Curation Board. The Art Committee is made up of Art Blocks staff. More information can be found at https://www.artblocks.io/learn
So what's next for me? I don't think I am going to release this project as an NFT. Some of my friends suggested I release it on fxhash to earn some cash from it. Firstly, though, I don't think it is worth it. If I am only going to earn a few thousand dollars (and only if I am lucky) on fxhash, I would rather release it online for FREE like I always do with the experimental projects we've done in my studio (see the bottom of this article). Secondly, fxhash is a bit different from ArtBlocks: ArtBlocks has a really strict file size constraint due to its scripts-on-chain feature, and no one wants to pay $30k in gas fees for a jpeg. When I designed the project for my ArtBlocks submission, it really felt like I was working on a 64K demo. For example, I spent a couple of days figuring out how to procedurally generate a blue noise texture in real time to render the smoke better; on fxhash you could probably just import a blue noise texture and be done with it. Without that file size restriction, there is much more I could do to create better visuals on fxhash. Anyway, I think this ArtBlocks adventure will probably be the only short generative-artist-wannabe period of my life. Thanks to ArtBlocks for giving me this sweet dream; I will probably get back to my office and do what I do best: Creative Coding, Not Generative Art.
Still, one thing I find better about ArtBlocks than fxhash, other than the cash perspective, is the stricter gatekeeping. Even though, ironically, I got cast out by this gatekeeper, I still believe that stricter gatekeeping keeps the quality of the projects on the platform a lot higher. However, I believe we need a more inclusive generative art platform that gives different kinds of generative artists/tech artists, especially 3D artists, similar opportunities.
Here are some random ideas I would love a platform to have:
- It could run Blender projects in the backend and generate Cycles/Eevee render outputs, using the blockchain token to seed the random values in Python. Artists could then create generative artwork in code, or leverage Geometry Nodes to create something far more interesting.
- The same goes for Houdini artists. If the platform could run the HDAs created by artists, the creative possibilities would be limitless. For rendering, Mantra and Karma would be enough, but it might take forever to generate the final image to re-upload to OpenSea. Otoy RenderToken could be a good option via a generated orbx file.
- If real-time rendering is not critical, there is no reason not to run Python scripts directly in the backend. That could also open the door to generative AI artwork. How amazing could that potentially be?
Of course, the ideas above would probably require a lot of sandboxing groundwork for security reasons. But it is always good to dream of the ideal world; maybe someone will be brave enough to build one in the future.
Since I've already spent countless hours on this project, I would like to share some cool tricks and ideas I've learned from it, in case any creative coders are interested.
From the beginning I knew that I wanted my artwork to have strong contrast and a night-time setup, so it was an obvious choice to create a starry night using a nebula skybox. But as I said, on ArtBlocks you can't really import an image as a texture, so I had to create it procedurally. The skybox comes in 3 parts:
- The galaxy-looking nebula: created with multiple octaves of simplex noise. I used a similar technique to add a two-colour gradient to make it look a bit more interesting.
- The small stars: if you use the same high-octave simplex technique but apply a very high exponent with `pow()` to clamp down the brightness and ramp up the contrast, you get small, noisy, grain-like stars.
- The dynamic blinking stars: adding some dynamic stars that blink over time adds subtle detail to the composition. I only used 1024 of them, but it is already pretty convincing imo.
Protip: due to the complexity of the shaders, the whole skybox cubemap is generated over a couple of frames to prevent crashes on mobile. Each nebula skybox is unique, and I really like the result given how simple the code is.
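The nebula and star tricks above can be sketched like this. Note this is a minimal CPU-side sketch with value noise standing in for simplex, and every name (`hash`, `noise2`, `fbm`, `nebulaColor`, `starBrightness`) is illustrative, not the project's actual code:

```javascript
// Value noise stands in for simplex here; all names are illustrative.
function hash(x, y) {
  const s = Math.sin(x * 127.1 + y * 311.7) * 43758.5453;
  return s - Math.floor(s); // pseudo-random in [0, 1)
}
function noise2(x, y) {
  const xi = Math.floor(x), yi = Math.floor(y);
  const xf = x - xi, yf = y - yi;
  const u = xf * xf * (3 - 2 * xf), v = yf * yf * (3 - 2 * yf); // smoothstep
  const a = hash(xi, yi), b = hash(xi + 1, yi);
  const c = hash(xi, yi + 1), d = hash(xi + 1, yi + 1);
  // bilinear interpolation of the four lattice corners
  return a + (b - a) * u + (c - a) * v + (a - b - c + d) * u * v;
}
// Nebula body: several octaves summed, each half the amplitude at double
// the frequency, normalised back into [0, 1].
function fbm(x, y, octaves = 5) {
  let sum = 0, amp = 0.5, freq = 1, norm = 0;
  for (let i = 0; i < octaves; i++) {
    sum += amp * noise2(x * freq, y * freq);
    norm += amp;
    amp *= 0.5;
    freq *= 2;
  }
  return sum / norm;
}
// Two-colour gradient: a second, decorrelated fbm drives the blend.
function nebulaColor(x, y, colA, colB) {
  const density = fbm(x, y);
  const t = fbm(x * 0.3 + 17.0, y * 0.3); // offset to decorrelate
  return colA.map((ca, i) => (ca + (colB[i] - ca) * t) * density);
}
// Small stars: crush the same noise with a huge pow() exponent so only
// the brightest peaks survive as grain-like specks.
function starBrightness(n, exponent = 64) {
  return Math.pow(n, exponent); // 0.5 -> ~5e-20, 0.99 -> ~0.53
}
```

In the real project this would run in a fragment shader; the structure of the maths carries over directly to GLSL.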
The story behind my project is that in the future, people are lost in a parallel universe and need to harness energy from the ground in order to survive. But you are not alone. The centrepiece of the visual is the energy beam; to show that there are other people living in the parallel universe, I decided to create some neighbours.
The neighbours are essentially visualised as other light beams, which use the same flickering pulse effect to unify the central light beam with its neighbours. I also randomly created 2 types of landscape silhouettes, terrain and buildings, to enhance the neighbour aspect of the story. Plus, they are super easy to generate procedurally. The silhouette is also a good way to clip out the ground/water reflection and further divide the sky and the ground for a better composition.
Realtime Volumetric Smoke
The fluid simulation was borrowed from this amazing project. However, I wanted 3D smoke instead. The idea was to take the 2D fluid simulation as a 2D height field and ray march it. But if I just ray marched the 2D height field, it would obviously look very 2D. The hack is to cache the previous simulation over several frames to create a fake 3D volumetric simulation: the lower levels of the smoke return the more recent results, and the higher levels return the older ones.
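A CPU-side sketch of that history trick. Assumptions: height fields are flat arrays, `HISTORY` frames are kept, and all names are mine rather than the project's shader code:

```javascript
// Keep the last N frames of the 2D simulation; a ray sample higher up the
// column reads from an older cached frame, so the 2D field reads as 3D.
const HISTORY = 8;
const history = []; // newest first: history[0] is the current frame

function pushFrame(heightField) {
  history.unshift(heightField);
  if (history.length > HISTORY) history.pop();
}

// During ray marching: a sample at normalised height h in [0, 1] picks an
// older slice the higher it is, so the top of the smoke lags the base.
function sampleSmoke(x, y, h, width) {
  const slice = Math.min(history.length - 1, Math.floor(h * HISTORY));
  const field = history[slice];
  const density = field[y * width + x];
  // The sample is "inside" the smoke only if the cached height exceeds h.
  return density > h ? density : 0;
}
```

In the real shader the history would live in a texture atlas or a few render targets, but the indexing logic is the same.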
To make it run smoothly on mobile, I needed to downscale my raymarcher and use lower sample counts, so I needed some blue noise to improve the image quality lost to the low sample count. Unfortunately, no textures are allowed on ArtBlocks. I tried transpiling a C-based blue noise generator into JS without success, and ended up finding this Blue Noise Generator (I asked Leonard Ritter's permission to use this snippet). I tweaked the shader a bit so that it can run in WebGL 1 without the bitwise operations. Finally!
There was another challenge in the smoke rendering. If an object occludes the smoke, the thickness of the smoke needs to reflect that, otherwise it will look wrong. Normally I would just rely on the depth buffer; however, in WebGL 1 you only get a very low-precision 8-bit depth buffer. To avoid rendering everything twice, I stored a normalised depth, based on the camera ray's intersections with the smoke volume's bounding box, in the alpha channel. Finally, my cheap, fast volumetric smoke renderer was done.
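The ray/box intersection behind that normalised depth can be sketched with the classic slab method (function names are my own, not the project's):

```javascript
// Slab-method ray/AABB intersection: intersect the ray against each pair
// of axis-aligned planes and keep the tightest [tNear, tFar] interval.
function rayBox(origin, dir, boxMin, boxMax) {
  let tNear = -Infinity, tFar = Infinity;
  for (let i = 0; i < 3; i++) {
    const inv = 1 / dir[i]; // IEEE infinities handle axis-parallel rays
    let t0 = (boxMin[i] - origin[i]) * inv;
    let t1 = (boxMax[i] - origin[i]) * inv;
    if (t0 > t1) [t0, t1] = [t1, t0];
    tNear = Math.max(tNear, t0);
    tFar = Math.min(tFar, t1);
  }
  return tNear <= tFar && tFar >= 0 ? { tNear, tFar } : null;
}
// Normalise an occluder's hit distance into [0, 1] across the volume span;
// this is the value that can live in an 8-bit alpha channel instead of
// relying on WebGL 1's low-precision depth buffer.
function normalisedDepth(tHit, tNear, tFar) {
  return Math.min(1, Math.max(0, (tHit - tNear) / (tFar - tNear)));
}
```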
In fact, the smoke simulation is interactive, but since the smoke is a rare feature in my project, I didn't want to add an exclusive interaction just for this effect.
In my project there are 4 different main visuals: Particles and Wires, Geometric Shapes, Rings and Grid Blocks
Particles and Wires
It is the classic curl noise simulation with wires and instance-based geometric shapes. There are 3 different particle shapes in this experience: spheres, pyramids and rectangular blocks.
The wires are the rare feature in this main visual. Even though the wire simulation is not physically based, it still adds some visual detail to the composition.
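For anyone unfamiliar with curl noise, here is a minimal 2D sketch. I use an analytic stand-in for the noise potential (the real effect would use a simplex-based 3D potential), and all names are illustrative:

```javascript
// 2D curl noise: velocities are the 90-degree-rotated gradient of a scalar
// potential, which makes the field divergence-free so particles swirl
// without bunching up.
function potential(x, y) {
  // Smooth analytic stand-in for a noise field.
  return Math.sin(x * 1.3 + Math.cos(y * 0.7)) + Math.cos(y * 1.1 - x * 0.4);
}
function curl(x, y, eps = 1e-4) {
  // Central differences of the potential.
  const dpdx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps);
  const dpdy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps);
  return [dpdy, -dpdx]; // rotate the gradient 90 degrees
}
// Advect a particle one step along the field.
function step(p, dt = 0.016) {
  const [vx, vy] = curl(p[0], p[1]);
  return [p[0] + vx * dt, p[1] + vy * dt];
}
```

The divergence-free property is what makes curl noise read as fluid-like motion, and it falls out of the construction for free.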
This is a fairly simple effect. It mixes and matches different geometric shapes, which spin along the light beam up into the sky. To make the visual a bit more interesting, I created a pattern generator which produces various seamless patterns through the canvas APIs and passes 3 different patterns to the shaders in the RGB channels of a single texture.
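The channel-packing idea can be sketched without the canvas by writing three stand-in patterns straight into an RGBA byte buffer (the pattern functions here are illustrative, not the project's actual generators):

```javascript
// Pack three greyscale patterns into the R, G and B channels of one RGBA
// texture; the shader can then pick a channel per instance.
const patterns = [
  (x, y) => (x ^ y) & 1,                        // XOR checker-ish pattern
  (x, y) => (x % 8 < 4) ? 1 : 0,                // vertical stripes
  (x, y) => ((x * x + y * y) % 16 < 8) ? 1 : 0, // concentric-ish rings
];
function packPatterns(size) {
  const data = new Uint8Array(size * size * 4); // RGBA, canvas-style layout
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < size; x++) {
      const i = (y * size + x) * 4;
      data[i + 0] = patterns[0](x, y) * 255; // R: pattern 0
      data[i + 1] = patterns[1](x, y) * 255; // G: pattern 1
      data[i + 2] = patterns[2](x, y) * 255; // B: pattern 2
      data[i + 3] = 255;                     // opaque alpha
    }
  }
  return data;
}
```

Packing three patterns into one texture saves both texture units and file size, which matters under ArtBlocks-style constraints.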
Rings is probably the easiest one; however, its simplicity gives this visual some empty space to breathe, unlike the other, more complex main visual systems.
This is actually a pretty interesting main visual. I did some KDTree-like splitting on a 2D square to get some interesting grids. Then, for each grid cell, I created N 3D shapes along the Y-axis (up) with random height values, and normalised the heights across the different cells so that I could create a Y-axis loopable animation without any overlapping. By moving the shapes vertically and scaling them over time, I created this unique grid system of blocks.
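A minimal sketch of the KDTree-like splitting. The split range, depth and naming are my own choices, not the project's:

```javascript
// Recursively cut a rectangle [x, y, w, h] at a random position,
// alternating axes, until a depth limit; the leaves are the grid cells.
function split(rect, depth, rng = Math.random) {
  if (depth === 0) return [rect];
  const [x, y, w, h] = rect;
  const t = 0.3 + rng() * 0.4; // keep cuts away from the edges
  const vertical = depth % 2 === 0;
  const [a, b] = vertical
    ? [[x, y, w * t, h], [x + w * t, y, w * (1 - t), h]] // cut along X
    : [[x, y, w, h * t], [x, y + h * t, w, h * (1 - t)]]; // cut along Y
  return [...split(a, depth - 1, rng), ...split(b, depth - 1, rng)];
}
// Depth 4 gives 2^4 = 16 cells that tile the unit square exactly.
const cells = split([0, 0, 1, 1], 4);
```

Because every cut preserves area, the leaves always tile the original square, which makes it trivial to place one block column per cell.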
In order to create enough variety and improve the overall composition, some supporting side visuals are needed to fill up the empty spaces.
This is a pretty expensive side visual. To render the crystal refraction correctly, I had to do a multi-pass render as follows:
- Sort the instances from back to front
- Before each render of a crystal instance, cache the viewport and blur it.
- Render the crystal with specular light only, with some generative FBM noise as imperfection. Then sample the blurred viewport cache with an RGB offset based on the surface normal to create the fake crystal look.
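The first step, the back-to-front sort, might look like this (the instance layout and names are my own assumptions):

```javascript
// Sort instances by their distance along the camera's view direction,
// farthest first, so each crystal can sample the already-composited
// viewport behind it during its own pass.
function sortBackToFront(instances, camPos, viewDir) {
  const depthOf = (inst) =>
    (inst.x - camPos[0]) * viewDir[0] +
    (inst.y - camPos[1]) * viewDir[1] +
    (inst.z - camPos[2]) * viewDir[2];
  // Copy before sorting so the caller's array order is untouched.
  return [...instances].sort((a, b) => depthOf(b) - depthOf(a));
}
```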
This is a classic CG mograph kind of side visual. There are 3 different types of light props.
This effect can't coexist with the volumetric smoke effect, because the extra light sources would add a lot of complexity to the smoke rendering. I varied the placement, size and instance count of this side visual to improve the variety of my project.
Like the Geometric Shapes main visual, but it adds a degree of chaos to the overall composition. In my opinion this side visual is a bit chaotic; however, I asked around, and some people like this sort of dynamic, high-energy visual.
There are some other small details such as rain, distance-field-based water ripples, imperfect ground reflections, etc.
That is everything I've done visually, and I've put my heart and soul into this project. Regardless of the ArtBlocks decision, I am so glad that I finally released this project.
File Size Optimisation
If you are working on an ArtBlocks project, you want to watch your file size, because gas fees can cost artists a lot of cash. So here are some tricks I've learned to reduce the file size:
- In my project, I used Vite as the bundler. It is built on top of esbuild, which is super fast and produces very clean, pre-minified code. Choosing a good bundler helps a lot to produce a smaller file size down the line.
- Lose your classes. You shouldn't create a class for anything you don't instantiate repeatedly, as it creates a lot of unnecessary `this` references and public function or property names that are not minifier friendly.
- You may want to leak some variables to the global scope.
- You may want to hash your dependency library. For example, do a for...in loop over THREE.js and alias the function and property names with fewer characters.
- If you write shaders, macro everything. For example, define macros so you can write `U2 u_bar` instead of `uniform vec2 u_bar` (and similarly for `uniform float u_foo`). Then you can quantise the attribute, uniform and varying ids through simple regular expressions. Of course, you should also minify your shaders using tools like GLSLX.
- Last but not least, JSCrunch. It is a crazy script that can convert your code into an ASCII string and unpack it at runtime.
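To illustrate the class point above, here is the kind of refactor I mean, as a toy example of my own:

```javascript
// Why classes hurt minification: property and method names like
// `position` and `update` cannot be safely renamed by a minifier,
// while locals inside a closure get crushed to single letters.

// Before: public names survive minification.
class Particle {
  constructor() { this.position = 0; this.velocity = 1; }
  update(dt) { this.position += this.velocity * dt; }
}

// After: everything is a local, so the minifier can rename freely.
function makeParticle() {
  let position = 0, velocity = 1;
  return (dt) => (position += velocity * dt); // returns the new position
}
```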
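The dependency-hashing bullet might be sketched like this. A mock object stands in for THREE so the sketch is self-contained, and note that aliasing by enumeration order is brittle across library versions:

```javascript
// Walk a library namespace with for...in and alias every member to a
// short generated name, e.g. Mesh -> $a, Scene -> $b.
function shortName(i) {
  const chars = "abcdefghijklmnopqrstuvwxyz";
  let s = "";
  do { s = chars[i % 26] + s; i = Math.floor(i / 26) - 1; } while (i >= 0);
  return "$" + s; // $-prefix to avoid clashing with minified locals
}
function hashLibrary(lib) {
  const aliased = {};
  let i = 0;
  for (const key in lib) {
    aliased[shortName(i++)] = lib[key];
  }
  return aliased;
}
```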
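And the regex-based id quantisation for shaders could be sketched as follows. The shader snippet and the `u_`/`a_`/`v_` naming scheme are illustrative, and a real tool would need to guard against collisions with existing tokens:

```javascript
// Collect every u_/a_/v_ identifier in the shader source and rewrite each
// to a short generated id like u0, a1, v2.
function quantiseIds(source) {
  const names = [...new Set(source.match(/\b[uav]_\w+/g) || [])];
  let out = source;
  names.forEach((name, i) => {
    // \b keeps e.g. u_foo from matching inside u_foobar.
    out = out.replace(new RegExp("\\b" + name + "\\b", "g"), name[0] + i);
  });
  return out;
}
const shader =
  "uniform float u_foo; uniform vec2 u_bar; " +
  "void main(){ gl_FragColor = vec4(u_bar, u_foo, 1.0); }";
```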
I also created a sphere-based modelling system in Houdini FX. For example, the teddy bear was constructed from a couple of spheres with different translations, rotations and scales. And the base64 data for this 3D model is:
That's it! If you like my work, go follow our studio on Twitter. My studio produces many award-winning, just-for-fun web experiments like these:
My Little Storybook
This is a Webby-winning interactive story about a bird family crossing a river. It is how we reimagine what children's books can be these days, leveraging immersive web technology.
A high-fidelity web-based automobile visualisation. We tried to visualise one of our interpretations of the contrast between motion and style on the web.
An infinite fashion show running 24/7 in virtual space. You don't need a ticket to Fashion Week; you just need to click the link below.
A little toy game we created in which you control a UFO to abduct hundreds of thousands of people within the browser.