The technology behind Vortex: A real-time, browser-based, seamless texture generator

Talin
Published in Machine Words · Oct 26, 2017

Vortex is a web-based program for generating seamlessly tileable textures for 3D modeling and games. It is entirely browser-based, written in TypeScript, and generates textures interactively in real time using WebGL, which means that the actual images are produced by the GPU.

Here’s what the interface looks like:

[Screenshot: the Vortex node-graph editor]

Want to try it yourself? Just go to https://vortex.run/

Want to hack on the source code? Check out https://github.com/viridia/vortex

Vortex documents, once saved, can be shared by URL. The intent is to create an online ‘scratchpad’ very similar in concept to jsFiddle or CodePen.

Instructions on how to use Vortex can be found on the GitHub page linked above.

Overall application architecture

Most of the code is in the client, which uses Preact, a lightweight alternative to React. State management is handled via MobX.

There’s also a server component that handles saving documents to a persistent store. Note that you can run Vortex without the server component; you just won’t be able to save anything (although the document you are working on will be preserved in local storage).

Data Model

The primary data model is a Graph which consists of GraphNode and Connection objects. Each GraphNode has a number of Terminals representing inputs and outputs. All of these are MobX observables, so the view components can automatically re-render nodes and connections as they change.
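
To make that concrete, here’s a condensed sketch of the shape of these classes. The class names come from the description above; the specific fields are my own illustration, not the actual Vortex source:

```ts
import { observable } from 'mobx';

// Condensed sketch of the model; the real Vortex classes carry more detail.
class Terminal {
  constructor(public readonly node: GraphNode, public readonly id: string) {}
}

class Connection {
  constructor(public source: Terminal, public dest: Terminal) {}
}

class GraphNode {
  @observable public x = 0;                  // position in the workspace
  @observable public y = 0;
  public readonly inputs: Terminal[] = [];   // input terminals
  public readonly outputs: Terminal[] = [];  // output terminals
}

class Graph {
  @observable public nodes: GraphNode[] = [];
  @observable public connections: Connection[] = [];
}
```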

In previous projects I have used Redux, but in this case I chose MobX because it seemed like a natural fit: I was going to be managing large numbers of interconnected objects, and it would have been cumbersome to model those with immutable data. The nice thing about MobX is that it places very few constraints on the design of my data structures, and it also involves much less repetitive boilerplate than Redux would have.

In addition to the graph, there are Operators which represent the various node types — noise, blend, mask, and so on. There is one Operator instance for each operator type, and multiple graph nodes can reference the same operator.

Most of the intelligence of the graph lives in the operators. The graph nodes are fairly “dumb” objects which hold little information other than their coordinates, a list of connections to other nodes, and a map of editable properties. The operators, on the other hand, define the names and data types of those editable properties, describe what inputs and outputs a node may have, and contain the logic for actually rendering the node (by invoking a shader program written in the OpenGL shading language).
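
In code, an operator’s contract might look something like this. This is a sketch only; the interfaces in the repo are richer, and all names beyond Operator itself are illustrative:

```ts
// Hypothetical operator contract; names here are assumptions.
enum DataType { FLOAT, RGBA }

interface ParameterDef {
  name: string;      // editable property name, e.g. 'scale'
  type: DataType;
  min?: number;
  max?: number;
}

interface TerminalDef {
  id: string;
  type: DataType;
}

interface Operator {
  id: string;             // e.g. 'noise', 'blend', 'mask'
  inputs: TerminalDef[];  // what inputs a node of this type may accept
  outputs: TerminalDef[];
  params: ParameterDef[]; // names and data types of editable properties
  // Emit the GLSL expression for a node, given expressions for its inputs.
  emitCode(node: GraphNode, inputExprs: string[]): string;
}
```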

User Interface

The main interface consists of a number of panels, each of which is a Preact component. I decided to go with Preact because of its simplicity and small size. Because this app’s UI is fairly modest and doesn’t need any complex third-party UI libraries, I skipped the preact-compat package and went with “basic” Preact.

I should mention that Preact and TypeScript work particularly well together. “Basic” Preact doesn’t include run-time property validation on components, but all of that is checked at compile time by TypeScript anyway, so run-time property validation doesn’t add much value.
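
For example, a typed component declaration catches a misspelled or missing prop at compile time. The component and its props here are invented for illustration:

```tsx
import { h, Component } from 'preact';

interface NodeHeaderProps {
  title: string;
  selected?: boolean;
}

// TypeScript verifies every <NodeHeader> usage against NodeHeaderProps at
// compile time, so run-time prop validation has little left to catch.
class NodeHeader extends Component<NodeHeaderProps, {}> {
  public render({ title, selected }: NodeHeaderProps) {
    return <div class={selected ? 'header selected' : 'header'}>{title}</div>;
  }
}
```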

All of the controls on the page are “custom” controls, with the exception of the buttons, which are standard HTML5. As such, there is no need for a widget library such as React-bootstrap.

The main workspace panel is the most complex view component, and actually consists of several layers (see the sketch after this list):

  • The node layer, which is an absolutely positioned element that contains the representations of the nodes.
  • The connection layer, which is an SVG document element that is used to draw the bezier curves for the connection wires.
  • A backdrop element that consists of a subtle CSS checkerboard pattern, and which is also used to capture background clicks.
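
Putting those layers together, the markup is shaped something like this. The class names and placeholder helpers are invented for illustration:

```tsx
import { h } from 'preact';
import { Graph, Connection, GraphNode } from './model'; // hypothetical module

// Placeholder helpers so the sketch stands alone; the real ones do more.
const bezierPath = (conn: Connection): string => 'M0,0 C40,0 60,80 100,80';
const NodeView = ({ node }: { node: GraphNode }) => <div class="node" />;
const clearSelection = () => {};

export function Workspace({ graph }: { graph: Graph }) {
  return (
    <section class="workspace">
      {/* Backdrop: subtle CSS checkerboard; also captures background clicks. */}
      <div class="backdrop" onMouseDown={clearSelection} />
      {/* Connection layer: one SVG bezier path per connection wire. */}
      <svg class="connections">
        {graph.connections.map(conn => <path d={bezierPath(conn)} />)}
      </svg>
      {/* Node layer: absolutely positioned node views on top. */}
      <div class="nodes">
        {graph.nodes.map(node => <NodeView node={node} />)}
      </div>
    </section>
  );
}
```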

The workspace panel supports HTML5 drag-and-drop, both for adding new components and for dragging connections between components; see the sketch below. (Unfortunately, editing existing connections uses plain mouse events rather than drag-and-drop, because SVG elements are not draggable. Oh well.)

(If you are wondering why not make the entire thing SVG, the answer is that I wanted to take advantage of HTML’s dynamic layout algorithm and text rendering for the node content.)
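
Here’s roughly what that drag-and-drop wiring looks like. The handler names, the MIME type string, and the addNode method are my assumptions; the DataTransfer pattern is the point:

```ts
// Assumed minimal Graph API for this sketch:
interface Graph {
  addNode(operatorId: string, x: number, y: number): void;
}

// In the operator palette: tag the drag with the operator's id.
function onDragStart(e: DragEvent, operatorId: string) {
  e.dataTransfer!.setData('application/x-vortex-operator', operatorId);
}

// The drop target must cancel dragover, or the browser refuses the drop.
function onDragOver(e: DragEvent) {
  e.preventDefault();
}

// On the workspace: read the id back and create a node at the drop point.
function onDrop(e: DragEvent, graph: Graph) {
  e.preventDefault();
  const operatorId = e.dataTransfer!.getData('application/x-vortex-operator');
  if (operatorId) {
    graph.addNode(operatorId, e.offsetX, e.offsetY);
  }
}
```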

Property Panel

The property panel on the right side of the screen is dynamically constructed based on the node that is currently selected. It will introspect the various properties of the node and render the appropriate UI elements — a slider for a scalar property, a color picker for a color property, and so on.
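
The dispatch from property type to editor widget might look something like this, following the Operator sketch above. The widget components here are simple stand-ins for the custom controls:

```tsx
import { h } from 'preact';

// Stand-in widgets; the real editors are custom controls (see above).
const ScalarSlider = (props: { name: string; min?: number; max?: number }) =>
  <input type="range" min={props.min} max={props.max} />;
const ColorPicker = (props: { name: string }) => <input type="color" />;

// Dispatch from the declared parameter type (per the Operator sketch above)
// to an appropriate editor component.
function renderEditor(param: ParameterDef) {
  switch (param.type) {
    case DataType.FLOAT:
      return <ScalarSlider name={param.name} min={param.min} max={param.max} />;
    case DataType.RGBA:
      return <ColorPicker name={param.name} />;
    default:
      return null;
  }
}
```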

The sliders are not traditional GUI sliders, but are modeled after the ones in Blender: they support clicking, dragging, and clicking the arrows at each end, and you can also double-click them to edit the numeric value directly.

Rendering

Each operator constructs OpenGL shader programs to render the texture for each node it is attached to. Mostly this “construction” is just sewing together pre-written fragments containing GLSL functions; however, the ‘main’ function for each shader is generated from an expression tree. This tree is created by walking the graph from the current node to all of its transitive inputs, and using the result to generate a hierarchy of function calls, taking the return value of one node and passing it as an argument to another. If a connection joins two nodes that have different data types (such as float vs. RGBA), then type casts are inserted into the generated code as needed.
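
A greatly simplified version of that walk, building on the earlier sketches; the field names (operator.functionName, terminal.connection, and so on) are assumptions:

```ts
// Assumed shapes for code generation; DataType as in the Operator sketch.
interface InputTerminal {
  type: DataType;
  connection?: { source: { node: CodegenNode } };
}
interface CodegenNode {
  inputs: InputTerminal[];
  operator: { functionName: string; outputType: DataType };
}

function emitExpr(node: CodegenNode): { code: string; type: DataType } {
  const args = node.inputs.map(term => {
    // Recurse into inputs first (unconnected inputs omitted for brevity).
    const upstream = emitExpr(term.connection!.source.node);
    return upstream.type === term.type
      ? upstream.code
      : cast(upstream.code, upstream.type, term.type); // cast on mismatch
  });
  return {
    code: `${node.operator.functionName}(${args.join(', ')})`,
    type: node.operator.outputType,
  };
}

// Example cast: promote a float to RGBA as an opaque grayscale color.
function cast(expr: string, from: DataType, to: DataType): string {
  if (from === DataType.FLOAT && to === DataType.RGBA) {
    return `vec4(vec3(${expr}), 1.0)`;
  }
  return expr; // other conversions omitted from this sketch
}
```

The recursion bottoms out at nodes with no inputs, such as a noise generator, whose arguments would just be uniforms bound to its editable properties.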

Note that this means that nodes which accept inputs from other nodes will contain a copy of the source code for those other nodes. Vortex doesn’t render each node to a texture and then feed that texture into the next node; instead, the shader code for each node is entirely self-contained, containing all the code for itself and its inputs. This means that you can copy the shader code and use it directly in your own OpenGL app!

The one exception to this rule is the ‘blur’ node, which does in fact cache the output of the node connected to its input terminal as a texture. The reason is that a proper blur needs to sample a bunch of pixels and compute a weighted average; this would be very slow if each sample required redoing the entire calculation for that pixel!

Each node also maintains a ‘modified’ bit which is set whenever one of its control parameters changes; this tells the renderer that the node’s shader needs to be run again (although this is debounced using requestAnimationFrame, so that rapidly dragging a slider doesn’t cause large numbers of wasted redraws). There is a separate signal that tells a node to rebuild its shader entirely; this is needed when its input connections change.
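
The debounce pattern looks roughly like this; the renderer object and names are assumed:

```ts
// rAF debounce: any number of parameter changes within a single frame
// collapse into one redraw per modified node.
declare const renderer: { render(node: GraphNode): void }; // assumed

const modified = new Set<GraphNode>();
let frameRequested = false;

function markModified(node: GraphNode) {
  modified.add(node);
  if (!frameRequested) {
    frameRequested = true;
    requestAnimationFrame(() => {
      frameRequested = false;
      modified.forEach(n => renderer.render(n)); // re-run each shader once
      modified.clear();
    });
  }
}
```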

The ShaderAssembly class contains many helpful methods for dynamically generating shader code. This includes importing common code functions (and making sure that they don’t get imported twice even if needed by multiple nodes), declaring uniform variables, and generating the main() function.
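
A stripped-down sketch of the import de-duplication idea, using a Set so each shared GLSL chunk is emitted only once (the real class does considerably more):

```ts
class ShaderAssembly {
  private imported = new Set<string>(); // names of chunks already included
  private chunks: string[] = [];

  // Include a shared GLSL helper, but only once per assembled program.
  public addImport(name: string, source: string) {
    if (!this.imported.has(name)) {
      this.imported.add(name);
      this.chunks.push(source);
    }
  }

  public uniform(type: string, name: string) {
    this.chunks.push(`uniform ${type} ${name};`);
  }

  public main(body: string) {
    this.chunks.push(`void main() {\n  ${body}\n}`);
  }

  public toString(): string {
    return this.chunks.join('\n');
  }
}
```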

The GLResources class is used to track shader programs and textures that have been allocated for a node, so that these resources can be released when the node is deleted from the graph.
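
A plausible shape for such a class (the real one in the repo will differ in detail):

```ts
// Per-node GL resource tracking, released when the node is deleted.
class GLResources {
  private programs: WebGLProgram[] = [];
  private textures: WebGLTexture[] = [];

  public trackProgram(program: WebGLProgram) { this.programs.push(program); }
  public trackTexture(texture: WebGLTexture) { this.textures.push(texture); }

  // Called when the node is removed from the graph.
  public release(gl: WebGLRenderingContext) {
    this.programs.forEach(p => gl.deleteProgram(p));
    this.textures.forEach(t => gl.deleteTexture(t));
    this.programs.length = 0;
    this.textures.length = 0;
  }
}
```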

Finally, the Renderer contains an off-screen WebGL canvas. Rather than trying to deal with multiple WebGL drawing contexts (which is difficult and not well supported), I decided to use a single, shared WebGL context and then copy the resulting rendered image to individual on-screen canvas elements, one for each RenderView component.
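
The copy step can be as simple as a canvas-to-canvas drawImage; this is a sketch with the render setup elided:

```ts
// One shared off-screen WebGL canvas, blitted to each visible 2D canvas.
const glCanvas = document.createElement('canvas');
glCanvas.width = glCanvas.height = 512;
const gl = glCanvas.getContext('webgl')!;

function blitTo(view: HTMLCanvasElement) {
  // ... run the node's shader program into glCanvas using gl ...
  // Copy immediately after rendering (or create the context with
  // preserveDrawingBuffer: true, since the buffer may be cleared otherwise).
  const ctx = view.getContext('2d')!;
  ctx.drawImage(glCanvas, 0, 0, view.width, view.height);
}
```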
