Stunning WebGL Dot Spheres

Will Howard
8 min read · Nov 30, 2022

Beautiful, interactive WebGL globes have had something of a moment in the spotlight lately with both Stripe and GitHub featuring them prominently on their homepages. Each later went on to write a blog post about how they did so (Stripe’s is here and GitHub’s here if you’re curious).

Both globes are made up primarily of dots, which got me thinking about the various ways in which dots can be distributed across the surface of a sphere. Sphere packing is a complex puzzle that mathematicians are still actively studying, so for the purposes of this article I’ve limited myself to laying out a few basic approaches and how to achieve them in WebGL.

Setting up the scene

Before going any further, we need to establish a basic WebGL scene in which to construct the sphere. I’m using Three.js, the de facto framework of choice for working with the WebGL API. I’ll aim to keep the code snippets in this article as concise and relevant as possible; explore any of the embedded Sandboxes for the full code.
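For reference, the snippets that follow assume a few imports and constants along these lines. The values here are my own placeholders rather than the ones used in the Sandboxes, so treat them as a starting point:

// Import paths may vary depending on your Three.js version and bundler.
import * as THREE from "three";
import * as BufferGeometryUtils from "three/examples/jsm/utils/BufferGeometryUtils";

// Example values only; tweak to taste.
const SPHERE_RADIUS = 20; // Radius of the sphere the dots sit on.
const DOT_SIZE = 0.2; // Radius of each circular dot.
const DOT_COLOR = 0x74b9ff; // Colour of the dot material.
const LATITUDE_COUNT = 20; // Used by the basic and linear approaches.
const LONGITUDE_COUNT = 40; // Used by the basic approach.
const DOT_COUNT = 3000; // Used by the phyllotaxis approach.
const DOT_DENSITY = 1; // Dots per unit of circumference (linear approach).
const MASK_IMAGE = "mask.png"; // Placeholder path for the shape-masking section.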

After creating a scene, we establish a dotGeometries array which will eventually contain the geometries for all our dots. Then we create a blank vector, a 3D point in space inside the scene, the position of which will be reassigned each time we create a dot.

// Set up the scene.
const scene = new THREE.Scene();

// Define an array to hold the geometries of all the dots.
const dotGeometries = [];

// Create a blank vector to be used by the dots.
const vector = new THREE.Vector3();

// We'll create and position the dots here!

After we’ve created the dots and pushed their geometries into the dotGeometries array, we can merge them into a single geometry using the handy mergeBufferGeometries utility. Then we just need to create a mesh from the dot geometries, give it a material and add it to the scene.

// Merge all the dot geometries together into one buffer geometry.
const mergedDotGeometries = BufferGeometryUtils.mergeBufferGeometries(
  dotGeometries
);

// Define the material for the dots.
const dotMaterial = new THREE.MeshBasicMaterial({
  color: DOT_COLOR,
  side: THREE.DoubleSide
});

// Create the dot mesh from the dot geometries and material.
const dotMesh = new THREE.Mesh(mergedDotGeometries, dotMaterial);

// Add the dot mesh to the scene.
scene.add(dotMesh);
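To actually see the mesh, the snippets also rely on a camera, renderer and render loop. The Sandboxes handle this (along with controls for rotating the sphere), but a minimal sketch looks something like this:

// Create a perspective camera and pull it back far enough to see the sphere.
const camera = new THREE.PerspectiveCamera(
  45,
  window.innerWidth / window.innerHeight,
  0.1,
  1000
);
camera.position.z = SPHERE_RADIUS * 3;

// Create the renderer and attach its canvas to the page.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Render the scene on every animation frame.
const animate = () => {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
};

animate();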

Now let’s explore how to go about creating and positioning the dots.

The basic approach

The easiest way to go about adding dots to a sphere is simply to define the number of latitude lines and longitude lines we’d like the sphere to have, then distribute dots along them. There are a couple of important things to note here.

Firstly, we’re defining the phi and theta angles for each dot. These angles form part of the spherical coordinate system, a system for defining exactly where a point sits in 3D space in relation to its origin (which in our case is the centre of our sphere).

Secondly, phi and theta are both measured in radians, not degrees. The key here is to remember that there are π radians in 180°. So to find phi, all we have to do is divide π by the number of latitude lines (and multiply by the current latitude index). To find theta, we divide 2π by the number of longitude lines instead, because we want our longitude lines to continue around the full 360° of the sphere.
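For example, with, say, 20 latitude lines and 40 longitude lines, phi advances in steps of π / 20 radians (9°) from one pole to the other, while theta advances in steps of 2π / 40 radians (also 9°) around the full circle.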

// Loop across the latitudes.
for (let lat = 0; lat < LATITUDE_COUNT; lat += 1) {
  // Loop across the longitudes.
  for (let lng = 0; lng < LONGITUDE_COUNT; lng += 1) {
    // Create a geometry for the dot.
    const dotGeometry = new THREE.CircleGeometry(DOT_SIZE, 5);

    // Define the phi and theta angles for the dot.
    const phi = (Math.PI / LATITUDE_COUNT) * lat;
    const theta = ((2 * Math.PI) / LONGITUDE_COUNT) * lng;

    // Set the vector using the spherical coordinates generated from the
    // sphere radius, phi and theta.
    vector.setFromSphericalCoords(SPHERE_RADIUS, phi, theta);

    // Make sure the dot is facing in the right direction.
    dotGeometry.lookAt(vector);

    // Move the dot geometry into position.
    dotGeometry.translate(vector.x, vector.y, vector.z);

    // Push the positioned geometry into the array.
    dotGeometries.push(dotGeometry);
  }
}

With this in place, we get the following result:

If you interact with the sphere to rotate it, you’ll notice the rings at the top and bottom are much more densely packed than those in the middle. This is because we haven’t varied the number of dots on each latitude line: with, say, 20 latitude lines, the ring next to each pole has a radius of only sin(9°) ≈ 0.16 times the sphere’s radius, yet it carries exactly as many dots as the equatorial ring, so its dots sit roughly six times closer together. This is where sphere packing comes in.

The phyllotaxis approach

If you’ve ever looked at the head of a sunflower or the base of a pinecone you’ll have noticed an unusual and distinctive pattern. This pattern, created by an arrangement based on the Fibonacci sequence, is known as phyllotaxis. We can use it here to position our dots in such a way that they appear much more evenly spaced over the surface of the sphere.

This time, instead of defining the number of latitude and longitude lines, we simply define the total number of dots we want to appear on the sphere. Instead of looping across the latitude lines, the dots will be rendered in a single, continuous spiral from one pole of the sphere to the other.

// Loop across the number of dots.
for (let dot = 0; dot < DOT_COUNT; dot += 1) {
  // Create a geometry for the dot.
  const dotGeometry = new THREE.CircleGeometry(DOT_SIZE, 5);

  // Work out the spherical coordinates of each dot, in a phyllotaxis pattern.
  const phi = Math.acos(-1 + (2 * dot) / DOT_COUNT);
  const theta = Math.sqrt(DOT_COUNT * Math.PI) * phi;

  // Set the vector using the spherical coordinates generated from the
  // sphere radius, phi and theta.
  vector.setFromSphericalCoords(SPHERE_RADIUS, phi, theta);

  ...

}

The result looks like this:

This is much more satisfying. But what if we want to pack the dots as evenly as possible, but still have the freedom to define the number of latitude lines?

The linear approach

This time we’ll define the number of latitude lines required, but the number of dots on each line will scale with the circumference of the latitude line it sits on. To give us greater control over the spacing, we’ll also define a dot density parameter.

The fiddly part here is calculating the radius of each latitude line. Once we’ve got that, it’s relatively simple to figure out how many dots to display on it and then find phi and theta for each in a similar manner to the first approach.
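As a quick check of the radius calculation below: at lat = 0 the angle works out to -90°, giving a radius of zero (the pole), while halfway through the loop, at lat = LATITUDE_COUNT / 2, the angle is 0° and the radius equals the full SPHERE_RADIUS (the equator).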

// Loop across the latitude lines.
for (let lat = 0; lat < LATITUDE_COUNT; lat += 1) {
  // Calculate the radius of the latitude line.
  const radius =
    Math.cos((-90 + (180 / LATITUDE_COUNT) * lat) * (Math.PI / 180)) *
    SPHERE_RADIUS;

  // Calculate the circumference of the latitude line.
  const latitudeCircumference = radius * Math.PI * 2;

  // Calculate the number of dots required for the latitude line.
  const latitudeDotCount = Math.ceil(latitudeCircumference * DOT_DENSITY);

  // Loop across the dot count for the latitude line.
  for (let dot = 0; dot < latitudeDotCount; dot += 1) {
    const dotGeometry = new THREE.CircleGeometry(DOT_SIZE, 5);

    // Calculate the phi and theta angles for the dot.
    const phi = (Math.PI / LATITUDE_COUNT) * lat;
    const theta = ((2 * Math.PI) / latitudeDotCount) * dot;

    ...

  }
}

This results in a very pleasing dot arrangement:

So we’ve covered how to get the dots displayed on the sphere. But what about achieving more complex effects?

Shape masking

Figuring out how to display the dots in ever-more complicated patterns can quickly descend into a mathematical headache. However, by using one of the above packing arrangements in combination with a mask image we can achieve some extraordinary effects.

To do this, we’ll first need to create an HTML canvas element and draw our mask image on it. This element won’t actually be rendered onscreen; it’s just a convenient method by which to extract the pixel data from an image. We only need to do this once, so we’ll do it upfront and then pass the extracted image data to our renderScene function.

// Initialise an image loader.
const imageLoader = new THREE.ImageLoader();

// Load the image used to determine where dots are displayed. The sphere
// cannot be initialised until this is complete.
imageLoader.load(MASK_IMAGE, (image) => {
  // Create an HTML canvas, get its context and draw the image on it.
  const tempCanvas = document.createElement("canvas");

  tempCanvas.width = image.width;
  tempCanvas.height = image.height;

  const ctx = tempCanvas.getContext("2d");

  ctx.drawImage(image, 0, 0);

  // Read the image data from the canvas context.
  const imageData = ctx.getImageData(0, 0, image.width, image.height);

  renderScene(imageData);
});
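One caveat worth flagging: ctx.getImageData only works if the canvas isn’t “tainted”, so the mask image generally needs to be served from the same origin or with appropriate CORS headers; otherwise the call will throw a SecurityError.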

Now that we have the image data available, we need to add a couple of utility functions. The first takes a point on the sphere and returns the UV coordinates of that point on our mask image, were it to be mapped onto the sphere.

// Utility function to convert a dot on a sphere into a UV point on a
// rectangular texture or image.
const spherePointToUV = (dotCenter, sphereCenter) => {
  // Create a new vector pointing from the center of the dot back towards
  // the center of the sphere.
  const newVector = new THREE.Vector3();
  newVector.subVectors(sphereCenter, dotCenter).normalize();

  // Calculate the UV coordinates of the dot and return them as a vector.
  const uvX = 1 - (0.5 + Math.atan2(newVector.z, newVector.x) / (2 * Math.PI));
  const uvY = 0.5 + Math.asin(newVector.y) / Math.PI;

  return new THREE.Vector2(uvX, uvY);
};
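As a quick sanity check of the mapping (my own example rather than anything from the original demos), a dot sitting at the sphere’s north pole should map to the top edge of the mask image, halfway across:

// Hypothetical check: a point at the top of the sphere maps to
// uv = (0.5, 0), i.e. the horizontal centre of the image's top edge.
const poleUV = spherePointToUV(
  new THREE.Vector3(0, SPHERE_RADIUS, 0),
  new THREE.Vector3()
);

console.log(poleUV.x, poleUV.y); // 0.5 0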

The second returns the pixel data from the mask image at the given UV coordinates.

// Utility function to sample the data of an image at a given point. Requires
// an imageData object.
const sampleImage = (imageData, uv) => {
  // Calculate and return the data for the point, from the UV coordinates.
  const point =
    4 * Math.floor(uv.x * imageData.width) +
    Math.floor(uv.y * imageData.height) * (4 * imageData.width);

  return imageData.data.slice(point, point + 4);
};

Now we have everything we need to apply the masking effect. After positioning each dot, we compute its bounding sphere, which gives us the dot’s centre point to pass to our spherePointToUV function. We then use our sampleImage function to read the pixel at that point; it returns the four RGBA channel values, so checking index 3 (the alpha channel) tells us whether the pixel is transparent. If the pixel is not transparent, we include the dot; if it is, we don’t.

// Move the dot geometry into position.
dotGeometry.translate(vector.x, vector.y, vector.z);

// Find the bounding sphere of the dot.
dotGeometry.computeBoundingSphere();

// Find the UV position of the dot on the land image.
const uv = spherePointToUV(
  dotGeometry.boundingSphere.center,
  new THREE.Vector3()
);

// Sample the pixel on the land image at the given UV position.
const sampledPixel = sampleImage(imageData, uv);

// If the pixel contains a color value (in other words, is not transparent),
// continue to create the dot. Otherwise don't bother.
if (sampledPixel[3]) {
  // Push the positioned geometry into the array.
  dotGeometries.push(dotGeometry);
}

In practice, this means that we can specify a PNG image with a transparent background to act as the mask. Dots will only be rendered on the sphere where the corresponding point on the image is not transparent. From an image containing a simple diamond pattern we get this striking result:

We can use more complex mask images to achieve shapes such as this earth effect:
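(One thing worth knowing if you want to recreate this: because the UV mapping in spherePointToUV is equirectangular, the mask for an earth effect needs to be an equirectangular map of the continents, the same projection used by most flat world maps.)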

Or even to render text:

That’s a wrap

I’ve used these spherical mapping techniques in various places as the basis of WebGL showpieces. Hopefully they inspire you to do the same. If you’ve enjoyed this article or it’s helped you in some way, please do let me know! My website is over here.


Will Howard

Full-stack engineer building unique apps and web experiences for global brands. https://willhoward.co.uk