Early Attempts of Writing a Procedurally Generated 2D Terrain Using Perlin Noise in Swift

Federico Mazzini
Published in Major League · 5 min read · Jul 14, 2016


I'm a big fan of roguelike games (a genre recently enjoying a burst of popularity) and of all sorts of games with depth and a lot of micromanagement. Since I started playing them, I've been amazed at the complexity of the core mechanics found in many of them.

One of the main characteristics of roguelike games is their wide use of procedurally generated content. When it's applied to the environment, the player finds themselves in an ever-changing sandbox: whether they start a new session, enter an unexplored area, or find a new object, they never know what to expect. Two of my all-time favourite games illustrate these features well: Dwarf Fortress and Cataclysm DDA.

By the way, if you've never played Dwarf Fortress, please stop reading this article and go play it. Have some fun. Really, what are you doing reading this anyway?

These games use procedurally generated content to create the game world in two different ways. The first one creates whole continents, biomes, civilisations, trade routes, legendary figures, legendary objects, legendary creatures, etc., lets them develop and interact with each other over hundreds of years (for the player, it's only minutes), and then gives you the option to settle a fort and have fun anywhere in this world:

Dwarf Fortress World Generation Screen.

The second game I've mentioned creates much smaller "chunks" of the world as the player moves through the environment, generating cities and buildings, roads and forests, hazardous waste sarcophagi and futuristic abandoned labs as you move forward exploring.

Cataclysm DDA “Map” Screen.

One way of achieving this sort of "realistic" distribution of different elements in space is to apply a function called Perlin noise. A noise function can be described as a function that:

[…] takes a coordinate in some space and maps it to a real number between -1 and 1. Note that you can make noise functions for any arbitrary dimension. […] you can graph a 1-dimensional noise function just like you would graph any old function of one variable, or consider noise functions returning a real number for every point in a 3D space.

Example of a 1D Perlin Noise application.

In case you're wondering, that extract is from this article, which I've found to be the reference of choice on this subject for many programmers. Bear in mind that I don't recommend studying the mathematical theory behind Perlin noise in depth if you're only planning to use it for game programming experiments.
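To give a concrete idea of what such a noise function looks like, here is a minimal Swift sketch in the spirit of the classic step-by-step pseudocode: a deterministic pseudo-random value at each integer grid point, smoothed between grid points with cosine interpolation. The function names and constants are illustrative, not the actual code from my experiment:

```swift
import Foundation

// Deterministic pseudo-random value in roughly [-1, 1] for an integer
// grid point. Same input always yields the same output, which is what
// makes the generated terrain reproducible.
func gridNoise(_ x: Int, _ y: Int) -> Double {
    var n = x &+ y &* 57
    n = (n << 13) ^ n
    let m = (n &* (n &* n &* 15731 &+ 789221) &+ 1376312589) & 0x7fffffff
    return 1.0 - Double(m) / 1073741824.0
}

// Cosine interpolation between two values; smoother than linear.
func cosineInterpolate(_ a: Double, _ b: Double, _ t: Double) -> Double {
    let f = (1.0 - cos(t * .pi)) * 0.5
    return a * (1.0 - f) + b * f
}

// Smooth noise at an arbitrary 2D point, blending the four
// surrounding grid values.
func smoothNoise(_ x: Double, _ y: Double) -> Double {
    let xi = Int(floor(x)), yi = Int(floor(y))
    let tx = x - floor(x), ty = y - floor(y)
    let top = cosineInterpolate(gridNoise(xi, yi), gridNoise(xi + 1, yi), tx)
    let bottom = cosineInterpolate(gridNoise(xi, yi + 1), gridNoise(xi + 1, yi + 1), tx)
    return cosineInterpolate(top, bottom, ty)
}
```

Strictly speaking this is value noise rather than Ken Perlin's gradient noise, but it's the variant most of the step-by-step pseudocode on the internet implements, and for terrain experiments the visual result is similar.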

Example of a 3D Perlin Noise application.

With that in mind, I can move on to my own Swift experiment. It is largely based on the simple, step-by-step, but inefficient Perlin noise pseudocode found on the internet. Not only is it the second Swift example I know of (following this one), it's also the only Swift example I know of that isn't rendered to pixels in an image: it uses the values of the (2D) noise function to place elements on the screen in a grid pattern and matches them to different objects, the way some games do to load different sprites:

Image from a mod that loads tree sprites on top of Dwarf Fortress gameplay terrain.

First I tried to generate a bitmap image with a greyscale pixel corresponding to each floating-point number in the 2D array. I made the array the same size as the screen and coloured each pixel in a greyscale ranging from 0 to 1:
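The greyscale step is just a linear rescale from the noise range [-1, 1] to an intensity in [0, 1], applied over the whole array. A small sketch of that idea (hypothetical helpers, not the app's actual drawing code):

```swift
// Rescale a noise value from [-1, 1] to a grey intensity in [0, 1].
func greyValue(fromNoise noise: Double) -> Double {
    return (noise + 1.0) / 2.0
}

// Fill a width × height buffer of grey intensities from any 2D noise
// function, one value per screen pixel.
func greyscaleBuffer(width: Int, height: Int,
                     noise: (Double, Double) -> Double) -> [[Double]] {
    return (0..<height).map { y in
        (0..<width).map { x in
            greyValue(fromNoise: noise(Double(x), Double(y)))
        }
    }
}
```

Each intensity can then be handed to UIKit or SpriteKit as the white component of a colour.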

iOS Simulator screenshot.

Using the array of pseudo-random numbers generated by the noise function in an actual program is rather simple: assign each number to an object (in this case an instance of the class Element), each object to a square section of the screen, and map each number to, for example, a colour. I used an extremely inefficient switch statement just for fun:
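A sketch of that switch-based mapping might look like this. The Element shape, terrain cases, and thresholds here are my illustrative choices, not the app's exact code:

```swift
// Terrain kinds matching the colours described below:
// water (blue), sand (yellow), grass (green), tree (red).
enum Terrain {
    case water, sand, grass, tree
}

// One grid cell of the map; in the app each Element backs a sprite node.
struct Element {
    let terrain: Terrain
}

// Map a noise value in [-1, 1] to a terrain element. The cutoffs are
// arbitrary: widening the water range, for instance, floods the map.
func element(forNoise noise: Double) -> Element {
    switch noise {
    case ..<(-0.3):
        return Element(terrain: .water)
    case -0.3 ..< -0.1:
        return Element(terrain: .sand)
    case -0.1 ..< 0.4:
        return Element(terrain: .grass)
    default:
        return Element(terrain: .tree)
    }
}
```

Tuning those cutoffs is how you control how much of the map ends up as water, beach, grassland, or forest.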

My results (with a preposterous amount of computational overhead) were the following (from the app on my GitHub page):

iOS Simulator Screenshots.

It's easy to visualise a game character walking through this terrain the way it would in the Dwarf Fortress image with the tree sprites.

In the right image you can see a body of water (blue) next to sand (yellow); in the left one, a cluster of trees (red) in the lower part of the screen and more water and sand in the upper part. The green stuff is grass. You can navigate the map by tapping the screen close to the edges. The coloured squares are sprite nodes; they could be loaded with small square images or sprites, but that would only add more overhead to the example, so I left them with just different background colours.

I think I've just figured out why there aren't many mobile roguelikes around with the ability to generate deep in-game content: there's just too much overhead. There are good reasons roguelike games like the ones I've mentioned are some of the most resource-intensive applications around: they implement pathfinding, fluid-behaviour mechanics, and all sorts of heavy CPU work running every time something happens.

Hopefully this will lead to some more development in the roguelike genre, and especially in Swift, since it's my main tool in my work as an iOS developer at Lateral View.

If you want to know more about technology and innovation, check us out at Lateral View!
