We made a multiplayer browser game in Go for fun
This post was originally published in May 2017 at http://blog.u2i.com/we-made-a-multiplayer-browser-game-in-go-for-fun/
Go has gained significant popularity over the last few years. Many people like the simple, clean approach it offers, along with features like built-in concurrency, structs, and implicitly satisfied interfaces. Together with my colleagues at u2i I decided to give it a shot and see what it has to offer. We didn’t have a commercial project to use it in, so we began the way most of you probably would: by reading a tutorial. But a tutorial alone is not enough to learn a technology. That’s why we decided to create something of our own: a multiplayer browser game called Superstellar. In this post I want to give you a brief overview of the problems we ran into and how we solved them using Go.
If you want to know how we approach learning a new technology, see my other post about forming a self-learning group in a company.
I just want to mention that we’re no experts when it comes to game development. We develop web applications on a daily basis, and for most of us this was the first browser game we had ever worked on. Whatever you read in this blog post is just our own way of solving the problems we encountered, not necessarily the best one. I hope you’ll find it useful though!
Superstellar is a multiplayer browser space game, inspired by the old arcade shooter Asteroids (everybody has played that!). We picked it so that we could focus on the implementation rather than on designing the game itself. And honestly, we just enjoy shooting each other.
The rules are simple: destroy moving objects and don’t get killed by other players or asteroids. You’ve got two resources: health points and energy points. You lose health with every hit you take and every collision with an asteroid. Energy is consumed by shooting and by the boost drive. The more objects you kill, the bigger your health bar grows.
The game has two parts: one central server and a front end app running in each client’s browser.
We picked this project mainly because of the backend. We expected it to be a place where many things happen simultaneously: game simulation, client network communication, statistics, monitoring, you name it. All of these should run in parallel, and efficiently. Go, with its concurrency-oriented approach and lightweight goroutines, seemed like the perfect tool for the job.
In the rest of this article I would like to cover the backend part, leaving the client app for another time.
Game state master simulation — in one place and one place only
Superstellar is a multiplayer game, so we needed logic that decides what the current state of the world is and how it changes. It should be aware of all clients’ actions and make the final call on which events occur, e.g. whether a projectile hit its target or what the outcome of a collision between two objects is. We couldn’t let the clients do this, because two of them might decide differently on whether somebody was shot. Not to mention malicious players willing to hack the protocol and gain an unfair advantage. Therefore the right place to store the game state and decide on its changes is the server itself.
Here is a general overview of how the server works. It simultaneously runs three different types of actions:
- listening for the control input from the clients
- running the simulation to update the state to the next point in time
- sending the current state update to the clients
Here is a simplified view of a spaceship’s state and the user’s input structure. At any time users can send a message and thereby modify the input structure. The simulation step wakes up every 20 milliseconds and executes two actions: first it takes the user input and updates the state (e.g. increases the acceleration if the user enabled thrust), then it transforms the state at time t into the state at time t+1. And the whole process repeats.
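In Go the two structures could look roughly like this (the exact fields differ in the real project; these are illustrative):

```go
package main

// UserInput holds the most recent control message from a client.
// The client overwrites it whenever the player presses or releases a key.
type UserInput struct {
	Thrust    bool    // is the engine on?
	Direction float64 // desired heading in radians
	Fire      bool    // is the trigger pressed?
}

// Spaceship is the authoritative, server-side state of one player.
type Spaceship struct {
	ID     uint32
	X, Y   float64   // position
	VX, VY float64   // velocity
	Facing float64   // current heading in radians
	HP     int32     // health points
	Energy int32     // energy points
	Input  UserInput // applied at the start of every simulation step
}
```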
Implementing this kind of parallel logic was very easy in Go, thanks to its concurrency features. Each piece of logic runs in its own goroutine and listens on a channel, either to get data from the clients or to synchronize with the tickers that define the pace of simulation steps and of updates sent back to the clients. We also didn’t have to worry about parallelism: Go automatically utilizes all available CPU cores. The concepts of goroutines and channels are simple, yet powerful. If you’re not familiar with them, take a look at this article.
Communication with clients
The server communicates with the clients over websockets. Using websockets in Go is easy and reliable thanks to the Gorilla web toolkit. There is also a native websocket package, but its official documentation says that it currently lacks some features and recommends Gorilla instead.
To get websockets running, we had to write a handler function that takes the initial client request, upgrades it to a websocket connection and creates a client struct:
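With Gorilla, such a handler could look like this (a sketch; the Client struct, its channel size and the route are our own simplification):

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

// Client wraps a single player's websocket connection.
type Client struct {
	conn *websocket.Conn
	send chan []byte // messages queued for this client
}

var upgrader = websocket.Upgrader{
	ReadBufferSize:  1024,
	WriteBufferSize: 1024,
}

// wsHandler upgrades the initial HTTP request to a websocket
// connection and creates a client struct for it.
func wsHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade failed:", err)
		return
	}
	client := &Client{conn: conn, send: make(chan []byte, 16)}
	go client.run() // start the client's read and write loops
}

// run starts the client's communication loops.
func (c *Client) run() {}

func main() {
	http.HandleFunc("/superstellar", wsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```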
Then the client logic simply runs two loops, one for writing and one for reading. Because they have to run in parallel, one of them is started in a separate goroutine, which is trivial with the language keyword go:
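A minimal sketch of that wiring (the channel-based loops here stand in for the real websocket calls):

```go
package main

import "fmt"

// Client owns one connection's communication loops.
type Client struct {
	send chan []byte   // outgoing messages
	done chan struct{} // closed when the connection dies
}

// writeLoop drains the send channel until it is closed.
func (c *Client) writeLoop() {
	for msg := range c.send {
		fmt.Println("sending", len(msg), "bytes")
	}
}

// readLoop blocks until the connection is gone (in the real code it
// blocks on ReadMessage), then shuts the write side down.
func (c *Client) readLoop() {
	<-c.done
	close(c.send)
}

// run starts both loops: writeLoop gets its own goroutine thanks to
// the go keyword, while readLoop occupies the current one.
func (c *Client) run() {
	go c.writeLoop()
	c.readLoop()
}
```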
Below you’ll find a simplified version of our read function for reference. It blocks on the ReadMessage() call and waits for new data from that particular client:
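Here is a self-contained version of that read loop (`messageReader` abstracts Gorilla’s `*websocket.Conn`, which provides a `ReadMessage` method with this signature; the `incoming` channel is our simplification):

```go
package main

// messageReader abstracts the websocket connection; Gorilla's
// *websocket.Conn provides a ReadMessage method like this one.
type messageReader interface {
	ReadMessage() (messageType int, payload []byte, err error)
}

// Client forwards everything it reads to the incoming channel,
// where the game logic picks it up.
type Client struct {
	conn     messageReader
	incoming chan []byte
}

// readLoop blocks on the ReadMessage() call and waits for new data
// from this particular client. Any error means the client is gone.
func (c *Client) readLoop() {
	for {
		_, payload, err := c.conn.ReadMessage()
		if err != nil {
			close(c.incoming) // tell the rest of the server we're done
			return
		}
		c.incoming <- payload
	}
}
```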
As you can see, every read and write loop runs in its own goroutine. Because goroutines are language-native and very cheap to create, we could achieve a high level of concurrency and parallelism with very little effort. We didn’t test the maximum possible number of simultaneous clients, but with 200 of them the server ran just fine, with a lot of spare computational power. The part that turned out to be problematic under that load was the front end: the browser couldn’t keep up with rendering all the objects. That’s why we limited the number of players to 50.
With the low-level communication mechanism in place, we needed to choose a protocol for both sides to exchange game messages. And that choice turned out not to be so obvious.
Communication — protocol must be small and light
We started with JSON. Each field in a struct is annotated with a tag holding its JSON attribute name, which makes serializing the struct straightforward:
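For example, a state fragment could be defined and serialized like this (a simplified illustration, not the exact structs from the game):

```go
package main

import "encoding/json"

// Spaceship mirrors a fragment of the game state; the struct tags
// give the JSON attribute name for each field.
type Spaceship struct {
	ID        uint32  `json:"id"`
	PositionX float64 `json:"positionX"`
	PositionY float64 `json:"positionY"`
	HP        int32   `json:"hp"`
}

// encodeShip serializes a spaceship to its JSON wire form.
func encodeShip(s Spaceship) (string, error) {
	data, err := json.Marshal(s)
	return string(data), err
}
```

For `Spaceship{ID: 1, PositionX: 102.5, PositionY: -30.25, HP: 100}` this produces `{"id":1,"positionX":102.5,"positionY":-30.25,"hp":100}`: readable, but the attribute names travel with every single object.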
However, it turned out that JSON is just too talkative and we were sending too much data over the network. The reason is that JSON serializes to a string representation that repeats the field names in every object. Moreover, every value is converted to a string, so a simple 4-byte integer can become “2147483647”, which is 10 bytes long (and it gets worse with floats). Since our simple approach assumed sending the state of all spaceships to all clients, the server’s network traffic grew quadratically with the number of clients. So we switched to Protocol Buffers, a binary format with a much more compact wire representation.
Here is a simplified version of the protobuf definition of our spaceship structure:
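It could look roughly like this (an illustrative proto3 schema; the field set in the real project differs):

```protobuf
syntax = "proto3";

message Spaceship {
  uint32 id       = 1;
  Point  position = 2;
  Vector velocity = 3;
  float  facing   = 4;
  uint32 hp       = 5;
  uint32 energy   = 6;
}

message Point {
  sint32 x = 1;
  sint32 y = 2;
}

message Vector {
  float x = 1;
  float y = 2;
}
```

On the wire each field is identified only by its number, and integers are varint-encoded, so the quoted field names and stringified values of JSON disappear entirely.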
And here’s the function that converts our domain objects to protobuf’s intermediate structures:
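Our structures differ, but the conversion goes along these lines (`ProtoSpaceship` here is a hand-written stand-in for the struct that protoc would generate; the real one also carries protobuf bookkeeping fields):

```go
package main

// Spaceship is the server-side domain object, using float positions.
type Spaceship struct {
	ID     uint32
	X, Y   float64
	HP     uint32
	Energy uint32
}

// ProtoSpaceship stands in for the protoc-generated struct.
type ProtoSpaceship struct {
	Id     uint32
	X      int32
	Y      int32
	Hp     uint32
	Energy uint32
}

// toProto converts the domain object into its wire representation,
// truncating positions to whole units to keep the message small.
func (s *Spaceship) toProto() *ProtoSpaceship {
	return &ProtoSpaceship{
		Id:     s.ID,
		X:      int32(s.X),
		Y:      int32(s.Y),
		Hp:     s.HP,
		Energy: s.Energy,
	}
}
```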
And finally serializing to raw bytes:
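Serialization itself is then a single call to the protobuf runtime (a fragment, assuming a generated `pb` package and the `github.com/golang/protobuf/proto` library that was current at the time):

```go
// stateToBytes marshals the protobuf message into its compact
// binary form, ready to be written to the websocket.
func stateToBytes(msg *pb.Spaceship) ([]byte, error) {
	return proto.Marshal(msg)
}
```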
Now we can simply send these bytes to the client over the network with minimal overhead.
Move smoothing and connection lags compensation
In the beginning we tried to send the state of the whole world on every simulation frame. This way the client would only redraw its screen on receiving a server message. This approach, however, caused heavy network traffic: we had to send the details of every object in the game to all clients 50 times a second to keep the animation smooth. Way too much data!
So we changed the approach: the server broadcasts full state updates far less frequently, and between updates each client runs its own copy of the simulation and smooths the movement locally. Once we did that, our network traffic dropped significantly. This also mitigated the effects of network lag: if a message got stuck somewhere on the Internet, every client could simply carry on with its own simulation and, when the data eventually arrived, catch up and correct its state accordingly.
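The smoothing itself is plain dead reckoning: between server updates the client moves every object along its last confirmed velocity. Our client is JavaScript, but the idea fits in a few lines of Go:

```go
package main

// Object is the client-side view of one moving entity, as last
// confirmed by a server update.
type Object struct {
	X, Y   float64 // position at the time of the last update
	VX, VY float64 // velocity at the time of the last update
}

// extrapolate predicts where the object will be dt seconds after the
// last server update, assuming it keeps its velocity.
func (o Object) extrapolate(dt float64) (x, y float64) {
	return o.X + o.VX*dt, o.Y + o.VY*dt
}
```

When the next authoritative update arrives, the predicted position is simply replaced (or blended) with the server’s value.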
From one package to event dispatcher
Designing the application’s code structure also turned out to be an interesting case. In our first approach we created one Go package and put all the logic inside. That’s probably what most people would do for a hobby project in a new programming language. However, as our codebase grew, we realised it wasn’t such a good idea anymore. So we divided the code into a few packages, without spending too much time thinking about how to do it properly. It came back to bite us very quickly:
$ go build
import cycle not allowed
It turned out that Go doesn’t allow packages to depend on each other circularly, which is in fact a good thing, because it forces programmers to think the structure of their applications through carefully. So, having no other option, we sat down in front of a whiteboard, wrote down each piece and came up with the idea of introducing a single module that passes messages around between the other parts of the system. We called it the event dispatcher (you may also call it an event bus).
The event dispatcher allowed us to wrap everything that happens on the server in so-called events: a client joins, leaves or sends an input message, or it’s time to run a simulation step. In each of these situations we create and fire a corresponding event using the dispatcher. On the other end, every struct can register itself as a listener and learn when something interesting happens. This way we could make the problematic packages depend only on the events package and not on each other, which solved our cyclic dependency issue.
Here is an example of how we used the event dispatcher to propagate simulation time ticks. First we need a structure that is able to listen to the event:
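A listener is just a struct with a handler method for the given event type (the names below are illustrative; the real project defines many such events):

```go
package main

// TimeTick is the event fired on every simulation step.
type TimeTick struct {
	CurrentTick uint32
}

// Simulation advances the game world; it wants to know about ticks.
type Simulation struct {
	LastTick uint32
}

// HandleTimeTick is the method the dispatcher calls on every tick.
func (s *Simulation) HandleTimeTick(e *TimeTick) {
	s.LastTick = e.CurrentTick
	// run one physics step of the world here
}
```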
Then we need to instantiate it and register it with the event dispatcher:
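Registration hands the listener over to the dispatcher. A minimal dispatcher sketch (the real one supports many event types, with the boilerplate generated):

```go
package main

// TimeTick is the event fired on every simulation step.
type TimeTick struct{ CurrentTick uint32 }

// TimeTickListener is implicitly satisfied by any struct that has a
// HandleTimeTick method; no explicit declaration is needed.
type TimeTickListener interface {
	HandleTimeTick(*TimeTick)
}

// EventDispatcher fans fired events out to all registered listeners.
type EventDispatcher struct {
	timeTickListeners []TimeTickListener
}

// RegisterTimeTickListener subscribes a listener to TimeTick events.
func (d *EventDispatcher) RegisterTimeTickListener(l TimeTickListener) {
	d.timeTickListeners = append(d.timeTickListeners, l)
}

// FireTimeTick delivers the event to every registered listener.
func (d *EventDispatcher) FireTimeTick(e *TimeTick) {
	for _, l := range d.timeTickListeners {
		l.HandleTimeTick(e)
	}
}

// Simulation is one such listener.
type Simulation struct{ LastTick uint32 }

func (s *Simulation) HandleTimeTick(e *TimeTick) { s.LastTick = e.CurrentTick }

// newServer shows the instantiation and registration step.
func newServer() (*EventDispatcher, *Simulation) {
	dispatcher := &EventDispatcher{}
	simulation := &Simulation{}
	dispatcher.RegisterTimeTickListener(simulation)
	return dispatcher, simulation
}
```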
Now we need some code that runs the ticker and triggers the event:
This way we could define any event and register as many listeners as we liked. The event dispatcher runs in a loop, so we had to remember not to put long-running tasks in the handler functions. Instead, we could spawn a new goroutine and do the heavy computation there.
Unfortunately Go doesn’t support generics (which may change in the future), so to implement many different event types we used another of the language’s features: code generation. It turned out to be a very effective way to tackle the problem, at least at our project’s size.
In the long run we realised that implementing an event dispatcher was a valuable thing. And because Go forced us to avoid circular dependencies, we came up with it in an early phase of development, something we probably wouldn’t have done otherwise.
Implementing a multiplayer browser game has been great fun and a very good way to learn Go. We got to use its best features: the concurrency tools, the simplicity and the high performance. Because its syntax feels almost as light as a dynamically typed language’s, we could write code quickly, but without sacrificing the safety of static typing. That’s very useful, especially when writing a low-level application server like ours.
We also learned what problems you have to face when creating a real-time multiplayer game. The amount of traffic between the clients and the server can be quite significant, and a lot of effort must go into lowering it. Nor can you forget about lags and network problems, which will inevitably occur.
The last thing worth mentioning is that creating even a simple online game requires a massive amount of work, both in the internal implementation and in making it fun and playable. We spent endless hours discussing what kind of weapon, resource or other feature to put in the game, only to realise how much work it would take to actually implement. But when you’re trying to do something completely new to you, even the smallest thing you manage to build gives you a lot of satisfaction.