Building a Realtime Multiplayer Game with Deepstream

Moriz Büsing
Jun 28

Back in 2016, we were tasked with showcasing the newest Chrome version, which supported the WebVR spec. WebVR is a technology that allows you to connect VR devices to your browser and experience VR directly on a website. We decided to create a multiplayer ping pong game.

You can play the game here. Google has since changed its policy on WebVR, so I can’t guarantee that part will still work. This post, however, focuses on the networking. How it works: a player (we’ll call them the host) can open the site and will get a 4-digit code. They can then send this code to a friend, who can use it to connect to the host’s room. As soon as both players are in the room, the game commences.

Deepstream

We actually started building the game with WebRTC, until we realized that at the time, Safari on iOS didn’t support WebRTC. Bummer! We had to regroup and go back to websockets. Deepstream is a websocket server that can be used to synchronize data between devices, which makes it just right for a multiplayer game.

Deepstream has a concept of records, which are the main building blocks we’re gonna use to build the communication between the clients. A record is essentially a piece of data that’s stored in-memory on the server. Any client can update a record, and the server will then update the record on all clients that are subscribed to it. For our game we set up some records:

  • one for the position and rotation of each paddle
  • one to synchronize the status of the game (game starting/started/paused/over)
  • one for ball hits, storing the last position the ball was hit as well as a 3d vector with the current ball velocity
  • one for misses (when the ball moved past the paddle)
  • one for pings to measure the latency (more on this later)

You might wonder why we’re not storing the ball position. Since we have the last position the ball was hit, and the velocity with which it’s now travelling, that’s not necessary — we can just initialize the ball with that velocity at that position, and the physics engine (we used the awesome cannon.js) will calculate its route the same way on both clients, since it’s a deterministic system.
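To illustrate, here is a toy fixed-timestep simulation (plain Euler integration standing in for cannon.js). Run from the same hit position and velocity, it produces an identical trajectory every time, which is exactly what lets both clients agree on the ball’s path:

```javascript
// Sketch: both clients re-simulate the ball from the last hit state.
// Plain Euler integration with gravity stands in for cannon.js here;
// the names are illustrative.
const GRAVITY = -9.81;
const STEP = 1 / 60; // fixed timestep in seconds

function simulateBall(hitPosition, hitVelocity, steps) {
  const pos = { ...hitPosition };
  const vel = { ...hitVelocity };
  for (let i = 0; i < steps; i++) {
    vel.y += GRAVITY * STEP;
    pos.x += vel.x * STEP;
    pos.y += vel.y * STEP;
    pos.z += vel.z * STEP;
  }
  return pos;
}
```

With a fixed timestep and no random inputs, the computation is bit-for-bit reproducible, so only the hit state needs to travel over the network.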

Every record has a textual id that clients can use to subscribe to it. We are prefixing this id with the 4-digit code of the game so that only clients in the same room see their record updates.

Setup

The entire server-side script we’re using for websocket communication is this:
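For reference, a minimal deepstream server of that era needed only a few lines. This is a sketch assuming the old deepstream.io npm package and its default configuration, not our exact script:

```javascript
// Sketch: a minimal 2016-era deepstream server using the deepstream.io
// npm package. With no config path, deepstream falls back to defaults.
function startServer() {
  const Deepstream = require('deepstream.io');
  const server = new Deepstream();
  server.start();
  return server;
}
```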

Since Google was planning to advertise this experiment, we set up multiple servers to make sure we could withstand the expected load. At launch, we had four EC2 instances in different AWS regions. We have since reduced this to a single instance as traffic decreased.

Now, from the client’s perspective, if we have four servers, we need to find a way for users to first find the closest (or fastest) server, and then have their opponent connect to the same server. That might not be the closest server for the opponent as well, but for the sake of simplicity we assumed that most people playing each other will at least live on the same continent.

Promise.race comes in handy here: it resolves with the promise that finishes first. The pingServer method will try connecting to the server, resolving as soon as it manages to open a connection. Now we can connect to the server we found:
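A sketch of that selection, with pingServer simulated by a timer so the example is self-contained (the URLs and latencies are made up):

```javascript
// Sketch: find the fastest-responding server with Promise.race.
// pingServer stands in for opening a websocket connection and
// resolving with the server URL once it opens; here it is simulated
// with a timer.
function pingServer(url, simulatedLatencyMs) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(url), simulatedLatencyMs);
  });
}

// Resolves with whichever server answers first.
function findFastestServer(servers) {
  return Promise.race(servers.map(({ url, latency }) => pingServer(url, latency)));
}
```

The winning URL is then handed to the deepstream client to establish the actual game connection.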

Room negotiation

After the connection is established, the host will open a room. This method is called from outside after the user clicks the “open room” button.

You can see that the first character in the 4-digit code actually encodes which of the four servers we are using. We use the first quarter of the alphabet for the first server, the second quarter for the second, and so on. Doing it this way preserves almost all the entropy of the 4-digit code while still encoding that information: we could now theoretically host 262144 rooms on each server. We removed 1, I and l from the available characters to avoid ambiguity when reading the code. After the opponent receives the room code, they will try to connect to that server:
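Both sides of that negotiation can be sketched like this. The alphabet below is illustrative (it drops 1, I and l as described, but our real character set may have differed):

```javascript
// Sketch: encode the target server in the first character of the room
// code. The alphabet is an assumption; as in the article, the
// ambiguous characters 1, I and l are removed.
const ALPHABET = 'ABCDEFGHJKLMNOPQRSTUVWXYZ023456789'.split('');
const QUARTER = Math.floor(ALPHABET.length / 4);

function randomChar(chars) {
  return chars[Math.floor(Math.random() * chars.length)];
}

// The host draws the first character from its server's quarter of the
// alphabet and the remaining three from the full alphabet.
function createRoomCode(serverIndex) {
  let code = randomChar(ALPHABET.slice(serverIndex * QUARTER, (serverIndex + 1) * QUARTER));
  for (let i = 0; i < 3; i++) code += randomChar(ALPHABET);
  return code;
}

// The opponent recovers the server index from the first character.
function serverIndexFromCode(code) {
  return Math.floor(ALPHABET.indexOf(code[0]) / QUARTER);
}
```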

Setting up records

After the two clients are connected to the same server and share a room id, we can set up the records we need for the game.

Then we can set up all our event listeners and callbacks to communicate back to the game when we receive a status update:
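A sketch of that setup, assuming a connected deepstream client; the record and callback names are illustrative, not the exact ones from our module:

```javascript
// Sketch: fetch the game records for a room and subscribe to updates.
// `client` is a connected deepstream client; record and callback
// names are assumptions.
function setupRecords(client, roomCode, callbacks) {
  // Prefix every record name with the room code so only clients in
  // the same room see each other's updates.
  const get = (name) => client.record.getRecord(`${roomCode}/${name}`);

  const records = {
    paddle: get('paddle-opponent'),
    status: get('status'),
    hit: get('hit'),
    miss: get('miss'),
    ping: get('ping'),
  };

  // React to updates coming in from the other client.
  records.paddle.subscribe(callbacks.onPaddleMove);
  records.status.subscribe(callbacks.onStatusChange);
  records.hit.subscribe(callbacks.onHit);
  records.miss.subscribe(callbacks.onMiss);

  return records;
}
```

Keeping the returned record handles around matters later: they are what we discard when the game ends.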

The ping record is, as mentioned earlier, for measuring the latency to the currently connected server.

We’re setting the ping record 20 times, once each second. Since we also listen to updates on this record, we can measure the time from setting the record to the point where we receive the update back: one entire roundtrip. After taking the median of the roundtrip times and halving it, we end up with the current one-way latency. Note that this is the latency to the server, not end-to-end. Since the latency changes over time, an even more accurate method would be to keep pinging for as long as the game lasts, store a window of recent roundtrip times, and take the median of that window on each ping. But this approach turned out to be good enough for our use case.
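The roundtrip-to-latency step can be sketched as:

```javascript
// Sketch: one-way latency estimated as half the median roundtrip time.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

function oneWayLatency(roundtripsMs) {
  return median(roundtripsMs) / 2;
}
```

The median, unlike the mean, shrugs off the occasional outlier roundtrip caused by a congested network.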

We should also remember to discard the records once the game is over or a user exits. Otherwise the server will keep them in memory, causing a possible leak.

This concludes the basic setup for all necessary records. You can find the full module here. This is enough for a working game, but the experience is not very enjoyable: the ball will fall through paddles and magically teleport to other locations, and the opponent’s paddle won’t move smoothly. It will feel pretty janky and not very real-time.

Latency disguise

Every realtime multiplayer game has to deal with latency. It is an unavoidable fact that there is a theoretical limit to how fast information can travel over a network. No matter how far technology advances, this will never change. The actual latency is of course much higher than the theoretical minimum. This caused a major issue: if you hit the ball, you’ll see it travelling towards your opponent’s paddle. The opponent will then hit the ball and send that information to you, but by the time you receive it, the ball will already have flown through the paddle. A possible remedy to this is to visually slow the ball down as it travels towards your opponent, so that the visual hit happens at approximately the same time as you receive the hit information. This is called after you have hit the ball:
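One way to sketch the idea is to scale the ball’s velocity so the crossing takes the physical travel time plus roughly one full roundtrip. The names and the exact factor here are assumptions, not the code we shipped:

```javascript
// Sketch: slow the outgoing ball so its visual arrival roughly
// coincides with receiving the opponent's hit message. The factor is
// an assumption: stretch the crossing by one full roundtrip.
function slowDownFactor(travelTimeMs, roundtripMs) {
  return travelTimeMs / (travelTimeMs + roundtripMs);
}

function applySlowDown(velocity, travelTimeMs, roundtripMs) {
  const factor = slowDownFactor(travelTimeMs, roundtripMs);
  return { x: velocity.x * factor, y: velocity.y * factor, z: velocity.z * factor };
}
```

With zero latency the factor is 1 and nothing changes; the higher the roundtrip, the slower the ball visibly travels.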

This has worked well enough in practice and fixed the issue. If latency is very high, the ball will fly very slowly; but then again, if the latency is too high you just can’t have an enjoyable experience, no matter which technique you use.

The problem we ran into with this solution, though, is that since the latency always varies a little, it would often happen that we received the hit at the right time, but the ball position sent along with the hit was a little off. Directly setting the ball to the updated position was visually very noticeable. We came up with an interpolation technique that linearly interpolates the visual ball position towards the ball position sent over the network as the ball travels towards you. This runs every frame as long as the ball travels towards you:
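A sketch of that per-frame correction:

```javascript
// Sketch: blend the visual ball position towards the position
// simulated from the network update. ballInterpolationAlpha tweens
// from 1 to 0 after each hit; ballPositionDifference is the offset of
// the old visual position from the received one.
function interpolatedBallPosition(simulatedPosition, ballPositionDifference, ballInterpolationAlpha) {
  return {
    x: simulatedPosition.x + ballPositionDifference.x * ballInterpolationAlpha,
    y: simulatedPosition.y + ballPositionDifference.y * ballInterpolationAlpha,
    z: simulatedPosition.z + ballPositionDifference.z * ballInterpolationAlpha,
  };
}
```

At alpha 1 the ball is still on its old visual path; by the time alpha reaches 0 it has smoothly converged onto the corrected one.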

Here, ballInterpolationAlpha is a number that linearly tweens from 1 to 0 after each hit, and ballPositionDifference is a vector storing the difference from the visual ball position to the one that was received over the wire.

Further reading

Here are some things that helped me figure out how to code this thing:

  • Cube Slam, a game with a similar mechanic implemented with WebRTC. At the time it also had its source online
  • Valve’s article on lag compensation and client-side prediction
  • 0fps’ blog posts about networking in games

You can find the full repository of the game we made here, but beware: the project is now a couple of years old and will likely not compile without complaints. If you run into issues, you might want to try an older Node version, like 4 or 5.

If you’ve made it this far, let me know what you think in the comments! If you made something similar, I’d love to hear about your solutions to these problems!

madebywild

digital studio with offices in Vienna, New York and Berlin.
