This is part two of my series on making a collaborative music app. For part one, click here.
For a preview of the final product, see below.
Last we left our heroes, we had the ability to play sounds based on the mouse position. Now we need to build a multiplayer version of our app, and add visuals. Simple? Not quite.
Playing sounds in multiplayer is a little more complicated than one might think. This is because it takes time for a user’s input to reach the server and then be relayed to other users. If another user and I both played a series of sounds, I’d hear my sounds immediately, and then their sounds after an 80 ms or so delay. Even if I delayed my own sounds until they hit the server and came back, my server lag might differ from the other user’s, and the two rhythms would still be out of sync.
In short, I need some way to either force all the sounds into a particular beat, or ensure that lag is exactly the same between all users. In addition, if I choose to force sounds to fall onto a beat, I need to decide whether to choose a client-side or server-side view of what notes exist at a particular instant.
I chose to have the client enforce a beat system, where the client plays the notes available to it at a particular instant. This means that the device gets its own input some 80 ms or so ahead of the inputs on other devices, so there’s a possibility that the clients may disagree about what notes are being played at a particular time.
An alternative I may implement in the future is to have the server group the positions of all clients into a single update, which is consumed as the single source of truth by all clients. However, this comes at the cost of input lag for the user, and some kind of client-side beat would still be needed, to prevent the beat from lagging when the server lagged.
For now, let’s stick with a mostly client-side system. The main concept of the beat system is simple: when the application loads, I start some kind of interval or recursive function that runs once per “beat”. On each beat, this function checks whether each user’s mouse is pressed, and if it is, plays a note for that user.
However, there is a significant edge case. Say I tap the mouse briefly, between beats. The note never plays! To handle this, instead of just checking whether the mouse is pressed, we’ll use another variable, shouldPlay. shouldPlay becomes true on mouse down, but it doesn’t become false until a beat has passed. So: play a note if the mouse is currently pressed OR the mouse was clicked between beats, and do this for each user. Let’s implement it.
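Here’s a minimal sketch of that beat loop. The tempo, the player-state shape, and names like `tick` and `playNote` are my placeholders, not the app’s actual identifiers:

```javascript
const BEAT_MS = 250; // hypothetical tempo: one beat every 250 ms
const players = {};  // id -> per-user state

function addPlayer(id) {
  players[id] = { mouseDown: false, shouldPlay: false, y: 0 };
}

function onMouseDown(id) {
  players[id].mouseDown = true;
  players[id].shouldPlay = true; // latch the tap until the next beat
}

function onMouseUp(id) {
  players[id].mouseDown = false; // shouldPlay stays latched
}

// Stub: the real app maps the y-coordinate to a pitch and triggers audio.
const played = [];
function playNote(id, p) {
  played.push(id);
}

// One beat: play for every user who is holding the mouse down
// OR tapped it at some point since the last beat.
function tick() {
  for (const [id, p] of Object.entries(players)) {
    if (p.mouseDown || p.shouldPlay) playNote(id, p);
    p.shouldPlay = false; // consume the latch
  }
}

// In the real app, start the loop:
// setInterval(tick, BEAT_MS);
```

The key detail is that `shouldPlay` is only reset inside `tick`, so a click-and-release that happens entirely between two beats still produces exactly one note.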
Now that we have that in place, all we really need as far as sound is concerned is to connect our clients to the server. To do this, I used socket.io. To keep things as simple as possible, I identified the different clients by using Math.random() on the client side as an id. (In a production app we wouldn’t do this; we’d get a unique id from the server via a POST request, but it will do for our simple purposes.)
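The client-side wiring might look something like this. The event name (`'move'`) and the payload shape are assumptions about the protocol, not the app’s confirmed API:

```javascript
// Build the payload we emit to the server on each mouse move.
// The { id, x, y, pressed } shape is an assumed protocol.
function makeUpdate(id, x, y, pressed) {
  return { id, x, y, pressed };
}

// Browser-only wiring (socket.io client assumed loaded as `io`):
// const socket = io();                    // connect to the serving host
// const myId = String(Math.random());     // quick-and-dirty client id
// let pressed = false;
// document.addEventListener('mousedown', () => { pressed = true; });
// document.addEventListener('mouseup',   () => { pressed = false; });
// document.addEventListener('mousemove', (e) =>
//   socket.emit('move', makeUpdate(myId, e.clientX, e.clientY, pressed)));
```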
When we receive coordinates from another player, we check whether we already have a player with that id. If we don’t, we add them to our ‘players’ array; if we do, we update that object with the new values. We also check whether that player’s mouse was previously not pressed, and if it wasn’t, we set the shouldPlay flag to true.
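The receive side could be sketched like this, under the same assumed payload shape as above; `onRemoteUpdate` is a hypothetical handler name:

```javascript
const players = []; // list of { id, x, y, pressed, shouldPlay }

function onRemoteUpdate(update) {
  let p = players.find((q) => q.id === update.id);
  if (!p) {
    // first time we hear from this id: add the player
    p = { id: update.id, pressed: false, shouldPlay: false };
    players.push(p);
  }
  // transition from not-pressed to pressed: latch a note for the beat loop
  if (!p.pressed && update.pressed) p.shouldPlay = true;
  p.x = update.x;
  p.y = update.y;
  p.pressed = update.pressed;
}

// In the real app this would be wired to the socket:
// socket.on('move', onRemoteUpdate);
```

Latching `shouldPlay` on the not-pressed-to-pressed transition is what lets a remote player’s quick tap survive until our own beat loop runs, just as it does for local input.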
Cool! We have a way to play sounds together!
All we have left to do is the visualizations. The main concept is that we want to render a history of each user’s mouse y-coordinate. My implementation unshifts these values onto arrays at a regular interval, then paints circles onto a canvas with the canvas API, using each circle’s y-coordinate to represent the pitch. The horizontal position is determined by time (represented by position in the array): as more positions are added, the older positions move to the left, eventually fading away. I won’t go through this code step by step, but feel free to take a look using the link below:
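To make the idea concrete without walking through the real code, here is a condensed sketch of that history-and-fade scheme. The history length, spacing, and function names are illustrative, not taken from the app:

```javascript
const HISTORY = 60;  // how many past positions to keep per player
const STEP_PX = 10;  // horizontal spacing between history points

// Record the latest y-coordinate; newest sample sits at the front.
function pushSample(history, y) {
  history.unshift(y);
  if (history.length > HISTORY) history.pop(); // drop the oldest
}

// Draw one circle per history entry: older entries sit farther
// left and are drawn fainter, so notes drift away and fade out.
function draw(ctx, width, history) {
  history.forEach((y, i) => {
    const x = width - i * STEP_PX;     // older samples drift left
    ctx.globalAlpha = 1 - i / HISTORY; // ...and fade
    ctx.beginPath();
    ctx.arc(x, y, 5, 0, 2 * Math.PI);  // y encodes the pitch
    ctx.fill();
  });
  ctx.globalAlpha = 1;
}
```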