PlayMyBand: a WebRTC real-time game
A team formed by Jaime Casero, Carlos Torrenti and me (Carlos Verdes) won 1st prize at TADHack Madrid with this HTML5 real-time video game.
The winners from the 14 TADHack locations around the world are reviewed on the TADHack blog (blog.tadhack.com).
This article (written by all of us) covers what is behind the scenes, in the hope that it helps others develop similar applications.
You can find all the code on GitHub:
The idea is to explore multiparty WebRTC collaboration, not only by sharing video or audio, but also by sending data to all parties involved through the not-so-usual WebRTC DataChannel. This approach fits any scenario where data has to reach a network of peers without a server node sitting in the signaling path relaying every message.
You can clone this framework from GitHub:
git clone https://github.com/Mobicents/webrtcomm
PlayMyBand is a collaborative rhythm game where three players simulate playing an instrument of the same song. The challenge is to hit the notes as they come down a canvas while keeping the three players (each in a different browser/location) synchronized.
Here you can view a 4-minute video showing how the app works:
The project is a pure HTML5/JS browser application that relies on TeleStax servers to establish the WebRTC session among the participants (using SIP for the session handshake).
The topology used is mesh (diagram below), so once the three players are connected no server is involved, which improves latency and avoids server saturation.
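To make the mesh concrete: every pair of players holds a direct peer connection, so N players need N·(N−1)/2 links. A small sketch (player names are placeholders; in the real app each link corresponds to one RTCPeerConnection):

```javascript
// Enumerate the direct links a full mesh needs.
// Each link maps to one peer connection per pair of players.
function meshLinks(players) {
  var links = [];
  for (var i = 0; i < players.length; i++) {
    for (var j = i + 1; j < players.length; j++) {
      links.push([players[i], players[j]]);
    }
  }
  return links;
}

// Three players -> three direct links, no server in the media path:
// [['p1','p2'], ['p1','p3'], ['p2','p3']]
meshLinks(['p1', 'p2', 'p3']);
```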
The flow is something like this:
All the code is HTML5 and JS, using a Yeoman scaffolding template (generator-angular-fullstack) as the starting point, together with Grunt and NodeJS.
So to run our application we just run:
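Assuming the standard generator-angular-fullstack workflow (these task names come from that generator, not from our repo):

```shell
npm install     # fetch the NodeJS dependencies (first run only)
grunt serve     # build the app and open it with livereload
```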
… and the application is ready:
To create the HTML we chose Jade as the template engine, which lets us write cleaner pages, create layouts and reuse templates (through Jade mixins).
Once the panel layout and the progress-bar mixin have been created, you can build a new view with just 4–5 lines of code!
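For illustration, a hypothetical view reusing a panel layout and a progress-bar mixin could be as short as this (file names and mixin arguments are made up, not the real ones from the repo):

```jade
extends ../layouts/panel
include ../mixins/progressBar

block content
  h2 Select your instrument
  +progressBar('loading', 40)
```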
Now it's time for Angular… we need to decide which view to show and the data/logic behind it. We chose angular ui-router and defined states such as "registering player", "selecting song" or "playing".
Each ui-router state can have children (so if some of your views share data or controller logic you can create a common parent state) and defines a view, a URL and a controller:
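A minimal sketch of how such a state tree might be declared; the state names follow the article, but the URLs, templates and controller names are assumptions:

```javascript
// Sketch of a ui-router state tree with a common parent state.
// In the real app this would run inside angular.module(...).config(...).
function configureStates($stateProvider) {
  $stateProvider
    .state('game', {               // parent: layout/logic shared by all steps
      abstract: true,
      templateUrl: 'views/game.html',
      controller: 'GameCtrl'
    })
    .state('game.registeringPlayer', {
      url: '/register',
      templateUrl: 'views/register.html',
      controller: 'RegisterCtrl'
    })
    .state('game.selectingSong', {
      url: '/songs',
      templateUrl: 'views/songs.html',
      controller: 'SongsCtrl'
    })
    .state('game.playing', {
      url: '/play',
      templateUrl: 'views/play.html',
      controller: 'PlayCtrl'
    });
}
```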
Controllers talk to each other and to other services (such as the SIP or WebRTC services) through $scope.$broadcast() and $scope.$on() from the Angular API.
A good example is the first state (registeringPlayer): a SIP call is made when the user clicks the button, and when the call finishes (asynchronously) it broadcasts an event that the registeringPlayer controller intercepts, so the controller can transition to the next state (select song).
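The wiring for that first state could look roughly like this (the event name, state name and service API are illustrative, not the actual ones from the repo):

```javascript
// Sketch: the SIP service broadcasts an event when registration completes;
// the controller only listens for the event and never touches SIP code.
function RegisteringPlayerCtrl($scope, $state, sipService) {
  $scope.register = function (nickname) {
    sipService.register(nickname); // async; the outcome arrives as an event
  };

  // Any service emitting 'player:registered' triggers the same transition,
  // which is what keeps the controller protocol-agnostic.
  $scope.$on('player:registered', function () {
    $state.go('game.selectingSong');
  });
}
```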
It’s very important to highlight the decoupling in this design (inspired by a microservices architecture): if tomorrow we decide to switch from SIP to another protocol (such as Jabber), the controller doesn’t need to change; the new implementation just has to send the same event!
It all started when we decided to enter TADHack 2015. We wanted to do something technically challenging and relevant to the event while bringing something exciting to the end user. We felt the WebRTC Data channel was still an underused feature, and to raise the bar we decided to make it multiparty (more than two users). With that in mind, Torrenti told us about his idea for a musical e-learning site. Then Jaime said… the best way to learn is by playing!!
So we started the game by looking at the FretsOnFire open-source project. We borrowed some media resources (ogg and MIDI files) and got going. We selected a MIDI JS library, and soon we had something working. Verdes added a canvas (as an Angular directive) to render the MIDI events. We were close, but it was time to introduce some communications, so we asked the TeleStax guys to help us with the Data Channel.
Jean Deruelle sent us a WebRTC demo establishing a mesh network among 3 parties, which was close to what we wanted. After playing with the WebRTC library, we released a demo based on the SIP MESSAGE features delivered by Mobicents over WebSockets. But that was not what we wanted, since WebSocket is not peer-to-peer.
It was time to deploy the whole project onto an integration environment, for which we chose Google Compute Engine. In addition, each of us had his own development environment, using either a Docker container or a virtual machine running Restcomm. We used the Google environment in conjunction with Xirsys to set up a fully working Restcomm node in the cloud, able to handle STUN and TURN server negotiation.
After getting the WebSocket example working, we fixed some minor issues in the WebRTC JS library, and then received our first truly peer-to-peer WebRTC Data message. We simply sent stringified JSON over the Data channel, and on the other side a simple JSON.parse did the trick. With this approach established, we migrated the code from SIP MESSAGE to the WebRTC Data channel in two days, with just three days left before the event.
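The wire format can be sketched like this (the event shape is an assumption; `dc` stands for an already-open RTCDataChannel in the browser):

```javascript
// Game events travel as stringified JSON over the data channel.
function encodeEvent(event) {
  return JSON.stringify(event);
}

function decodeEvent(raw) {
  return JSON.parse(raw);
}

// Browser-side usage (illustrative):
//   dc.send(encodeEvent({ type: 'note', pitch: 64, time: 1234 }));
//   dc.onmessage = function (msg) { handle(decodeEvent(msg.data)); };
```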
Then we spent some effort enhancing the look & feel of the demo for presentation purposes, which led to a solid demo ready to show at TADHack.
We thought it would be a good idea to implement some features during the Hackathon itself and, why not, let the people at the event decide what to build. So we put together a small product backlog and tried to get attendees to pick the user stories to implement. The winning stories were: enhancing A/V synchronization, adding videoconferencing, and enhancing session establishment.
Since the SIP calls were already in place, adding sound and video was quite easy. In about 3 hours we were able to see the first video images in our demo, so players can see and hear each other during gameplay. We spent some more time making the HTML screens look good.
The hard part was enhancing A/V sync. Verdes did a good job debugging the code and discovered the root problem: we were discarding ChangeTempo MIDI events when accumulating each note's delta time, so we were artificially adding lag to the game. After that fix, we saw a great improvement in canvas rendering and audio-song synchronization.
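The gist of the fix can be sketched as follows (event shapes and field names are assumptions; the point is that Set Tempo events must update the ms-per-tick rate instead of being skipped):

```javascript
// Convert MIDI delta-times (ticks) into absolute note times (ms),
// honouring Set Tempo meta events along the way.
function scheduleNotes(events, ticksPerBeat) {
  var usPerBeat = 500000; // MIDI default tempo: 120 BPM
  var timeMs = 0;
  var notes = [];
  events.forEach(function (ev) {
    // The delta preceding this event still runs at the previous tempo.
    timeMs += ev.deltaTicks * (usPerBeat / ticksPerBeat) / 1000;
    if (ev.type === 'setTempo') {
      usPerBeat = ev.usPerBeat; // dropping this is what desynced A/V
    } else if (ev.type === 'noteOn') {
      notes.push({ pitch: ev.pitch, timeMs: timeMs });
    }
  });
  return notes;
}
```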
We were done, and we went for the presentation, which worked on the first attempt. Our Hack received a lot of support during the event; the idea seemed really engaging, and someone even said they would pay for the service!!!
So all in all, a good experience: we met interesting people and ideas at the Hackathon. Thanks a lot again to TeleStax for the support, and special thanks to Jean Deruelle for the initial demo and to Orestis for the support during the event.
Our time during this hack was limited, so we could not develop every story in our mini product backlog. We think there is a lot of room for improvement and we plan to keep developing this application with, hopefully, new and exciting features. These are some of the most immediate ones, but feel free to contribute!!
- Debug the call setup to understand why it sometimes takes so long (ICE, TURN/STUN negotiation?).
- Contribute to Telestax/Mobicents source code if there is anything that can be improved on the server side.
- Fully close the mesh for all participants in the game (currently player 3 does not see player 2).
- Set the FX volume so that there is no annoying error “beep” when hitting a wrong note.
- Download songs from a server URL.
- Debug the Chrome browser issues so the game works 100% of the time.
- Improve overall playability and layout.
- Integrate with Social networks and use it as login and authentication.
- Be able to pause songs.
Article written by Jaime Casero, Carlos Torrenti and Carlos Verdes