Phone Browser as Game Pad for your Big Screen Game

Pontus Alexander
ReCreating Megaman 2 using JS & WebGL
6 min read · Aug 11, 2016

A couple of days ago I spent half a day on a train, and after a suggestion from my friend Eric I decided to see if I could use my smartphone as a gamepad for my game engine running on my computer.

I began chopping away at it.

First step: fill the smartphone screen with NES graphics.

I already had an SVG (a vector graphics file, that is) of an NES controller that I could use. The first step was to fill the screen with it. One very nice aspect of SVG in HTML is that you can attach events to the shapes. If the nodes in the SVG have id attributes you can use the #id selector to find them. This is the closest I’ve been to image maps since 1998.

A node from an SVG with id B, for NES B-button.
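In the file it is just a shape carrying that id; something along these lines (the geometry and colour here are made up, the id is the part that matters):

    <!-- Hypothetical button shape from the controller SVG; the real
         file has its own coordinates and styling. -->
    <circle id="B" cx="180" cy="64" r="18" fill="#c23" />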

After a little hacking on the train I came up with code that did what I wanted.
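
A sketch of that first version, with a placeholder "controller" id and a stubbed-out input handler standing in for the game engine's real API:

    // Placeholder for the game engine's input handler; the real
    // engine exposes its own API.
    const input = {
        handleEvent(name, state) {
            console.log(name, state);
        },
    };

    // The SVG is embedded, so its DOM is reached through
    // contentDocument. The "controller" id is a placeholder.
    const object = document.getElementById('controller');
    const svg = object.contentDocument;

    // Map touch event names to the engine's event names.
    const stateMap = {
        touchstart: 'keydown',
        touchend: 'keyup',
    };

    function touchHandler(event) {
        const id = event.target.id;
        if (id) {
            // Any shape with an id is treated as a button press/release.
            input.handleEvent(id, stateMap[event.type]);
        }
    }

    Object.keys(stateMap).forEach(type => {
        svg.addEventListener(type, touchHandler);
    });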

In the above code, I first find the SVG element. Because the SVG is embedded, I need to access its contentDocument, much like you would with an iframe.

After that I set up a small map between the touch event names and the names of the events in the game engine.

The touchHandler is the function that receives and handles touch events. In it I check if the element has an id; if it does, I assume it’s a button press and send the name of the button to the input handler of the game engine, together with the requested state translated by the map.

Embedded SVG syntax.
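The embed itself is just a plain object tag, roughly like this (the file name is a placeholder):

    <!-- Embedding the SVG with <object> exposes its DOM through
         contentDocument; the file name is hypothetical. -->
    <object id="controller" type="image/svg+xml" data="nes-controller.svg"></object>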

Now the only thing left for a proof of concept was that instead of sending the key presses directly to the game, I needed to send them over the air somehow. There is an excellent API for this available in decent browsers called RTCPeerConnection. Instead of sending a signal from one computer to a server on the internet and on to the second computer, it establishes a direct connection between two browsers. To initialize the connection, the browsers need to go through a third server to exchange connection descriptions with each other. These descriptions contain information on how to route the signals through NAT layers and other network nodes that would otherwise prevent a direct connection.

I was not in the mood to set this up from scratch, so instead I used the fantastic Peer.js library. It not only abstracts the API, it also optionally provides the middle node used to exchange the connection information.

All you need to share is an id, which is basically a “room” where browsers can see each other. Once they do, they exchange the network descriptions and hopefully connect.
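
A minimal sketch of the controller side; depending on your Peer.js setup you may need to pass options for an API key or your own signaling server, and the URL-fragment id is just an assumption:

    // Controller side: the id of the receiving browser is assumed to
    // arrive via the URL fragment.
    const remoteId = location.hash.slice(1);

    const peer = new Peer();                 // talks to the signaling server
    const connection = peer.connect(remoteId);

    connection.on('open', () => {
        // From here on, connection.send() goes browser-to-browser.
        connection.send({hello: 'controller'});
    });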

Really, this is how easy it can be to set up a direct connection between two browsers.

So, I got everything working between two Chrome browsers locally, and now I wanted to take it to the world. I uploaded my project to my public GitHub repo and loaded it up on my iPhone 6 aaaaaaaand, nothing happened.

A little annoyed, I realized I would not be able to debug Chrome on the iPhone while on the train. Reluctantly I deferred this operation until I could try it on my Android.

A little later I learned that all browsers on iOS are basically themed, crippled Safari Web views. I could rant about that for 45 feet, but I’m going to hold back.

Back home I booted up my old Nexus 4, loaded the page, and it started working. But hold that champagne! As magical as it was controlling my Megaman game on my desktop computer from my makeshift NES controller, tons of things were off. Nothing except a simple tap did what you would expect. One of the worst experiences was that sliding your finger off a button would keep it pressed and not press any other buttons.

I wish I could convey all the tiny problems that added up to the uselessness of the initial implementation, but I fear it would not make sense in writing. Anyway — back to the drawing board.

Running NES-Remote on Firefox on Android.

After fiddling with the browser events to get them to fire what and when I wanted, I realized this rabbit hole went deeper, and came up with an idea.

What if I ignore the SVG elements for the touch events, and just use them as hot areas that I intersect with the touch points on touchstart, touchmove, and touchend? That should work.

The thing I came up with is much more elaborate than I was hoping for, but let me step you through it.
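
A sketch of the setup, replacing the first version, assuming the controller SVG fills the screen and the shapes are roughly circular (the "controller" id is still a placeholder):

    // Build hit areas from the SVG shapes instead of binding events to
    // them. The radius is taken from each shape's bounding box.
    const object = document.getElementById('controller');
    const svg = object.contentDocument;

    const areas = [];
    const keys = [];
    const state = {};

    Array.from(svg.querySelectorAll('[id]')).forEach(node => {
        const box = node.getBoundingClientRect();
        areas.push({
            key: node.id,
            pos: {
                x: box.left + box.width / 2,
                y: box.top + box.height / 2,
            },
            radius: Math.max(box.width, box.height) / 2,
        });
        keys.push(node.id);
        state[node.id] = false;
    });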

Above we find the SVG element just like before, but instead of binding to it I iterate over all nodes that have an id and convert each to an “area” object containing the id of the area, its position, and its radius. These areas are then collected in an array, and the key ids are stored in another array for easy iteration. I also set up a state object that will remember whether a button is considered pressed or not at any given moment.

Then there is just a basic “do two circles overlap” function, taking the radii of both circles and their positions.
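
Something along these lines (circlesIntersect is simply the name used in these sketches):

    // Two circles overlap when the distance between their centers is
    // smaller than the sum of their radii.
    function circlesIntersect(r1, pos1, r2, pos2) {
        const dx = pos1.x - pos2.x;
        const dy = pos1.y - pos2.y;
        return Math.sqrt(dx * dx + dy * dy) < r1 + r2;
    }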

The touch handler starts out by running preventDefault() on the event. That is just common housekeeping for events in JavaScript, to stop the browser’s default handling from interfering with what you’re doing.

Then we create a filter object which will hold the key names of the buttons considered pushed during this event, followed by a check whether anything is touching at all. It is possible that there are no touches when “touchend” fires.

After this, we unroll the touches list into an array so that we can loop over it with forEach. Then, for every touch point, we check if it intersects any area. If it does, we take the key name of that area and set it to true in the filter.

Lastly we loop over all keys we know about, and if we have detected that a key is touched, we send its name to the sendEvent()-function with true. If we didn’t detect it being touched, we send it with false.
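
Pieced together, the handler could look roughly like this, reusing areas, keys and circlesIntersect from the sketches above (the touch radius is an arbitrary small value):

    // Resolve every active touch against the hot areas on each event.
    const TOUCH_RADIUS = 1;

    function touchHandler(event) {
        event.preventDefault();

        // Buttons considered pressed during this event.
        const filter = {};

        // "touchend" can fire with an empty touch list; the loop then
        // simply does nothing and every button resolves to released.
        Array.from(event.touches).forEach(touch => {
            const pos = {x: touch.clientX, y: touch.clientY};
            areas.forEach(area => {
                if (circlesIntersect(TOUCH_RADIUS, pos, area.radius, area.pos)) {
                    filter[area.key] = true;
                }
            });
        });

        // Emit the resolved state for every known button.
        keys.forEach(key => {
            sendEvent(key, filter[key] === true);
        });
    }

    // Bind on the embedded SVG's document so touch coordinates and the
    // area positions share the same coordinate space. passive: false is
    // needed for preventDefault() to take effect in newer browsers.
    ['touchstart', 'touchmove', 'touchend'].forEach(type => {
        svg.addEventListener(type, touchHandler, {passive: false});
    });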

Since we only want to send button presses over the network when we detect a change, the sendEvent()-function keeps track of the current state of all buttons. If it gets a request and the state of the button is the same as the state requested, we ignore it. Otherwise we tell the Vibration API to buzz the phone for 50 ms, update the remembered state, create a payload containing the key name and key state, and send it via RTC.
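
A sketch of that bookkeeping, reusing state and the Peer.js connection from earlier (the payload shape is just an assumption):

    // Track the last known state per button and only forward changes.
    function sendEvent(key, pressed) {
        if (state[key] === pressed) {
            return;
        }

        // Short buzz as tactile feedback where the Vibration API exists.
        if (navigator.vibrate) {
            navigator.vibrate(50);
        }

        state[key] = pressed;

        // Anything JSON-serializable works as a payload.
        connection.send({key: key, state: pressed});
    }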

At the other end we need to receive it.

Just like on the remote end we use Peer.js, but we set it up with a random id that we generate. When a connection to the middle node has been established, we print a URL to the remote app in the console. This is the URL opened on the mobile device.

Then, once a connection has been opened, we listen to the data event, and when data arrives we send the changes to the game engine.
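
A rough sketch of the receiving side; the id format, the URL, and the input.handleEvent() call are placeholders:

    // Receiving side, running next to the game.
    const id = Math.random().toString(36).slice(2, 8);
    const peer = new Peer(id);

    peer.on('open', () => {
        // Hypothetical URL; point it at wherever the remote page is hosted.
        console.log('Remote: https://example.com/remote.html#' + id);
    });

    peer.on('connection', connection => {
        connection.on('data', data => {
            // Forward the change to the game engine's input handler
            // (placeholder API, as in the earlier sketches).
            input.handleEvent(data.key, data.state ? 'keydown' : 'keyup');
        });
    });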

Below is a video of my brilliant colleague trying it out. :)

The next step is of course to run co-op multiplayer Megaman 2 on a projector where anyone can join and spawn a guy.
