An Intro to WebRTC and Accessing a User’s Media Devices

A tutorial on using getUserMedia() to access a user’s input devices

Sebastian Patron
Dec 13, 2016 · 7 min read

For years, low-level access to a user’s hardware through the browser could only be done with complex Flash programs or Java applets. As Flash and applets started to fade away, a new solution was needed. Meet WebRTC, a set of JavaScript APIs that makes it easy to access a user’s input devices and create peer-to-peer connections.

WebRTC is special because it simplifies peer-to-peer connections between browsers, which is especially useful for messaging, video chatting, and file-sharing apps. To do this, WebRTC includes a set of protocols and APIs that enable real-time communication. For our purposes, we’re going to use the getUserMedia API to access a user’s built-in camera for our selfie app.

Limitations of WebRTC

As great as WebRTC is, it still has limitations. Namely, some of its APIs aren’t supported by all browsers. A good list of what is and isn’t supported can be found at http://iswebrtcreadyyet.com/. For our purposes right now, Chrome, Firefox, Edge, and Opera will all work, as they all support getUserMedia(). Edge is behind on some features but catching up quickly, while Safari doesn’t implement WebRTC at all yet (though they’re supposedly working on it).
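Since support varies from browser to browser, it’s worth checking for the API at runtime before calling it. Here’s a minimal sketch of such a check (this guard is my own addition, not part of the selfie app we’ll build below):

if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    // Warn instead of crashing when the modern getUserMedia API is missing
    console.error('getUserMedia() is not supported in this browser');
}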

Getting Started

Before we get started, you need to have Node.js installed on your machine. I won’t go into the details here, but https://nodejs.org/en/download/ has instructions on how to install Node on any operating system.

Once you have Node installed, we’ll start setting up our folder structure. You can create a project folder from the command line by typing $ mkdir directoryname. Your folder structure should look like this:

SelfieApp
- run.js
- node_modules
- public
  - index.html
  - style.css
  - javascript.js

Once we have our folders set up, we need to install a few Node modules to make developing our selfie app easier. Node modules are external libraries that we can import into our project; they help us organize our code into separate parts and take care of certain responsibilities. Think of each one as a Lego block that someone already made for us, which we can snap together with other Lego blocks to create our final vision. We’re going to use the node-static module to serve static files to the browser.

In your command line, type in:

$ npm install node-static

Now, in our run.js file we are going to type in the following code:
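A minimal version looks like this (the fileServer variable name and the request ‘end’ listener are node-static’s standard serving pattern; the breakdown below explains each piece):

// run.js — a minimal sketch matching the breakdown below
var static = require('node-static');

// Serve everything in the ./public directory
var fileServer = new static.Server('./public');

require('http').createServer(function (request, response) {
    request.addListener('end', function () {
        fileServer.serve(request, response);
    }).resume();
}).listen(8080);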

Let’s quickly break down what’s happening in this code:

- var static = require('node-static'); finds and loads the node-static module so that we can use the functions associated with it.
- new static.Server('./public'); creates a file server instance, one of the pieces we get from the node-static module. We pass it the argument './public' so that it serves the files in that directory.
- require('http').createServer(function (request, response) { ... } creates our HTTP server and hands each incoming request to our file server.
- .listen(8080) starts the server listening on the port that we’ll use when we access our selfie app in the browser.

And with that, our server is set up! We can test it now by going into our command line, typing $ node run.js, and then visiting http://localhost:8080. If we did everything right, we should get a blank web page. If you get an error message such as “localhost refused to connect”, double-check your code and make sure you are connecting to the correct port number (and not typing https:// before localhost). To stop the server, press Ctrl+C in your command line.

Next up, we need to set up our HTML page. The code is as follows:

index.html
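Here’s a minimal sketch of the page. The element IDs (capture, noFilter, blackAndWhite, sephia, invert, downloadLnk) must match the ones our JavaScript will query later; the button labels, the download filename, and the overall layout are just placeholder choices:

<!DOCTYPE html>
<html>
<head>
    <title>Selfie App</title>
    <link rel="stylesheet" href="style.css">
</head>
<body>
    <!-- Live camera feed; autoplay so the stream starts immediately -->
    <video autoplay></video>
    <!-- Captured selfies get drawn here -->
    <canvas></canvas>
    <div>
        <button id="capture">Take Selfie</button>
        <button id="noFilter">No Filter</button>
        <button id="blackAndWhite">Grayscale</button>
        <button id="sephia">Sepia</button>
        <button id="invert">Invert</button>
        <!-- The download attribute lets the browser save the canvas image -->
        <a id="downloadLnk" download="selfie.jpeg">Save Selfie</a>
    </div>
    <script src="javascript.js"></script>
</body>
</html>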

Now, if we run our server again with $ node run.js, we should get an HTML page when we load http://localhost:8080 that looks like this:

What our index page looks like now

Next up, the WebRTC and JavaScript parts

Now that we have the HTML page all set up, we’ll shift our attention over to the JavaScript file. I’ll break this part up so that we can understand what we’re doing. First, let’s set up some of the variables we need:

var constraints = {
    audio: false,
    video: {
        width: 640,
        height: 360
    }
};
var canvas = document.querySelector('canvas');
var video = document.querySelector('video');
var filters = ['', 'grayscale', 'sepia', 'invert'], currentFilter = 0;

Let’s take a look at what’s going on here: constraints are parameters that we’ll pass to our video feed. We specify the camera feed size so it’s not too big, and disable audio since we don’t need it. canvas selects the canvas element, which can be used to draw graphics; this is where we will display our taken selfie. video selects the video element that will show the live camera feed. filters is an array that contains all our possible filters. We also define currentFilter and initialize it to 0, the index of the empty string, i.e. no filter.

Next we’re going to access the user’s camera and display the feed in our selfie app. This is where we use the getUserMedia() api.

// Ask the user for permission to use their camera, passing our constraints
navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
    var videoTracks = stream.getVideoTracks();
    // Attach the live stream to the <video> element on the page
    video.srcObject = stream;
})
.catch(function(err) { console.log(err.name + ": " + err.message); });

And with that, we have our video feed working! It’s actually pretty simple and straightforward, especially compared to where the API was several months ago. Let’s take a quick look at what we’re doing:

- navigator.mediaDevices.getUserMedia() prompts the user for permission to use their webcam. We pass it the constraints we specified earlier. If access is allowed, a promise is returned that resolves with the resulting stream object.
- stream.getVideoTracks() returns a list of all the video tracks within the stream object. This is where we would find all of a user’s webcams if they happen to have more than one connected.
- video.srcObject = stream; associates our video stream with an element on our page. In other words, this finally gets our video stream onto our web page.
- The last part of this code, the .catch(...) statement, is for error handling. When dealing with any experimental technology like WebRTC, it’s important to handle any errors that pop up, rather than risk crashing our entire program.

Now that we have our video stream working, it’s time to capture a still frame from it. We’ll hook up our capture button:

document.querySelector('#capture').addEventListener('click', function (event) {
    if (video) {
        // Match the canvas size to the on-screen video size
        canvas.width = video.clientWidth;
        canvas.height = video.clientHeight;
        var context = canvas.getContext('2d');
        // Draw the current video frame onto the canvas
        context.drawImage(video, 0, 0);
    }
});

This code is pretty simple. First, we add an event listener to the capture button. When the capture button is clicked, the function checks that our video element exists. context.drawImage(...) then takes a snapshot of our video stream and draws it on our canvas.

Now our program is almost done! All we have left is to implement our filters. This is a two-step process: first, in our JavaScript file, we need to put event listeners on our filter buttons; then we need to define our filters in our CSS file. Let’s start with the JavaScript:

document.querySelector('#noFilter').addEventListener('click', function (event) {
    canvas.className = filters[0];
});
document.querySelector('#blackAndWhite').addEventListener('click', function (event) {
    canvas.className = filters[1];
});
document.querySelector('#sephia').addEventListener('click', function (event) {
    canvas.className = filters[2];
});
document.querySelector('#invert').addEventListener('click', function (event) {
    canvas.className = filters[3];
});

This snippet of code is pretty similar to our previous one for taking our selfie. We add an event listener to each filter button. When a button is clicked, we set the canvas’s class name to the corresponding entry in our filter array, which applies the matching CSS filter.

Before we move on to the CSS, let’s add one last function to our JavaScript so that we can save our selfie:

document.querySelector('#downloadLnk').addEventListener('click', function (event) {
    var dt = canvas.toDataURL('image/jpeg');
    this.href = dt;
});

Once again, this is a similar piece of code to what we previously wrote. canvas.toDataURL('image/jpeg') converts our canvas contents to a JPEG image encoded as a data URL, and this.href assigns that image to the link element back in our HTML file, so clicking the link downloads the selfie.

And we’re done with our JavaScript file! Here’s what the completed file should look like:

javascript.js
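Assembled from the snippets above, it looks roughly like this (the console.log lines inside getUserMedia are the kind of debugging additions mentioned below; the exact messages are placeholders of my own):

// javascript.js — assembled from the snippets above
var constraints = {
    audio: false,
    video: {
        width: 640,
        height: 360
    }
};
var canvas = document.querySelector('canvas');
var video = document.querySelector('video');
var filters = ['', 'grayscale', 'sepia', 'invert'], currentFilter = 0;

navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
    var videoTracks = stream.getVideoTracks();
    // Debugging logs: not required, but handy while testing
    console.log('Got stream with constraints:', constraints);
    console.log('Using video device: ' + videoTracks[0].label);
    video.srcObject = stream;
})
.catch(function(err) { console.log(err.name + ': ' + err.message); });

document.querySelector('#capture').addEventListener('click', function (event) {
    if (video) {
        canvas.width = video.clientWidth;
        canvas.height = video.clientHeight;
        var context = canvas.getContext('2d');
        context.drawImage(video, 0, 0);
    }
});

document.querySelector('#noFilter').addEventListener('click', function (event) {
    canvas.className = filters[0];
});
document.querySelector('#blackAndWhite').addEventListener('click', function (event) {
    canvas.className = filters[1];
});
document.querySelector('#sephia').addEventListener('click', function (event) {
    canvas.className = filters[2];
});
document.querySelector('#invert').addEventListener('click', function (event) {
    canvas.className = filters[3];
});

document.querySelector('#downloadLnk').addEventListener('click', function (event) {
    var dt = canvas.toDataURL('image/jpeg');
    this.href = dt;
});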

You’ll notice that the getUserMedia(...) call has a few additions. These are mostly just logs to the console to make sure everything is working. They’re not necessary for our application, but in the real world they make debugging and testing a lot easier. Now, let’s move on to our CSS file. This is where we create the actual filters to be applied.

.grayscale {
    -webkit-filter: grayscale(1);
    -moz-filter: grayscale(1);
    -ms-filter: grayscale(1);
    -o-filter: grayscale(1);
    filter: grayscale(1);
}

.sepia {
    -webkit-filter: sepia(1);
    -moz-filter: sepia(1);
    -ms-filter: sepia(1);
    -o-filter: sepia(1);
    filter: sepia(1);
}

.invert {
    -webkit-filter: invert(1);
    -moz-filter: invert(1);
    -ms-filter: invert(1);
    -o-filter: invert(1);
    filter: invert(1);
}

This code is pretty straightforward: .grayscale converts our image to black and white, .sepia applies a sepia tone, and .invert inverts our image’s colors. Note that these class names must match the strings in our filters array.

And we’re done with our program! Let’s test it out to make sure everything works: in your command line, type $ node run.js and then visit http://localhost:8080 in your browser. If everything was done right, you should get a page like this:

The finished product

If your selfie app is not working, double-check your code and take a look at the GitHub repository here.

Taking it Further

Even though our selfie app is finished, we can still take the project further by implementing new features or making the app more user-friendly. We can mirror the video stream with CSS for a more natural look, implement our filters in JavaScript for finer control, or even use the canvas object to paint a hat on ourselves. With WebRTC and JavaScript, the possibilities are endless.
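For instance, mirroring the feed takes just one CSS rule. A quick sketch (applying it to both the video and the canvas is my choice here; you could mirror only the live feed):

video, canvas {
    /* Flip horizontally so the feed behaves like a mirror */
    -webkit-transform: scaleX(-1);
    transform: scaleX(-1);
}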

I hope you enjoyed this intro to WebRTC. If you have any questions, feel free to leave a comment below and I’ll get back to you as soon as possible!