WebRTC Demystified — Project Setup

Let’s create a video chat and file sharing application

Furkan Can Baytemur
Orion Innovation techClub
9 min read · Sep 26, 2022


Establishing real-time communication is a bit tricky. As mentioned in the previous article, there is no standard signaling method for connecting peers: we can use WebSockets, or even an intermediary app like WhatsApp or Discord; the choice is up to us. Moreover, signaling alone is not enough. There are many things to take into consideration when establishing a WebRTC connection.

Starting from this article, we shall be building an app called Magny. Magny is a demo project designed to show what WebRTC can do. It has three main functionalities:

  • Video call
  • Text chat
  • File transfer

Magny is a web project, so we will use Node.js to develop it. We will scaffold it with Vite, a build tool designed for modern web projects. Instead of JavaScript, we will use TypeScript: its modern features and type system make for a modular project that is easier to examine. Without further ado, let’s set up our development environment.

Setup

Before we start, make sure that you have Node.js installed on your system. If it is not, you can install it from the official Node.js website. If you are a beginner, it is best to install the LTS version.

With Node.js ready, let’s create our project. Open up your terminal and navigate to the folder where you want to create it.
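A typical way to scaffold a Vite project with npm (yarn and pnpm have equivalent commands):

```shell
npm create vite@latest
```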

This command will ask you for a project name; we named ours magny. Then it will prompt you to select a framework and a variant: choose the vanilla framework and the vanilla-ts variant. With that completed, our project is created. Navigate to the project folder and install the required Node packages:
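Assuming the project was named magny, that amounts to:

```shell
cd magny
npm install
```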

Before running the project, let’s open it in a code editor; we will be using Visual Studio Code. Since we may need to run the project on different devices, change package.json like this:
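A sketch of the relevant scripts section of package.json after the change (the other scripts come from Vite’s vanilla-ts template; remaining fields omitted):

```json
{
  "scripts": {
    "dev": "vite --host",
    "build": "tsc && vite build",
    "preview": "vite preview"
  }
}
```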

By adding the --host flag to the dev script, we can access the project from a phone, as long as the computer and the phone are on the same network. To see our project:
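Start the dev server with the script we just edited:

```shell
npm run dev
```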

You will see your project running, and the terminal will print two URLs where you can view it. You can go either to localhost or to your device’s local network IP. However, the browser will not allow camera access if you test the application over the local network IP: the local network connection is not secured by SSL, and browsers block device access on insecure connections. We will test the app across devices when we deploy it to Netlify.

If you look at the project, you will see the template provided by Vite. Since we need a clean slate, delete counter.ts and typescript.svg. To put icons on buttons, we need a Font Awesome Kit. Register at Font Awesome and they will e-mail you your kit. Put the kit script just before the closing body tag.
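The kit snippet from the Font Awesome e-mail looks roughly like this; the kit ID below is a placeholder, so use the one from your own e-mail (in index.html, just before the closing body tag):

```html
<script src="https://kit.fontawesome.com/YOUR_KIT_ID.js" crossorigin="anonymous"></script>
```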

Since we will be focusing on WebRTC, I have prepared the UI beforehand. You can clone it from the GitHub branch below:

As you can see from the figure above, we aim to establish a video chat with the remote peer. The process is simple: we first start our webcam, which also creates an RTCPeerConnection. When we make a call, the app will create the SDP and ICE candidates and upload them to the Firestore database. We then only need to deliver the call ID to the remote peer, who will paste it and answer the call. Let’s continue by creating a local connection.

Create a Peer Connection and Streams

Firstly, we need to create a peer connection and generate some info for connecting to the remote peer. Go ahead and create a file called connect.ts. You will notice that the file shows an error as soon as it is created. This is because, in this Vite variant, only main.ts is compiled as the entry point; other TypeScript files act as modules and need to export variables or functions.

Before establishing a connection, we need to initialize a new RTCPeerConnection, configured with a couple of STUN servers. There are many STUN servers we could use, but we will go with the ones provided by Google. iceCandidatePoolSize allows ICE candidates to be gathered even before an offer is created. How many ICE candidates we get depends on the content of the connection; in our case, we will be adding camera tracks to the connection and will receive a couple of ICE candidates beforehand. Speaking of cameras, let’s add both our own video and the remote peer’s video.
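A sketch of connect.ts along the lines described above; the particular Google STUN URLs and the pool size of 10 are one common choice, not the only one:

```typescript
// connect.ts
// Public STUN servers from Google; iceCandidatePoolSize lets the
// browser start gathering candidates before an offer is created.
const servers: RTCConfiguration = {
  iceServers: [
    {
      urls: [
        "stun:stun1.l.google.com:19302",
        "stun:stun2.l.google.com:19302",
      ],
    },
  ],
  iceCandidatePoolSize: 10,
};

export const peerConnection = new RTCPeerConnection(servers);
```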

Since video and audio require constant data exchange, we are using streams to exchange them.

The first function starts capturing video from our webcam; we are enabling both video and audio. After getting the stream from the webcam, we can iterate over it with the getTracks() method, which returns the tracks carried by the webcam stream. We iterate over them and add each one to our peerConnection so that they are sent to the remote peer.

Since we will be testing the app on a single local device, it is a good idea to set audio to false; otherwise it will cause an echo.
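The webcam function can be sketched like this; the element ID webcamVideo and the function name startWebcam are assumptions taken from the prepared UI, not anything fixed by WebRTC:

```typescript
import { peerConnection } from "./connect";

// The <video> element for our own camera, assumed to exist in the UI.
const webcamVideo = document.getElementById("webcamVideo") as HTMLVideoElement;

export async function startWebcam() {
  // Ask the browser for camera and microphone access. Set audio to
  // false while testing on a single device to avoid echo.
  const localStream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  // Add every local track to the peer connection so it is streamed
  // to the remote peer once the connection is established.
  localStream.getTracks().forEach((track) => {
    peerConnection.addTrack(track, localStream);
  });

  // Show our own camera in the local <video> element.
  webcamVideo.srcObject = localStream;
}
```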

The second function receives the remote peer’s stream. We listen for tracks arriving on the peerConnection and add each one to remoteStream, which feeds the video and audio into remoteVideo. The app should now look like this when we click the “Start Webcam” button.
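A sketch of that second function; remoteVideo is assumed to be the video element for the remote peer in the prepared UI:

```typescript
import { peerConnection } from "./connect";

// The <video> element for the remote peer, assumed to exist in the UI.
const remoteVideo = document.getElementById("remoteVideo") as HTMLVideoElement;
const remoteStream = new MediaStream();

export function listenForRemoteStream() {
  // Every track the remote peer sends arrives through ontrack;
  // collect them all into remoteStream.
  peerConnection.ontrack = (event) => {
    event.streams[0].getTracks().forEach((track) => {
      remoteStream.addTrack(track);
    });
  };

  // Bind remoteStream to the remote <video> element.
  remoteVideo.srcObject = remoteStream;
}
```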

We have taken the first step toward establishing a connection: opening tracks between peers. To bind these tracks, we need to signal the remote peer. In our previous article, we signaled the remote peer by copying and pasting, but that is far from ideal, since it requires the user to open the console. Instead of handing over the entire SDP, we will store the SDP objects and ICE candidates in a database, and the remote peer will fetch the credentials from there. The only thing we need to send is the ID of the call, which consists of offer and answer candidates. To achieve this, we will use the Firestore database from Firebase.

Create and Import a Firebase Project

Firebase is a great tool to develop mobile and web applications. It is used by millions of developers and it is backed by Google. For our use case, we will use it as a database for signaling. Go ahead and log in to Firebase with your Google account.

After you log in, enter the console by clicking the indicated button; it should be in the top-right corner. In the console, you will see your projects (if any) and some options such as demo projects. We will create a new project from scratch: click on “Add Project”.

For the first step, let’s name our project “magny”. You can name it whatever you like. There is no “this name is taken” limitation, because Firebase assigns an ID to every project, so each one is unique even if the names match.

The second step asks whether you want to enable Google Analytics for this project. It is a good idea to enable it if you want to develop the project further, though Google Analytics is out of the scope of this tutorial. If you enable it, Firebase will ask you to select an account for Analytics; choosing the default account is enough for now. After completing this step, wait for the project to be created; it should take a few seconds.

After creating the project, we need to add an app to use the database. For this case, we will be selecting a Web project.

Go ahead and register your app with a nickname; we again named it “magny”. There is no need for Firebase Hosting for now, but it can be used to deploy the app once we complete it.

After waiting a couple of seconds for the app to register, you will see some credentials for your app. firebaseConfig is crucial, since our app will access the database using these credentials. If you want to see them again, go to the console main page and click on “Project Settings”.

Here, you will see your apps if you scroll down.

We will be using npm to get the required SDK. Before doing so, we need to create a Firestore database.

Go to the console main page. You can create Firestore either from the left bar or from the box in the middle of the page. Click on either of them and click on the “Create Database” button.

Since we will be testing the application many times during development, create the database in test mode. Test mode allows all read/write requests as long as the credentials are valid.

Test mode will only allow requests that are made within 30 days. After 30 days, you need to change security rules to access your data.
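For reference, the test-mode security rules Firebase generates look roughly like this; the cutoff date is set 30 days after creation:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if
          request.time < timestamp.date(2022, 10, 26);
    }
  }
}
```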

Lastly, select the location of the cloud. There are many options; pick the one closest to you. Selecting a multi-region location would be ideal, since those have the broadest support in Firebase.

The result will look like this. We are done with the Firebase console. We will be adding collections directly from our app. Let’s get back to our app.
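Back in the project, the Firebase SDK can be installed with npm install firebase. A sketch of the initialization then looks like this; the config values below are placeholders, so copy the real firebaseConfig from Project Settings in your console:

```typescript
import { initializeApp } from "firebase/app";
import { getFirestore } from "firebase/firestore";

// Placeholder values. Replace these with the firebaseConfig from
// your own Firebase console (Project Settings, "Your apps" section).
const firebaseConfig = {
  apiKey: "YOUR_API_KEY",
  authDomain: "YOUR_PROJECT.firebaseapp.com",
  projectId: "YOUR_PROJECT",
  storageBucket: "YOUR_PROJECT.appspot.com",
  messagingSenderId: "YOUR_SENDER_ID",
  appId: "YOUR_APP_ID",
};

// Initialize the app once and export the Firestore handle for
// signaling code to use.
const app = initializeApp(firebaseConfig);
export const db = getFirestore(app);
```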

For a more modular structure, copy the camera functions to a new file called camera.ts. To use peerConnection there, export it from connect.ts and import it in camera.ts. You can check the state of the app from the branch below:

So far, we set up the base UI for our app. We are also showing the camera to the user. In the next article, we will focus on how we can achieve signaling with the Firestore database.

Until next time…
