Integrate RecordRTC with Angular 2 - TypeScript

RecordRTC is a great library built on top of WebRTC’s getUserMedia API, the API responsible for simplifying audio/video access in HTML5, among other things. Since I could not find a good example of how to integrate RecordRTC with Angular 2, I thought I would write one. If you want to see it in action, just clone this repo (Angular2-RecordRTC) and run it.

Let’s see how to integrate RecordRTC with Angular 2 in TypeScript. If you already have an Angular 2 repo, skip Step 1.

Step 1: Download an Angular 2 repo. I prefer preboot/angular2-webpack. Here is my fork of it (angular2-starter) with angular/material and Bootstrap cooked into it.

Step 2: Install yarn globally. Yarn is a package manager similar to npm, but with faster and more deterministic installs.

npm install -g yarnpkg

Step 3: Install recordrtc using yarn. If this is the first time you are using yarn, it might take some time, but subsequent runs will be faster.

yarn add recordrtc

The yarn add command is the same as npm install --save.

Step 4: Create a folder record-rtc and create the Angular 2 component-related files inside it. At the end it should look like this:
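If it helps, the folder can be scaffolded from the command line. The file names below are my assumptions based on the usual Angular 2 naming convention; adjust them to match what is in the repo.

```shell
# Hypothetical layout, following the usual Angular 2 component naming convention
mkdir -p src/app/record-rtc
touch src/app/record-rtc/record-rtc.component.ts
touch src/app/record-rtc/record-rtc.component.html
touch src/app/record-rtc/record-rtc.component.css
ls src/app/record-rtc
```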


Step 5: Let’s prepare the HTML and CSS first.

The HTML has a simple HTML5 video tag and three Material Design buttons: one to start the recording, one to stop the recording, and one to download (the download can be replaced with an upload to S3, to your backend server, etc.). Also notice that we are using the template reference variable #video on the video element. This is required because we want to reference the video element in our TypeScript file.
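As a rough sketch (the button labels and exact bindings here are my assumptions; the real template is in the repo), the markup could look like this:

```html
<!-- Sketch of the template; method names match the component described below -->
<video #video></video>
<button md-raised-button (click)="startRecording()">Record</button>
<button md-raised-button (click)="stopRecording()">Stop</button>
<button md-raised-button (click)="download()">Download</button>
```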

Step 6: The logic that makes this all work together resides inside, you guessed it, record-rtc.component.ts. Let me go over the important parts here; for the rest of the file you can refer to the GitHub repo.

  • import record-rtc — import * as RecordRTC from 'recordrtc';
  • make a reference to the video element — @ViewChild('video') video: any
  • set an initial state for the video element in the ngAfterViewInit method
ngAfterViewInit() {
  // set the initial state of the video
  let video: HTMLVideoElement = this.video.nativeElement;
  video.muted = false;
  video.controls = true;
  video.autoplay = false;
}
  • the video is not muted initially (I will explain why it is muted during recording)
  • enable controls
  • disable autoplay, as there is nothing to play yet
  • The handler for the record button is shown below
startRecording() {
  let mediaConstraints = {
    video: {
      mandatory: {
        minWidth: 1280,
        minHeight: 720
      }
    },
    audio: true
  };
  navigator.mediaDevices
    .getUserMedia(<MediaStreamConstraints>mediaConstraints)
    .then(this.successCallback.bind(this), this.errorCallback.bind(this));
}
  • navigator.mediaDevices.getUserMedia is a Web API through which you can request access to the media devices available on the user’s system.
  • getUserMedia returns a Promise; once it resolves, after obtaining the user’s permission to use the video and audio, our callback runs to do the additional setup. One IMPORTANT thing to notice is binding the context for the successCallback and errorCallback functions. This is usually not required in an Angular 2 app, but it is required here: the callbacks are invoked by the navigator API, which is a global object, so this would no longer refer to the record-rtc component. So just remember to bind the context for the success and error callbacks.
this.successCallback.bind(this), this.errorCallback.bind(this)
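To see why the binding matters, here is a minimal standalone sketch (plain TypeScript, no Angular; the names are mine for illustration): when a detached method is invoked by some other API, this no longer points at the instance unless you bind it first.

```typescript
class Recorder {
  name = 'record-rtc';

  // `this: any` lets us observe what the caller actually passed as the context
  whoAmI(this: any): string {
    return this && this.name ? this.name : 'lost context';
  }
}

// Simulates a browser API that invokes our callback with no receiver
function invoke(cb: () => string): string {
  return cb();
}

const recorder = new Recorder();
console.log(invoke(recorder.whoAmI));                // 'lost context'
console.log(invoke(recorder.whoAmI.bind(recorder))); // 'record-rtc'
```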
  • The successCallback receives a MediaStream object from the navigator. We store a reference to this object (why? you will find out later), and we also store a reference to the RecordRTC object that gets created in the successCallback function.
successCallback(stream: MediaStream) {
  var options = {
    mimeType: 'video/webm', // or video/webm;codecs=h264 or video/webm;codecs=vp9
    audioBitsPerSecond: 128000,
    videoBitsPerSecond: 128000,
    bitsPerSecond: 128000 // if this line is provided, skip above two
  };
  this.stream = stream;
  this.recordRTC = RecordRTC(stream, options);
  this.recordRTC.startRecording();
  let video: HTMLVideoElement = this.video.nativeElement;
  video.src = window.URL.createObjectURL(stream);
  this.toggleControls();
}
  • Recording is started by this.recordRTC.startRecording()
  • The template reference is used to set the video element’s URL to this MediaStream, and that is how we are able to see the user’s video on the screen.
  • The toggleControls() method switches the state of the video element. We don’t need controls during recording, and the video is muted so that there is no echo between your own voice and the sound from the speaker (which is also your own voice). Muting the video is essential to make this work.
toggleControls() {
  let video: HTMLVideoElement = this.video.nativeElement;
  video.muted = !video.muted;
  video.controls = !video.controls;
  video.autoplay = !video.autoplay;
}
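One caveat worth flagging: newer browsers have deprecated passing a MediaStream to window.URL.createObjectURL in favor of the video element’s srcObject property. A small helper (the name attachStream is my own, not part of the article’s code) can cover both cases:

```typescript
// Attach a live MediaStream to a <video> element, preferring the modern
// srcObject property over the deprecated createObjectURL(stream) path.
function attachStream(video: HTMLVideoElement, stream: MediaStream): void {
  if ('srcObject' in video) {
    video.srcObject = stream;
  } else {
    // Older browsers: fall back to object URLs (cast because modern
    // typings no longer accept MediaStream here)
    (video as any).src = window.URL.createObjectURL(stream as any);
  }
}
```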
  • Once the stop recording button is clicked we do the following
stopRecording() {
  let recordRTC = this.recordRTC;
  recordRTC.stopRecording(this.processVideo.bind(this));
  let stream = this.stream;
  stream.getAudioTracks().forEach(track => track.stop());
  stream.getVideoTracks().forEach(track => track.stop());
}
  • stop the recording, then use the stored reference to the MediaStream to release the microphone and the webcam (RecordRTC does not do this automatically; I am not sure whether it is supposed to, but without this step the microphone and webcam remain ‘on’ even though the recording has stopped).
  • Once the recording is stopped, you can hand the recorded URL back to the video element so that you can review your video; if satisfied, save it, else re-record.
processVideo(audioVideoWebMURL) {
  let video: HTMLVideoElement = this.video.nativeElement;
  let recordRTC = this.recordRTC;
  video.src = audioVideoWebMURL;
  this.toggleControls();
  var recordedBlob = recordRTC.getBlob();
  recordRTC.getDataURL(function (dataURL) { });
}
  • To download, click on the download button and the video gets saved to your local disk.
download() {
  this.recordRTC.save('video.webm');
}

That’s it; it’s that simple to integrate RecordRTC with Angular 2. I also have an example of ‘Angular 2 with pure WebRTC’, but since the MediaRecorder API is not yet widely supported, it’s a bit more challenging. Maybe I will post another article about it.