Implementing LHLS on hls.js

Tomohiro Matsuzawa
FRESH LIVE Developers Blog
5 min read · Apr 29, 2018

Preface

FRESH! is a live streaming service, and we use HLS as our streaming protocol. HLS makes our service easy to scale, but it also introduces a lot of latency, depending on the segment duration. Quite a few broadcasters told us that the latency makes it difficult to communicate with viewers, so we started looking for a solution that achieves low latency at scale.

I found that Periscope uses a very interesting approach that satisfies both low latency and scale. They call it Low-Latency HLS (LHLS).

It uses HTTP chunked transfer so that a streaming server can send video data to players on a frame-by-frame basis, even before the HLS segment has been completely created on the streaming server.

Periscope uses LHLS in their mobile applications but, as far as I know, not in their web player. There may be technical reasons for that. However, most of our users watch on web browsers, so we needed to make it work on web browsers.

hls.js

We use hls.js as our HLS client. hls.js is open source, very actively developed, and supports many features. It is divided into several components. The loader component reads the HLS stream from the network and passes MPEG-TS segments to the demuxer component, which splits the MPEG-TS into video and audio data and passes them to the remuxer component. The remuxer places them into fMP4 fragments that Media Source Extensions can play.

The pipeline passes video data through on a segment-by-segment basis. To support LHLS, it needs to work on a frame-by-frame basis.

Progressive loader

The loader needs to handle HTTP chunks progressively instead of handling the data only after the complete HTTP response has been received. The Fetch API and the ReadableStream interface make this possible.
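
As a rough illustration, a progressive loader based on fetch and ReadableStream can look like the following minimal sketch (loadProgressively and onChunk are illustrative names, not the actual hls.js loader API):

async function loadProgressively(url: string, onChunk: (data: Uint8Array) => void): Promise<void> {
  const response = await fetch(url);
  if (!response.body) {
    throw new Error('ReadableStream is not available on this browser');
  }
  const reader = response.body.getReader();
  // Keep reading until the server finishes the chunked response for this segment.
  while (true) {
    const { done, value } = await reader.read();
    if (done) {
      break;
    }
    if (value) {
      onChunk(value); // partial MPEG-TS data, forwarded to the demuxer immediately
    }
  }
}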

However, only Chrome supported ReadableStream at that time. There were workarounds for the other browsers. Firefox has the moz-chunked-arraybuffer extension, which lets the caller read chunked data from the HTTP response progressively. Other browsers can expose the incomplete HTTP response through the XHR onprogress callback, so the caller can extract each new chunk from the partial response. That approach requires holding the entire response in memory and is inefficient, but it still works.

There is an awesome library called stream-http that abstracts the workarounds above.

It works well and made it much easier to implement a progressive loader that supports most major browsers.
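
For reference, a hedged sketch of what the loader side looks like with stream-http, which exposes a Node-style stream API in the browser (loadWithStreamHttp and the callbacks are illustrative names, not our actual implementation):

// stream-http falls back to moz-chunked-arraybuffer or XHR onprogress
// on browsers that do not support fetch with ReadableStream.
import * as http from 'stream-http';

function loadWithStreamHttp(url: string, onChunk: (data: Uint8Array) => void, onComplete: () => void): void {
  http.get(url, (res) => {
    res.on('data', (buf: Uint8Array) => onChunk(buf)); // chunk arrives progressively
    res.on('end', onComplete);                         // the whole segment has been received
  });
}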

Progressive demuxer and remuxer

The progressive loader passes MPEG-TS packets to the demuxer. The MPEG-TS needs to be converted to fMP4 because Media Source Extensions do not support MPEG-TS on most browsers, which is why hls.js has the demuxer and remuxer for container conversion (if you use HLS with fMP4, no conversion is required).

MPEG-TS segments generally start with PAT/PMT followed by payloads. fMP4 fragments start with a moof followed by an mdat. A moof includes the timestamp and duration of every sample in the mdat, so we cannot determine all of the timestamps and durations for an entire TS segment until we have received it completely.

Therefore, we inject a smaller moof for each HTTP chunk and pass smaller fMP4 fragments (moof and mdat) to Media Source Extensions.
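
Conceptually, each HTTP chunk is turned into its own small fMP4 fragment and appended to Media Source Extensions right away. A minimal sketch, where remuxChunk is a hypothetical stand-in for the hls.js demux/remux path:

// remuxChunk is hypothetical: it turns one HTTP chunk of MPEG-TS data
// into a small moof+mdat fragment.
declare function remuxChunk(tsChunk: Uint8Array): Uint8Array;

function appendChunk(sourceBuffer: SourceBuffer, tsChunk: Uint8Array): void {
  const fragment = remuxChunk(tsChunk);
  // Appending per-chunk fragments lets playback start before the TS segment is complete.
  // A real implementation must wait for the 'updateend' event between appendBuffer calls.
  sourceBuffer.appendBuffer(fragment);
}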

There is another thing to consider. In contrast to fMP4, MPEG-TS contains timestamps for each sample but no durations. We cannot determine the duration of the last sample of an HTTP chunk until the next HTTP chunk arrives, so the last sample of each chunk needs to be deferred and passed along with the samples of the next chunk.
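
A simplified sketch of that bookkeeping, computing each sample's duration from the next sample's timestamp and carrying the last sample over to the next chunk (the names are illustrative, not the actual hls.js internals):

interface Sample {
  pts: number;        // presentation timestamp parsed from the TS packets
  data: Uint8Array;
  duration?: number;  // filled in once the next sample's pts is known
}

let pendingSample: Sample | null = null; // last sample of the previous chunk

function prepareSamplesForRemux(chunkSamples: Sample[]): Sample[] {
  const samples = pendingSample ? [pendingSample, ...chunkSamples] : [...chunkSamples];
  // duration of sample i = pts of sample i+1 - pts of sample i
  for (let i = 0; i < samples.length - 1; i++) {
    samples[i].duration = samples[i + 1].pts - samples[i].pts;
  }
  // The last sample has no known duration yet; defer it to the next chunk.
  pendingSample = samples.pop() ?? null;
  return samples; // these samples go into the moof/mdat for this chunk
}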

Custom HLS tag

Here is an example of a playlist at our service.

#EXTM3U
#EXT-X-VERSION:6
#EXT-X-MEDIA-SEQUENCE:30193
#EXT-X-TARGETDURATION:3
#EXTINF:2.002,
/media_30193.ts
#EXTINF:2.002,
/media_30194.ts
#EXTINF:2.002,
/media_30195.ts
#EXTINF:2.002,
/media_30196.ts
#EXTINF:2.002,
/media_30197.ts
#EXTINF:2.002,
/media_30198.ts
#EXT-X-FRESH-IS-COMING
#EXTINF:2.000,
/media_30199.ts
#EXT-X-FRESH-IS-COMING
#EXTINF:2.000,
/media_30200.ts
#EXT-X-FRESH-IS-COMING
#EXTINF:2.000,
/media_30201.ts

We added the custom HLS tag EXT-X-FRESH-IS-COMING for two reasons. The first is to let hls.js know whether a playlist is regular HLS or LHLS: if the playlist includes the custom tag, hls.js runs as an LHLS player.

The second is to let hls.js know which segment it should start downloading from. The custom tag indicates that the segment is not yet complete on the streaming server, and hls.js needs to start downloading incomplete segments in order to receive the newest video frames progressively.
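
A simplified sketch of how a playlist parser can use the tag to switch to LHLS mode and to pick the first incomplete segment to start downloading (the real hls.js playlist parser is more involved):

// Detect LHLS mode and find the first segment that is still being written
// on the server, i.e. the first URI preceded by #EXT-X-FRESH-IS-COMING.
function scanLhlsPlaylist(playlist: string): { isLhls: boolean; firstIncompleteUri: string | null } {
  const lines = playlist.split('\n').map((line) => line.trim());
  let isLhls = false;
  let nextUriIsIncomplete = false;
  let firstIncompleteUri: string | null = null;

  for (const line of lines) {
    if (line === '#EXT-X-FRESH-IS-COMING') {
      isLhls = true;
      nextUriIsIncomplete = true;
    } else if (line.length > 0 && !line.startsWith('#') && nextUriIsIncomplete) {
      if (firstIncompleteUri === null) {
        firstIncompleteUri = line; // start progressive download from this segment
      }
      nextUriIsIncomplete = false;
    }
  }
  return { isLhls, firstIncompleteUri };
}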

Adjusting buffering time

With LHLS, the latency no longer depends on the segment duration. If we made both the server and client buffering times as small as possible, the latency could drop below a second. However, that does not work well on the public internet because of network jitter and other factors.

A player needs an appropriate amount of buffering, and the initial buffering time varies depending on when the player starts downloading.

Therefore, the buffer needs to be adjusted to an appropriate size. It can be adjusted with an HTML5 video seek (increasing or decreasing currentTime) or with slow/fast playback (decreasing or increasing playbackRate). The seek method does not work on some browsers and the slow/fast playback method does not work on others, so we use different methods depending on the browser.
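
As an illustration, the slow/fast playback method can be sketched as follows; the target and tolerance values here are placeholders, not the ones we use in production:

// Nudge playbackRate to keep the buffer (and therefore the latency) near a target.
function adjustBuffering(video: HTMLVideoElement, targetBuffer = 1.5, tolerance = 0.5): void {
  if (video.buffered.length === 0) {
    return;
  }
  const bufferAhead = video.buffered.end(video.buffered.length - 1) - video.currentTime;

  if (bufferAhead > targetBuffer + tolerance) {
    video.playbackRate = 1.1;  // play slightly faster to drain excess buffer and cut latency
  } else if (bufferAhead < targetBuffer - tolerance) {
    video.playbackRate = 0.9;  // play slightly slower to let the buffer refill
  } else {
    video.playbackRate = 1.0;
  }
}

On browsers where changing playbackRate is unreliable, the same correction can be applied as a small currentTime seek instead.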

There is another thing to consider: some browsers require a certain amount of buffered data before they start playing, so we also use different buffering times depending on the browser.

https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/11147314/

Conclusion and Future Work

We have implemented low-latency HLS on hls.js, and it has been working well on our streaming service, FRESH!.

We put our implementation here.

hls.js estimates bandwidth to select the appropriate playlist. However, bandwidth is harder to estimate with chunked transfer, so we are currently using a workaround to perform pseudo ABR. We need to find a better way in the future.

It is difficult to find the smallest player buffering time that does not cause stalls, since it depends on network jitter on both the broadcaster's and the viewer's side, among other factors. We currently switch between two buffering times depending on whether the viewer's bandwidth is above a certain value. We need to figure out a better approach.
