Testing WebRTC clients in constrained network environments

Vittorio Palmisano
Apr 7, 2023


Introduction

Evaluating audio/video quality is an important topic for video conferencing and live streaming services because it has a significant impact on the user experience. Poor audio or video quality leads to frustration, difficulty understanding others, and reduced engagement. On the other hand, high-quality audio and video enhance communication and create a more immersive and engaging experience for users.

Several factors can affect the audio and video quality of these services, including Internet bandwidth, network latency, device hardware and software, and the platform or software used for the conference or stream. Evaluating these factors and finding ways to optimize them helps to improve the quality of the service.

Testing and measuring

In this article we will see how to test and measure some popular video conferencing services using the https://github.com/vpalmisano/webrtcperf tool. Its configuration options make it possible to run multiple WebRTC clients, apply network constraints (bandwidth, latency, packet loss) and measure the performance indicators reported in the tool output (sent/received bitrate, packet loss, video resolution, jitter buffer, etc.).

Prerequisites

  • Linux machine with Docker installed;
  • a good Internet connection to avoid conditioning the test results;
  • an amount of CPU/memory proportional to the number of clients that we want to run.
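The last point can be turned into a quick sanity check before starting a run. A minimal sketch (the one-core-per-client ratio used here is only an assumption for illustration; actual CPU usage depends on resolution, codecs and the number of received streams):

```shell
# Rough capacity check: compare the number of clients we plan to start
# against the CPU cores available on this machine.
check_capacity() {
  sessions=$1
  cores=$2
  if [ "$cores" -lt "$sessions" ]; then
    echo "insufficient"
  else
    echo "ok"
  fi
}

# Example: 3 planned sessions against the cores on this machine.
check_capacity 3 "$(nproc)"
```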

Testing the Jitsi service

Running a test with 3 participants connecting to a Jitsi video conference service (set a valid ROOM_NAME environment variable before starting):

# Start the first two participants without network limitations.
docker run -it --rm \
-v /dev/shm:/dev/shm \
-v $PWD/.webrtcperf:/root/.webrtcperf \
ghcr.io/vpalmisano/webrtcperf \
--url=https://meet.jit.si/${ROOM_NAME} \
--url-query='#config.prejoinPageEnabled=false' \
--sessions=2 \
--stats-interval=5

# Start the third participant with the downstream limited to 500 Kbps, 50 ms RTT.
sudo modprobe ifb numifbs=1 # Required only on first run.
docker run -it --rm \
-v /dev/shm:/dev/shm \
-v $PWD/.webrtcperf:/root/.webrtcperf \
--cap-add=NET_ADMIN \
ghcr.io/vpalmisano/webrtcperf \
--url=https://meet.jit.si/${ROOM_NAME} \
--url-query='#config.prejoinPageEnabled=false' \
--sessions=1 \
--stats-interval=5 \
--throttle-config='{down:[{protocol:"udp",rate:500,rtt:50,loss:"0%",queue:50}]}'
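A note on the --throttle-config value used above: each entry in the down array is a throttling rule, with rate in Kbps, rtt in milliseconds, loss as a packet loss percentage, and queue as a queue length (the exact semantics of queue are an assumption here; check the tool documentation). As a sketch, a hypothetical shell helper composing such an entry:

```shell
# Hypothetical helper that builds one throttle rule string.
# $1=rate (Kbps), $2=rtt (ms), $3=loss, $4=queue, $5=activation time in seconds (optional)
throttle_rule() {
  rule="{protocol:\"udp\",rate:$1,rtt:$2,loss:\"$3\",queue:$4"
  if [ -n "$5" ]; then
    rule="$rule,at:$5"
  fi
  echo "$rule}"
}

throttle_rule 500 50 0% 50
# → {protocol:"udp",rate:500,rtt:50,loss:"0%",queue:50}
```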

Example output from the first two participants:

  • 2 video streams sent at 1280x720, 25 fps, 1060 Kbps
                          name    count      sum     mean   stddev       5p      95p      min      max
-- Outbound video ----------------------------------------------------------------------------------
                          sent        2    10.16     5.08     0.04     5.04     5.12     5.04     5.12 MB
                          rate        2  2120.52  1060.26    88.03   972.23  1148.29   972.23  1148.29 Kbps
                          lost        2              0.00     0.00     0.00     0.00     0.00     0.00 %
                 roundTripTime        2             0.040    0.001    0.038    0.041    0.038    0.041 s
 qualityLimitResolutionChanges        2        5        2        0        2        3        2        3
          qualityLimitationCpu        2        0        0        0        0        0        0        0 %
    qualityLimitationBandwidth        2        0        0        0        0        0        0        0 %
                sentMaxBitrate        2     0.00     0.00     0.00     0.00     0.00     0.00     0.00 Kbps
                         width        2              1280        0     1280     1280     1280     1280 px
                        height        2               720        0      720      720      720      720 px
                           fps        2                25        0       25       25       25       25 fps
              pliCountReceived        2                11        1       10       13       10       13
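A note on reading these tables: each row aggregates one metric across the running sessions, and for cumulative metrics the mean column is simply the sum divided by the count. Checking the rate row above with awk:

```shell
# For the "rate" row: mean = sum / count (2120.52 Kbps over 2 sessions).
echo "rate 2 2120.52" | awk '{ printf "%.2f\n", $3 / $2 }'
# → 1060.26
```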

Example output from the third participant (with limited downstream):

  • 2 video streams received at 320x180, 13–20 fps, 33–102 Kbps
                          name    count      sum     mean   stddev       5p      95p      min      max
-- Inbound video -----------------------------------------------------------------------------------
                      received        3     0.57     0.19     0.13     0.00     0.31     0.00     0.31 MB
                          rate        3   223.46    74.49    29.80    33.17   102.37    33.17   102.37 Kbps
                          lost        3              0.11     0.15     0.00     0.32     0.00     0.32 %
                        jitter        3              0.02     0.00     0.01     0.02     0.01     0.02 s
          avgJitterBufferDelay        2            115.19    10.11   105.09   125.30   105.09   125.30 ms
                         width        2               320        0      320      320      320      320 px
                        height        2               180        0      180      180      180      180 px
                           fps        2                16        3       13       20       13       20 fps

We can send the metrics to an external Prometheus Pushgateway service using the `--prometheus-pushgateway` command line option. Let's change the third participant's command line:

docker run -it --rm \
-v /dev/shm:/dev/shm \
-v $PWD/.webrtcperf:/root/.webrtcperf \
--cap-add=NET_ADMIN \
ghcr.io/vpalmisano/webrtcperf \
--url=https://meet.jit.si/${ROOM_NAME} \
--url-query='#config.prejoinPageEnabled=false' \
--sessions=1 \
--stats-interval=5 \
--prometheus-pushgateway=http://${PUSHGATEWAY_HOST} \
--throttle-config='{down:[{protocol:"udp",rate:1000,rtt:50,loss:"0%",queue:50},{protocol:"udp",rate:500,rtt:50,loss:"0%",queue:50,at:180},{protocol:"udp",rate:1000,rtt:50,loss:"0%",queue:50,at:240}]}'

We start by limiting the downstream bandwidth to 1000 Kbps, decrease it to 500 Kbps after 3 minutes (at: 180 seconds) and restore it to 1000 Kbps one minute later (at: 240 seconds). Using Grafana we can obtain a graphical visualization of the collected metrics:
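The resulting bandwidth schedule can be sketched as a simple step function (t is in seconds from the start of the test; a hypothetical helper, not part of the tool):

```shell
# Downstream limit applied over time, derived from the throttle configuration above.
rate_at() {
  t=$1
  if [ "$t" -ge 240 ]; then
    echo 1000   # capacity restored
  elif [ "$t" -ge 180 ]; then
    echo 500    # throttled window
  else
    echo 1000   # initial limit
  fi
}

rate_at 200
# → 500
```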

Grafana view of some received video metrics.

From the Grafana view we can see that:

  • The total received video bitrate starts at ~700 Kbps, drops to ~300 Kbps when the downstream is throttled to 500 Kbps, and rises again when the link capacity is restored.
  • The average and 5th percentile values of the bitrate, width/height and framerate of the streams received from the remote participants show that the two remote video streams are received with different resolution, framerate and bitrate.
  • The Jitsi application adapts the video resolution delivered to the bandwidth-limited participant, tracking the available link capacity.

Conclusions

In this short article we showed how to test a popular video conferencing platform in a locally controlled network environment, in order to observe how the application adapts to network capacity variations and how those variations affect the received video quality.

More articles will follow, explaining how to test other popular video conferencing services and how to customize the webrtcperf tool configuration.

