globo.com’s live video platform for the 2014 FIFA World Cup

I recently gave a talk about globo.com’s live video platform at nginx.conf with my co-worker and friend Leandro Ribeiro. He has already shared the slides, but I thought it might be useful to share this article for those who were interested but couldn’t see the presentation. I initially wrote this document as a speech, so that if I panicked, at least I’d have something to read. This was the first time I spoke at an international event, in English. In the end, I did not use anything I wrote, but I guess the talk was not that bad, and we were very happy to receive some positive feedback.

globo who?

This article describes how globo.com used Nginx to deliver live video to almost half a million people during the 2014 FIFA World Cup. The traffic at peak hit 580Gbps and represented more than one third of the total internet usage in Brazil. We were invited to speak about this experience at nginx.conf after Leandro wrote a blog post about our architecture and stack.

To explain how we got there, we should first go back to 2010, when the previous World Cup took place. Better yet, to 1925, the year Globo was founded in Rio de Janeiro by Irineu Marinho as a publisher of two newspapers. In 1944, it launched a radio station and, in 1965, a TV network. Today, Grupo Globo is the largest mass media group in Latin America. globo.com was launched in 2000 and is an independent company responsible for the internet operations of the group.

If you are not from Brazil, you may never have heard of Globo before, but it’s really huge there. In 2010, neither Leandro nor I had joined yet, but globo.com was already a big company with a big audience, and it broadcast the 2010 FIFA World Cup to 285 thousand simultaneous users. For that, we used Flash Media Server (FMS, now known as Adobe Media Server or AMS). The player was written in Flash. The best quality delivered was 800kbps, which would be considered bad quality nowadays, but was a lot for that time, especially for the typical broadband connection in Brazil. The numbers were very impressive for the time, but in reality it was not a good experience for users. The video kept buffering, people were disconnected all the time, and the average bitrate was really bad.

growing pains

In 2012, when I joined globo.com’s live video team, the main problem with the platform was that it did not scale well. By 2012, people were already used to watching video on the internet with much better quality than what we had delivered in 2010. We tried to add higher bitrates, but we failed to stream video in HD for large events using FMS.

The protocol we were using was RTMP, a stateful protocol, which makes it hard to keep the load evenly distributed across servers. If a server failed, its users would try to connect to another server and would not come back when it restarted. Under heavy load, a server would suddenly crash, and the other servers would fail in cascade because of the additional load. This happened several times, as can be seen in the chart below, which shows the total audience for a stream.

We upgraded our servers, we contacted Adobe, we tried different topologies, we did everything we could, but every main event was a pain. Any event with more than 60k concurrent users was a huge stress. Basically, any football match. Moreover, since FMS is proprietary software, we did not have many tools to debug what was going on. For troubleshooting, all we had were FMS’s logs and Wireshark to inspect RTMP, which is really hard.

While we were struggling with that, the world was watching the exponential growth of mobile phones, and we were still unable to stream to them. The most popular mobile operating systems, iOS and Android, do not understand the RTMP protocol. We decided to tackle this problem first, since we were going to deliver a paid stream for the Big Brother Brazil show and the company had already announced it would support mobile devices.

mobile support

The protocol choice was a no-brainer. HLS was created by Apple and was the only protocol supported by the iPhone and iPad. Luckily, it was also supported by Android. HLS is a protocol based on HTTP for delivering live and on-demand video streams. It uses a text manifest to describe a video and its streams (usually one for each quality). Each stream is described by a text playlist file that lists the video segments, or chunks. Each chunk usually has a duration of 2s to 8s. The player picks the best quality it can for the bandwidth it has and plays the chunks in order. If it is a live playlist, the player downloads the playlist file periodically to get the list of new chunks as they are created.
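To make this concrete, here is what a minimal pair of HLS playlists could look like (the names, bitrates, and durations below are illustrative, not our actual streams). First the master manifest, with one entry per quality:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
medium/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
high/playlist.m3u8
```

And a media playlist for one quality, describing a sliding window of chunks:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:1001
#EXTINF:4.0,
chunk-1001.ts
#EXTINF:4.0,
chunk-1002.ts
#EXTINF:4.0,
chunk-1003.ts
```

On each refresh of a live playlist, the media sequence number advances and old chunks fall off the top of the window.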

In a few weeks, we were able to create a working prototype using FMS and Nginx. We used FMS to segment the video streams and create the playlist files, and Nginx for caching. Later on, we added Lua and C modules to Nginx for authentication and access control (we restrict the number of simultaneous streams per account and by geolocation).
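The shape of that setup can be sketched in an Nginx config like the one below. This is an illustrative sketch, not our actual configuration: the upstream name, cache zone, and Lua script path are all hypothetical. The key idea is that playlists are cached very briefly (they change every few seconds) while chunks, which are immutable once written, can be cached for much longer:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=hls:10m max_size=10g;

server {
    listen 80;

    location ~ \.m3u8$ {
        # session and geolocation checks (hypothetical script path)
        access_by_lua_file /etc/nginx/lua/auth.lua;
        proxy_pass http://fms_upstream;
        proxy_cache hls;
        proxy_cache_valid 200 1s;   # live playlists change constantly
    }

    location ~ \.ts$ {
        proxy_pass http://fms_upstream;
        proxy_cache hls;
        proxy_cache_valid 200 10m;  # chunks never change once published
    }
}
```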

BBB was a success. We had no major issues with the 10 different streams that ran 24/7 during the 3 months of the show. We had a couple of issues with the RTMP stream under load, but the HLS stream was much more resilient. Even better, we could easily see and measure what was going on, because it was just plain HTTP and Nginx.

one protocol to rule them all

In 2013, we were going to broadcast the “Copa das Confederações”, and we knew it would be the main test before the World Cup in 2014. We were very happy with the HLS streams, but we still could not solve the issues with RTMP. We had a crazy idea (at the time): try to play HLS on the desktop as well, so we would have one single protocol to care about, and one that actually worked and was easy to scale and debug.

We had other goals at the time, but we started working on the player in our own spare time, and after a few weeks we had a working prototype we could show to management to convince them this could work. The only problem with HLS is that the latency is huge: from 2s-5s in RTMP, it grows to 10s-20s in HLS. By keeping the segments short, we were able to minimize the problem, and the additional number of requests was not a problem for Nginx in our tests.

We ran some A/B tests, and the users on HLS watched the streams for longer periods, with better quality and fewer switches between bitrates. For the Copa das Confederações, we replaced FMS with EvoStream and invested a lot in instrumentation and monitoring. It was another huge success. We were able to deliver video to 380k concurrent users.


In 2014, with only six months to go before the World Cup, a new team was created to develop a new player for the event. We wanted a player with a better user experience, and we decided DVR should be part of that experience. The new team ended up creating clappr, an extensible web player.

DVR, short for Digital Video Recording, is the technical term for letting users pause, rewind, and fast-forward live video. Under the hood, the server records the video and the player simply plays the stream from a different starting point.

We built a simple Python application that moved the video segments from the segmenter into Redis, and we developed a Lua application in Nginx to create the playlists dynamically and serve the chunks from Redis. We also used Redis to store video thumbnails for every couple of seconds of video, so the player could show a thumbnail in the seek bar.
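The playlist-rendering step is simple enough to sketch. In production this logic lived in Lua inside Nginx and read the chunk list from Redis; the Python function below is only an illustration of the rendering itself, with the chunk list passed in directly (all names are made up for the example):

```python
def render_live_playlist(chunks, target_duration=4):
    """Render an HLS media playlist for a live stream.

    chunks: list of (sequence_number, duration_seconds) for the
    chunks currently in the sliding window, oldest first.
    """
    first_seq = chunks[0][0]
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        # the sequence number of the first chunk tells the player
        # how far the window has slid since its last refresh
        f"#EXT-X-MEDIA-SEQUENCE:{first_seq}",
    ]
    for seq, duration in chunks:
        lines.append(f"#EXTINF:{duration:.1f},")
        lines.append(f"chunk-{seq}.ts")
    return "\n".join(lines) + "\n"
```

For a DVR request, the same idea applies: the starting sequence number is derived from the timestamp the user seeked to, and the window is rendered from there.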

load testing

The hard thing about live events is that you only have one chance to get them right. If something goes wrong, there is very little time to fix it, and even if you manage to correct the issue, you may already have lost your audience. This is even harder when you have your own datacenter and your own servers: we need to forecast demand and order servers months in advance. For historical and economic reasons, we do not use any cloud provider. The infrastructure for this kind of service in Brazil is (was?) not that great. We believe we are still larger than Akamai in Brazil, although we only have 2 datacenters. AWS opened a datacenter in Brazil just a few years ago.

We use Avalanche for capacity testing, but it’s hard to simulate the full scale of a transmission. Luckily, we were going to stream some big events before the World Cup, such as the Champions League, so we used the opportunity to battle-test our platform. Our strategy was to use the smallest number of servers that would handle the traffic, and only add more when we identified a bottleneck.

One of the things we noticed during one of those games was that the upstream layer sometimes received a burst of requests and the response time started to increase. And the higher the response time got, the more frequent those bursts became. At some point, it got so bad that we had to turn DVR off for one of the matches. The day after, we finally understood what was going on: we had not configured the cache lock properly. When the cached playlist expired on a front-end server, several requests were sent upstream concurrently until one of them got a response. We had to explicitly tell Nginx to deliver the “stale” cache while one request was “updating”. The solution is very simple. Just use the following directive:

proxy_cache_use_stale updating;
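In context, the relevant part of the playlist location would look something like the sketch below (zone and upstream names are illustrative). `proxy_cache_lock` collapses concurrent cache misses into a single upstream request, and `proxy_cache_use_stale updating` serves the old playlist to everyone else while that one request refreshes it:

```nginx
location ~ \.m3u8$ {
    proxy_pass http://hls_upstream;
    proxy_cache hls;
    proxy_cache_valid 200 1s;
    proxy_cache_lock on;              # only one request per key goes upstream
    proxy_cache_use_stale updating;   # everyone else gets the stale playlist
}
```

For a live playlist that changes every few seconds anyway, serving a slightly stale copy is invisible to users, while the burst of duplicate upstream requests is eliminated entirely.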

Another thing we noticed in one of those games was that the core handling all the network interrupts was very busy, and we started to drop packets when we were streaming 6Gbps per server. We enabled irqbalance and increased the network card buffers. After that, we reached 9Gbps on a single node.

full throttle

For live events, it’s always hard to estimate how big your audience is going to be. For the 2014 FIFA World Cup, our dream was to reach one million concurrent users. globo.com has its own CDN and datacenters, and in the months before the World Cup we were upgrading our links with every major telecom in Brazil to get ready. We were the first company in Brazil to install a 100Gbps port, which connected us to the IXP in São Paulo.

The estimate was that we would have a total capacity of 1.6Tbps, but we had only 80 machines (each with 64GB of RAM, 24 cores, and 2 bonded 10Gbps network cards). The math was very simple: to deliver the full capacity of our network links, each node should be able to stream 20Gbps. The issue was that we had never crossed 10Gbps before. We went back to Avalanche, used CPU affinity to manually pin IRQs to specific cores, and finally reached 19Gbps, almost the full throughput of the network cards.

By the way, both NICs were bonded and listened on the same IP address. We used direct server return in our load balancers: the server listens on the public IP address and the load balancer only routes the incoming traffic to it; the server sends the response directly to the user.

One last thing we noticed was that the server was gzipping the playlists on demand for every single request, and this also became a bottleneck. We had to configure a cache for the compressed content.
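One way to do this, as a sketch (not necessarily the exact approach we used): compress the playlist once at the upstream layer, and have the edge include the client’s `Accept-Encoding` in the cache key so the gzipped and plain variants are cached as separate entries instead of being recompressed per request:

```nginx
location ~ \.m3u8$ {
    proxy_pass http://hls_upstream;     # upstream does the gzipping
    proxy_cache hls;
    proxy_cache_valid 200 1s;
    # cache compressed and uncompressed responses separately,
    # so the gzip work is done once per playlist refresh
    proxy_cache_key $scheme$host$uri$is_args$args$http_accept_encoding;
}
```

The trade-off is a slightly lower cache hit ratio (one entry per encoding variant) in exchange for removing CPU-bound compression from the request path.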

sorry, you are not on the list

So, according to our calculations, we would be able to deliver our installed network bandwidth, but we knew the load would not be spread equally across all providers. We thought about delivering the extra load through a commercial CDN, but we did not have the budget for it.

The solution we came up with was to measure the capacity of each link in near real time and put new users arriving from saturated links on a waiting list. That way, we would save the bandwidth for the users who were already streaming, instead of accepting more users than we could support and hurting the experience for everybody. Once again, we used Nginx for that. The player was modified to ask the waiting room service whether a new user was allowed to stream before it started streaming. The waiting room API was also developed with Nginx and Lua. A Ruby script dumped the exabgp database, with the IP routes for each link, into a Redis database. We used a Redis fork with interval sets to make the lookup by IP fast. The link capacities were obtained by talking to the routers over SNMP and persisted in the same database. The waiting room API just queried the database to check which link served the user’s IP and how much capacity it had left. All this is summarized in the diagram:
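The core of the lookup can be sketched like this. In production the routes came from exabgp and lived in a Redis fork with interval sets; in this Python illustration a sorted in-memory list stands in for that store, and all names and numbers are hypothetical:

```python
import bisect
import ipaddress


class LinkResolver:
    """Map a client IP to the telecom link that announces its route."""

    def __init__(self, routes):
        # routes: list of (cidr, link_name); for simplicity this sketch
        # assumes non-overlapping prefixes (no longest-prefix match)
        intervals = []
        for cidr, link in routes:
            net = ipaddress.ip_network(cidr)
            intervals.append(
                (int(net.network_address), int(net.broadcast_address), link)
            )
        self.intervals = sorted(intervals)
        self.starts = [start for start, _, _ in self.intervals]

    def link_for(self, ip):
        addr = int(ipaddress.ip_address(ip))
        # find the last interval starting at or before this address
        idx = bisect.bisect_right(self.starts, addr) - 1
        if idx >= 0:
            start, end, link = self.intervals[idx]
            if start <= addr <= end:
                return link
        return None


def admit(resolver, ip, usage_gbps, capacity_gbps):
    """Waiting-room decision: admit only if the user's link has headroom."""
    link = resolver.link_for(ip)
    if link is None:
        return True  # unknown route: fail open
    return usage_gbps.get(link, 0) < capacity_gbps.get(link, float("inf"))
```

A user arriving from a saturated link gets `False` here and is parked on the waiting list until the link's measured usage drops below its capacity.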

the world cup

Everything ran smoothly for us during the World Cup, except that Brazil lost by 7 x 1. Sometimes we even pinched ourselves, because everything worked so well. The numbers were not quite as high as we expected, mainly because everybody was watching the games on the big screen. If water consumption in Germany decreased during the matches, you can imagine what was happening in Brazil. I think the few people who were not watching the games were ourselves. In the end, we were very happy to break both our bandwidth record and our simultaneous users record. Both were broken during the match between Argentina and Switzerland. It was not a particularly great game, but it was one of the few match days that was not declared a holiday in the cities hosting the games, so people watched from work.


After the World Cup, we ended up making a few modifications to our platform. One of them was to replace Redis with Cassandra, for better redundancy guarantees and to be able to record more streams in parallel. We now serve more than 30 concurrent streams with a 2-hour DVR window and will probably reach 50 for the Olympics. We still love Redis, but Cassandra is awesome. We have already written about this if you want to know more.

We had to write a Cassandra driver for Nginx Lua, and it has seen some adoption. It was used to build Kong, which has some very impressive benchmarks.

future plans

We are really happy with what we have achieved in the years we worked on this platform but, of course, we always wish for more. Leandro and I have already left the live video team, but we know it’s in good hands.

One of the things they plan to work on is support for the MPEG-DASH protocol. We still need a Flash player to handle the HLS protocol in desktop browsers, and with DASH it’s possible to build a pure HTML5 player.

Another change already made was the adoption of nginx-rtmp to segment the streams. It is an awesome open source module that could completely replace EvoStream for us; it is missing just a couple of features we need.

This year they are going to stream the Olympics in Rio. Stay tuned!

Machine Learning Engineer at Google. Opinions are my own.