Towards crashless multimedia playback

Kieran Kunhya
3 min read · Jun 5, 2016


At $dayjob I’ve written about how we fixed a number of crashes and hangs triggered by feeding corrupted data to various codecs, found by fuzzing FFmpeg with a couple of CPUs. Since then we’ve significantly increased the CPU resources behind our fuzzing jobs, fuzzing both FFmpeg and Upipe and fixing more issues in the process. The majority of the fuzzing was done with American Fuzzy Lop (AFL).

The fuzzing method had a number of issues:

  • It required repeated execution of the ffmpeg executable, paying for a costly fork system call on every input, which reduced the overall fuzz speed.
  • It depended on ffmpeg.c, a complex tool of more than 4000 lines that may have masked issues seen by API users.
  • It was hard to build a suitable test corpus. With something like H.264 it’s easy to make samples using x264 with many coding features, but we can’t exercise everything, especially while staying within AFL’s 1 MB file constraint. For example, to exercise sliced threads properly, a slice-threads-compatible file needs to be made (see the example command below).
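
As an illustration, a small slice-threaded sample could be produced along these lines (a sketch: the thread/slice counts, frame count and quantiser are just assumptions to keep the file under AFL’s size limit, and input.y4m is a placeholder source):

x264 --sliced-threads --threads 4 --slices 4 --frames 50 --qp 35 -o slices.h264 input.y4m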

And so, with some additions from Andreas Cadhalpun, I modified the standard decode example to use AFL’s persistent mode. This mode uses LLVM/clang instrumentation to allow many fuzz iterations within a single execution, drastically speeding things up:

while (__AFL_LOOP(1000)) {
    /* Read input data. */
    /* Call library code to be fuzzed. */
    /* Reset state. */
}

/* Exit normally. */
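
To make that concrete, here is a minimal sketch of what such a persistent-mode harness can look like when driving libavcodec directly. It is an illustration rather than the actual fffuzz source: the stdin input handling, the 1 MB buffer and the use of the send/receive decode API are assumptions made to keep the example short.

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    /* __AFL_LOOP is provided when building with afl-clang-fast. */
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    static uint8_t buf[1 << 20]; /* 1 MB, matching AFL's input size limit. */

    while (__AFL_LOOP(1000)) {
        /* Read input data: one fuzz case per iteration, here from stdin. */
        size_t size = fread(buf, 1, sizeof(buf), stdin);

        /* Call library code to be fuzzed. */
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        if (ctx && pkt && frame && avcodec_open2(ctx, codec, NULL) >= 0) {
            pkt->data = buf;
            pkt->size = size;
            /* Feed the whole input as one packet and drain decoded frames. */
            if (avcodec_send_packet(ctx, pkt) >= 0)
                while (avcodec_receive_frame(ctx, frame) >= 0)
                    ;
        }

        /* Reset state so the next iteration starts from scratch. */
        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
    }

    return 0; /* Exit normally. */
}

Allocating and freeing the codec context inside the loop keeps each iteration independent of the previous one, which is exactly the property persistent mode needs.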

This implementation was a case of us “eating our own dogfood” and testing the API directly, building on the work done by my previous Outreachy student Ludmila Glinskih. However, it basically sat on GitHub for six months because of a lack of time to pursue the project. At the recent VideoLAN Association Meeting in Vienna I was lucky enough to meet Pawel Golinski, who wrote some small additions to fffuzz as part of his Google Summer of Code application. This reminded me that I hadn’t documented it, nor applied his patches :)

fffuzz is easy to use, literally:

./fffuzz file.h264 /dev/null

Running it under AFL (remembering to use clang/LLVM and to install afl-clang-fast) produces a very lovely output.
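
For reference, the build-and-run sequence is roughly the following (a sketch: fffuzz.c and the corpus/findings directory names are placeholders, and the link line depends on how FFmpeg was built). AFL replaces @@ with the path of each generated test case:

afl-clang-fast fffuzz.c -o fffuzz $(pkg-config --cflags --libs libavformat libavcodec libavutil)
afl-fuzz -i corpus -o findings ./fffuzz @@ /dev/null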

This approach has been successful but also has a major drawback:

It can’t track changes in codecs, so we don’t know whether a particular commit introduces a fuzz crash. In the FATE regression test suite we can see whether commits break working samples, but it’s much harder to see whether other crashes are introduced. Ideally we’d like information extracted from each commit (e.g. from the commit message and the changed files) and a suitable corpus fuzzed in response. Changes to H.264 would then cause an H.264 corpus to be fuzzed, instead of a JPEG corpus that would have no relevance. This is something we’d like to do internally at $dayjob or via a programme like GSoC or Outreachy. (If you are interested in being hired directly or via an open-source scheme to work on this, please let me know.)

There are also problems with codecs like FFv1 and Teletext which don’t have a raw format (i.e. they have to be muxed into a container like AVI or TS): the fuzzing process would terminate early because of error detection in the container, so the codec itself would never actually get fuzzed. None of this fuzzing attempted to fuzz demuxing, which is a separate problem. However, Upipe’s framers, components designed to build complete frames out of dispersed data, were fuzzed and issues were fixed.

So fuzzing FFmpeg is now easy to do, and I hope many bugs are found using fffuzz. It’s available here: http://github.com/openbroadcastsystems/fffuzz
