Sonification for DevOps

While perusing the internet, I ran across an article in the Economist about sonification and how it could be used to surface new insights in scientific data. Sonification is the use of non-speech audio to convey information or perceptualize data. The human ear is remarkably good at picking up subtleties in timbre, pitch, attack, and velocity. So when scientific data is converted into sound, humans can detect patterns and anomalies that are hard (or impossible) to spot in a visual representation. I thought it would be cool to apply the same principle to DevOps.

During my career as a software engineer, I have built many web apps that have to be available 24/7, and I have been partially responsible for keeping them up and running. I have plenty of war stories of unexpected events at all hours of the day and having to react quickly to get everything back to normal. The more tools you have to monitor how your system is performing, the easier it is to figure out what is going on. So why not one more, one that translates your system performance metrics into sound?

I always find it interesting that the office of a SaaS company sounds the same when the system is running smoothly as when the product is on fire. The support team might be typing faster in chat and more engineers might be huddled around a computer, but it's mostly the same. For the people in charge of keeping the system running this can be unnerving, and they often feel compelled to keep one eye on what they're working on and another on a system performance dashboard. That's not very productive, or fun.

When I started prototyping what is now this project, my idea was that a good sound to represent system performance would be an engine room mixed with the control room of a nuclear plant. When everything is running great, you hear the gentle hum of the engines; when all hell breaks loose, the engines are loud and alarms start going off.

While researching how to generate audio, I ran across HTML5 audio and how you can generate sounds in a browser. I found MIDI.js, a library that can take a MIDI file and render music in the browser. MIDI files have been around since the early 80s; they were how early games generated music and how electronic keyboards produced notes. I could change the pitch and volume of sounds by passing different notes to MIDI.js. This was perfect for getting my prototype up and running.
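A minimal sketch of what driving MIDI.js looks like. The `loadPlugin`/`noteOn`/`noteOff` calls are MIDI.js's standard API; the `toVelocity` helper and the soundfont path are my own illustrative additions, not the actual project code:

```javascript
// Clamp a 0–1 level into the MIDI velocity range (0–127).
// Pure helper, so it runs anywhere; the MIDI.js calls below
// only work in a browser with the library loaded.
function toVelocity(level) {
  const v = Math.round(level * 127);
  return Math.min(127, Math.max(0, v));
}

if (typeof MIDI !== "undefined") {
  MIDI.loadPlugin({
    soundfontUrl: "./soundfont/",        // hypothetical folder for the soundfont
    instrument: "acoustic_grand_piano",
    onsuccess: function () {
      const channel = 0;
      const note = 60;                   // middle C
      MIDI.noteOn(channel, note, toVelocity(0.5), 0);
      MIDI.noteOff(channel, note, 1);    // release one second later
    }
  });
}
```

Raising the note number or the velocity is all it takes to make the sound more urgent, which is the whole trick the rest of the system builds on.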

I found an engine room audio clip and generated a soundfont from it using Polyphone for Mac. Think of soundfonts as instruments in an orchestra: you would have one soundfont for a violin and another for the oboe. The MIDI file contains the notes and chords played by each instrument in the composition.

I then started a quick Node.js project to get things going. I used Express.js to serve the HTML, with WebSockets for real-time communication with the backend. I wrote a simple script that polled application performance metrics every few seconds and pushed the values to the browser. The browser then took the values and calculated which note, instrument, and volume to play. The initial results were good, and I had it running on a Raspberry Pi in my office.
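The browser-side mapping step might look something like this. Everything here is hypothetical: the function name, the thresholds, the two-octave pitch range, and the instrument names are illustrative choices, not the values the actual project uses:

```javascript
// Hypothetical sketch: turn a polled error rate (0–1) into a
// note, velocity, and instrument for the sonification layer.
function metricToSound(errorRate) {
  const clamped = Math.min(1, Math.max(0, errorRate));
  return {
    // Pitch climbs from a low hum (note 36) up two octaves as things degrade.
    note: 36 + Math.round(clamped * 24),
    // Volume scales with severity toward the top of the MIDI range (127).
    velocity: Math.round(40 + clamped * 87),
    // Past a threshold, switch from the engine hum to an alarm soundfont.
    instrument: clamped > 0.8 ? "alarm" : "engine_room"
  };
}
```

The result of each poll would then be fed straight into MIDI.js's `noteOn`, so a healthy system stays a quiet low rumble and a degrading one audibly climbs in pitch and volume.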

I had so much fun making it, and it helped detect some system problems before they materialized for end customers, so I decided to take it a step further and make it more usable for other people. This took waaay longer than I thought (it always does). I had the basic prototype working in my spare time within a couple of weeks; making the MVP was a slow process that took around six months, but I finally finished it. It's very niche, but I think other people might want to play around with it. I have no plans to charge for using it, but if there is enough interest I might open source it (I need to add comments and more test coverage).

To use it, sign up for an account at and check out the Getting Started Guide.