Algorithmic Beat Mapping in Unity: Intro

Jesse
Giant Scam
Feb 27, 2018 · 4 min read

With so many successful rhythm games hitting the market, beat mapping has become a popular topic in game development. Audiosurf originally defined the standard for algorithmic mapping, where a user can plug in nearly any song and get a playable map instantly. Dylan Fitterer, the creator of Audiosurf, later used it as the foundation for the VR rhythm game Audioshield. Other games in the genre such as Soundboxing and McOSU allow the user to generate beat maps either in-game or using an external tool. Holodance, another very successful VR rhythm game, supports curated, user-mapped, and algorithmically mapped content.

At Giant Scam we have attempted many variations of all of the above while working on our upcoming VR rhythm game “Chop It”. You can see progress videos of Chop It in our Game Dev Progress Playlist on YouTube. Most of the content we include is mapped using an internal tool, as this gives us the most control over the exact timing, orientation, and flow of the gameplay as it relates to the audio track. While we intend to continue releasing curated content, we wanted to explore what it would take to build an Audioshield/Audiosurf-style mode where a user can plug in and play new audio tracks, and we wanted to see what it would take to do it entirely in Unity. We certainly cannot claim to be as experienced as Dylan, and we don’t promise the perfect algorithm for every use case, but we were able to come up with some fairly convincing results for both real-time and preprocessed audio analysis. This is a best-effort shot at building a foundation that you can experiment with to make your next rhythm game, visualizer, or any other project you can dream up that uses beat detection.

I looked around a bit before deciding to write this series. When it comes to beat mapping in Unity, I could not find a comprehensive explanation of the methodology and the underlying logic. Allan Pichardo’s solution follows a different algorithm than the one I present here and is meant only for real-time analysis, but it may fit what some readers are looking for.

I followed two article series myself and implemented each algorithm individually in Unity. The first is the fairly popular Beat Detection Algorithm post on gamedev.net. This algorithm gave me very inconsistent results. I searched around a bit more and found an author who had the same issues with the Beat Detection Algorithm article, and who took a different approach that ended up working out very nicely.

The article is Onset Detection by Mario on the Bad Logic Games site. The entire article is worth a read, and in the early parts of these posts we will reiterate some of what Mario explains. We will then follow Mario’s algorithm for Onset Detection using Spectral Flux to a tee. Mario dives straight into preprocessing using Java. We’ll focus on real-time analysis in Unity C# first, then reference Mario when we get to preprocessing; the same Spectral Flux algorithm that he outlines applies in both cases.
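As a quick preview of the core idea before we get to the Unity C# versions, here is a minimal sketch of spectral flux in Python (the frame size, Hann window, and test signal are illustrative choices, not values from Mario’s article): for each FFT frame, we sum only the positive, bin-by-bin increases in magnitude relative to the previous frame, so the result spikes when new energy appears in the signal.

```python
import numpy as np

def spectral_flux(samples, frame_size=1024):
    """Rectified spectral flux, one value per FFT frame.

    samples: 1-D array of mono audio samples.
    Returns, for each frame, the sum of positive bin-to-bin
    magnitude increases versus the previous frame.
    """
    n_frames = len(samples) // frame_size
    prev_spectrum = np.zeros(frame_size // 2 + 1)
    flux = []
    for i in range(n_frames):
        frame = samples[i * frame_size:(i + 1) * frame_size]
        # Window the frame, then take the magnitude spectrum.
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_size)))
        diff = spectrum - prev_spectrum
        flux.append(diff[diff > 0].sum())  # keep only increases in energy
        prev_spectrum = spectrum
    return np.array(flux)

# A 440 Hz tone that starts halfway through an otherwise silent
# signal should produce a flux spike at the frame where it begins.
audio = np.zeros(8192)
t = np.arange(4096) / 44100.0
audio[4096:] = np.sin(2 * np.pi * 440.0 * t)
flux = spectral_flux(audio)
print(flux.argmax())  # frame index where the tone starts
```

The rectification (keeping only positive differences) is what makes this an *onset* detector rather than a general change detector: we care about energy arriving, not decaying.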

Going forward, it’s important to define what we mean by a “beat”. In our use case, we are trying to find points in the audio where something significant happens: a hit on a bass or snare drum, the pluck of a guitar string, a note played on a keyboard, a fluctuation in vocals, and so on. We will find ways to separate these different types of beats out, but to get us started we’ll just look for the most interesting thing happening at each point in time.
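“The most interesting thing happening at a point in time” can be made concrete with a peak-picking pass over the flux values. This is a hedged sketch of the general idea (the window size and multiplier here are arbitrary illustrative constants, not tuned values from the series): a frame counts as a beat candidate when its flux is a local maximum and clearly exceeds the average flux of its neighborhood.

```python
import numpy as np

def pick_onsets(flux, window=10, multiplier=1.5):
    """Flag frames whose flux stands out from its neighborhood.

    flux: per-frame spectral-flux values.
    A frame is an onset candidate when its flux exceeds
    `multiplier` times the mean flux of the surrounding window
    and it is a local maximum.
    """
    onsets = []
    for i in range(len(flux)):
        lo = max(0, i - window)
        hi = min(len(flux), i + window + 1)
        threshold = multiplier * np.mean(flux[lo:hi])
        # Require a local peak so one loud moment yields one onset.
        is_peak = (flux[i] >= flux[max(0, i - 1)]
                   and flux[i] >= flux[min(len(flux) - 1, i + 1)])
        if flux[i] > threshold and is_peak:
            onsets.append(i)
    return onsets

# A lone spike in otherwise quiet flux is the only onset found.
flux = np.array([0.1] * 20)
flux[10] = 5.0
print(pick_onsets(flux))  # → [10]
```

Using a windowed average as the threshold, rather than a fixed cutoff, is what lets the same code handle both quiet passages and loud choruses.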

I should mention that all testing was done in Unity 2017.3.0f3, but I believe it would work identically in Unity 5.

While the goal of this series is to arrive at a usable and scalable preprocessing solution, we will start with real-time analysis and work our way there. Real-time analysis can be achieved using fairly straightforward functionality in the Unity API, while preprocessing requires a bit of finagling. By the end we’ll have covered the methodology, with code samples, for both real-time and preprocessed beat mapping entirely within Unity. I’ll also post a public GitHub repo in the Outro with the full working solution for both methods.

When you’re ready, get started with the real-time analysis, then work your way into the preprocessing analysis after you’ve become comfortable with the terminology and the underlying logic. Please also read the associated articles that I link throughout. If you have questions, feel free to reach out at scambot@giantscam.com or on Twitter at www.twitter.com/giantscam. We’d love to see what you make using these algorithms. Thanks, and I hope you find this series useful and enjoyable!

Up next:
Real-time Audio Analysis Using the Unity API

Join our Mailing List: https://goo.gl/forms/3TApneFymVSVgjsZ2


Jesse
Lead Developer / Co-Founder of Giant Scam Industries