Death by Frosty the Snow-lass
The redesign project that pushed my sound design skills to new limits
I’m continually searching for ways to expand my sound design skills and often end up in unfamiliar territory. Previously, I wanted to practice implementing sounds into a video game, so I went through the trouble of designing a level from scratch, even to the point of creating custom 3D models and assets. I take portfolio feedback to heart, and recently decided to pursue a project that would really add some energy to my content: something that would demonstrate my capabilities and resourcefulness as a sound designer in a short, dynamic sequence.
Redesigning a sequence for Mortal Kombat offered a great opportunity to create sound with highly energetic and dynamic material. But more importantly, this was really tricky and challenging work! And in my experience, getting the sound right would mean a ton of trial and error and problem-solving on my part. Since I haven’t come across many resources that cover the sound design process from start to finish (without skipping the struggles and surprise breakthroughs along the way), here I’ll walk you through my process and experience redesigning the sequence I chose.
Before I started any recording sessions, I put in some study time with the latest Mortal Kombat, looking for a character with an appealing style and a story I could tell through sound. Then I made a plan for how I would edit, name, and master my recordings (I can’t over-emphasize how important this part of the process is!). When it was time to start recording and iterating, I made sure to capture all the little things I did to bring the project to a satisfying close. You’ll see what went into the first and second iterations of the design process, areas of improvement I identified while working, and get a look at comparison videos I made to demonstrate the differences between the results.
And if you’re more of a TLDR kind of reader, please use the section links below to navigate to the topics that interest you, because this article does get lengthy in an attempt to give a detailed view behind the mental curtain.
Jump to Section:
Note: If the links are taking you to the top of the page, right-click and open in a new tab. Internal links on Medium are finicky; my apologies for the inconvenience.
This section focuses on the research I performed before gathering recordings and starting the sound design process. I cover my thought process for selecting the source material to redesign and reference material I found. Believe it or not, this meant less time playing Mortal Kombat and more time watching and rewatching clips, obsessing over the sounds, stories, and experiences the game offers.
To start, I pulled up the character roster on the Mortal Kombat website and dove into the various characters, their backstories, and abilities. I owned and played several of the Mortal Kombat games while growing up, but I hadn’t touched the series in at least a decade.
After checking the roster, I spent some time watching character reveal trailers and gameplay videos. When reviewing potential source material, I had specific criteria in mind:
- The sequence should be 30 seconds or less
- The source video should be at least 720p and ideally 1080p
- The fatality should encourage a diverse sonic palette
- The material should get me excited about the sound design possibilities
Official channels like Mortal Kombat and GameSpot offered the most consistent video quality (1080p). Fan videos and streams are great resources for a deeper look at the gameplay, but they often lack the definition required to keep a redesign project clean and professional. The official promotional character reveal videos, on the other hand, were always high definition, featured a short demonstration of each character’s abilities, and ended with a fatality.
Initially, I planned to include regular combat that crescendoed into a fatality, but such a sequence turned out to be longer and more complicated than I was really looking for. By themselves, character fatalities are about fifteen seconds or less, which proved a better mix of length and challenge for me (i.e. really high challenge packed into a shorter duration). Fatalities are designed with thoughtful pacing and demonstrate the characters’ unique abilities — they tell a quick story, involve a flourish of character, and incorporate all kinds of strange and surprising sound details.
Below were my top three considerations for the source material.
Cetrion’s combat is exciting to watch. Her abilities use the four elements (earth, water, air, fire) and include attacks with aggressive plant growth. That sonic palette would be fun and challenging to work with. If all of her powers were featured in a single fatality, I would have almost certainly gone with that.
Jacqui Briggs’ fatality has energy, attitude, and a diverse palette. There’s gunfire, explosions, and a beating heart pressed against a force field. That’s hard to pass up.
But here’s the fatality I absolutely couldn’t resist working with:
Frost has some serious action and sonic diversity in her Cyber Initiative fatality. The sequence contains an ice beam, shattered frozen body, broken spinal cord, brain delivery drone, and mechanical cyborg assembly line. I decided to use this fatality for my redesign because it has all the features I’m looking for — under 15 seconds in length, 1080p source video, and an exciting and diverse range of sounds. Little did I know how pesky that drone and its elusive whoosh would be.
Working with reference material is invaluable, so I considered the different types of sound effects I would need and started looking for content that would give me a good starting point. The main sound effects groups I initially identified were combat foley, gore, ice and magic sounds, and cybernetics.
First, I wanted to get a better sense of Frost’s background. I remembered from previous games that her character was arrogant, ambitious, and sought to destroy Sub-Zero, who had banished her from their clan, the Lin Kuei, for attempting to overthrow him as leader. Understanding the character’s history would help inform my decisions during the recording and design stages.
Since Mortal Kombat features… combat, I researched sound design techniques that would help me get started. Looking back, this wasn’t entirely necessary since the fatality doesn’t contain much in the way of traditional fighting like punches, kicks, and throws. But it was a useful primer for working on this sequence, and I needed to think about the weight and physicality of fighting in the game.
What would Mortal Kombat be without comically absurd gore? I was already aware of the usual methods for simulating gore by crushing, mangling, and maiming different fruits and vegetables. I was certainly looking forward to my own gore recording sessions, but why not check out a video by the very sound designers who worked on the game?
And this video by the same designers is equally great, while maybe exposing my juvenile sense of humor.
Since Frost became a cyborg in Mortal Kombat 11, it made sense to look into different techniques for cybernetic and mechanical sounds. I came across the Respawn Entertainment Audio Spotlight on the sound design for the different Titans, which are mecha-style exoskeletons from the Titanfall series. While Frost’s cybernetic sounds would not be as large as the ones featured in the video, it was useful to hear the source materials and layering used during the sound design process.
Having finally selected the right fatality sequence and gathered the necessary reference material, it was time to move on to the recording sessions.
This section covers the recording sessions for gathering original sample material. I recorded my sounds directly into Reaper using the Sennheiser MKE 600 microphone and my Steinberg UR44 audio interface. I decided early to forgo recording my own ice samples because I knew of Alex Barnhart’s Frozen Sample Library. His recordings are incredibly well done, saved me loads of time, and freed me up to focus on the other sessions I needed to do.
The first day of recording was focused on gore. I used celery, lettuce, bell peppers, grapefruit, french baguette, potato chips, and mixed nuts in the shell. The bread was not quite as crisp as I wanted, so I threw it in the oven at 250 degrees Fahrenheit to slowly dry it out for maximum crunchiness. Also, I finally found a use for the extra can of cranberry sauce that had been sitting in the pantry for almost a year. I set up my recording space with tools, gloves, dish towels, and a paper bag for the waste.
One of my goals for this project was to improve the quality of my recordings, so I increased the sample rate to 192 kHz from my usual 44.1 kHz. Capturing more samples per second gives the recording much finer resolution. While this does increase the size of the audio files, I knew that I might need to heavily pitch down my samples, so I wanted to retain as much energy and fidelity as possible. I also set the bit depth to 32 bits, but noticed later that my audio interface only supports up to 24 bits. Whoops.
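To make the sample-rate reasoning concrete, here’s a quick back-of-the-envelope sketch (my own illustration, not part of my actual session setup). Content recorded up to the Nyquist frequency (half the sample rate) shifts down along with the pitch, so ultrasonic detail captured at 192 kHz becomes audible material instead of simply being absent:

```python
def highest_usable_source_freq(sample_rate_hz: float, semitones_down: int) -> float:
    """Highest original frequency that lands at or below 20 kHz (the rough
    top of human hearing) after pitching down by the given interval."""
    nyquist = sample_rate_hz / 2
    shift_factor = 2 ** (semitones_down / 12)  # 12 semitones -> factor of 2
    # Frequencies divide by shift_factor when pitched down by that interval,
    # so content up to 20 kHz * shift_factor becomes audible -- if it was
    # captured at all, which is limited by Nyquist.
    return min(nyquist, 20_000 * shift_factor)

# At 44.1 kHz there is nothing above ~22 kHz to reveal; at 192 kHz a
# one-octave pitch-down pulls content from as high as 40 kHz into range.
print(highest_usable_source_freq(44_100, 12))   # 22050.0 (Nyquist-limited)
print(highest_usable_source_freq(192_000, 12))  # 40000.0
```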
Since Frost is a cybernetic character, I needed lots of mechanical sample sources. I have recorded my appliances in the past, but I used a much lower sample rate. In past projects, my Zoom H4n was notorious for introducing noise into the recordings. Now I use a dedicated home recording space and I have a bit more experience avoiding unwanted noise, so I decided to give it another go.
By the second recording session, I had hung up some old tapestries in my recording space to achieve a crude but workable version of acoustic treatment. I wish I had thought of it for the first session, because the tapestries would have also protected the walls from flying bits of fruit and juice.
This recording session required quite a bit of gain monitoring because the appliances varied widely in loudness. I took extra care to check the levels when recording each one, starting with quieter devices like a toothbrush and electric razor and gradually working my way up to louder devices like a Vitamix and food processor.
My primary focus was to get sustained samples so I had space to pitch them way up without running out of recording. I learned from earlier projects that pitching sounds up and down is invaluable when designing machinery.
This section covers sample selection and editing, as well as how I curated my library and file-naming conventions for this project. Full warning: This is a topic I enjoy diving deep on, so it’s quite lengthy. And before I get started, I want you to know the kind of obsessive you’re dealing with. The image below is a picture of a cabinet in my kitchen.
I decided to use techniques outlined in the Creative Field Recording blog for my file-naming convention. To avoid a library full of numbered and generic filenames (e.g. Grapefruit_Squish-01, Grapefruit_Squish-02, etc.), the idea is to listen intently to the samples and determine what makes each of them unique, so that the name accurately reflects the content. If two samples sound the same, is it really necessary to have duplicates?
This process required extensive thesaurus use to select the most specific words to describe each sound. My hope is that by using accurate and descriptive file names, I’ll be able to find what I’m looking for very quickly during the design stage. My naming convention starts with the object noun, then lists the specific sounds that occur through the entire file. Essentially, I was capturing the cadence of the sample directly in the file name (e.g. Grapefruit_Squish_Pull_Split, Lettuce_Dribble_Plop_Splatters, PotatoChips_Crush_Squeeze_Crackle).
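To make the convention concrete, here’s a tiny sketch of the pattern in Python (purely an illustration of the naming scheme, not a tool from my actual workflow):

```python
def sample_filename(obj: str, *descriptors: str) -> str:
    """Build a descriptive sample name: the object noun first, then the
    sounds in the order they occur through the file."""
    parts = [obj.replace(" ", "")] + [d.capitalize() for d in descriptors]
    return "_".join(parts)

print(sample_filename("Grapefruit", "squish", "pull", "split"))
# Grapefruit_Squish_Pull_Split
```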
My Food Word List at a Glance
Burble, Burst, Churn, Crack, Crackle, Crunch, Crush, Dribble, Drips, Fizz, Gurgle, Mash, Patter, Peel, Perforate, Plop, Pop, Pulp, Rupture, Slap, Snap, Spew, Spit, Splatter, Split, Spurt, Sputters, Squash, Squirt, Squish, Tear
Defining a Library Standard
I’m considering changing the approach to how I organize my library (again). Over the years it’s been difficult to define a library standard because I’m constantly learning about recording techniques, naming conventions, the development pipeline, and personal workflow. As I gain more experience and update my standards, I’ve often realized the method I was previously using was ineffective, or only useful in certain circumstances.
The challenge with the bucket method (master category folders that collect samples from every project) is that it requires a consistent naming convention across the entire library. Not only do you need to consider endless possibilities for different takes and sounds, you also need to consider them until the end of time. You may be incorporating new sounds into the buckets 5, 10, 20 years down the line. That seems really hard to plan for. And if you learn something new, or find a better way to organize your samples, the entire library needs reworking.
I tried to account for this growth by adding “AA, AB, AC,...” to the file names to separate different sounding takes and leave room for the library to expand with time. That way, I can use the same naming convention but keep adding new things without file numbers getting out of control. It also serves the purpose of giving a quick visual reference to know where different takes of the same object are.
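Generating that “AA, AB, AC,…” sequence is simple to sketch (again, just an illustration of the scheme, not something I actually script in my library):

```python
from itertools import islice, product
from string import ascii_uppercase

def take_suffixes(n: int) -> list[str]:
    """First n two-letter take suffixes: AA, AB, ..., AZ, BA, and so on.
    Two letters give 26 * 26 = 676 slots before the scheme runs out."""
    pairs = ("".join(p) for p in product(ascii_uppercase, repeat=2))
    return list(islice(pairs, n))

print(take_suffixes(4))  # ['AA', 'AB', 'AC', 'AD']
```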
I’m beginning to think it may be beneficial to switch to a Project-Based approach with my library instead of using master buckets that hold all samples from all projects.
For example, whenever I do a redesign, any sounds or samples I create get thrown into a bucket for that particular project. All the samples I created for this project would go into a “Frost Fatality” bucket in my library. I will almost certainly remember the contents of any project I invested many hours into, so knowing what is in each bucket is easy, and I can still search across the master library.
This also allows me to incorporate third-party sample libraries more easily into my master library. As a bonus, I can feel more confident testing out new recording techniques or naming conventions without fear of messing up my precious naming convention standard, a standard that will almost certainly never exist because I will always be learning new techniques and processes.
Proper library curation is an incredibly time-consuming process. It’s challenging to find a method that suits my workflow and is practical for the professional world. It may be best to develop a library approach and structure that is flexible and able to grow with me.
Typically, I avoid composite files because I have a tendency to only use the first or second sample in the sequence. My other reason to avoid them was that individual samples make auditioning super fast. But with my new approach to library curation, splitting out every sample quickly becomes a tiresome file-naming exercise.
There must be some sort of middle ground.
I think to have the best of both, there needs to be careful consideration for the type of sounds being composited. Game development often requires subtle variations to repetitive sounds (e.g. footsteps, gun shots, and impacts). If a sound is transient by nature, then maybe it’s a composite file candidate.
As I found earlier, editing and naming samples can be very time consuming. By thinking ahead, even as far back as the research stage, I can save myself time and headaches. So, before I hit the record button, I should stop and ask myself a couple questions:
- Why am I recording this sound?
- What is the purpose of this sound?
- Are numerous variations necessary or will a few, high quality takes suffice?
Having considered all that, I decided to make a composite file for the food processor and the pulse variations I recorded. To avoid blowing my ears out when auditioning, I ordered them from shortest and least intense to longest and most intense.
This is the first time I’ve used Reaper and it sure has some helpful time savers. I’m especially fond of this custom action I learned from Gordon McGladdery for arranging samples by amplitude and spacing them out by one second. It’s a very effective tool for generating composite files in Reaper.
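Conceptually, the action does something like the following (this is just the idea sketched in Python, not the actual ReaScript):

```python
def arrange_by_amplitude(samples: dict[str, float], gap_s: float = 1.0):
    """samples maps take name -> peak amplitude. Returns (name, start_time)
    pairs laid out quietest-first with a fixed gap between start times.
    A real implementation would also advance the cursor by each take's
    length, not just the gap."""
    ordered = sorted(samples.items(), key=lambda kv: kv[1])
    timeline, cursor = [], 0.0
    for name, _peak in ordered:
        timeline.append((name, cursor))
        cursor += gap_s
    return timeline

print(arrange_by_amplitude({"pulse_hi": 0.9, "pulse_lo": 0.2, "pulse_mid": 0.5}))
# [('pulse_lo', 0.0), ('pulse_mid', 1.0), ('pulse_hi', 2.0)]
```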
Mastering & LUFS
The final stage of library curation is mastering. I recently learned about Loudness Units relative to Full Scale (LUFS) and how this measurement lets you normalize samples based on perceived loudness. If I normalize my samples to the same LUFS level, the perceived loudness should remain consistent across the entire library instead of jumping around as I cycle from sample to sample. While my understanding of LUFS is somewhat superficial, it seemed like a great way to add consistency to my library. I quickly found that it’s not quite that simple.
When I normalized with LUFS, the vegetable-based samples clipped severely. This is obviously not a good thing.
When I normalized the appliance sounds, it worked out pretty well. The original power drill recordings varied in loudness because the intensity of the recording increased with each speed of the drill. The consistency produced a smooth auditioning experience and allowed me to hear the differences without being fooled by loudness.
Normalization becomes tricky when using composite files. If I composite first, then normalize, the higher-intensity sounds dominate and the lower-intensity sounds stay under-represented. If I normalize first, then composite, I risk raising the noise floor on the quieter samples or ironing out the subtle variations needed for repetitive sounds.
Careful consideration is required not only when deciding what sounds to composite, but also when to perform normalization. How the samples are processed may all come down to what the intended purpose of the sound is. Needless to say, I still have much to learn about mastering samples and using the LUFS standard.
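Here’s a toy illustration of why the vegetable samples clipped (a deliberate simplification: real LUFS measurement uses K-weighting and gating, so plain RMS stands in for loudness here). A sample that is quiet on average but has a sharp transient needs a lot of gain to reach the loudness target, and the transient shoots past full scale:

```python
import math

def rms_db(samples: list[float]) -> float:
    """Root-mean-square level in dB relative to full scale (a crude
    stand-in for a true LUFS measurement)."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in samples) / len(samples)))

def normalize_to(samples: list[float], target_db: float) -> list[float]:
    """Apply the gain that moves the RMS level to the target."""
    gain = 10 ** ((target_db - rms_db(samples)) / 20)
    return [s * gain for s in samples]

# A spiky "vegetable squish": quiet overall, with one sharp transient.
squish = [0.02] * 99 + [0.5]
louder = normalize_to(squish, -14.0)
print(max(abs(s) for s in louder) > 1.0)  # True: the transient now clips
```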
- Understand why I’m recording a sound. What is the purpose and intended use of the sound?
- Keep a notebook handy and write notes for each take. The notes may be helpful with file naming.
- Learn more about LUFS normalization and how best to use it.
I sometimes struggle with pushing my sound design far enough, often because I’m too focused on simply designing what’s on the screen and not cultivating a truly convincing presence of sound. I decided to aim to complete this project within three iterations. With the first iteration, I will push the design well beyond what I think is necessary. The second iteration will involve refining and pulling back the excessive design. After completing the second iteration, I will seek feedback from industry professionals and then perform a third and final iteration.
I downloaded Serum because I needed to get some good synthetic laser sounds for the ice beam. When using the Serum demo, the sessions are limited to 20 minutes before the software locks. I challenged myself to work within the time limit for the entire project to see what I could create within that constraint. Getting a good laser pew and convincing ice beam was the first major challenge I faced during the design stage.
For the second iteration, I noticed my original design didn’t fit quite right, especially in the low end. Also, the sound design in the game has a nice, crisp, rising pitch effect that drops off as the ice beam depletes. I wanted to make sure I captured that with my design as well. Overall, my first iteration needed attention to detail and clarity.
The first iteration of the freezing effect was fairly straightforward. I layered up various samples from the Frozen library I purchased along with some glass and vegetable breaking sounds. I used the Soundtoys’ Crystallizer to fill out the design with extra particles.
While working, I noticed there wasn’t enough stereo presence between the ice beam on the left and the freezing on the right. Also, the Crystallizer effect was filling out the design too much and making it sound muddy. Luckily, I remembered to incorporate some small sleigh bells I picked up from the dollar store.
I began working on this project during the holidays and it just seemed like the right thing to do. They added a really nice, brittle jingle to the ice effects. This seemed like the right kind of subtle touch, but I had to be careful not to push it too far.
Designing the punch sound effect was fairly straightforward. I took inspiration from the original game audio and aimed for a mechanical wind-up sound. The major difference is that I split the design into a wind-up and a swing forward, while the original focused only on the wind-up sound.
While working on the second iteration, I noticed that the original audio was much quieter than what I was using. I figured this may be a decision to incorporate some dynamic range to make the shattering sound really stand out. I decided to attenuate my design here to align with this idea.
My approach to designing the torso shattering was pretty direct. I layered up vegetables, ice, glass, sleigh bells, and debris. I pitched down some ice impact sounds from the Frozen library to build a really thick sound. I also used the sounds of ice sliding across concrete for the limbs that go flying off screen.
While performing the second iteration, I noticed that my design had significantly more debris when compared to the original design, which I was perfectly happy with. That being said, there was far too much going on, so I focused on cleaning that up. Also, my first iteration was not quite heavy and meaty enough, so I focused on amplifying that as well.
Another consideration I had while working on the second iteration was cutting almost all sounds right before the spine break. My first iteration appealed more to realism, and the particle effects had a more natural fade. When I cut the debris right before the spine break, the breaking sound really stood out in a way I liked.
Much like the torso shatter, designing the spine breaking was fairly obvious to me. I layered the usual vegetable, ice, glass, and debris, but focused more on transient, brittle sounds.
My first iteration was a good start, but it just didn’t have enough oomph, so I prioritized adding more weight to the design during the second iteration. I think the debris I had was good, but it just needed a bit more clarity.
I also noticed while working on the second iteration that I went absolutely nuts with the Soundtoys Decapitator. I slapped that thing on almost every track for the torso shatter and the spine break. No wonder things were sounding so muddy!
The drone design was the second and largest hurdle for me during the design phase. For such a short moment, there is so much detail and intricacy that must be considered. Creating a convincing whoosh was literally the most painful part of the whole process. Nothing I tried seemed to work and I probably spent hours staring at the screen with existential dread, making almost no progress at all. In addition to that, I procrastinated for several days by playing Dark Souls 2 instead. Think about that. Playing Dark Souls 2 was less stressful to me than working on this drone. That says something.
For something that is so common and prevalent in sound design — a whoosh sound — I figured it would be fairly easy to throw together. At least with the rotors and claw movement of the drone, I had an idea of where I needed to go. For the first iteration, I was able to cobble together an acceptable whoosh by layering the sounds of packing fabric sliding across the carpet and processing it with reverb, effects, and panning.
By the time I got to the second iteration, I realized again it wasn’t quite enough. At some point, it kind of clicked for me that maybe I need to think of the whoosh as a combination of the object arriving and then the displaced air arriving after it. A delay effect might be just the tool I need to accomplish that.
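The object-then-air idea is essentially a single-tap delay. As a toy sketch of the concept (my own illustration, not how the plugins implement it), mix the dry layer with a delayed, quieter copy of itself trailing behind:

```python
def whoosh_mix(dry: list[float], delay_samples: int, wet_gain: float = 0.5):
    """Mix a dry signal with a delayed, attenuated copy: the dry layer is
    the arriving object, the delayed copy is the displaced air behind it."""
    out = list(dry) + [0.0] * delay_samples  # room for the delayed tail
    for i, s in enumerate(dry):
        out[i + delay_samples] += s * wet_gain  # the trailing "air" layer
    return out

print(whoosh_mix([1.0, 0.5], delay_samples=2, wet_gain=0.5))
# [1.0, 0.5, 0.5, 0.25]
```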
After adjusting the fabric samples and tweaking the Soundtoys Phase Mistress and Echo Boy Jr., I was able to get a pretty good whoosh effect! Who knew that if you bang your head against a wall long enough, eventually you’ll phase right through it?
Designing the mechanical arm was a fun process. I mapped out MIDI notes to match the movement of the arm, cycled through different Serum presets, manipulated wave-tables, and modulated pitch to see what sort of happy accidents came about. I bounced different results and layered them with appliance samples to get the right blend of rugged and synthetic machinery.
For the second iteration, I recognized that the design was busy and lacked clarity. I reined back on some of the layers I got through Serum to help the design stand out. I also needed to balance the low end and bring out the subtle rattling noises as the mechanical arm moved around.
I also realized that my design was once again aiming more for realism when compared to the original design. That’s perfectly fine, but there seems to be a recurring theme the deeper I listen. Aiming to sound realistic is not necessarily the right approach. Some circumstances may call for it, but I’m recognizing a level of creativity with the original design that goes deeper than where I’m currently reaching. It’s about what fits with the game world, not making it sound true to the real world.
I took some creative liberty with the pull-back effect. While I wanted to have a mechanical feel similar to the original design, I thought a synthetic and dramatic sound could be effective leading into the activation of the cyborg.
With spirited help from my spouse, I tried my best to record VO that emulated the laughing of the cyborg in the original design. But my vocal manipulation skills are not quite there yet. For the second iteration, I decided to scrap the VO aspect of the design altogether and focus on a large and bold power-up effect.
- Sometimes, frustration and procrastination are a sign that I’m pressing against the edge of my abilities. While it’s uncomfortable at the time, these are growing pains. That, or maybe it’s time to take a break.
- When designing whooshes, think about it in terms of the object arriving and the displaced air arriving after it. They can be quite tricky to get right, so be patient and keep at it.
- Sound design does not need to sound realistic, it just needs to fit the world it’s in. Leave space for creativity and happy accidents — who knows what’ll happen.
- Using the right sound in the right place is more important than throwing on a bunch of effects to force something to fit.
Putting It All Together: The Final Result
This is the most ambitious piece I’ve worked on so far. At the outset, I had only a limited appreciation for how intricate and detailed the sequence was. It was a fantastic learning experience, and I’m humbled to have settled in and put forward my best effort. I’ve never documented my sound design methodology before, especially at this level of detail. And of course, it’s really satisfying to see the progress I made.
As I seek feedback, I imagine there may be some concerns about how convincing some of the design is, particularly with the drone and cyborg assembly portions. I’m sure that troublesome whoosh isn’t quite there yet. I also think that the wind-up and punch design may be a bit larger than life.
Moving forward, I want to keep my focus on giving myself space to be creative with my design. In the past, I had a tendency to fixate on designing what I see and how I would expect to hear it in the real world. In many circumstances, that’s just not what the game or sequence needs to sound like it belongs in the game world.
If you’ve made it this far, I appreciate your willingness to read about my full sound design methodology. I hope you enjoyed the article and maybe even learned something new.
If you’d like to get in touch, feel free to email me at email@example.com or message me directly on Twitter. If you’re interested, my full demo reel and other projects are available on my website.
Finally, HUGE thanks to my good friend Daniel D’Angelo (DanDan) for taking this article, elevating it, and bringing the important ideas to the forefront. He’s a fantastic editor and wonderful poet. You can follow him on Twitter.