Using A.I. to Model the TS-9 Guitar Pedal
Open source, real-time digital clone of the TS-9 guitar pedal using machine learning (with video demo)
Analog is king in the world of guitar effects, but digital modelling has come a long way in replicating it. Detailed circuit analysis and mathematical equations can recreate the sound of many of these devices, from tube amplifiers to overdrive circuits to spring reverb, but domain expertise in electronics and physical modelling is generally required.
But what if we could skip all that math and physics and go directly to what we really want: great sound? What if you could say, "I don't care what happens inside those metal boxes, I just want the sound"? This approach is called black-box modelling, and openly available artificial intelligence frameworks put it within anyone's reach.
As an engineer, part of me wants to shout, "Hey, that's cheating!" Can you really skip all the hard work of analyzing circuits, modelling components, and doing headache-inducing math, and just throw data at a computer to solve the problem for you? The answer is both yes and no.
Artificial intelligence takes traditional problem solving and turns it on its head. You still need to understand how to set up your problem, but it is solved differently. Figuring out the best model architecture and training parameters is a science (or art) of its own, and it all hinges on starting with good data. In the case of the TS-9 guitar pedal, that data comes in the form of audio recordings.
By recording both the input to the pedal and the output from the pedal, you can treat it as a black box: the input and output are known, but we don't need to understand what happens in between. The neural network is trained on the audio data to behave the same way as the real pedal. You can even run this model in real time on your guitar by building a high-performance audio application around it, essentially making a digital clone.
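To make that concrete, here is a minimal sketch of the black-box idea in PyTorch. This is not the actual training code used for this project; the file names, network size, and training loop are placeholder assumptions.

```python
# A minimal sketch of black-box training with PyTorch, not the actual
# GuitarML training code. File names and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torchaudio

class PedalModel(nn.Module):
    """A small LSTM that maps dry guitar audio to 'pedal' audio."""
    def __init__(self, hidden_size=20):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.dense = nn.Linear(hidden_size, 1)

    def forward(self, x):            # x: (batch, samples, 1)
        out, _ = self.lstm(x)
        return self.dense(out)       # predicted output audio

# The "black box" data: what went into the pedal and what came out.
dry, _ = torchaudio.load("ts9_input.wav")
wet, _ = torchaudio.load("ts9_output.wav")

# Shape as (batch=1, samples, channels=1); real training would slice
# the recordings into short segments rather than one long sequence.
x = dry.t().unsqueeze(0)
y = wet.t().unsqueeze(0)

model = PedalModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)      # how close is the model to the real pedal?
    loss.backward()
    optimizer.step()
```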
Note: You can read more about my data collection methods here.
This works great for capturing the sound of a pedal at a particular setting. In the case of the TS-9 pedal, this could look like Drive at 100%, Tone at 50%, and Level at 50%. But how can we replicate all possible knob positions in a single model? More data!
Parameter conditioning allows us to create a model that can be adjusted in the same way as the real pedal. By taking audio recordings of the pedal at discrete steps for each knob, you can train a conditioned model that interpolates the full range of the knob or set of knobs.
For the TS-9, the Drive and Tone controls were chosen as the conditioned parameters. The Level knob could have been included, but it can be approximated as a simple volume control in the real-time audio plugin, and including it would have added more data and complexity to the training.
For the training data, 2-minute samples were recorded at the 0%, 25%, 50%, 75%, and 100% positions of each knob. For two knobs, this yields 5 × 5 = 25 combinations, or 25 separate 2-minute recordings plus the baseline input audio file. Once recorded, each track was exported as a 32-bit floating-point, mono (1-channel) WAV file. The total size of the audio data was approximately 520 MB.
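As a quick sanity check of that layout, the knob combinations can be enumerated programmatically; the file naming scheme below is hypothetical.

```python
# Enumerate the 25 knob-position combinations described above.
# The file naming scheme here is hypothetical.
from itertools import product

knob_positions = [0.0, 0.25, 0.5, 0.75, 1.0]

recordings = [
    f"ts9_drive{int(d * 100)}_tone{int(t * 100)}.wav"
    for d, t in product(knob_positions, knob_positions)
]

print(len(recordings))  # 5 * 5 = 25 recordings, plus the shared input file
```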
The GuitarML fork of the Automated-GuitarAmpModelling project was used for training. This fork contains extra code for conditioned-model audio processing. The project is written in Python, uses PyTorch, and contains several machine learning models for analog effect modelling. The LSTM (Long Short-Term Memory) model was used for the TS-9 pedal.
Note: The processed input WAV files fed to the LSTM model contain three channels: one for the audio, one for the Drive knob parameter (range 0.0 to 1.0), and one for the Tone knob parameter (range 0.0 to 1.0). The output WAV files contain a single channel, the output audio.
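A conditioned input file like this can be assembled by stacking the audio with constant-valued parameter channels. The sketch below assumes the numpy and soundfile packages and a hypothetical file naming scheme.

```python
# Build a 3-channel conditioned input: the parameter channels are just
# the knob values held constant for the length of the recording.
import numpy as np
import soundfile as sf

audio, sample_rate = sf.read("ts9_input.wav")   # mono input recording
drive, tone = 0.75, 0.5                         # knob settings for this take

conditioned = np.stack([
    audio,
    np.full_like(audio, drive),   # Drive channel, constant 0.0 to 1.0
    np.full_like(audio, tone),    # Tone channel, constant 0.0 to 1.0
], axis=1)                        # shape: (samples, 3)

sf.write("ts9_input_drive75_tone50.wav", conditioned, sample_rate, subtype="FLOAT")
```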
The JUCE framework was used to build the real-time application, named the "TS-M1N3" (bad pun intended). The Windows/Mac installers and source code can be downloaded from GitHub, and the plugin is available in VST3, AU, and Standalone formats. RTNeural is used for neural-net inferencing of the LSTM model and provides a significant speed improvement.
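RTNeural loads model weights from a JSON file exported on the Python side. The exact JSON schema is defined by RTNeural itself, so the snippet below is only a rough illustration of such an export, with a hypothetical model path.

```python
# Rough illustration of exporting trained weights to JSON; the real
# export must match the schema RTNeural expects for an LSTM model.
import json
import torch

# Assumes the training step saved a state_dict to this (hypothetical) path.
state_dict = torch.load("ts9_model.pt", map_location="cpu")

weights = {
    name: tensor.detach().cpu().numpy().tolist()
    for name, tensor in state_dict.items()
}

with open("ts9_model.json", "w") as f:
    json.dump(weights, f)
```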
Note: For more details on creating a real-time audio application with the LSTM neural network, check out these articles.
Here is a video demo comparing the original TS-9 pedal to the plugin clone at various settings:
Special thanks to the UAH (University of Alabama in Huntsville) MLAMSK Senior Design Team, whose research and hard work directly impacted the results presented in this article.
I hope you enjoyed this article, thanks for reading!