Do It Because You Love It
A year ago, I got a fortune in a fortune cookie that said “Do it because you love it.” My first thought was, this is the worst fortune ever. I didn’t think anything else of it at the time, but I did keep the small slip of paper.
I want to start off by saying this is not an article on how to start a successful open source software project. It’s not a technical discussion or an explanation of the technology in my project. I want to share my story in the hope that it helps other people starting out in software to take their own leap of faith and learn from my experience.
I have always been interested in technology. As a kid, I played with chemistry sets (the dangerous ones) and was obsessed with Legos. My dad, who has a PhD in optical physics and numerous publications and patents, was a big part of why I became interested in engineering. I chose to study Aerospace Engineering in college because it sounded the coolest (and it is). Fast forward ten years into my career and I can say I’ve been blessed to work on some amazing projects with incredibly smart people on aircraft and rocketry.
So why software? I’ve found software is unavoidable in engineering these days. But, if used properly, it can benefit anyone’s career. It is a problem-solving tool that can amplify the power of the most important tool available to you: your brain.
Slowly and reluctantly, I began picking up more software skills, starting with MATLAB (really more of an engineering tool than a programming language), and then venturing into Python and C++. I became so interested that I eventually took an official software development job within my company. I got exposure to current software processes and tools such as Agile, Git, Docker containers, and web frameworks. Someone opened my eyes to GitHub.com, which was brand new to me in 2020, if that tells you anything (GitHub started in 2008).
Before moving on to my adventures on Github, you need to understand another part of me, which is that I love making music. I took piano lessons as a kid, but it wasn’t until I picked up an electric guitar at 13 that I found a lifelong hobby and passion. In high school, I briefly entertained the idea of studying music in college, but with some gentle suggestions from my parents and others, decided to pursue engineering instead.
Looking back, I can trace the lines from the people in my life to where I am today, but when you’re living it, those lines are hard to see. I’m writing this mainly to tell a story, but I will give my personal career advice here. The times I put myself out there, met people, and got to know and understand people were some of the most valuable uses of my time. Who knows what opportunities I missed when I decided not to reach out or take the time to start a conversation! And listening is way more valuable than speaking.
The setting for the beginning of my project was this: summer of the 2020 pandemic, mostly working from home, married, two kids and a third due any day. The perfect time to start a time-consuming, non-profitable venture into something I knew nothing about! We had just put the 5- and 3-year-old down for a nap, and I was sitting on the couch, phone in hand, mindlessly surfing the internet. I was probably looking for the next show to binge on Netflix after Tiger King.
Here’s my next stream of thought: man, it would be great to get out my guitar and amp and jam out like I used to. Maybe learn a new Zeppelin song or something bluesy. How would I do this without waking the kids? I know a thing or two about software… I wonder how hard it would be to write a guitar plugin. Then I could just put in some headphones and crank it up. Google google google… oh so there’s a framework called JUCE that does all the heavy lifting. Cool. Let’s see what’s on Github… oh, a guitar plugin project that uses neural networks? I must find out more about this technology I know nothing about!
And down the rabbit hole I went.
I’ll briefly explain what a guitar plugin does. It’s a piece of software that you “plug in” to a digital audio workstation (DAW), like the kind used in recording studios. It behaves like an amp or effect, so that when you plug an electric guitar into an audio interface connected to your computer, you can hear the sound out of your speakers or headphones. Maybe you want to add some distortion, or reverb, or any number of effects to make it sound good. I had made some amateur song recordings in the past, so I was familiar with this part.
The open source project I found was “WaveNetVA”, a result of some very cool research by the Aalto University Acoustics Lab in Finland. In their publication, they describe the process of using various Artificial Intelligence models to copy the distortion sound of guitar amplifiers and pedals. They also created a playable plugin using the JUCE framework that runs in real time.
Feedforward WaveNet for black-box virtual analog modeling. This code is related to our paper submitted to SMC 2019…
After some more searching on GitHub (which, by the way, is a website that hosts software projects), I found a second project called “pedalnet”, which independently recreated the training code from the research paper using PyTorch (a machine learning framework). You could apply the sound models to audio files, but you couldn’t play them in real time from an electric guitar.
After some learning and playing around with both codebases, I wondered how hard it would be to make these two pieces compatible. That way, I could use my own guitar pedals/amps and create digital sound models of them. Why would I want to spend the time to learn a new and complicated process to create models of amps and pedals I already own? Because I am a nerd at heart, and that’s the kind of thing nerds do.
The problem was that the training codebase used one framework (PyTorch), the real-time code used another (TensorFlow), and these are not easily compatible. The model also had to be trained in exactly the way the plugin expected, which it wasn’t. Very minor differences would throw off the whole model when played in the plugin. It would take some very methodical code study to figure out how to match them up, if it could be done at all.
I can’t overstate the generosity of many of the developers on GitHub. I didn’t know what to expect going in, and I typically shy away from online forums and things of that nature. But when I reached out to the creators of these codebases, they were gracious and patient in helping me with what I was trying to do.
It would take several months of looking at code, testing ideas, and listening to horrible static sounds come out of my headphones, but eventually I cracked the code. I won’t go into the technical details, but throughout the process I decided to give up twice, and I wrote about twenty pages of notes just trying to trace back code and figure out exactly how the sound flowed through each line. Did I mention I had never done audio or artificial intelligence software before? When I finally played the guitar and that beautiful crunchy mid-tone sound of my TS9 Tubescreamer pedal came out, I silently threw my hands in the air and did a happy dance, careful not to wake my (by now 3) children.
From there I decided to clean up the code and put it out on GitHub. Surely people would be as excited about this as I was. The idea that you could record a guitar amp or effect (costing hundreds or thousands of dollars) and use a computer to distill the sound into a small text file seemed like magic to me. I also learned a bit more about the JUCE plugin framework, and made a user interface that looked like a real guitar amp. The licenses of the two existing projects allowed other people to use their code, but I did check with the owners and made sure to reference their work in my project. Technically speaking, my contribution was not much more than a conversion script and a nice-looking interface on top of their code, which did the heavy lifting.
Here is my SmartGuitarAmp project, which is currently the most popular code repository I have posted on GitHub:
Guitar plugin made with JUCE that uses neural network models to emulate real world hardware. See video demo on YouTube…
Almost (internet) Famous
It turns out that posting cool code no one knows about on the internet does not make you an overnight success. Who knew? But I was sure that people would be interested if they understood what it could do.
Most engineers have no idea how to market something, and I am no exception. I decided to make some videos of the plugins (using my phone) and put them on YouTube. I posted some on Reddit (also completely new to me), and was able to get some attention. Some people seemed even more excited by my project than I was, and they helped improve the code and ran tests of their own.
Here’s my first demo video of the SmartGuitarAmp plugin:
The thing that got the most attention for my code was posting about it on a site called Hacker News, but I believe it was more fluke and luck (and maybe some pity) that made that work. I was new to the site and thought, hey, this is like free advertising. So I gave my post the very attention-grabbing title “Free guitar plugin on GitHub sounds like $600 tube amp by using deep learning”. Which was technically true, because it was modeled on my Fender Blues Jr. amp with Tweed covering, whose MSRP was $600. Well, the users of Hacker News are much more discerning than I anticipated, and I was quickly defending my project, as well as learning that I had broken some core rules of posting on their site. Because of all the people posting about me breaking the rules, the points on the article went up, so it shot to the top of the list for anyone visiting the site. Thanks to all that traffic, my GitHub project went from 5 stars to over 500 (stars largely determine how easily people stumble on your project on GitHub). By the end of the day my project, the SmartGuitarAmp, had over 10,000 hits, and my linked YouTube demo had 5,000 views. I had never been one to care about “views” or “likes”, but I felt like I learned something about marketing that day. And what I learned was this: I know nothing about marketing.
Here is that post on Hacker News:
Show HN: SmartGuitarAmp - Guitar plugin made with deep learning | Hacker News
The most popular guitar amps now are all solid state, but the internal computer digitizes the guitar signal, then…
Of course, once people were actually using my software, they started finding bugs, or things that would work on my computer but not for them. Or just things they wanted me to add. Or way smarter ideas than I could have ever thought of. I quickly felt overwhelmed about what I needed to figure out and fix. I also started researching another method of training models which would be much quicker, and that I could say was fully my own.
My personal email and YouTube accounts were getting tons of feedback from people interested in my project. I was even getting job offers and people interested in business opportunities (some even legit!). I had to turn them down because of my limited free time and commitment to my aerospace/software job (which I still very much enjoy and have no intention of leaving).
I wanted to create a name that I could keep separate, and also make it easy for people to group these projects together. I don’t exactly remember how, but I came up with the name “GuitarML”, where ML stands for “Machine Learning”. I made a logo and some social media accounts for it as well. A very generous person who showed interest in the project early on was alarmed that I hadn’t gotten domain rights for a website, and they even went as far as to buy the domain and transfer it to me. So then I made a simple website for GuitarML. Thank you! (You know who you are).
GuitarML — Machine Learning for Rock & Roll. Check out the SmartAmp. Announcing the SmartAmpPro. Record…
I also started working on my second plugin, which was intended to solve the problems with the first one: improve CPU usage, integrate the model training into the plugin, and speed up the training. With the SmartGuitarAmp, the training was a separate codebase, and it took 8+ hours to complete. The CPU usage was also very high compared to other plugins and, on lower-end computers or non-Intel CPUs, unusable in real time. I also decided to switch the AI framework from PyTorch to TensorFlow. It seemed to have more support, and with the Keras API it was very easy to write high-level code.
The training speed and CPU usage tasks were made possible by using a different deep learning model. I had found examples of a model type known as LSTM (Long Short-Term Memory), which the research papers mentioned was faster than WaveNet, the model the original plugin used. With lots of testing, retesting, and help from the online community, I was able to create a usable plugin from a combination of convolutional layers and an LSTM layer. The whole process took another couple of months to develop in my limited spare time. During that time I got a used MacBook so I could develop and build for Apple computers. This was only the second time I invested any money in the project ($380); the first was a guitar pedal that split the signal for capturing the sound needed for creating models ($80).
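To give a flavor of what an LSTM layer actually computes per audio sample, here is a from-scratch sketch of a single LSTM step in NumPy (illustrative only, not the plugin’s code, with the weight layout borrowed from Keras’s convention). The appeal for real-time audio is that each sample needs just a few small matrix multiplies plus a carried state, instead of re-evaluating a long history of samples through stacked convolutions as WaveNet does.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step for a single sample.
    x: input vector; h, c: previous hidden and cell state;
    W, U, b: gate weights stacked in the order
    [input, forget, candidate, output], as Keras stores them."""
    z = W @ x + U @ h + b          # compute all four gates at once
    n = h.shape[0]
    i = sigmoid(z[0:n])            # input gate: how much new info to admit
    f = sigmoid(z[n:2*n])          # forget gate: how much old state to keep
    g = np.tanh(z[2*n:3*n])        # candidate cell update
    o = sigmoid(z[3*n:4*n])        # output gate
    c_new = f * c + i * g          # updated cell state (the "memory")
    h_new = o * np.tanh(c_new)     # output for this sample
    return h_new, c_new
```

Processing an audio buffer is then just a loop that feeds each incoming sample through `lstm_step`, carrying `h` and `c` forward, which is what keeps the per-sample cost low and constant.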
The end result didn’t have as low CPU usage as I had hoped, but it was better than the first plugin. The training time was reduced to just a few minutes, but the sound quality wasn’t quite as good as the 8+ hours with WaveNet. On top of that, there was a bug in my code that I originally interpreted to mean the underlying model didn’t work. I almost gave up the whole project, but luckily I decided to keep at it, and the resulting sound was still pretty good. I decided to release it as the “SmartAmpPro”, because I am very uncreative in coming up with names.
Guitar plugin made with JUCE that uses neural network models to emulate real world hardware. This plugin uses a LSTM…
Immediately I got responses that the integrated training was not working for almost everybody. Part of the problem was figuring out how to download and install the TensorFlow dependencies, which is not straightforward for people unfamiliar with the command-line interface. The second problem is that different DAWs (recording software) behave differently when a plugin tries to run outside applications. I’m still working on solving that part. Currently this plugin is in the “pre-release” phase: it’s not available on the GuitarML website, but anyone can download it from the GitHub releases page.
The best part about this project is that I truly enjoy using what I created. I enjoy pulling it up and trying out different sounds on my guitar. Other people have created models of their own amps and pedals and sent them to me, which has been a very rewarding aspect of this project. I know what it sounds like to play a Revv G3 distortion pedal, even though I don’t have one, because someone else picked up my code and thought it was worthwhile to try out. (The Revv G3 is the “RevvvItUp.json” model in the SmartAmpPro code)
Revv Amplification G3 Overdrive & Distortion Pedal
The coolest thing I received came from someone who modified their own amp into something new and trained a model of it using my code. Now I can play his amp through my plugin, even though it’s literally the only one like it in the world (this is the “Dynabright” model included in the SmartGuitarAmp Tonepack). The sound quality of the models can vary, but for clean and low-distortion sounds, it is a near perfect match to the original hardware.
That’s the story up until now. I’m 10 months in, $460 down, hundreds of hours spent, and still have many bugs and improvements on my to-do list. But I wouldn’t change anything, because I loved doing it. I found that fortune buried on my dresser a few weeks ago. “Do it because you love it.” That’s not a fortune. But it turned out to be pretty good advice.
If you want to support my work, consider joining my Patreon for behind the scenes software development posts and neural net amp/pedal models for my plugins.