Towards the end of 2015 we were faced with a dilemma. No sooner had we released the flowkey iPad app than the requests started streaming in for an Android version. “Don’t you know that Android has the majority market share?” “You do know that iPads are for mindless hipsters, right?” “Don’t you love us anymore??” “Pleeease??”
The message was clear, but the problem was substantial. With Android’s market share comes a host of issues that iOS development simply doesn’t have to deal with: different screen resolutions, widely varying performance characteristics even on new high-end devices, and even fundamentally different processor architectures on same-generation devices. As a small development team (there were four of us at the time of writing), the prospect of building an Android app on an unfamiliar stack to cover all of those bases in any reasonable amount of time, all while developing, debugging and improving the browser and iPad versions, could have seemed impossible.
What we did have, however, were the blessings of good fortune, in the form of our CTO’s early decision to go hybrid (via Meteor), and of good preparation, in the form of the huge effort we had put into optimising the iPad version’s performance so it runs smoothly even on a prehistoric (in computer years) iPad 2.
The core feature of flowkey is what we simply call “the player”. You can watch and listen to the song or excerpt you are trying to learn, slow it down, make loops and – importantly – learn in “wait mode”.
Wait Mode gives you as a learner the opportunity to get comfortable with finding the notes in a song at your own pace. The video and sheet music play up until a note or chord, then pause and wait for you to play the right notes on your real piano or keyboard before continuing on to the next note or chord.
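The post doesn’t show any implementation, but the heart of Wait Mode can be pictured as a small state machine: playback advances to the next expected note or chord, pauses, and resumes only once pitch detection has heard every required note. The following is a minimal sketch of that idea; all names here (`NoteGroup`, `WaitModePlayer`, and so on) are hypothetical illustrations, not flowkey’s actual API.

```swift
// A group of notes that must all be played before playback continues —
// a single note or a chord, identified by MIDI note numbers.
struct NoteGroup {
    let midiNotes: Set<Int>
}

// Hypothetical sketch of the wait-mode state machine: playback pauses
// at each note group until the learner has played every note in it.
final class WaitModePlayer {
    private let song: [NoteGroup]
    private var currentIndex = 0
    private var heardNotes = Set<Int>()

    init(song: [NoteGroup]) {
        self.song = song
    }

    var isFinished: Bool {
        return currentIndex >= song.count
    }

    // Called by the pitch-detection layer whenever a note is recognised
    // from the learner's real piano or keyboard.
    func didDetect(midiNote: Int) {
        guard !isFinished else { return }
        heardNotes.insert(midiNote)
        // Once every note of the current group has been heard,
        // video and sheet music advance and wait at the next group.
        if song[currentIndex].midiNotes.isSubset(of: heardNotes) {
            heardNotes.removeAll()
            currentIndex += 1
        }
    }
}
```

A real implementation also has to drive video playback, tolerate wrong notes, and debounce detections, but the wait-then-advance loop above is the essential mechanic.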
Swift is an absolute pleasure to use. Using Swift, it is fun to write concise, beautiful and performant code that reads well, is easy to reason about and easy to debug. Two years after starting with Swift I still very much have the same sentiment about it as the one expressed by Dan Kim about Kotlin here.
The Swift version of our pitch detection worked. It was extremely performant and provided a way better experience than the Web Audio API ever could have, with significantly less latency and higher accuracy. And there was much rejoicing.
The only one of those potential paths we seriously considered in the end was rewriting the pitch detection in a more portable language like C. Yes, this would have been possible, but it would likely have taken quite some time and, considering none of us on the team had any significant experience with C or similarly low-level languages, the result would likely have been riddled with bugs and inconsistencies. As a four-person team, we also didn’t want to maintain three versions of essentially the same code.
A glimmer of light came in the form of Romain Goyet’s blog post about running Swift code on Android. Based on his observations, it appeared possible — especially once Swift went open source at a then-unknown point in time — to run Swift code via the JNI from Java on Android devices. After some deliberation (and considering time wasn’t on our side), we decided to take the risk and go down the path of getting our Swift code running on Android.
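The basic mechanism, as described in that post, is that Swift functions can be exported with C calling conventions under the JNI’s expected symbol names, so the JVM resolves them like any other native library. A rough sketch of the idea follows; the package, class and type names are made up for illustration, and on a real Android build the JNI types would come from the NDK’s `jni.h` rather than the stand-in aliases below.

```swift
// Stand-ins for the real JNI types, which on Android are bridged
// from the NDK's jni.h via a module map:
typealias JNIEnvPointer = UnsafeMutableRawPointer
typealias JavaClass = UnsafeMutableRawPointer
typealias JavaShortArray = UnsafeMutableRawPointer

// The Java side would declare a native method and load the
// Swift-built shared library:
//
//     public class PitchDetector {
//         static { System.loadLibrary("FlowkeyCore"); }
//         public static native double detectFrequency(short[] samples);
//     }
//
// On the Swift side, @_cdecl exports a symbol following the JNI
// naming convention Java_<package>_<Class>_<method>, so the JVM
// can bind the native method to this function directly.
@_cdecl("Java_com_flowkey_PitchDetector_detectFrequency")
public func detectFrequency(env: JNIEnvPointer,
                            clazz: JavaClass,
                            samples: JavaShortArray) -> Double {
    // Copy the audio samples out of the JVM via the JNIEnv function
    // table, then hand them to the existing Swift pitch detection…
    return 0
}
```

The appeal of this route was that the existing Swift pitch-detection code could stay as it was, with only a thin C-compatible boundary added for Java to call through.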
See the next part for How we put an app in the Android Play Store using Swift.