Why we put an app in the Android Play Store using Swift

Geordie J
4 min read · Jul 7, 2016


Towards the end of 2015 we were faced with a dilemma. No sooner had we released the flowkey iPad app than the requests started streaming in for an Android version. “Don’t you know that Android has the majority market share?” “You do know that iPads are for mindless hipsters, right?” “Don’t you love us anymore??” “Pleeease??”

The message was clear, but the problem was daunting. With Android’s market share comes a host of issues that iOS development simply doesn’t have to deal with: different screen resolutions, widely varying performance characteristics even on new high-end devices, and even fundamentally different processor architectures across same-generation devices. As a small development team (there were four of us at the time of writing), the prospect of building an Android app on an unfamiliar stack to cover all of those bases in any reasonable amount of time, all while developing, debugging and improving the browser and iPad versions, could have seemed impossible.

What we did have, however, were two blessings: good fortune, in the form of our CTO’s early decision to go hybrid (via Meteor), and good preparation, in the form of the huge effort we had put into optimising the iPad version’s performance so it ran smoothly even on a prehistoric (in computer years) iPad 2.

The core feature of flowkey is what we simply call “the player”. You can watch and listen to the song or excerpt you are trying to learn, slow it down, make loops and – importantly – learn in “wait mode”.

Wait Mode gives you, as a learner, the opportunity to get comfortable finding the notes in a song at your own pace. The video and sheet music play up to a note or chord, then pause and wait for you to play the right notes on your real piano or keyboard before continuing to the next note or chord.
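To make the mechanics concrete, here is a minimal sketch of the wait-mode idea in Swift. The names (Chord, WaitModeController, didDetect) are illustrative assumptions, not our actual implementation:

```swift
struct Chord {
    let midiNotes: Set<Int>
}

final class WaitModeController {
    private let chords: [Chord]
    private var index = 0
    private var heard = Set<Int>()

    init(chords: [Chord]) {
        self.chords = chords
    }

    /// Feed each detected note in; returns true when playback may resume.
    func didDetect(note: Int) -> Bool {
        guard index < chords.count else { return true }
        let expected = chords[index].midiNotes
        if expected.contains(note) {
            heard.insert(note)
        }
        if heard.isSuperset(of: expected) {
            // The learner has played the full chord: advance and resume
            // the video and sheet music until the next note or chord.
            index += 1
            heard.removeAll()
            return true
        }
        return false // keep the player paused
    }
}
```

A real implementation also has to tolerate noisy detections and timing, but the core idea is a simple gate on playback.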

In the browser, the pitch detection works via JavaScript’s Web Audio API, together with getUserMedia() to access the device’s microphone. But this is where the hybrid dream starts to fall apart: the iPad’s browser, Safari, and its programmatic counterparts UIWebView and WKWebView do implement the Web Audio API, but not getUserMedia(). This restriction makes it impossible to access the microphone from the browser context, and puts one of our core features out of reach for hybrid use.

At first we tried to get around this by using native code to inject microphone audio buffers into the WebView for processing by our existing JavaScript code. It worked, but only barely: there was significant lag, and it had a noticeable performance impact even on the latest-model iPad Air. Not wanting to settle for a second-rate experience, we made a decision that would directly shape the future of our Android app as well: we decided to rewrite our pitch detection routines in Swift.
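For the curious, here is a rough sketch of what that workaround can look like, assuming AVAudioEngine for the microphone tap and a hypothetical handleAudioBuffer() function defined in the WebView’s JavaScript context. Serialising every buffer of samples into a JavaScript string hints at where the lag and CPU cost come from:

```swift
import AVFoundation
import WebKit

// Sketch of the buffer-injection workaround. `handleAudioBuffer` is a
// hypothetical global function in the WebView's JavaScript context.
final class MicrophoneBridge {
    private let engine = AVAudioEngine()
    private weak var webView: WKWebView?

    init(webView: WKWebView) {
        self.webView = webView
    }

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            let samples = UnsafeBufferPointer(start: channel, count: Int(buffer.frameLength))
            // Every audio callback turns ~1024 floats into a string that the
            // WebView must parse and process: hence the lag and CPU cost.
            let js = "handleAudioBuffer([" + samples.map { String($0) }.joined(separator: ",") + "])"
            DispatchQueue.main.async {
                self?.webView?.evaluateJavaScript(js, completionHandler: nil)
            }
        }
        try engine.start()
    }
}
```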

Swift is an absolute pleasure to use: it is fun to write concise, beautiful and performant code that reads well, is easy to reason about and easy to debug. Two years after starting with Swift, I still feel much the same about it as Dan Kim expressed about Kotlin here.

The Swift version of our pitch detection worked. It was extremely performant and provided a way better experience than the Web Audio API ever could have, with significantly less latency and higher accuracy. And there was much rejoicing.
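As a rough illustration of what such a routine involves (this toy autocorrelation estimator is a simplification for this post, not our production code), the work is a tight numeric loop over raw samples, exactly the kind of code that benefits from running natively:

```swift
import Foundation

// A toy autocorrelation pitch estimator, for illustration only.
// Returns the strongest periodicity between minFreq and maxFreq, in Hz.
func estimatePitch(samples: [Float], sampleRate: Float,
                   minFreq: Float = 60, maxFreq: Float = 2000) -> Float? {
    let minLag = max(1, Int(sampleRate / maxFreq))
    let maxLag = min(Int(sampleRate / minFreq), samples.count - 1)
    guard maxLag > minLag else { return nil }

    var bestLag = 0
    var bestCorrelation: Float = 0
    for lag in minLag...maxLag {
        var correlation: Float = 0
        // Correlate the signal with a lagged copy of itself; the lag with
        // the strongest correlation corresponds to the dominant period.
        for i in 0..<(samples.count - lag) {
            correlation += samples[i] * samples[i + lag]
        }
        if correlation > bestCorrelation {
            bestCorrelation = correlation
            bestLag = lag
        }
    }
    guard bestLag > 0 else { return nil }
    return sampleRate / Float(bestLag)
}
```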

Which left us, a few months later, with our dilemma: we had a hybrid app that already worked pretty well on Android, but one of our core features was missing (actually two, a story for another time) for performance reasons. We had a JavaScript version of the pitch detection and a Swift version, but neither was suitable for our increasingly urgent Android project.

So we had to make a choice: use JavaScript, restrict downloads of the Android app to only the newest devices, and still suffer a performance hit when using the pitch detection? Rewrite the pitch detection once again in C or C++? Give up on the feature altogether on Android?

The only one of those paths we seriously considered in the end was rewriting the pitch detection in a more portable language like C. Yes, this would have been possible, but it would likely have taken quite some time again and, considering none of us on the team had any significant experience with C or C++, the result would likely have been riddled with bugs and inconsistencies. As a four-person team, we also didn’t want to maintain three versions of essentially the same code.

A glimmer of light came in the form of Romain Goyet’s blog post about running Swift code on Android. Based on his observations, it appeared possible (especially once Swift went open source, at a then-unknown point in time) to call Swift code from Java via the JNI on Android devices. After some deliberation, and considering time wasn’t on our side, we decided to take the risk and go down the path of getting our Swift code running on Android.
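In outline, that bridge looks something like the sketch below. Everything here is a hypothetical minimal example: the package, class and library names are made up, and it assumes a module (called CJNI here) that exposes the C types from <jni.h>:

```swift
// Assumes a `CJNI` module re-exporting the C types from <jni.h>;
// the exact setup varies by toolchain.
import CJNI

// Java side (hypothetical names):
//   package com.flowkey.bridge;
//   public class SwiftBridge {
//       static { System.loadLibrary("SwiftCode"); }
//       public static native double addNumbers(double a, double b);
//   }

// The JVM resolves native methods by the symbol naming pattern
// Java_<package>_<Class>_<method>; Swift's @_cdecl lets us match it exactly.
@_cdecl("Java_com_flowkey_bridge_SwiftBridge_addNumbers")
public func addNumbers(env: UnsafeMutablePointer<JNIEnv?>,
                       clazz: jclass,
                       a: jdouble,
                       b: jdouble) -> jdouble {
    return a + b
}
```

Passing primitives through is the easy part; arrays and strings go through the JNIEnv function table, which is where most of the real bridging work lives.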

See the next part for How we put an app in the Android Play Store using Swift.
