Apple Watch Series 4 is a wonderful revision to Apple Watch. With the faster processor and the new display, Apple Watch finally feels like a platform that might be fun to develop for.
Of all the new possibilities unlocked by watchOS 5, background audio playback is definitely a fun one to explore. Since I don’t run a music streaming service myself, I decided to make a Spotify app ;)
Getting creative with WatchKit
The design of Apollo is heavily inspired by the Apple Music and Apple Music Radio apps on watchOS, and building the UI involved a lot of fun challenges. Unlike the first-party apps, third-party apps do not have access to raw UIKit, and all interfaces have to be built with WatchKit. Personally, I have mixed feelings about WatchKit. On one hand, the layout engine is really powerful, so developers can focus on getting creative about how a view should look instead of spending a lot of time getting the layout right. On the other hand, the lack of access to raw UIKit means developers can only create things that WatchKit allows. As a result, most third-party apps look similar, and there isn’t much freedom to give an app its own personality.
Apple Music on watchOS has a really cool cover flow setup that shows the user’s playlists. A new API introduced in watchOS 4 made it possible to replicate a similar experience in third-party apps. The API (Item Pagination) allows the user to scroll across multiple interface controllers with the Digital Crown using a semi-3D transition, which looks pretty neat for showing playlist artworks. While testing this, I was surprised by how big a difference haptic feedback makes to the Digital Crown. The experience felt completely different (and boring) on Apple Watch Series 3 (and prior generations). With haptic feedback, the scrolling experience felt so satisfying 😉
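For the curious, the pagination setup boils down to one call. This is a minimal sketch, assuming a hypothetical `Playlist` model and a storyboard interface controller with the identifier "PlaylistPage":

```swift
import WatchKit

// Hypothetical model for this sketch.
struct Playlist {
    let name: String
    let artworkURL: URL?
}

func showPlaylistPages(_ playlists: [Playlist]) {
    // The .vertical orientation is what enables the Digital Crown–driven,
    // semi-3D page transition introduced in watchOS 4. Each playlist is
    // passed as the context for one page.
    WKInterfaceController.reloadRootPageControllers(
        withNames: Array(repeating: "PlaylistPage", count: playlists.count),
        contexts: playlists,
        orientations: .vertical,
        pageIndex: 0
    )
}
```

Each page controller then reads its `Playlist` from the context it receives in `awake(withContext:)`.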
The other fun interface component to create was the playlist rows in the Explore section. Since Spotify includes the playlist name on most of its playlist artworks, it’s possible to display a playlist without a separate name label, allowing the app to show more content on screen. The main problem here is the lack of a UICollectionView equivalent in WatchKit for third-party apps on the Watch. To get something that looks nice, I had to fall back to WKInterfaceTable and create a table row with two items, filling them accordingly.
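The workaround can be sketched roughly like this. The row controller class and outlet names here are hypothetical; "ArtworkPairRow" would be a row type defined in the storyboard with two `WKInterfaceImage` outlets side by side:

```swift
import WatchKit

// Hypothetical row controller: one table row pretends to be two grid cells.
class ArtworkPairRow: NSObject {
    @IBOutlet weak var leftArtwork: WKInterfaceImage!
    @IBOutlet weak var rightArtwork: WKInterfaceImage!
}

func fill(table: WKInterfaceTable, with artworks: [UIImage]) {
    // Each row holds a pair of artworks, so we need ceil(n / 2) rows.
    let rowCount = (artworks.count + 1) / 2
    table.setNumberOfRows(rowCount, withRowType: "ArtworkPairRow")
    for rowIndex in 0..<rowCount {
        guard let row = table.rowController(at: rowIndex) as? ArtworkPairRow else { continue }
        row.leftArtwork.setImage(artworks[rowIndex * 2])
        if rowIndex * 2 + 1 < artworks.count {
            row.rightArtwork.setImage(artworks[rowIndex * 2 + 1])
        } else {
            // Odd number of playlists: hide the trailing slot.
            row.rightArtwork.setHidden(true)
        }
    }
}
```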
Audio Playback & Streaming Content
There are multiple APIs on watchOS that provide audio playback functionality. Most of them suck, as they were introduced in the early days of the Watch and were designed with specific limitations. Things got a lot better when Apple finally made AVAudioSession (and its friends like AVAudioPlayerNode) available to third-party apps on watchOS. And with watchOS 5, third-party apps can now use AVAudioSession for audio playback in the background while maintaining complete control over how the audio is played. This opens up the possibility for third-party developers to create an audio player that does not frustrate the end user 😝
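A rough sketch of the watchOS 5 background-audio setup, simplified (error handling and buffer management omitted): the long-form audio policy plus the asynchronous activation call (which presents the route picker for AirPods and other Bluetooth audio) is what allows playback to continue in the background.

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

func startPlayback(buffer: AVAudioPCMBuffer) throws {
    let session = AVAudioSession.sharedInstance()
    // .longFormAudio is the policy that permits background playback on watchOS.
    try session.setCategory(.playback, mode: .default, policy: .longFormAudio)
    // On watchOS the session must be activated asynchronously; the system
    // may prompt the user to pick an audio route (e.g. AirPods).
    session.activate(options: []) { success, error in
        guard success else { return }
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: buffer.format)
        try? engine.start()
        player.scheduleBuffer(buffer, completionHandler: nil)
        player.play()
    }
}
```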
With Apple Watch gaining cellular connectivity on Series 3 (and later), and given Spotify’s wide collection of dynamic playlists, offering audio streaming over cellular is something I believe is really important for a Spotify player.
This is where things start to get tricky, and a lot of people say Apple doesn’t allow third-party apps to provide streaming capability. Since the AVAudioSession-related APIs can only play local files or raw PCM buffers, the typical way for iOS developers to offer audio streaming is Audio File Stream Services from AudioToolbox. That API lets developers feed in raw data and get back PCM buffers that can be supplied to AVAudioPlayerNode for playback. The problem is that AudioToolbox is not available on watchOS, so third-party developers can’t easily do audio streaming on watchOS the way they do on iOS. I don’t believe the lack of (a public API for) AudioToolbox on watchOS is Apple trying to prevent developers from providing audio streaming in third-party apps; rather, the pace of new API releases hasn’t caught up with the pace of old API removals.
Since we can’t get audio streaming for free from the OS, a third-party app that wants to stream audio from the internet has to bundle its own audio decoder. Spotify serves audio in the Ogg Vorbis format, for which the decoder implementation is really lightweight (super nice!). Once the decoder is ready and the pipeline is configured — voilà, you have audio streaming on Apple Watch!
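The glue between a bundled decoder and AVAudioPlayerNode looks roughly like this. `OggDecoder` and its `nextPCMFrames()` method are hypothetical stand-ins for whatever decoder you bundle; the real pipeline would handle multiple channels and run off the main thread:

```swift
import AVFoundation

// Hypothetical decoder interface: returns one channel of decoded
// float samples per call, or nil when the stream is exhausted.
protocol OggDecoder {
    func nextPCMFrames() -> [Float]?
}

func pump(decoder: OggDecoder, into player: AVAudioPlayerNode, format: AVAudioFormat) {
    while let frames = decoder.nextPCMFrames() {
        guard let buffer = AVAudioPCMBuffer(
            pcmFormat: format,
            frameCapacity: AVAudioFrameCount(frames.count)
        ) else { break }
        // Copy decoded samples into the buffer (mono shown for brevity).
        frames.withUnsafeBufferPointer { src in
            buffer.floatChannelData?[0].update(from: src.baseAddress!, count: frames.count)
        }
        buffer.frameLength = AVAudioFrameCount(frames.count)
        // scheduleBuffer queues buffers back-to-back; the node plays them in order.
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
}
```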
There are other interesting challenges on the Spotify side of things, but that’s a different story, maybe for a different time…
PS: if you really want to stream something without bringing your own decoder, there are hacky ways to achieve it with public APIs. For example: create an AVAudioFile from a temp file you write your streaming data to, and pass that audio file to AVAudioPlayerNode. When playback reaches a point you haven’t buffered yet, you’ll get a playback error (or end of file), at which point you can schedule another AVAudioFile with the new position offset to continue once you’ve buffered more content. This actually works pretty well; I used this method in a personal podcast app for Apple Watch.
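That trick can be sketched as follows. Here `url` is the file your network layer keeps appending downloaded audio to; tracking the resume frame and detecting the stall are simplified:

```swift
import AVFoundation

// Schedule whatever portion of the growing file we have so far,
// starting from the frame where the last attempt stopped.
func playBufferedFile(at url: URL,
                      from startFrame: AVAudioFramePosition,
                      on player: AVAudioPlayerNode) throws {
    let file = try AVAudioFile(forReading: url)
    let remaining = AVAudioFrameCount(file.length - startFrame)
    player.scheduleSegment(file, startingFrame: startFrame,
                           frameCount: remaining, at: nil) {
        // We reached the end of what was written so far (or hit a
        // decode error). Once more data has been appended to the
        // file, call playBufferedFile again with the frame where
        // playback stopped to continue seamlessly.
    }
    player.play()
}
```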
While a lot of things have improved for watchOS development compared to a few years ago (more APIs, better hardware), there are still a lot of frustrations when it comes to making an app for watchOS.
First of all, debugging a watchOS app is still extremely painful. Wireless debugging simply does not work half of the time, and for the other half, two thirds of the time is spent waiting for the app to install (thanks, Swift 🙃)… Dear Cupertino, not everyone has the connector for the port on the bottom of the Watch, and the rest of us have no way to make Xcode install a watchOS app directly over a wire.
Next on the list: Apple really needs to revisit a lot of the decisions they made in the early days of Apple Watch.
For example, it appears that nsurlsessiond on the watchOS simulator includes an artificial delay to simulate the slow transfers between an iOS device and Apple Watch in the early days. However, as the hardware got faster, that delay does not appear to have been adjusted. As a result, testing network requests in the simulator is something like 20x slower than testing on a real Apple Watch, effectively making the simulator useless for this…
Revisiting WatchKit also falls into this category. WatchKit only made sense in the watchOS 1 era, when no developer binaries ran on Apple Watch itself. Nowadays it really limits what developers can do on the platform.
Then there is the limited background access and the dumb complications. Apple Watch is designed for light interactions, yet Apple does not allow applications to proactively preload content on a more frequent schedule. That decision may have made sense back when Apple Watch’s battery was barely enough to last one day, but the hardware has changed a lot, and the battery now lasts much longer. Giving third-party apps more chances to preload data in the background could greatly improve the Apple Watch experience.
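For context, this is roughly all an app can do today: politely ask for an occasional background refresh, which the system may or may not grant, on its own schedule and within a tight budget.

```swift
import WatchKit

func scheduleNextRefresh() {
    // A request, not a guarantee: the system decides if and when the
    // WKApplicationRefreshBackgroundTask actually fires.
    WKExtension.shared().scheduleBackgroundRefresh(
        withPreferredDate: Date(timeIntervalSinceNow: 30 * 60), // ask for ~30 min out
        userInfo: nil
    ) { error in
        if let error = error {
            print("Could not schedule refresh: \(error)")
        }
    }
}
```

The granted task then arrives in the extension delegate’s `handle(_:)` method, where the app gets a short window to fetch and cache data.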