now supports Dark Mode! It’s Gorgeous.

The team is freshly back from Apple’s Worldwide Developers Conference, known as WWDC (or “dub dub” for those on the ground), held this year in San Jose, California. As always, there were far too many announcements to list here, so let’s focus on the ones that have the team excited.

Michael J. Prichard, CEO

Two-year-old Holden trying the Mustache app, circa 2011.

My first WWDC was in 2011, when I was only three years in as a mobile developer. Back then, we had good apps, but as you can tell from the picture of my (then) 2-year-old, we were also building apps that seemed quite useless. They were not useless, though: they were us mobile developers figuring out what we could do with this technology. That experimentation led to the innovative apps that feel quite common today. Think Lyft, Instagram, your banking apps, cooking apps, work apps; the list is endless. Some question whether we have peaked; after all, what else can we do with our devices when they already seem to do everything?

Well, I am happy to report that Apple is not slowing down, and we have more innovation ahead in the world of apps and smart devices. In particular, advancements in Machine Learning and AI technologies such as Core ML 3, Create ML, and ARKit 3 give mobile development teams the tools they need to build immersive, personalized, privacy-first experiences for their customers. I would definitely take the time to watch Apple’s WWDC 2019 Platforms State of the Union for an overview.

Create ML. We like the improved Create ML because it not only adds more model types, but also makes it easier for any member of an app development team to train machine learning models with little or no data science knowledge. For example, you could task your UX/UI designer with organizing and labeling the data for, and training, the facial recognition model your app needs. This not only frees up your development team to focus on delivering the app code; a designer is probably also better suited to think through the data problem from the point of view of the user experience, which is exactly what you want!
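To make that concrete, here is a minimal sketch of what training an image classifier with Create ML looks like on a Mac. The folder paths and the "FaceData" dataset are hypothetical; Create ML expects one subfolder of images per label.

```swift
import CreateML
import Foundation

// Hypothetical dataset: FaceData/Train and FaceData/Test each contain
// one subfolder per label, filled with example images.
let trainingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/FaceData/Train"))

// Train the classifier; Create ML handles feature extraction and
// transfer learning under the hood.
let classifier = try MLImageClassifier(trainingData: trainingData)

// Evaluate against a held-out set, then export a Core ML model
// that can be dropped straight into an Xcode project.
let evaluation = classifier.evaluation(on: .labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/FaceData/Test")))
print(evaluation)
try classifier.write(to: URL(fileURLWithPath: "FaceClassifier.mlmodel"))
```

The point is that nothing here requires data science expertise: the hard decisions are about collecting, organizing, and labeling the images, which is exactly the work a designer can own.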

Next, Dr. Miriam Friedel, our Director of Data Science, breaks down a few more of our favorites.

Miriam Friedel, PhD

Privacy and security. During the WWDC Keynote on Monday, Craig Federighi spent a fair amount of time focusing on user privacy and security, including a number of measures to keep user data on-device instead of automatically shipping it back to the cloud. This is one of the main reasons that on-device Machine Learning can be so powerful: a customized user experience combined with data privacy and security.

Core ML 3. There are tons of exciting new capabilities in Core ML 3 that we can’t wait to unpack. In particular, we are extremely excited about on-device personalization, which Apple demonstrated in a simple, but extremely intuitive, grading app. Users now get a secure, on-device custom model without a lot of technical overhead. I am also nerding out about linked models. Let’s imagine you have two image classifiers built on the same underlying network (ResNet-50, maybe?) and customized via transfer learning. If you want to include both models in your app, you can link the same base model to each of the image classifiers. This is going to improve performance and save a ton of space on device. Best of all, each of the announced enhancements to Core ML 3 is compatible with Skafos. Watch the full WWDC session here.
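For a sense of what on-device personalization looks like in code, here is a hedged sketch using Core ML 3’s MLUpdateTask. The model must be compiled with updatable layers; `compiledModelURL`, `batchProvider` (the user’s on-device training examples), and `personalizedModelURL` are all assumptions for illustration.

```swift
import CoreML

// Assumed: an updatable, compiled model on disk and an MLBatchProvider
// built from the user's own examples (which never leave the device).
let updateTask = try MLUpdateTask(
    forModelAt: compiledModelURL,
    trainingData: batchProvider,
    configuration: nil,
    completionHandler: { context in
        // The personalized model is available on the update context;
        // write it back to disk so future predictions use it.
        try? context.model.write(to: personalizedModelURL)
    })
updateTask.resume()
```

Because the training loop runs entirely on the device, personalization and the privacy story from the keynote are two sides of the same feature.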

Swift for TensorFlow. On the plane out to San Jose, I spent some time reading the Why Swift for TensorFlow doc in the Swift for TensorFlow GitHub repository. In addition, there was a great talk about this at AltConf, given by Brett Koonce. Bottom line: as someone who has almost always done Machine Learning in Python (with some R sprinkled in!), I’m ready to give it a try in Swift. And, as Brett highlighted in his talk, this is a language iOS developers have already embraced. I think it will help bridge the gap from “I’m an iOS developer who is really interested in ML” to “I’m an iOS developer who now routinely delivers ML in my apps.” And with notebook support, you can use the Skafos SDK to easily deliver what you build.
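To give a flavor of the language, here is a minimal sketch of a model definition and a single gradient step in Swift for TensorFlow, based on the API as it stood around WWDC 2019. The layer sizes and the tensors `x` and `y` are placeholder assumptions.

```swift
import TensorFlow

// A tiny two-layer classifier; sizes are illustrative only.
struct Classifier: Layer {
    var hidden = Dense<Float>(inputSize: 4, outputSize: 10, activation: relu)
    var output = Dense<Float>(inputSize: 10, outputSize: 3)

    @differentiable
    func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {
        return input.sequenced(through: hidden, output)
    }
}

var model = Classifier()
let optimizer = SGD(for: model, learningRate: 0.01)

// One training step: differentiate the loss with respect to the model
// and apply the gradients. `x` (features) and `y` (labels) are assumed
// to be prepared elsewhere.
let (loss, grads) = valueWithGradient(at: model) { model -> Tensor<Float> in
    softmaxCrossEntropy(logits: model(x), labels: y)
}
optimizer.update(&model, along: grads)
print("loss:", loss)
```

For anyone coming from Python, the striking part is that differentiation is a first-class language feature rather than a library bolt-on.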

ARKit 3. Augmented reality is not only super cool; it has the ability to deliver incredibly powerful user experiences. I recently bought a house, and Wayfair’s augmented reality app is proving to be a really useful tool for me! Apple announced (and demonstrated) a number of advancements in ARKit 3 at WWDC last week, and I’m particularly excited to try the people occlusion tool.
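Enabling people occlusion is a small change in session setup. Here is a sketch; `arView` is an assumed, already-configured ARView or ARSCNView session host, and the feature only runs on devices whose hardware supports it, hence the capability check.

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()

// People occlusion: people detected in the camera feed are rendered
// in front of virtual content, with depth, so AR objects can pass
// believably behind them.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}

arView.session.run(configuration)
```

That one frame-semantics flag is what makes demos like the WWDC stage walkthrough possible: virtual furniture no longer floats in front of the person walking past it.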

In closing, the global smartphone market is demanding a more personalized, privacy-first experience from apps. WWDC 2019 demonstrated that Apple is committed to providing developers with many of the tools they need to accelerate the adoption of AI and Machine Learning. As such, now is the perfect time for companies and brands to leverage on-device, in-app AI. You have my word that your customers will find it just as gorgeous as Dark Mode.