At WWDC 2019, Apple announced a slew of updates, many of which have been on our wish-lists for a while. The WWDC 2019 schedule was also released after the keynote and there are some incredibly interesting talks lined up.
In this post, I will do a developer-focused recap of the announcements along with my thoughts and list relevant sessions for each topic that you can peruse to get a deeper understanding.
SwiftUI
- A declarative UI system built in Swift, fully compatible with existing code.
- Automatic support for spacing and insets, localisation, dark mode, data binding, interruptible animations, transitions, etc.
- Dynamic replacement — a fancy name for live preview — changes recompile instantly and can be previewed on the design canvas.
- Build multiple previews of SwiftUI views during development with different configurations such as large fonts, localizations, or Dark Mode.
- The syntax is concise, the resulting code debuggable and diff-able.
- Native on all Apple platforms.
- Requires iOS 13, not back-ported.
- Developers who are new to the platform find that one of the hardest parts of building an iOS app is understanding the lifecycle of a UIViewController and manipulating application state to get the desired behaviour. When we use a declarative system, the view lifecycle still exists. The difference is that decisions are not based on the lifecycle but are data-driven. With SwiftUI, we declare the content and state of our view. When the state changes, rather than mutating the existing view, the view is reconstructed — SwiftUI views are very lightweight (see the sketch after this list).
- Read more about declarative views in Matt Gallagher’s blog post — it was written pre-SwiftUI, but it is one of my favourite posts on declarative vs imperative views.
- Another key takeaway is SwiftUI’s focus on making basic features easy to implement. Creating simple user interfaces such as lists, complete with animations, auto layout, localisation, dynamic type and accessibility, seems much easier in SwiftUI and requires significantly fewer lines of code. And of course, the more you rely on system defaults for fonts, dynamic type, spacing, etc., the better the automatic features are likely to work.
- In particular, I am excited about the out-of-the-box animations and transitions. As working with designers has taught me, good animations can take your app from being functional to being simply delightful. However, they come at a cost: they are complex to build, difficult to maintain, and take more time than you would expect or can afford. I am curious to see how well the automatic animations work when using SwiftUI for more complex UIs.
- As I dive deeper into SwiftUI, I would also like to see how it compares with Flutter, which is a declarative, native, cross-platform framework that Google released in 2018.
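To make the data-driven idea above concrete, here is a minimal sketch of a SwiftUI view. The view and its state are illustrative, not taken from Apple's demos: toggling `isOn` causes SwiftUI to recompute the body rather than mutate the existing view.

```swift
import SwiftUI

// A minimal, illustrative SwiftUI view. The view declares its content as a
// function of its state; changing `isOn` causes the body to be recomputed.
struct ToggleLabelView: View {
    @State private var isOn = false

    var body: some View {
        VStack {
            Text(isOn ? "On" : "Off")
            Toggle("Enabled", isOn: $isOn)
        }
        .padding()
    }
}
```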
Relevant sessions:
- Introducing SwiftUI
- Integrating SwiftUI
- Data flow through SwiftUI
- SwiftUI on watchOS
- Accessibility in SwiftUI
- SwiftUI on all devices
Core ML 3
- On-device model personalisation: models bundled into apps can be updated with user data on-device, helping personalise the models to the user while maintaining privacy.
- Significant updates to Vision and NLP frameworks, and a new on-device Speech and Sound Framework.
- Image saliency (Vision) — heat map of the salient features of an image, i.e. the areas a user is most likely to focus their attention on.
- Text recognition and search in images (Vision) — see the sketch after this list.
- Word embedding (NLP) — understand the context of a word in the text and the relation to other words.
- Speech analysis (Speech and Sound) — pronunciation information, streaming confidence, utterance detection, and acoustic features.
- Create ML macOS app — workflows to build, train, test, and deploy machine learning models. Real-time feedback for model creation and training.
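The new text recognition support in Vision is exposed as a request type. Here is a hedged sketch of how it can be used on iOS 13; the `recognizeText` function and the source of `cgImage` are my own framing.

```swift
import Vision

// A sketch of Vision's new text recognition request (iOS 13).
// `cgImage` is assumed to be an image you already have in hand.
func recognizeText(in cgImage: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // Take the most confident candidate for each detected text region.
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string, candidate.confidence)
            }
        }
    }
    request.recognitionLevel = .accurate // trade speed for accuracy

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```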
Apple’s focus continues to be on on-device intelligence which is great from a privacy and security perspective. In my opinion, this year Core ML 3 has some of the most exciting updates and sessions. Check out the list below and see for yourself!
Relevant sessions:
- Create ML for Object Detection and Sound Classification
- Building Activity Classification Models in Create ML
- Create ML for Activity, Text and Recommendations
- Understanding Images in Vision
- Text Recognition in Vision
- Advances in Natural Language
- Advances in Speech Recognition
ARKit 3
- Apple Pay will be integrated with AR Quick Look (for eCommerce). Quick stat: Wayfair reported that users are 3.4x more likely to make a purchase when they have looked at products using AR Quick Look.
- Support for using the front and back cameras simultaneously. For example: interact with an AR scene generated using the back camera via face gestures.
- People occlusion — virtual objects are hidden and revealed as people move around in your AR scene. Quite hard to do from scratch, now available in ARKit (see the sketch after this list). Separately, face tracking now supports up to three faces at a time.
- Reality Composer: produce 3D content for AR experiences.
- RealityKit: framework for photo-realistic rendering, camera effects, animations, physics.
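Enabling people occlusion appears to be a matter of opting into the right frame semantics. A minimal sketch, assuming you already have an ARKit session to run (the function name is mine):

```swift
import ARKit

// A sketch of opting into ARKit 3's people occlusion.
func makeOcclusionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()

    // Person segmentation needs recent hardware, so check support first.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    return configuration
}

// Then run it on your view's session, e.g.:
// arView.session.run(makeOcclusionConfiguration())
```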
I am still taking a wait-and-see approach to real-world applications of AR beyond games and eCommerce. But the updates are still really cool and I am looking forward to having a play with ARKit 3.
Relevant sessions:
- Introducing ARKit 3
- Introducing RealityKit and Reality Composer
- Building apps with RealityKit
- Bringing people into AR
- Building Collaborative AR Experiences
- Advances in AR Quick Look
Sign In with Apple
- A social login option similar to “Login with Google” and “Login with Facebook” (see the sketch after this list).
- Apple won’t track user data.
- Authenticated with Face ID; users are logged in without sharing any information.
- Option to share a random email address with apps instead of user’s real email, forwards to real email. Each app gets a unique email address.
- Anti-fraud tech — a “real-user indicator” will be shared with apps, indicating whether Apple thinks the user is real or a bot.
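Starting the flow uses the new AuthenticationServices framework. A minimal sketch, assuming a view controller that conforms to the relevant delegate protocols (the helper function is my own framing):

```swift
import AuthenticationServices

// A sketch of starting Sign In with Apple (iOS 13), e.g. from a button tap.
func startSignInWithApple(
    from delegate: ASAuthorizationControllerDelegate & ASAuthorizationControllerPresentationContextProviding
) {
    let request = ASAuthorizationAppleIDProvider().createRequest()
    request.requestedScopes = [.fullName, .email] // the user may still relay a random email

    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = delegate
    controller.presentationContextProvider = delegate
    controller.performRequests()
}
```

On success, the delegate receives an `ASAuthorizationAppleIDCredential` containing a stable user identifier and, if granted, the (possibly relayed) email address.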
- This feature is very much in keeping with Apple’s stance on privacy. I particularly love that Apple provide the option to create a random email address (unique per app!).
- Adoption of Sign In with Apple may streamline the signup experience for a lot of apps, particularly with Face ID support.
- Adoption will also be quick as Sign In with Apple is going to be a mandatory feature for all apps that offer up third-party sign-in options.
- Apple made a very quick reference to verified email addresses, making me wonder if this is the start of their foray into the world of identity.
New Location permissions
- Apple have added a new option to allow once-only access to location, with the permission valid for a single session (see the sketch after this list).
- Allowing location only while using the app will still be an option. And if an app tries to access the user’s location while in the background, the user will be notified and given the option to always allow location for that app.
- Periodically, the user will be shown a summary of the location information shared with the app.
- Apple are also shutting down the ability to infer location by scanning Wi-Fi and Bluetooth signals.
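For developers, little should change in code: the once-only grant appears to surface as ordinary when-in-use authorisation for that session, so the standard request flow still applies. A hedged sketch (the class is illustrative):

```swift
import CoreLocation

// A sketch of requesting location under the new model. If the user taps
// "Allow Once", the app sees when-in-use authorisation for that session only.
final class LocationRequester: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func request() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
    }

    func locationManager(_ manager: CLLocationManager,
                         didChangeAuthorization status: CLAuthorizationStatus) {
        if status == .authorizedWhenInUse {
            manager.startUpdatingLocation()
        }
    }
}
```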
As a user, big thumbs up to this feature 👍. I welcome more transparency on how apps are tracking my location and the ability to grant one-time access to an app that otherwise doesn’t need my location.
As a developer, it is important to note Apple’s focus on protecting user privacy — it is evident in their push for on-device intelligence, Sign In with Apple, changes to location permissions, etc. Abuse of user data may very well result in apps being rejected during review.
watchOS 6
- Independent watchOS-only apps — a companion iOS app is no longer needed.
- watchOS-only apps will have access to location, heart rate and motion.
- watchOS App Store — browse, purchase and download apps directly on the watch.
- Streaming Audio API — stream audio in watchOS apps without an iPhone.
- Extended runtime API — watchOS apps designed for session-based use can now have an extended runtime and operate in the background (see the sketch after this list).
- Push notifications can now be sent directly to watch apps.
- SwiftUI support for building watchOS complications.
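Here is a hedged sketch of the extended runtime API on watchOS 6. The class is illustrative, imagining a session-based app such as a meditation timer:

```swift
import WatchKit

// A sketch of keeping a session-based watchOS app running with the new
// extended runtime sessions.
final class MeditationSession: NSObject, WKExtendedRuntimeSessionDelegate {
    private var session: WKExtendedRuntimeSession?

    func begin() {
        let session = WKExtendedRuntimeSession()
        session.delegate = self
        session.start() // the app can now keep running beyond the usual limits
        self.session = session
    }

    // MARK: WKExtendedRuntimeSessionDelegate
    func extendedRuntimeSessionDidStart(_ extendedRuntimeSession: WKExtendedRuntimeSession) {}
    func extendedRuntimeSessionWillExpire(_ extendedRuntimeSession: WKExtendedRuntimeSession) {}
    func extendedRuntimeSession(_ extendedRuntimeSession: WKExtendedRuntimeSession,
                                didInvalidateWith reason: WKExtendedRuntimeSessionInvalidationReason,
                                error: Error?) {}
}
```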
Apple have generally shipped nice apps for watchOS and added features with great utility while on the go — a calculator app, a voice memo app, audiobook compatibility, music streaming, etc.
However, I do not think that third-party watchOS apps have taken off as Apple or some users may have expected. That a companion iOS app is no longer required removes one of the barriers to entry. Whether that is enough to get more developers building watchOS apps remains to be seen.
iPadOS
- iPads now have their own OS, based on iOS.
- New Home screen with ability to pin widgets.
- Multitasking improvements with Slide Over and Split View.
- Productivity gestures for text selection, undo/redo, copy/paste, and scroll view scrubbing.
- Sidecar — use your iPad as a secondary display.
- iPadOS takes iPads closer to a desktop computer and Project Catalyst brings iOS apps to macOS. Details are slim, but Apple seem to be bridging the gap between their tablets and laptops.
- How does separating iPadOS from iOS affect all the universal apps that are out there? Will an app eventually run everywhere?
- Will there need to be separate deployment targets if iOS and iPadOS versions diverge? Or will they always go hand in hand since iPadOS is based on iOS?
- Will the apps need to be released separately?
Relevant sessions:
- PencilKit API
- Desktop-class Browsing on iPad
- Multiple Windows on iPad
- Introducing iPad Apps on Mac
- Taking iPad Apps for Mac to the Next Level
Accessibility
- Apple mentioned that there are quite a few updates in this area, but really only demonstrated one significant feature — Voice Control.
- Voice Control, as the name suggests, allows users to control apps using voice commands. There is also the ability to overlay a grid on the app to allow for precise region targeting.
- Leverages the Accessibility API.
Apple haven’t released much information about Voice Control. I am looking forward to more demonstrations in the upcoming sessions to understand what apps can do to support it.
Since it leverages the existing Accessibility API, Voice Control could even work out of the box if the app is already set up correctly for VoiceOver. Remains to be seen!
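To illustrate, the accessibility attributes that VoiceOver reads are presumably the same ones Voice Control uses to name targets. A minimal, assumed example:

```swift
import UIKit

// An icon-only button with no title. Giving it an accessibility label lets
// VoiceOver announce it, and should let Voice Control users say "Tap Share".
let shareButton = UIButton(type: .system)
shareButton.setImage(UIImage(systemName: "square.and.arrow.up"), for: .normal)
shareButton.accessibilityLabel = "Share"
```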
Dark Mode
- Users can choose to adopt a dark system-wide appearance.
- Semantic colours — in Dark Mode, the system uses a darker colour palette.
- Adaptive materials.
- Will Dark Mode be opt-in, or will apps need to be optimised for it before iOS 13 is made public? From Apple’s choice of words, i.e. “system-wide appearance”, it would seem the latter is true. However, even if it were opt-in, I recommend optimising for Dark Mode, because otherwise the app would stick out like a sore thumb on the user’s device and you would have an annoyed user on your hands.
- Just how much optimisation would be required? For instance, it is common practice for developers to set a view’s background colour to white instead of relying on the default UIView colour. How would such apps behave if Apple were to automatically enable Dark Mode? (A sketch of the semantic-colour fix follows this list.)
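My guess is that the answer to the hard-coded-white problem is the new semantic colours. A minimal sketch (the view controller is illustrative):

```swift
import UIKit

// A sketch of adopting semantic colours so a screen adapts to Dark Mode
// automatically, instead of hard-coding white.
final class ArticleViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground // white in light mode, near-black in dark

        // If one screen can't be optimised yet, it can opt out explicitly:
        // overrideUserInterfaceStyle = .light
    }
}
```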
Xcode 11
- New workflows in Xcode to support SwiftUI, including SwiftUI previews.
- Customisable editor layout — split any editor pane and view multiple editors at once.
- Better source editing — increased documentation support, and a minimap of source landmarks in a file, including marks and test failures, among others.
- Support for Swift packages (also supported by the GitHub package registry).
- Test Plans — one command, all of the tests!
- Simulate thermal and network conditions while running tests.
The SwiftUI workflows on Xcode 11 look pretty cool in the demo. I have downloaded the beta software and am looking forward to giving it a go.
Test Plans also sound fascinating. While most developers write tests 🙏, we often do not leverage Xcode tools like runtime issues, localisation simulations, runtime sanitizers, etc.
With test plans, we can run tests with different configurations, diagnostics, devices and simulators, under different device conditions, in parallel, with one command. They are also useful for automatically taking screenshots on different devices, both for reviewing visual defects and for App Store submission.
Siri Shortcuts
- Parameterised Siri shortcuts — parameters enable an interactive voice experience in Siri with follow-up questions, and allow people to customise shortcuts in the Shortcuts app (see the sketch after this list).
- Configurable third-party actions.
- Context sensitive questions.
- Ability to define triggers like geofences or time of day.
- Shortcuts with multiple steps, multiple apps.
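Parameterised shortcuts are built on custom intents. A hedged sketch of donating one, where `OrderCoffeeIntent` is a hypothetical intent class that Xcode would generate from an Intent Definition file:

```swift
import Intents

// A sketch of donating a parameterised shortcut to the system.
// OrderCoffeeIntent is hypothetical, generated from an .intentdefinition file.
func donateOrderShortcut() {
    let intent = OrderCoffeeIntent()
    intent.suggestedInvocationPhrase = "Order my usual"

    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            print("Shortcut donation failed: \(error)")
        }
    }
}
```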
With context-sensitive follow-up questions, third-party actions and multi-step, multi-app shortcuts, Apple brings to Siri some of the things that Google Assistant does really well. It remains to be seen how well this works in practice.
Metal
- Metal support in the iOS simulator.
- Advanced GPU processing (offloading more tasks to GPU).
- Improved graphics and compute potential for apps.
I haven’t had any exposure to Metal in the past as I haven’t done any work with games and 3D graphics. If you have any thoughts about this year’s updates, please reach out to us using the comments section below.
So those are my thoughts on the Platforms State of the Union.
We will continue to gain a deeper understanding of these topics as the conference progresses and the session videos start to become available.
We, at Digio, are certainly excited to try out these new features for ourselves and get some experiments going!