WWDC 2017 Recap: What’s new with Apple
By Chloe Horgan
This year I was lucky enough to attend Apple’s Worldwide Developers Conference (WWDC) in San Jose, and it ended up being one of the most exciting experiences of my life as a developer thus far. There’s really nothing like being physically present while Apple announces what’s new and exciting on its platforms for the coming year, and attending sessions to learn how to actually use those new technologies. When you’re not busy learning in sessions and workshops, there are also many opportunities to meet thousands of other iOS developers from all around the world. Not to mention all the fascinating fireside chat speakers (Michelle Obama!!) and lunchtime talks from inspirational figures in the tech world and beyond.
Here is my list of the most exciting new things announced at WWDC 2017:
New Things in iOS 11
As with every WWDC, one of the biggest announcements was a new version of iOS. Some of the most exciting iOS 11 features include: the new Files app, which lets users download and access files on the iPhone much like Finder on the Mac; peer-to-peer Apple Pay built right into iMessage; drag and drop within an iPhone app and between multiple apps on the iPad; as well as improvements and redesigns of the App Store, Control Center, and Siri.
It’s interesting to see more iMessage integrations allowing users to perform more and more tasks within a chat interface, as opposed to switching into a separate app. This year, Apple also unveiled the developer preview of Business Chat set to launch next year, which will allow customers to purchase products, track shipments, and speak to businesses through iMessage.
Another important change is iOS 11 on the iPad — aside from the powerful drag and drop functionality and the file system, you will now see a dock similar to what’s already on Macs. The iPad is moving more and more towards an experience that’s similar to a Mac.
Apple Takes on Amazon Echo & Google Home, or Not?
There were two significant hardware announcements at this year’s WWDC (although neither will be available until the end of the year): a new high-end iMac Pro and the HomePod. The beefy iMac Pro (starting at $4,999 USD) is Apple’s answer to a common user complaint: you can’t do VR work on a Mac. It’s nice to see Apple focus on creating hardware primarily for developers (albeit in this case specifically VR developers), particularly since many were disappointed with the specs of last year’s MacBook Pros.
The HomePod was exciting, but not much of a surprise considering the booming popularity of the Amazon Echo and Google Home smart speakers over the past year; many people assumed Apple would make a similar foray into the space. What was interesting, however, was that the HomePod was largely presented as a “home speaker” as opposed to a “home assistant”, with more focus on the speaker specs, making it more of a Sonos competitor than an Amazon Echo or Google Home one. Some speculate this may be because Siri currently lags behind the other two voice assistants, but we’ll see what happens when more information becomes available later in the year.
Augmented Reality (AR) Made Easy
Aside from new iOS functionality and improvements to Xcode, one of the things that generated a lot of excitement throughout the week was ARKit, Apple’s new AR framework.
ARKit works by using Visual Inertial Odometry (VIO) to accurately track the world around us, combining camera sensor data with CoreMotion data. Together, these help the device determine how it moves within a room with a high degree of accuracy, without any additional calibration. ARKit can detect horizontal planes, like tables and floors, allowing you to place objects on them, and it can track and place objects on smaller feature points as well. You can even add shadows to your AR objects: ARKit uses the camera sensor to estimate the amount of light in a scene, and will apply the correct amount of lighting to your objects. It’s only been a few weeks since WWDC and we are already seeing A LOT of fun projects from developers playing around with ARKit (check out #ARKit on Twitter!).
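To give a feel for how little code this takes, here’s a minimal sketch of starting a world-tracking session with horizontal plane detection. It assumes an `ARSCNView` outlet named `sceneView` set up elsewhere (e.g. in a storyboard); the class and method names are illustrative, not a complete app.

```swift
import ARKit
import SceneKit

class ARViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!  // assumed to be wired up in Interface Builder

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking uses VIO (camera + CoreMotion) under the hood
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
        // Let SceneKit light virtual objects using ARKit's light estimation
        sceneView.automaticallyUpdatesLighting = true
    }

    // Called whenever ARKit detects a new surface, such as a table or floor
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Detected horizontal plane with extent \(planeAnchor.extent)")
    }
}
```

From there, placing a virtual object is just a matter of adding an `SCNNode` at the detected anchor’s position.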
Machine Learning on Device
Another big announcement from this year’s WWDC was Apple’s new machine learning framework CoreML. There have been rumours about Apple including a dedicated AI hardware chip in their next batch of phones, so it’s interesting to see Apple focusing on getting machine learning and AI onto mobile devices, and making the technology more accessible for developers.
The idea is to make it super easy for developers to start doing common ML tasks (image recognition and natural language processing [NLP]) without the overhead and learning curve normally required. In fact, Apple provides two new frameworks out of the box, a Vision framework and an NLP framework, to handle these tasks with CoreML. All the APIs run on the device (as opposed to in the cloud). The advantages include savings on server and data costs, sensitive user information remaining on the device, and 24/7 availability of the app (it’s not reliant on network connectivity to perform machine learning tasks). Along with these frameworks, Apple also unveiled CoreML tools for converting pre-existing models from popular machine learning libraries (e.g. Keras, XGBoost, Caffe, libSVM and scikit-learn) into models that can be used by CoreML.
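As a sketch of what on-device inference looks like, here’s how the Vision framework can wrap a CoreML model for image classification. `MyClassifier` is a stand-in for any `.mlmodel` you’ve bundled into the app (for example, one converted with the CoreML tools mentioned above); it’s an assumption for illustration, not a model Apple ships.

```swift
import Vision
import CoreML

// Classify an image entirely on device — no network round trip required.
func classify(image: CGImage) {
    // MyClassifier is a hypothetical .mlmodel compiled into the app bundle;
    // Xcode generates a Swift class like this for any model you add.
    guard let model = try? VNCoreMLModel(for: MyClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns classification labels ranked by confidence
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("\(top.identifier): \(top.confidence)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Vision handles the image scaling and preprocessing the model expects, which is a big part of what makes CoreML so approachable.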
The clear takeaway from this year’s conference was Apple’s big push into AR, VR, and machine learning. Apple is focusing on ease of development and making these key technologies accessible, not only to developers but also to consumers, by bringing more of them to mobile. It’ll be interesting to see how WWDC’s announcements play into the hardware and device announcements expected in Fall 2017.
Chloe is a Lead Engineer at TribalScale. In her free time she enjoys travelling, craft beer, yoga, and obsessing about her cat.