Apple WWDC 2024 Keynote Recap

Don’t be distracted by the OpenAI partnership — It’s Apple Intelligence all the way down

Richard Yao
IPG Media Lab
12 min read · Jun 12, 2024


Image credit: Apple

It’s not AI; it’s Apple Intelligence.

If one were to sum up Apple’s opening keynote at its much-anticipated 2024 Worldwide Developer Conference (WWDC) event in one sentence, that’d be the main takeaway.

It was never a question of whether Apple would add generative AI to its devices, but rather how it would do so. During the WWDC keynote on Monday, the Cupertino company finally unveiled its AI roadmap: it will integrate AI into its devices with the same cautious, user-centric, and privacy-minded approach that the company has previously taken with buzzy innovations ranging from wearables to 5G.

Yes, there’s a much-reported OpenAI partnership that will bring the latest ChatGPT model to Apple’s upcoming software for free. But in the grand scheme of things, that integration is secondary to what Apple is trying to achieve with Apple Intelligence.

Here’s what we think Apple Intelligence means for consumers, as well as what Apple’s other WWDC announcements, AI or not, mean for brand marketers.

AI Integrations, the Apple Way

After Bloomberg’s Mark Gurman confidently reported on Friday what Apple was set to announce, watching the keynote became a waiting game of confirmation. After wading through nearly an hour of more conventional software updates, Apple finally unveiled a new suite of AI-powered features under the branding of Apple Intelligence, which will be available across iPhone, iPad, and Mac with the upcoming OSes. Similar to how Apple chose to brand the Vision Pro headset as “Spatial Computing” and not “VR,” there were no buzzy clichés of “AI,” only “Apple Intelligence.”

Image credit: Apple

Unlike the chatbot-centric approach that many AI players have taken so far, Apple Intelligence features are deeply baked into Apple’s software layers to enhance the user experience. These tools include advanced text, emoji, and image generation capabilities that are seamlessly integrated into Apple’s native apps. For instance, users can expect more sophisticated text suggestions and the ability to generate custom images directly within the Notes and Keynote apps. There’s even a standalone Image Playground app that will allow Apple users to generate images on-demand based on text prompts. Bonus point for Apple: limiting the image outputs to three pre-set cartoon styles seems like a good way to circumvent the type of misuse that most text-to-image models have encountered.

Apple Intelligence features are deeply baked into Apple’s software layers to enhance the user experience.

Another important feature to note is that Apple Intelligence will automatically filter and group notifications based on their time-sensitivity and priority, meaning lower-priority notifications will be de-emphasized for delivery. Brands relying on app notifications to engage customers with offers may therefore face a marketing challenge: with lower-priority notifications reaching users less frequently or with reduced visibility, companies may need to adapt their strategies to ensure their messages are seen.
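For app developers, the existing UserNotifications framework already offers a way to declare how urgent a notification is, and it seems reasonable to assume this is one of the signals such a prioritization system could draw on, though Apple has not confirmed its exact inputs. A minimal Swift sketch using the documented interruption levels:

```swift
import UserNotifications

// Sketch: marking a local notification as time-sensitive so the system
// treats it as higher priority. Assumes notification permission was
// already granted; .timeSensitive also requires the Time Sensitive
// Notifications entitlement. Whether Apple Intelligence weighs this
// signal in its new prioritization is our assumption, not confirmed.
func scheduleShippingNotification() {
    let content = UNMutableNotificationContent()
    content.title = "Your order has shipped"
    content.body = "Track your package for delivery updates."
    // .passive, .active, .timeSensitive, and .critical are the
    // documented interruption levels (iOS 15 and later).
    content.interruptionLevel = .timeSensitive

    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
    let request = UNNotificationRequest(identifier: UUID().uuidString,
                                        content: content,
                                        trigger: trigger)
    UNUserNotificationCenter.current().add(request)
}
```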

More importantly, Siri received its widely expected and much-needed LLM boost. Apple updated its voice assistant to be contextually aware of not only what’s happening on the screen, but also of users’ personal data found across all of their apps. Plus, users will be able to text Siri, just as one would with ChatGPT or other LLM-powered AI assistants on the market.

The revamped Siri, complete with new visual branding, may still sound like the old Siri, but it will be able to carry out complex actions both within and across apps at the user’s request, thanks to an on-device semantic index. For example, Apple demoed a use case where Siri pulls updated flight information and restaurant reservations out of emails and text messages to contextually answer logistics questions.
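These in-app actions are built on Apple’s existing App Intents framework, which apps use to expose their capabilities to the system. As a rough illustration, here is a minimal Swift sketch of how an app might surface one such action; the intent, its parameter, and the dialog are hypothetical, and exactly how the new Siri orchestrates intents across apps is not something Apple has fully detailed:

```swift
import AppIntents

// Hypothetical intent: an app exposing a "show reservation" action
// that Siri could invoke as one step of a larger, cross-app request.
struct ShowReservationIntent: AppIntent {
    static var title: LocalizedStringResource = "Show Reservation"

    @Parameter(title: "Restaurant Name")
    var restaurantName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would look up the reservation and surface it here.
        return .result(dialog: "Here is your reservation at \(restaurantName).")
    }
}
```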

All of these are compelling AI use cases that only Apple can deliver, because it has access to personal data typically locked away in apps. In contrast to the AI chatbots available on the market that are far more focused on broader real-world knowledge, Apple is interested in leveraging LLMs to deliver a far more personal and functional experience.

Instead of focusing on broader real-world knowledge, Apple is leveraging LLMs to deliver a far more personal and functional experience.

As for those broader questions that Apple Intelligence is not capable of answering, Apple seems happy to outsource them to third-party partners, starting with OpenAI. This non-exclusive partnership will give Apple users free access to ChatGPT, powered by the latest GPT-4o model. The experience is clearly delineated from Apple Intelligence: users will be asked for permission before their queries are sent to ChatGPT, ensuring transparency and control over their data. OpenAI’s blog also outlined additional ChatGPT tools like image generation and document understanding embedded into Apple’s software.

Image credit: Apple
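To make the hand-off concrete, here is a hypothetical Swift sketch of the consent gate described above. None of these types or functions are Apple API; they simply illustrate a query leaving the device only after explicit user approval:

```swift
// Hypothetical illustration of the opt-in flow: on-device handling
// first, with an explicit permission prompt before anything is sent
// to ChatGPT. All functions below are placeholder stubs, not Apple API.
func answerLocally(_ query: String) -> String? { nil }         // stub
func sendToChatGPT(_ query: String) async -> String { "…" }    // stub

func route(_ query: String,
           askPermission: (String) async -> Bool) async -> String {
    if let answer = answerLocally(query) {
        return answer  // handled by Apple Intelligence on device
    }
    // Broader world-knowledge question: ask the user before it leaves the device.
    guard await askPermission("Use ChatGPT to answer this question?") else {
        return "Request cancelled."
    }
    return await sendToChatGPT(query)
}
```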

And ChatGPT won’t be the only AI model that Apple plans to integrate into its ecosystem. In post-keynote discussions, Craig Federighi, Apple’s head of software engineering, hinted at potential collaborations with other AI models, such as Google’s Gemini. Such integrations would provide users with even more options and flexibility in how they utilize AI across their Apple devices.

Furthermore, the possibility of incorporating AI from startups like Perplexity AI and Anthropic suggests a future where Apple devices could support a diverse range of AI technologies that users can freely choose from. If that comes to pass, Apple could also leverage its position as the hardware gatekeeper to take a cut when users sign up for paid AI subscriptions. It seems safe to assume that, as with search, Apple will whittle down the share of queries that have to go out to third parties over time.

Naturally, Apple is positioning the Apple Intelligence features, which will only run on Apple devices with Apple Silicon chips and 8GB or more of RAM, as a key differentiation point for its hardware devices. One could read this cynically as a way for Apple to push more users to buy new devices, but in reality, older devices simply don’t have the processing power and memory to handle Apple Intelligence tasks locally on device, which is crucial to Apple’s privacy goals.

Apple is positioning the Apple Intelligence features as a key differentiation point for its hardware devices.

AI Privacy as a Differentiation Point

Apple has long made data privacy a key tenet of its value proposition, and its approach to integrating generative AI is no different. Most of the Apple Intelligence features in its native apps will be processed on-device, and contrary to what Elon Musk baselessly claimed, they are built predominantly on Apple’s own foundation models, which were trained on licensed data and built into the latest software.

Image credit: Apple

For the tasks that do require more processing power, Apple has built its own private cloud infrastructure, named Private Cloud Compute, to handle them securely. According to analyst Ben Bajarin, Apple’s private cloud infrastructure is entirely powered by Apple Silicon and runs on 100% renewable energy. Apple also said it will provide cryptographic guarantees of privacy, minimizing the risk of data breaches and unauthorized access, and promised that user data processed on Private Cloud Compute would never be stored or made accessible to anyone, not even Apple itself.

Apple Intelligence represents a significant leap for AI within consumer tech, emphasizing deep integration and user privacy without sacrificing broader developer opportunities. To that last point, Apple has provided APIs for third-party developers, opening the door for a wide array of apps that can integrate these advanced AI capabilities. This move is expected to spur innovation and offer users even more intelligent experiences across their Apple devices.

As these features roll out with iOS 18, iPadOS 18, and macOS Sequoia in the fall, we will be fully entering the scaling phase of this AI arms race. At the moment, Apple’s main competitors remain Google and Microsoft, both of which have taken a similarly integrated approach to bringing AI to the consumer market. Announced last month at its I/O developer event, Google is aiming to integrate Gemini into Android phones, and Microsoft is attempting much the same with its Copilot+ PCs.

Yet, as the recent backlash against Microsoft around the AI-powered Recall feature shows, integrating AI into personal devices requires a delicate touch. In that regard, Apple is right on the money with its privacy-focused messaging. Overall, Apple Intelligence is an impressive yet safe approach for Apple to finally jump on the AI bandwagon and start leveraging the recent AI breakthroughs to improve its user experience and further differentiate its products.

Integrating AI into personal devices requires a delicate touch, and Apple is right on the money with its privacy-focused messaging.

Beyond Apple Intelligence — Subtle AI Features across the OSes

Before Apple got to the Apple Intelligence features, the company spent the first hour of the keynote going through its latest operating system upgrades — iOS 18, iPadOS 18, and macOS Sequoia. Notably, all of them have integrated features powered by machine learning to enhance the user experience without explicitly branding these features as “AI.”

These OS updates offer significant customization options, advanced privacy controls, and enhanced functionalities. For instance, Safari Highlights uses machine learning to recap web pages into easy-to-read summaries, while the revamped Photos app intelligently organizes pictures into collections, providing users with more personalized experiences. Plus, advanced mail categorization will utilize machine learning to help users declutter their inboxes.

Safari Highlights | Image credit: Apple

Meanwhile, the watchOS updates introduce advanced health tracking, customizable training modes, and a new Vitals app, employing sophisticated algorithms to monitor and analyze health data. Similarly, new customization options will leverage machine learning to surface the personal photos best suited for display as a watch face.

Apple has also enhanced its ecosystem with new AirPods features like head gestures, voice isolation, and personalized spatial audio, relying on machine learning to deliver improved audio quality and intuitive controls. New handwriting-centric features enabled by Apple Pencil on iPadOS 18 use machine learning to improve handwriting recognition and enhance note-taking.

A key difference between these OS-level features and the aforementioned ones enabled by Apple Intelligence is that the former are not restricted to newer devices with Apple Silicon chips. Users of any Apple device that supports the forthcoming OS updates will be able to access these improvements. Apple’s subtle integration of machine learning features across its latest OS updates demonstrates the company’s commitment to enhancing the user experience without overtly marketing these advancements as AI. That’s a lesson that many AI-curious brands would benefit from.

Enhanced Immersion with visionOS 2

The Vision Pro ecosystem continues to grow, with over 2,000 apps already made for the spatial computing platform. Notable apps include the NBA app, Marvel’s What If…?, and the Kung Fu Panda experience, all of which received a nod during the keynote. These immersive apps showcase the diverse range of content available on Vision Pro, from entertainment and gaming to educational and professional tools.

Apple’s visionOS 2 introduces a plethora of enhancements for the Vision Pro, significantly enriching the user experience through new app integrations, advanced features, and expanded geographical availability.

Image credit: Apple

One of the standout features of visionOS 2 is the ability to create spatial photos from regular photos. Additionally, the new SharePlay feature in the Photos app with spatial personas offers a unique way for users to share and enjoy photos together, making virtual gatherings more interactive and engaging.

In addition, visionOS 2 will introduce new navigation gestures, making it easier for users to see important information at a glance, like the current time and battery level, and perform actions like adjusting the volume. Furthermore, the support for larger and higher resolution Mac displays provides users with a more expansive and detailed visual workspace, ideal for creative professionals and power users.

For developers, visionOS 2 brings new frameworks and APIs that enable them to create more sophisticated and immersive applications. The advanced volumetric APIs allow two apps to be used side by side, enhancing multitasking capabilities. The new TabletopKit framework facilitates the anchoring of apps to flat surfaces, making it easier to integrate digital content into the physical world. Enterprise APIs introduced in visionOS 2 open up new possibilities for business applications, allowing for more efficient and effective use of the Vision Pro in professional environments.

For developers, visionOS 2 brings new frameworks and APIs that enable the creation of more sophisticated and immersive applications.
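As a concrete illustration of what building for these volumetric scenes looks like, here is a minimal SwiftUI sketch. The app and view names are hypothetical placeholders; .windowStyle(.volumetric) and .defaultSize are the documented visionOS APIs for declaring a bounded 3D volume, a foundation that visionOS 2’s volumetric enhancements build on:

```swift
import SwiftUI

// Minimal visionOS app declaring a volumetric window, the scene type
// behind the volumetric experiences mentioned above. BoardGameApp and
// GameBoardView are hypothetical placeholders.
@main
struct BoardGameApp: App {
    var body: some Scene {
        WindowGroup(id: "game-board") {
            GameBoardView()
        }
        .windowStyle(.volumetric)  // bounded 3D volume instead of a flat window
        .defaultSize(width: 0.6, height: 0.3, depth: 0.6, in: .meters)
    }
}

struct GameBoardView: View {
    var body: some View {
        Text("Game board goes here")  // placeholder for real 3D content
    }
}
```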

A platform is only as good as the content it delivers. To that end, Apple is making it easier to create spatial content, with support from industry leaders like Canon, which is offering a new spatial lens for the EOS R7. Vimeo’s added support for spatial video further expands the distribution of immersive content. Additional collaborations with content creators such as Blackmagic Design and Red Bull have led to the development of immersive video experiences in the 180-degree 8K format. Furthermore, Apple is set to produce more new scripted features and branded video content, demonstrating a commitment to expanding its immersive video library.

Red Bull-branded immersive video on Vision Pro | Image credit: Apple

For what it’s worth, Apple did not say when Apple Intelligence will make the jump to visionOS, but one would imagine that, when the timing is right, bringing the supercharged Siri into the spatial computing platform could further enhance the intuitive user experience of Vision Pro.

Vision Pro’s reach is set to grow with its upcoming releases in additional countries, starting with China, Japan, and Singapore. This expansion will allow more users around the world to experience the innovative features of Vision Pro, driving adoption and fostering a global community of users and developers.

Noteworthy Tidbits for Brand Marketers

To wrap things up, let’s look at some smaller tidbits that, despite not getting stage time during the opening keynote, will nonetheless directly impact digital marketers looking to reach Apple users.

First up, Apple is updating how ads are tracked on iOS devices: SKAdNetwork is being replaced by a new AdAttributionKit framework focused on privacy-preserving ad attribution. Media buyers can watch this video from Apple to learn how AdAttributionKit supports re-engagement, click-through attribution, JWS-formatted impressions and postbacks, and more.
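For context on the “JWS-formatted” part: a JWS is a signed token of the form header.payload.signature, with each segment base64url-encoded. The generic Swift sketch below decodes the payload segment for inspection; it is not AdAttributionKit API, and real postbacks would need their signatures cryptographically verified, not merely decoded:

```swift
import Foundation

// Generic sketch: decoding the payload segment of a JWS string
// (header.payload.signature, each segment base64url-encoded). This
// illustrates the token format only; production code must verify the
// signature against the issuer's public key before trusting the payload.
func decodeJWSPayload(_ jws: String) -> [String: Any]? {
    let segments = jws.split(separator: ".")
    guard segments.count == 3 else { return nil }

    // base64url -> base64: swap URL-safe characters and restore padding.
    var base64 = segments[1]
        .replacingOccurrences(of: "-", with: "+")
        .replacingOccurrences(of: "_", with: "/")
    while base64.count % 4 != 0 { base64 += "=" }

    guard let data = Data(base64Encoded: base64) else { return nil }
    return (try? JSONSerialization.jsonObject(with: data)) as? [String: Any]
}
```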

Second, there is a new Offers feature for the App Store, which will help app developers and brands win back churned customers with special offers. According to research firm Omdia, Apple’s ad revenue, most of which comes from App Store search ads, is expected to hit $7 billion in 2024, $1 billion more than in 2023. Apple even teased having “marketing assets generated for you” in the App Store, which might hint at a generative AI feature for developers to utilize.

Third, the aforementioned mail categorization feature meant to help users declutter their inboxes will likely have the side effect of burying marketing emails in the Promotions tab for even more users, as Gmail already does, potentially decreasing the visibility of and engagement with email campaigns. In response, marketers can try crafting more personalized and engaging content to entice users to actively seek out their emails.

Last but certainly not least, buried under the pending updates to the Apple Wallet app, which include Apple Pay in third-party browsers, tap-to-transfer Apple Cash, and revamped event tickets, is the ability to complete transactions with Apple Pay by scanning a QR code. Retailers and small businesses should look to quickly adopt and optimize these new payment features to gain a competitive edge by offering a convenient offline checkout experience. Per research by Capital One, 21.2% of American consumers aged 14 and older will use Apple Pay in 2024.

Image credit: Apple

Overall, this WWDC marks an important milestone for Apple as it officially enters the AI arena. As always, Apple is sticking to its long-time value propositions and bringing AI into its ecosystem on its own terms. The OpenAI partnership may have grabbed many headlines, but it’s the compelling use cases that Apple Intelligence is set to enable that will truly add value to the user experience and, in turn, help Apple capitalize on the AI revolution.
