Antaeus AR

The leading Spatial Computing and Augmented Reality inspiration, with writers from over 30 countries

Making a Comeback: Thoughts on the XR Industry and the 2025 Google I/O


Intro

For most of the past decade, XR has lurched from hype cycle to hype cycle — spectacular live demos, earnest developer enthusiasm, and long stretches where headsets gathered dust on closet shelves. Google’s keynote at I/O 2025 felt different. The company didn’t just unveil another headset; it introduced Android XR, a single operating layer that already runs on Samsung’s upcoming “Project Moohan” headset and a prototype pair of smart glasses. What elevates the announcement is Gemini, Google’s multimodal AI that listens through on‑device microphones, sees through outward‑facing cameras, and responds in natural language.

During the live demo, Gemini identified a book the wearer barely glanced at, translated a bilingual chat in real time, and reshaped plain‑text walking directions into a floating map as soon as the user’s gaze dropped toward the sidewalk. In that moment, XR stopped feeling like a science‑fair prototype and started behaving like an ambient assistant you’d keep on.

Google’s bet puts multimodal AI squarely at the center of XR’s value proposition, and the rest of the industry is sprinting in the same direction. Meta already ships voice‑controlled Ray‑Ban glasses built around its own assistant, and research projects such as Segment Anything hint at ever‑deeper scene understanding. Apple downplayed AI‑first narratives when Vision Pro launched, but behind the scenes, it is pushing Apple Intelligence into visionOS so that future headsets can reason about a space, not just render it. The common thread is context: once a device understands what you see, hear, and want, it transcends the label of “gadget” and becomes something closer to a companion.

The Present

Multi-Modal + Agentic AI: XR’s Killer App

One of the clearest takeaways from Google I/O 2025 is that multi-modal + agentic AI is the killer application poised to make XR indispensable. By multi-modal, I mean AI that fluidly combines vision, speech, text, and other environmental inputs to understand context and interact with natural, almost intuitive grace. By agentic AI, I mean AI systems that can understand complex user goals, reason about them, and proactively take multi-step actions to achieve them, often interacting with various tools and services. Google’s new Gemini AI is not just an add-on but is baked deeply into the fabric of Android XR devices. As showcased in the I/O demos, Gemini essentially becomes an ever-present assistant that shares your perspective through the device’s cameras and microphones, capable of tasks like real-time language translation appearing as subtitles in your vision, or understanding your surroundings to provide contextual information and assistance.
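
To make the “agentic” part concrete, here is a minimal Kotlin sketch of the perceive-reason-act loop such an assistant runs. Everything here (Percept, AgentAction, MultimodalModel) is a hypothetical stand-in for illustration, not an actual Android XR or Gemini API.

```kotlin
import kotlinx.coroutines.delay

// Hypothetical snapshot of what the device senses: a camera frame plus a
// speech transcript captured by the microphones.
data class Percept(val cameraFrame: ByteArray, val transcript: String)

// Hypothetical actions the agent can take in the wearer's field of view.
sealed interface AgentAction {
    data class ShowSubtitle(val text: String) : AgentAction
    data class ShowOverlay(val label: String) : AgentAction
    object DoNothing : AgentAction
}

// Stand-in for the multimodal model; a real integration would send the frame
// and transcript to a hosted model through its SDK or REST API.
interface MultimodalModel {
    suspend fun decide(percept: Percept): AgentAction
}

// The agentic core: perceive, reason, act, repeated continuously while worn.
suspend fun runAgentLoop(
    model: MultimodalModel,
    sense: () -> Percept,
    render: (AgentAction) -> Unit
) {
    while (true) {
        val percept = sense()              // camera + microphone snapshot
        val action = model.decide(percept) // multimodal reasoning step
        render(action)                     // draw subtitles or overlays in view
        delay(100)                         // throttle to roughly 10 cycles/s
    }
}
```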

In 2025 and beyond, expect multi-modal + agentic AI to become a core selling point of XR systems across the board. Apple’s Vision Pro, for instance, was an incredible piece of hardware and interface design at launch, but it didn’t initially emphasize proactive AI-assistant features to the extent Google is now showing. However, with “Apple Intelligence” becoming a broader theme, Apple is inevitably exploring deeper AI integration into its spatial computing paradigm. Meta, a dominant force in VR, has been vocal about infusing AI across its devices, already integrating image recognition and voice AI, with research projects hinting at sophisticated future AR assistants.

In summary, the move towards agentic AI in XR aligns with a broader technological shift where AI is transitioning from being a passive tool on screens to an active, perceptive collaborator in the physical world, often mediated through wearable devices like smart glasses. Android XR aims to be a primary platform for these embodied AI agents.

A Competitor Has Entered the Chat

I’ve watched XR ride several boom-and-bust curves. The booms always start the same way — new players, fresh silicon, or a forward-thinking form factor create a jolt of optimism. Busts follow when the novelty wears off or the tech can’t live up to its press. This time, Android XR has joined a maturing field that already includes Meta’s Horizon OS devices and Apple’s visionOS.

Google’s hand looks strong. Gemini supplies the kind of context‑aware reasoning developers have been wanting; the Play Store gives Android XR a back catalog of millions of 2‑D apps that can float in 3‑D from day one; and an open partnership model invites Samsung, Xreal, Lenovo, and others to build their own hardware without starting from scratch. Meta’s advantage is market share and a deep game library; Apple’s is premium hardware, world‑class displays, and an ecosystem famous for polish and privacy. Android’s opening offer is breadth: one code‑base, many devices, multiple price points. If Google executes, the company could bring XR to the mainstream the same way Android phones flooded the market after 2009.


The Future

Converging Paths, Diverging Purposes

The line between VR and AR is blurring — every VR headset now offers passthrough video, and every AR glasses project wants momentary full‑screen immersion. Yet in daily life, the two form factors are diverging in purpose. Headsets are evolving into the new “big laptop,” something you fire up for deep work, AAA games, or cinema‑scale movies. Glasses are becoming the “foldable phone,” always on, always ready to layer context onto the real world. Google’s one‑OS approach embraces the split: the same identity, apps, and AI follow you whether you’re heads‑down in Moohan or heads‑up on your morning commute.

In the near term, we will likely see this divergence continue: purpose-built devices for fully immersive XR versus lightweight devices for augmented reality. Each will iterate and possibly borrow features (e.g., headsets getting slimmer and more glasses-like; glasses improving their displays and fields of view). However, the long-term trajectory may be towards unification, or at least a seamless ecosystem. Google’s Android XR is explicitly aiming for continuity across form factors — the idea that your AI and apps (Gemini, Android apps, etc.) travel with you whether you’re using a headset or glasses. I would expect the same from Meta and Apple in the coming months.

This mirrors how we use phones, laptops, and smartwatches today for different contexts, but with synced data. If XR follows the pattern, a person might own a pair of smart glasses for daily use and a more powerful headset for heavy-duty immersive sessions, with both running the same platform and shared services. Google even hinted that consistency of the AI assistant across devices could be key to getting people comfortable with “wearing computers on their faces” — using familiar services like Gemini in a phone, then in glasses, then in a headset as needed.
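
As a thought experiment, that continuity could look something like the sketch below: one piece of session state that any device can save and restore, adjusted for its form factor. All of the names here are hypothetical; this is the pattern, not any vendor’s API.

```kotlin
// Hypothetical continuity layer: the same session state follows the user
// across headset, glasses, and phone.
enum class DeviceKind { HEADSET, GLASSES, PHONE }

data class SessionState(val activeApp: String, val assistantContext: List<String>)

class ContinuityStore {
    private var latest: SessionState? = null  // stand-in for a cloud sync service

    fun save(state: SessionState) { latest = state }

    fun restore(onto: DeviceKind): SessionState? {
        // Each form factor gets the same state, adjusted for its context:
        // glasses surface a lighter-weight "ambient" version of the same app.
        return latest?.let {
            if (onto == DeviceKind.GLASSES) it.copy(activeApp = "ambient:${it.activeApp}")
            else it
        }
    }
}
```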

In summary, VR and AR are increasingly two sides of the same coin, technically speaking, but from a user’s perspective, they will feel like different tools for different jobs for a while longer. The key for platforms like Google’s will be to ensure a continuity of experience between these tools. If you can put on a Samsung Android XR headset at home to play in an infinite virtual workspace, then switch to your Android XR glasses when heading out, and have the same assistant, apps, and data accessible (appropriately adjusted for each context), XR as a whole becomes more compelling.

Cross-Platform Compatibility Will Be the Sweet Spot

One of the biggest strategic questions in XR is how to attract developers and content. In the past, XR platforms often struggled with fragmented ecosystems — a game made for Oculus Rift wouldn’t run on HTC Vive without modification, AR apps built for HoloLens couldn’t run on Magic Leap, etc. The industry has responded with initiatives like OpenXR and WebXR, and by leaning on popular game engines like Unity and Unreal, which can export to multiple targets. Google’s approach with Android XR doubles down on this idea of cross-platform compatibility. Not only is Android XR built in collaboration with Qualcomm and Samsung to be an open platform, but it’s also leveraging the vast existing Android developer base and app library. In fact, Google confirmed that Android XR devices will “work with any mobile and tablet app from the Play Store out of the box.”
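
In practice, “one code-base, many devices” can be as simple as branching on a device capability at runtime. The sketch below uses the real Android PackageManager API, but the XR feature-flag string is my assumption for illustration; check the Android XR documentation for the actual constant on shipping devices.

```kotlin
import android.content.Context

// A single code path that adapts to flat or spatial presentation.
fun chooseLayout(context: Context): String {
    val isXrDevice = context.packageManager
        .hasSystemFeature("android.software.xr.immersive") // hypothetical flag
    return if (isXrDevice) {
        "spatial"  // e.g., place existing 2D panels in 3D space around the user
    } else {
        "flat"     // the same Activity renders as a normal 2D screen
    }
}
```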

One more cross-platform element is worth noting: integration with existing platforms. Google isn’t throwing away the smartphone ecosystem with XR; rather, it’s extending it. The Android XR glasses, for example, are designed to work “in tandem with your phone”, offloading computation and using the phone’s connectivity. This cross-device approach mirrors how the early Apple Watch relied on the iPhone. It lowers the barrier for adoption (you don’t need an entirely new standalone computer; your phone is the brain, the glasses are an accessory with sensors and a display).
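
Here is a rough sketch of that division of labor, with in-process channels standing in for the wireless link between glasses and phone; none of these names are real Android XR APIs.

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch

// Hypothetical split-compute pipeline: the glasses only capture and display,
// while the tethered phone does the heavy inference.
data class Frame(val jpegBytes: ByteArray)
data class Annotation(val label: String)

suspend fun glassesSide(
    toPhone: Channel<Frame>,
    fromPhone: Channel<Annotation>,
    capture: () -> Frame,
    display: (Annotation) -> Unit
) {
    coroutineScope {
        launch { while (true) toPhone.send(capture()) }       // cheap: camera capture
        launch { while (true) display(fromPhone.receive()) }  // cheap: draw overlay
    }
}

suspend fun phoneSide(
    fromGlasses: Channel<Frame>,
    toGlasses: Channel<Annotation>,
    infer: (Frame) -> Annotation
) {
    while (true) {
        val frame = fromGlasses.receive()  // expensive work stays on the phone
        toGlasses.send(infer(frame))       // which has the bigger battery and SoC
    }
}
```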

The bottom line: cross-platform compatibility — whether through engines, standards, or cloud services — will determine how quickly the XR ecosystem can grow. Google’s moves at I/O 2025 suggest a strong commitment to openness and interoperability, which could spur competition as others feel pressure not to let their platforms be walled gardens.

Developers, Developers, Developers!

The rivalry between open and closed tech ecosystems is as old as PC vs. Mac, or Android vs. iOS. Now, a similar dynamic is playing out in XR. On one side, we have relatively open ecosystems — Google’s Android XR, and to an extent Meta’s platforms — which invite multiple hardware makers, use open standards, and allow a degree of sideloading or alternate app stores. On the other side, closed ecosystems — epitomized by Apple’s visionOS — tightly integrate hardware and software and keep strict control over the user experience and app distribution. Each approach has its strengths and weaknesses, and their competition will shape the XR industry’s evolution.

Though closed platforms tend to produce a more integrated experience, that experience depends heavily on the platform owner. The recent example of Vision Pro and Apple Intelligence might have developers second-guessing a bet on such a platform, while Meta’s approach to developer support has helped Quest become successful. Google’s strategy with Android XR is clearly modeled on the latter: get many partners on board and achieve wide adoption through choice and scale.

From a platform success perspective, open vs closed will influence adoption. Meta’s relatively open approach (allowing sideloading and not tied to any one phone brand) helped Quest reach millions of users and become the leading VR platform. Apple’s closed approach, while it yields a premium product, inherently limits short-term adoption because only Apple sells the hardware (and at a high price). Apple might not mind — it often plays the long game of building a smaller but highly engaged user base that grows over time (much like Mac computers had fewer users than Windows PCs but strong loyalty and profitability). Google’s bet, however, is that an open approach can jump-start the XR market through volume and variety, making Android XR the default choice for manufacturers who don’t want to be left behind.

Privacy as a Competitive Feature

Privacy has always been an issue for the XR industry, because the technology fundamentally relies on perceiving the outside world, whether for object recognition or hand tracking. Google Glass ran into this controversy more than a decade ago: establishments banned Glass, and public opinion turned against it, branding users “Glassholes” for the perceived intrusion. Fast forward to 2025, and now we are talking about glasses and headsets that not only have cameras but also AI brains that can interpret and remember what they see. As seen in the demo, where Project Astra remembered the coffee shop the user had visited, the stakes are even higher.

Speaking of memory, the idea of an AI that remembers what’s important to you is both exciting and alarming. Imagine your AR glasses keep a log of things you’ve seen — signs, business cards, people’s faces, objects — and can recall them for you. It’s like having a second memory. This could be incredibly useful, but it also crosses into surveillance of the self. Are we comfortable with our devices recording our lives to that extent? And where is that data stored, and who has access? Apple’s Vision Pro, interestingly, approached privacy by keeping the user in control: apps get no access to the raw camera feeds of your environment, and eye-tracking data is processed on-device. Instead, Apple maps the room and lets apps query the 3D map in an abstracted way. We don’t know Google’s detailed approach yet, but given Android’s nature, it might allow broader camera use gated behind permissions.
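
On Android, the existing mechanism for that kind of gating is runtime permissions, and it seems plausible (my assumption, not something Google has detailed) that Android XR’s world-facing sensing would sit behind prompts like the standard one below.

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Standard Android runtime-permission gate: world-facing sensing only runs
// after the user explicitly grants camera access. Whether Android XR reuses
// this exact model is an assumption on my part.
const val CAMERA_REQUEST_CODE = 42

fun startWorldSensingIfAllowed(activity: Activity, startSensing: () -> Unit) {
    val granted = ContextCompat.checkSelfPermission(
        activity, Manifest.permission.CAMERA
    ) == PackageManager.PERMISSION_GRANTED

    if (granted) {
        startSensing()  // e.g., enable object recognition on the camera feed
    } else {
        ActivityCompat.requestPermissions(
            activity, arrayOf(Manifest.permission.CAMERA), CAMERA_REQUEST_CODE
        )
    }
}
```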

In summary, privacy is both a risk and a differentiator in the XR space. Google’s emphasis at I/O on testing for privacy signals that the company knows privacy can make or break the product. If Google can convince users (and the public) that “your assistant sees and hears what you do” without violating their privacy, and that the data is there to help you and not to feed advertising or surveillance, then people will be far more likely to embrace wearing these devices.

From Smarts to Charms

After the battle for “whose AI is smarter”, the battle for “whose AI is more likable” will begin. As seen with the struggles of Meta’s Llama 4 Behemoth and OpenAI’s GPT-5, as well as the diminishing returns of scaling models, LLMs might be starting to hit a wall in terms of sheer intelligence. An AI that is smart and helpful but comes off as cold and annoying might not be welcome to accompany us all day in AR glasses. On the other hand, an AI that users genuinely enjoy interacting with, one with a dash of personality and emotional intelligence, could become something people bond with. The natural next step is to make AI more personal and likable. OpenAI’s GPT-4.5 is a great example: while it is currently extremely expensive, it sets the tone for what’s to come.

As intelligence and information become a commodity, experience becomes the product. The user needs to trust the AI and ideally feel at ease with it — this is why Sesame AI became such a hit. That means the AI should show understanding of the user’s preferences, perhaps a bit of personality matching (an introvert might want a quieter assistant, an extrovert a more chatty one). In essence, AI needs social IQ, not just IQ. The intelligence of our XR devices will soon be table stakes; what will matter is how that intelligence is delivered. An XR platform that masters the art of a charming, trustworthy AI guide will likely engender more loyalty and daily use. Google seems to recognize this, as do its competitors. The race is not just to build the most advanced AR glasses or VR headset, but to build the most advanced relationship between humans and technology. XR, being immersive and personal, is the ultimate stage for that relationship to play out. Everyone, welcome to the new age of human-computer interactions.
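
One plausible, very simple implementation of that “personality matching” is to condition the assistant’s system prompt on a stored user preference. A toy Kotlin sketch, purely illustrative:

```kotlin
// Illustrative only: persona selection via a prompt template keyed on a
// stored user preference. Names and strings are hypothetical.
enum class Temperament { INTROVERT, EXTROVERT }

data class UserProfile(val name: String, val temperament: Temperament)

fun buildSystemPrompt(profile: UserProfile): String {
    val style = when (profile.temperament) {
        Temperament.INTROVERT ->
            "Speak rarely, keep answers brief, and never interrupt."
        Temperament.EXTROVERT ->
            "Be conversational and proactive with suggestions."
    }
    return "You are ${profile.name}'s wearable assistant. $style"
}

fun main() {
    println(buildSystemPrompt(UserProfile("Sam", Temperament.INTROVERT)))
}
```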

TL;DR

Google’s introduction of Android XR and Gemini AI at I/O 2025 is a pivotal moment for the XR industry, emphasizing unified platforms, multimodal AI, and openness. Competition stays fierce, with Meta emphasizing social immersion and Apple highlighting premium, privacy-focused experiences. Ultimately, the XR market will reward platforms offering intelligent yet charming AI companions, seamless cross-device experiences, and trustworthiness.

Written by Jack Yang

Mixed Reality Engineer • Reimagining Reality with XR • https://jackyang.io/