Beginner’s Guide to Developing for visionOS

Berke Turanlioglu
Published in Appcent
Mar 29, 2024 · 7 min read


Apple’s groundbreaking tech product, the Vision Pro, was released in February 2024, and we are still taking baby steps toward developing for it. If you feel like you are missing the visionOS train, do not worry. This article covers how to start coding for visionOS with SwiftUI.

Vision Pro blueprint as the cover of this article (source: Apple Developer website)

Note 1: I assume you have interacted with SwiftUI a few times before, because visionOS is not the right place to start learning SwiftUI for the first time. If you do not know anything about SwiftUI, you can start with simpler iOS or iPadOS apps :)

Note 2: Before we start, note that you need a Mac with an Apple Silicon chip (M1 or later). There are workarounds for Intel chips, but I do not recommend them since they are annoyingly laggy. Another prerequisite is Xcode 15 (and therefore macOS Ventura 13.3 or later).

Creating a new project

Let’s open Xcode and create a new visionOS project.

We can select Window and no immersive space for simplicity

Above is the Xcode window where we select the initial settings of our project. There are three new options compared to iOS projects, two of which are shown with red arrows. You can create the project with the Window initial scene and without an immersive space renderer. Now let’s try to understand what they mean.

Scenes in visionOS

There are three scene types in visionOS. First is the one we are all used to: Window. A Window scene is like an iPad or MacBook window, where everything lives on a 2D plane. Therefore it is the easiest to understand and code.

All three scenes have to be efficiently used for a good visionOS app

The second scene type is Volume. A Volume takes advantage of the 3D space around the user (e.g., their room) and places its content into the real world. This is new to us as developers, and it gives us a lot of room to be productive. The final scene type is Full Space, where reality blends with what the Vision Pro renders. In this scene, the user feels like they are in a different environment.

These scene types can of course change across the different views we design, so I think it is always safe to start with Window as the initial scene.
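To make the scene types a bit more concrete, here is a minimal sketch of how the first two are declared in an app file. The IDs and placeholder views are my own illustrations, not from the project we just created:

```swift
import SwiftUI

@main
struct DemoApp: App {
    var body: some Scene {
        // Window: a flat 2D pane, like an iPad window
        WindowGroup(id: "MainWindow") {
            Text("Hello from a plain 2D window")
        }

        // Volume: bounded 3D content placed into the user's room
        WindowGroup(id: "DemoVolume") {
            Text("3D content goes here")
        }
        .windowStyle(.volumetric)

        // Full Space is declared with ImmersiveSpace; we will build
        // one in the immersive views section later in this article.
    }
}
```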

Immersive Space Renderer

Similar to scenes, there are also three kinds of spaces that users can experience.

The bigger the space is, the more mesmerizing it is.

While the first one is similar to Window, the second and third ones are called “Immersive Spaces”. The user can gaze at a beautiful panoramic view, or literally feel inside of it. Since we selected the Window initial scene, we can skip this renderer part for now. We can also change it later as we develop.

Building the project

When we create the project, below are the view and the code that welcome us.

Crazy to think that you can preview what Vision Pro renders in real time.

On the left, the “Hello World” code is slightly different from our usual SwiftUI. It imports the new RealityKit support for visionOS, which is needed for the Model3D component in Line 15. Plus, we can also choose the window style for the Preview (Line 24). On the right, there is the classic canvas showing how our view would look on a Vision Pro inside a well-lit room.
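In case the screenshot does not load for you, here is a sketch that closely mirrors the template Xcode generates; the “Scene” asset and realityKitContentBundle come from the template’s bundled RealityKitContent package:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        VStack {
            // Loads the "Scene" model from the template's
            // RealityKitContent package and renders it in 3D
            Model3D(named: "Scene", bundle: realityKitContentBundle)
                .padding(.bottom, 50)

            Text("Hello, world!")
        }
        .padding()
    }
}

// The preview can also specify a window style for the canvas
#Preview(windowStyle: .automatic) {
    ContentView()
}
```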

What about the bottom of the canvas?

Well, luckily Xcode provides many options to play with the view on the canvas. From left to right:

  • Interact: lets us click buttons and other controls with the mouse.
  • Tilt: we can drag with the mouse to tilt the camera as a user would.
  • Pan: we can position ourselves in front of the window.
  • Orbit: we can move around the scene in 3D.
  • Zoom: we can zoom in and out inside the simulated room.

So, Xcode and the Vision Pro Simulator provide us with multiple ways to test our work if we do not have a Vision Pro (which most of us do not).

Spatial UI Design in SwiftUI

If we look at Vision Pro screen designs, all of them have a futuristic glassy look, which is called Spatial UI. It also comes by default on Vision Pro, i.e., we do not need any extra modifications to achieve this design. Our code can stay the same for iOS, iPadOS, and visionOS, and the design adapts accordingly, which is one of Apple’s greatest features.

Let’s talk about the differences that await us in SwiftUI for visionOS.

Ornaments

An ornament is the new way TabView and Toolbar components are presented: they float at the edge of a window. Let’s look at the screenshot below.

In Line 20, we use a new modifier called .ornament(). Its structure is similar to a toolbar, where we fill in the content part. Here, I wanted to make a small demo of a Text animation with scale-up and scale-down buttons. There is also a .glassBackgroundEffect() modifier in Line 32 for Vision Pro’s Spatial UI design. Without it, the two buttons below would look almost transparent.
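Here is a minimal sketch of the same idea; the text, anchor point, and button labels are my own choices rather than the exact code in the screenshot:

```swift
import SwiftUI

struct OrnamentDemoView: View {
    @State private var scale: CGFloat = 1.0

    var body: some View {
        Text("Hello, visionOS!")
            .font(.extraLargeTitle)
            .scaleEffect(scale)
            .animation(.easeInOut, value: scale)
            // Attach an ornament below the window, much like a toolbar
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Scale up") { scale += 0.2 }
                    Button("Scale down") { scale = max(0.2, scale - 0.2) }
                }
                .padding()
                // The Spatial UI glass look; without it the buttons
                // appear almost transparent
                .glassBackgroundEffect()
            }
    }
}
```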

Besides a toolbar, we can present a TabView with tab items as an ornament. An example is below.

Love it when label texts appear after hovering

The beauty of TabView here is that when the user hovers over it with their eyes (or the mouse for us developers), the labels expand and their texts appear. I think this is one of the most beautiful native uses of ornaments.

Note that using any view other than Label() creates a glitch where the icons never appear. So, it is effectively compulsory to use Label() there :)
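A small sketch of such a TabView follows, with two hypothetical tabs; on visionOS it is automatically rendered as an ornament on the window’s leading edge:

```swift
import SwiftUI

struct TabDemoView: View {
    var body: some View {
        // On visionOS, TabView renders as a leading-edge ornament
        TabView {
            Text("Home screen")
                .tabItem {
                    // Label is required here; other views make the
                    // icons disappear, as noted above
                    Label("Home", systemImage: "house")
                }

            Text("Favorites screen")
                .tabItem {
                    Label("Favorites", systemImage: "heart")
                }
        }
    }
}
```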

Adding 3D Models (Objects)

The rest of the Window components are mostly similar to the SwiftUI we know from iOS or iPadOS. Therefore, I want to switch to blending some 3D objects into these windows. Let’s use the “Volume” scene we mentioned earlier.

If you look back at the first code from when we created the project, there is a Model3D() component that can easily place a model in the middle of a view. However, this is a bit static and does not take full advantage of RealityKit. We can customize it further.

There is an amazing Apple model library that provides high-quality 3D models. I immediately grabbed the plane model with its pretty animation. However, Model3D() has no way to play the model’s animation. So, we have to use something called RealityView. With RealityView, we first construct the model inside an entity, then play its animation as below.
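A sketch of that pattern might look like this; the model name “ToyBiplane” is a placeholder for whatever USDZ file you add to your project:

```swift
import SwiftUI
import RealityKit

struct PlaneModelView: View {
    var body: some View {
        RealityView { content in
            // "ToyBiplane" is a placeholder; use your USDZ file's name
            guard let plane = try? await Entity(named: "ToyBiplane") else { return }

            // Position and scale via SIMD3 (x, y, z), in meters
            plane.position = SIMD3<Float>(0, 0, 0)
            plane.scale = SIMD3<Float>(repeating: 0.01)

            // Play the first animation bundled with the model, on loop
            if let animation = plane.availableAnimations.first {
                plane.playAnimation(animation.repeat())
            }

            content.add(plane)
        }
    }
}
```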

If you are not familiar with these 3D calculations (translation, scaling, and rotation), RealityKit handles them pretty easily; my OpenGL days are not that bright either :) You just enter the dimensions in SIMD3() and the rest is handled for you.

FYI: If you want to create or design more models, there is Reality Composer (and Reality Composer Pro) from Apple. You should definitely check them out, because they are aimed at people without any design experience, so they are easy to understand and use.

Constructing Immersive Views

Now that we are already somewhat experienced with 2D windows from iPhones and iPads, I want to talk a bit more about the 3D environment that Vision Pro provides. Let’s create an immersive space in which we can change our environment seamlessly.

This ImmersiveView is constructed with RealityView in Line 10. RealityView creates a 3D environment for the Vision Pro. Then, we load our panoramic image asynchronously with TextureResource in Line 13.

Later, we create a big sphere for our entity and texture it with the loaded image so that we feel like we are inside this artificial world. After a bit of scaling and translating, we finish by adding it to our RealityView content.
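Since the code itself is in a screenshot, here is a sketch of that pattern; the asset name “BeachPanorama” and the exact sizes are my assumptions:

```swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // "BeachPanorama" is a placeholder name for a 360° image asset
            guard let texture = try? await TextureResource(named: "BeachPanorama") else { return }

            // An unlit material so the image shows at full brightness
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))

            // A huge sphere that surrounds the user
            let sphere = Entity()
            sphere.components.set(ModelComponent(
                mesh: .generateSphere(radius: 1000),
                materials: [material]
            ))

            // Negative x-scale flips the normals so the texture
            // is visible from inside the sphere
            sphere.scale *= SIMD3<Float>(-1, 1, 1)

            content.add(sphere)
        }
    }
}
```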

Now we have to enable and disable this from our main view. The code is below.

With visionOS, new Environment values come in, which are in Lines 6–7. These help us manage the immersive views. visionOS is also strict about them, i.e., you cannot open two immersive views back to back; only one can be open at a time.

The abovementioned ContentView has two simple buttons that toggle two different immersive views: progressive and full space. These immersive views are opened asynchronously, since opening takes time and has to be done on the main thread. Therefore, they are run with Task in the button actions.
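A sketch of that ContentView might look like the following; the space IDs and the dismiss-before-open logic are my assumptions:

```swift
import SwiftUI

struct ContentView: View {
    // New visionOS environment values for managing immersive spaces
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    @State private var spaceIsOpen = false

    var body: some View {
        VStack(spacing: 20) {
            Button("Progressive space") {
                Task {
                    // Only one immersive space may be open at a time
                    if spaceIsOpen { await dismissImmersiveSpace() }
                    await openImmersiveSpace(id: "ProgressiveSpace")
                    spaceIsOpen = true
                }
            }

            Button("Full space") {
                Task {
                    if spaceIsOpen { await dismissImmersiveSpace() }
                    await openImmersiveSpace(id: "FullSpace")
                    spaceIsOpen = true
                }
            }
        }
        .padding()
    }
}
```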

Now, if you try to build this app, you will probably get errors when pressing the buttons on the screen, because these immersive views also have to be declared in the main app file.

In Lines 8–11, we add the progressive immersive view with its style, and in Lines 13–16 the full immersive view. Now we can build the app and see the results.
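Here is a sketch of that app file, reusing the hypothetical IDs from the ContentView above:

```swift
import SwiftUI

@main
struct VisionDemoApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        // Progressive: a half-sphere portal around the user
        ImmersiveSpace(id: "ProgressiveSpace") {
            ImmersiveView()
        }
        .immersionStyle(selection: .constant(.progressive), in: .progressive)

        // Full: the environment covers everything around the user
        ImmersiveSpace(id: "FullSpace") {
            ImmersiveView()
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}
```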

Immersive Space at its finest

Here, as you can see, just as we introduced at the beginning of this article, the progressive view is only a half sphere of what the user can experience, while full space really covers everything around the user. Thanks to this 360° captured image, we feel like we are on a beach.

Hope this helps and inspires you to start coding for visionOS. Happy coding 👨‍💻
