Exploring Space with visionOS!

A guide to creating immersive space experiences on the Apple Vision Pro.

Piram Singh
14 min read · Nov 23, 2023

Imagine you live in a future where space travel is possible, but it costs you $150,000 to take a trip to the moon and back. After reading that price tag, you’d probably never even think about purchasing a ticket.

But …. what if there was a $150 experience that allowed you to immerse yourself in the depths of space and travel on a rocket to the moon?

(That sounds pretty cool to me! idk about you) To make this experience possible, we are going to need a few things:

  1. 💫 An Immersive Space: We need to make an immersive space that lets the user see from the perspective of an astronaut.
  2. 👨🏽‍🚀 Astronaut & 🚀 Equipment Information: We need to showcase the different astronauts & the different types of rockets that are going to be on the trip.
  3. 🎥 Video : We are also going to need a video to showcase the trip and get people excited about traveling to space with some amazing promotional material!

With these three requirements in mind, the best medium to build the experience is VR, and I decided to go with Apple’s visionOS! (check out my article on visionOS here)

Now, building this app you are going to need a good amount of knowledge in Swift & SwiftUI. I personally don’t have a lot of experience in Swift, so I learned along the way. Here’s how the app is broken down:

1. 🏠 Main Menu : This will be the landing page that allows navigation to three pages
Main Menu

2. 🧑🏻‍🚀 Astronaut Page : This has pictures + info about the four astronauts

Astronaut Page

3. 🚀 Equipment Page : This has the immersive spaces to view the rocket capsule & the full rocket in space

Equipment Page & Immersive experience

4. 🎞️ Mission Page : This page has the video stored for people to watch and get hyped up about the mission!

Mission Page

Alright, with that down, let’s do a deep dive into the code behind each page!

Note: Every page in the app has two main components: a backend file that stores the data (Swift) and a frontend file that formats the page (SwiftUI). I’ll break each page down into a backend section & a frontend section. Also, all this code is inspired by & replicated from this amazing video!

🏠 Main Menu

Backend

import Foundation

enum Area: String, Identifiable, CaseIterable, Equatable {
    case astronauts, equipment, mission

Let’s decode this. We first do import Foundation, which provides the basic building blocks for Swift files, so we are going to do this in all the Swift files. Next, we initialize an enum. An enum, or enumeration, is essentially a special data type declaration where we know all the cases beforehand.

A little confused? Here’s a quick example of using an enum: if we were building a movie app where we knew the contents would only ever be 5 Marvel movies, we would use an enum because the set of cases will never change. If the movies were changing every month, then we would model them with a struct, a data type whose values can change. Okay, let’s go back…
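To make that concrete, here is a small sketch of the movie example (the names and types are made up for illustration, not part of the app):

// A quick illustration: a fixed set of movies fits an enum,
// while data that changes over time fits a struct whose values you can swap out.
enum MarvelMovie: String, CaseIterable {
    case ironMan, avengers, blackPanther, endgame, noWayHome
}

struct MonthlyFeature {
    var title: String        // this can change every month
    var releaseYear: Int
}

let catalog = MarvelMovie.allCases                             // all five cases, known up front
var feature = MonthlyFeature(title: "Iron Man", releaseYear: 2008)
feature.title = "Avengers: Endgame"                            // struct data can be updated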

Once we set the enum up, we have to add some protocols to the enum to allow all the data to pass through. These are String, Identifiable, CaseIterable, and Equatable. String is the raw-value type, which gives each case a text value behind it. Identifiable is a protocol that lets SwiftUI tell individual items apart, which we need for dynamic views like the ones on the equipment pages. CaseIterable is a protocol that gives us a collection of all our cases (this will be useful for creating a good UX in the app). The Equatable protocol allows you to compare cases with each other, which will be necessary in this app.

    case astronauts, equipment, mission
    var id: Self { self }
    var name: String { rawValue.lowercased() }
    var title: String {
        switch self {
        case .astronauts:
            "Inspiration 4 mission crew members..."
        case .equipment:
            "Inspiration 4 mission equipment..."
        case .mission:
            "Inspiration 4 mission trailer..."
        }
    }
}

The next step in setting up the enum is to initialize the cases. Cases in this situation are the three different pages: astronauts, equipment, & mission. After that we add a couple of variables for each case: id, name, & title. id just refers to the case itself, name is the raw value all lowercased, and title is different for each case, hence the switch statement (which works like a chain of if/else checks here). That’s it for the backend on this page, let’s move to the frontend code!
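To see how those properties behave, here is a tiny usage sketch (assuming the Area enum above; the print statements are just for illustration):

// How the enum's protocols and computed properties get used later in the app.
let area = Area.astronauts
print(area.name)              // "astronauts" (the lowercased raw value)
print(area.title)             // "Inspiration 4 mission crew members..."

print(Area.allCases.count)    // 3, because CaseIterable collects every page for the menu
print(area == .equipment)     // false; Equatable lets us compare cases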

Frontend

import SwiftUI

struct NavigationToAreas: View {
    var body: some View {
        VStack {
            Text("Welcome To The Inspiration 4 Mission By SpaceX")
                .monospaced()
                .font(.system(size: 40, weight: .bold))
                .padding(.top, 250)

This file is called NavigationToAreas, which is fitting because it provides the navigation to every page. You start with import SwiftUI since this code is in SwiftUI. You then create a View struct, because this screen will change as you click from page to page. Next you set up a vertical stack of components where the first component is text, formatted with the .monospaced(), .font(), & .padding() modifiers.

            HStack(spacing: 25) {
                ForEach(Area.allCases) { area in
                    NavigationLink {
                        Text(area.title)
                            .monospaced()
                            .font(.system(size: 40, weight: .bold))

The next step is to set up an HStack, or horizontal stack, which uses the CaseIterable protocol from the backend via Area.allCases. You create three navigation links whose destinations all start with the title variable we initialized earlier, with the same formatting as before. The labels & buttons for all these links are initialized later in the code.

                        // sub-views go here
                        if area == Area.astronauts {
                            CrewArea()
                        } else if area == Area.equipment {
                            EquipmentArea()
                                .environment(ViewModel())
                        } else if area == Area.mission {
                            MissionArea()
                                .environment(ViewModel())
                        }

                        Spacer()

The next part of the code is essentially a chain of if/else checks: if the area clicked is astronauts, show the CrewArea() view; if the area clicked is equipment, show the EquipmentArea() view; and if the area clicked is mission, show the MissionArea() view. Pretty simple here. You do have to pass .environment(ViewModel()) for equipment and mission, as those two pages need to open and close other environments. (I’ll get to this later in the article.)

Spacer() pushes the navigation links toward the top so the screen looks evenly distributed.

                    } label: {
                        Label(area.name, systemImage: "chevron.right")
                            .monospaced()
                            .font(.title)
                    }
                    .controlSize(.extraLarge)
                }
            }
        }
        .background {
            Image("Inspiration4")
        }
    }
}

The last part of this just formats the buttons and adds labels to each of the navigation links. I also added the background image to the assets tab in Xcode from Dilmer’s GitHub repo here.
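One piece this walkthrough doesn’t show is where NavigationToAreas itself lives: NavigationLink only works inside a navigation container, so the app’s entry point presumably wraps it in a NavigationStack. Here is a minimal sketch of what that could look like (the ExploringSpaceApp name is my assumption, not a file from the project):

import SwiftUI

// Hypothetical app entry point: the main window hosts NavigationToAreas inside
// a NavigationStack so the NavigationLinks above have somewhere to push to.
@main
struct ExploringSpaceApp: App {
    var body: some Scene {
        WindowGroup {
            NavigationStack {
                NavigationToAreas()
            }
        }
    }
}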

From this let’s build the code for the Crew!

🧑🏻‍🚀 Astronaut Page

Backend

import Foundation

enum Crew: String, Identifiable, CaseIterable, Equatable {
    case jared, haley, chris, sian
    var id: Self { self }
    var name: String { rawValue.lowercased() }

    var fullName: String {
        switch self {
        case .jared:
            "Jared Isaacman"
        case .haley:
            "Hayley Arceneaux"
        case .chris:
            "Chris Sembroski"
        case .sian:
            "Dr. Sian Proctor"
        }
    }

For the crew’s backend, it’s the same logic as the Main Menu; we are just adding two new variables. The first is fullName, which uses a switch statement to go through every case and set the display name for the crew screen.

    var about: String {
        switch self {
        case .jared:
            "Jared Isaacman is the founder and CEO of Shift4 Payments (NYSE: FOUR), the leader in integrated payment processing solutions."
        case .haley:
            "When Hayley was 10 years old, one of her knees began to ache. Her doctor thought it was just a sprain, but a few months later, tests revealed Hayley suffered from osteosarcoma, a type of bone cancer."
        case .chris:
            "Chris Sembroski grew up with a natural curiosity about outer space. Stargazing late at night on the roof of his high school and launching high-powered model rockets in college cemented this passion."
        case .sian:
            "Dr. Sian Proctor is a geoscientist, explorer, and science communication specialist with a lifelong passion for space exploration."
        }
    }
}

The second variable we are adding is the about variable. Same logic as fullName, just replacing the strings with a quick bio of each astronaut. These bios were pulled from here.

Frontend

import SwiftUI

struct CrewArea: View {
    var body: some View {
        HStack {
            ForEach(Crew.allCases) { crew in
                VStack(alignment: .leading) {
                    Image("crew-\(crew.name)")
                        .resizable()
                        .frame(width: 180, height: 200)

For the frontend of Crew in the file CrewArea, we use the same starting logic from the NavigationToAreas() file and then open an HStack with the CaseIterable logic to build a VStack for each crew member. The code loops over every case and builds the matching image name (e.g. crew-jared), with all the images taken from the GitHub repo. Then there is some formatting with .resizable() and .frame().

                    Text(crew.fullName)
                        .font(.system(size: 32, weight: .bold))
                    Text(crew.about)
                        .font(.system(size: 20))
                }
                .frame(minWidth: 180, minHeight: 200)
                .padding(15)
                .glassBackgroundEffect()
            }
        }
        .padding(20)
    }
}

The next part is to call crew.fullName & crew.about into the view with some formatting for the text, for the images, and some general padding all around the screen.

Alright, let’s move on to the equipment code! This is where things get really interesting and the logic is a little confusing, but I got you 😉

🚀 Equipment Page

Backend

import Foundation

@Observable
class ViewModel {
    var navigationPath: [Area] = []
    var isShowingRocketCapsule: Bool = false
    var isShowingFullRocket: Bool = false
}

Remember how we talked about passing in .environment(ViewModel()) to the equipment and mission views? This is the backend for that, and it is what lets us open and close the immersive and volumetric spaces properly.

This uses the @Observable macro, which lets different views observe and change the same data. We initialize the class ViewModel for UI state with three variables: (1) navigationPath, (2) isShowingRocketCapsule, & (3) isShowingFullRocket. The latter two are Bools (true or false) and the first is initialized as an empty array of the Area type we made earlier.
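To make that flow concrete, here is a small sketch of the pattern: a parent injects the model once with .environment(...), and any child below it can read and mutate the same instance. The two views here are made-up examples, not files from the app:

import SwiftUI

// Hypothetical parent/child pair showing the @Observable + .environment pattern.
struct ParentExample: View {
    var body: some View {
        ChildExample()
            .environment(ViewModel())   // same pattern NavigationToAreas uses
    }
}

struct ChildExample: View {
    @Environment(ViewModel.self) private var model

    var body: some View {
        Button("Toggle capsule") {
            model.isShowingRocketCapsule.toggle()   // any view reading this value updates
        }
    }
}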

With this is mind, let’s look at the frontend!

Frontend

import SwiftUI

struct EquipmentArea: View {
    @Environment(ViewModel.self) private var model
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

At the beginning of EquipmentArea(), we have the same starting logic, but we need to set up 5 environment properties. The first one passes in the ViewModel so we can turn the different environments on & off. The other 4 are the actions that open and dismiss volumetric windows & immersive spaces.

    var body: some View {
        @Bindable var model = model

        HStack { // HSTACK BEGIN
            VStack {
                Image("equipment-capsule")
                    .resizable()
                    .frame(width: 300, height: 300)
                    .padding(20)

The next section of this code initializes a @Bindable variable so we can create bindings to the ViewModel’s properties (the $model syntax used below). We then move into an HStack with a VStack nested inside it. In the VStack, we have an image called equipment-capsule, which is formatted with .resizable(), .frame(), and .padding().

                Toggle(model.isShowingRocketCapsule ? "Hide Rocket Capsule (Volumetric)" : "Show Rocket Capsule (Volumetric)",
                       isOn: $model.isShowingRocketCapsule)
                .onChange(of: model.isShowingRocketCapsule) { _, isShowing in
                    Task {
                        if isShowing {
                            openWindow(id: "CapsuleRealityArea")
                        } else {
                            dismissWindow(id: "CapsuleRealityArea")
                        }
                    }
                }
                .toggleStyle(.button)
                .padding(25)
            }
            .glassBackgroundEffect()

This is where it gets a little tricky, but essentially all we are doing is creating the first Toggle for showing the rocket capsule, so there are two labels for it: “Hide Rocket Capsule (Volumetric)” and “Show Rocket Capsule (Volumetric)”. The way we chain the views together is through the .onChange modifier. Whenever the toggle’s Bool changes, we enter an if/else that calls the openWindow & dismissWindow environment actions with an id that has to match a window scene in the app, whose contents we will build in Reality Composer Pro!
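For those ids to do anything, the app’s scene list also has to declare a window called "CapsuleRealityArea" and an immersive space called "FullRocketRealityArea". The article doesn’t show that file, but extending the hypothetical entry point sketched earlier, it could look roughly like this (the window style and immersion style are my assumptions):

import SwiftUI

// Sketch of the extra scenes; the ids come from EquipmentArea, everything else is assumed.
@main
struct ExploringSpaceApp: App {
    var body: some Scene {
        WindowGroup {
            NavigationStack {
                NavigationToAreas()
            }
        }

        // Volumetric window that the capsule toggle opens with openWindow(id:)
        WindowGroup(id: "CapsuleRealityArea") {
            CapsuleRealityArea()
        }
        .windowStyle(.volumetric)

        // Immersive space that the full-rocket toggle opens with openImmersiveSpace(id:)
        ImmersiveSpace(id: "FullRocketRealityArea") {
            FullRocketRealityArea()
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}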

            VStack {
                Image("equipment-fullrocket")
                    .resizable()
                    .frame(width: 300, height: 300)
                    .padding(20)
                Toggle(model.isShowingFullRocket ? "Hide Full Rocket (Full Immersed)" : "Show Full Rocket (Full Immersed)",
                       isOn: $model.isShowingFullRocket)
                .onChange(of: model.isShowingFullRocket) { _, isShowing in
                    Task {
                        if isShowing {
                            await openImmersiveSpace(id: "FullRocketRealityArea")
                        } else {
                            await dismissImmersiveSpace()
                        }
                    }
                }
                .toggleStyle(.button)
                .padding(25)
            }
            .glassBackgroundEffect()

Same logic here, but instead of the capsule in a volumetric window, this toggle opens the full immersive space shown earlier in the article! Also note that the toggle style is set to .button and some padding has been added for both buttons.

Alright now let’s move into the two different Reality Scenes!

import SwiftUI
import RealityKit
import RealityKitContent

struct CapsuleRealityArea: View {
    var body: some View {

For this reality scene, CapsuleRealityArea(), we use the same starting logic with some new import lines: we are going to import RealityKit & RealityKitContent. These give us access to the content we build in Reality Composer Pro (it comes with the visionOS SDK, which you can download here).

        RealityView { content in
            guard let entity = try? await Entity(named: "Scene", in: realityKitContentBundle) else {
                fatalError("Unable to load scene model")
            }
            content.add(entity)

Then we write this guard statement (an early-exit check) inside the RealityView closure. It makes sure the scene loads and stops the app with an error message if it does not.
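If guard is new to you, here is a tiny example of the pattern outside of RealityView (the crewPhotoName helper is made up for illustration, reusing the Crew enum from earlier):

// guard either unwraps an optional or forces an early exit; after the guard,
// the unwrapped value is available for the rest of the function.
func crewPhotoName(for crew: Crew?) -> String {
    guard let crew else {
        return "crew-placeholder"    // early exit when there is no crew member
    }
    return "crew-\(crew.name)"       // crew is guaranteed non-nil from here on
}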

import SwiftUI
import RealityKit
import RealityKitContent

struct FullRocketRealityArea: View {
    @State private var audioController: AudioPlaybackController?

    var body: some View {
        RealityView { content in
            guard let entity = try? await Entity(named: "Immersive", in: realityKitContentBundle) else {
                fatalError("Unable to load scene model")
            }

The next reality file, FullRocketRealityArea(), uses the same starting logic but initializes a new variable, an AudioPlaybackController, so we can play the ambient audio in the immersive area!

            let ambientAudioEntity = entity.findEntity(named: "AmbientAudio")
            guard let resource = try? await AudioFileResource(named: "/Rocket/Space_wav", from: "Immersive.usda", in: realityKitContentBundle) else {
                fatalError("Unable to find space.wav file")
            }

The next part of this code is another guard statement that pulls the audio from its file path inside the Reality Composer Pro bundle, with an error message if it is not available.

            audioController = ambientAudioEntity?.prepareAudio(resource)
            audioController?.play()

            content.add(entity)
        }
        .onDisappear(perform: {
            audioController?.stop()
        })
    }
}

The last part of FullRocketRealityArea prepares the audio on the ambient audio entity and plays it when the immersive space opens, then stops it in .onDisappear when the space closes.

With all this code done let’s render the models using Reality Composer Pro!

Opening Reality Composer Pro from Xcode

We can open up Reality Composer Pro (RCP) by clicking our new file called “Scene”; there is a button labeled “Open in Reality Composer Pro”.

Once in RCP, add the capsule file, which you can find here. Set the position, rotation, & scale settings accordingly. Let’s move on to the Full Rocket immersive area!

This is what the immersive area looks like. Here are the settings you want to keep:

Earth Model Settings

🌍 Earth Visual Model : The earth model is embedded in RCP and should be scaled with the above settings to feel immersive.

Rocket Model Settings

🚀 Rocket Visual Model : The rocket model for the Falcon 9 can be downloaded from the GitHub repo and should be scaled with these coordinates. Run the app to test where it appears and move the rocket accordingly.

Particle Emitter

⚛️ Particle Emitter : From the rocket model, add a component called Particle Emitter with the following settings selected, and hit play to test it out.

Ambient Audio Settings

🔉 Ambient Audio : Download the Space.wav audio file from GitHub, create an ambient audio object, import the file into RCP, assign Space.wav to the ambient audio component, and select loop!

Alright let’s move into the last page — the mission page!

🎞️ Mission Page

Backend

import SwiftUI
import AVKit

For the mission page, the backend and frontend are essentially in one SwiftUI file called MissionArea(). With that said, the beginning of this file includes a new import statement: the AVKit framework lets us build media playback interfaces. Learn more about the framework here.

struct MissionArea: View {
    @State var player = AVPlayer(url: Bundle.main.url(forResource: "Inspiration4", withExtension: "mp4")!)
    @State var isPlaying: Bool = false

The next part of the backend in this file creates the struct like normal, plus two variables: player, an AVPlayer that loads the file Inspiration4.mp4 from the app bundle, & isPlaying, a Bool that lets us switch between the play and stop actions. Let’s move into the frontend formatting.
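One thing worth noting: the force unwrap (!) will crash the app if Inspiration4.mp4 isn’t in the bundle. A slightly more defensive variant (my own sketch, not from the original project) looks up the URL first so a missing file fails with a clear message, while the rest of MissionArea stays unchanged:

// Sketch of a defensive alternative: same behavior when the file exists,
// a readable fatalError (matching the article's style elsewhere) when it doesn't.
@State var player: AVPlayer = {
    guard let url = Bundle.main.url(forResource: "Inspiration4", withExtension: "mp4") else {
        fatalError("Inspiration4.mp4 is missing from the app bundle")
    }
    return AVPlayer(url: url)
}()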

Frontend

    var body: some View {
        VStack {
            VideoPlayer(player: player)
            Button {
                isPlaying ? player.pause() : player.play()
                isPlaying.toggle()
                player.seek(to: .zero)
            } label: {
                Image(systemName: isPlaying ? "stop" : "play")
                    .padding(5)
            }
            .padding(10)
        }
        .glassBackgroundEffect()

We start the formatting with a VStack containing a VideoPlayer view from AVKit. We also create a button that toggles between play and stop, with a system image that reflects the current state.

        .onAppear(perform: {
            isPlaying = false
        })
        .onDisappear(perform: {
            player.pause()
        })
    }
}

The last part of the frontend resets the playing state when the view appears and pauses the video when we exit out of the page!

Now with the coding done, you can run the app, test, and debug any issues that might have come up!

That being said, I used a lot of different resources during the coding process; here are the best of the bunch:

(1) ⏯️ The YouTube Video: I used this as a guide for the app and followed every single line of code he wrote in this video.

(2) 👨🏽‍💻 Swiftful Thinking : I used this YouTube channel to better understand Swift so that it would be easier for me to code.

(3) 🤖 Chat-GPT 4: I used GPT-4 for any coding issues that came up and to learn alongside the Swiftful Thinking YouTube channel.

(4) 🤖 Phind AI : A new AI tool that acts as a pair programmer with you and also pulls up Stack Overflow and other dev forums to help you out. You can learn more about them here.

📊 Potential Revenue Model

While developing this, I thought about the potential applications for this app and asked GPT-4 to create a potential revenue model. Here’s what it came up with: a breakdown of the target audience, value prop, revenue streams & other aspects needed for the app, and a predicted revenue model that would generate 30 million dollars if we penetrated 20% of the market!

🤯HOLY SHIT!

This would be sick for SpaceX and the future of space travel!

Thank you for making it all the way to the end and stay tuned 👀: I’m looking at making another project in visionOS soon, along with a video for this one!

Hey, I’m Piram and I’m an aspiring UI/UX designer exploring how AI is changing the design world. Connect with me on LinkedIn (https://www.linkedin.com/in/piramsingh/) to follow me on this journey!
