AR Technology — From XAML to Unity

Panayot Cankov
Published in Telerik AR VR · Jun 14, 2018

If you are an experienced .NET developer, how do you develop AR and VR? You go to Unity. It is a state-of-the-art framework, it exports to all platforms, C# is a first-class citizen, AR VR software vendors develop SDKs for Unity, and so on.

So go download Unity and get started with it.

To keep up with my train of thought, you can peek at the basics in these tutorials.

Now how do you develop AR and VR with Unity? Pretty much the way you develop games with Unity. The only difference is that, instead of using the WASD keys, you control the camera with your head. The input events may be a little different and need a special SDK. So before I push you forward to MRTK, you need to be comfortable with developing in Unity.

Georgi Atanasov and I worked to deliver the HoloStock demo for Microsoft Build 2018 and are now building the Telerik UI for AR VR. Here I would like to share some of my personal experience, views, and thoughts on moving from XAML to Unity.

Developer With .NET Desktop Background?

XAML

So if you work with WPF, you express your UI in XAML. That is compiled to BAML at build time. Then at runtime the BAML is parsed by the framework, which instantiates the UI visual tree of controls — a mix of User Controls, Custom Controls, and Templates.

User Controls

You author your forms or pages as User Controls, with XAML to describe their UI and C# in code-behind to handle the application logic using the MVVM pattern. The UI is created using components such as Buttons, CheckBoxes, ComboBoxes, and ListViews.

Custom Controls

Small reusable widgets are designed as Custom Controls. A Custom Control handles input, encapsulates the interaction logic, has a private visual tree to represent the control state, and can nest content generated in the User Control. It exposes a public API — properties that can be used in bindings in the User Controls.

Templates

Very often in a ListView or a GridView you will present lists of similar items. The visual representation of these items is described in XAML by building Templates. These Templates can be used internally by the Custom Controls to instantiate copies of small visual trees that render each data item in your list.

So How Does That Work in Unity?

Scenes

This is the analogy of a User Control. Unity organizes content in scenes. In games you would have one scene per level; for AR VR apps, however, you usually have only one scene. You cannot directly provide code-behind for a scene, but you can write an object with a behavior that executes what would otherwise be a User Control’s code-behind.

Here is the manual for working with scenes.

GameObject, Transform, MonoBehaviour

A combination of these three classes will look to you like a Custom Control. When you start working with Unity, most guides will tell you to:

  • right-click in the “Project” panel and select “Create > C# Script”
  • double-click the “NewBehaviourScript” and add some code in Start or Update
  • right-click in the “Hierarchy” panel and select “Create Empty”
  • in the “Inspector” panel click “Add Component” and select the “New Behaviour Script”

This is the bread-and-butter of making interactive elements in Unity.
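
To make that concrete, here is a minimal sketch of the generated script — the class name is Unity’s default, and the log message is just an illustration:

using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    // Called once, just before the first frame in which this component is active.
    void Start()
    {
        Debug.Log("NewBehaviourScript started");
    }

    // Called once per frame while the component is active.
    void Update()
    {
    }
}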

Here is a very good tutorial that will get you started with these.

GameObject — the “empty object” you’ve created is a GameObject in the scene. The GameObject class has a list of Components.

Component — the “NewBehaviourScript” creates a class that derives from MonoBehaviour, which itself derives from Component. The Component class has a gameObject property, so when you attach components to a GameObject they are bound together. Your Component can access the GameObject’s other Components using methods such as GetComponent. It is all about something I personally love:

Composition over inheritance!

This is something rather unique to Unity. For XAML Custom Controls you would inherit a base class, then add new properties in your derived class, get elements from your template by overriding OnApplyTemplate, and override other base methods to react to state changes and events. Everything is very well encapsulated in that class.

That’s not how things work in Unity. You will not extend GameObject. You will rather author small reusable behaviors and add many of them to the GameObjects on your scene. It feels like writing all your code using WPF’s attached behaviors. But these small behaviors are reusable. You can create a “mouse over” behavior and use it in both your Button and your ComboBox, as the sketch below shows. You will have to figure out best practices so things don’t get messy when multiple behaviours start to compete for shared properties and state.
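
Here is a minimal sketch of such a reusable behavior — the class name is hypothetical, and it assumes the GameObject has a Collider so Unity delivers the mouse events:

using UnityEngine;

// Hypothetical reusable behavior: scales its GameObject while the mouse hovers over it.
// Attach it to your Button, your ComboBox, or anything else with a Collider.
public class MouseOverScale : MonoBehaviour
{
    private Vector3 originalScale;

    void Start()
    {
        originalScale = transform.localScale;
    }

    void OnMouseEnter()
    {
        transform.localScale = originalScale * 1.1f;
    }

    void OnMouseExit()
    {
        transform.localScale = originalScale;
    }
}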

Transform — each GameObject has one special component — its transform — either Transform or RectTransform instance. You will never inherit from these or replace the transform component of your GameObject.

The transform is also unique in that it keeps a list of child transforms. In XAML a Panel instance has a list of children, but in Unity you don’t have a GameObject-derived class with special powers to nest children. From a GameObject you can get its transform, from that transform you can list its child Transform instances, and from each of these Transform components you can get their GameObject.
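
In code the round-trip looks like this — a minimal sketch with a hypothetical class name:

using UnityEngine;

public class ChildWalker : MonoBehaviour
{
    void Start()
    {
        // Transform is enumerable — iterating it yields the child Transforms.
        foreach (Transform child in transform)
        {
            GameObject childGameObject = child.gameObject;
            Debug.Log(childGameObject.name);
        }
    }
}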

It is worth noting that GameObject, Component and Transform (which also derives from Component) are so fused together that many methods exist in all of them. For example:
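
GetComponent is one such method — a sketch (the class name is hypothetical) showing that all three entry points resolve against the same GameObject:

using UnityEngine;

public class LookupExample : MonoBehaviour
{
    void Start()
    {
        // All three calls search the same GameObject for a Renderer.
        Renderer fromGameObject = gameObject.GetComponent<Renderer>();
        Renderer fromTransform = transform.GetComponent<Renderer>();
        Renderer fromComponent = GetComponent<Renderer>(); // this MonoBehaviour is a Component
    }
}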

However, there are properties that differ — GameObject.activeSelf and Behaviour.enabled. Disabling a behavior disables only one of the GameObject’s many components, while calling SetActive(false) on a GameObject shuts down all of its components. This also affects methods that search for components by traversing the visual tree — see GameObject.GetComponentInChildren.

Authoring Reusable GameObjects

As you try to implement small reusable widgets such as Buttons, Labels, etc., you will quickly find yourself writing a master MonoBehaviour component that acts like XAML’s Custom Control classes.

Constructors

No! Like, really. Constructors for MonoBehaviours are a no-go in Unity. Again, Unity scenes are serialized and deserialized on start — I don’t have to explain serialization and deserialization to you. If you need initialization, then instead of a constructor for your MonoBehaviour, use some of the lifecycle events (Awake, OnEnable, Start, to name a few) described below.
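
A minimal sketch of moving constructor logic into Awake — the class and its field are hypothetical:

using System.Collections.Generic;
using UnityEngine;

public class Inventory : MonoBehaviour
{
    private List<string> items;

    // Awake runs once, right after the deserialized object comes to life —
    // this is where constructor-style initialization belongs.
    void Awake()
    {
        items = new List<string>();
    }
}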

Lifecycle Events

One thing you need is to manage your GameObject and its children based on certain lifecycle events. Unlike XAML, where you override a base method or subscribe to an event, Unity is hardwired to call “special” methods by convention.

Add a “void Update()” to your MonoBehaviour class and it will be called once every frame — even if it is private. Initially this seems like black magic or heavy use of reflection. In OOP (and that’s what C# is all about) you are used to polymorphism: you override base class methods, implement an interface, or subscribe to an event. But take it for granted — just don’t misspell these methods and they will be called.

Here is a very good overview of the order of events.

For AR VR development it is complicated. We use MRTK, which has its own way to dispatch events, but you may use another SDK. So let’s assume you have methods similar to Unity’s OnMouseOver for the rest of the article.

Accessing Children

So you have a small hierarchy — a Button game object with a YourButtonScript component attached, and it has two children: one renders an icon and the other some text. Now you want the button to change its appearance in OnMouseOver.

In XAML these two children would have been part of your control template. You would override OnApplyTemplate and use GetTemplateChild to obtain references to them.

One thing you can do is use Transform.Find from YourButtonScript to locate a specially named child, but this comes with a runtime performance cost, since it traverses the visual tree, and you may have to take extra care locating disabled children.
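
That lookup is a one-liner inside YourButtonScript — a sketch, assuming the child is named “Text”:

// Searches the children by name on every call — convenient, but not free.
Transform text = transform.Find("Text");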

Again, scenes are serialized when saved in the Unity Editor and deserialized at runtime. So your Component’s fields are saved, and those fields can actually reference other GameObject or Transform instances from the scene. In YourButtonScript add:

[SerializeField]
private GameObject textGameObject;

When you select the Button GameObject in the Hierarchy tab, Unity draws a property editor in the Inspector tab for the textGameObject field. Now you can drag the child Text GameObject from the “Hierarchy” tab and drop it onto the “Text Game Object” field in the Inspector.

Next time you run the scene Unity will instantiate the scene object graph, assign all fields, including the textGameObject, and you won’t have to worry about the runtime penalty, broken paths and maintainability costs. It works like magic.

In YourButtonScript’s OnMouseOver you can now alter the state of the textGameObject.
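
For example — a sketch of YourButtonScript, assuming the child carries a TextMesh whose color changes on hover:

using UnityEngine;

public class YourButtonScript : MonoBehaviour
{
    [SerializeField]
    private GameObject textGameObject;

    // Requires a Collider on the Button; your AR VR SDK may dispatch
    // a different but similar event.
    void OnMouseOver()
    {
        textGameObject.GetComponent<TextMesh>().color = Color.yellow;
    }
}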

Prefabs

The Button is now in the scene. But you need several buttons. And they need to share their appearance. XAML has Templates and ControlTemplates. Unity has Prefabs.

Drag the Button GameObject from the “Hierarchy” view onto the “Project”. This will create a new “Asset” — a Button Prefab.

“Prefab” is a thing in Unity; this tutorial is excellent for getting the details.

Now you can drag “Button” instances back from the “Project” onto the scene; prefab nodes are rendered with blue text in the “Hierarchy” tab. You can also change specific properties independently on each instance. Unity will keep track of the Prefab relationship and will bold the properties you’ve customized.

Instantiating Prefabs From Code

Dragging and dropping Button assets from the “Project” onto the scene in the “Hierarchy” is one way to utilize prefabs. Another is to make copies programmatically.

Create an empty GameObject on the scene, change its name to ListView, create a new C# script, name it YourListViewScript, and add the YourListViewScript to the ListView GameObject. Then add a serializable field of type Transform called itemTemplate.

You probably see where this is going. Object.Instantiate can make copies of Transforms and GameObjects. Now you only have to foreach a list of data items and create a child instance of the itemTemplate, as in the sketch below.
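
A minimal sketch of YourListViewScript — the data items are placeholders, and itemTemplate is the serialized field described above:

using UnityEngine;

public class YourListViewScript : MonoBehaviour
{
    [SerializeField]
    private Transform itemTemplate;

    void Start()
    {
        string[] items = { "First", "Second", "Third" }; // hypothetical data

        foreach (string item in items)
        {
            // Copy the template and parent the copy under this ListView.
            Transform copy = Instantiate(itemTemplate, transform);
            copy.name = item;
        }
    }
}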

ExecuteInEditMode

XAML Custom Controls execute their code while you are interacting with the Visual Studio editor. In Unity you can mark your MonoBehaviour classes with the ExecuteInEditMode attribute. This will run the behavior’s lifecycle events while in edit mode. There are some differences — for example, Update is called only when something in the scene changes, instead of continuously every frame.
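
Applying it is a single attribute on the class — a sketch based on the ListView from above:

using UnityEngine;

[ExecuteInEditMode] // lifecycle methods now also run inside the Unity Editor
public class YourListViewScript : MonoBehaviour
{
    // ...the same fields and lifecycle methods as before...
}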

Your ListView, given design time data, can instantiate the itemTemplates while you are editing your scene. There is an interesting side effect though. The copies will add up and be saved as part of the scene, because on save the scene will simply serialize all children.

This shows another very powerful side of Unity: you can use the exact same classes and methods to perform changes at edit time as you would at runtime, and build editor utilities with them.

If you want the ListView to show the itemTemplates in edit mode but avoid saving them, use Object.hideFlags. Hide flags allow you to hide instances from the “Hierarchy” tab, or to freeze an object so it cannot be edited in the “Inspector”. The property lives on Unity’s base Object class, so you can also prevent programmatically generated meshes or material clones from being saved.
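
In the ListView sketch above, one extra line on each copy does it — HideFlags.DontSave is one of the built-in values:

// Inside the instantiation loop: visible while editing, never saved with the scene.
Transform copy = Instantiate(itemTemplate, transform);
copy.gameObject.hideFlags = HideFlags.DontSave;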

So here it is — edit time — the strong Unity version of XAML’s design time.

The XAML to Unity Path is Clear

Unity development is rather different from XAML development. But 3D AR VR line-of-business application development is also different from 2D UI XAML development.

Best practices, concepts, and ideas, however, map from XAML to Unity. We at Telerik UI for AR VR will be working hard to iron this out. We will provide a way to rapidly design and build AR VR apps and map your XAML skills to Unity. As a .NET developer, you will be able to reuse as much as possible of your current .NET skills.
