As we forge the future of Mixed Reality development, it is important that we all strive to understand and utilize a common set of concepts, patterns, and principles while working with the Unity game engine. Two of the biggest challenges when writing good software in the Unity engine are the inability to control object life cycles and being bound to Unity’s game loop.
In order to utilize Unity to its fullest, developers first need to know how the engine is intended to be used as well as understand well-defined software patterns and concepts in order to build a robust framework.
These underlying ideas form the foundation of the Mixed Reality Toolkit.
Unity’s default script template
One problem most developers face when working in Unity is figuring out the best way to hook up the application logic to the engine’s game loop. The engine strictly controls how GameObjects are instantiated and there’s no easy way to get event messages from the engine unless the class is derived from MonoBehaviour. In fact, Unity pretty much expects all of your managed code to derive from this class.
MonoBehaviour is the base class from which every Unity script derives.
When you use C#, you must explicitly derive from MonoBehaviour.
Many novice developers will just create new scripts based on the default MonoBehaviour template Unity provides and then will begin to build their core logic into these classes. These “Components” are intended to be attached to GameObjects which live in the project’s various scenes. However, this has the unfortunate side effect of making traditional patterns like MVC or MVVM impossible to use in the Unity development environment, out of the box. Instead, many developers closely couple application logic with the UI / UX logic — a practice frowned upon by many professional software engineers.
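Unity’s default script template looks roughly like the following (the exact class name and comments vary by Unity version). Everything derives from MonoBehaviour, and logic is hooked into the engine’s event functions:

```csharp
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    // Start is called before the first frame update
    void Start()
    {
    }

    // Update is called once per frame
    void Update()
    {
    }
}
```

It is tempting to build all application logic directly into classes like this one, which is exactly the coupling described above.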
While Unity’s expected workflow “just works” for simple Unity projects, it can cause issues when creating more complex applications where the need to manage script components running in a scene becomes extremely important — and in some cases critical. It creates even more problems when there should only ever be one instance of a component running at any time (like a spawner or special item of some kind). To solve this, most developers will turn to the Singleton pattern which introduces its own problems in Unity.
The problem with Singletons in Unity
Although critics generally regard Singletons as an anti-pattern, many professionals still utilize this concept in their projects. Its ability to lazily initialize, track global state, and inherit from classes like MonoBehaviour makes the Singleton pattern an easy candidate for managing the state of a game or application. However, this pattern also comes with its own set of problems: difficulties when working in multi-scene projects, hardened application-layer dependencies, and an inability to manage object life cycles.
Unlike traditional software engineering, the Unity developer doesn’t have total control over the life cycle of the underlying GameObject references in their scene, and there’s no official support for constructors or manual instantiation. Everything is managed directly by the game engine itself, and developers only have access to a number of lifetime “event functions” that are executed in a predetermined order during the game loop (e.g. Awake, Start, Update, etc.). On top of that, the order in which the engine calls these events on each individual GameObject is non-deterministic; there is no easy way to tell the engine how to custom initialize a Component class. This can cause “Chicken or the Egg” scenarios or race conditions that are often difficult to debug. If a developer has application logic baked into a Component that is added to a GameObject, two possible issues can arise. The first occurs when the GameObject is disabled, and therefore the Component as well; the second is the case where a Singleton Component does not initialize when needed or as expected while other Components depend on it.
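A typical MonoBehaviour singleton looks something like the sketch below (GameManager is an illustrative name, not a toolkit type). Note how correct behavior depends entirely on when Unity happens to call Awake:

```csharp
using UnityEngine;

// A typical Unity singleton sketch. If another component's Awake runs
// first and reads Instance, it sees null -- the race condition
// described above, caused by Unity's non-deterministic call order.
public class GameManager : MonoBehaviour
{
    public static GameManager Instance { get; private set; }

    private void Awake()
    {
        if (Instance != null && Instance != this)
        {
            // A duplicate loaded in from another scene.
            Destroy(gameObject);
            return;
        }

        Instance = this;
        DontDestroyOnLoad(gameObject);
    }
}
```

Every static reference to `GameManager.Instance` also hardens a dependency on this concrete class, which is the coupling problem the next section addresses.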
The Singleton Toolbox
Within the Mixed Reality framework, we solve the problems of using a traditional Singleton approach by using a Singleton Toolbox as the basis for our hook into the scene and the engine’s game loop, whilst also providing a type of Service Locator.
The application, not the component, should be the singleton. The application then makes an instance of the component available for any application-specific code to use. When an application uses several such components, it can aggregate them into what we have called a toolbox.
With the Singleton Toolbox there is only one Singleton class type in the entire application, and it is responsible for orchestrating all other run-time services and dependencies. This script Component inherits from MonoBehaviour so it can be added to a single GameObject within a scene, then flagged as an object that shouldn’t be destroyed when unloading scenes. From this single script Component, all of the run-time services (e.g. Input, Boundary, Spatial Awareness, etc.) are registered with the Service Locator, which in turn forwards all of the Unity game loop “event function” messages back to them. This orchestration makes it possible to prioritize and order how each individual service receives game loop messages, providing total control over each service’s life cycle without requiring direct dependencies or tightly coupled references via interface methods.
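The idea can be sketched in a few lines. The interface and class names below are illustrative, not the toolkit’s actual API; the point is the single MonoBehaviour forwarding game loop messages to registered services in a deterministic order:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Hypothetical service contract; the real toolkit interfaces differ.
public interface IService
{
    uint Priority { get; }
    void Initialize();
    void Update();
    void Destroy();
}

// The one singleton in the application: a Service Locator that owns
// all run-time services and relays Unity's event functions to them.
public class ServiceLocator : MonoBehaviour
{
    private readonly Dictionary<Type, IService> services = new Dictionary<Type, IService>();

    private void Awake()
    {
        // Survive scene unloads so services outlive any one scene.
        DontDestroyOnLoad(gameObject);
    }

    public void Register<T>(T service) where T : IService
    {
        services.Add(typeof(T), service);
        service.Initialize();
    }

    public T Get<T>() where T : IService => (T)services[typeof(T)];

    private void Update()
    {
        // Services update in priority order, not in whatever order
        // Unity would have called event functions on GameObjects.
        foreach (var service in services.Values.OrderBy(s => s.Priority))
        {
            service.Update();
        }
    }

    private void OnDestroy()
    {
        foreach (var service in services.Values)
        {
            service.Destroy();
        }
    }
}
```

Because the services are plain C# objects rather than Components, their construction, initialization order, and teardown are entirely under the application’s control.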
A fusion of concepts
One of the unique ideas in the framework is the way we utilize and blend well-known programming patterns and concepts. The main concept we build upon is Inversion of Control, which is implemented using the Service Locator pattern found in the Singleton Toolbox Component. We also heavily incorporate Dependency Injection via constructors and interface contracts for maximum customization and flexibility in concrete implementations. The final result closely adheres to the Dependency Inversion Principle for low-level abstraction layers, such as devices and services. All of the run-time service code in low-level abstraction layers uses plain old CLR objects, providing modularity and ease of use.
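Constructor-based Dependency Injection on a plain CLR object might look like the following sketch. The names are illustrative only, not the toolkit’s real interfaces:

```csharp
// A plain C# service with no MonoBehaviour dependency: callers see
// only the interface contract, and configuration is injected through
// the constructor. Names here are hypothetical.
public interface IBoundaryService
{
    float BoundaryHeight { get; }
}

public class BoundaryService : IBoundaryService
{
    private readonly float boundaryHeight;

    // Injecting configuration at construction time means the service
    // can be created, swapped, or mocked without touching a scene.
    public BoundaryService(float boundaryHeight)
    {
        this.boundaryHeight = boundaryHeight;
    }

    public float BoundaryHeight => boundaryHeight;
}
```

Because nothing here derives from MonoBehaviour, a concrete implementation can be replaced or unit tested outside the engine entirely.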
The framework also utilizes Event-Driven programming concepts for high-level abstraction layers. This layer consists of GameObjects with script components that derive from MonoBehaviour and generally implement an Event Handler Interface of some kind. This strategy improves communication between the framework’s disparate systems, the developer’s application-specific code, and any Event Handler Components in the Unity scene. The framework takes advantage of Unity’s built-in Event System, which does a wonderful job of taking care of most of the implementation details of this pattern. This event system was initially designed to handle UI / UX messages exclusively, but it is perfect for sending any type of event data to GameObjects in the scene. This pattern cuts the cost of using GetComponent&lt;T&gt; while also decoupling the hard dependencies on specific concrete types. The Event Handler Component script does not need to know whether a Service it is listening to is valid or active, because it only cares that it receives an event through its interface method. This strategy is very similar to the MVVM and MVC workflows, as it ensures data properly flows from the low-level abstraction layers (Model) to the high-level abstraction layers via the event handler interfaces (View).
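A minimal sketch of this pattern, using Unity’s EventSystems API (the handler interface and component names are hypothetical):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// A hypothetical handler contract built on Unity's event system.
public interface ISpeechHandler : IEventSystemHandler
{
    void OnSpeechKeywordRecognized(string keyword);
}

// A scene component only implements the interface; it never looks up
// the service that raised the event, so there is no hard dependency
// on any concrete service type.
public class SpeechResponder : MonoBehaviour, ISpeechHandler
{
    public void OnSpeechKeywordRecognized(string keyword)
    {
        Debug.Log($"Heard: {keyword}");
    }
}

// In the service layer, the event is dispatched to a target GameObject
// without knowing which components are attached to it:
//
// ExecuteEvents.Execute<ISpeechHandler>(
//     target, null,
//     (handler, data) => handler.OnSpeechKeywordRecognized("select"));
```

ExecuteEvents walks the target’s components for any that implement the interface, which is what removes the need for explicit GetComponent&lt;T&gt; lookups in application code.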
Wrapping it all up
The architecture used in the Mixed Reality framework has been designed to formally abstract all of the elements required to support cross-platform development and delivery. It provides a highly scalable and componentized system that meets all current demands while facilitating the ability to easily add any future features or requirements.
The architecture adheres to the core pillars of good software engineering:
- Keeping things modular, which in turn keeps the project clean and its components focused on solving specific problems or tasks.
- Facilitating extensible, data driven systems and features with extreme customization of implementations through interface contracts and scriptable objects.
- Making sure each system or feature is compartmentalized and decoupled so the whole Framework is easy to debug and test on an individual basis.
Each layer in the diagram above encapsulates the requirements for each system’s functionality, defining interfaces and controlling the flow of information. The framework ensures that each system can work independently while cooperatively delivering a whole solution.
If you’d like to see this framework in action, grab the latest copy of the Mixed Reality Toolkit for Unity on GitHub. The Mixed Reality Toolkit ensures developers have every advantage when creating modular, customizable, and testable software. Your participation is highly important to us! We encourage you to open issues, provide feedback, and ask questions.