Combine DuckHunt with ARKit? — Lesson 1: Welcome to the world of components

Nomtek Labs
Inborn Experience (UX in AR/VR)
6 min read · Nov 22, 2018

Story by: Wojciech Trzasko, iOS developer at nomtek

Photo by Vincent van Zalinge

At WWDC 2017, Apple took its first step toward bringing augmented reality into everyday life. By presenting its own framework, called ARKit, it kicked off a small phenomenon on a large scale. The internet was quickly flooded with videos and demos of crazy AR ideas, ranging from pure entertainment (for example, a real-life Street Fighter game) to more practical ones, like an easy way to measure a room.

Such quick progress in AR app development was possible thanks to wide support from popular tools. Right from the first announcement, Apple released plugins for two very mature 3D game engines: Unity and Unreal Engine. Both have big communities, hungry to test new technologies.

So if you are planning to create a game or an immersive experience in augmented reality, choosing one of them can be your best starting point. But what do you do when you want to make an app where AR is only a nice enhancement of the default user experience?

The place where everything gets tricky

You are stepping into a place where you need to connect a classic UIKit-based application with the ARKit framework. A place where you need to plug a technology that processes image data and, based on the results, provides real-time information into the event-based infrastructure represented by UIKit. Connecting those two different worlds the right way can be a little tricky, but it is not impossible.

To figure out a convenient way, let's start by defining the architecture for our application. One of the most popular approaches is VIPER, a concept of clean architecture adapted to the world of iOS. In a nutshell, it splits a single module into five parts (a minimal code sketch of these contracts follows the list):

  • View — responsible only for displaying what it is told to
  • Interactor — contains the business logic for a single use case
  • Presenter — takes the outcome from the Interactor and prepares it for display on the View; it also reacts to user input passed on from the View
  • Entity — the model layer used by Interactors
  • Routing — handles navigation between screens; its responsibilities are split between Wireframe objects (which create new screens and place them in the app's window) and Presenters (which take user input and select the screen the user should move to)
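Here is a minimal sketch of how those contracts can look in Swift. The protocol names and the duck-counting use case are assumptions for illustration, not code from the demo:

```swift
// Hypothetical VIPER contracts; names are illustrative only.
protocol ViewInterface: AnyObject {
    func show(score: String)
}

protocol InteractorInterface {
    // Business logic for a single use case: counting shot ducks.
    func fetchDucksShot(completion: (Int) -> Void)
}

final class Presenter {
    weak var view: ViewInterface?
    private let interactor: InteractorInterface

    init(interactor: InteractorInterface) {
        self.interactor = interactor
    }

    // Reacts to input passed from the View...
    func viewDidLoad() {
        interactor.fetchDucksShot { count in
            // ...and prepares the Interactor's outcome for display.
            view?.show(score: "Ducks shot: \(count)")
        }
    }
}
```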

To make all the connections clearer, just look at the diagram below:

Connections of VIPER parts

Now assume that we want to add a simple but seamless AR experience to an application written in VIPER. You have probably already figured out that there is not much room for it. Where should we put the code responsible for scene management? How do we deal with interactions? Should we process them in the real-time engine? Or pass them back to the presenter? But in that case, what should we do in the interactor if there is no clear business logic in our experience?

Those are just some of the questions that are probably on your mind right now.

Time for an example!

Let me show you a simple example that adds a UIKit menu to one of our demo experiences made with ARKit. Just look at the final result:

As you can see, the demo introduces the classic NES game "Duck Hunt" to the world of AR. Of course, it is just an example; you can easily imagine that instead of Duck Hunt gameplay, your app adds the functionality of an interactive car showroom, where the user can open a car and look at its interior.

So how do we manage such an interactive world when classic app architecture fails? The answer is really simple and is called Entity-Component-System.

The world of components

Entity-Component-System (ECS) is a design pattern that was developed by the authors of the game "Thief: The Dark Project". In the following years, it evolved into one of the most popular patterns in game development. If you've ever played with the Unity engine, you probably know this pattern very well 😉.

In general, ECS assumes that every individual element of the world is represented in the form of an Entity. Entities can represent literally everything, from the player, through decorations, to conceptual elements that are not visible to the player at all (e.g. enemy spawn points). The way an entity interacts with the world, its appearance and its behaviour are separated into smaller structures called Components. Each component is responsible for one specific task (e.g. displaying the mesh of a 3D model or calculating movement). With this single-responsibility approach, components can easily be reused across many entities. A lot of ECS implementations assume that an entity is built from just a few core fields, like an identifier, plain strings or integers; the rest of its functionality is moved to a set of components.
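GameplayKit, which our demo already uses for its AI agents, ships with ready-made building blocks for this pattern: GKEntity and GKComponent. Here is a minimal sketch of an entity assembled purely from components; the SoundComponent body is an assumption for illustration:

```swift
import GameplayKit

// A component with a single responsibility: playing sound effects.
// (Illustrative body, not the demo's actual implementation.)
final class SoundComponent: GKComponent {
    var isMuted = false

    func play(effect name: String) {
        guard !isMuted else { return }
        // Trigger real audio playback here, e.g. via AVFoundation.
        print("Playing sound effect: \(name)")
    }
}

// An entity is just a container; all of its behaviour comes from
// whatever components you attach to it.
let duck = GKEntity()
duck.addComponent(SoundComponent())

// Other code can look a component up by its type:
duck.component(ofType: SoundComponent.self)?.play(effect: "quack")
```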

So how does this look in our game? The diagram below shows an example entity from our DuckHunt demo.

Components of DuckEntity

As you can see, our entity is built only from components:

  • SpriteComponent — responsible for rendering 2D sprites in the world
  • AgentComponent — a wrapper around GameplayKit’s agent object; implements AI behaviour
  • AnimationsComponent — applies the right animation based on the last movement change
  • SoundComponent — plays sound effects
  • StateMachineComponent — allows the entity to change its state and adjust its behaviour to the current one (e.g. use one movement strategy when the duck is alive and a different one when it is dead); see the sketch below
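For that last component, GameplayKit's GKStateMachine does most of the heavy lifting. A minimal sketch, assuming just two illustrative states (the real demo's states and transitions may differ):

```swift
import GameplayKit

// Hypothetical duck states; names are assumptions for illustration.
final class AliveState: GKState {
    override func isValidNextState(_ stateClass: AnyClass) -> Bool {
        stateClass is DeadState.Type
    }
}

final class DeadState: GKState {
    override func didEnter(from previousState: GKState?) {
        // Switch to a falling movement strategy, stop the AI agent, etc.
    }
}

// The component owns the state machine, so the entity can adjust
// its behaviour to the current state on every frame.
final class StateMachineComponent: GKComponent {
    let stateMachine = GKStateMachine(states: [AliveState(), DeadState()])

    override func didAddToEntity() {
        stateMachine.enter(AliveState.self)
    }

    override func update(deltaTime seconds: TimeInterval) {
        stateMachine.currentState?.update(deltaTime: seconds)
    }
}
```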

We have defined our entities, so what should we do with them now? The answer is easier than you think: just iterate through them and their Components in your main update method and call the update action on them…

…ok, maybe it is not quite that easy, but first let’s introduce the last part of the ECS design pattern, the System 😉.

Love the system!

To understand how Systems work, let’s look at a simple real-life example. Assume that we are currently working on an interactive experience with rich 3D sound effects, an experience that will blow users’ minds but that could be a problem for some of them when playing in crowded places. Let’s assume our user doesn’t want to mute the whole phone just to show your app to a friend in a coffeehouse.

The easiest way is to let the user turn off sound effects in the app itself. Here is where the System rises to the challenge. The main responsibility of a System object is to perform actions on Components of the same aspect. Yes, it sounds complicated, but in practice it is a really simple pattern. Just take another look at our experience.

We already know that a Component is an object that performs one specific task, so if we want to play sound, we extract this task into a type called SoundComponent. Now let’s change our iteration algorithm a little and let it group Components by their functionality. For example, first it should call update on all components that calculate physics, then it should process all SoundComponents. This grouping is exactly what Systems were made for. A single System is an object that takes responsibility for calling the update method on all Components of one type. In this approach, the main update method works on Systems instead of on Components.
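GameplayKit again has a ready-made type for this: GKComponentSystem updates every component of one class, no matter which entity owns it. A minimal sketch (the SoundComponent stub and the 60 fps tick are assumptions for illustration):

```swift
import GameplayKit

// Stub component representing the "sound" aspect.
final class SoundComponent: GKComponent {
    override func update(deltaTime seconds: TimeInterval) {
        // Advance pending sound effects here.
    }
}

// A system gathers all components of a single class...
let soundSystem = GKComponentSystem(componentClass: SoundComponent.self)

let duck = GKEntity()
duck.addComponent(SoundComponent())
soundSystem.addComponent(foundIn: duck)

// ...so the main loop ticks systems instead of entities:
soundSystem.update(deltaTime: 1.0 / 60.0)
```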

So now, when we need to turn off the sound in our experience, we can simply disable the SoundSystem, without spamming our code with a lot of if statements. But there are more benefits. A quick example is skipping the sound update, or moving it to the next iteration of the update loop, when our app starts losing frames; we can spend the saved time on the complicated calculations made by the physics engine. Another example is the case where you don’t want to process physics for objects that are not visible to the user: you can easily put simple culling logic into your PhysicsSystem object.
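That frame-skipping idea can be as small as this; the component stubs, system names and 60 fps budget are all assumptions for illustration:

```swift
import GameplayKit

// Illustrative stubs for the two aspects.
final class PhysicsComponent: GKComponent {}
final class SoundComponent: GKComponent {}

let physicsSystem = GKComponentSystem(componentClass: PhysicsComponent.self)
let soundSystem = GKComponentSystem(componentClass: SoundComponent.self)
let frameBudget: TimeInterval = 1.0 / 60.0

func update(deltaTime: TimeInterval) {
    // Physics always runs; it is the critical aspect.
    physicsSystem.update(deltaTime: deltaTime)

    // If the last frame already ran over budget, postpone the
    // non-critical sound update to the next iteration.
    if deltaTime <= frameBudget {
        soundSystem.update(deltaTime: deltaTime)
    }
}
```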

That’s all folks… for now ;)

But don’t be afraid, this is not the end!

In the next chapter, we will take a closer look at the VIPER architecture, define its new parts and figure out a way to connect it with ECS.

Any thoughts? Share them below in the comments!

See you soon, folks!
