Why should you care about optimizing UI for mobile games?
Lessons learned from recent years working with game development.
🇧🇷 Read this article in Portuguese (PT-BR).
As harmless as our component’s .png may seem, once it is added to the many other UI files it affects the final size of the game. Over the last couple of years working on mobile games as part of Wildlife's Visual Design team, I have learned that this number greatly influences download metrics. This is because when an application exceeds 200MB, the App Store and Google Play suggest that the user download it over Wi-Fi, as there may be excessive data consumption. It may not seem like much, but a simple pop-up suggesting you turn on Wi-Fi can lead to dropouts when downloading a game.
And although 200MB may seem like a super healthy margin, games are operated as services these days, which implies continuous production of content. As a result, the game becomes heavier with every new release. So, to be prepared for any future issues that may arise, adopting optimization practices from the beginning can save you a lot of time and stress.
For a great example of how important this strategy is in the mobile game industry, we can look at Free Fire. In an interview published by GamesIndustry, Garena producer Harold Teo said this:
“If our game is 4GB, 8GB, or something, people will have to think twice before installing. They need to delete some apps from the phone and sometimes even the full capacity of the phone might not be enough. Even today, one of the team’s priorities is to make the app as light as possible.”
Several characteristics can influence how your game is received: the size of the final build, the size and resolution of the assets, the amount of unused or repeated files, and many other situations like these that need your attention.
How to optimize your game UI: tools and techniques
Knowing this, I have prepared a compilation of optimization techniques and tools that I consider relevant and that can benefit games produced in different game engines. To be specific, though, I’ll focus on the tools I’ve had the most contact with, based on my experience working as a Visual Designer at Wildlife’s Art Studio: Unity and Figma.
Before starting: as much as the suggestions in this article result from conversations with Graphics Engineers, Art Directors, Engineers, and Game Devs in general, I ask you not to treat them as rules. Each game has its own context with completely different variables, so always talk to your team before making any decision that could impact the pipeline.
Do I really need to think about optimization?
Just to give you more context, these are some of the common scenarios I’ve come across:
- UI looks much nicer and more consistent in Figma than what is implemented in the game.
- Asset resolution is lower than planned.
- UI files get cluttered in the project structure.
- Project growth makes maintenance and integration of new components a complex task.
While transitioning between different games and teams, I noticed that many patterns repeat themselves, and maybe you can relate to that too. From these patterns, it is possible to derive improvements that make life better for both designers and engineers. It is also important to mention that there is no miracle pill that solves these issues, only a set of solutions developed alongside the team and the stakeholders through active communication and the game’s natural post-release process.
Understanding the solutions
Faced with all these processes, I do not intend to rank them from most important to least important; I believe each one contributes to the result in a different way. These are the solutions:
- 9-slice
- Tint
- Size, resolution, and compression
- Sprite Atlas
- Taxonomy
- Color space & rendering pipeline
- Design review
I will delve into each one throughout this article, but bear in mind that some might make sense to you and others might not. That said, take some time to reflect and decide what is worth testing and what is not.
9-slice
For a long time, I worked with this technique under other names. It has been widespread since the early days of the internet, when its biggest use was in creating resizable custom divs whose four corners carried a border radius or some other kind of ornament.
This concept is nothing more than a 2D technique that allows you to reuse an image in various sizes without having to prepare multiple assets. You divide your image into 9 slices, where you determine that the corners are fixed and the core is resizable.
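To make the idea concrete, here is a small Python sketch (illustrative only, not engine code; the function name is my own) that computes the nine regions from an image size and its fixed border insets:

```python
def nine_slice_rects(width, height, left, top, right, bottom):
    """Split an image of (width, height) into the 9 regions used by
    9-slice scaling. left/top/right/bottom are the fixed border sizes
    in pixels; the corners never stretch, the center does.
    Each rect is (x, y, w, h) with the origin at the top-left."""
    xs = [0, left, width - right]               # column start positions
    ws = [left, width - left - right, right]    # column widths
    ys = [0, top, height - bottom]              # row start positions
    hs = [top, height - top - bottom, bottom]   # row heights
    return [(x, y, w, h) for y, h in zip(ys, hs) for x, w in zip(xs, ws)]

# A 128x64 button with a fixed 16px border on every side:
rects = nine_slice_rects(128, 64, 16, 16, 16, 16)
# The center (stretchable) region is rects[4]: (16, 16, 96, 32)
```

When the image is resized, only the center row/column widths change; the four corner rects keep their pixel dimensions, which is what keeps the border radius crisp.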
Fortunately, in Unity we have a tool that makes this practice trivial: the Sprite Editor! I won’t go into the details of how to use it, because there is plenty of quality material on the internet; the Unity manual’s Sprite Editor page is a good place to start.
However, even though it’s trivial, 9-slicing can be hugely impactful when incorporated into the workflow. Imagine a situation where you would previously need to export about 4 different assets to produce a responsive card that works in different contexts; now you can get the same result with just 1 if you configure the cuts correctly.
In addition to being an excellent development practice, once you learn how it works, it becomes a technical requirement for your Design assets and a new way of thinking. Understanding this concept is one of the first steps to working with UI in games.
Tint
Like 9-slicing, this technique is also revolutionary when it comes to asset reuse and scalability. And surprisingly, it is very easy to understand, apply, and see the results of.
Imagine the HUD — heads-up display, or screen used to detail the main information and events during a match — of an air battle game like Sky Warriors, where there are several similar icons that are displayed in different colors depending on the chosen team.
In this scenario, if there are ten different icons that each need to appear in three colors, we would have a total of 30 files. Using tint, however, we export only the 10, all in white, and then apply the needed color inside Unity through the inspector or via code.
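Conceptually, tint is just a per-channel multiply of the source pixel by the chosen color, which is why white assets take on the tint exactly. A small illustrative Python sketch of that math (not Unity code; the function name is my own):

```python
def tint_pixel(pixel, color):
    """Multiply a source pixel (r, g, b, a) by a tint color (r, g, b),
    channel by channel, using 0-255 values. This is the same idea as
    setting a tint color on a UI image in the engine: shading in the
    source survives, and pure white becomes exactly the tint color."""
    r, g, b, a = pixel
    tr, tg, tb = color
    return (r * tr // 255, g * tg // 255, b * tb // 255, a)

# A pure-white pixel takes on the team color exactly:
tinted = tint_pixel((255, 255, 255, 255), (200, 40, 40))
# -> (200, 40, 40, 255)
```

Because the multiply preserves the alpha channel and any grayscale shading, one white icon can serve every team color in the game.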
When applied, in addition to making maintenance much easier for everyone on the team, tint can drastically reduce the number of UI files in the project in the long run.
Size, resolution, and compression
This topic in particular is what motivated me to write this article and to start pipeline improvement initiatives with my team. In addition to directly affecting the size of the final build, as I said in the introduction, it plays a key role in the synergy of the other optimizations.
For example: for Unity to compress your file well (and consequently decrease its size through a compression algorithm), the resolution needs to be a power of 2 (16, 32, 64, 128…). At first, this seems like a rule that does not affect your work routine, but believe me, the time will come when this detail demands your attention so often that you will wonder what can be done to optimize the process.
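A quick way to see what this padding means in practice is a helper that rounds a resolution up to the next power of 2. This is an illustrative Python sketch, with function names of my own:

```python
def next_power_of_two(n):
    """Smallest power of two >= n (for n >= 1)."""
    return 1 << (n - 1).bit_length()

def padded_size(width, height):
    """Resolution an asset would need to be padded to so that
    power-of-2-dependent texture compression can kick in."""
    return next_power_of_two(width), next_power_of_two(height)

# A 100x250 asset would need a 128x256 canvas:
size = padded_size(100, 250)
```

Notice how a 100x250 asset wastes a 28px and a 6px band of pure transparency just to reach 128x256, which is exactly the kind of manual padding work the Sprite Atlas technique below relieves.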
That’s exactly what happened to me. As much as I value the importance of powers of 2, it became impossible to always review and export assets with a considerable transparent region just to hit the right resolution. Hence, in conversations with several people, we found something that could alleviate the problem, a technique I will detail below.
However, even with other alternatives available to solve this problem, working with an 8pt grid in your Design is already a big step in this direction.
Sprite Atlas
Ah, the wonderful Sprite Atlas! Why worry about so many assets individually if I can automatically pack them all into one big asset with a power-of-2 resolution and solve all my problems?
Before moving on, allow me to point out that individual concern for assets is still necessary, but in a much healthier way.
Sprite Atlas is an excellent development practice because, in addition to relieving micro-management and organizing the project, it also dramatically decreases the number of draw calls (the process responsible for rendering assets on your screen), which in turn improves performance.
In other words, if your Main Screen requires 25 different files in its normal state, that could mean at least 25 draw calls, assuming each appears only once. With an Atlas in place, this can drop to 1. In technical terms, the amount of memory this can save is directly related to the number of times your assets are rendered on-screen within the same session. To be objective: fewer draw calls, more frames per second (FPS). And in case you didn’t know, performance is gold in game development.
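As a rough mental model (an illustrative Python sketch, not how an engine actually batches), each switch to a different texture in the draw order costs a call, so packing sprites into one atlas collapses them into a single call:

```python
def estimate_draw_calls(sprites, atlas_of):
    """Rough draw-call estimate: consecutive sprites drawn from the
    same texture batch into one call, so we count texture switches
    while walking the draw order. atlas_of maps sprite -> atlas name;
    an unatlased sprite is its own texture."""
    calls = 0
    last_texture = None
    for sprite in sprites:
        texture = atlas_of.get(sprite, sprite)
        if texture != last_texture:
            calls += 1
            last_texture = texture
    return calls

sprites = ["bg", "icon_hp", "icon_ammo", "frame", "icon_hp"]
no_atlas = estimate_draw_calls(sprites, {})                        # 5 calls
one_atlas = estimate_draw_calls(sprites, {s: "main_atlas" for s in sprites})  # 1 call
```

Real batching in Unity also depends on materials, canvas layout, and overlap, but the texture-switch intuition is why a well-planned atlas pays off.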
Going further, Atlas ties all the topics in this article together very well. It brings more autonomy to designers and makes the UI budget tangible. Suppose you have one Atlas per feature and, in addition, the freedom to have another one that keeps the game’s commonly used assets without any compression, rendered only once: they don’t hurt performance and keep the visual quality high, without that bad low-resolution feeling.
Going back to the budget, you now have a very clear picture of how many assets you can still insert. If nothing else fits, you need to revisit the old ones and check whether 9-slicing or tint can save you from possible redundancies.
It is worth reading more about how to integrate Sprite Atlas into your project in the Unity manual.
Taxonomy
For those unfamiliar, Taxonomy is the stage of grouping content and actions according to their meaning. The term comes from Biology, where the objective is to organize and classify structures.
In Design, it is used as a way of structuring, naming, organizing, and distributing what has been produced. It can be simple and at the same time very complex. Context matters a lot, and most of the time its definitions arise from collective consensus, where the team defines good practices emphasizing what should be done and what should be avoided.
Being very objective: when the Designer produces a layout, it is extremely important that the handoff is transparent and easily accessible. In some cases the Designer implements it in the engine themselves; in others, a Tech Artist or an Engineer is responsible for the feature. Either way, when assets exported from Figma, Photoshop, or other software arrive at the project without a minimum of organization, the collapse starts to have a date.
Let’s be honest: we know that in some urgent scenarios it may not be possible to take all the care you would like with the names of layers, components, and the like. Still, ideally the Designer always organizes their deliverables at least minimally and stays aware of what has and hasn’t already been imported into the project.
That way, when assets are exported, you avoid duplicate files and make them easier to find later. It is a demanding process to adopt, as it requires a lot of responsibility and discipline on both sides, but without a shadow of a doubt it is of great value both for those on the team right now and for future generations.
There are a few suggestions on how to get started, but as this is not the main goal of this article, I hope this brief introduction motivates you to research the subject further. And remember: whatever you apply, your team needs to agree with the decision and be aware of the adaptations required for the optimization to work.
Color space & rendering pipeline
Of all the suggestions in this publication, this is the one that intrigues me the most. Maybe because it was a completely new subject in my routine. The moment I learned about this topic my view of game development changed a lot.
Artists in the digital world are already familiar with the term Color Space, but within the gaming universe, it can be quite different. This subject can be extremely complex, so I will only stick to the details that impact the result of the Designer’s work.
Basically, Linear and Gamma are the most relevant color spaces in digital production. The rendering pipeline, a separate process, determines how the graphics engine works with the game’s colors, materials, shaders, and other graphical information. Connecting the two terms: the rendering pipeline directly influences which color space will be used.
The decision to use one form or another is related to the game’s art direction, technical limitations, team knowledge, and studio requirements. While one allows for more realistic graphics, for example, the other can provide greater performance. The reasons are many.
And what does this have to do with UI?
The software the Designer works in is probably exporting assets in Gamma. Hence, when they are imported into a project using Linear, fidelity can drop a lot.
There are several ways to mitigate this problem. For example, it is possible to:
- Create a camera that renders the UI in gamma and composites it over the game’s render as an overlay.
- Use a custom shader on UI elements to correct the mathematical variance calculated automatically by the engine.
- If you produce your layouts in Photoshop, export directly in the engine’s color space (Figma is limited to Gamma).
As I work with Figma, I noticed this frightening fidelity difference, and with my team’s help we realized that the biggest anomalies arise from Unity’s difficulty converting in situations where assets are mixed. For example: you have a card’s background image, and above it another image with a gradient in a highlight color, whose alpha goes from 100% to 0%. The anomaly occurs the moment the alpha is at, say, 99% and the color needs to blend with the background in real-time.
So one way to soften the impact is to avoid gradients that fade to transparent. Instead, fade the gradient toward the color you will actually use for the background. That way the color blend is already rasterized in the image, and Unity does not need to calculate the blend result.
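The gamma-versus-linear difference is easy to see in numbers. The sketch below (plain Python, using the standard sRGB transfer functions) mixes the same two channel values in both spaces; the linear-pipeline result comes out noticeably lighter, which is exactly the kind of shift that makes a gradient authored in a design tool look different in-engine:

```python
def srgb_to_linear(c):
    """sRGB (gamma-encoded) channel in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Linear-light channel in [0, 1] back to sRGB encoding."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend(a, b, t, linear):
    """Mix two sRGB channel values with factor t, either directly in
    gamma space (what design tools typically show) or by converting
    to linear light first (what a linear-pipeline engine computes)."""
    if not linear:
        return a * (1 - t) + b * t
    mixed = srgb_to_linear(a) * (1 - t) + srgb_to_linear(b) * t
    return linear_to_srgb(mixed)

# A 50% blend of black and white:
gamma_mix = blend(0.0, 1.0, 0.5, linear=False)   # 0.5, a middle gray
linear_mix = blend(0.0, 1.0, 0.5, linear=True)   # ~0.735, visibly lighter
```

Pre-blending the gradient into the background color in the exported asset, as suggested above, sidesteps this divergence because the engine no longer has to compute the mix.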
I know it’s a complex topic to understand and delicate to explain, but I thought it was important to share, because, at least in Unity, there is no definitive solution. And in case you go through this situation, remember the suggestions above and don’t despair.
If you want to read more about it, the Unity manual’s section on linear rendering is a good starting point.
Design review
Finishing in harmony with all the previous topics, this one will be pretty straight to the point: if you, as a Designer, want your UI to be high quality in implementation, you need to be involved in the entire process. By that I mean every production step.
Don’t just stick to Figma. Participate in definition and implementation! See how engineers interpret your deliverables, how assets are exported and imported into the project, and, above all, try to understand how your UI is implemented in the engine you work with. That way, whenever necessary, you can make a second pass to refine what only the trained eyes of a Designer can see.
In some studios the UI Designer doesn’t need to implement anything; in others, they take care of every step. Regardless of how it works where you are, keep in mind that learning new skills adjacent to your area of mastery will always improve you as a professional.
Why should you care about optimizing mobile game UI?
- Being a designer/artist who understands the basics of optimization can keep you from coming across a number of routine mishaps.
- Including some of the steps mentioned above in your workflow will help you become a more seasoned professional concerned with the impact of your work.
- Optimizing from the start is the best way to avoid problems.
- Understanding these issues is critical to achieving a high level of visual quality in your game.
- Optimization practices, in general, improve the lives of game developers and players alike, bringing scalability and performance.
After all, optimizing is an eternal learning process. Even though I share all these themes with you, I’m constantly talking to my team to understand how we can improve.
And more important than optimizing is being aware of what you want to improve. To sell these ideas to your team, you need to make clear what problems will be solved, the short- and long-term gains, and the size of the impact on the roadmap. With that mapped out, things only tend to get better!
The right solutions will always vary with the context your game is inserted in. All the suggestions cited here are based on my experiences, so it’s important to analyze what the biggest source of problems in your UI pipeline is today.
If you’ve made it this far, I hope this material has contributed to your professional growth in some way. And if you liked it, feel free to check out my other articles or to reach out to suggest new topics, chat, follow my work or even tell me if you managed to apply any of the techniques I mentioned.
Before leaving, a special thanks to the mentors, coworkers, and friends who contributed directly or indirectly to the publication of this material: Ingrid Bugarin, Om Tandom, Allan Camargo, Patrick Sava, Musa Sayyed, Huila Gomes, and Vitor Canuto.
Thanks and hope you enjoyed!
– Stéfano B. Girardelli.