What technologies, processes and solutions we use when developing on Unreal Engine 4

Published in MY.GAMES · 13 min read · Mar 15, 2023


As game developers, we’re not only interested in exchanging experience, we’re also interested in sharing specific technical solutions. In our case, we use Unreal Engine, so let’s talk about it!

I’m Viktor Shchepkin, a Product Director at MY.GAMES, and in this article, I’ll detail our experience working with UE4 and describe the solutions and processes we use when developing projects. In particular, we’ll hit on a few key points:

  • Unreal Engine Modules and Plugins
  • Choosing a Coding Standard
  • Setting a Data Standard
  • Data Validation
  • Source Code and Data Management
  • Git
  • CI/CD

Everything I’ll share is exclusively based on our personal experience, and it can always be improved or challenged. Feel free to share your thoughts and suggestions in the comments!

Unreal Engine Modules and Plugins

When working with UE, we believe that one can (and must) adhere to the principle of modularity.

We strive to use many independent system modules because this allows us to assemble our projects like building blocks, which makes it easy to transfer entire features from project to project.

To achieve modularity, we take advantage of several features Unreal Engine provides:

Unreal Engine Modules are collections of classes that can be part of both the project and plugins. (It’s important to keep in mind that an Unreal Engine Module can only be implemented in C++, so if we want to export any blueprint as part of the module, we have to create a separate plugin.)

We use Unreal Engine Modules when all the logic can be stored in one module. For example, our Character Stats System Module is a basic set of GAS attributes, components for working with them, and a set of auxiliary functions used to implement basic character representation, its characteristics, and combat system.

It’s worth noting that the Unreal Engine Modules we develop and include in a project can only depend on the engine and plugins, but not on each other! Code with such dependencies should be in the Unreal Engine Module of the project itself.

Plugins are collections of code (in the form of Unreal Engine Modules or Blueprints) and/or data (assets).

We use plugins when developing more complex features unlikely to fit into one Unreal Engine Module, or when a feature, in addition to the Unreal Engine Module itself, contains some content.

For example, our Compass Plugin consists of a Compass Module, which implements the in-game compass behavior, and a Widget, which is the base UI representation of the compass. The Widget can be used at the prototyping stage as a ready-made solution, and later as a template for implementing your own unique UI representation.
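
To make the structure concrete, here is the on-disk shape such a feature plugin typically takes, sketched as shell commands. The folder names are our convention rather than an engine requirement, and the CompassPlugin name simply matches the example above:

```shell
#!/bin/sh
# Typical layout of a self-contained feature plugin (illustrative names).
PLUGIN=Plugins/CompassPlugin

mkdir -p "$PLUGIN/Source/CompassModule/Public"   # C++ module: public headers
mkdir -p "$PLUGIN/Source/CompassModule/Private"  # C++ module: implementation
mkdir -p "$PLUGIN/Content/UI"                    # blueprint/widget assets
: > "$PLUGIN/CompassPlugin.uplugin"              # plugin descriptor (JSON)
```

Because the C++ module and the content live under one plugin folder, the whole feature can be copied into another project as a unit.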

The modular approach is what makes cross-sharing of technical solutions between projects possible, since we can reuse a ready-made feature on another project and, if necessary, improve and extend it. This saves significant time and resources during prototyping and project development.

Coding Standards

Before talking about code standardization, we should emphasize that our studio has several independent teams. Each has its own history and experience, has suffered its own unique bumps and bruises, and, accordingly, has its own code standards. But at the heart of each approach is the general coding standard from Epic Games.

We chose the code standard from Epic Games because we didn’t want to reinvent the wheel, instead relying on an existing code standard that developers meet every day when working with the Unreal Engine. This is good because, when hiring new employees, we don’t need to teach them to use our unique standard. (And, if someone isn’t familiar with Epic Games’ code standards, they’ll be able to quickly learn it as they’ll be seeing it 90% of the time.)

In addition, if you write your own coding standard, it will most likely be difficult to adopt, because it won’t suit every developer across different teams. It’s better to take something more generalized with a preexisting history.

As for documenting the standard, we use the documentation from Epic Games as a basis, though we extend or improve it in places. That said, I strongly advise against transferring all these specialized additions into standalone standard documentation, for the following reasons:

  • Documentation grows rapidly, making it difficult to find the right item
  • As projects progress, documentation quickly becomes out-of-date
  • Documents of this type are usually stored in a separate place (for example, in Confluence), not beside the code that developers work with
  • And well, let’s be honest: due to a general lack of time, few read the documentation

Instead, we store the standard as code: a separate plugin that contains an Unreal Engine Module and assets serving as documentation. The assets are blueprints which also describe the code standard. This is important because the standardization of blueprint code should not differ in any way from the usual standards used when writing C++ in Unreal Engine.

This approach has proved itself most worthwhile during code reviews: we no longer need to send a documentation link and make an employee search through it. Instead, we just point to the relevant line in CodingStandard.h or CodingStandard.cpp.

The teams have differences in how they implement their standards, but these are minor:

  • The order and style of writing #include
  • Rules for commenting functions and variables
  • The order and use of Metadata Specifiers

Data Standards

We believe that if there is code standardization, then there should be data (assets) standardization. (However, we’re not talking about blueprints as we have a Coding Standard for those.) This includes standardization of all the following data:

  • Content naming (naming conventions)
  • Project folder structure
  • Meshes
  • Textures
  • Levels

When it comes to data standardization, Epic Games does not have a specific standard, only a set of recommendations scattered across many articles in the documentation. Thankfully, the community has come to the rescue and, for the most part, solved the problem.

But unlike the coding standard, we document the data standard in Confluence. This is because the data standard is highly project-dependent and can change from project to project. Of course, some things will always be similar across projects, but most of the standard will differ. Here are some examples:

  • The content naming standard quickly expands within a project, especially when development follows the Data-Driven Gameplay paradigm: when only the DA_/DT_ prefixes are used for naming data assets, searching and filtering them becomes a problem.
  • For meshes, the basic rules described in the Style Guide are greatly supplemented, for example, with rules for socket naming.
  • For textures, the maximum texture resolution depends heavily on the platforms the project targets.

Data Validation

Naturally, since we have data standardization, we must have data validation as well. Data validation is a tool which ensures that only valid and well-formed data is stored in the project repository. First of all, when validating data, we look at the following:

1) Data compliance with current standards:

  • In real life, developers cannot keep everything in their head or constantly check the documentation.
  • Don’t forget the problems with standard documentation, which quickly goes out of date as the project changes. Also remember: not everyone reads the documentation.

2) The data stored in the repository must be valid, that is, it should not contain errors or warnings.

3) We want to avoid the “Butterfly effect”. This is a phenomenon when the smallest changes in one file might cause problems in other related files.

For data validation, Epic Games already has a ready-made solution (even several) in the form of Data Validation Plugin. But for now, we’ll only go over the key aspects.

First of all, we validate:

  • Naming Convention. Good naming helps developers navigate a project more easily. If everyone starts using their own naming, it might turn into a problem and, as a rule, we end up with data duplication.
  • Any imported data. All source data imported into the project must be placed on a special virtual disk, which represents the content repository. We don’t allow desktop import paths!
  • Blueprints. We pay great attention to errors and warnings. Errors are self-explanatory, but warnings deserve clarification. It may seem that the warnings the engine issues are not so scary, but in fact, they are! If you ignore a warning, then some time later (most likely at the most inopportune moment) it will turn into an error. And since a long time can pass between when a warning first appears and when the error hits, the cost of finding and fixing the problem can be too high.
  • Variables (in blueprints). As part of our code standardization, all system variables should be placed in a separate category and followed by comments. This is necessary for content makers to understand which data can be changed and which are system data, which should not be changed.
  • Textures. When working with textures, it’s important to ensure that each side of the texture is a power of two. It’s also important that the texture size doesn’t exceed the threshold set for the project. For example, if we create a project for a mobile platform and have an 8K texture there, this will lead to big problems with texture memory (and this happens very often).
  • Static Meshes. We check UV, avoid overlapping, and check the validity of LODs.
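
Two of the simpler checks above can be sketched as plain helpers outside the engine. The prefix list here follows the common community style guide and is an assumption, not our exact rule set, and the size cap is an illustrative project setting:

```shell
#!/bin/sh
# Naming convention: assets must carry a recognized type prefix.
# (Prefix list is illustrative, after the community style guide.)
asset_prefix_ok() {
  case "$1" in
    BP_*|DA_*|DT_*|T_*|SM_*|SK_*|M_*|W_*) return 0 ;;
    *) return 1 ;;
  esac
}

# A positive integer is a power of two iff n & (n - 1) == 0.
is_power_of_two() {
  [ "$1" -gt 0 ] && [ $(( $1 & ($1 - 1) )) -eq 0 ]
}

# Textures: each side a power of two and within the project cap.
texture_size_ok() {
  # $1 width, $2 height, $3 project maximum (e.g. 2048 for mobile)
  is_power_of_two "$1" && is_power_of_two "$2" \
    && [ "$1" -le "$3" ] && [ "$2" -le "$3" ]
}
```

In practice these checks run inside the engine against loaded assets, but the underlying rules are this simple.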

The data validation process is launched via a client-side Git hook: it runs a commandlet that checks each file included in the commit.
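
A minimal sketch of such a hook follows. The editor path variable, the project name, and the exact commandlet flags are illustrative and would need adapting to your setup:

```shell
#!/bin/sh
# pre-commit: validate staged Unreal assets before the commit lands.

# Pure helper: keep only Unreal asset files from a newline-separated list.
filter_uassets() {
  grep -E '\.(uasset|umap)$' || true
}

staged="$(git diff --cached --name-only --diff-filter=ACM 2>/dev/null \
  | filter_uassets)"

# Run the validation commandlet headless; skip if no editor is configured.
if [ -n "$staged" ] && [ -n "${UE_EDITOR_CMD:-}" ]; then
  "$UE_EDITOR_CMD" "$PWD/MyProject.uproject" \
    -run=DataValidation -unattended -nopause || exit 1
fi
```

A nonzero exit from the hook aborts the commit, so invalid data never reaches the repository.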

Source Code/Data Management

All project data is stored in two different repositories:

  • The project repository: here we use Git, on top of which is Git LFS
  • The art repository with sources (here we are talking about any sources, including giant PSDs), for which we use SVN
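
For reference, Git LFS tracking for Unreal’s binary formats is configured in `.gitattributes`; the `lockable` flag is what enables the file-lock workflow covered later in this article:

```
# Store Unreal binary assets in Git LFS; "lockable" enables file locks.
*.uasset filter=lfs diff=lfs merge=lfs -text lockable
*.umap   filter=lfs diff=lfs merge=lfs -text lockable
```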

Why Git and not Perforce? Unfortunately, I don’t have a specific answer; it’s just historical. But I’ve had the chance to work with Perforce on Unreal Engine projects, and here’s my personal opinion: Git doesn’t cope well with every task (we’re talking about working in conjunction with Unreal Engine), but at the same time, Perforce is nothing to shout about either. Unfortunately, no one has made a silver bullet (that I know of); there are advantages and disadvantages to both. The attractive thing about Perforce is the ready-made technical solutions from Epic Games, but even that is now easily covered by excellent open source solutions.

A curious observation: I often hear that some developers are switching to Plastic SCM. I’ve not worked with it, so if you have any related experience, share in the comments, I’m interested in learning more about it.

Now, about SVN: it’s easier for artists to work with (and most artists work exclusively with it). But in this regard, we have a rule in the team: any artist working on a project should know at least the basics of Git, like pull/push/branch and who to ask for help when there’s a conflict.

Why do artists need all this? Since artists (this is more about 3D Artists, texture artists, sometimes UI Artists, etc.) are responsible for their content from the very beginning of development to the full integration of content into the project, they need to know Git. Of course, we have a Technical Artist who integrates and automates these processes, but that’s not the point: artists need to understand and see what their work looks like in the project after post-processes and shading have been applied. Therefore, artists should be able to integrate their work into the project and upload it (sometimes even into their experimental branches).

The second advantage of SVN is simpler sparse checkout: when you need a specific folder, you can fetch just that folder (and its contents) without downloading the entire repository. We even have an example of when this comes in handy. Say someone at MY.GAMES needs to look at old Skyforge art assets, which weigh 4 TB. If they were in Git, we would have to download all 4 TB, which would take about a week. How do I know? Well, because once, we had to download all the sources.

For this reason, our art is in SVN, and the project is in Git.

Git

As I wrote earlier, we use Git Hooks, and in addition to the client-side hook for data validation, we have many different server-side hooks. Here is a description of some of them:

  • A commit message can only be in English
  • A commit message must contain the task number
  • File names must not contain Cyrillic characters
  • All large files must be in Git LFS only
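
The message and filename rules can be sketched as small helpers of the kind a server-side hook would run over each pushed commit. The Jira-style task-key pattern is an assumption (match it to your tracker), and the filename check here rejects any non-ASCII name, which is stricter than the Cyrillic-only rule:

```shell
#!/bin/sh
# Sketches of per-commit checks run by our server-side hooks.

# Commit message: ASCII-only English, and must contain a task number.
message_ok() {
  msg="$1"
  # Reject non-ASCII characters (covers Cyrillic and other scripts).
  if printf '%s' "$msg" | LC_ALL=C grep -q '[^ -~]'; then
    return 1
  fi
  # Require a Jira-style task key (e.g. "PROJ-123") in the message.
  printf '%s' "$msg" | grep -Eq '[A-Z]+-[0-9]+'
}

# File names: no non-ASCII characters allowed.
filename_ok() {
  ! printf '%s' "$1" | LC_ALL=C grep -q '[^ -~]'
}
```

In a real pre-receive hook these run over `git log` output for the pushed range, rejecting the push on the first violation.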

Of course, working with Git, we actively use branches, and any developer can create separate branches and work in them as needed. This is especially important for large features, which are initially implemented only in separate branches, with the main branch periodically merged in. A feature branch is merged into the main branch only after a QA department check.

There is one more important thing when a team works on a single Unreal Engine project: the files we see in the Content Browser are, as a rule, binary. What this means for us: if two developers work on the same file simultaneously, the one who pushes their changes last will definitely hit a conflict. And since the files are binary, they will have to redo their work.

About three or four years ago, while working with Git, we even had to write in a special chat that we were working on such and such file and it was better not to touch it. And that was not very convenient. But progress does not stand still, and since we have Git LFS on top of Git, we actively use the lock system (yes, yes, yes, Git LFS has a file lock system), which is very similar to that in Perforce.

Here’s how it works: if a developer wants to work with a particular file, then they must first lock it. This procedure will reserve the file only for a specific developer, and no one else will be able to work with it until the lock is removed.

At the same time, the very procedure of locking and unlocking can be performed via Git Bash or any Git Client that supports this function, as well as through the Unreal Engine itself using the Source Control Plugin.

If you want to try, here is a decent solution for working in the engine.

The most important thing in this process is to teach people to remove locks once they’ve finished their work. To help with that, we’ve added automation: after the commit, the lock is removed automatically.
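
That automation can be outlined as a post-commit hook along these lines. Flag names are per the git-lfs documentation; treat this as a sketch rather than our exact script:

```shell
#!/bin/sh
# post-commit: release our Git LFS locks on files in the commit that
# just landed.

# Pure helper: extract the path column from "git lfs locks" output
# (lines look like: "Content/Maps/A.umap<TAB>user<TAB>ID:42").
locked_paths() { awk '{ print $1 }'; }

committed="$(git diff-tree --no-commit-id --name-only -r HEAD 2>/dev/null || true)"
locked="$(git lfs locks --local 2>/dev/null | locked_paths)"

for path in $committed; do
  # Unlock only files that we actually hold a lock on.
  if printf '%s\n' "$locked" | grep -Fxq "$path"; then
    git lfs unlock "$path"
  fi
done
```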

CI/CD

I’m not going to unveil anything drastically new here, but since I’m talking a little about everything, I need to cover CI/CD, too. In fact, I’ll concentrate mainly on CI. First, let’s brush up on what CI is.

Continuous Integration implies frequent automated builds of the project in order to quickly identify integration problems. You will always have an up-to-date and test-ready version of the product.

We use Jenkins for all that. I’m not going to describe the Jenkins setup process itself, but here are a few things that we advise you to do first.

1) Notification system. It doesn’t matter which messenger you use; notifications will make your life much easier! Build failed? Gotta fix it, and the sooner, the better. The builds that trigger these notifications:

  • Post Commit Build. After each commit, we run a build (without deploying artifacts); this helps detect compilation and build errors in the project at the earliest stages of development.
  • Nightly Stable Build. At the end of each day, we take the current state of the main working branch and produce a Debug Build, which goes to our QA every morning for smoke testing.

2) Using Commandlets as Build Steps. These are additional build steps, for example: light baking and navigation mesh calculation for AI.

Why do we need Commandlets if all this can be done in the editor? First, not everyone has the powerful hardware necessary to calculate something quickly. Second, errors sometimes do happen. For example, someone might forget to save the map along with the calculated Nav Mesh.
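
Put together, a Jenkins job for this runs steps roughly like the following. The engine paths, project name, and flags are illustrative; the actual invocations are commented out here since they require an engine installation:

```shell
#!/bin/sh
# Outline of our CI steps. Paths and flags are illustrative; the real
# commands are commented out because they need an engine install.

UPROJECT="${WORKSPACE:-.}/MyProject/MyProject.uproject"  # hypothetical layout

# Pure helper: compose common commandlet arguments for a build step.
commandlet_args() { printf '%s' "-run=$1 -unattended -nopause"; }

# 1) Post-commit / nightly build (Debug config for QA smoke tests):
# "$UE_ROOT/Engine/Build/BatchFiles/RunUAT.sh" BuildCookRun \
#   -project="$UPROJECT" -platform=Win64 -clientconfig=Debug -build -cook

# 2) Extra build steps as commandlets, e.g. resaving maps with baked data:
# "$UE_ROOT/Engine/Binaries/Win64/UE4Editor-Cmd.exe" "$UPROJECT" \
#   $(commandlet_args ResavePackages) -buildlighting
```

Running these steps on a build machine frees developer workstations from heavy bakes and guarantees the results are actually saved and committed.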

Autotests

This is the saddest part of my text, because we have autotests, but not for everything. The main difficulty is that writing a high-quality autotest with good coverage of all scenarios takes 60–70% of the feature creation time. And as a rule, programmers themselves write autotests, because it’s hard to find a QA who knows Gauntlet or Unreal Engine in the context of developing autotests. (And this is a harsh business; sometimes it’s difficult to explain the value of autotests in comparison with some new game mechanics.)

But each project has two parts that must be covered by autotests: the backend and everything related to the economy (including monetization)! Without autotests for the backend and the economy, you’re at great risk of finding your server-side service simply dead, with corrupted data. And that’s the best case; at worst, you’ll suffer financial losses.

Therefore, assemble a separate QA team that will write autotests for the backend, because there are too many cases that cannot be covered with ordinary manual testing!


In the future, we’ll continue to improve our workflow. We plan to introduce other technologies, for example:

  • Clang-Tidy for Unreal Engine
  • Linter for Unreal Engine
  • Integration of the bug-tracking system into Unreal Engine

And this is only a small part of what we develop for working with projects. Thanks for reading!

MY.GAMES is a leading European publisher and developer with over one billion registered users worldwide, headquartered in Amsterdam.