Modularising an iOS app: why and how we have been breaking the Badoo app up into modules

Alexander Kulabukhov
Bumble Tech
16 min read · Mar 22, 2021


At Bumble Inc, the parent company operating the Badoo and Bumble apps, the iOS development team has been creating modules for several years now. A large part of our new code is also developed outside the apps’ code base. As a result of this work, we now have over 100 modules across our different applications. In this article I will tell you about our experience and answer the most frequently asked questions on modularisation:

  • What principles do we use to select modules?
  • How can connections be organised between them?
  • Is a single framework enough for a feature?
  • How can startup time be reduced for a multi-module app?
  • What role does monitoring play in this process?
  • Can the creation of new modules be automated?
  • How can linking errors and similar problems be prevented?

Last year, my colleague Artem, leader of our iOS core team, gave a talk about modularisation at the FunCorp Meetup. It was such a success that we thought it would be nice to share it with a wider audience. In this article, I provide a more detailed breakdown of the modularisation process, along with some details not covered in his talk.

Part 2 — Modularising the Badoo iOS app: dealing with knock-on effects

Modularisation and its benefits

Modularisation is not just about taking a chunk of code, wrapping it in a separate framework, sitting back and relaxing. This is a process whereby the code base is broken up into smaller specialised modules, ready to be reused. The following two points are particularly important:

  1. This is a process. You need to understand that modularisation represents its own separate process as part of your department’s work and will require regular support. You won’t be able to undertake it as a one-off.
  2. The modules are specialised. When choosing how to split the app into modules, we decided to give preference to an intuitive interface that reflects the task the module solves, rather than to some technical unit with a code “filling”. This has protected us from stereotypes such as believing that modules have to be more or less the same size in terms of code, or share the same architecture.

Of course, additional rules in projects bring new headaches. Why, then, do we do this at all?

  1. Scaling development. This approach allows you to extend the development department horizontally without major difficulty: new developers work on new modules in isolation.
  2. Economy of time and resources. When you need to reuse code in another product, you will immediately appreciate how fast you are able to do so.
  3. Synergy between apps. Product requests enhance the functionality of modules, which can then be used in all of the company’s products.
  4. Quality of code. As stated above, when modules are specialised and their interfaces are simple, code coupling becomes much lower, as does the entry threshold for new programmers joining the project. Supporting and testing the code also becomes simpler.

Our experience

How did we come to the decision to break the application up into modules? In actual fact, at the point when we were thinking about making changes to processes, we already had a certain number of modules: network layer, analytics, error handling and others. We refer to them as the “platform”. The question was: what’s next? We decided to create two types of intermediate modules between the app target and the platform:

  • Functional modules: ready-to-use modules which have proven themselves and have intuitive interfaces and functionality, such as “registration”, “payments” and “chat”. Their structure is described later.
  • Experimental (internal) modules: modules on their way to becoming fully functional ones but, due to certain limitations (resources, marketing deadlines), not there yet.

Let me qualify this straight away by saying that our approach precludes horizontal connections between functional modules. The app target connects them (and, in an ideal world, its only tasks would be building modules and providing transitions between them). Experimental modules can take on this responsibility if you need to build some functionality quickly on top of existing frameworks, “mixing in” additional business logic.

A good example of an experimental module would be integrating the app with some new iOS feature, for example in-app authorisation using Apple ID. Let’s imagine managers really want to release support for this functionality on the same day that a new version of iOS is released, so that Apple includes us in its chart of top apps supporting Apple ID, but they are not yet sure about the future of the feature in question. What do we do?

  1. We create a new experimental module
  2. We link a functional authorisation module to it
  3. We implement the option of accessing it using Apple ID, based on existing code
  4. Bingo!

If the functionality goes on to be developed further, we will spend some time rearranging dependencies and moving the functionality to a separate new or existing module. Alternatively, we will simply delete the experimental module and forget about it.
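
For illustration, here is a rough sketch of what such an experimental module might contain. The AuthorisationService protocol stands in for the public interface of our functional authorisation module (its name and method are hypothetical); the rest is the standard AuthenticationServices API.

```swift
import AuthenticationServices

// Hypothetical public interface exposed by the existing functional authorisation module.
public protocol AuthorisationService {
    func signIn(appleUserID: String, identityToken: Data?)
}

// Experimental module: mixes Sign in with Apple into the existing authorisation flow.
public final class AppleIDSignInFlow: NSObject, ASAuthorizationControllerDelegate {
    private let authorisation: AuthorisationService

    public init(authorisation: AuthorisationService) {
        self.authorisation = authorisation
    }

    public func start() {
        let request = ASAuthorizationAppleIDProvider().createRequest()
        request.requestedScopes = [.email]
        let controller = ASAuthorizationController(authorizationRequests: [request])
        controller.delegate = self
        controller.performRequests()
    }

    public func authorizationController(controller: ASAuthorizationController,
                                         didCompleteWithAuthorization authorization: ASAuthorization) {
        guard let credential = authorization.credential as? ASAuthorizationAppleIDCredential else { return }
        // Reuse the functional authorisation module for everything after Apple hands us a credential.
        authorisation.signIn(appleUserID: credential.user, identityToken: credential.identityToken)
    }
}
```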

What does an ideal functional module look like?

For ourselves, we decided that, ideally, every feature module should comprise three submodules:

  1. Logic. The only thing here is business logic, built on the platform (network, cache, analytics, etc.) and thoroughly covered by unit tests.
  2. User interface. This is a separate submodule containing all the feature’s view components. In terms of dependencies, it is only linked to the platform UI kits and the design system.
  3. Interface. A top-level submodule that wires the logic together and delivers it to the UI. It also provides a convenient public interface for users of the module. For simplicity’s sake, we refer to this as the interface submodule (a sketch of how the three fit together follows).
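
To make the split more tangible, here is a minimal sketch of how the three submodules might relate to one another, using a hypothetical payments feature. All names here are illustrative and not taken from our actual code base.

```swift
import UIKit

// Logic submodule (e.g. PaymentsService): business logic only, built on the platform,
// fully covered by unit tests. It knows nothing about views.
public protocol PaymentsService {
    func purchase(productID: String, completion: @escaping (Result<Void, Error>) -> Void)
}

// UI submodule (e.g. PaymentsUI): view components only, linked to UIKit and the design system.
// User actions are exposed as plain callbacks, so the UI needs no business logic.
public final class PaywallViewController: UIViewController {
    public var onBuyTapped: ((String) -> Void)?
}

// Interface submodule (e.g. Payments): wires logic and UI together and exposes
// a small public entry point to users of the module (the app target).
public enum PaymentsModule {
    public static func makePaywall(service: PaymentsService) -> UIViewController {
        let paywall = PaywallViewController()
        paywall.onBuyTapped = { productID in
            service.purchase(productID: productID) { _ in
                // Handle the result, route to the next screen, etc.
            }
        }
        return paywall
    }
}
```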

We have tried to leave room for flexibility in how modules are organised and how they interact with one another (as you will also have noticed from the description of experimental modules). For this reason, in the case of simple functional modules, the developer can use a simplified arrangement of submodules.

Sometimes the whole interface fits onto a single screen (for example, a module prompting the user to update the application) and its logic is just a couple of requests. In these cases, based on common sense, the developer can combine the interface submodule with the logic submodule, allowing the logic to import the UI for building the component.

As you can see, even for simple modules we have opted to make the UI submodule separate. It is a reasonable question to ask, “Why?”

Because it allowed us to create a UIGallery app, into which all the UI submodules are imported. They have no business logic and no dependencies on the platform. In fact, there are multiple benefits:

  • The UI can be worked on in isolation: you can develop the user interface for a new module as a whole, supplying fake data or fake user action handlers (see the sketch after this list)
  • A single screen can be used to display a component configured for multiple apps (we achieve this using styles; you can find more details in this article written by one of my colleagues, Andrey Simvolokov)
  • A visual regression test (VRT) is automatically added for each component. I think this is the principal benefit of our tool: it makes it impossible to accidentally change any of the application’s components. We have already updated our design system several times, and VRTs helped us identify problems with new elements at a very early stage: incorrect images, inadequate contrast between text and background colours, etc.
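
As an example, this is roughly how a UI-only component can be shown in the gallery with fake data and stub action handlers (the component and function names here are made up for illustration):

```swift
import UIKit

// A hypothetical view component from a ChatUI submodule. It only renders what it is given
// and reports taps through a callback, so it has no business logic or platform dependencies.
final class MessageBubbleView: UIView {
    var onTap: (() -> Void)?

    func configure(text: String, isIncoming: Bool) {
        // Pure layout and styling; no networking, analytics or caching here.
    }
}

// In the UIGallery app the component is set up with fake data, so it can be
// developed in isolation and covered by a visual regression test (VRT).
func makeGalleryEntry() -> UIView {
    let bubble = MessageBubbleView()
    bubble.configure(text: "Hello from the gallery!", isIncoming: true)
    bubble.onTap = { print("Bubble tapped") }
    return bubble
}
```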

Before moving on to describe the implementation, I would like to share some facts about the modularisation process in Badoo and Bumble applications:

  • We have a total of 110 modules for two applications
  • We work in cross-functional teams, each of which is responsible for its own set of modules
  • Code for any particular functionality is never duplicated
  • Integrating ready-to-use functional modules into our applications is fast
  • At Bumble, everyone thinks in terms of modules, from product managers right through to the engineers in our continuous integration department.

Preparing for new processes

We knew that we were going to be launching more and more new functionality to be used in several applications, but we were unsure whether modularisation would solve our problems. So, we began with an experiment.

At that point in time, we had three applications with chat facilities, each with its individual implementation. We decided to use the ready-made chat functionality from the Badoo app as a basis for creating a single reusable module.

Having decided on the focus of the experiment, we put together a team that included not only developers but also a product manager to oversee the project at all stages. Next, we carried out assessment and planning.

The image above shows one of the stages of the project: planning helped us to manage lead times with a virtually zero margin of error. The descriptions of the individual tasks are hidden (there were about 100 of them) but, believe me, they were detailed, specifying deadlines and dependencies. It is important to record how long you estimate each task will take because, in retrospect, this helps you identify where hidden problems lie.

For example, the tasks marked with the number 1 related to separating off and extracting individual, fully independent components, such as the bubble surrounding a message. Even to us, accustomed to the code, this seemed complicated, and as a result we overestimated the lead times for these tasks five-fold. The tasks in the next group, by contrast, appeared to be very easy. They related to components that interact closely with the keyboard. As a result, we were about two weeks out, but discovered a lot more about how the keyboard actually interacts with its environment.

The main conclusion we drew from this whole story was that you need to document the steps you take in every experiment. This will help you and, most importantly, your colleagues to avoid misjudging the situation and repeating previous mistakes.

Once planning was completed we set about implementing the project. Development of the new functionality in the chat facility was not stopped at this point, that is to say, we moved the chat facility to a parallel module while still working on its development. This made the process a little more complicated but, I repeat, having a detailed plan for exporting the module allowed developers working on the new functionality to get their bearings, in terms of what could be changed, what was not worth changing, how changes could be made, etc. Whenever we saw that our work was overlapping in a given area, the issue was either resolved locally or was passed on to the product managers for them to resolve.

What happened next?

  1. We broke Badoo chat up into frameworks (Chat, ChatUI and ChatService, following the functional module structure described above).
  2. We integrated the chat module into Bumble. This was the most troublesome part of the project, since the public interface of the module, which suited Badoo, was not quite right for Bumble. We had to change a lot of things, not only in Bumble but also in Badoo. Fortunately, we were able to deal with this.
  3. To integrate chat into the third app, we called on the developers of the app in question. Rather than doing it ourselves, we simply monitored the process: we wanted to know what other problems might arise, so we could iron out any remaining rough spots.
  4. It took about three days to integrate the chat into another, experimental app, and at an internal hackathon one of the developers added it to an app prototype in less than 24 hours.

Results of the experiment

  1. Documentation ready. At the end of the experiment, we had a technical plan for exporting a new module. We knew roughly what potential problems there might be and the options for resolving them.
  2. We had a working example in the repository. This meant we could see the correct configuration, browse the commit history, and see how to set up dependencies and resolve problems.
  3. Lots of developers took part in the process and got a good idea of how it works. From the start, two camps formed: those extracting the shared functionality, who were trying to keep the component from becoming too bloated; and those developing new functionality, who found it more convenient to embed functionality inside the component rather than wrap everything in interfaces, inject dependencies and so on. Thanks to this, developers became more involved, and the controversy helped clarify matters.
  4. Everyone was aware of what to expect in the future. Project and product managers, the QA automation team and the developers involved could all see how shared module development touched on their particular areas of responsibility. This allowed them to anticipate some of the problems that would occur.

Problems found

Before rolling out the modularisation process to the whole iOS development department, we identified and resolved several problems.

Harmonising build configurations for all modules

The initial and, I would say, most fundamental problem is managing configurations for the modules. If you suddenly have, for example, 50 modules and you want to change a given Swift compiler flag for the whole project, that is going to slow you down: you will have to go through the build settings manually for all the modules and set the flag. This is no fun, it takes ages and, what’s more, there is a high likelihood of an error slipping through. If that happens, you will then have to spend double the time getting to the bottom of why the project won’t compile.

A major downside is that there will be questions you are not able to answer quickly. How do we build everything? Is the Swift version the same everywhere? Is Bitcode on or off everywhere? And so on. Lacking this information, and being unable to change configurations quickly, left us extremely limited in terms of experiments. And we didn’t like that.

By the time we noticed this problem, we already had a Noah’s Ark of settings: compiler warnings switched off, truncated debug symbols, Bitcode in the debug config etc. What could we do about all this?

A minor caveat. Lots of people (if not everyone) know of the wonderful tool CocoaPods. It allows you to set up development pods, which your functional modules could be turned into, and link them to one another. Using post_install hooks, it also allows you to configure identical build settings. But we opted not to pursue this path, because any deviation from the way CocoaPods has been designed as a tool would mean either giving up the tool or giving up our idea. I, for one, wanted to see how the solution to the issue of static linking differs from version to version. If you consider CocoaPods to be a mature enterprise tool for managing internal dependencies, please say so in the comments. I would be interested to know your reasons.

But let’s get back to the question of harmonising configurations for modules. For us, the solution was plain to see: xcconfigs. We had already used them in platform modules and now decided to extend this approach to functional modules. Here is xcconfig in brief:

  1. A plain text file with build settings, stored outside the Xcode project file (xcodeproj/project.pbxproj).
  2. Supports nesting: settings written once can be reused via the #include directive.
  3. Settings can be applied both at the project level and at the level of individual targets.

We have created a separate project in the repository. We store all generalised configurations here, along with application versions, basic compiler settings and additional settings for various build configurations (Debug, Release, Production and others). Top-level configurations for functional modules and the applications themselves are also stored here.

The modules themselves contain a minimal number of settings: just the relative path to the shared configuration project (so the global settings are imported correctly), plus the base Bundle ID, Info.plist and modulemap.
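
As an illustration, a module-level xcconfig could look something like this (the paths, identifiers and values below are made up; the real files are owned by the configuration project described above):

```
// Chat/Configs/Chat.xcconfig (illustrative)
// Pull in the shared settings from the common configuration project.
#include "../../Configs/Common.xcconfig"

PRODUCT_BUNDLE_IDENTIFIER = com.example.chat
INFOPLIST_FILE = $(SRCROOT)/Resources/Info.plist
MODULEMAP_FILE = $(SRCROOT)/Chat.modulemap
```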

Moving a long-established project to shared xcconfigs can turn out to be complicated and labour-intensive, but there is definitely a range of benefits to be derived:

  1. Harmonised settings make it possible to experiment and to keep the repository in a consistent state.
  2. Minimal risk of an error being made when updating configurations.
  3. Responsibility for supporting and controlling the project parameters can be handed to one team or to a particular developer.
  4. Any changes are clearly visible in the version control system.

In the final analysis, we decided to completely ban changes to the projects’ Build Settings, enforced at the Git pre-commit hook level.

Explicit dependencies

Another problem we encountered at the experiment stage was default implicit dependencies. As your dependency tree starts to grow, knowledge of specific dependencies becomes more valuable. Out of the box, Xcode prefers to use implicit dependencies.

This means that the build system analyses all the modules included in the workspace, creates a build plan from them and, based on its heuristics, builds your app. A good example of implicit dependencies in use is the way CocoaPods works: after pods have been installed, it happily tells you that the xcodeproj on its own will no longer work and suggests you use the xcworkspace from then on.

Once you have moved to explicit dependencies, the build system no longer makes heuristic decisions about what to build, in what order and so on. The module dependency graph clearly sets out the build priorities and the plan for building the final app binary.

If you have something that “is already working okay”, any additional actions will appear superfluous. Why make life complicated for yourself and developers? In response, I can suggest several reasons straightaway.

Firstly, as the number of modules increases, it will become important for you to see specific dependencies. It is easier to make changes when you understand what they might affect.

Secondly, without explicit dependencies, the likelihood of an error is quite high. For example, Xcode has an interesting quirk: when the app is launched in the simulator, it automatically adds the Derived Data folder to the Framework Search Paths.

With implicit dependencies, this makes it easy to miss module linking errors. (If you test the application on real devices before merging changes to the main branch, though, such linking errors will not stay hidden.)

Thirdly, explicit dependencies are the only option for generating a dependency graph directly from the project files. This topic will be covered in the next article; for now, we simply point out that it is a useful option.

Finally, this approach forces you to explicitly specify the precise modules you want to use when removing intermediate dependencies. For example, in the current arrangement the app uses both the chat module and the platform module. With implicit dependencies, removing the chat module lets the app continue to use the platform implicitly, whereas with explicit dependencies the compiler forces you to declare the platform as a dependency.

Automated creation of new modules

At the stage of testing the hypothesis, we encountered another interesting challenge: automating module creation. We did not think of this immediately, but quickly saw that doing it by hand was going to be a problem for a number of reasons:

  1. Xcode provides practically no automation instruments. Creating and supporting custom templates does not look like a simple task.
  2. Configuring a new module involves repeating steps of the same type. Basically, everything boils down to trying to get the state of your project as close as possible to the benchmark.
  3. As you progress towards the benchmark, you may forget something, or go wrong at some point etc. Errors are highly likely.
  4. Discontented developers. The average developer is a creative type. If creating a new module means sitting and clicking a mouse for a few hours on end, following instructions or, even worse, looking at the neighbouring module and trying to do the same, it is pretty soul-destroying.

So, what did we do? We solved the problem in a radical way: we wrote our own Swift script. It knows all our internal conventions and generates new modules, accepting only the name and a relative path as input parameters. Initially, this was a stand-alone solution based on XcodeGen, but during development the script became part of our Deps tool, which we will talk about in the next article. This is roughly what creating a new module looks like now.
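
Our actual script lives inside Deps, but a heavily simplified sketch of the idea might look like this: take a module name and a relative path, lay down the uniform folder structure, point the module’s xcconfig at the shared settings and write an XcodeGen spec from which the .xcodeproj can be generated. Every name, path and spec value here is illustrative.

```swift
#!/usr/bin/env swift
import Foundation

// Usage: create_module.swift <ModuleName> <RelativePath>
let arguments = CommandLine.arguments
guard arguments.count == 3 else {
    print("Usage: create_module.swift <ModuleName> <RelativePath>")
    exit(1)
}

let name = arguments[1]
let moduleURL = URL(fileURLWithPath: arguments[2]).appendingPathComponent(name)
let fileManager = FileManager.default

// 1. The uniform folder structure that every module shares.
for folder in ["Sources", "Tests", "Resources", "Configs"] {
    try! fileManager.createDirectory(at: moduleURL.appendingPathComponent(folder),
                                     withIntermediateDirectories: true)
}

// 2. A minimal xcconfig that inherits the shared build settings.
let xcconfig = """
#include "../../Configs/Common.xcconfig"
PRODUCT_BUNDLE_IDENTIFIER = com.example.\(name.lowercased())
"""
try! xcconfig.write(to: moduleURL.appendingPathComponent("Configs/\(name).xcconfig"),
                    atomically: true, encoding: .utf8)

// 3. An XcodeGen spec describing the framework target.
let spec = """
name: \(name)
targets:
  \(name):
    type: framework
    platform: iOS
    sources: [Sources]
"""
try! spec.write(to: moduleURL.appendingPathComponent("project.yml"),
                atomically: true, encoding: .utf8)

print("Scaffolded \(name) at \(moduleURL.path); run `xcodegen generate` there to produce the .xcodeproj")
```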

As output, we get a module in the correct location, with the right structure and a project configuration based on the shared xcconfigs. Automating module creation also brought a range of benefits:

  1. A new module can be created in minutes instead of hours.
  2. All modules have a uniform structure.
  3. All build settings are inherited from shared xcconfigs.
  4. The entry threshold for new developers is low.
  5. Specific people are responsible for the utility.

In lieu of conclusions

As you have seen, our experiment testing the modularisation concept on real app functionality was successful. We obtained a documented solution for scaling the process further and moved our projects to xcconfigs, making it easier to implement further plans. Switching to explicit dependencies gave us a transparent project structure, and a Swift script allowed us to automate the creation of new modules, cutting lead times to minutes.

Although it might seem that we are ready to scale our solution up to the whole department, there are a few things we haven’t yet taken into consideration…

Nevertheless, I will stop here and continue in part two. If you have any questions or have carried out modularisation experiments, please feel free to share your feedback!
