Embracing legacy on Android

Appaloosa Store Engineering
Jan 30, 2019

Starting a new project is always exciting. You ask yourself lots and lots of interesting architecture questions. You tune your stack of tools and libraries. You aim for maximum unit test coverage. You may even add instrumentation testing and configure it to run on real devices in your CI. Great, we all love that!
Sadly, most of the time this is not what we experience.

Often, we join a team to work on a project that has already been running in production for several years. Architecture choices have been made, libraries have become obsolete, APIs are deprecated and the people responsible for setting up the project in the first place have already left the company. Technical debt is high and there are business and technical parts of the code that you just don’t understand/trust/control.
On top of that, we are asked to add new features, keep the project stable, break nothing and fix bugs.

In this series of articles, I want to share with you some Android-flavored tips and tricks to help you in your journey through legacy code. Some of them are sanity checks to do first. Some are just day-to-day good practices, like a manifesto.

Legacy code is often seen as annoying, but everyone who loves software craftsmanship should work with it, not against it. ❤️

Commits

In our journey towards working with legacy code, small and isolated commits are our best friends! Everyone used to reviewing code knows how important it is to keep commits clean and atomic. That way regressions are easier to spot and to fix.

Chances are that you work with git. You can set up a tool like Overcommit to enforce well-written commit messages across your team. Here at Appaloosa we adapted the Angular contributing guidelines to fit our needs.
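As a sketch, a minimal .overcommit.yml enforcing a few commit-message rules could look like this (TextWidth, EmptyMessage and CapitalizedSubject are real Overcommit hooks; the limits shown are simply choices you might make):

```yaml
# .overcommit.yml — illustrative configuration, tune limits to your guidelines
CommitMsg:
  EmptyMessage:
    enabled: true
  CapitalizedSubject:
    enabled: true
  TextWidth:
    enabled: true
    max_subject_width: 72
    max_body_width: 80
```

Run `overcommit --install` once per clone to activate the hooks.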

Tests

It is almost impossible to change complex parts of the code without breaking things or introducing a bug at some point. If you want to work with confidence you must automate your tests. Put some effort into it and you will soon be rewarded.

The mighty pyramid of tests

Engineers at Google separate tests by size (small, medium, large), so it is natural to find the same nomenclature in the Android documentation: unit tests are S, integration tests are M and UI tests are L.

To simplify, I split my tests into two categories: unit tests and instrumentation tests. Whatever terminology you choose doesn’t really matter; what is important is to remain consistent. Simply remember that unit tests (S) run fast but cover a small part of the code, while instrumentation tests (M & L) are slow but represent what your user will really see: they hit almost every layer of your architecture. Balancing the two is always a tradeoff.

A small handy tip: to quickly identify tests in Android Studio, switch to the Tests view.

Android Studio “Tests” view

If your legacy project has a test suite, good! Identify how many tests there are and what type they are. test* folders contain unit tests and androidTest* folders contain instrumentation tests that run on devices or emulators.

Unit tests

If you are not familiar with at least unit testing, it’s time to level up and master it! Good books by Kent Beck or Roy Osherove can help you on this topic. No doubt you’ll find guidance and advice there.

Running your unit tests with code coverage enabled will give you a first estimate of your test suite’s health and of which parts of your code lack testing.

Code coverage window in Android Studio
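One way to collect coverage data, assuming the standard Android Gradle plugin, is to enable it on a build type (the exact reporting setup varies per project):

```groovy
// build.gradle (module) — illustrative fragment
android {
    buildTypes {
        debug {
            // Collects coverage data when tests run against this build type
            testCoverageEnabled true
        }
    }
}
```

With this in place, a task such as `./gradlew createDebugCoverageReport` generates an HTML report for instrumented tests; local unit test coverage is typically wired up separately via the JaCoCo plugin.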

A word on Robolectric

If your unit tests use Robolectric, it is probably because your SUT (system under test) relies on some Android SDK classes. Try to identify what those objects are: intents, activities, fragments? Is there conditional logic on the SDK version? Typically, do you see if (Build.VERSION.SDK_INT ...) or @RequiresApi(Build.VERSION_CODES.XXX)?
As the contract of some Android classes can change or be deprecated from one version to another, I like to manually specify the API level tests run on.

Identify the API versions (minSdkVersion and targetSdkVersion) your app can run on, then configure some tests to run on multiple SDK versions. Be careful though: the number of tests can grow quickly.
Configuration can be set at the package, test suite or even individual test level. More info on this here.

Example of Robolectric configuration at test level
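As a sketch (the test class and method names are hypothetical), pinning SDK levels with Robolectric’s @Config annotation looks like this:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.RobolectricTestRunner;
import org.robolectric.annotation.Config;

// Run the whole suite against both ends of the supported range,
// e.g. minSdkVersion and targetSdkVersion.
@RunWith(RobolectricTestRunner.class)
@Config(sdk = {21, 28})               // class-level default
public class DeepLinkParserTest {

    @Test
    @Config(sdk = 28)                 // overridden for this single test
    public void parsesHttpsLinks() {
        // ...
    }
}
```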

Instrumentation tests

Those tests are too often run on a single device or emulator, which prevents you from detecting some tricky errors. On mobile, and especially on Android, the range of configurations an app can run on is combinatorial. At its simplest, it can be summed up as: SDK version × orientation × hardware × density × language. Those are the variables you need to consider in your instrumentation tests.
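To get a feeling for how quickly that product grows, here is a tiny sketch (the dimension values are made up):

```java
import java.util.List;

// Hypothetical sketch: list the configuration dimensions you decide to cover
// and multiply their sizes to see how large the test matrix becomes.
class ConfigMatrix {
    static final List<Integer> SDKS = List.of(21, 26, 28);
    static final List<String> ORIENTATIONS = List.of("portrait", "landscape");
    static final List<String> LOCALES = List.of("en", "fr");

    static int countConfigurations() {
        return SDKS.size() * ORIENTATIONS.size() * LOCALES.size();
    }

    public static void main(String[] args) {
        // 3 SDKs × 2 orientations × 2 locales = 12 combinations already
        System.out.println(countConfigurations() + " device configurations to cover");
    }
}
```

Even three modest dimensions produce a dozen combinations, which is why you sample the matrix rather than cover it exhaustively.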

Pick the most interesting configurations for your project and find physical devices or emulators that closely match them. If you have some analytics about your users, you should know which devices are used the most.

And now… drumroll … run your tests on all devices!

Instrumentation tests for one of our apps in Firebase Test Lab. We can still improve diversity by mixing orientations or languages.
  • Do you see tests failing on a specific device?
    You’d better fix them: a piece of code doesn’t work properly on your end users’ devices.
  • Do you see tests failing randomly on the same device?
    Ouch, flaky tests are a pain, and they will drive you crazy. Take some time to fix them or they will slow you down. If you really can’t spot the problem, use JUnit’s @Ignore annotation or simply delete them. Your VCS won’t forget anything.

Tidy up your tests (but one at a time)

Writing good tests is like writing good code: it’s kind of an art :) Having a test suite in your legacy project is already a good start, but having a test suite you trust and can maintain easily is better.

If you don’t understand what a chunk of code does, a good practice is to look at its tests. But if you also find yourself struggling to understand what those tests mean, what’s the point? They probably need to be refactored so that they become clearer.

Avoid code duplication when instantiating your SUT. Move all initialization logic into helpers/factories to improve maintenance. If one test needs to instantiate the SUT in a different way, that’s not a problem: adapt your factory.
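As a sketch, with a hypothetical presenter as the SUT, such a factory could look like this:

```java
// Hypothetical SUT: a presenter that takes several constructor arguments.
class SessionPresenter {
    final String apiUrl;
    final boolean offlineMode;

    SessionPresenter(String apiUrl, boolean offlineMode) {
        this.apiUrl = apiUrl;
        this.offlineMode = offlineMode;
    }
}

// Centralizing construction keeps tests short and turns a future
// constructor change into a one-line fix instead of fifty.
final class SessionPresenterFactory {
    // Sensible defaults for the common case...
    static SessionPresenter create() {
        return create("https://api.example.com", false);
    }

    // ...and an overload for tests that need something different.
    static SessionPresenter create(String apiUrl, boolean offlineMode) {
        return new SessionPresenter(apiUrl, offlineMode);
    }
}
```

Tests then call `SessionPresenterFactory.create()` everywhere, and only the rare test that needs a special configuration passes explicit arguments.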

If you use Espresso or Mockito, you can create custom matchers to reduce the number of lines in your tests.

Refactoring

It is not always possible to rewrite a method or a class from scratch, even when we don’t like it. Every change carries some risk, and you don’t want to break things each time you ship to production. Before switching from one system to another, make small iterative adjustments: rename things, extract methods, add some inversion of control.

Most of the time, working with legacy implies refactoring things first.

Keep in mind that refactoring never ends, so you’d better be patient! Take your time and favor baby steps over rushing and breaking things.

This topic is already covered by well-known authors such as Martin Fowler and Michael Feathers.
I strongly believe that even if you’re running out of time, a small refactoring is always useful. It contributes to making a better product; maybe not for you, but surely for your successors. This is what Robert C. Martin calls the boy scout rule in his best-seller Clean Code.

Linters

A linter statically analyzes your code and outputs warnings/errors about potential bugs, coding conventions, security issues, etc. Android Studio ships with its own linter: lint. A simple ./gradlew lint will print results to the console and create HTML and XML reports.

A lot of options are available, but there are two I find particularly useful:

baseline file("lint-baseline.xml")

After you create the baseline, lint only reports newly introduced warnings, not the pre-existing ones recorded in the baseline file.

warningsAsErrors true

Makes lint treat all warnings as errors.

Combined, they are a secret weapon to freeze the codebase and prevent new bugs from being introduced. If you find warningsAsErrors too restrictive, you can instead tweak the severity of each issue in the lint.xml file.
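A build.gradle fragment combining the two options might look like this (a sketch against the lintOptions DSL of the time):

```groovy
// build.gradle (module) — illustrative fragment
android {
    lintOptions {
        // Record the current warnings once; only new ones are reported
        baseline file("lint-baseline.xml")
        // Any newly introduced warning breaks the build
        warningsAsErrors true
    }
}
```

The first `./gradlew lint` run writes lint-baseline.xml; commit it so the whole team shares the same frozen state.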

Change the default severity of issues in the lint configuration file

Setting abortOnError false in lintOptions is a bad practice for release builds. It mutes all lint errors and lets you generate a release build without failing. Turning it on again might reveal problems, and you should definitely take the time to fix them. If too many errors pop up and you don’t have time, temporarily lower their severity to informational in the lint.xml file. That will let you build a release and fix the issues one by one later.
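For example, a lint.xml demoting one noisy issue (the issue id shown is just an example) could look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<lint>
    <!-- Temporarily demote a noisy check; fix the occurrences one by one later -->
    <issue id="HardcodedText" severity="informational" />
</lint>
```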

Continuous integration

Who doesn’t love CI 😍? Working with legacy code implies a lot of small modifications, and having a system that ensures everything remains consistent each time the codebase changes is priceless.
Now that our tests are stabilized, we need to properly configure the CI.

At Appaloosa we use CircleCI for our Android apps. A simple yet powerful workflow looks like this:

This simple workflow is run each time new code is pushed. (9+ mins)

Jobs stacked vertically run in parallel. To iterate quickly, it is important to fail fast, so reducing our workflow time by parallelizing jobs was a simple but huge improvement.

Our previous single-job workflow (time enough to make coffee ☕️) was this one. (18+ mins)

Instrumented tests are configured to run on physical devices using the awesome Firebase Test Lab. Devices were selected using the method described above.

Building and testing all variants of your project is also important. As release APKs are run through ProGuard and signed, their compilation process differs from the debug one: a release build may fail where the debug version won’t.

Pair-mob programming

Working with legacy code often means working with code that is hard to understand. To get the whole picture, you need to dig, and the mental effort required to jump through class hierarchies and files without losing yourself can be exhausting.

Ask someone to work with you at that point. New ideas will emerge and you will get a boost of motivation to tackle problems. You can even ask your whole team and organize mob-programming sessions; that way you make sure to spread the knowledge.

Comments (but not everywhere)

In most situations, I believe that good code doesn’t need comments and that a test can replace many comments. If SOLID principles are respected, you can read code just as you read a book.

There are exceptions though:

  • Business code is full of subtleties that must be explained. In the mobile app world we try to enforce as many rules as we can in the external APIs we call; still, there is always some logic left to be implemented by the client.
    If you stumble across code that is well written but hard to get the first time, add a comment!
  • Technical/voodoo-magic code that requires some specific knowledge.
  • Helpers and utility classes are meant to be used everywhere in your project, and sometimes even in multiple projects as libraries. These public interfaces need to be perfectly documented. Android Studio understands Javadoc, so take advantage of it!
Quick documentation view in Android Studio
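As an illustration, here is a hypothetical utility method documented with Javadoc; Android Studio surfaces exactly this text in the Quick Documentation popup:

```java
// Hypothetical utility class — the point is the Javadoc on its public API.
final class Texts {
    private Texts() {}

    /**
     * Truncates {@code text} to at most {@code maxLength} characters,
     * appending an ellipsis when something was cut off.
     *
     * @param text      the input, must not be null
     * @param maxLength maximum length of the returned string, at least 1
     * @return the possibly truncated text
     */
    public static String ellipsize(String text, int maxLength) {
        if (text.length() <= maxLength) {
            return text;
        }
        // Reserve one character for the ellipsis itself
        return text.substring(0, Math.max(0, maxLength - 1)) + "…";
    }
}
```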

minSdkVersion

Setting minSdkVersion has a lot of consequences. As your app grows, you must ensure it runs correctly on older versions of Android. Google helps us here and provides compatibility libraries (previously the Support Library, now AndroidX) to avoid boilerplate code. Still, a low minSdkVersion is hard to maintain and to test.

Ask yourself from time to time whether your minSdkVersion is still relevant and whether you should increase it. A good indicator (if you already have analytics about your app usage) is to check the most used devices and versions.

Thanks

Thanks to Romain Mouton, Guillaume Sévaux and the Appaloosa team for the reviews 🤘

You should subscribe to our blog to be notified whenever a new article is available!

This article was written by Benjamin Orsini of Appaloosa’s dev team.

Ideas

In the next articles, we may talk about:

  • Introducing Kotlin in a Java codebase
  • Tuning Proguard
  • Sharing code strategies
  • Adding feature flipping
  • Improving crash reports
  • Changing your app architecture, worth it or not ?
  • Adding tests to a class when it is tightly coupled with Android
  • Migrating from one library to another
  • Upgrading a library or Gradle plugin
