How to keep your code maintainable

Riina Pakarinen
ELCA IT
Dec 13, 2022 · 18 min read

Have you ever seen old code that looked almost rusted? Where does all that technical debt come from? I am convinced that code is always written with good intent to fulfill a task, like implementing a business rule or improving performance. But as time passes, it can get harder to maintain. Based on my experience, I have put together a list of points to consider to not only write maintainable code but also keep it that way.

There are already many articles about how to keep code maintainable and how to refactor, for example on refactoring legacy code, on clean code (how, what and why), and more general lists like Joel’s list and the cloud-oriented Twelve-Factor App. They also discuss what makes code maintainable, for example testability, readability, or simplicity. I decided to take a wider look and dig a little deeper into how to keep code maintainable.

Text: “Mend the roof before it rains”
Photo by Brett Jordan on Unsplash

Let’s start with the obvious question first: Why should I write maintainable code? It takes more time to write than quick and dirty code, right? It is good to remember that code is read more often than it is written. It is also read by humans, not only machines. Even when writers read their own code after half a year, it might be hard to understand. Also, the person reading the code might be in a hurry, for example when analyzing critical issues in production. When the code is not maintainable, each analysis and change can take more time than the previous one, and more bugs can be introduced unintentionally.

I came up with the following list of aspects, which I use to keep code maintainable and which I pass on in the code quality courses at ELCA.

1. Dare to throw away code

Make an explicit decision to either refactor or rewrite the code.

Before automatically starting to refactor an old project to reduce the technical debt, it is good to ask whether it would be more efficient to throw the old code away and write it from scratch. That may sound provocative, as time and money were spent writing the code. But if you measure the worth of the code by correctness and maintainability, the value of old code might not be that high. Nowadays, code like simple CRUD services, DTO objects, and WSDL clients can easily be generated. Another example is logic that grew piece by piece and became hard to follow: once the complete logic is finally known, it might be simpler to rewrite it from the clear, known rules than to try to update the existing implementation. Another opportunity to throw away code is to identify code that is no longer used. When unused code and references are removed, the build process may get faster and the size of the delivery may get smaller.

Dare also to question decisions that were made a long time ago, including architectural ones. As time passes, the requirements and the technological landscape may have changed enough to support a different kind of solution. For example, it could be possible to use a workflow engine or form engine instead of a custom implementation, or the other way around: an out-of-the-box (OOTB) component is outdated and can be replaced with your own implementation. Pay attention to when the last possible moment to decide on a technology or solution is, and when it is possible to revisit that decision.

2. Keep your dependencies maintainable and avoid vendor lock-in

Know your dependencies.

When you have decided to maintain your current code, have a look at your dependencies. An application can depend on various components, for example libraries, frameworks, technology, and interfaces. Let’s have a closer look at the challenges of library and framework maintenance. A library is a component written by someone else, which you use for a specific task. For example, you can use Log4Net for logging or Chart.js for charts. When comparing software development with building houses, getting a library is like buying furniture: you can select from what exists, or if nothing fits, build your own. A library fills holes in your application to make it complete. A framework, on the other hand, is the thing with the holes, which you fill with your code or with libraries. A framework gives you a kind of template for how to structure your code and makes certain decisions for you, like transaction handling or the format of requests and responses. Examples of frameworks are Angular, Vue, and Entity Framework.

How to manage library dependencies

When using a reference, make sure you have your own copy of the package, for example in Artifactory or a CDN, as packages can also disappear (see the left-pad incident). As soon as you add a dependency, remember that you also add that dependency’s own references, known as “transitive dependencies”. Make sure all these dependencies have no known security holes, by scanning them for example with JFrog Xray. As direct references may depend on a certain version of another reference, try to make sure you only need one version of each package, to avoid side effects when multiple versions of the same library are loaded simultaneously in your application. When planning a version update, be aware of the chain effect that might occur. For example, when upgrading a component, it might not support your version of the framework, so the framework also needs an upgrade, which in turn might force other older components to be upgraded too.

How to keep the framework maintainable

When choosing a framework, choose one which is not too new and not too old. Frameworks that are too new might have teething problems, might not yet be feature-complete, and might change radically in the next version, as happened with Angular. Frameworks that are too mature might be fading out. Even if they are currently widely in use, if the time horizon of the product is 5–10 years, the framework’s usage, its browser support, or the available know-how might be decreasing by then, as happened with Silverlight and GWT. Check the planned lifecycle of the chosen framework to verify that it is supposed to still be supported at least until your product goes live.

The same goes for upgrading the framework: don’t upgrade too early, as the N.0 version might have issues which will only be fixed in the next minor version N.1. But if you wait too long, a straightforward upgrade might not be possible and you’ll need to do the upgrade in more than one step. We usually plan to upgrade our framework every one or two years. If you plan to do the upgrade for many projects at the same time, to profit from reusing code and not only services, you might benefit from a tool that converts code from the old technology to the new one, for example from WCF services to Web API services.

Sign up for the security alerts of your framework, for example for ASP.NET and .NET Core, to be able to react quickly when an upgrade is needed. Also follow the lifecycle and release notes of your framework, for example for .NET, to become aware of deprecated features in time.

Below is an example of an upgrade cycle when using a framework. You can do a proof of concept to verify that the planned upgrade does not contain breaking changes. Remember to plan time for resolving merge conflicts when implementing business features in parallel to the upgrade.

Example of framework cycle

Should I write my own code or use libraries and frameworks?

Before adding yet another library, verify whether it is really needed. Even though the tendency is to reuse existing code, sometimes it is better to write a short piece of code yourself, which you then understand fully. Also, when planning to use a component library and you need customization, verify early that it can be done. If customization is complicated, the adjustments might take the same amount of time as writing your own component. And if you write your own framework to be independent of the above-mentioned issues, be prepared to maintain it, add needed features, document it, and test it to keep it as usable as public and commercial frameworks.

Avoid vendor lock-in

Vendor lock-in means that the application is dependent on a vendor of products like components, libraries, and frameworks, and it is impossible or very expensive to change the provider of these components. This can happen, for example, when using a specific feature of a database, reporting software, or a cloud provider. If these features are removed from the product, the license prices increase, or the vendor goes out of business, then the application suffers the consequences of vendor lock-in. Remember that you can also get locked into your framework. So, how can you avoid it?

  • Try to avoid using proprietary, publicly undocumented data formats, as they might change between versions.
  • Think carefully about what you commit yourself to before using very specific features of the framework you are using.
  • Centralize the usage of each library, so that it is easy to replace if needed.
  • Create a vendor-independent interface, which stays the same even when the vendor is changed (see the sketch after this list).
  • In a cloud environment, consider multi-cloud over single-cloud.
  • Make sure the product you are using supports common ways to import and export data.
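
To make the vendor-independent interface idea concrete, here is a minimal TypeScript sketch. The names (`PdfRenderer`, `AcmePdfClient`) are invented for this example and do not refer to a real product; switching vendors would then mean writing one new adapter instead of touching every caller.

```typescript
// The interface the rest of the application depends on. It contains no
// vendor-specific types, so callers never import the vendor SDK directly.
interface PdfRenderer {
  render(html: string): Promise<Uint8Array>;
}

// A hypothetical vendor SDK, defined here only so the sketch is self-contained.
class AcmePdfClient {
  constructor(private readonly apiKey: string) {}
  async htmlToPdf(html: string): Promise<Uint8Array> {
    // ... the vendor-specific call would happen here ...
    return new Uint8Array();
  }
}

// The only place in the code base that knows about the vendor.
class AcmePdfRenderer implements PdfRenderer {
  constructor(private readonly client: AcmePdfClient) {}
  render(html: string): Promise<Uint8Array> {
    return this.client.htmlToPdf(html);
  }
}

// Application code only sees the vendor-independent interface.
async function createInvoice(renderer: PdfRenderer, html: string): Promise<Uint8Array> {
  return renderer.render(html);
}
```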

3. Manage complexity

Measure code quality and maintain code quality during development.

In addition to managing dependencies, your code also needs its complexity managed. Complex code is hard to understand and hard to change. There are plenty of ways of managing complexity, and surely everyone has their favorite among the well-known principles like DRY, KISS, YAGNI, SOLID, clean code, style guides, etc. Choose the practices that suit your project, because, like a house, code also needs housekeeping from time to time. Below are a few things to consider:

Is short code always the best code? I think not, because:

  • clear (and therefore longer) names are more readable than abbreviated ones
  • good error handling might not be short, as different cases may need to be handled differently. I have heard that, on average, the happy path makes up about 20% of the code and work, and the remaining 80% goes into error handling.
  • doing too much on one line increases the chance of errors and decreases readability. For example, when a null-pointer exception occurs on a long line, the stack trace does not show what exactly on that line was null (see the sketch after this list)
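
To illustrate the last point, here is a small TypeScript sketch (the `Order` types and the rule are invented for the example): when everything is chained on one line, a runtime error only points at that single line, while the step-by-step version shows exactly which value was missing.

```typescript
// Illustrative types, made up for this example.
interface Address { city?: { name: string } }
interface Customer { address?: Address }
interface Order { customer?: Customer }

function cityNameOneLiner(order: Order): string {
  // If this line throws "Cannot read properties of undefined", the stack trace
  // points here, but not at which of the three accesses was actually missing.
  return order.customer!.address!.city!.name.toUpperCase();
}

function cityNameStepByStep(order: Order): string {
  // Each step has its own line, so a failure (or a debugger) shows exactly
  // which value was missing, and each case can get its own error handling.
  const customer = order.customer;
  if (!customer) throw new Error("Order has no customer");
  const address = customer.address;
  if (!address) throw new Error("Customer has no address");
  const city = address.city;
  if (!city) throw new Error("Address has no city");
  return city.name.toUpperCase();
}
```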

Does style matter? What about code consistency?

I think it should not be easily visible from the code who wrote it. So how does style impact this? For example, we once had a code review of a short function containing a loop. In the review feedback we focused on the style and consistency of the brackets, the naming, and other style-related issues, and none of us spotted the actual bug, which lay in the for-statement condition. Inconsistent code helps bugs to hide, as readers get distracted by other issues. When the rules in SonarQube are configured correctly and are followed, much of the work is already done. In my projects, we use SonarQube as a guideline and, in addition, we agree on consistency rules for the cross-cutting concerns, like how to handle errors and what kind of return values the services can have (null, empty, error code, HTTP code, etc.), as sketched below.
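
As an illustration of such a consistency rule, here is a hedged TypeScript sketch of one possible convention for service return values; the `Result` shape and the service names are made up for this example and are not a prescription.

```typescript
// Team convention (example): every service returns the same Result shape
// instead of a mix of null, undefined, exceptions, and ad-hoc error codes.
type Result<T> =
  | { ok: true; value: T }
  | { ok: false; errorCode: string; message: string };

interface Customer { id: string; name: string }

// Hypothetical service following the convention.
async function findCustomer(id: string): Promise<Result<Customer>> {
  if (id.trim() === "") {
    return { ok: false, errorCode: "CUSTOMER_ID_EMPTY", message: "Customer id must not be empty" };
  }
  // ... the lookup would happen here; a found customer is returned like this:
  return { ok: true, value: { id, name: "Example Customer" } };
}

// Callers handle both cases the same way for every service.
async function printCustomerName(id: string): Promise<void> {
  const result = await findCustomer(id);
  if (result.ok) {
    console.log(result.value.name);
  } else {
    console.error(`${result.errorCode}: ${result.message}`);
  }
}
```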

Best practices: KISS, YAGNI, SOLID, MVP, POC, DRY
Examples of best practices

Debugging should not be the first or only way of analyzing problems

When analyzing a program’s behavior, the usual way is to start it in debug mode. But think about analyzing the behavior of an application where the only output you get is a log file. There is no debugger, no F12 developer tools, and no SQL queries. Just the log. Scary, isn’t it? When you focus only on coding for a long time and are far away from the actual support work, it is easy to forget what kind of output from an application would be useful when analyzing issues in production. Also, in the era of cloud, microservices, and message queues, debugging is getting more complicated and should be neither the only nor the first way of resolving issues. So one way of keeping code maintainable is to write a readable log.

The log should

  • be the output of your application instead of “just another text file”
  • contain unique log messages
  • assign each message to a specific level, for example debug, info, warning, or error
  • ideally allow the log level to be changed through configuration without restarting the application
  • contain unique error ids, which are also shown to the user, so a screenshot from a user can be matched with the log
  • contain a session id or user id, to be able to correlate the log entries of different threads
  • still work even when a function is called hundreds of times as part of a batch process
  • be structured, so it can be processed and shown in a table-like view, for example with Splunk or Kibana; the structure can simply be the same fields in the same order, or it can be a JSON object (see the sketch after these lists)

The log should not

  • contain sensitive information, like session cookies or personal information about the user (GDPR)
  • impact the performance too much
  • contain so much information that it becomes hard to get an overview
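
The following TypeScript sketch ties these points together with a structured, JSON-based log entry. The field names and the example values are invented; a real project would typically use an existing logging library.

```typescript
// A minimal sketch of a structured log entry following the points above.
type LogLevel = "debug" | "info" | "warn" | "error";

interface LogEntry {
  timestamp: string;   // ISO timestamp, same fields in the same order in every entry
  level: LogLevel;
  errorId?: string;    // unique id, also shown to the user on error screens
  sessionId: string;   // lets us correlate entries across threads and services
  message: string;     // unique, human-readable message without personal data
}

function log(level: LogLevel, sessionId: string, message: string, errorId?: string): void {
  const entry: LogEntry = {
    timestamp: new Date().toISOString(),
    level,
    errorId,
    sessionId,
    message,
  };
  // One JSON object per line is easy for tools like Splunk or Kibana to parse.
  console.log(JSON.stringify(entry));
}

// Usage: the same error id appears in the log and on the user's screen.
log("error", "session-4711", "Invoice export failed: template not found", "ERR-INV-0042");
```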

The log can be visualized in different ways, as shown below:

Example log visualization

In addition to debugging and logs, a regularly run test harness helps to reduce the number of issues that occur in production. Adapting and extending the test suite should be part of every story and bug fix. This way, at the latest when a bug occurs, a new test is added to recognize it, and the set of regression tests grows. Test-driven development and behavior-driven development can be used as methods to build up the needed test coverage.

Be aware of tool caveats

When tool measurements like SonarQube and the JFrog security scan show a green light, is my code then 100% maintainable? Not necessarily. It would be easy to just trust the green light from Sonar and the good weather from Jenkins. Tools giving a good result is a good start, but the code still needs a proper review. For example, even when the number of tests goes up and they cover all the code paths, a tool cannot check whether the input values make sense from the business point of view. And if they don’t, the tests do not prove that the application works with real business cases. To correct that, the input and output values, or the tests themselves, could be defined by the business analysts. Behavior-driven development with tools like Cucumber can support this. However, be careful with tools that generate production code based on concrete tests; the result might not be as generic or object-oriented as you might expect. The sketch below illustrates the idea of business-defined test cases.
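
As a hedged illustration of business-defined test cases, here is a small TypeScript sketch that does not use Cucumber itself: the cases are plain data written in business terms that an analyst could review or supply, and the test simply iterates over them. The discount rule and the numbers are invented for this example.

```typescript
import { strict as assert } from "node:assert";

// Hypothetical business rule under test.
function discountPercent(orderTotalChf: number, isReturningCustomer: boolean): number {
  if (orderTotalChf >= 1000) return isReturningCustomer ? 10 : 5;
  if (orderTotalChf >= 100) return isReturningCustomer ? 5 : 0;
  return 0;
}

// Cases written in business terms; they could be maintained together with the BAs.
const businessCases = [
  { description: "small order, new customer",        total: 50,   returning: false, expected: 0 },
  { description: "medium order, returning customer", total: 250,  returning: true,  expected: 5 },
  { description: "large order, returning customer",  total: 1200, returning: true,  expected: 10 },
];

for (const c of businessCases) {
  assert.equal(discountPercent(c.total, c.returning), c.expected, c.description);
}
console.log("All business cases passed");
```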

Be aware of Tools Caveats (photo by Cesar Carlevarino Aragon on Unsplash)

IDEs also have tools and commands for cleaning up code, for example “Remove Unused References” and “Code Cleanup” in Visual Studio. These can be convenient when they are configured explicitly to suit your project and when you know what to be aware of when using them. For example, they cannot recognize the usage of reflection or runtime binding. So when a file references libraries that are bound at runtime based on configuration, the using statements would be considered unused and the clean-up tool would remove them. It is indeed useful to remove unneeded dependencies, as this can make your build faster and reduce the size of the application, but remember to take care of the runtime dependencies. The sketch below shows a similar pitfall on the TypeScript side.
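
A similar pitfall exists outside of .NET as well; here is a hedged TypeScript sketch of runtime binding that static clean-up tools cannot follow. The exporter module paths are hypothetical, so this is a multi-file sketch rather than a standalone snippet.

```typescript
// The exporter module is resolved from configuration at runtime, so nothing
// references it statically. A tool that flags "unused" modules or dependencies
// based on static analysis could wrongly consider the exporters removable.
async function loadExporter(format: string): Promise<(report: string) => void> {
  // Resolved only at runtime, e.g. "./exporters/pdf" or "./exporters/csv".
  const mod = await import(`./exporters/${format}`);
  return mod.exportReport;
}

async function main(): Promise<void> {
  const format = process.env.REPORT_FORMAT ?? "pdf";
  const exportReport = await loadExporter(format);
  exportReport("monthly-summary");
}

main();
```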

IDEs also offer refactoring tools such as extract interface and extract method. Even when refactoring is made easy, it is still good to consider what exactly should be refactored and how the new methods or objects should be named.

When using tools like SonarQube to clean up your code, configure the project so that it produces as few false positives as possible. For example, it is possible to exclude libraries that are only meant for a proof of concept and will be deleted shortly, to exclude test code from duplicated-code checks, as tests are written independently of each other, or to add your abbreviations to the list of allowed words that may be written in capital letters. The team is more motivated to fix Sonar warnings when no time is needed for writing justifications and NOSONAR tags in the code. When using security scanners, analyze the results before blindly removing all the flagged libraries. I have seen scanners report vulnerabilities that were already fixed in the version in use, or threats in libraries that were not even deployed with the final version.

4. Keep the documentation up-to-date

Link the business description with the release.

When the development is done and the code is ready for production, also make a snapshot of your business description. Documentation of the application is another topic and deserves its own article; here just briefly from the maintainability point of view: have an up-to-date description of the functionality in one place, so that it is easy to check what the code and tests are supposed to do. The format does not matter; it can be, for example, up-to-date user stories or detailed specifications, as long as it is clear which document version is coupled with which release. I have had good experience with snapshotting the delivery together with the documentation used, where the use cases were updated with the newest changes, so there was no need to read a user story and its change requests in the right order to get the latest information.

5. Show the business value and understand the risk of the technical tasks

Explain the business value and risks of the maintenance tasks.

When you have recognized how to reduce the technical debt, get ready to explain the benefit in non-technical terms, like improved performance, security, or maintainability.

Sometimes technical improvements like refactoring and framework upgrades get postponed in projects, as it might be hard to show their business value. I think technical tasks like refactoring and technology upgrades should be separate tasks from the business features, because they have business value of their own. For example, they maintain security, as an out-of-support framework no longer receives security updates. Sometimes a proof of concept can show that a newer version of the technology performs better, or that there are new out-of-the-box features which speed up development. I have, for example, used a modern look and feel as an argument to get a certain UI component. It is important to learn to sell technical improvements to the project leader and the project sponsor early enough, and not after a security breach or after a browser stops supporting the application. I myself had to learn that “new and interesting technology” is not a good enough argument.

Business value
 Faster evolution
 More secure
 Better quality
 Fewer bugs
 Modern look and feel
 OOTB features
Examples of business value

In addition to the selling arguments, it is equally important to recognize which kinds of risks and threats are introduced with the upgrade. For example: is an older browser no longer supported? Do we know the technology well enough to estimate the cost? Why should we do exactly this upgrade, or can we wait for the next one? Are we sure the version does not contain breaking changes?

6. Maintain with care

Decide on a suitable technical improvement.

After getting the GO for the technical improvements, it is time to evaluate which kind of maintenance is suitable for the current project. Below are examples of technical improvements to keep code maintainable: refactoring, upgrading, and restructuring. Let’s have a closer look at these.

Refactoring

Refactoring should not change the behavior of the application. This can be verified with unit, integration, or E2E tests. I recommend separating the refactoring into its own commits, or ideally its own pull requests. Then it is clear to the code reviewer what to review and whether the tests and logic are expected to change or not. A small sketch of centralizing duplicated code follows the examples below.

Examples:

  • Removing duplicates
  • Centralizing code
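
Here is a minimal TypeScript sketch of what removing duplicates and centralizing code can look like; the validation rule and the function names are invented for this example.

```typescript
// Before: the same trimming/empty check is repeated in two services.
function createCustomerBefore(name: string): void {
  const trimmed = name.trim();
  if (trimmed === "") throw new Error("Name must not be empty");
  // ... save customer ...
}
function createSupplierBefore(name: string): void {
  const trimmed = name.trim();
  if (trimmed === "") throw new Error("Name must not be empty");
  // ... save supplier ...
}

// After: the rule lives in one place, so a change (e.g. adding a maximum
// length) only has to be made and tested once.
function requireNonEmptyName(name: string): string {
  const trimmed = name.trim();
  if (trimmed === "") throw new Error("Name must not be empty");
  return trimmed;
}
function createCustomer(name: string): void {
  const validName = requireNonEmptyName(name);
  // ... save customer using validName ...
}
function createSupplier(name: string): void {
  const validName = requireNonEmptyName(name);
  // ... save supplier using validName ...
}
```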

Updating/Changing the technology

When planning an upgrade of a larger code base, check whether converters or code generators already exist, or consider writing your own that covers at least part of the affected code, for example when converting WCF services to Web API.

Examples:

  • Migrating from WCF services to Web API
  • Preparing for migration to .NET 7
  • Switching from Oracle to PostgreSQL
  • Changing from NHibernate to Dapper

Summary of the possibilities for technical maintenance

Restructuring

When restructuring in a way that impacts the users of your interface, consider wrapping the old part in an interface that already has the new structure but still uses the old code behind it. Then, step by step, restructure or rewrite the old code in a modern way. The clients can start using the new interface as early as possible and won’t need to change again. Offer the old and the new interface in parallel for a while, with version information, so the users of the interface are less dependent on your release schedule. A small sketch of such a wrapper follows below.
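
To make the wrapping idea more concrete, here is a hedged TypeScript sketch; the legacy service, the new interface, and the field names are invented for this example.

```typescript
// Old code, kept as-is for now: one generic call with a loosely typed payload.
class LegacyCustomerService {
  handleRequest(action: string, payload: Record<string, unknown>): Record<string, unknown> {
    // ... existing, battle-tested logic ...
    return { status: "OK", action, payload };
  }
}

// The new, restructured interface that clients should program against today.
interface CustomerApiV2 {
  renameCustomer(customerId: string, newName: string): Promise<void>;
}

// The wrapper: new structure on the outside, old code on the inside. It can be
// replaced piece by piece with a real new implementation without the clients noticing.
class CustomerApiV2OverLegacy implements CustomerApiV2 {
  constructor(private readonly legacy: LegacyCustomerService) {}

  async renameCustomer(customerId: string, newName: string): Promise<void> {
    const result = this.legacy.handleRequest("RENAME_CUSTOMER", { customerId, newName });
    if (result.status !== "OK") {
      throw new Error(`Rename failed for customer ${customerId}`);
    }
  }
}
```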

Examples:

  • Small specific services instead of one generic
  • Generic service instead of specific services

Before starting the maintenance task, make sure you have the business rules, test cases, and ideally a running test suite and a performance benchmark in place. It is also good to keep at least one running environment of the old version if you want to compare how it behaved. If your application or service is used by a client or another service, you will probably need to version your services.

Have a timeline for the technical improvements, similar to the one for the usual business stories. Remember the YAGNI principle (you aren’t gonna need it): it is usually recommended not to code something that is not going to be used within the next half year. In SAFe the planning horizon is even shorter, just three months. Remember also that code can be “good enough” when it is working and not planned to be changed; then the refactoring can wait for the next relevant business change. Make an explicit decision about whether the improvements are added incrementally or as a big bang. Incrementally, you get early feedback but need to choose the order of improvements carefully to avoid having an “old way” and a “new way” coexisting, which can cause issues with cross-cutting concerns like session handling, transactions, context, and caching. With a big bang, you can create a clean new code base but need to maintain the old system until the new one is ready.

7. Review your maintainability

Integrate maintainability into your development process.

During development, remember to monitor your maintainability. How do you know whether your code is maintainable or not? Is it possible to measure? As mentioned above, even when there are measurements like 100% test coverage and no security issues, there should be guidelines in addition to the tool results, as they do not cover everything. It is good to define together for the project(s) which criteria are used to measure maintainability. Possibilities are, for example, scenarios, audits, and the definition of done. Besides the statistics from the tools, you can use different “what if” scenarios to evaluate it. For example:

  • What if I create a CRUD service for a new business object: do I need to touch many files, create many files, or just generate code?
  • What if I want to change a validation rule: do I only change configuration, or do I need to redeploy?
  • What if I need to deploy a new version of the application right now: how many clicks do I need for that?
  • What if I need to change the configuration: how much downtime do I need?
  • How short is the “time to first hello world”, meaning how long does it take for a new developer to get the application running on his or her laptop?
  • Does the time to onboard a new person to the project change over time?

If appropriate, have your code audited by someone outside of the project. We use this method for architecture audits, and it is surprising what results can be obtained within only a short time.

Review your maintainability (photo by Scott Graham on Unsplash)

When doing audits and code reviews to identify technical debt, it is good to classify the findings, for example by effort (high/low) and impact (high/low). It might be interesting to start with low-effort, low-impact tasks as they are less risky, but remember to also keep the business value as a criterion.

In addition to that, the maintainability aspect can be covered in the definition of done of the stories. For example:

  • Are all documents brought up to date?
  • Are new tests added?
  • Were new dependencies added, and if yes, can they be justified?

Conclusions

To sum up, I think code maintainability does not depend on just a few silver-bullet best practices. Instead, it is a continuous process consisting of good decisions on multiple levels, from technology choices to upgrade and rewrite decisions.


Riina Pakarinen
ELCA IT

Software architect, GIS expert and trainer at ELCA