Performance improvement efforts — the dreaded side of software development

This is so slow and sluggish! Users have reported slowness when they tried to retrieve the latest from the news feed! The app is experiencing delays with showing real-time data! Usually it’s OK, but the page starts lagging around noon, probably because that is when most of the users are using it! We are losing money, orders take more time to process than usual! This worked perfectly before the last app update!

Users are at your door!

If you are, or were at any point in your career, assigned to address comments like these, you’ve probably experienced the performance improvement hell week. Or month. Or maybe even longer. While these comments rarely provide measurable information that is useful for your investigation, they are not to be ignored. Users, subjective as their preferences may be, will mostly provide objective insight into the performance characteristics of your app. That insight should always be taken into consideration.

What is this article about, anyway? Take a look at the following two scenarios.

Scenario 1:

  • You have a reasonable amount of time and budget for whatever you need to do.
  • You know exactly what you need to do and where the problem lies.
  • Client/owner is understanding, excited, and patiently waiting for the results.
  • Users are either not aware of it or are understanding about the process required to do this.
  • Everyone is super excited when it’s finally delivered and praises come from all corners of management and user realms.

Scenario 2:

  • Not Scenario 1
  • You are pressed on time and budget because you (or someone else) built and spent money on something that is not working well enough.
  • You don’t even know where to start troubleshooting.
  • Client/owner is pressuring you to make the service/app/page work faster ASAP.
  • Users are not happy because the service/app/page is slow and they are threatening to tweet about it.
  • An occasional “great work!” here and there comes your way after it gets delivered, but it is usually followed by the afterthought “it should have worked like this in the first place!”.

Scenario 1 might look like a scenario for building new features, and it is, but it can happen for performance improvement efforts as well. These are called planned efforts for refactoring and performance improvement. They are not that rare, and they can be seen in startup projects more often than not, because cutting corners is what startups often do (I wanted to say occasionally, but we all know that is not true) to get to the targeted market faster. That is okay, as long as you plan (and communicate) the refactoring, scaling, and performance improvement efforts. As long as they are a part of your project roadmap, you should be able to address them through continuous delivery and improvement. If you don’t, you can easily fall within the Scenario 2 boundaries.

However, this article is not about what happens in the boundaries of scenario 1. There are multiple articles out there about it. They get enough praise as it is. This is an article written based on personal experience gained by working on at least a dozen performance improvement efforts (not counting the planned ones) during my career as a software developer. This is an article about the boundaries of scenario 2, when the pressure is on, when the damage has already been done, when miracles are expected.

I too once worked on new features

This is a two-part article that will focus on the following:

  • Factors to consider when planning a performance improvement effort (part 1)
  • My performance improvement checklist, the steps to consider when doing performance improvements (part 1)
  • A real example of a performance improvement effort: the problem and the work to resolve it (part 2)

Planning factors for a performance improvement effort

Time is money and everything is priority 1!

There are a lot of factors to take into consideration when planning a performance improvement effort. After all, what client/boss/product owner likes to hear that you will be including performance improvement tasks in your sprint and/or roadmap? To fix or refactor something that has already been built?

I’m not going to go through all possible planning factors here, but I will mention two of them and explain how most of the other factors closely or loosely relate to these two. And let’s be honest, most clients/bosses/product owners don’t care about the other factors.

Expenses

Whether it’s manpower, software, hardware, or other types of resources, it all translates to money. Hence, expenses. A lot of the other planning factors relate to this one. For example, one of those factors would be a new architectural design that should improve performance (either for a new app or a refactored old one). It’s tied to the technologies that will be used to build the software components, skills coaching (if required), the hardware and software resources needed, etc. All of this relates to the expenses factor because, in reality (which we, the developers, sometimes don’t take into consideration), most of the mentioned things cost money in one way or another. And that matters.

Time to market

Sometimes it’s not even expenses that are the key factor in performance improvement planning. Sometimes performance issues need to be fixed ASAP, no matter the expense, because the product is losing money. Or worse, losing users.

Performance improvement checklist

Over the years, mostly by reviewing my own bad experiences, as well as feeding off of the pain and suffering of colleagues, I have devised some sort of checklist-ish guidelines that I try to keep in mind when I’m involved in performance improvement efforts. They have helped me get through a couple of these efforts with less stress than usual. I’m not sure they will help in 100% of cases, but so far, they have helped immensely. Also, it’s never a bad thing to have some sort of a checklist when doing things like these.

  • Take time to plan the performance improvement efforts

Even when the time to market factor is extremely important, it is very beneficial to plan out the effort before tackling the issues head on. This includes things like involving the people who built the low-performing functionality in the first place and determining how many resources you will need for researching, investigating, and delivering the solution. I am aware that at the first sign of big trouble the fight-or-flight instinct kicks in and you might want to start investigating or doing something immediately, but it’s more beneficial to take some time and plan out the effort.

  • Take time to design a proper solution

If the time to market factor is more important than the rest of the factors, there is not much you can do at this point. Plan it out, investigate, fix the issues as soon as possible, any way you know how, and then come back to this step. Sometimes performance issues are small and the fixes are minor tweaks around a solid code base, so it might be a waste of time to design a “better” solution. But if the issue is straining a part of your architecture, then it’s better to invest time in designing a proper solution, so that you not only fix the performance issue but also have enough room to improve it gradually by planning performance improvements into your product roadmap. A couple of the points below relate closely to this one, but I wanted to keep them separate.

  • If and when possible, take comparable metrics and diagnostic data prior to starting performance improvement efforts

This one is very useful and you should do it whenever possible. Although collecting measurable data is very beneficial for reports, bragging, and writing articles about it, there is a much better reason to do it: data sells. When you have measurable and comparable data, it is much easier to sell future performance improvement efforts and put them on the product roadmap.
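A baseline does not have to be fancy to be useful. As a minimal sketch (the helper name, the run count, and the placeholder workload are all hypothetical, not from the article), you can wrap the slow operation in a timer, collect repeated samples, and keep the summary around for before/after comparison:

```python
import statistics
import time

def measure_latency(operation, runs=50):
    """Time repeated calls to `operation` and return summary stats in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "min_ms": samples[0],
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(len(samples) * 0.95) - 1],  # approximate 95th percentile
        "max_ms": samples[-1],
    }

# Hypothetical stand-in for the slow operation you are about to improve;
# record this result *before* touching the code.
baseline = measure_latency(lambda: sum(i * i for i in range(10_000)))
print(baseline)
```

Run the exact same measurement after each deliverable chunk, and the two dictionaries become the comparable numbers that sell the next iteration.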

  • Break down your performance improvement efforts into smaller deliverable chunks

This one is closely tied to the ones before, as smaller deliverable chunks of performance improvements, together with comparable metrics, can sell the future deliverables that get you where you want to be with the performance of the targeted functionality. Believe me, you don’t want to be that guy who has spent weeks or even months improving the performance of the app/functionality with nothing to show for it when called upon. And you will be called upon to show something or give a status update, and probably to answer uneasy questions like “What’s taking so long?” and “Why are we spending so much money on this?”. To avoid all of this, deliver as often as possible, with comparable statistics and metrics, if the conditions allow it.

  • It’s better to build a proper solution that is able to scale, even if the first iterations don’t bring much performance improvement, or any at all

This one might be a head-scratcher, but let me explain. Of course, when the time to market factor is a high priority, this might not be applicable. But when it’s not, and when the expenses factor allows it, you should try to design and build a proper solution. By “proper solution” I mean a solution that gives you enough potential to further improve its performance and scale it as needed. With a roll-out in planned iterations and metrics data to back it up, you should not have many problems selling this to your supervisors. It certainly beats trying to improve the existing solution if that solution has limitations.

Consider the following example: a pretty large stored procedure (thousands of lines of code) that goes through a bunch of tables and a million rows to get the needed data is performing poorly. Your first response might be to add indexes where they are needed to speed things up. That might be a good solution to hot-fix the issue, but it’s usually not a long-term solution, as a million rows can become two, five, or ten million rows over time. If the time to market priority is high, this is fine, but plan for the long run. It is better to refactor the stored procedure or rethink your read models altogether.
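To make the hot-fix part of that example concrete, here is a small, self-contained sketch using SQLite from Python (the `orders` table, the column names, and the index name are all made up for illustration; the article's scenario involves a real stored procedure, not SQLite). The query plan shows the full-table scan turning into an index lookup once the index on the filtered column exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
# Populate a hypothetical table with enough rows to make scanning visible.
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(100_000)],
)

query = "SELECT COUNT(*), SUM(total) FROM orders WHERE customer_id = ?"

# Without an index, the plan is a scan over the whole table.
before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(before)

# The hot-fix: an index on the filtered column turns the scan into a seek.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(after)
```

The index buys you time, exactly as the paragraph above says; it does not remove the underlying cost of walking an ever-growing table, which is why the read-model rethink is the long-term answer.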

  • Communicate your endeavours as much as possible

All your roadblocks, challenges, milestones, results, metrics, deployments: all of it. Everything should be transparent to your supervisors and the rest of the team. And not just for your own sake. When you are facing challenges and roadblocks, the support team will have information from your reports that they can use to calm down the users. Your metrics and deployment data are valuable to the marketing team so they can update the users on new developments, because most users are more likely to stay faithful if they stay in the loop. The list of examples goes on and on. As a developer bordering on the introverted side, I hate to say this, but communication is key. If there is a hard lesson that I’ve learned over the years, it’s that your solution is only as good as how well you communicated about it. Because, let’s face it, no one from the C-level management, no clients or product owners will go through your code to commend you on a job well done. To tell you the truth, I’m still somewhat bad at this, which is why it’s here, on this list. So, to sum it up: communicate your endeavours. And back them up with data.

Conclusion

Let’s sum everything up, for this first part.

  • Do anything you can to get the performance improvement effort on your product roadmap.
  • If that is not possible, plan your performance improvement efforts, get the needed metrics, and communicate your efforts transparently.
  • After all the fuss has died down a bit, get the performance improvement efforts on your product roadmap.

In the next part, we will go through a real-world example, just to put some things into perspective. And the perspective is: staying flexible while going through the performance improvement checklist.

Emin Laletovic
Ministry of Programming — Technology

Software developer @ Ministry of Programming, amateur climber, Malazan geek