The keys to development velocity

Corey Scott
Published in OVO Tech
Aug 31, 2020 · 15 min read

Introduction

In the fast-paced world where we find ourselves, businesses are measured not just by the value they deliver to customers but also by the speed at which they provide it.

For software-based companies, an integral part of that equation is the development velocity of both individual developers and developer teams.

But what is development velocity?

Development Velocity

Firstly, let me offer you my definition.

Development velocity is a measure of the ability to deliver customer value.

Please, go back and re-read the previous sentence. These words were chosen with care.
You will note that it does not say “feature”, and it does not say “code” anywhere.
This is very intentional and extremely important.

Let’s dive deeper into this.

Why doesn’t development velocity mention code?

As coders, it’s easy to think that the answer to all problems is code, typically more code.

This is simply not true.

In fact, adding code often makes the problem worse.

Consider this: for every line of code that a project has, there is a cost.
It has cost someone time to write it.
It has cost someone time to test it.
And it will continue to cost the team to maintain it.

Once we acknowledge that code has a cost, we can contrast the cost of this code with the value that it brings to either the customer or the team.

This cost/value trade-off actually forms the basis of many arguments in software engineering like:

  • YAGNI — you ain’t going to need it
  • Minimalist API — i.e., exporting only the bare minimum
  • Minimal config/options — flexible code is sexy but expensive, unnecessarily so for options that are seldom or never used.
  • Removing unused features and deleting dead code — removing code that is no longer used carries very little risk. Once removed, it no longer costs us anything.
  • Buy vs. Build — It is always tempting to build the “perfect” version of a solution, but finding an existing solution that is “good enough” and is maintained by someone else is frequently cheaper.

All of these arguments are based on the same idea: we should be looking to write and maintain as few lines of code as possible.

Why doesn’t development velocity mention features?

As developers and product owners, we are bombarded with opportunities to add more features, more config options, and more customization.

These features will often even make some people happy.

However, the question we need to keep asking ourselves is: is it worth it?

It might seem harsh, but if you could add a feature that would make 1% of your users happier, is that worth it?
The answer to this question should not be an automatic, yes.
If the development cost is too high, then it might be a no.
If the maintenance cost is too high, then it might be a no.
If the feature causes the system to become slow or unstable, then it is almost certainly a no.

Hopefully, you can see where I am going with this.

When it comes to maintaining and extending our systems, we should be continuously asking ourselves two questions:

  1. Is this feature or code worth the cost to develop and maintain?
  2. Are there any old features or code that no longer provide enough value to keep around?

By spending the time to keep our “work area” clean using the ideas above, we ensure that new work is not inhibited by what came before it.

If the key to development velocity is not blindly adding more code or more features, then what is it?

Keys to Development Velocity

There are six keys to development velocity:

  1. Code Clarity
  2. Trust (or lack of fear)
  3. Code Quality
  4. Automation
  5. Support
  6. Introspection

Let’s explore these.

Code Clarity

Code Clarity is, by far, the most crucial factor when it comes to velocity.

By clarity, I mean the readability and, by extension, the usability of the code itself.

When a programmer can read and easily understand both the intention and implementation of a piece of code, API, or module, then it is easier and faster to work with.

Code clarity is not easy to achieve and does require continuous monitoring and effort.
It is often achieved and maintained by frequent tweaks (aka refactoring).

The freedom to refactor without risk comes from a lack of fear, which is the next item.


Trust (lack of fear)

When we have trust in our system, we are confident that every level, from a single function, to modules, to the system as a whole, does what we intend it to do.

By extension, this trust allows a developer to be confident that they can make changes and not break something else.

Such trust is hard-earned and based on a history of successful changes.

Developers can develop a sense of trust with the following activities:

  • Adding unit tests — these tests serve to document the code’s intention on a small scale. Ensuring that these tests are run often (as part of the build or more) allows the tests themselves to deliver the maximum value.
  • Adding tests to prove bugs — these are special tests (unit or UAT) that are added as a result of a bug report. These tests fail because of the bug and will only pass when the bug has been fixed. Beyond their initial value of documenting and proving the bug fixed, these tests prevent the bug from recurring; and there is nothing worse for a programmer or user than having the same bug more than once.
  • Adding UAT tests — these tests ensure that the customer value (feature) that we promised to deliver is actually being delivered. When both unit and UAT tests are in place, most of the risks associated with developing and deploying new features are reduced to a point where deployment failure is almost always the result of a config problem.
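
To make the “tests to prove bugs” idea concrete, here is a minimal sketch in Go. The function, its off-by-one boundary bug, and all names (`BulkDiscount`, the quantities) are hypothetical; in a real project, the assertion would live in a `_test.go` file using the standard `testing` package.

```go
package main

import "fmt"

// BulkDiscount returns the price of an order in cents.
// Hypothetical bug report: orders of exactly 100 items did not
// receive the bulk discount, because the original check used
// qty > 100 instead of qty >= 100.
func BulkDiscount(qty int, unitPriceCents int) int {
	total := qty * unitPriceCents
	if qty >= 100 { // the fixed boundary condition
		return total * 90 / 100 // 10% off for bulk orders
	}
	return total
}

func main() {
	// This is the "bug test": it failed before the fix and passes
	// after, preventing the bug from recurring.
	if got := BulkDiscount(100, 50); got != 4500 {
		fmt.Println("FAIL: an order of exactly 100 items should be discounted, got", got)
		return
	}
	fmt.Println("bug test passes: the boundary order receives the discount")
}
```

Because the test encodes the exact scenario from the bug report, anyone re-introducing the old `>` comparison would break the build immediately.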

Hopefully, testing is part of your standard development practice, but if this is not the case, I would strongly encourage you to adopt it.

We mentioned earlier that every line of code has a cost associated with it, in both creation and maintenance, and while this is absolutely true for test code as well, test code brings immense value.

The lack of fear that adequate test coverage brings is extraordinarily liberating and empowering, especially when compared with attempting to maintain legacy, buggy, or lousy code.


Code Quality

One of the most significant problems in our industry is the lack of a concrete or codified set of guidelines or metrics that define precisely what makes code good or bad.

In many cases, beauty truly is in the eye of the beholder. Just like physical beauty, folks instinctually believe what they have created is the right or best outcome; after all, who doesn’t think their kids are beautiful?

The problem with this ambiguity is that it necessitates that every programmer, team, and company must define what quality means to them.

My ego is not big enough for me to tell you, “this is the right style.” Instead, I will say this:

  • A team style should be defined
  • It should be enforced by code reviews
  • Where possible, it should be automated with formatters and linters

Yes, I know it sucks to have to write code in a style that is not your own. Sadly, there really is no way around it.

Code that is annoying and clumsy for you to write would be annoying and clumsy for your team to read if you did not follow the team style.

On the flip side, the annoying and clumsy feeling goes away after using a new style for a week or so.

Many style choices and common mistakes can be either fixed or surfaced using linters.

Linters and Metrics

I know many people hate code linters, but I have a very different perspective.

I think linters are fantastic and here are my reasons why:

  • They can help individual programmers find issues in their code before sending the code for review; saving precious review time
  • They can teach you how to write better code and help you eliminate bad habits or develop new ones

Beyond these reasons, I am very happy to admit that one of the best pieces of advice I got over the years was to “turn on all the linters in Eclipse”.

It seemed silly at the time, especially as I did not understand many of the lint issues that were raised.

But I persisted with it, and over the weeks and months, I found the time to research what each lint issue was and why it was important.

After this process, I came to understand why the code was better after fixing the issue.

And as a consequence, I was putting out much better code because of it.

Because of this, I have continued to find and adopt even more linters over the years, and as such, I often run many more lint checks than my team or company requires; it is my personal preference and the standard I want to set and maintain for myself.

Linters are not without issues, though. The most common, and likely why many people hate linters, is turning on too many lint checks too fast.

When a check is first enabled, it will generate noise. If all of that noise must be fixed immediately, then this is annoying, and people will hate it.

When folks are required to fix issues in the code that they do not recognize as an issue, then it becomes annoying, and they will hate it.

While I believe that folks should adopt as many linters as they reasonably can, it must be done gradually.

In fact, the easiest way to set and maintain a high bar in terms of lint is with new projects. When checks are turned on from the start, the cost is lessened, spread out, and seems far less arduous.

One last word of warning to any team lead/manager who is thinking of adopting linters: you must find a balance between requiring that all lint issues are resolved and the cost of adoption.

If there is too much noise, then the warnings will cause alert fatigue, and they will be ignored.

If folks fall into the habit of ignoring these warnings, then the linter will just be wasting everyone’s time.

It might be best to either turn checks on one by one or find a way to separate the fatal issues from the warnings, providing sufficient time for items to be addressed before they become blockers to the build.

I have talked mostly about linters in this section, but any tool that provides an automated measure of quality should be considered.

Tools like Sonar and GolangCI-Lint can be invaluable. Just make sure you spend the time to configure them well, and you communicate your expectations/requirements clearly.

If you are just starting out with these tools, I would recommend enabling checks for:

  • Unit test coverage
  • Dead code
  • Code coupling
  • Common Errors (like Go’s vet or Java’s NPE checks)
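
As one example of the kind of issue such checks surface, consider ignored error returns, which the `errcheck` linter (one of the checks bundled in GolangCI-Lint) flags. The function below is a hypothetical sketch: the original version discarded the error from `strconv.Atoi`, so bad input silently became a limit of 0.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseLimit converts a raw string into a numeric limit.
// errcheck would flag the original version, which ignored the
// error return and silently treated bad input as zero.
func parseLimit(raw string) (int, error) {
	n, err := strconv.Atoi(raw)
	if err != nil {
		// handle, rather than discard, the parse error
		return 0, fmt.Errorf("invalid limit %q: %w", raw, err)
	}
	return n, nil
}

func main() {
	if _, err := parseLimit("not-a-number"); err != nil {
		fmt.Println("caught:", err)
	}
	n, _ := parseLimit("42")
	fmt.Println("limit:", n)
}
```

Silent failures like this are exactly the class of bug that is cheap for a linter to find and expensive for a human to debug later.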

Sorry, but I must repeat myself: if you do not care about a lint check, do not enable it.

I cannot tell you how many times I have seen teams (read managers) turn on checks for things like documentation that, while valuable in theory, the team just did not care about.

As a result, the team just used auto-generated comments or added only enough comments to make the linter shut up.

The code didn’t get any better, and the programmer wasted time (and angst), making the linter happy.

Nothing was gained, and time and effort were lost.

Automation

Automation is designed to address two fundamental issues:

  • Humans make mistakes
  • There are only 24 hours in a day

Yes, I am being glib, but these are important issues.

It is very easy to make mistakes; you could be tired, you might be rushing or just unlucky.

Automation, in the form of build tools (like Maven or Gradle), scripts (bash), or full tools (like Jenkins or Bitbucket Pipelines), ensures that things are done the right way every time.

They also offer the ability to do things for us automatically and with confidence.

An excellent Continuous Integration (CI) pipeline allows developers to throw code changes at the CI server with the confidence that it will perform all the checks (tests, lint, and metrics) without any extra cost.

These checks should be automatically triggered and done asynchronously to the development process, allowing the developer to get on with other things.

Similarly, there are many other tasks we programmers frequently perform that can be automated, like code formatting, code generation, and even pull-request submission.

Where possible, these tasks should be automated to ensure consistency and to reduce the effort required.

This should either be done automatically in the CI pipeline or as part of the provided developer tooling. Either way, the impact on the developer and reviewer must be minimized.

Support

Up until this point, I talked mostly about code and the individual developer, but development velocity extends beyond this.

There are three main things that the tech lead (or higher) must ensure all developers are provided with so that development velocity is not hampered. These are:

  • Tooling / Base Libraries
  • Prompt code reviews
  • Clear deliverables

Let’s dive deeper into these points.

Tooling / Base libraries

Most of the work a particular development team does will be for the same purpose, e.g., building web services, creating tools, or maintaining shared libraries.

To achieve this work, developers will often use the same tools or libraries over and over.

The team should decide on a set of tools or libraries that they want to use, and it is the tech lead’s responsibility to ensure they are available.

When developing web services, these will typically include:

  • Instrumentation — tools like StatsD (Datadog), NewRelic or Grafana. It does not matter which tool is used as long as it is standard across the development team.
  • Logging — as with instrumentation, the key here is standardization. Logs, like instrumentation, must be centrally accessible (using tools like Scalyr or AWS CloudWatch).
  • Consistent usage of Logging and instrumentation — the team should develop and adopt standard practices when it comes to how they log and how services are instrumented. This typically takes the form of wrapper or convenience libraries that help ensure consistency. A typical example of this would be ensuring that user-related log messages include a RequestID.
  • Config — teams should have a standard way of handling configuration. This could be environment variables, config files, or a configuration server. It does not matter which approach as long as it is known and consistent across the team.
  • Feature flags — beyond basic configuration, it is crucial for service owners to change the configuration of a running service. This includes being able to turn features on and off, changing limits, and even user-related configuration. Again it does not matter if this configuration is available via something simple like a collection of Redis keys or a configuration system like LaunchDarkly; the key is that a solution exists and is consistent.
  • Central Handling of Common Concerns — when providing web services (plural) to users, it quickly becomes apparent that all services, particularly public-facing services, have many shared requirements. These include instrumentation, user authentication, security, rate limiting, and even DDOS protection. A practical approach is to handle as many of these concerns as possible “at the edge” of our network. This allows service developers to trust that these concerns have been taken care of, which in turn reduces the scope and complexity of the service. Take user authentication, for example: if the service developer can just trust that the request includes the user’s identity and that the user is logged in and valid, then there is no need to call an authentication service. They can just perform their tasks without any dependency on user authentication or validation services.

This is by no means a definitive list. As a team, we should regularly take stock of similarities between projects and look to lessen their cost by standardization and/or centralization.

Prompt Code Reviews

As a senior Individual Contributor (IC), this point is a pet hate of mine. There is nothing more frustrating than quickly producing code and then having it stuck in review for days.

Similarly, it is frustrating to have to continually pester my teammates for a review.

Not only does it limit my personal velocity, but it also frequently causes double work in the form of rebase/merge conflicts.

It is the tech lead’s job to ensure that code reviews are performed promptly and thoroughly.

I am not saying that it is the tech lead’s job to do the reviews; in fact, quite the opposite: they should not be doing any more or less than other members of the team.

Reviews are an excellent opportunity for the reviewer and reviewee to learn the system and improve as developers.

Rubber stamp reviews don’t help anyone, and if the code really has problems, then it is quite detrimental as it impacts quality.

Personally, I try to work on small chunks of work, small enough to finish two or more a day. As such, I find it useful to submit my work and then check the review queue and do some reviews while waiting for my code to build and get reviewed.

This ensures that reviews for others in the team are not pending for long, and I can do something useful while waiting.

Ideally, reviews should not be pending for more than about 4 work hours. Reviews should not need to be requested, but if folks need an emergency review or want a review from a specific person, they should be able to request it.

Clear Deliverables

These are another task for the tech lead (or product owner).
Simply put, a developer cannot build something unless they know what it is.

This does not mean developers need to be told how to do their job. Actually, the best approach is to outline the business or user value that they need to provide and let them determine the appropriate implementation.

Taking the perspective of user value has two advantages.
Firstly, this is usually the default perspective of the product owner and, therefore, the easiest to convey.
Secondly, it allows the developer the greatest flexibility in how the goal is achieved.

As developers, we can sometimes be a bit straightforward in our thinking. If you tell us to build X, we may build X without thinking much about it.
However, when we fully understand what you are asking for, the solution we provide might surprise you.

Clear deliverables can take many forms, but generally speaking, we should be aiming for “just enough” documentation and formality.
We don’t need a full development plan for every change, but significant changes should have an RFC or some kind of software design/architecture document.

Similarly, when adding small features or tweaking an existing one, it is often enough to write a user story or two using a form like this:

As a merchant, when I send an order with $0.00 value, then my request should be rejected

User stories like this are clear to both product owners and developers and can easily be turned into both UAT and manual tests.

Introspection

This last point is perhaps the one most often missed or forgotten when it comes to development teams.

It is natural to get caught up in the day-to-day grind of fixing bugs and adding features.

As such, we often forget to take the time to perform an honest review of ourselves and our progress.

Once in a while, perhaps quarterly, teams should set aside an afternoon for themselves.

In this session, the team should:

  • Review the current state of all code and services that are under their care
  • Review the processes and practices of the team

The goal of this session is to celebrate the successes, acknowledge any ongoing concerns, and make adjustments.

Teams should be asking themselves questions like:

  • Where did we succeed, and how can we do more of that?
  • Where did we fail, and how can we fix it?
  • Do the current practices and processes help or hinder? How can we streamline them?
  • Are the tools we adopted helping or hindering?
  • Did the linters we adopted make things better or worse?
  • Can we enable more linters or checkers and take our code to the next level?
  • Do we have any significant tech debt that we need to spend more time on?

How, when, or where this is done is not nearly as important as ensuring that it is done.

We (developers) are responsible for the quality of the work we produce, and we can influence how this work gets done.

Mistakes can only be fixed after they have been identified.

Inconveniences can only be addressed after they have been acknowledged.

Best practices can only become standard practice once everyone knows about them.

Conclusion

As a programmer, I love to solve problems and deliver value with code.

I hope this article has given you an idea of the sorts of things I do to make myself more productive, what I believe teams should do to make themselves more productive, and how supporting folks (tech leads and product owners) also have a part to play.

As you can see, many things can impact development velocity. I do not recommend that you adopt everything in this article blindly or all at once.

Instead, try incrementally adopting these ideas and use the introspection session to reject anything that does not work for you or your team.

Corey Scott
Programmer, Poker Enthusiast & Author. Constantly questioning the wisdom of the status quo. For more: https://coreyscott.dev/