Feature Acceleration and Serverless

I wrote my post yesterday (read that first!)…

…and realised that I hadn’t put anything in about acceleration, which was a bit crazy since I did a degree that was mostly Physics and I was talking about how Velocity changes over Time, which is Acceleration.

I also found a blog post about Agile and measuring productivity from 2008

which made me feel like I’m rediscovering something forgotten. This IBM post talks about some of the things I talked about yesterday, and goes into a little more detail with numbers. I think it’s helpful to repost it to give a little more context for yesterday’s post and this one.

But that IBM post is missing something.

If you are simply measuring the productivity of a team or person, then the Acceleration post does a great job of explaining how to measure whether everyone in your team is doing their job.

Acceleration is a productivity measure of a person or team.

Acceleration, then, is not a measure of efficiency.
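To pin down how I’m using the terms (a minimal sketch of my own, not a formula from the IBM post): velocity here is units of work completed per iteration, and acceleration is the change in that velocity from one iteration to the next.

```python
# Minimal sketch of the definitions used in this post (my own illustration).
# Velocity: units of work completed in each iteration.
# Acceleration: the change in velocity between consecutive iterations.

def acceleration(velocity_per_iteration):
    """Iteration-over-iteration change in velocity."""
    return [later - earlier
            for earlier, later in zip(velocity_per_iteration, velocity_per_iteration[1:])]

print(acceleration([10, 12, 15]))  # [2, 3]   -> speeding up
print(acceleration([10, 8, 7]))    # [-2, -1] -> slowing down
```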

Acceleration and Efficiency

When you start a greenfield project, everything is fun and rosy and enjoyable. Everybody can build their bit of the solution and nobody is affected by major issues or users jumping up and down annoying them with feature requests or bugs. You have very few constraints or problems to resolve.

As you move through a project, you find bugs and problems in the code, or edge cases come to light that you could never have considered (it’s impossible to define every edge case up front, which is why resilience in your system is important), and this causes issues.

What generally happens in a multi-iteration scenario is that you start with no bugs, no technical debt and no architectural debt, and it’s pretty much all features.

And then over the iterations, the non-feature elements increase and become part of current and future iterations.

So, there is a problem with looking at “Feature Velocity” and “Feature Acceleration” because you have to ask the question:

What is a “Feature”?

Often, within an Agile scenario, the idea of Feature Velocity utilises the units of work and the number of stories in a sprint as the measure.

Except that isn’t “Feature Velocity”; it’s “Story Velocity”.

And that’s a great measure of team and individual productivity.

If you want “Feature Velocity” you have to break down which of the stories are features and which are something else (bugs, technical debt, architectural debt).

So, looking at some numbers, you may end up with this scenario (assume team size is constant throughout):

Team 1

(units of work completed for features/bugs/technical debt/architectural debt)

Iteration 1: 15/0/0/0 = 15 stories

Iteration 2: 12/3/1/0 = 16 stories

Iteration 3: 9/5/2/1 = 17 stories

Iteration 4: 7/9/2/2 = 20 stories

Now I know this is totally contrived, but it is an illustration of how metrics can be misleading.

In this scenario, the team is improving its “Story Velocity” throughout the iterations: 15, 16, 17, 20. That’s a total of 68 units of work on stories. From a simple way of viewing things, this is very good.

This team is delivering lots of “work”.

But if you look at “Feature Velocity” you get a different story through the iterations: 15, 12, 9, 7. It’s a total of 43 units of work on features. This is not so good. It’s actually decelerating.

(I’ll leave it as an exercise for the reader to do Bug Velocity, Technical Debt Velocity and Architectural Debt Velocity)
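To make the contrived numbers a bit more concrete, here’s a small sketch (Python, my own illustration) that derives Story Velocity, Feature Velocity and their accelerations from Team 1’s per-iteration split:

```python
# Team 1's contrived numbers from above: each iteration is
# (features, bugs, technical debt, architectural debt) in units of work.
team_1 = [
    (15, 0, 0, 0),
    (12, 3, 1, 0),
    (9, 5, 2, 1),
    (7, 9, 2, 2),
]

story_velocity = [sum(iteration) for iteration in team_1]  # [15, 16, 17, 20]
feature_velocity = [iteration[0] for iteration in team_1]  # [15, 12, 9, 7]

# Acceleration: the change in velocity from one iteration to the next.
story_acceleration = [b - a for a, b in zip(story_velocity, story_velocity[1:])]      # [1, 1, 3]
feature_acceleration = [b - a for a, b in zip(feature_velocity, feature_velocity[1:])]  # [-3, -3, -2]

print(sum(story_velocity), sum(feature_velocity))  # 68 units of stories, 43 units of features
```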

But the team is definitely delivering more units of work over time. That’s a positive result, right?

Except… if you’re management, what tends to happen is that the team feels like it’s working really hard, but when you look at the cold, hard numbers the team is slowing down its feature releases.

And that’s a problem.

The team is becoming less efficient over time because it’s putting more effort in and delivering fewer features.

Now, let’s take a slightly different scenario (assume same team size and iteration length):

Team 2

Iteration 1: 12/0/0/0 = 12 stories

Iteration 2: 12/1/1/0 = 14 stories

Iteration 3: 12/2/1/0 = 15 stories

Iteration 4: 13/2/1/1 = 17 stories

Story Velocity: 12, 14, 15, 17 (total of 58 units of work)

Feature Velocity: 12, 12, 12, 13 (total of 49 units of work)

(Again I’ll leave it as an exercise for the reader to do Bug Velocity, Technical Debt Velocity and Architectural Debt Velocity)

From the outside, this scenario looks like the team is doing less “work” than the other team. It is true that fewer units of work are being done, but what is fascinating is that Feature Velocity is accelerating (slowly).

Now, there could be a number of reasons for this (and it’s contrived of course), but one of them could be more careful coding that takes a little longer but produces fewer bugs and less technical debt, or better technical decisions that make the base layer of technology more stable.

The point is that in this scenario, if you simply look at “Story Velocity” and “Story Acceleration”, both teams look good, but Team 1 looks better than Team 2. They are doing more “work”.

However, if you look at “Feature Velocity” and “Feature Acceleration” you end up with a different story. Team 2 looks a whole lot better than Team 1. Team 2 is more efficient.
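If you want a single number for that efficiency (this ratio is my own illustrative definition, not something from the IBM post or a standard Agile metric), you could look at what proportion of each team’s total output was actually features:

```python
# Feature units vs total story units for each team (contrived numbers from above).
# "Feature efficiency" here is my own illustrative ratio: feature units / total units.
teams = {
    "Team 1": {"feature_units": 43, "total_units": 68},
    "Team 2": {"feature_units": 49, "total_units": 58},
}

for name, t in teams.items():
    efficiency = t["feature_units"] / t["total_units"]
    print(f"{name}: {t['feature_units']}/{t['total_units']} = {efficiency:.0%} of output is features")

# Team 1: 43/68 = 63% of output is features
# Team 2: 49/58 = 84% of output is features
```

On that measure, Team 2 turns roughly 84% of its effort into features against Team 1’s roughly 63%, which is the same conclusion the velocity numbers were already pointing at.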

What has this got to do with Serverless?

(I pointed this out in my previous post, but I thought it needed numbers)

Basically, my experience is that you still have bugs and technical debt and the rest, but over time you have a very different Feature Acceleration, because you remove some of the burdens around Ops and because bug fixes and technical debt become relatively easier to deal with. This is partly due to the fact that FaaS functions are decoupled (it depends on how you build it, but that’s how we do it), which means that fixing a bug in one function is relatively harmless to the rest of the system.
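To make “decoupled” concrete, here’s a very simplified sketch (the handler names and event shapes are hypothetical, not our actual system): each function owns one small job and is deployed on its own, so fixing a bug in one handler means redeploying only that handler.

```python
# Two independent, separately deployed functions (AWS Lambda-style Python handlers).
# Hypothetical names and event shapes, purely to illustrate the decoupling.

def create_order_handler(event, context):
    """Owns one job: validating and recording an order."""
    order = event.get("order", {})
    if not order.get("items"):
        return {"statusCode": 400, "body": "order has no items"}
    # ... persist the order somewhere durable ...
    return {"statusCode": 201, "body": "order created"}

def send_receipt_handler(event, context):
    """Owns a different job: sending a receipt.
    A bug fix here is deployed on its own and cannot break create_order_handler."""
    # ... render and send the receipt ...
    return {"statusCode": 200, "body": "receipt sent"}
```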

In short, my experience of what happens in a Serverless environment is that:

You end up with a much more constant Feature Velocity

This leads to an improved Feature Acceleration

You also end up with a much smaller Bug Acceleration and Technical Debt Acceleration

I will admit that Architectural Debt Acceleration is a different problem. You are more reliant on your provider, so you should be able to mitigate it really well, but we don’t currently have the best tools for that (yet).

Given all of the above, I would suggest that a Serverless approach tends towards delivering a significantly improved Feature Acceleration and an improvement in overall Team Efficiency (output delivered for the work put in).

Efficiency is better than velocity

Simply reiterating the point from the previous post, but it’s worth mentioning again:

So, if someone is saying you’re not moving fast enough, just consider whether they are looking at a snapshot, or whether they are looking at the wider context.

Making your team efficient, and able to deliver a constant or near constant Feature Velocity over time is far more valuable to senior management than being able to deliver lots of stories in an iteration.

And I haven’t even started talking about scale…