There’s a lot of talk in the agile community about the ‘right’ and ‘wrong’ way to ‘do’ agile. You’ll note from my heavy use of apostrophes in this first sentence what my personal views on the subject are.
As an agile coach I’ve been involved in plenty of departments where agile was demarcated into particular frameworks, practices and processes. For some of these departments this rigorous adherence to a particular framework actually helped — to begin with. I like to describe this approach as ‘agile by the book’, where a clearly defined approach is adopted by a team or department as the way they will adopt agile techniques. It’s good, when it works. When it doesn’t, it can lead to an utter shambles of arguments, failed deliveries and, most importantly, the people feeling chewed up and spat out when the world comes crashing down.
The other consideration when following a fixed process is that one of the central tenets of agile is continuous improvement. A fixed process means the ability to continuously improve is vastly reduced!
Experimentation is key.
It can be pretty nerve-racking as a delivery manager to hear from a team that they want to run a two-week spike to experiment with a new technique or idea. But I’d urge you to allow this experimentation to happen. Some of the most effective practices, techniques and even new features where I work have come from a deliberate practice of experimentation. A great example from one of my own teams was when we welcomed Woody Zuill to Redgate a few months ago to share mob programming with our development division. Literally the same day he finished, one of my teams shared their intent to start experimenting with mob programming that very afternoon.
What could I have done in this scenario?
- I could have pushed back on this expression of intent from my team and said we didn’t have time right now with in-flight work, but that we’d try it in the future
- I could have suggested that our existing techniques were working fine and we shouldn’t be changing our approach halfway through the delivery of a new suite of features
- I could welcome the suggestion of trying something new in order to see what the results were
As it stands I went with the third option and the team went ahead and started trying mob programming. The net result? The team has adopted mob programming into their suite of practices. Last week they even used it to do a mob release, and another one of my teams sent a couple of their own engineers to observe; those engineers then went back and suggested their team needed to improve a set of its own release practices as a result.
This practice of experimentation and sharing the results is incredibly powerful. In this case one team now has an additional technique to complement its existing suite of practices, another of my teams now has an idea of what it needs to do to improve its own release process, and so we go on.
Dan Pink’s Drive should be a 101 guide to agile teams
You can read a lot online or from external advisors about what you should and shouldn’t do when it comes to adopting agile for your teams and departments. But Dan Pink’s Drive sums up what it means to be agile at Redgate (Check out the brief video here). In summary, we want our teams to have:
- Autonomy — self directing teams in conjunction with a clear purpose (see below)
- Mastery — the ability to drive personal development and learning
- Purpose — clarity on why we are here and what our mission is, in our teams, in our departments and in our company. In its most basic form it’s about knowing the ‘why?’
These three things mean far more to us than a specific framework, technique or ‘by the book’ method of ‘doing’ agile, and it shows across our entire company, but especially in our development teams. No two of them are alike; no two teams use the exact same methods, techniques or processes to get the work done. Granted, there are absolutely some things that are clearly aligned across the entire development department, but these exist more to provide that clarity of purpose and aid the autonomy of the teams: techniques such as OKRs (you can read here how we use OKRs) and clearly defined expectations around coding guidelines and the like. We never ‘tell’ teams how to deliver the work in a certain way. Teams and their individual team leads are trusted (With the support of quality coaches and development leads) to make the right decisions, i.e. autonomy.
We measure the right stuff and ignore ‘bad’ metrics
I’ve worked plenty of places where it was MI heaven. If I wanted to know how many PRs a specific engineer had contributed over the last two years, I could get it. If I wanted to compare team A’s burndown to team B’s burndown, I could do it. I’d be an idiot for doing it, but I could do it if I wanted. At Redgate we’re not driven by artificial metrics; instead we focus on the outcome we are trying to achieve. And I say ‘bad’ purely because metrics themselves are not bad; it’s what we do and how we react to them that leads to bad outcomes.
As an example, if I or my department suddenly declared that every team must use story point estimates when shaping their backlogs so that we could measure their burndowns and velocity, a number of outcomes would occur:
- Teams would be forced to adopt methods they did not want or were unfamiliar with — breaking the autonomy of our teams to pick the right tools for the right job and showing we don’t trust them to do the right thing
- Estimates would become a cottage industry, requiring oversight, training, support and discussion
- Estimates would likely be challenged, gamed or compared for no real benefit
- Misunderstandings and disagreements would occur, along with a general sense of ‘why?’
- We’d damage the safety of our teams: why do they want to measure these things? Why haven’t they told us?
Instead we focus on measuring the right things at the right time in order to support our teams. The key to this is our adoption of the ‘four key metrics’ from Accelerate. Put simply, we measure:
- Deployment frequency — i.e. how often our teams are deploying to a production environment
- Delivery lead time — i.e. the time from a commit being made to that change running in production (the time taken for work to be merged to master and shipped)
- Change Failure Rate % — i.e. of the number of items we deploy, what % cause a failure in production
- Mean Time to Recovery (hrs) — i.e. once a failure has occurred, how quickly do our teams respond to recovering the service for our users
We use these metrics (We adopted them in Q2 of 2019) not to beat teams with, but as a conversation point, the idea being that teams will naturally want to review and understand them in order to:
- Be able to have a narrative around them, e.g. ‘in August our change failure rate jumped to 10% because we deployed ten times that month, and the one that failed was X, which we discussed in our retro that week, and this is what we’ve done to prevent it in the future’
- To encourage continuous improvement. We want teams to have an attitude of continuous improvement in everything they do. If our lead time is currently 3 days, what would it take to move the dial to 2.5 days?
- To aid in the mastery of the work, if we want to be a high performing team, what do we need to go after to achieve that? If we are only deploying once a week, what could we do to move to twice weekly deployments? What skills would we need, what automation? Could we experiment with trunk based development? Etc. etc.
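As a rough sketch of how these four measures fit together (this is not Redgate’s actual tooling; the deployment log, field names and numbers below are all hypothetical), the four key metrics could be computed from a simple record of deployments:

```python
from datetime import datetime

# Hypothetical deployment log: when the work was committed, when it was
# deployed to production, whether it caused a production failure, and
# (if so) how long recovery took in hours.
deployments = [
    {"committed": datetime(2019, 8, 1, 9),  "deployed": datetime(2019, 8, 2, 17),
     "failed": False, "recovery_hrs": 0},
    {"committed": datetime(2019, 8, 5, 10), "deployed": datetime(2019, 8, 6, 12),
     "failed": True,  "recovery_hrs": 2.5},
    {"committed": datetime(2019, 8, 8, 14), "deployed": datetime(2019, 8, 9, 11),
     "failed": False, "recovery_hrs": 0},
]

days_in_period = 31  # e.g. the month of August

# Deployment frequency: deploys per day over the period.
deployment_frequency = len(deployments) / days_in_period

# Delivery lead time: mean hours from commit to running in production.
lead_times = [(d["deployed"] - d["committed"]).total_seconds() / 3600
              for d in deployments]
mean_lead_time_hrs = sum(lead_times) / len(lead_times)

# Change failure rate: % of deploys that caused a production failure.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = 100 * len(failures) / len(deployments)

# Mean time to recovery: average hours to restore service after a failure.
mttr_hrs = sum(d["recovery_hrs"] for d in failures) / len(failures)
```

With this toy log, one failed deploy out of three gives a change failure rate of roughly 33%, which is exactly the kind of number a team can then build a narrative around.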
We encourage and welcome change
Not just in the sense of changing goals and features, but even in the make-up of our teams. We have a pretty firm view on what we think is the ideal size for a really effective team (One Technical Lead, five engineers and a designer, thanks for asking), but we encourage and support rotation of our team members across our product suite. This means if someone fancies a change from working in our SQL Source Control area and wants to explore Oracle, then we try to put in place the mechanism to do this smoothly. What could be chaos actually ends up working well, and ties into Dan Pink’s point about mastery, as it allows all of our team members to gain new skills, experiences and exposure to new techniques (for more info on this check out Chris Matts’ staff liquidity argument here). Remember, each of our teams is different in what they do: processes, techniques and so on.
The best example we have of this recently is that at the end of 2018 we did an entire department reteaming exercise where everyone was offered the opportunity to move into new teams (You can read Redgater Chris Smith’s blog about what we did here). Each team created a team charter (What we believe in, what techniques we use, why this team will be a good fit for you, or why it might not be) and then we asked people to express their preferences. The interesting outcome was that a third of people moved, not everyone. For me it’s another demonstration that if you trust people they aren’t going to ruin your plans, bring down the company or set out to destroy you. Trust is a powerful thing; give it to people and it gets paid back in spades.
We accept this won’t work for everyone
It’s pretty acceptable now for someone with a decent enough model for how a team or department should operate to set up as a consultancy, or to trademark their idea and sell it to the masses as the framework we should all use. We could probably do something similar if we wanted. I’m not sure what we’d call it; possibly ‘freedom’, ‘common sense’ or ‘just trusting people to do the right thing’. Regardless, I know that writing this it’s very easy to sit here, pretend to be in some lofty position of expertise and tell you how easy it is to do what we do. The reality is that it isn’t.
We’ve spent the best part of the last 10 years perfecting where we are right now, trying lots of things, failing lots of times and having varying levels of success. We’ve tried crazy ideas like autonomy bargains, release trains, automation in lots of places, technical project managers, scrum for all and anything else you care to throw into the agile bowl of ideas. Some of it has worked, some failed spectacularly but the key thing was we kept going, learning, perfecting, iteratively building on the failure just as much as the success.
The key point to all of this is that this is how we ‘do’ agile at Redgate.
We aren’t beholden to a particular model, technique, framework or set of ideals. Yes, we’ve found some things we’re pretty immovable on now (Like team sizes and the four key metrics), but others are just as changeable as the weather. This is what makes us agile: the ability and willingness to try new things and accept a high chance of failure on the journey, not abandoning the end goal because something we tried turned out badly, but using it to learn and improve. When we talk of continuous improvement for teams it sometimes feels like a clear and bounded process that only the people in a team need to consider; what we do at Redgate is apply this thinking to the entire company. How do we make today a little better than yesterday?
This is agile and this is how we do it.
Comment below or get involved on our Twitter page @RedgateProdDev