Continuous Deployment: No Sprints Required
Enough students of Industrial Logic’s Agile eLearning wanted the ability to resume from where they had last studied that we knew we had to build that feature. It was 2011, about five years since we’d begun producing eLearning, and by this point we’d graduated from weekly releases to continuous deployment. How would we develop the resume feature given that every commit we made to our source code repository would make it to production?
We’d long since dropped planning and executing sprints in favor of do-a-little-work, release, and repeat. We didn’t know precisely how long the resume feature would take, but we knew it wasn’t complicated enough to require much time.
We discussed the resume feature and realized that we could start by writing and deploying the data persistence code first, since the “last page visited” data for students would eventually be needed when we completed the entire resume feature.
Data persistence was a horizontal slice of the resume feature, which was the full vertical slice:
We test-drove the data persistence code and checked it in; after it passed all of our build’s automated tests, it went live and began actively recording last-page-visited data.
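The persistence step can be sketched like this. This is a hypothetical, in-memory stand-in, not Industrial Logic’s actual code; the class and method names are invented for illustration. The point is that every page view records the student’s position, whether or not any resume UI exists yet.

```python
class LastPageRepository:
    """In-memory stand-in for the real data store (illustrative only)."""

    def __init__(self):
        self._last_pages = {}

    def record_visit(self, student_id, page_id):
        # Overwrite any previous position; only the latest page matters.
        self._last_pages[student_id] = page_id

    def last_page_for(self, student_id):
        # Returns None for students who have never visited a page.
        return self._last_pages.get(student_id)


repo = LastPageRepository()
repo.record_visit("student-1", "refactoring/page-3")
repo.record_visit("student-1", "refactoring/page-7")
print(repo.last_page_for("student-1"))  # refactoring/page-7
```

Because this slice went live first, real last-page data accumulated in production before any other part of the feature existed.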
Meanwhile, our students knew nothing about this new development. A small, valuable portion of the resume feature had been deployed to production, but not released publicly. It’s common practice in continuous deployment to distinguish between deploys and releases.
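The deploy/release distinction is commonly implemented with a feature flag. Here is a minimal sketch of the idea, with invented names (the article doesn’t describe the actual mechanism): code ships to production with the flag off, and the team can flip it on, or grant themselves early access, without another deploy.

```python
# Deployed-but-unreleased: the resume code is in production,
# but the flag keeps it hidden from the public.
FEATURE_FLAGS = {"resume": False}

# Hypothetical allowlist so the team can try the feature in production.
INTERNAL_USERS = {"team-member-1"}


def resume_visible_to(user_id):
    # Visible if publicly released, or if the user is on the team.
    return FEATURE_FLAGS["resume"] or user_id in INTERNAL_USERS


print(resume_visible_to("team-member-1"))  # True
print(resume_visible_to("student-42"))     # False
```

Releasing then becomes a one-line change (or a runtime toggle), decoupled from deployment.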
Next, we focused on the domain logic: the code to decide how to handle resuming in different contexts, such as resuming from a page in an album, a box set, a playlist, or a compilation. Again, we used test-driven development to craft the logic, and when we were satisfied that each new piece of logic worked correctly, we checked it in and it, too, went to production.
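Context-dependent resume logic might dispatch on the container type. The container names below come from the article, but the per-container rules are invented for illustration; the real rules would depend on how the eLearning content was structured.

```python
def resume_location(visit):
    """Decide where to drop a returning student, given their last visit.

    The rules here are hypothetical: the article only says that each
    container type (album, box set, playlist, compilation) needed its
    own handling.
    """
    container = visit["container"]
    if container == "album":
        # Assumed rule: jump straight back to the page.
        return visit["page"]
    if container in ("box set", "compilation"):
        # Assumed rule: open the containing album first, then the page.
        return (visit["album"], visit["page"])
    if container == "playlist":
        # Assumed rule: resume at the playlist entry.
        return visit["entry"]
    raise ValueError(f"unknown container: {container!r}")


print(resume_location({"container": "album", "page": "p3"}))  # p3
```

Each branch could be test-driven and deployed independently, which is exactly what made committing small pieces of this logic safe.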
Next, we produced an ugly but functional user interface and deployed it to production so that only we could see it. We played around with this new feature in production.
We then produced code to keep statistics on how many users actually used the new resume feature. We wanted this in place before we released the resume feature publicly so that we could know how popular the feature was.
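Usage tracking of this kind can be sketched as a pair of counters, resumes against sessions, per day. This is an illustrative model with invented names, not the actual statistics code; it just shows how a daily-usage percentage like the 33% mentioned below could be derived.

```python
from collections import Counter


class ResumeStats:
    """Track resume usage as a fraction of daily sessions (illustrative)."""

    def __init__(self):
        self.sessions = Counter()
        self.resumes = Counter()

    def session_started(self, day):
        self.sessions[day] += 1

    def resume_used(self, day):
        self.resumes[day] += 1

    def daily_usage(self, day):
        # Fraction of the day's sessions that used resume; 0.0 if no sessions.
        if not self.sessions[day]:
            return 0.0
        return self.resumes[day] / self.sessions[day]


stats = ResumeStats()
for _ in range(3):
    stats.session_started("2011-06-01")
stats.resume_used("2011-06-01")
print(round(stats.daily_usage("2011-06-01") * 100))  # 33
```

Deploying the measurement before the release means the very first day of public availability yields meaningful numbers.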
Finally, after cleaning up the user interface, we made the new resume feature public to all:
The resume feature ended up taking around 3.5 days to produce. During that time, we deployed valuable work to production, tested our work along the way, huddled frequently to review what we’d done and decide what to do next and gradually evolved and released the new feature. Over time, we watched the daily usage statistics for the resume feature climb to 33%.
We did not plan this work into a sprint by carefully estimating what we could do, tracking our velocity with story points, or doing a formal demo at the end of the sprint. Instead, we worked in much tighter cycles, planning, testing, deploying, reviewing, demoing, and releasing all along the way. Trying to fit this work into a fixed-length time box would have provided no value to us. The cadence of continuous deployment obviates the need for a sprint’s coarser-grained planning and review cycles.
While your situation may be different from ours, I’d suggest that if you are beginning to do continuous deployment during your sprints, you experiment with dropping the sprints altogether and see if that improves your agility. You might just be surprised.