When your investment in DevOps processes pays off, it’s magical!

As someone who has done most of his work with PowerShell for the best part of a decade, I used to work with a “just enough script” mentality. There were always a large number of tasks to complete, so I would do basically the minimum required to get one done before moving on to the next, and I certainly wouldn’t do things like optimise to use fewer CPU cycles, which are cheap, when my own cycles are far more expensive.

Over time this resulted in a large number of code snippets that I could reuse, sometimes with a bit of minor tweaking, but they certainly weren’t tools that could be passed on to anyone unfamiliar with PowerShell.

Since first reading The Phoenix Project, and subsequently delving further into DevOps at every opportunity, I have changed both my way of working (to better manage my time and get more done) and my approach to producing code, to increase both quality and reuse.

This change first manifested when working with my colleague Sean to implement Microsoft’s Forefront Identity Manager (FIM). We ended up writing a lot of PowerShell scripts and C# code, which we managed from the start in a TFS source repository and later migrated to Git, hosted on Visual Studio Team Services (or Visual Studio Online, as it was at the time). VSTS gave us the tools to very simply turn on Continuous Integration for anything that we checked in, and we made a decision early on to ensure that at the very least all PowerShell was tested using Pester and PowerShell Script Analyzer (PSSA).
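To give a flavour of what that looked like, here is a minimal sketch of the kind of build step you can wire into CI, assuming Pester v4-era parameters; the paths, severities and output locations are illustrative, not our actual configuration.

```powershell
# Illustrative CI build step: fail on Script Analyzer findings, then run Pester.
# Paths and thresholds here are assumptions, not our real settings.

# Fail the build on any PSSA warning or error.
$findings = Invoke-ScriptAnalyzer -Path .\src -Recurse -Severity Warning, Error
if ($findings) {
    $findings | Format-Table RuleName, Severity, ScriptName, Line -AutoSize
    throw "PSScriptAnalyzer reported $($findings.Count) issue(s)."
}

# Run the Pester tests, publishing NUnit-format results for the build server to pick up.
$tests = Invoke-Pester -Path .\tests -OutputFile TestResults.xml -OutputFormat NUnitXml -PassThru
if ($tests.FailedCount -gt 0) {
    throw "$($tests.FailedCount) Pester test(s) failed."
}
```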

Even with the best intentions, our unit testing with Pester was patchy (writing good tests is hard, yo!), but at least we were meeting the best-practice standards as measured by PSSA, which does force you to avoid some pitfalls and maintain certain standards.
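As a small made-up example of the kind of thing PSSA picks up (this snippet is invented for this post, not lifted from our code): it flags aliases, Write-Host and other console habits that are fine interactively but make shared scripts harder to read and maintain.

```powershell
# This line triggers PSAvoidUsingCmdletAliases and PSAvoidUsingWriteHost:
gci C:\Users | % { Write-Host $_.Name }

# The analyzer-clean equivalent spells everything out:
Get-ChildItem -Path C:\Users | ForEach-Object { Write-Output $_.Name }
```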

We built tools into modules, and we defined releases for the code so that it could be quickly deployed to the relevant servers for use. After getting FIM pretty well finished, we worked with more colleagues to bring other systems into the same workflows. It’s work that I think we can be justifiably proud of, but until now (ironically, just a few days before I move on to a new job with another organisation) I couldn’t have expressed so succinctly just how completely brilliant this is…
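(For anyone curious what “built tools into modules” means in practice, the sketch below shows roughly how a shared helper module gets packaged; the module name, version and exported functions are placeholders rather than our real ones.)

```powershell
# Rough sketch of packaging shared tooling as a module - all names are placeholders.
$manifest = @{
    Path              = '.\FimHelper\FimHelper.psd1'
    RootModule        = 'FimHelper.psm1'
    ModuleVersion     = '1.0.0'
    FunctionsToExport = @('Get-FimHelperAccount', 'Set-FimHelperAccount')
    Description       = 'Shared identity management tooling for the team.'
}
New-ModuleManifest @manifest
```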

Today I was looking at a support ticket that required a change to a number of user accounts. It would’ve taken someone on the Service Desk a significant amount of time to do manually, and as it was a new kind of request, I didn’t have a script to do it. I realised that there were a bunch of prerequisites for dealing with the ticket, and that it would take a chunk of time before I could even start to perform the necessary changes. However, all of those prerequisites are already handled by similar functions in our FIM Helper module, so I started to add the new functionality there.

I ended up just taking copies of bits of another couple of functions and literally writing about two new lines of PowerShell. The result is a function that is a perfectly reusable tool, with enough parameters to make it more useful in the future than was actually required for today’s job.
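Purely as an illustration of the shape of such a tool (the real function, its name and its parameters live in our FIM Helper module and aren’t reproduced here), a reusable function along these lines might look something like this:

```powershell
# Hypothetical sketch only - the name, parameters and behaviour are invented for
# illustration; the real work reuses the module's existing connection and lookup helpers.
function Set-HelperAccountAttribute {
    [CmdletBinding(SupportsShouldProcess)]
    param (
        # Accept one or many identities, from the pipeline or as a parameter,
        # so a whole ticket's worth of accounts can be handled in one go.
        [Parameter(Mandatory, ValueFromPipeline)]
        [string[]]$Identity,

        # Parameterise the value rather than hard-coding today's requirement,
        # which is what makes the tool useful beyond this one ticket.
        [Parameter(Mandatory)]
        [string]$Value
    )

    process {
        foreach ($id in $Identity) {
            if ($PSCmdlet.ShouldProcess($id, "Set attribute to '$Value'")) {
                # The couple of genuinely new lines of PowerShell would go here.
            }
        }
    }
}
```

Because it supports ShouldProcess, the function gets -WhatIf for free, so it can be safely dry-run against a list of accounts before committing any change.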

When I checked in the code, it was automatically tested to ensure a level of quality that I wouldn’t even have worried about before we had this tooling in place, and because it was automatic it didn’t take up any more of my time. Deploying the release from VSTS made the new tool available to the whole team, and in the future the Service Desk can use it themselves to deal with this sort of ticket at the front line.

Yes, it took us some time and effort to get all this in place, but the speed of delivery and quality of new functionality is now vastly increased, and we have a whole load of other benefits too. Our DevOps transformation has really delivered, and it is kinda magical! :-)