What we learnt from shipping SQL Prompt 8

The SQL Prompt team shipped SQL Prompt 8 back in May 2017. This blog post talks about what we learnt during the final stages of the project and how we worked with the quality coaches.

Reducing the amount of work to do

Formatting was a difficult project to finish because our users care a lot about it. The new formatting engine resolves more than 16 UserVoice requests, with over 1,000 votes between them. Our users feel very passionately about the software and we didn’t want to disappoint.

After months of development, we were keen to get this out to all SQL Prompt users. We had positive feedback from early adopters, but we didn’t have a deadline and we were suffocating under the quantity of feedback from surveys, the forum, and support. It was important to the company as a whole that we get something out to drive renewals and solve a key deficiency in SQL Prompt. We set a goal to release the formatter by the middle of May.

In order to meet that goal we had to cut down on features and improvements. We had an initial hour-long session to decide on the must-haves for our v8 release, putting everything else into an “after v8” list to tackle incrementally, or discarding it. A couple of weeks later we repeated the session, as it had become clear we were still planning to do too much. As the deadline drew closer, it became easier for us to reach consensus on what was most important. We had learnt a lot in between, so what was most important had changed since the previous session.

On top of this, we also had to write the UI to allow SQL Prompt to have identified trials. We couldn’t do this the same way as the other products, because SQL Prompt doesn’t have a big window where we can put notices or alerts. Marketing allowed us more time to do this in a sensible way, but it increased the size and complexity of the release. We had to find a way to manage all these risks.

Working with the coaches

We approached the quality coaches to see what advice they had for our release. We weren’t very confident and had concerns as it was such a big change. We were changing the whole formatting engine and had been working on a feature branch for some time, while continuing to release stable versions on our mainline branch.

Gareth advised that it would be possible for the coaches to run an exploratory testing session. This was exciting — we hadn’t realized the coaches would run this kind of session, and had thought that we would have to do all the prep work on the team. Jose and Gareth advised that we should have separate testing sessions for different parts of the software. This meant we could test the formatting engine deeply, and then concentrate on testing licensing for identified trials in a subsequent session.

Jose ran both exploratory testing sessions. At the start of each session he paired us up and gave each pair a testing scenario and environment. During the session Jose rotated the scenarios so we could repeat them on different environments. We set up a spreadsheet to record the issues we found during the session.

Coming out of the testing sessions, we found two must-fix bugs to address before release, plus some high-priority issues. There were fewer bugs than expected given the size of the release, which gave us more confidence that what we had was already high quality.

A big benefit of the exploratory testing sessions was the fact that they included people from all around the business, and thus spread knowledge of the release. With the formatting testing, Support and Sales got to see the release-ready formatting engine and use it in practice. With the identified trials testing session, Support were able to help us test difficult licensing scenarios (e.g. a serial key valid for v7 whose upgrade coverage does not extend to v8) and gained enough understanding to respond to user queries after release.
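The gist of that tricky scenario can be sketched in a few lines. This is a hypothetical model, not Redgate’s real licensing scheme: we assume each serial simply records the highest major version it covers.

```python
from dataclasses import dataclass


@dataclass
class Serial:
    """Hypothetical licence serial; the field names are illustrative only."""
    key: str
    max_covered_version: int  # highest major version this serial is licensed for


def covers(serial: Serial, product_version: int) -> bool:
    """True if the serial's upgrade coverage extends to the given major version."""
    return product_version <= serial.max_covered_version


# The scenario from the testing session: a v7 serial used against v8.
v7_serial = Serial(key="XXXX-XXXX", max_covered_version=7)
print(covers(v7_serial, 7))  # True  - licensed for v7
print(covers(v7_serial, 8))  # False - v8 is not covered, so it runs as a trial
```

The value of testing this with Support in the room was less about the check itself and more about everyone seeing what the user experiences when the check fails.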

Fast feedback cycles using Vagrant

During both the fixing and testing stages, we made heavy use of our Vagrant virtual machines.

Vagrant was crucial because it makes it fast to set up, snapshot, and restore consistent VMs across the entire team. We used Vagrant throughout the development cycle, verifying every bug fix on at least one version of SSMS or Visual Studio. During the exploratory testing, we targeted our most-used VMs according to product usage data.

When the SSMS 2008 R2 issue came in after release, we already had a Vagrant VM ready to go and could reproduce the issue immediately.

You drive Vagrant from the command line; the commands take a few minutes to learn, but after that it’s easy to snapshot and restore.
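A typical session looks something like this (the snapshot name is just an example; these commands require Vagrant 1.8 or later and a Vagrantfile in the current directory):

```shell
# Bring up the VM defined by the Vagrantfile in the current directory
vagrant up

# Save a snapshot of the VM in a known-good state, e.g. a clean SSMS install
vagrant snapshot save clean-ssms

# ...do some destructive testing, then roll straight back
vagrant snapshot restore clean-ssms

# List saved snapshots, and shut the VM down when finished
vagrant snapshot list
vagrant halt
```

Because a restore takes seconds rather than the minutes a fresh VM build would, it’s cheap to reset the environment between test scenarios.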

What happened after release

A week before our deadline of May 17th, we discussed whether we were ready to release. During the exploratory testing sessions we had voted on issues and marked the most serious as release-blocking. As we had fixed all of the release-blocking issues, we decided there was no reason not to release, and we went ahead.

We released to 500 users using the limiting mechanism on check-for-updates. This meant that SQL Prompt 8 was available on the website, but only 500 people would be prompted to upgrade. You can do this on any product that sends out update notifications via check-for-updates.
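The logic of that limiting mechanism can be sketched as follows. This is a minimal, hypothetical model of a capped rollout — the real mechanism is Redgate’s server-side infrastructure, and all names and storage here are invented for illustration.

```python
from typing import Optional


class UpdateServer:
    """Toy model of a check-for-updates endpoint with a rollout cap."""

    def __init__(self, latest_version: str, prompt_cap: int):
        self.latest_version = latest_version
        self.prompt_cap = prompt_cap  # how many clients to prompt (e.g. 500)
        self.prompted = 0             # clients prompted so far

    def check_for_updates(self, client_version: str) -> Optional[str]:
        """Return the version to offer the client, or None to stay quiet."""
        if client_version == self.latest_version:
            return None  # already up to date
        if self.prompted >= self.prompt_cap:
            return None  # cap reached; the release stays downloadable from the website
        self.prompted += 1
        return self.latest_version


server = UpdateServer(latest_version="8.0", prompt_cap=500)
offers = [server.check_for_updates("7.5") for _ in range(600)]
print(sum(o is not None for o in offers))  # 500: only the first 500 clients are prompted
```

Lifting the cap later, as we did a couple of days after release, amounts to raising `prompt_cap` so every remaining client gets the notification.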

Overall, the feedback and tweets about the release were positive. However, we started to hear about errors in SSMS 2008 R2. We had explicitly chosen not to test this platform because we were planning to drop support for it. Our reasoning seemed sensible — let’s concentrate on the platforms we want to support. Unfortunately there was a popup error on startup, and the formatting options were not showing. As we were planning to stop installing into SSMS 2008 R2 in a couple of weeks anyway, we simply brought that forward and removed it from the installer. Jamie made sure we checked relevant marketing communications, and we began responding to customer concerns.

A couple of days later we lifted the cap on upgrades to let everyone upgrade to SQL Prompt 8.

Conclusions

It was really useful to work together to ensure the quality of the product was high enough to release. This gave us the confidence to push the button. We found it useful working with the quality coaches and it was great that they could organize testing sessions for us while we were developing the software. We’re going to work with them over the next few weeks to help us define the release process we want to follow.

We learnt to test on all supported versions. Ideally we would have a GUI testing framework doing this on every push to source control; failing that, we could do a quick run through the red routes before any major release. To help with this, we can create more Vagrant boxes so we can test on every environment that our supported customers use.

Written by Michael Clark.