How we used tight research, design and development iterations to build SQL Monitor
The HammerHeads product team (one of the 2 engineering/design teams working on Redgate SQL Monitor) have been working on a new Estate Management area for the product. Our aim has been to provide insightful, estate-wide information that helps our users stay on top of things and plan for the future.
In Q2 of 2018, one of the 3 projects we were developing was the Installed Versions page. From previous research, we knew that staying on top of patching across your estate was a time-consuming and painful process, involving regular manual data collection and combining multiple sources of information to check whether updates were available.
The intention of the Installed Versions page was to make it pain-free to know whether your estate is fully patched and, if not, what the latest available update is.
But how should we introduce new features if we don’t know where they will belong in our product?
We’d previously identified that we didn’t have a good way for users to discover new features and enhancements in SQL Monitor. We’d regularly get feedback from people who had stumbled across a “new” feature that had been released months ago.
Our concern was that if we wanted to eliminate waste and iterate our ideas quickly, we’d need users to be able to find new features easily. This early in the project, we also didn’t know where in the application these new features should live.
To address this, we built a very simple “What’s New” dropdown in SQL Monitor. It didn’t have an over-the-air update mechanism and was in fact just some hardcoded HTML (a rough sketch of the idea follows the list below).
This allowed us to:
- Highlight the new features/enhancements as we iterated on the previews.
- Build prototypes without having to worry about where they should live in the application.
- Avoid disrupting existing user interactions and workflows.
- Build prototypes independently of other code in the application (i.e. on a completely separate page). This limited the engineering surface area of these prototypes and would make it easy to delete any “failed” experimental code, if needed.
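For illustration, here’s a minimal sketch of what a hardcoded “What’s New” dropdown along these lines might look like. The names and markup are hypothetical, not SQL Monitor’s actual code; the point is simply that the entries are baked into the shipped code, with no update mechanism.

```typescript
// Hypothetical sketch of a hardcoded "What's New" dropdown.
// The entries live in the shipped code itself; updating the list
// means shipping a new release.

interface WhatsNewEntry {
  title: string;
  url: string; // deep link to the (possibly experimental) page
}

// Hardcoded list, edited by hand for each release.
const whatsNewEntries: WhatsNewEntry[] = [
  { title: "Preview: Installed Versions", url: "/installed-versions" },
];

function renderWhatsNewDropdown(): string {
  const items = whatsNewEntries
    .map(e => `<li><a href="${e.url}">${e.title}</a></li>`)
    .join("");
  return `<ul class="whats-new">${items}</ul>`;
}
```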
Week 1: Break it down
One of the most important things to do before starting a project is to understand the rationale behind the problem, especially when you’re working in a very complex context.
So in the first week our scope was to break the problem down to its core issue (the what), and to discuss why we thought it was a problem and who was affected by it.
We also captured our assumptions so we could review them after doing some research and see if they were valid. In addition, we decided to conduct some exploratory research to learn more about our unknowns and investigate people’s workflows, so we captured the things we wanted to ask through this research.
Week 2: Iteration 0
We had a very quick (2–3 hour) whiteboard sketching session in which we explored, as a team, what we thought the ideal solution might look like. This helped us think about what a minimal prototype would need in order to validate some of our riskiest assumptions.
Based on this session, we decided that we could build a very lightweight prototype pretty cheaply and ship it in the next release.
In total, we spent very little effort on the first prototype — 6 hours of engineering for Iteration 0! All we had produced was a dumb, static table with all the data we thought might be useful splatted out across it. We knew it was incomplete, pretty ugly and not very usable, but we also knew it would be a good strawman for users to give us feedback on.
Although we built (and shipped!) our initial prototype before any user calls, based on our gut feelings and existing product knowledge, we did this in parallel with mapping out our assumptions, making call plans and arranging calls.
This allowed us to:
- Highlight some fundamental assumptions/questions that we might have missed in the call plan.
- Get early feedback from users to know whether we were even in the right ballpark (we had our first piece of email feedback within 5 hours of release!).
- Reduce waste by not adding features and functionality we weren’t sure of (e.g. sorting, export to PDF).
- Enlist our users in designing a more valuable solution for themselves (work smart, not hard).
Build, get feedback, iterate
We spoke with 12 users in total to validate Iteration 0, and we also received plenty of positive feedback through emails suggesting future improvements. This was truly a co-design process, carried out in collaboration with our users.
In the following weeks we had enough feedback to analyse and decide what our next steps should be. While collating and analysing the feedback, we grouped the data into “codes”: labels for feedback that kept coming up and became patterns of ideas/suggestions for future improvements and change. After coding our feedback, we could see clearly which ideas were popular simply by counting how many people raised each one.
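As a concrete illustration of the counting step, here’s a minimal sketch (the names and shape of the data are hypothetical, not our actual research tooling) that tallies how many distinct people mentioned each code:

```typescript
// Hypothetical sketch: each piece of feedback is tagged with one or
// more codes; popularity is the number of distinct people per code.

interface CodedFeedback {
  person: string;
  codes: string[]; // e.g. ["wants-sorting", "wants-latest-version"]
}

function countPeoplePerCode(feedback: CodedFeedback[]): Map<string, number> {
  const peopleByCode = new Map<string, Set<string>>();
  for (const item of feedback) {
    for (const code of item.codes) {
      if (!peopleByCode.has(code)) peopleByCode.set(code, new Set());
      peopleByCode.get(code)!.add(item.person);
    }
  }
  // Collapse each set of people down to a simple count per code.
  return new Map([...peopleByCode].map(([code, people]) => [code, people.size]));
}
```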
The next step was to summarise our research and translate it into creative ideas through a design studio workshop, with the entire team participating. In another 3 hours we came up with many great sketches that could shape the future of the page.
Over the next weeks the team iterated on their designs and prototypes, addressing the next riskiest assumption, whilst continuously validating existing interactive designs through user testing to help shape the final product.
Iterate, iterate, iterate!
Iteration 0 — Dumb Prototype — 8.0.4 — April 18th, 2018
Our initial prototype. It only showed the versions you had installed, in a static table.
Iteration 1 — Hardcoded latest version — 8.0.6 — May 15th, 2018
We had lots of feedback on Iteration 0 saying “This is handy, but it would be awesome if it could tell you what the latest version for each server is”.
We decided to reuse the code that derived the Service Pack/Cumulative Update from the version number to tell users whether their servers were out of date. We went ahead even though this list was hardcoded and had no update mechanism; to get around that, we added a banner saying “This data was correct as of xxx”.
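To make the idea concrete, here’s a rough sketch of what a hardcoded lookup like this might look like. The build numbers, names and comparison logic below are illustrative assumptions, not SQL Monitor’s actual code or data:

```typescript
// Hypothetical sketch of a hardcoded "latest known version" lookup.
// Build numbers here are illustrative placeholders.

interface SqlServerRelease {
  major: number;       // e.g. 13 for SQL Server 2016
  latestBuild: string; // latest build known when this release shipped
  updateName: string;  // e.g. "SP2 CU1"
}

const latestKnownReleases: SqlServerRelease[] = [
  { major: 13, latestBuild: "13.0.5026.0", updateName: "SQL Server 2016 SP2" },
  { major: 14, latestBuild: "14.0.3029.16", updateName: "SQL Server 2017 CU8" },
];

// Returns true if the installed build is older than the latest known one.
function isOutOfDate(installedVersion: string): boolean {
  const major = parseInt(installedVersion.split(".")[0], 10);
  const latest = latestKnownReleases.find(r => r.major === major);
  if (!latest) return false; // unknown major version: don't flag it

  // Compare dotted versions numerically, segment by segment.
  const a = installedVersion.split(".").map(Number);
  const b = latest.latestBuild.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) < (b[i] ?? 0);
  }
  return false; // identical: up to date
}
```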
Iteration 2 — Updating versions file — 8.0.7 — May 31st, 2018
In this iteration, we automated the updating of the versions information using a JSON file hosted on the Redgate website. We initially scraped and built this file by hand (well, using a bunch of Excel macros), copying data from one of several online resources.
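A minimal sketch of what fetching such a file might look like is below; the URL and JSON schema are invented for illustration and aren’t Redgate’s actual ones:

```typescript
// Hypothetical sketch: fetch the published versions file, falling back
// to the data shipped with the product if the request fails.

interface VersionsFile {
  updatedAt: string;                    // when the file was last regenerated
  latestBuilds: Record<string, string>; // major version -> latest build
}

async function fetchLatestVersions(shippedFallback: VersionsFile): Promise<VersionsFile> {
  try {
    const response = await fetch("https://example.com/sql-server-versions.json");
    if (!response.ok) return shippedFallback;
    return (await response.json()) as VersionsFile;
  } catch {
    return shippedFallback; // offline or blocked: keep showing shipped data
  }
}
```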
Iteration 3 — Improve Table readability/usability — 8.0.9 — June 12th, 2018
At this point, we had enough information about which data people found useful or superfluous, so we spent some time making the table more readable: merging some columns, dropping unnecessary ones, and adding icons to give at-a-glance feedback when a server was out of date.
Iteration 4 — Summary — 8.0.10 — July 10th, 2018
NOT adding sorting/filtering or grouping — these were probably the first and most frequently requested features for this page, right from Iteration 0, but we held off on them.
This is a technique we used heavily in our iterations: leave out the functionality we think is obvious (or are a bit unsure of) to force users to request it (or express some pain around it), so we KNOW it’s wanted and can probe deeper. Not adding any sorting was an example of this — we wanted to find out what jobs/scenarios users were trying to accomplish when they reached for sorting, to see if there was a more WOW way to solve them.
Leaving filtering and sorting out of earlier iterations let us iterate on our riskiest assumptions sooner. It led us to create the at-a-glance summary at the top, and to discover that, quite commonly, DBAs can’t update all their servers (because of 3rd-party dependencies) but don’t want their bosses to think they aren’t keeping things up to date. Muting servers has been left out of version 1.
Iteration 5 — V1 — Coming on 24th July!
(Finally) adding grouping, sorting, filtering and export, polishing, and tidying up loose ends.
What we learned
- A tightly iterative process can bring you closer to the most valuable solution while eliminating waste.
- Creating a “design vision” of the ideal version up front helped us flush out assumptions.
- Instead of waiting to fully validate things, we started building in parallel with continued calls once we had a reasonable level of confidence, tweaking and iterating our designs as new information came in.
- We didn’t worry about getting it wrong — in fact, we purposefully got it “wrong” to force users to ask for missing features and give us feedback.
- Avoiding “obvious” features (e.g. sorting/filtering) let us iterate on and validate more important assumptions earlier, and helped us understand users’ end goals.
- We constantly iterated and experimented with our internal process — if something wasn’t working for the team, we dropped it (e.g. using codes to analyse research).
- Taking notes and keeping track of everything in documents helped us stay organised and not feel lost.
If you liked this post, give us a clap and let us know what kind of challenges you’re facing and what processes you’re following in your team — we’d love to talk to people and share more ideas.