I am writing this post because I went through a journey I commonly see other product managers mirror over their careers when writing requirements. The first question I had was “Where do I start?”, and I was given bad advice, great advice, and a lot of opinions. As product managers, we need to communicate to our teams the vision of our product’s value to users, and also communicate when the requirements have been met. The title is tongue in cheek: there is no one correct way to write a user story, acceptance criteria, or a feature/epic, though there are several wrong ways. When requirements are written through the lens of a methodology, judged against a set of criteria, and viewed as a project management tool for achieving a product’s outcome, a quality set of requirements should emerge.
The two most successful requirements methodologies I have seen and used are Behavior Driven Development (BDD) and Hypothesis Driven Development (HDD). There are times when you will want to use one, the other, or both, and times when something custom or completely different is needed.
Behavior Driven Development (BDD):
Dan North created BDD in response to the many questions he received while teaching Test Driven Development. He then expanded it into a methodology for starting software development: communicating desired business outcomes to developers through a common language, with a shared definition of when a user story is done.
As a [role]
I want [feature]
So that [benefit]
We can see that the what and the why are communicated in a way both technical and non-technical people can understand, and the value is stated in the first sentence. Acceptance criteria then follow the same shared language, and are easily testable:
Given [context]
When [action]
Then [outcome]
Hypothesis Driven Development (HDD):
Hypothesis Driven Development takes a step back from the focus on delivering business-requested value and asks, “How do we know whether what is being requested is valuable to the end user?”
We believe that <this capability>
Will result in <this outcome>
We will know we have succeeded when <we see a measurable signal>
What great product managers bring to the table is not only a set of great ideas, whether their own or from designers, engineers, stakeholders, and users, but also the ability to test those ideas, interpret the results, and make a more informed decision. Common problems I have run into, and have helped others work through, are setting up an A/B test and interpreting the results. HDD forces you to develop a strong hypothesis and to determine from the beginning how you will measure success, safety, and behavior. A common trap teams fall into is setting up their tests after development has already been done, at which point the first question becomes “how do we test this?” The time spent setting up an experiment after the fact can be discouraging to teams that want to move fast.
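To make the up-front setup concrete, here is a minimal sketch in Python of the kind of deterministic variant assignment you would want in place before development finishes; the experiment and variant names are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "variant_b")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing the user id together with the experiment name keeps the
    assignment stable across sessions without storing any state, and
    gives each experiment an independent split.
    (Illustrative sketch; names are made up.)
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment:
assert assign_variant("u-123", "save-button") == assign_variant("u-123", "save-button")
```

Deciding on the bucketing scheme and the success metric before writing the feature is what lets the team ship the experiment and the feature together, rather than bolting the test on afterward.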
Using Both BDD and HDD:
Depending on the size of what you want to test (which can vary with the qualitative research, the stage of the company, the stage of the product, and other priorities), I find there are many times when a user story is something I want to acceptance test but not a priority to A/B/n test. Oftentimes I know something is better suited to BDD when I can’t formulate a strong hypothesis to use with HDD. For instance, say we are working on a wealth management product that uses a lot of forms on different platforms, and my hypothesis is:
We believe that a “Save” button at the top of the wealth management forms
Will result in a higher completion rate of the suitability flow
We will know this when the population entering variant B has a significantly higher proportion of users completing the suitability flow than the control
Because there is a cost to every story (story points), and because the cost may be higher to integrate the necessary metrics, I may not want to test the color, copy, size, or placement of the save button yet (deferring to the designer’s heuristics). Furthermore, the development team may want to break this story down further because the suitability flow spans multiple software platforms that are each independently testable and deliverable. Because of this, we may decide it is best to keep the hypothesis, using HDD, at the Feature level, with many user stories, using BDD, at the user story level.
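Once results come in, the hypothesis above reduces to comparing two completion rates. A minimal sketch of that comparison, using a standard two-proportion z-test written with only Python’s standard library (the counts are made up for illustration):

```python
from math import erf, sqrt

def two_proportion_z_test(completions_a: int, n_a: int,
                          completions_b: int, n_b: int):
    """Two-sided z-test for a difference between two completion rates."""
    p_a, p_b = completions_a / n_a, completions_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (completions_a + completions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; doubled for a two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: control vs. variant B (save button at the top).
z, p = two_proportion_z_test(430, 1000, 480, 1000)
significant = p < 0.05  # True for these made-up numbers
```

In practice a team would likely reach for a library such as statsmodels rather than hand-rolling this, but the point stands: because the hypothesis named its signal up front, the analysis is a few lines, not a retrofit.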
There are times when you will need to customize user stories for a project, though it should be because of a specific need rather than a preference. I’ll first describe the wrong way I’ve seen this approached. I have run experiments with my teams where engineers, Scrum Masters, XP coaches, and others had a strong preference for how a user story should be written. Always open to testing a new idea, I have paired with individuals to write a few stories in their structure, outside of the BDD and HDD methodologies. In all but one case, the new format caused confusion, and we always reverted to BDD or HDD. Usually this was because we introduced ambiguity, prescribed solutions, or put so much information into user stories that it took unnecessary administrative effort to write, manage, and close them.
The one time custom requirements worked well, it was because of a specific need of the product rather than a preference. While consulting on an MVP for an A/B testing product, I found that requests from the users (product managers) to add metrics to the platform had specific statistical requirements for running t-tests. We needed a population, a numerator, and a denominator.
Out of all people who <action> (population)
How many <action> (numerator)
Per <observation> (denominator)
Because this structure came from a mathematical need, it made sense to customize our user stories. While the verbiage of these stories changed depending on the metric, requiring a population, a numerator, and a denominator never did. You could also merge this structure into BDD, which gives more information on what is needed, why it’s needed, and additional context:
As a [user] I want [metric] so that [I can test the new feature]
Given the data exists
When I use the buy-per-homepage-visit metric
Then I will see a result of:
Out of all people who visited the homepage (population)
How many clicked the “buy” button (numerator)
Per visit to the homepage (denominator)
How to Judge a Story Based on a Set of Criteria:
A user story should clearly state what is wanted and why, in a way that is testable and understood by both engineering and the business. There are many factors to consider when judging whether a story will achieve its desired outcome, including its structure, its size, vertical vs. horizontal slicing, and more. An easy way to judge this is whether it follows Bill Wake’s popular INVEST criteria: is the story Independent, Negotiable, Valuable, Estimable, Small, and Testable?
The Balance between Independent, Negotiable, Valuable, Estimable, Small, and Testable:
As a product manager, I want to deliver value to users, and to make well-informed decisions I spend a good amount of time doing qualitative and quantitative research. It can be seductive to write large user stories that deliver the most perceived value, but what Lean methodologies teach us is that delivering value incrementally allows for better quality control, more predictability, easier testability, more on-target results, and a more consistent sense of accomplishment. It can be demoralizing for a developer to pick up a story, work on it for a week, and still not know how much longer it will take to complete, so it’s important to keep user stories small. When stories are small, you also have more control over what work is prioritized. The work you prioritize still needs to be independently deliverable and releasable, though, or else your priorities can be broken by a dependency. And stories that are too small may only work if they dictate a specific design.
This is where the debate over slicing stories vertically or horizontally comes up. From a business perspective, vertically sliced stories are typically best, but some vertically sliced stories may be so large that the team cannot estimate them, and they should thus be negotiated. If a team likes to get together and design the architecture up front, then you can have horizontally sliced stories that are still independently deliverable, testable, releasable, and valuable (perhaps only from a testing perspective or to the product manager, e.g. you release a backend service for on-site search that you exercise through a tool like Swagger-UI). But if the resulting stories are so small they can’t be functionally tested, then they may need to be more vertical (disregarding chores). Some engineers may want more creative freedom to explore innovative solutions, which is sometimes easier with larger user stories. All of this can be negotiated, factoring in the project, the mix of functional vs. non-functional requirements, and the preferences of the team.
Requirements are Tools to Achieve Outcomes:
The time to reexamine how user stories are written is when desired outcomes are not being achieved. There is no one right way to write a user story, so it’s important to listen to new ideas and preferences and be willing to experiment. If I had never tried a new structure, I never would have discovered BDD or HDD. If I hadn’t negotiated with my teams, the teams’ morale would have suffered, and so would the products. There are opportunities for innovation all around: in the design, functionality, and infrastructure of our products, in the processes we use, and in the stories we write. That doesn’t mean vast amounts of time should be spent changing how one writes user stories, especially when a team has found a good working equilibrium, but I hope this article helps when it is the right time to take a look.
Writing requirements with a methodology like BDD, HDD, or a custom one, judging the quality of user stories against criteria like INVEST, and viewing user stories as a project management tool for achieving a product’s outcome will help you produce a quality set of requirements.
You may hear BDD referred to as “gherkin,” after the Gherkin language used by BDD tools such as Cucumber and Behat: http://docs.behat.org/en/v2.5/guides/1.gherkin.html