Building Startup Product During Its Infancy
I have written about what I think the product team of a startup should look like. It was written during my commute, so it is rough; you'll find plenty of grammar mistakes and typos there.
Now, I will write more about how such a team works to deliver features during the first 1–3 months of building the MVP. During this phase the burden falls mostly on the gatekeeper: the Quality Assurance team. If you're lucky enough to find developers who already practice TDD, that's great. I've never met one, so I'll assume a team that has no idea about test-driven development.
The Agile Process
The agile process is based on the idea of iteration. Rather than going all the way in one big waterfall approach, we do things in iterations. I'll borrow from Scrum and call each iteration a Sprint. This is the basic unit of iteration. It usually lasts less than 30 days and results in a potentially shippable product.
This is the lifecycle of one iteration. In each phase, every role has their place.
The Planning Phase
In this phase, there is a lot of interaction between the product owner and the project manager. As a product owner you're expected to create product requirements and break them down. At this stage, designers are part of the product owner team. The product owner needs to break things down into minute details and create user stories, usually with this template:
As a <user type> I want to <some goal/action> so that <some reason>
The user story must be very specific, so it’s easy to work on. The smaller the scope the better. This is an example of a good user story:
As a user I want to see the login button so that I can tap it to log in.
And this is a bad user story:
As a user I want to log in to the app so that I can continue to the feed screen
The scope of the first example is small, specific, and easily delivered. The second example may look small, but it's actually vague and huge: there's a screen to build, fields to fill, and validation to handle. It's better to write each of these as a separate user story. So if you think a product manager's job is easy, it is not; you need to be very detailed about every screen. In my team the rule of thumb is:
If it’s not in the backlog it does not happen.
So never expect the team to build a feature if you don't write the user story for that particular feature, even a trivial one like:
As a user I want to see the splash screen so that I can be sure that it's MyAweSomeApp
Yes, a splash screen, a very trivial story. This is to show you what kind of story is expected to come out of the product owner team.
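To make the template concrete, here is a minimal sketch in Python of a user story as a data structure a team might track in a simple tool. The class and field names are my own invention for illustration, not from any specific backlog software:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UserStory:
    # Field names are illustrative assumptions, not a real tool's schema.
    user_type: str
    goal: str
    reason: str
    points: Optional[int] = None                     # assigned later, during story carding
    acceptance_criteria: List[str] = field(default_factory=list)  # filled in by QA

    def __str__(self) -> str:
        # Render the story in the "As a ... I want to ... so that ..." template.
        return f"As a {self.user_type} I want to {self.goal} so that {self.reason}"

# The trivial splash screen story from above:
splash = UserStory(
    user_type="user",
    goal="see the splash screen",
    reason="I can be sure that it's MyAweSomeApp",
)
print(splash)
```

Keeping the story as structured data like this makes it natural to attach the acceptance criteria and story points that the next steps produce.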
The stories then go to QA. QA thinks about the edge cases and writes acceptance criteria for each story. Acceptance criteria are a list of conditions that must hold for the story to be considered done. Be specific about the edge cases. For example:
- The user should see a spinner when the login button is tapped.
- When the device is disconnected, the login button should be disabled.
- When the network is wonky and the request times out, the user should be informed.
- When the payload is wrong, the user should be informed that the app cannot process the request.
and so on. The first time you write these it's a pain, but over time you will recognise patterns of common errors. If a criterion can be automated or unit tested, talk with the developers to write a unit test for that specific criterion, pushing the check back to the developers rather than to QA's manual testing. If the story meets all its criteria, you can consider it done.
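As a sketch of what "push it back to the developers" can look like, here is a hypothetical unit test in Python for the timeout criterion above. The `login` function, `NetworkTimeout` exception, and message text are all invented for illustration, not a real API:

```python
# Hypothetical code under test: a login flow that must satisfy the
# acceptance criterion "when the network times out, inform the user".

class NetworkTimeout(Exception):
    """Raised by the (hypothetical) network layer when a request times out."""
    pass

def login(send_request):
    """Attempt login; return a user-facing message on known failures."""
    try:
        return send_request()
    except NetworkTimeout:
        # The acceptance criterion automated here: inform the user on timeout.
        return "Connection timed out. Please try again."

def test_timeout_informs_user():
    # Simulate a wonky network that always times out.
    def flaky_request():
        raise NetworkTimeout()
    assert login(flaky_request) == "Connection timed out. Please try again."

test_timeout_informs_user()
```

Once a criterion lives in a test like this, a regression fails the developer's build instead of surfacing days later in QA's manual pass.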
After the stories are broken down, if the backlog has items without story points, we gather the whole team, including the product owner, project manager, devs, and QA, for a story-carding meeting. The whole team comes together to assign story points to the backlog items. A point signifies an estimate of complexity. The numbers are based on the Fibonacci sequence: 1, 2, 3, 5, 8, and so on. I usually cap it at 5.
Estimates are relative to the easiest story. For example, I usually give the splash screen story above 1 complexity point, and every other user story is compared against it. Most stories hover around 2 or 3 points. If you meet a story with 5 story points, work together to break it down further, because it may mean:
- The scope is still big.
- The story is still not clear.
- There are a lot of unknown factors in the story. An example is integration with a 3rd party. If that's the case, set it to 5 or 8 so we know we have a problem.
This is usually called planning poker, and there are cards and apps made specifically for it. What I usually do is simpler; I call it planning rock-paper-scissors, because rather than using cards we use our hands, just like RPS. The rule: never say your number aloud before everyone reveals, to avoid anchoring bias. If the estimates differ, talk it over and settle on a specific number.
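The reveal-and-settle step can be sketched in a few lines of Python. The scale, function names, and the "break down at 5" threshold below follow my own rules of thumb from this section, not any formal Scrum standard:

```python
# A sketch of the reveal step in planning rock-paper-scissors:
# estimates are collected silently, then checked together.

FIB_SCALE = (1, 2, 3, 5, 8)  # the point scale used in this post

def needs_discussion(estimates):
    """True if the revealed estimates disagree and the team should talk."""
    return len(set(estimates)) > 1

def flag_for_breakdown(points, limit=5):
    """Stories at or above the limit should be broken down further."""
    return points >= limit

reveals = [2, 3, 3]                 # hypothetical hand-reveals for one story
print(needs_discussion(reveals))    # prints True: talk and settle on one number
print(flag_for_breakdown(5))        # prints True: break the story down more
```

The point of the silent reveal is exactly what `needs_discussion` captures: disagreement is a signal to talk, not something to average away.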
After the stories have points, we do sprint planning, before the team starts working on the stories. The product owner communicates the priorities and the project manager makes sure the deliverables fit the team's capacity. In your first sprint this is a gut call; over time you will gather data and learn how much the team can deliver. Stories are stacked by priority, team members volunteer to own stories, and the sprint starts. It's also important to assign a goal or theme to the sprint.
One thing to note: the first sprint is always the worst, so the product owner should keep expectations in check. Eager team members usually overcommit while risk-averse ones undercommit. As the project manager/scrum master you need to gather data and find out the real capacity of the team.
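Once a few sprints have run, that "real capacity" is just an average over recent history. Here is a minimal sketch; the window size and the point totals are hypothetical numbers for illustration:

```python
# A minimal sketch of estimating team capacity (velocity) from past sprints.

def velocity(completed_points_per_sprint, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = completed_points_per_sprint[-window:]
    return sum(recent) / len(recent)

# Hypothetical history: the rough first sprint, then the team finding its pace.
history = [8, 13, 12, 14]
print(velocity(history))  # average of the last 3 sprints: 13.0
```

Using only the last few sprints lets the estimate forget the unrepresentative first sprint as the team settles.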
The Role of the Tech Lead During Planning
The tech lead's role during the planning phase is to review the stories and make sure they can be implemented realistically. For example, if your team has no expertise in signal processing, a sound-processing story may not be feasible.
In the early stage, you may also need to plan the branching model and the Continuous Delivery system you want to implement, perhaps set them up yourself, and write a technical note to teach the team the workflow: when code review happens, who does it, and so on. Plan it ahead; don't neglect the importance of CD and code review.
Implementation and Testing Phase
This is when the developers start to code. While they do, QA can start writing test cases for each story based on the acceptance criteria. When a story is delivered from devs to QA, run the tests; if all criteria are met, mark it as done.
The question I usually get from QA is: what if we find a bug? Well, you need to look at the severity of the bug. A high-severity bug that blocks the user from continuing sends the story back to the developer's to-do, whereas a minor one can be logged after you accept the story. Here we see the importance of acceptance criteria; as QA, you need to think them through. This way, a healthy tension between dev and QA can be maintained. If the bug is not covered by the acceptance criteria, QA and devs need to talk and decide whether to send the story back to to-do or log a bug to be fixed later.
This is where continuous delivery shines. Every time a story is delivered, a tech lead or senior engineer can review and merge the code and send it to the build system to be delivered to QA. Compiling and deploying software is a mundane task that takes a lot of time; it should be the first thing you automate.
At the end of the sprint, we expect a potentially shippable product, so we need to do two more activities:
Sprint Review/Product Demo
The product demo presents what has been accomplished in the current sprint to the whole team, including the product owner and even management or customers. This is to assess whether the sprint met the goal set in the sprint planning meeting.
Managers who don't understand agile will often give “input” asking why a certain feature that was never in the backlog wasn't implemented during the sprint. This is absolutely forbidden; for that, they need to go to the product owner and discuss the priorities.
Sprint Retrospective
The whole team gathers and identifies three things:
- What went well during the sprint cycle?
- What went wrong during the sprint cycle?
- What could we do differently to improve?
To ease things up, there are thinking frameworks we can employ for these questions:
- Mad, Sad, Glad. Mad: what went wrong. Sad: what could be done better. Glad: what went well.
- Start, Stop, Continue. Start: what we need to start doing from now on (what could be done better). Stop: what we need to stop (what went wrong). Continue: what we need to continue (what went well).
You can invent a framework appropriate for your team. This meeting exists to improve the next sprint and avoid repeating the same mistakes.
That's my story and methodology for building products; I think it works well and is worth sharing. People can do it differently. In case you wonder why I don't push you toward a specific methodology: I just don't want to be a snake-oil salesman for one. I like the ideas of agile, breaking work down into user stories and sprints, even pair programming, but I'm not dogmatic about Scrum or Extreme Programming, because in the end what you want to do is build a product. Heck, I even use kanban for maintenance and bug scrubs after release. So use what is appropriate for your team.