I’ve been thinking a lot about how to go about testing assumptions.
After the MVP is out, I’m thinking that the life of any new feature will look something like:
- In development
- Released into the ‘next’ branch (50% of users see this when visiting the site)
- Promoted to the master branch (all users see this)
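The ‘next’/‘master’ split above needs each visitor to land in the same branch every time they come back, or the analytics are junk. A minimal sketch of how that bucketing could work — all names here (`hashString`, `branchFor`) are mine, not anything from Malla:

```javascript
// Deterministic 50/50 bucketing: hash the user id and take it mod 2,
// so a given user always sees the same branch across visits.
function hashString(s) {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return h;
}

function branchFor(userId) {
  // Even hash → 'next' (the experiment), odd hash → 'master' (the control).
  return hashString(userId) % 2 === 0 ? 'next' : 'master';
}
```

Promoting a feature then just means serving the ‘master’ build to everyone, with no per-user state to migrate.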
Each feature will have analytics built in so that I can measure success, and I suppose (shudder) I’ll talk to humans about their thoughts on the feature. After stage 2, I will either shed a tear and throw the work out or promote it into master.
All good in theory, but what about the little features? I can’t batch them; that defeats the purpose of applying the scientific method. So do I push them straight into ‘master’?
For example, a user of Malla will be able to enter text into a box that will then be stored as a piece of content. I would like this box to support markdown. The assumption is that users will need to enter text containing hyperlinks, and supporting that won’t take much time. Do I really need to clog up my A/B test pipeline with that? Nah.
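If hyperlinks really are the main need, the “not much time” claim holds up: the link syntax alone can be handled in a few lines. A sketch under that assumption — in practice a real parser like marked or markdown-it would be the sane choice:

```javascript
// Minimal sketch: support only the markdown link syntax [text](url).
// This is NOT a full markdown implementation, just the hyperlink case.
function renderLinks(text) {
  return text.replace(
    /\[([^\]]+)\]\(([^)\s]+)\)/g,
    (_, label, url) => `<a href="${url}">${label}</a>`
  );
}
```

(Any real version would also need to escape HTML in the input; that’s the part that makes “just use a library” the right call.)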
So thinking in colors, I made this and gave it a fancy name.
For the markdown feature, I’m pretty damn sure that people will need to enter text that contains hyperlinks. Because that’s a thing that websites have. And it’s not much effort. So that’s a ‘1’.
Another (maybe) post-MVP feature is the ability for a user to version the text they enter. This is lots of work. I think people want this feature, but I don’t know. So that’s a ‘4’ or a ‘5’: proceed with caution. I’ll start with an MVF (Minimum Viable Feature — just made that up) and get it into the ‘next’ step of the pipeline.
I’m sure this has all been thought out before and I’ll look back and wince at my naivety, but hey…