Lean Startup methods can radically reduce the number of failed blockchain projects, while enabling teams to create value more quickly for their users.
By Jacob Cantele
After hearing about how Panvala was able to launch an alpha just two months after bringing on our engineering team, a number of different people in the Ethereum community have asked us about our product development flow and how we’ve been able to move so quickly.
I’m a deep believer in the Lean Startup methodology and its power to help teams gain traction in the market faster. It empowers teams to eliminate waste from their roadmaps and to ensure they are building things that create real value for their users. Let’s start by talking about why these methods are so critical to the success of blockchain teams, and then take a deep dive into how the Panvala team organizes itself to maximize value creation.
The boom of 2017 gave teams access to unprecedented resources to invest in building the Ethereum ecosystem, but simultaneously cursed teams with delusions of seemingly infinite resources. While many startups were being formed, those startups lacked the mechanisms of extreme scarcity that often push startups to greatness.
One study even found that more than half of the projects that emerged from that period failed within four months of their respective token distribution events.
It is no wonder that recent conferences in the Ethereum space have celebrated that the decrease in token price has pushed scammers out of our community, and increased the attention on teams who are really building. For many of us, it is a breath of fresh air to carry out the real work of the Web 3.0 revolution.
We have an opportunity for a fresh start. Doing so means implementing better product development flows, creating real iterative feedback loops with the people who will be using our products, creating processes that mimic the kinds of scarcity that the traditional startup space has faced, and building the products that will change the world.
Iteration and experimentation are in our DNA
First, a few words about our team, and how we approach things. The Panvala team is building a highly innovative and experimental product.
Panvala is a donor-driven platform where token holders vote to fund work that is important to the success of the Ethereum ecosystem. Donated funds reside in a smart contract we call “the token capacitor,” where they are allocated by the votes of token holders. Those who donate know that their donations will be directed to the most important work via the collective wisdom of Panvala’s community.
Perhaps if we were building a product that we’d already built before, like a home-builder building the same house they’ve constructed twenty times before, our product development flow could fit into the neat world of Gantt Charts and stage-gated processes, but that isn’t what we are building (neither is virtually any other blockchain team, for that matter).
By the very nature of our work, we are building something that has never existed before and is full of risky assumptions. If we were to go off and attempt the implementation of a highly complex white paper for years at a time without a living feedback loop with real users, we would be taking a massive gamble and would likely run out of runway without real revenue generation or a usable product at the end of that process.
Instead, our team works with a variable roadmap that evolves constantly as we conduct infrastructural experiments and garner feedback from users. The roadmap itself is divided into very small iterations, each meant to produce a feature-thin but usable product that we can test with real users at the end of a one- or two-week sprint.
Iteration applied to blockchain
Many innovators like Eric Ries and Jeff Patton have written extensively about the changing nature of work in our modern economy. Our world is no longer characterized by the challenge of insufficient production to meet demand. It is characterized instead by an economy where the majority of work that people do is waste.
Even within successful businesses, teams spend months or even years building products that will ultimately need major refactoring, or be abandoned altogether when they meet real customer needs or scale. Many of those teams could have been saved if they had discovered key learnings before running out of runway.
The Lean Startup movement has taught us that the solution to these problems is to reduce the batch size of our work into smaller and smaller releases. In fact, every iteration we build should be the smallest thing that we can possibly build, measure with real users, and learn from for our next iteration (the build-measure-learn loop). Many Web 2.0 companies have mastered the process of validating key assumptions before scaling them up and changing the world.
There are many objections to this that have been raised in the blockchain community. Some have argued that slow release cycles are a feature, not a bug, because they prevent bugs from being shipped in systems that people depend upon. Others have argued that smart contracts themselves are (typically) non-upgradable, so they cannot be developed iteratively and must pass through a stage-gated QA process.
I believe both of these arguments are untenable, and that following them ultimately increases the risk of security flaws and wasted development effort. In reality, the larger a release becomes, the more difficult it is for a team to ensure it is free of security flaws, the more likely those reviewing its code are to make mistakes, and the more likely competing projects are to seize the initiative.
On the Panvala team, we’ve worked around these limitations by front-loading our riskiest assumptions (including contract work) on our story map, and by ensuring that contracts are open-sourced and audited sooner. Being transparent and creating this kind of feedback loop dramatically de-risks our contracts and the overall success of our ultimate launch product.
Whether or not a project currently holds a market-leading position, what matters in the long term is its pace of innovation, and the best path to increasing that pace is reducing the batch size of work so that the team can learn faster, remove waste, and reduce risk as quickly as possible.
If we’re going to radically change the world with the Ethereum blockchain, we have to make it a goal for ourselves to reduce the batch size of our work. We have to build smaller iterations that we test sooner, and allow our roadmaps to evolve in a living way.
We have powerful tools for achieving this. We have a culture of open source, where contracts can be open-sourced sooner and evaluated by the community iteratively as they are perfected. Design prototypes are easier to build than ever before, and we can involve real users to garner their insights. And the releases made to our open-source communities can be smaller, more frequent, more comprehensible, and validated with our users through rapid experimentation.
Let’s talk about how to accomplish that.
Story Mapping: the iterative backlog
Lots of teams have experience with building a backlog.
In the worst-case scenario, the backlog might look like a giant product requirements document, held statically for months or even years (often generated from a white paper that is treated as dogma).
The problem with the waterfall approach is that it results in the largest possible batch of work, deferring key learnings until the latest possible date. Sometimes this means that six months in, the team will discover a critical flaw that leads to a radical refactor. Even worse, a team might spend a year building a product only to learn that prospective users don’t see value in it, and that their efforts were wasted.
If we’re lucky, the backlog is instead the far-left column of a scrum/kanban board, and looks like a giant list of 200+ poorly written, overly prescriptive feature cards. As the product matures, the backlog grows. Stress among team members grows in proportion to the backlog as the team insists it doesn’t have enough resources to fulfill it, and people begin to feel they are failing in their roles.
The reality is that most of the work in an agile backlog should never be built. As the team gains key validated learnings from their users, many of those stories will prove to be the wrong stories, and a waste of time. Agile processes are incredible at enabling us to do more, faster. They do not, however, tell us which things are the right things to build.
Instead of lists of features or requirements, our base unit of work should be stories. Rather than abstract features, stories describe what the user is trying to accomplish, and empower the cross-functional team (software engineers, product designers, and other roles) to find the best way to satisfy the user’s need. Requirements and features as a framework misalign the team’s focus toward abstract productivity and destroy entrepreneurial cultures. Instead of abstractly doing lots of work, we should challenge each other to do the right work. There’s a lot more to be said about story-driven development that is beyond the purview of this article. To learn more, I strongly recommend Jeff Patton’s book User Story Mapping, which explains not only the stories themselves but also how to organize them.
That is where user story mapping comes in. A user story map is an iterative visualization of user stories, organized into successive MVPs. Each currently planned iteration is represented on the story map. The user story map also has some forcing functions to help us think iteratively rather than incrementally (more on that later).
Here is an example of what a user story map looks like:
This example is an email application. The red and orange cards on top show where each user story falls within the user’s journey through the product. The yellow cards vertically below them are the actual user stories for the team to implement. The user stories are divided into different releases so that the team can plan its iterations. Lots of teams build life-size physical walls with poster board and Post-its, but on the Panvala team we’ve opted for purely digital versions that suit our needs as a distributed team.
As you can see, the releases are organized into multiple very thin releases that persist across the application, rather than one release containing all of the cards in one column. This is because every release must be able to be built, measured by real users, and learnings must be gathered from that iteration. If we build all of the most complex details in one vertical column, we won’t be able to arrive at a usable product until all three of the releases are completed! Instead, by using user story mapping, the team is able to identify and create multiple successive iterations and get feedback from their users very quickly, using that feedback to adjust the future iterations.
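The slicing described above can be sketched as data. In this minimal illustration (the activities and stories are invented for a hypothetical email app, not taken from any real story map), each release cuts a thin layer across every activity, so every release yields a usable end-to-end product:

```python
# Sketch of an iterative story map for a hypothetical email app.
# Activities (the journey cards on top) run horizontally; each
# release slices a thin layer of stories across EVERY activity.
story_map = {
    "Release 1 (walking skeleton)": {
        "Read email":  ["View plain-text inbox"],
        "Write email": ["Compose plain-text message"],
        "Send email":  ["Send to a single recipient"],
    },
    "Release 2": {
        "Read email":  ["Search messages"],
        "Write email": ["Attach files"],
        "Send email":  ["CC/BCC recipients"],
    },
}

# Every release must cover every activity: an iterative slice,
# not an incremental column of one activity's hardest stories.
activities = {"Read email", "Write email", "Send email"}
for release, slices in story_map.items():
    assert set(slices) == activities, f"{release} is not end-to-end"
```

The check at the bottom is the forcing function in code form: a release that deepens only one column would fail it, because that release could not be put in front of real users as a whole product.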
You can see it even more starkly in this painting of the Mona Lisa. By building all of the most complex details of the painting before moving on to other parts of the application, the incremental team has deferred having something that can be consumed until the third cycle, whereas the iterative version created a usable sketch in the very first cycle, and was able to validate that sketch before iterating on the painting. Iteration IS how great paintings are created, by the way.
Going back to our user story map, each of the stories on this map is fully “groomed.” When we groom a story, typically someone on the product side tells the story of what the user is trying to accomplish. They write out a full story, like “As a (persona), I’d like to (action), so that I can (goal).” Each of those three parts gives the team context for what the user is actually trying to accomplish. Instead of writing requirements or prescriptive features, we’re empowering colleagues across the different functions of our team (designers, contract engineers, front-end engineers, etc.) to collaborate on the best way to fulfill the user’s story. We create the most value not by being prescriptive, but by building decentralized entrepreneurial cultures where the team is focused on value creation.
After adding the story, we add a description. The description is just that, a description sharing any specific details about the story, its context, things we know about it and why we think it is important.
Lastly, we add acceptance criteria. The acceptance criteria are purely technical needs like “use web3.js” or “needs test coverage”.
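Putting the three pieces together, a groomed story card can be modeled as a small record. This is an illustrative sketch only: the field names and the example story are our own invention, not a schema from any particular tool.

```python
from dataclasses import dataclass, field

# Illustrative model of a groomed story card: the story sentence,
# a free-form description, and purely technical acceptance criteria.
@dataclass
class Story:
    persona: str
    action: str
    goal: str
    description: str = ""
    acceptance_criteria: list = field(default_factory=list)

    def sentence(self) -> str:
        # "As a (persona), I'd like to (action), so that I can (goal)."
        return (f"As a {self.persona}, I'd like to {self.action}, "
                f"so that I can {self.goal}.")

story = Story(
    persona="token holder",
    action="vote on funding proposals",
    goal="direct donations to the most important work",
    description="Voting draws on tokens held in the token capacitor.",
    acceptance_criteria=["use web3.js", "needs test coverage"],
)
print(story.sentence())
```

Notice that nothing in the record prescribes a UI or an implementation; the sentence carries the user’s intent, and the acceptance criteria stay purely technical, leaving the how to the cross-functional team.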
The tool we’re using for our user story maps is Avion.io, by the way. It is by far the best user story mapping tool out there, and it integrates nicely with Trello and a few other products that teams are using.
Our white papers should be decomposed into their constituent stories and organized into an iterative backlog. Even better, we can identify which stories from our white papers represent our riskiest assumptions, and place those stories at the top of their respective columns so that we front-load risk and learn faster. As we garner insights, our white paper should reflect those learnings and be treated iteratively. Ultimately, our users care about the product being successful, not how well we adhered to our early assumptions and requirements.
Transparency and Predictability in Roadmaps
One of the main objections that I’ve heard many teams raise to an iterative backlog is that they’ve got agreements or expectations in place for their roadmap. Perhaps they’ve got a public roadmap, stakeholders or investors, or some other reason that a more variable roadmap seems counter-productive. How can we predict release milestones if learning will be constantly evolving our roadmap? It is easier than you think.
In a product’s development lifecycle, there are three factors: time, budget, and the product itself (aka scope). Waterfall processes typically try to hold the scope constant, while delaying the product endlessly and requesting more resources/time even before it is known whether the product will succeed. A great lean-agile product development flow does the opposite. Instead, we hold time and budget constant, and allow our scope to evolve iteratively as iterations reveal key learnings.
But don’t stakeholders become upset when the scope changes? No. Stakeholders become upset when you make things that suck or delay things endlessly. Instead, we keep a regular release cadence with broad categories. People are delighted because we consistently do the things that create the most value. The roadmap itself identifies key milestones, but not much detail about which stories each milestone will contain, so that the wrong expectations are not set. We also release often, and the more often we do, the faster we learn and the better our roadmap becomes. This lets data-driven decisions, rather than opinions, guide the roadmap.
We’re also able to predict with pretty high certainty whether our current plan for a release is on track. To achieve that, we do a few simple things:
First, we size the complexity of each story in our current sprint. We then take the average story size and project it onto the remaining number of stories in the release. A release might span multiple sprints in the quarter, but never make a release larger than one quarter. Finally, we place this projection onto a burndown chart (you can make one in Google Sheets or Excel; it is super easy). At the end of the sprint, we measure how many points the team was able to complete. Some teams have opted to skip story-size estimation, instead trying to create similarly sized stories and measuring stories completed. Both are powerful models.
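The arithmetic behind the projection is simple enough to fit in a few lines. The numbers below are made up for illustration; the point is the shape of the calculation, not the values:

```python
# Projection described above: average completed story size, applied
# to the stories remaining in the release, paced by sprint velocity.
completed_sizes = [3, 5, 2, 3, 5]   # points of stories finished so far
remaining_stories = 12              # unsized stories left in the release
velocity = 13                       # avg points the team completes per sprint

avg_size = sum(completed_sizes) / len(completed_sizes)   # 3.6 points/story
projected_points = avg_size * remaining_stories          # work remaining
sprints_left = projected_points / velocity               # pace to release

print(f"~{projected_points:.0f} points left, "
      f"~{sprints_left:.1f} sprints to the release")
```

Each completed sprint replaces guesses with measurements: `completed_sizes` and `velocity` get more data points, so the projection tightens as the release progresses.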
As additional sprints take place, our average velocity and average story size become more accurate (and also improve, because the team’s skill, knowledge, and collaboration all grow). All of these predictions and changes are used to update the burndown chart and give stakeholders clear transparency into how we are pacing toward the MVP.
So why use projections instead of sizing all of the stories planned for the larger release? Because if your release is larger than one sprint, those size estimates will be wildly wrong if made before the team has begun building. The team’s ability to size a story’s complexity increases iteratively over time, because they’ve begun implementing stories that touch other stories and have developed a clearer understanding of what is necessary.
The projections give us enough information to know whether we are trending toward a successful release on the date we had planned. If we are not, our user story maps make it very easy to move cards between releases. The scarcity of the release milestone is a forcing function to build the smallest thing that we will be able to release, measure, and learn from.
Decentralizing Cross-functional Innovation
As many people know, at ConsenSys there is a great emphasis on decentralization, not only in the products but also in how work itself should be conducted in the future. On the Panvala team and many other ConsenSys teams, we use Objectives and Key Results (OKRs) to set decentralized quarterly goals. For anyone unfamiliar with OKRs, I strongly recommend Google re:Work’s introduction to the topic.
Essentially, OKRs are a system for replacing managers and top-down bureaucracy with decentralized goals that are transparent amongst every team member. Colleagues can share feedback with one another about goals during the planning phase to arrive at a high level of cross-functional collaboration and understanding.
A small number of high-level objectives (typically three) are chosen for the whole team, and each is decomposed into three to five metrics that will tell us whether we accomplished the objective. We limit the number of OKRs as a forcing function to measure what matters most and eliminate distractions from our success. The OKRs are owned by the team as a whole, with individuals making personal OKRs to describe the specific tasks they’ll need to accomplish for the team’s OKRs to succeed.
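The structure and limits described above can be sketched as data. The objectives and key results below are invented for illustration (they are not Panvala’s actual OKRs); the checks at the bottom encode the forcing function:

```python
# Toy sketch of a quarterly OKR set. Objectives map to the metrics
# (key results) that tell the team whether the objective succeeded.
okrs = {
    "Launch a usable alpha": [
        "Ship a release at the end of every sprint",
        "Run 10 user feedback sessions",
        "Open-source and audit the core contracts",
    ],
    "Grow the donor community": [
        "Onboard 25 new donors",
        "Publish 6 articles on our development flow",
        "Hold 3 community calls",
    ],
}

# Forcing functions: few objectives, and 3-5 measurable key results each.
assert len(okrs) <= 3, "too many objectives"
for objective, key_results in okrs.items():
    assert 3 <= len(key_results) <= 5, f"{objective}: need 3-5 key results"
```

Keeping the whole structure this small is the point: anything that doesn’t move one of these metrics is a candidate for the waste pile.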
Writing a full guide to OKRs and how we integrate them with ConsenSys’ Future of Work software Sobol is a subject for another article. In the meantime, I recommend them strongly for those teams looking to deepen their cross-functional collaboration and accountability, while avoiding bureaucracy and centralization.
No matter where your team is in its product development life cycle, taking steps to reduce the team’s batch size and work more iteratively will improve that team’s product development flow, and produce rich learnings. While there are many tools and ideas presented in this article, you can always just experiment with one, and learn as you go.
We’re on the precipice of radically transforming the internet and many of the industries that depend on it, but there are many possible futures that could emerge. More centralized and less trustworthy blockchain solutions could gain traction if we don’t take deliberate steps to create excellent product development flows.
If your team is working through these challenges, I’d love to hear from you. Please join me in the comments or reach out to us at the Panvala team directly.