How Agile and Open Source work together in (nearly) perfect harmony
This article is based on the talk I gave for the Red Hat Agile Day in Raleigh on October 11, 2016.
The conversation about agile and Open Source usually starts with an objection in this form:
Agile will not work in an Open Source context because…
That’s usually how the conversation starts, and that’s the motivation behind this talk: I feel there’s a lot to learn there, especially if we continue to listen after the word “because”.
One example of possible cross-fertilization between Open Source people and Agile people was given by Jim Whitehurst during the opening keynote.
Jim briefly presented the Open Decision Framework, which is meant to help make decisions in an Open Source way. It’s on GitHub for you to try, learn from, and modify!
This presentation was intended to discuss the collaboration mechanisms in open source software development and how those mechanisms are evolving to tackle more complex problems.
If we want to simplify the explanation, we could say that one individual has a problem. He or she will build a piece of software to tackle this problem. She will do that openly, because it’s built on top of other open source software, or it works with other open source software, or maybe because she wants to enable people who have the same problem to use the software, to look at how the software is made, and even to contribute to it. Contributions could be comments, bugs, suggestions for improvement, or even patches…
That’s how collaboration works!
People have problems, they solve their own problems by writing software, and others with the same, or nearly the same, problems will use and even contribute to the software that solves those problems.
And that’s how it works, and continues to work, for a lot of software. We can see that some projects have very few maintainers (sometimes just one) even when the number of users is huge.
For a lot of software, the developers are the main users: real users, with real production needs.
So when you are trying to tackle more problems at the same time, you will need to find a way to foster contributions to avoid the “only one maintainer” risk for your software (which could be really dangerous if you plan to use it in production, distribute it, or support it).
How to contribute; how to recognize small contributions to encourage the next steps; how to announce your intention early so your idea, and its implementation, can be challenged, so that in the end your code is better, accepted, and merged; how to avoid the counterproductive effect of giving too much power to the people who were there at the start; and so on…
I would recommend The Art of Community by Jono Bacon as a starting point for studying that 🙂 or to look at all the resources available on OpenSource.com to learn more about Open Source. I am adding that especially because, after the conference, I got three questions that started with: “I didn’t ask during the Q&A session because I feel I don’t know enough about Open Source…” Each time, the question that followed was great, and I would have loved to hear it during the Q&A session :).
Even so, sometimes people submit code without discussion, expecting that a good solution will win the support of other people (sometimes it works, and sometimes it doesn’t).
All those aspects of fostering collaboration are important, and we can expect that projects that forget to establish good collaboration rules, a “no assholes” rule for example, will probably not stay relevant for long.
The diversity of the community is one of the important factors that improve the relevance and the quality of the solution being built. If we are all the same, all similar, we will probably rally easily around one point of view. By having different points of view on the problem we are trying to solve, we can improve the solution built to solve it.
The developers are not the users anymore, and we can see that when we look at the comments of people who attempt to use the first release of some software:
- Impossible to install
- Impossible to update or upgrade
- Impossible to debug
- Impossible to operate
The challenge is to welcome the voices of the real users, who are unable to contribute code to the software, and to give contributors the ability to become proxies for the real users of the product, so they can improve the conversations they have about the problem they are trying to solve and about the implementation strategy to solve it. That’s where the role of contributors who work for companies is especially important: their users, customers, and partners are the real users of the software, and they need to be the proxy voices of those users.
And this could change the way we envision the open source model. It’s not only solo engineering work…
In 1986, Takeuchi and Nonaka studied the teams that were building highly successful products that disrupted well-established products on the market. Their study, published in the Harvard Business Review, is titled The New New Product Development Game.
In it, Takeuchi and Nonaka examined what enabled those teams to build products that could take over the market because they were really better than the others.
What did those teams that built disruptive products have in common?
According to the article (not a long one, you should read it), here are some of the important aspects:
- Built-in instability: Broad goal, Strategic importance, Challenging requirement, Funding and Freedom
- Self-organizing project teams: Autonomy, Self-Transcendence, Cross-Fertilization
- Overlapping development phases: It’s not a relay team, it’s a rugby team (yes, they used the term scrum for the first time to refer to a team, and that’s what inspired others to create an agile methodology with the same word). Each team member feels responsible for, and is able to work on, any aspect of the project. Cross-functional teams, small teams (under 12 people to enable direct communication), end-to-end responsibility to deliver
- “Multilearning”: Multilevel learning, Multifunctional learning
- Subtle control: Selecting the right people, Creating an open work environment, Encouraging team members to go to the field, Establishing an evaluation and reward system based on group performance, Managing the differences in rhythm, Tolerating and anticipating mistakes, Encouraging suppliers to become self-organizing
- Organizational transfer of learning
So let me go back to the challenge: welcoming the voices of the real users, who are unable to contribute code to the software, and giving contributors the ability to become proxies for the real users of the product, so they can improve the conversations they have about the problem they are trying to solve and about the implementation strategy to solve it.
One solo engineer, in charge of one technical component, will not be able to listen to the real users’ voices to fix problems that are more complex than those solved by one individual technical component. And the real users will not necessarily be able, or willing, to assemble the components by themselves.
The classic way to organize contributors, by grouping them in technology areas that make technical sense, limits the ability of the technology to solve needs that cross those groups.
One diverse, cross-functional team, responsible end to end for the delivery of the software, could welcome the real users’ voices. And it could solve more complex, cross-technology problems than those solved by one technical component.
How diverse? How cross-functional? What will end to end really mean in a specific context?
Let’s imagine that we answered those questions in one specific context.
This diverse team will work together to shorten the feedback loop so they can listen to their users. That is the try-learn-modify aspect of the model that Jim was referring to during the morning keynote.
There are several aspects. The first one is being able to listen to the real users’ voices (and not to mock them, or to prove them wrong).
That means that your product managers, or your product owners, are members of the team. And that’s probably the area where there are the most differences between “standard” agile practices and what open source projects are doing. This is probably the area where we could import and adapt a lot of practices that help us understand what a user wants and why.
That also means that we will benefit from having team members who are in direct contact with customers from time to time… in the field…
The second one is to improve the whole production workflow in order to be able to deliver a new release of the software, so users can test the idea, give feedback, and so on…
That’s challenging when you want, at the same time, to deliver new features and to offer long-term support.
That’s challenging when a large part of the production workflow is shared upstream with other contributors, who may come from other companies.
We need at this point to recognize that improvements downstream will be meaningless if we are not solving the problems upstream.
And that could prevent us from delivering value to our users as often as we need to.
And maybe that’s challenging because we need new features to become available continuously in existing software, and we don’t want new releases with painful updates and upgrades.
This last aspect demonstrates why end-to-end responsibility needs to include aspects that are important for users, like install, update, upgrade, debug, and operation, and why those aspects should not be delegated to another team.
It also shows how the architecture of the product impacts the potential value we can deliver.
So far, we covered three aspects: what features users need and why, how we are building the product, and how the features are implemented.
On all those aspects, we need to reach a shared understanding and agree that we will try-learn-modify on each of them.
Those three aspects can be formulated as value, people and process, and technology.
On understanding the value, practices like story mapping, impact mapping, and backlog refinement can be highly valuable.
On the people and process part: retrospectives are the first practice that comes to mind; value stream mapping and the theory of constraints follow directly, as they help bring the systemic sense that is absolutely needed.
On the technology part, I will just reinforce the need for a really modular technology, the need to formalize the architecture principles according to the business objectives, and the need to understand the tight connection between those two.
On all those aspects, there are a lot of agile practices that could be (and are) used in open source projects. I don’t mean a specific framework; I mean specific practices that solve specific problems and fit the distributed organization of open source projects. That also means that a lot of experienced agile practitioners could bring a lot of value to the Open Source world (we need you, and we are hiring 😉).
What I understood is not what you understood, and that’s not because one of us is wrong.
It’s not a solo work, and it’s important to invest in a shared understanding as a diverse team to be able to tackle more complex problems faster.
The three aspects need to be taken care of independently:
- You need to invest time to understand the value, the benefit for users, as a diverse team.
- You need to challenge the implementation of the ideas that will bring this value to users as a diverse team.
- You need to reflect on your organization and processes as a diverse team.
The three aspects influence each other.
We tend to think that the value will be directly represented in the technology, and to forget about the influence the three aspects have on one another.
If we have a fixed waterfall release schedule, we will have a big planning session upfront, and people will tend to force all their ideas into it in the hope that they will come out of the process at the end.
If we have a siloed organization with a lot of handoffs, that will have an impact on the value we can deliver with the technology.
If we define in our value that there’s a need for non-disruptive upgrades, that will have a huge impact on our ability to use a better or newer technology to solve our problems.
To come back to the questions I asked before: how diverse, how cross-functional, and how end to end should the team be?
I would say that the answer is: as much as we can. It will have an impact on our understanding of the value we can bring to users; our structure will define the structure of the technology and the way it can evolve in the future.
The open source way continues to evolve, and cross-fertilization with agile practices benefits both.
Originally published at alexis.monville.com on October 12, 2016.