The freeftopia Experiment - Can a Software Company Decentralize Itself and Succeed? Progress Report #1

Karine Durand-Garçon
Published in freeftopia
Oct 12, 2016 · 11 min read

The freeftopia project kicked off as a unique adventure in which we decided to open up and decentralize the operation of ftopia, an existing incorporated cloud storage company… by nature, a centralized organization. This post describes what we learned during the initial phase of our experimental freeftopia journey: April 18th to June 28th.

First, here’s how freeftopia is attempting to “decentralize” itself:

  • Decision-making is distributed according to meritocracy, and the project team welcomes any willing party to contribute
  • Revenue is distributed according to the value produced by each project contributor

Our aim is to build a boss-less, for-profit organization fueled by individuals or groups of individuals whose energy, the value they produce, is compensated by freeftopia. To decentralize its parent cloud-storage company, freeftopia offers a new way to contribute: take responsibility in collective projects, keep your own agenda, and remain free to engage or disengage without compromising the value produced for all.

Of course, our experiment faces formidable obstacles, for instance:

  • How do we remain nimble as a business when collective decision-making processes can be painstakingly slow?
  • How can part-time members and short-term/long-term contributors bring enough traction to the business so it can compete in such an active market as online storage?

For now, let’s set such questions aside as these challenges actually hide opportunities that freeftopia endeavors to explore.

With that, we are optimistically moving toward resilience…liftoff in 10, 9, 8…

Where are we now?

Since last March, freeftopia has been organizing itself through several work sessions, which are detailed below.

Meeting of April 18th: we vote for meritocracy and our need to experiment

This first meeting focused on theoretical approaches to contribution mechanisms; plus, it was the first time that team members met in real life.

From this initial work session, all project members identified how contribution mechanisms would determine both monetary payments and the balance of control within the organization. These mechanisms are both central and sensitive matters because they touch individual egos and carry cultural biases. Contribution assessment goes beyond mere tracking mechanisms: it stirs desires, fears and emotions.

Nonetheless, we agreed (at least for now) to adopt meritocratic metrics, though this will very much remain a work in progress and a continuous topic of discussion. To start out, all project members agreed to a common experiment: let’s walk the talk and carry out an assessment of our own contributions as objects, as well as of ourselves as subjects. Operational project issues will allow us to field-test our theoretical ideas…

Meeting of May 16th: an experimental assessment of real contributions

Our meeting took place at 18:00 at the 10:10 co-working cafe (yes, we know, either an odd place or an odd time) and Philippe set the scope in his introduction to the group:

“Our goal is to establish an open organization that can welcome any newbie, the same way that the Facebook, Airbnb and Uber platforms do. No credentials should be needed to start interacting with a platform. We need new ways to operate from a business, an organizational and a technological perspective. We also aim to share management tools, ideas and field-tests with the wider community.

When a newcomer makes contact with our project as a potential contributor, answering a “what do I do?” call is not always straightforward. I put together a rough roadmap, then others thought about KPIs, measurable success indicators that tell whether a goal is reached, such as a 100% increase in revenue. However, our experiment is at such an early stage that it is difficult even to know what to measure.

Even so, it is important to measure the value we create together and how we each contribute individually. As we “open” our experiment to any passer-by, “meritocracy” gives everyone an influence proportionate to their contribution to the joint venture.

Maybe some of you have read the article by OuiShare about their experiment with decentralization based on the Backfeed technology. The aim was to use technology tools to foster consensus within the group in charge of defining the OuiShare Fest program. It proved a difficult experience, mainly due to human factors.

What the OuiShare paper says is that technology tools designed to facilitate decentralization can prove counterproductive to the initial expectations. With Backfeed, team members were asked to rate the work or ideas of others. Assessment is time-consuming and places a hefty toll on top of the real work. The assessment process exposes us to the judgment of others and can feel unfair or unclear, which leaves some participants feeling bad. What happened during the OuiShare experiment is probably a matter of context more than a problem with the Backfeed technology itself. If freeftopia is looking to replace hierarchical control with collective assessment, we all need to work upstream on evaluation methods.

Taking an experimental approach, we agreed to define the most useful activities to bring our project forward, work to accomplish them, and then share our expectations and needs in terms of evaluation. Those who take on a task will themselves tell the group how they would like their work to be evaluated and rewarded.”

The second half of the work meeting was dedicated to brainstorming, from which we gathered a set of 26 possible contributions, sorted into several categories.

Through the Stormz collaborative workshop platform, project members then took part in a collaborative vote to determine which contributions seemed most important to the group.

The Stormz session triggered the setup of a complete set of online tools to maintain work momentum and execute the chosen contributions. These tools include: Slack, Part-up, Google Drive, GitHub, and the appear.in videoconferencing platform.

A month later…

Meeting of June 28th: structured feedback

The goal of this meeting was to examine the contribution assessment model and ideally move on to a real-life implementation.

Out of the 26 documented contributions, the group achieved seven, two of which were thoroughly examined during the two-hour meeting:

  • The design and roll-out of a bug fix for the existing ftopia product
  • The summary of a book to categorize different types of governance. This summary was the material for Mathieu’s presentation at the Blockfest conference (more about his presentation below).

Assessing the “bug fix” contribution

A fair approach would be to assess a ‘bug fix’ as a single bounty and then break down the calculated amount among the contributors; for example: the person who programmed the bug fix, another who reviewed the code, another who tested the fix, and finally the person who rolled out the fix into production. Everyone on the team could assess the credit due to each contributor, and the final allocation of the reward would be based on the average of all assessments. Our approach is similar to the Cocoon Projects approach (we actually considered using their tool as well as their LiquidO framework).
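
To make the averaging idea concrete, here is a minimal sketch. The assessor names, the role breakdown and the 300-unit bounty are hypothetical placeholders, not values the team actually used; the real split would come from the team’s own assessments.

```python
# Minimal sketch: everyone proposes a percentage split, the final
# allocation is the average of all proposals applied to the bounty.
from collections import defaultdict

BOUNTY = 300.0  # hypothetical bounty for the whole bug fix

assessments = {
    "assessor_1": {"coding": 50, "review": 20, "testing": 20, "deployment": 10},
    "assessor_2": {"coding": 60, "review": 15, "testing": 15, "deployment": 10},
    "assessor_3": {"coding": 55, "review": 20, "testing": 15, "deployment": 10},
}

def average_split(assessments):
    """Average the proposed shares, role by role."""
    totals = defaultdict(float)
    for proposal in assessments.values():
        for role, share in proposal.items():
            totals[role] += share
    return {role: total / len(assessments) for role, total in totals.items()}

def allocate(bounty, assessments):
    """Turn the averaged shares into amounts."""
    return {role: round(bounty * share / 100, 2)
            for role, share in average_split(assessments).items()}

print(allocate(BOUNTY, assessments))
# {'coding': 165.0, 'review': 55.0, 'testing': 50.0, 'deployment': 30.0}
```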

Another idea we developed was to build a “bug fix” template that provides guidance on how to allocate a bounty using default values. Such a template would offer an evaluation grid, allowing the assessor to determine a value for each part of the work in a standardized way. The template splits the bounty through “points” allocated differently to the corrective implementation work, the code review, the manual testing, and finally the production deployment.

This kind of template simplifies the evaluation: the “bug fix” assessment can focus on the contributions whose default values are not fair.
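
As a rough illustration of what such a template could look like, here is a sketch with hypothetical default point values; the parts and the numbers are assumptions for illustration, not the grid the team settled on.

```python
# Sketch of a "bug fix" evaluation template with default point values.
DEFAULT_BUG_FIX_TEMPLATE = {
    "implementation": 50,   # the corrective work itself
    "code_review": 20,
    "manual_testing": 20,
    "deployment": 10,
}

def split_bounty(bounty, template=DEFAULT_BUG_FIX_TEMPLATE, overrides=None):
    """Split a bounty according to template points.

    `overrides` lets assessors adjust only the parts whose default values
    do not feel fair, which is what keeps the evaluation lightweight.
    """
    points = dict(template)
    if overrides:
        points.update(overrides)
    total = sum(points.values())
    return {part: round(bounty * pts / total, 2) for part, pts in points.items()}

# Default split of a hypothetical 300-unit bounty:
print(split_bounty(300))  # {'implementation': 150.0, 'code_review': 60.0, ...}

# Adjust only the review share when its default feels unfair:
print(split_bounty(300, overrides={"code_review": 35}))
```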

As a developer, Alexander N. was new to the ftopia software project, so he spent some time learning a large part of the ftopia codebase before working through the bug fix. As a consequence, the project team raised the issue of the relationship between effort sharing and value sharing; for instance, should a person collect more points when the contribution requires a longer learning curve? For comparison, the wages of permanent employees also cover training and education time, so it is important to consider how to encourage contributors to invest some training time outside of the assessed task. Even if contributors have their own reasons to put in extra time (self-improvement, learning and mastering new things, etc.), shouldn’t we include part of this time when assessing contributions?

As the meeting progressed, it became clear that we need to understand who determines the distribution of point values in a template. Should we engage an outside expert? Can we have self-declared experts within the group? Should we allocate points collectively through a voting system?

Some in the group were reluctant to use assessment templates because of their rigidity. Nevertheless, the group reached a consensus that fairness needs to be guaranteed by a balanced system for remunerating contributions.

The following video provides a short diversion regarding the subject of fairness:

Our Conclusions So Far…

Conclusion 1: Legitimate evaluators and evaluation methods

The identity of the evaluators, the potential “templates” and the rules of the game must be accepted, or even chosen, by those who carry out the contributions. The “templates” are a nice idea to simplify evaluation, provided contributors do not reject them.

Photo Credit: Abel Orain

“Categorizing governance systems”: our second contribution

As mentioned earlier, Mathieu G. voluntarily worked on a significant governance contribution as part of a presentation at the Blockfest conference. It is worth noting that he said that if he had to evaluate someone else doing the same activity, in terms of usefulness and difficulty, he would probably grant a greater reward than he would claim for himself.

Conclusion 2: Cost and usefulness are two perspectives of contribution assessment

The value of a contribution is based on its impact, that is to say its usefulness for those who receive it, but the cost of producing it also matters. Something useful may have been produced, but if the cost exceeds the usefulness, that needs to be taken into account.

Mathieu suggested that it is unrealistic to model the cost factors of contributions without running into the problems identified by OuiShare during their experiment.

He suggested another approach: all participants hold tokens and assign them to the projects on a marketplace. The more mission-critical a project is, the more tokens the group will give it. When a potential contributor feels that the token appraisal is worth it, he or she asks for the assignment and carries the project through to the end before earning the tokens attached to it.
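
A minimal sketch of such a token marketplace is shown below. The project name, token amounts, contributor name and the whole API are assumptions made for illustration only, not a specification of the mechanism Mathieu proposed.

```python
# Sketch: the group stakes tokens on projects; a contributor claims one
# and earns the staked tokens only once the project is completed.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Project:
    name: str
    tokens: int = 0               # tokens the group has staked on this project
    assignee: Optional[str] = None
    completed: bool = False

@dataclass
class Marketplace:
    projects: Dict[str, Project] = field(default_factory=dict)
    balances: Dict[str, int] = field(default_factory=dict)

    def propose(self, name: str) -> None:
        self.projects[name] = Project(name)

    def stake(self, name: str, amount: int) -> None:
        # The group signals how mission-critical a project is with tokens.
        self.projects[name].tokens += amount

    def claim(self, name: str, contributor: str) -> None:
        # A contributor who finds the appraisal worth it asks for the assignment.
        if self.projects[name].assignee is None:
            self.projects[name].assignee = contributor

    def complete(self, name: str) -> None:
        # Tokens are only earned once the project is carried through to the end.
        project = self.projects[name]
        project.completed = True
        self.balances[project.assignee] = (
            self.balances.get(project.assignee, 0) + project.tokens
        )

market = Marketplace()
market.propose("categorize governance systems")
market.stake("categorize governance systems", 120)     # the group stakes 120 tokens
market.claim("categorize governance systems", "mathieu")
market.complete("categorize governance systems")
print(market.balances)  # {'mathieu': 120}
```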

Conclusion 3: Assessment can be made before or after the project is complete

Evaluation as the project begins is like letting the market establish a price, based on the perceived usefulness of the contribution. After-the-fact assessment relies much more on the cost factor and may result in a judgment skewed by what we know about the people who contributed.

A further question arose: who evaluates the quality of project deliverables, and how do we agree to accept them? The group concedes that this will remain an open question until we find and agree upon a fair process.

Lionel mentioned that the real impact of a contribution may only be observed long after the delivery of the task, so assessment results should be subject to successive re-evaluations over time. This process would give a boost to initiatives that would not necessarily be recognized at their fair value at the beginning; in other words, contributors could “ripen the fruit” of their efforts, as it were, at a later time.

Conclusion 4: Continuous assessment of contributions

A contribution can be evaluated and re-evaluated several times. It is not necessarily appropriate to evaluate it once and for all at a given delivery milestone. The impact of an action can be felt over the long term, and recognition of the value created must be able to accrue at any time.

Philippe noted that the process described by Mathieu is limited to actions initiated by a group vote, following the planned allocation mechanism. When an opportunity is not yet well understood, this process can make it difficult to welcome new and original contributions until the facts prove their value.

Conclusion 5: Encourage risk-taking

Within an open organization, a planned allocation process does not favor bold and risky contributions. After-the-fact and continuous evaluation methods are a useful complement in this case.

Since the risk of making a contribution is borne by the contributor and not by the organization, several questions arise: How do we encourage contributors to take risks? Will people contribute given the risk of not being paid? Will the risk (at minimum, the time spent) be factored into the remuneration of the contribution?

Alexander H. suggested using the model of prediction markets (futarchy) to get the best of both before-the-fact evaluations (betting on the value of contributions) and after-the-fact assessments (betting on the outcome once the contribution has been delivered).
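
To give a feel for the idea, here is a heavily simplified, parimutuel-style sketch of a yes/no bet on a contribution’s outcome; it is not a full futarchy mechanism, and the participants, stakes and target below are made up.

```python
# Simplified bet: stake tokens on whether a contribution will meet its target;
# winners split the losing pool in proportion to their own stakes.

def settle(bets, outcome):
    """Settle a yes/no market. `bets` maps participant -> (side, stake)."""
    winners = {p: stake for p, (side, stake) in bets.items() if side == outcome}
    losing_pool = sum(stake for _, (side, stake) in bets.items() if side != outcome)
    winning_pool = sum(winners.values()) or 1  # avoid division by zero
    return {p: round(stake + losing_pool * stake / winning_pool, 2)
            for p, stake in winners.items()}

# Before the fact: bet on whether the bug fix will halve the related support
# tickets within a month (a hypothetical, measurable target).
bets = {"alice": ("yes", 10), "bob": ("no", 5), "carol": ("yes", 20)}

# After the fact: the outcome is observed and the market is settled.
print(settle(bets, "yes"))  # {'alice': 11.67, 'carol': 23.33}; bob loses his stake
```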

Conclusion 6: Appeal and attractiveness first

What is our common motive? It is time to gather around what makes sense for us.

During the final meeting, we recognized that:

  • The attractiveness of the community that we are now building takes priority over the assessment of contributions. We must invest as much in motivation as we do in evaluation and reward mechanisms.
  • We will continue experimenting and we will retain a flexible approach. We will find and correct our mistakes as we clear the way.

To move beyond the many questions that have arisen over the past several months, we have committed to more practical objectives by the end of 2016. These include:

  1. Adopt a legal framework to automate the decentralized allocation of remuneration; we have begun to address its practical accounting and legal issues
  2. Model and start implementing a contribution submission process, possibly making use of crypto tokens
  3. Open the ftopia product even further, by making the application source code widely available under a free software license

Would you like to join the ride?

We would be delighted! We’re looking for finance specialists, coders, lawyers, designers, communicators, growth hackers, and any other talents that we might not yet know we need… just come as you are! Send an email to free@ftopia.com.

Thank you to Philippe Honigman, Christophe Gauthier, Alexandre Narbonne, Mathieu Galtier, Francoise Arbelot, Lionel Auroux, Victor Vorski, Simon Polrot, Bill Rice, Bertrand Fritsch, Alexander Hajjar, Louis Margot Duclot, and Will Schiller for your valuable input, which collectively led us to this waypoint. It is a tremendous joy to discover with you these new ways of delivering value.

Special thanks to Bill Rice and Christophe Gauthier for translating and rewriting this English version.
