Intro to the Philosophy of Contributionism

Martin Rezny
Words of Tomorrow

--

Or a new way to work toward the common good (that I’m making up)


So, we had a first project meeting last weekend with Luke Macmichael and Nova MacCourt, as we’re trying to develop a new social media system that would allow for productive communication. Apparently, the mere idea of it was so effective that we kinda sorta invented a whole new philosophy of work, business, and social organization just by planning how to start.

The reason why we called it contributionism is that we very quickly arrived at the idea that what needs to be boosted on a communication platform, as well as incentivized above all within a working community, is contribution. For the work on the project itself, this means variable forms of contribution that are convertible into variable forms of compensation.

Specifically, we will track time invested, money invested, and impact achieved (as determined by voting among the project members), which will be transformed into a social credit or currency, which could then in turn be converted by each project member into either some type of monetary reward like wage, share, or dividend, or into boost for their communications, or into some other exclusive or scarce privilege.

As for the specific means of voting about the impact of someone’s contribution or value of someone’s idea, each project member will have a limited supply of support, or a kind of currency with which one can show that they support what any other project member is doing. On our platform “Supports” will replace “Likes”. Since support will be tied into social credit, or overall contribution of the person giving it, it will have value.
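To make the bookkeeping concrete, here is a minimal sketch of the ledger described above. Everything in it — the names, the relative weights of time, money, and voted impact, and the rule that giving support spends one's own credit — is a hypothetical placeholder for illustration, not a settled design:

```python
# A toy model of the contribution ledger: time, money, and voted impact
# earn credit; giving "Support" spends it, so support is never free.
class Member:
    def __init__(self, name):
        self.name = name
        self.credit = 0.0  # social credit earned through contribution

def record_contribution(member, hours=0.0, money=0.0, impact_votes=0.0,
                        hour_weight=1.0, money_weight=0.5, impact_weight=2.0):
    """Convert tracked time, money, and voted impact into credit.

    The weights are arbitrary stand-ins; balancing them is exactly the
    tinkering problem discussed later in the post.
    """
    earned = hours * hour_weight + money * money_weight + impact_votes * impact_weight
    member.credit += earned
    return earned

def give_support(supporter, author, amount):
    """A 'Support' transfers weight proportional to the supporter's credit."""
    spent = min(amount, supporter.credit)  # capped by what the supporter has
    supporter.credit -= spent
    author.credit += spent
    return spent
```

Because a Support is drawn from the giver's own earned credit, it carries value in precisely the sense described above: it costs the giver something.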

This should be able to solve a couple of major problems of existing systems of boosting or censoring communications, as well as the core problem with existing social credit systems. Currently, boosting of content or the rewarding of the content’s producers is either backed by feedback given with no personal investment into a constructive outcome and at no personal cost, or it is pay-to-win, where big money can always hijack the conversation. That’s why social media are largely unproductive and unfair.

As for existing social credit systems, with the main example being the one implemented in China, those get outright nightmarish. For starters, for a social credit system to be legitimate, it must not be punitive. Any system designed to assign penalties and take away privileges on the basis of breaching authoritarian social rules or not meeting quotas is outright inhumane. A sane social credit system must be a carrot, not a stick.

However, even in a reward-based, or positive, social credit system, the emphasis must not be on the superficial, arbitrary, personal, or social. Popularity isn’t inherently productive, constructive, or fair. The whole concept of “liking” tends to spiral into insanity because people haven’t earned their beauty, charm, or wit, because these goods aren’t judged objectively, anyway, and because they’re ephemeral and not productive.

If the ability of a participant in such a system to show support for the work of others is proportional to how much they have contributed toward a shared goal (and therefore proportional to their level of personal investment in a constructive outcome), the support given is unlikely to be random or frivolous, and is more likely to contribute to more constructive work and communication being done. Much like one tends to weigh their words a lot more carefully when they’re not anonymous in a conversation.

While the influence of paying to win or irrational popularity cannot be negated completely, it should be possible to keep them in check through the incorporation of objective measures like time worked or tasks accomplished, and by balancing the weight of directly earned credit by limiting maximum allowable financial contribution and the maximum weight of popular support. The precise optimal balancing may require some tinkering to figure out, but no single factor should be dominant.
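One simple way to express that balancing rule is to cap each source of credit before combining it, so that neither big money nor raw popularity can dominate. The cap values below are made up purely for illustration — finding the right ones is the tinkering the paragraph above refers to:

```python
# Hypothetical balancing of credit sources: money and popular support are
# capped, while directly earned (time/task) credit is not.
def balanced_credit(time_credit, money_credit, support_credit,
                    money_cap=100.0, support_cap=100.0):
    """Combine credit sources so that no single factor is dominant."""
    return (time_credit
            + min(money_credit, money_cap)      # pay-to-win is bounded
            + min(support_credit, support_cap)) # popularity is bounded
```

For example, a member who bought in heavily would see their financial contribution count only up to the cap, while their worked time still counts in full.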

How would it work in practice on our social media platform? Imagine a room with many thousands of people engaging in discussion on the same topic at the same time. Like a big company meeting. If every message were seen by everyone, the result would be chaos. If only a few had the right to speak to all, the result would be merely an elaborate announcement. In our scenario, the number of people reached would depend on support.

It would be quite analogous to how a large forum would work in physical space — regardless of how many people congregate in total, humans can only effectively debate in small groups. To scale that up, a series of steps is needed. If a small group agrees that some idea is worth sharing, the message will be relayed to several adjacent small groups. An idea that becomes the most supported in such a cluster of groups will be shared with other clusters, and so on. Through the currency of support, individuals with a lot of credit could compare to groups in their ability to boost ideas.
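The relay scheme above can be sketched as a single promotion step that runs at each tier: within every small group (and then within every cluster of groups, and so on), the most-supported idea moves up if it clears some threshold. The threshold and data shapes here are hypothetical:

```python
# A toy model of the relay: each small group promotes its top-supported
# idea upward, provided that idea clears a (hypothetical) support threshold.
def relay(groups, threshold):
    """groups: list of small groups, each a list of (idea, support) pairs.

    Returns the ideas relayed to the next tier: the single most-supported
    idea from each group whose support meets the threshold.
    """
    relayed = []
    for group in groups:
        if not group:
            continue  # an empty group relays nothing
        idea, support = max(group, key=lambda pair: pair[1])
        if support >= threshold:
            relayed.append((idea, support))
    return relayed
```

Scaling up is then just applying the same step again: the winners from adjacent groups form the next tier's "groups," and the relay repeats until an idea either stalls or reaches everyone.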

Again, the precise mathematics of this will need to be figured out through tinkering, but with a clear ultimate goal of achieving maximally productive or constructive communication. This means that reach or the boosting of the signal must not be possible to simply buy, or to be dominated by a few hyper-popular individuals. The algorithm also needs to be optimized to minimize noise, or unproductive or unconstructive communications.

Which brings us to the issue of censorship, or some possible constructive alternatives to how it’s currently being done on the internet. Just like support will be given weight through the individual’s credit, so will their reporting of inappropriate content be proportionally escalated. However, a middle option between supporting and reporting is also needed, to show one’s disagreement or disapproval, but it shouldn’t be merely suppressive.

Ideally, the negative feedback should factor into qualitative filters. Filters that will be optional for the users of the system. The technical term for this principle is data disaggregation, which is currently being used for example in news apps to filter the news by source bias. The negative or alternative feedback could therefore use message flagging or tagging, and should still be weighted based on the flagger’s or tagger’s credit. Other users will then have the option to filter out or sort various types of content as they see fit.

In short, the only reportable and outright censored content will be illegal content, the kind that incites or threatens violence, constitutes sexual harassment or explicit pornography, those sorts of things. Beyond that, the users should be able to flag and tag various degrees of merely distasteful, unconstructive, biased, or niche content, so that any user can apply a filter that will allow them to set their own custom content visibility preferences.
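The opt-in filtering described in the last two paragraphs might look something like this: a tag's weight is the summed credit of everyone who applied it, and a post is hidden only for users who chose to mute that tag and only once the weight clears a threshold. The tag names and threshold are invented for the example:

```python
# Sketch of optional, credit-weighted content filters. Tags never censor
# globally; they only hide content for users who opted into a filter.
def tag_weight(post, tag, tags):
    """Sum the credit of every member who applied `tag` to `post`.

    tags: list of (post_id, tag, tagger_credit) records.
    """
    return sum(credit for p, t, credit in tags if p == post and t == tag)

def visible_posts(posts, tags, muted_tags, threshold):
    """Keep a post unless some muted tag's credit-weighted total clears the threshold."""
    return [post for post in posts
            if all(tag_weight(post, tag, tags) < threshold for tag in muted_tags)]
```

Note that a single low-credit tagger cannot hide anything, while several high-credit members flagging the same post can — which mirrors how support itself is weighted.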

Also, in the spirit of community and trust-building, all content moderation rules need to be transparent. There should be no shadow-banning, opaque algorithms, confusing wording of user agreements, or hostile UI design. The users need to know what the rules are at all times, and anonymity should only be enforced or enabled when absolutely justified. Those who run a system that forms a community shouldn’t put themselves above it.

So, that’s the basic idea of contributionism, as a work model and as a communication environment. When I searched for the term after our meeting, the only relevant use of the word that I was able to find was in a title of a book about the African Ubuntu philosophy, which actually appears to be the closest existing socio-economic philosophy. It’s of course not the same in specifics, but working toward the common good is a big part of it.

Another interesting connection is that the word Ubuntu is best known as the name of one of the distributions of Linux, the open source operating system. Open source is indeed the next closest and most compatible philosophy to contributionism, given the fact that voluntary individual contributions are responsible for most of the open source code being written. However, the open source approach is only a foundation.

There have been some attempts to develop the open source model into more of a philosophy that would apply contributive principles to the structuring of whole organizations, mainly at Red Hat, the company where I used to work for five years. I highly recommend you read the book written by its ex-CEO, Jim Whitehurst (and a whole team of collaborators, obviously), called The Open Organization, which outlines the Red Hat way.

In short, the main contributive idea upon which Red Hat built its success is that all employees were continuously engaged in a company-wide discussion. The logic of it was that anyone can have the best idea, not just the people in leadership positions, so everyone needs to be able to offer their ideas in a public forum, even directly to the CEO. Even using just simple, unmoderated email technology, this did give Red Hat an edge.

The media platform we’re building is exactly in line with the open organization philosophy, but it should be a lot more powerful in fostering this type of high-value communication. The broader philosophy of contributionism, when applied to the organization of work and structure of an enterprise, should then remove the main limiting factor that held back even Red Hat — Red Hat was still a normal, big money-run corporation.

If one truly wants to follow the logic that anyone can have the best idea and that best ideas must be identified and pursued at all times, then a corporation that’s beholden to shareholders who don’t participate in its work can never fully do that. The same goes if a few executives have a completely disproportionate level of decision-making authority. A few people at the top making all the decisions can’t be agile, must be fragile, and certainly aren’t geared to always respect those who contributed most.

In economic terms, our enterprise model is cooperative, a kind of co-op, where anyone’s payout is proportional to the level of their contribution. There’s no justification for paying a CEO thousands of times more than people in other roles, with the sole exception of everyone deciding that the impact of the CEO’s contribution truly amounted to a thousands-of-times larger portion of the overall outcome. People who contribute more will have more authority, but likely not enough to overrule a large group of others.

So, that’s it, for now. What do you think? If you have any ideas you wish to contribute, please do. We’ll be posting video recordings of our planning sessions, as a form of public record, which I will link here and share on social media after posting them. At the moment, we’re trying to figure out the name for our communication platform, so feel free to suggest one. If you want to be part of this project, you’re certainly welcome. Let us know.
