The Effective Altruism movement is not above conflicts of interest

Sven Rone
Sep 1, 2022


Sam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donor to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism.
By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donors, instead of relying on a careful and rational examination of the quality and merits of these ideas. Yet the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.

In practice, Sam Bankman-Fried has enjoyed highly favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest when it comes to communicating about Sam Bankman-Fried and his professional activities. This raises obvious questions regarding the trustworthiness of 80,000 Hours’ coverage of Sam Bankman-Fried and of topics linked to his interests (quantitative trading, cryptocurrency, the FTX firm…).

In this post, I argue that the Effective Altruism movement has failed to identify and publicize its own potential conflicts of interest. This failure reflects poorly on the quality of the standards the Effective Altruism movement holds itself to. Therefore, I invite outsiders and Effective Altruists alike to keep a healthy level of skepticism in mind when examining areas of the discourse and action of the Effective Altruism community that are susceptible to being affected by incentives conflicting with its stated mission. These incentives are not just financial in nature; they can also be linked to influence or prestige, or emerge from personal friendships and other social dynamics. The Effective Altruism movement is not above being influenced by such incentives, and it seems urgent that it act to minimize conflicts of interest.

Introduction — Cryptocurrency is not neutral (neither morally nor politically)

Even though they are not the product of a single unified vision, cryptocurrency industry projects are generally rooted in the same broad set of political ideas and values. Cryptocurrency is not simply an attempt to provide a set of technical solutions to improve existing currency systems. It is an attempt to replace existing monetary institutions with a new political system; it is therefore political at its core.

A core tenet of cryptocurrency is decentralization (although in practice it is often organized around centralized actors). This decentralization is meant as an opposing alternative to hegemonic national currencies, which are themselves perceived as being controlled by central banks. The development of cryptocurrencies is a statement of defiance against the state and its government, and an action that aims to undermine their power. The founding principles of the cryptocurrency industry have their roots in anarcho-capitalism, and much of the crypto industry today is infused with this ideology. In its applications, crypto carries deep implications for the organization of society and for democratic processes. The crypto ecosystem naturally favors a plutocratic decision-making process: power is based on wealth, and the rich wield decision-making power proportional to the magnitude of their wealth.
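To make the plutocratic dynamic concrete: most on-chain governance systems weight votes by token holdings ("one token, one vote") rather than by head count. A minimal sketch, with purely hypothetical names and numbers (not any real protocol's code):

```python
# Illustrative "one token, one vote" tally: each voter's weight is
# their token wealth, so decision power scales with holdings.

def tally(holdings: dict[str, int], votes: dict[str, str]) -> dict[str, int]:
    """Sum voting power per option, weighting each voter by token wealth."""
    totals: dict[str, int] = {}
    for voter, choice in votes.items():
        totals[choice] = totals.get(choice, 0) + holdings.get(voter, 0)
    return totals

holdings = {"whale": 1_000_000, "alice": 10, "bob": 10, "carol": 10}
votes = {"whale": "reject", "alice": "approve", "bob": "approve", "carol": "approve"}

print(tally(holdings, votes))
# → {'reject': 1000000, 'approve': 30}
```

One large holder outvotes the entire rest of the electorate, which is the sense in which such governance is plutocratic by construction.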

As critics of the crypto industry describe it (see for instance the excellent videos by Dan Olson and münecat, as well as the Crypto Critics’ Corner podcast hosted by Bennett Tomlin and Cas Piancey), and at the risk of oversimplifying, a large part of cryptocurrency aims to build an almost neofeudalistic society in which early adopters reap a rent provided by the rest of the population, the latecomers. Proponents of cryptocurrency, on the contrary, would likely argue that crypto is a fantastic means to rebalance power dynamics in our damaged democracies in favor of the People. Crypto is then seen as a tool to reclaim monetary decision power, by means of a decentralized and censorship-resistant system meant to protect citizens from the arbitrary power of corrupt governments.

My point here is not to debate the virtues of the societal model promoted by cryptocurrency actors, but rather to convince readers unfamiliar with the cryptocurrency industry that it is deeply infused with political ideology and is certainly not a purely technological response to a technical problem. The cryptocurrency response to monetary policy questions manifests a specific worldview accompanied by a specific set of moral values.

EA’s reliance on funding from the cryptocurrency industry

A major part of the funding pledged to organizations belonging to the Effective Altruism (EA) ecosystem comes from Sam Bankman-Fried alone.
Sam Bankman-Fried (often referred to as SBF) has been involved in EA since 2014. He founded Alameda Research, a successful trading firm specialized in crypto markets, and then went on to found FTX, one of the major cryptocurrency exchanges.

SBF’s wealth is estimated in the tens of billions of dollars, and he has pledged to donate essentially all of it to EA-aligned causes. He has already made substantial donations (around $50–100M according to him in his recent 80,000 Hours Podcast interview). SBF’s total wealth, at its peak, was estimated to be up to $24B. After the recent crypto market crash, his wealth is likely closer to $10B. Total funds pledged to EA-aligned causes are in the $30–50B range according to Will MacAskill. With a total fortune pledged of more than $10B, SBF therefore represents a massive fraction of EA’s funding prospects (roughly a fifth to a third of the total), and the recent halving of his net worth represents a significant decrease in expected funding for EA.

A direct consequence is that EA organizations benefit from the overall strength of crypto markets: the higher crypto asset prices are, the higher SBF’s wealth is, and the more funding EA organizations can receive and expect to receive in the future. This is not even counting the many other wealthy individuals from the crypto industry who are expected to donate to EA causes. This alone could raise some eyebrows: Effective Altruism, a philanthropic movement which, on the surface, has nothing to do with the profit-driven crypto industry, has a direct interest in seeing the value of a particular class of financial assets rise. When the political ideals carried by cryptocurrencies are also considered, EA’s interest in the crypto market should probably do more than just raise eyebrows. It seems worth asking whether the values and worldviews promoted by the cryptocurrency industry align with those of the Effective Altruism movement.

As it stands in 2022, the EA ecosystem has a vested interest in the mass adoption of cryptocurrencies, in the implementation of permissive regulatory frameworks beneficial to the crypto industry, and in the enactment of generous taxation frameworks for crypto transactions and crypto assets. Furthermore, EA has a direct incentive to promote cryptocurrencies and to align its discourse with that of major crypto players and particularly with SBF and his company FTX.

These incentives are not just monetary. As the crypto industry grows and SBF gains in wealth, influence and prestige, EA benefits not only by receiving more funding but also by extending its own influence and prestige. Conversely, attacks on the image of SBF, FTX, and even crypto as a whole carry the risk of tarnishing EA’s reputation. Were SBF to be involved in an ethical or legal scandal (whether in his personal or professional life), the EA ecosystem would inevitably be damaged as well. As a result, the EA community has an incentive to protect SBF’s reputation, to counter criticism of him from the outside, and to stifle criticism from inside the community (this incentive can act between EA members, with voiced criticisms being ignored, downplayed, or treated with distrust, but it can also act via self-censorship, conscious or not).

How EA views cryptocurrency

As an outside observer, it is difficult for me to know and precisely describe how the crypto industry is actually perceived by members of the Effective Altruism ecosystem. The incentives described above are most likely at play, but they are not the only relevant factors, and it could very well be that most EA members share a negative view of cryptocurrency. From my limited perspective, I cannot tell with certainty whether the cryptocurrency industry is something the EA community 1) mostly does not think about, 2) has reached a consensus on, or 3) is heavily debating.

Nevertheless, judging from the few EA forum posts I could find on the subject (like this, this, this, or this), cryptocurrency appears to be viewed positively but does not appear to generate a particularly high level of interest in the community at large. Posts I read that engaged in critical discussions remained at the surface level (in my opinion) and focused on questions such as the energy use of cryptocurrencies (discussed in part of this post), or the risk to EA funding posed by falling prices of crypto assets (e.g. here).

Given that the adoption of cryptocurrency has massive political implications for the future of our societies and rests on very strong ideological foundations, it may at first glance seem surprising that the EA community does not visibly engage critically with this topic on a deeper political level. But this is less surprising when considering EA’s reliance on crypto wealth for funding. As explained above, the EA community is powerfully primed to view cryptocurrency positively, if only by the direct financial benefits it collects from the industry. Moreover, the incentives at play are likely effective inhibitors of contrarian views (notably by means of self-censorship).

Again, I do not claim to know the state of the internal debate on crypto within the entire EA organization (or on any other topic). Still, if a contrarian debate has happened or is still ongoing, there is no indication of it in EA’s high-profile publications and forums (for instance on the 80,000 Hours website or the EA forum).

EA’s ineffective mechanisms to protect itself against conflicts of interest

Conflicts of interest feed our cognitive biases and prompt us (consciously or unconsciously) to align our actions and beliefs with the incentives we are subject to. It is therefore crucial to implement systemic safeguards against them, especially when our goal is to promote rational, minimally-biased thinking. EA claims to aim to do “the most good” using the tools of rationality and critical thinking. So what does the EA ecosystem do to mitigate the risk that EA members act according to bias-inducing incentives?

As far as I can tell, the systemic safeguards against conflicts of interest in the EA ecosystem are very limited. From my outsider perspective, the visible mechanisms are 1) the EA forum, which can potentially host contradictory discussions, and 2) criticism contests such as the one this post is meant to be entered in.

Both these mechanisms are limited as checks and balances and can actually end up having the opposite effect, crystallizing uniform ideas and widening the gap between the in-group and the out-group, rendering the community less receptive to alternative ideas and criticism from outsiders.

Discussion forums by themselves are very mild safeguards against conflicts of interest. Through implicit social pressures, forums can instead stifle internal debate and serve as tools to align a community around a homogeneous worldview. I do not know enough about the Effective Altruism community to tell whether the EA forum suffers from this issue.

In the same vein, I remain skeptical of the potential of criticism contests to act as powerful safeguards against conflicts of interest. Unfortunately, such contests are likely to produce theater-criticism rather than actual criticism. By theater-criticism, I mean a form of criticism that centers on peripheral and non-controversial topics to create the illusion, or the sincere belief, that a healthy debate is taking place inside a community. I do not know whether the current Effective Altruism criticism contest will fall into this trap, and I admit that I have not read enough submissions to form an informed opinion on the matter.

These two main forms of promoting debate (internal forums and invitations to criticism) are nowhere near sufficient to prevent conflicts of interest from taking hold. Yet they appear to be the only ones the Effective Altruism community relies on.

Conflicts of interest need to be addressed whether they have real effects or not

Given the extensive and overall extremely positive coverage of SBF in EA publications, in particular by 80,000 Hours (his profile, his podcast interview and numerous articles referring to him in laudatory terms), it seems clear that little to no effort has been made to recognize and minimize the potential for conflicts of interest in the EA community, in particular those linked to the funds pledged by SBF.

These conflicts of interest are real and painfully obvious: EA is incentivized to please SBF, to adopt his views and beliefs, to work on projects that he believes are important, and to shed a positive light on his activities (EA-related or not) and his person, within the EA community and to the general public.

Fundamentally, the problem I want to highlight is not even whether the EA ecosystem is effectively influenced by the incentives it is subject to. These incentives exist, and they are left unchecked. This is the primary issue.
Whether these incentives have actual effects on EA is almost secondary to the fact that EA seems unable to recognize, publicize and mitigate its conflicts of interest. To clarify, I believe that almost any monetary, influence, or prestige incentive creates a conflict of interest when it clashes with an organization’s stated mission and ethics. For EA, incentives like those related to SBF are in direct conflict with EA’s stated mission of “using evidence and reason to figure out how to benefit others as much as possible” (quote from the Centre For Effective Altruism).

What should EA do?

It appears clear that EA does not consider itself to be at any real risk of falling prey to conflicts of interest. This seems to be the only way to explain the blind spot EA suffers from when it comes to recognizing the incentives associated with relying on donations from tech billionaires, as obvious as these may be in the particular case of donations by SBF.

Identifying and publicizing obvious sources of potential conflicts of interest

A necessary (but insufficient) first step would be to acknowledge existing incentives and recognize their potential effects. The bare minimum would be for readers of publications such as SBF’s profile on 80,000 Hours to be made aware of the conflict of interest in which 80,000 Hours is engaged with regard to SBF, in language that leaves no room for doubt (much like academic researchers are required to disclose funding sources and potential conflicts of interest when they publish scientific articles).

I will briefly address a counter-argument that could be made along the lines of: “SBF’s profile on 80,000 Hours clearly mentions SBF’s contributions to EA-aligned causes. Therefore EA is transparent about its funding, and therefore EA does not suffer from undisclosed conflicts of interest.” Indeed, SBF’s contributions are mentioned at length in EA publications. But I have seen no instance where this contribution was flagged as a source of potential conflict of interest. On the contrary, SBF is framed as a prime example of Earning to Give; he is presented as an example to follow, a person to admire, take inspiration from, and be grateful to, which does nothing to warn against potential conflicts of interest.

Going further

It seems crucial that EA, if it values independence of thought and critical thinking, should engage in an in-depth examination of the role that incentives are allowed to play in the organization.

I have focused on conflicts of interest linked to the cryptocurrency industry and to Sam Bankman-Fried specifically. But EA’s funding induces further-reaching incentives, notably extending to issues such as wealth redistribution and taxation policies. Given EA’s reliance on donations from disproportionately wealthy individuals (notably through its earning-to-give pledges), EA’s interests align with the accentuation of wealth inequalities in the parts of the world in which its donors operate.

Publicizing existing conflicts of interest achieves little if it is not accompanied by a significant effort to understand how conflicts of interest are allowed to appear, how to minimize their potential effects, how to strengthen counterpowers within the organization to foster accountability, and how to prevent EA from becoming more conflicted, instead reducing the number and strength of existing conflicts of interest.

There is a clear trade-off between 1) expanding the available resources of a non-profit organization and 2) protecting said organization from potential conflicts of interest. In my opinion, EA as a community should think hard about where it stands on this trade-off. What is it ready to sacrifice in terms of existing and potential future resources to protect a certain level of adherence to its ethics and its aspirations to rational thinking? Conversely, what is it ready to sacrifice in terms of freedom of critical thought and moral standards to increase its resources and thus its potential impact?
With the rapid gains in influence and funding that EA has benefitted from in the last couple of years, I think it is now urgent that EA engage with these questions.


It is important to understand that the relevant incentives to consider when fighting potential conflicts of interest are not all monetary in nature. Incentives can also relate to the influence or prestige of an individual or, in the case of Effective Altruism, of an organization. This is particularly important for EA to realize because only a small part of the funding it receives has the potential to directly benefit members of the EA community (by paying salaries to staff in EA-affiliated organizations, for instance).
For EA, receiving more money means being able to achieve more ambitious goals, to play a greater role in the charity landscape, and to have better chances of shaping the future as it thinks best. Overall, funding translates into prestige and a potential for increased influence on society for members of EA. Prestige, status and influence are powerful incentives and should not be underestimated as sources of potential bias.

As humans, we all have our own biases. It would be pointless to aspire to building an organization in which conflicting incentives are completely eliminated. Likewise, it would be illusory to think that individuals can consciously decide to free themselves from the biases associated with incentives of all kinds. Systemic safeguards are essential, all the more so when an organization aims to hold itself to high standards of rationality. Hopefully, the EA movement will remember this sooner rather than later.


This post deals with conflicts of interest, so it is only natural that I should be particularly transparent regarding the incentives that played into its writing.
First, I did not receive any funding for writing this post, and I have no affiliation to the EA movement.
I wrote this post aiming to submit it to EA’s criticism contest and was thus incentivized to write an effective critique of EA, but one that would not be too antagonizing to the contest’s jury (which I believe is mainly composed of EA members). I did my best to resist this incentive and aimed not to water down my thesis too much.
By making a pseudonymous submission, I am shielding myself from the fear of reputational damage, which could otherwise have been a powerful incentive to self-censor.