There’s a spectre haunting the space between Oxford’s philosophy department and Silicon Valley.
Cheesy opener, and it won’t live up to that. Neither will this text’s subject, however grandiosely it may aim at political economy, ever reach the influence of that OG spectre. This one comes from pure mathematics, and it’s coming for the ethics. In a kind of perversion of Anglo-Saxon thought, it combines all the shortcomings of utilitarianism and analytical philosophy into the only modality by which homo oeconomicus really determines value: sheer quantification. 💩 ‘Longtermism’ 💩 seeks to precisely calculate the effects of any possible deed for the apparent benefit of the largest possible number of humans, centuries ahead, and to the tenth decimal. One result of such a calculation, I kid you not, is that test subjects for the Covid vaccine should have been deliberately exposed to the virus. Well, the numbers probably don’t lie there; such a refined study design likely yields a more effective product. Yet the moral objections should be intuitively obvious to the average person. At least I hope so; I’m a bit out of touch (I’m massively out of touch). Anyway, surely too childish to actually consider that guff, right? Pretty out of touch yourself, then: longtermism’s proponents are mighty. The present powers, in the form of our techno-capitalist overlords, are pushing this ideological instrument disguised as an ethical school.
Astroturfing entire schools of thought for political influence at the most fundamental level is, as monstrous as it sounds, an actual phenomenon, and perhaps the most sustainably aporetic one. At least since the CIA likely sponsored postmodern theory (whatever that may encompass) early on, after realizing its destructive potential for orthodox Marxism, we should brace ourselves for the abysmal possibility of powerful players shaping and derailing the history of ideas to immeasurable degrees. Let me megalomaniacally suggest ‘cult ops’ (cultural operations) to describe this most fundamental level of interference, as distinct from the end-consumer product of ‘psy ops’ (nowadays bot armies, deepfakes, etc.). Now that the preliminary victory of capitalism has halted intersystem warfare, actors from the private sector increasingly replace public entities, so private CIAs are at it on the media front. Especially considering the atmospheric backdrop: our era of technological revolutions naturally recalibrates its value system (notoriously regarding employment), and advancements on the material level necessarily bear ideological volatility. Longtermism’s appeal now lies partly in its simplicity, its managerial mindset. Elon apparently ‘got it’. From a tweet.
In fact, incorporating a substantially temporal dimension into political thinking is anything but revolutionary. The numerical, merely logical approach may not be the utmost grasp of history an era defined by economism could conceive, but it seems the least common denominator for claiming final truths in neoliberally disciplined discourses. Indeed, conceptions of history unfolding dialectically have been outlined for quite a while now 🤯 Which, on the most tangible level, could mean: space colonialism facing an ecological negation in ways present consciousness cannot fathom. No quantum computing will ever help these accountants of history there. No pocket calculator works in a semantic vacuum, no formula without a definitional point zero. What if the unborn individuals overcome growth dependency, reasonably deem depopulation their goal, develop another spiritual relationship to their habitat…? What’s the “+1”, the abstract number of another person at some future point, really worth? Does human value lie in developed personalities, or in expected workforce for 2084?
They’ll try, though. But the moment AIs are tasked with such long-term questions for practical implementation (and assuming you’re not already on your free 🦅 ancap 🐍 Atlantis), it’s hard to legitimise a future good at the expense of living human beings. What they’re able to calculate is one thing only, and that’s the only ultimate goal anyway: the most stable possible infra- and metastructures for the unlimited market, to minimise investment risk. That calculation is necessarily trapped in the presently prevalent consciousness, where the complex eudaemonic question of what’s desirable is ultimately butchered to fit an algorithm. Just as tellingly, the proposed solutions rely heavily on private incentive: as much philanthropy (as personally advocated by an Oxford longtermist pioneer) and as many tax-evading charity foundations as possible are deemed the most favorable outcome for questions ranging from redistribution to infrastructure. No way this whole picture is accidental. It’s by design: transcending the prevailing socioeconomic norms is impossible to calculate using exactly those norms! It’s a trap. May the Hyperloop™ save us ✌️