In recent years, Silicon Valley has become downright philosophical. The tech community is currently in thrall to a buzzy movement known as effective altruism (EA), which argues that philanthropy should be data-driven and evidence-based. According to EA, we should stop donating to causes just because they are near to our hearts. Instead, we should calculate where our dollars will have the biggest impact and focus our philanthropic activities accordingly. But the latest crypto implosion revealed the dangers of such utopian attempts to do good by mathematical formula.
If your goal is to maximize the impact of your benevolence in a quantifiable way, you may come to a surprising conclusion. Rather than pursuing a meaningful career or volunteering for deserving causes — activities whose “impact” is both diffuse and difficult to calculate — you should spend your time making as much money as you can and give away your riches. This is what EA advocates call “earning to give.” And so when Oxford philosophy professor and EA thought leader William MacAskill met Sam Bankman-Fried (widely known as “SBF”), a precocious MIT student who was interested in animal rights, MacAskill advised him to find a way to get rich — very rich. Within just a few years, the idealistic undergraduate grew into a kingpin of the crypto community, amassing a net worth of around $26 billion and becoming far and away the largest funder of effective altruism. Whereas MacAskill had provided the movement with a philosophical foundation, SBF provided it with lots, and lots, and lots of cash.
But SBF’s crypto exchange, FTX, recently crashed in spectacular fashion. Documents filed in bankruptcy court show that “FTX and its related businesses could owe money to more than 1 million people and organizations.” EA adherents were left scrambling to understand — and to explain — this cataclysmic collapse.
In a lengthy Twitter thread detailing his “thoughts and feelings about the actions that led to FTX’s bankruptcy, and the enormous harm that was caused as a result,” MacAskill wrote that “if there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.” He continued: “If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.”
And yet those “principles of the effective altruism community” supposedly betrayed by SBF include both an abiding trust in quantification and a rationalistic pose that adherents call “impartiality.” Taken to their extremes, these two precepts have led many EA types to embrace “longtermism,” which privileges the hypothetical needs of prospective humanity over the very material needs of current humans. Quantification makes plain that there are far more potential people than living ones, and the logic of impartiality dictates that the future “they” should matter just as much as the present “us.” Hoping to save the unborn multitudes from preemptive extinction, EA-affiliated longtermists have put their words and their wealth behind causes ranging from pandemic preparedness to nuclear disarmament. Their splashiest preoccupation has been preventing humanity from accidentally destroying itself by unleashing powerful artificial intelligence systems. (Think “Terminator” minus the cool sunglasses and snappy catchphrases.)
A principal threat posed by AI, as longtermists see it, is the so-called “alignment problem”: we task an AI with accomplishing some broadly stated goal, and the method it devises causes catastrophic harm because the AI lacks the emotional intelligence to see the error of its ways. For example, tell a super-powerful AI to minimize society’s carbon emissions and it may deduce, quite logically, that the most effective way to achieve this is to kill all human beings on the planet.
AIs, it turns out, are not the only ones with alignment problems. For effective altruists, the goal of maximizing one’s earnings can seem to provide an incentive — even an imperative — to cut ethical corners. If you can make $26 billion in just a few years by leaning on speculative technology, a Bahamian tax haven, and shady (if not outright fraudulent) business dealings, then according to the logic of “earning to give,” you should certainly do so — for the greater good of humanity, of course. The sensational downfall of FTX is thus symptomatic of an alignment problem rooted deep within the ideology of EA: Practitioners of the movement risk causing devastating societal harm in their attempts to maximize their charitable impact on future generations. SBF has furnished grandiose proof that this risk is not merely theoretical.
To be fair, several EA leaders have written compellingly about this very issue. With his collaborator Benjamin Todd, MacAskill has come out against ends-justify-the-means absolutism, arguing that people should consider (and, naturally, attempt to quantify) the harm they may do while “earning to give.” But these nuances are all too easily disregarded in practice. The quickest glance at human history ought to remind us that the pursuit of wealth has the power to confound moral judgment, reducing high-minded ideals to empty slogans. Chase astronomical wealth hard enough and the pursuit can become an end in itself; whatever the intended ends, the means come to justify themselves. How many times have Silicon Valley executives spoken idealistically of making the world a better place (or at least propounded mottos like Google’s famous “Don’t be evil”) while getting staggeringly wealthy from technology that causes harm on a global scale?
Consider the following scenario. A bright and idealistic young man wants to use his talents for the greater good. Alas, it’s hard to help humanity when you’re broke, and our hero has just had to drop out of college because he couldn’t pay for it. (Tuition rates these days!) What is our budding Effective Altruist to do? Impartial rationalist that he is, he reasons that he can best maximize his beneficial impact by doing something a little unsavory: murdering a nasty, rich old woman who makes others’ lives miserable. He’ll redistribute the wealth she would have hoarded, and so the general good clearly outweighs the individual harm, right?
That, you may have recognized, is the plot of “Crime and Punishment,” a novel Fyodor Dostoevsky intended to read as if ripped from the headlines — of 1866, not 2022. After all, despite EA’s high-tech aura, the movement’s central tenets are hardly cutting edge.

Dostoevsky’s Russia, too, was awash in types who believed that righteous action in support of the greater good can and should be guaranteed by rational principles (or mathematical formulas). They were socialists, while SBF and friends are uber-capitalists — but 19th-century Russian radicals shared with modern EA bros the naively utopian conviction that humanity’s problems could be solved if we all just stopped acting on the basis of our biased and irrational feelings. Choose the right abstract ideals, maximize the right metrics, and then set your moral judgment to autopilot; your principles will guide your actions and ensure their benevolence. Yet at the end of “Crime and Punishment,” Raskolnikov recognizes that he has gotten the formula wrong — that his murder of the old woman has served not philanthropic motives but selfish ones. His rational calculations have produced not infallible moral truth but a smokescreen for his ego’s most terrible machinations. If there ever was a lesson for our age of effective altruism and SBF, it might be just this: as Dostoevsky would argue, algorithms and data are no safeguard against good old-fashioned greed.
This, perhaps, is why Dostoevsky put his faith not in grand gestures but in “microscopic efforts.” In the wake of FTX’s collapse, “fritter[ing] away” our benevolence “on a plethora of feel-good projects of suboptimal efficacy” — as longtermist-in-chief Nick Bostrom wrote in 2012 — seems not so very suboptimal after all.
Emily Frey is an assistant professor at Brandeis University and a specialist in Russian music and literature. Noah Giansiracusa is an assistant professor of math and data science at Bentley University and the author of “How Algorithms Create and Prevent Fake News.”