If you've ever been walking past a pond, seen a child drowning, and wondered what you should do, don't worry – Peter Singer has the answer for you (Hint: you should save the child).
Singer proposed this thought experiment in his 1972 article "Famine, Affluence, and Morality", and returned to it in his 2009 book The Life You Can Save. Since the pond is shallow, saving the child will only cost you the dry-cleaning bill for your clothes. This implies that you value the life of a child more than your dry-cleaning bill; so if you could save a child's life by donating the equivalent of that dry-cleaning bill to an NGO, you should do so right now.
The next step in the experiment is to acknowledge that it shouldn't matter whether that child is drowning next to you or starving in another country – you should still be prepared to give up that nice new sweater you just bought from TK Maxx (the usual aid worker style and budget). Given that it costs less to save a child in a developing country, you should put your money there; in fact, you should be prepared to give up all your worldly possessions beyond what you need to sustain your life in order to save as many children as possible.
The practical ethics of Effective Altruism (EA) mean, for example, committing to give a certain percentage of your income to charity – even taking a job you don't enjoy in order to earn a lot of money, so that you can give that away too.
There are a few problems with this thought experiment, the main one being exactly that: it's a thought experiment. As a philosophical tool, it’s useful only up to a point; after that point, the heavy lifting is done by relying on evidence, which fits perfectly with the contemporary deification of data. There's a political problem, too: this is charity for the age of neoliberalism, a technocratic exercise that privileges individual over collective action, and reduces that action to money spent on the basis of cost-effectiveness.
Nevertheless, this is the shaky foundation on which the EA movement has built a utilitarian Jenga tower. Most EA discussions start from the donors' side, which makes sense, since it's far easier to give away money than to change the systems designed to turn that money into assistance (I can confirm this on the basis of my Buckminster Fuller-style nervous breakdown).
Things get a little trickier when you start to think about the political economy of aid. If you've read this far, I'll assume you know a little about how NGOs actually work. Even before you take overhead costs into account, it's a struggle to match donations to saved lives – even if your project is literally to stop children from drowning. You might be able to answer the question of whether your work is directly responsible for saving lives, but still be unable to answer the question of which intervention is the most effective.
A lot of EA mental bandwidth is expended on trying to identify what offers the most bang for all those bucks you want to blow, including preparing answers to most of the objections that might be raised against their conclusions. The answers to the questions above are Randomised Controlled Trials (RCTs) and Quality-Adjusted Life Years (QALYs), which proves beyond doubt that one sign you have an aid habit is overuse of acronyms.
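Stripped of the acronyms, the comparison EA is making is simple division: dollars spent per QALY gained, then rank. A minimal sketch of that arithmetic – the programme names and every figure below are entirely hypothetical, chosen only to illustrate the calculation, not real cost-effectiveness estimates:

```python
# Sketch of a cost-per-QALY comparison, the arithmetic behind
# EA-style effectiveness rankings. All figures are hypothetical.

def cost_per_qaly(programme_cost, qalys_gained):
    """Cost-effectiveness: dollars spent per quality-adjusted life year."""
    return programme_cost / qalys_gained

# Hypothetical programmes: (total cost in USD, QALYs gained)
programmes = {
    "deworming": (100_000, 2_000),
    "bed_nets": (100_000, 1_250),
    "cash_transfers": (100_000, 500),
}

# Rank cheapest-per-QALY first
ranked = sorted(programmes.items(), key=lambda kv: cost_per_qaly(*kv[1]))
for name, (cost, qalys) in ranked:
    print(f"{name}: ${cost_per_qaly(cost, qalys):,.2f} per QALY")
```

The entire controversy described below lives inside those input numbers: shift the estimated QALYs gained and the ranking flips, while the division itself stays innocently the same.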
One problem with EA is not the philosophy but its advocates, who tend to skew very WEIRD indeed (Western, Educated, Industrialised, Rich and Democratic). As the story goes: "What kind of people are in your EA group?" she asked him. "Oh, all different kinds!... Mathematicians, and economists, and philosophers, and computer scientists…" This lack of diversity within the EA movement creates a bias towards particular causes (one of the top EA causes, for example, is the existential threat posed by machine intelligence) and a blind spot to the lived experiences of others.
EA possesses the evangelical momentum common to every university student who has just discovered global poverty – and I don't mean that in a negative way, since I was one of those students (back in the 19th Century). As a consequence, EA can often appear not as a dispassionate dissection of our moral lives, but as a polemical account of our moral inadequacy.
It's not uncommon for EA advocates – powered by a belief in their own rationalism, which is their secular equivalent of righteousness – to come across as Whites in Shining Data.
Take the case of deworming: based on RCTs and QALYs, the EA movement believes the evidence shows this is the single most effective way to help children out of poverty, dollar for dollar. Yet this evidence is contested, to the extent that the debate around deworming has become known as the Worm Wars, an academic storm in a policy teacup with surprisingly large implications for the well-being of children globally.
Everybody agrees that deworming is a good thing, but they disagree about how much of a good thing it is – and this is absolutely critical if your argument is based on cost-effectiveness. The disagreement is so intractable that even St. Chris, patron saint of development blogging, walked away from the wars, noting that “the real tragedy is that... we do not have large-scale, randomised, multi-country, long-term evidence on the... impacts of deworming medicine.”
There's only one solution to a lack of evidence: get more evidence. Unfortunately, "more evidence" can't answer every question, since even the way we phrase these questions depends on the values we already hold, and even with perfect evidence we still need a broader framework of values in order to make decisions. Cost-effectiveness is one such value – but it's not a core humanitarian value, which means it can't provide a silver-bullet solution for deciding our best course of action.
The EA movement is slowly realising that its founding myth of the drowning child comes with its own assumptions, among them that RCTs and QALYs are precise enough tools for surgery on philanthropy, when both are in fact fairly blunt instruments. This realisation reflects the fact that EA is relatively new, but also that it has more capacity for course correction than existing organisations; and I have a strong suspicion that it will converge on a spot that some of us working in the sector already occupy.
It's at this point that I reveal a shocking plot twist: I, Keyser Söze, am a big fan of Effective Altruism. The basic premise of EA is that we should put our money where it will have the most impact, which seems obvious until you remember that the aid community has spent most of its existence valiantly making decisions on the basis of feels rather than data. The humanitarian community could use a healthy dose of effectiveness.
In the end, evidence is necessary but not sufficient for making the right decisions; the foundation of our decision-making should be our values. Take the string of recent allegations against the UN in Syria, South Sudan, CAR, and other crises, for example: we need evidence to understand the situation, but evidence alone can’t tell us how to respond; our moral compass relies on values other than effectiveness. The question we should all be asking is: what are those values, and how far are we prepared to go to defend them?
(TOP PHOTO: 1913 painting by Danish artist Laurits Tuxen, Den druknede bringes i land)