Is Effective Altruism Effective?

Touring various info sessions at the beginning of the semester, I encountered the Arete Fellowship program offered by the Harvard College Effective Altruism student group. After spending much time pondering the philosophical, trolley-problem-like questions on the application sheet, I clicked “submit”. I was a bit puzzled about why there were questions about philosophy at all; isn’t giving effectively more a matter of science than philosophy? Academic commitments soon washed away my concerns. One week later, I received an email with a bolded line: “Congratulations! We’re excited for you to join the Arete Fellowship this semester.”

Effective altruism (EA) is a global movement that took root in 2009 with the founding of Giving What We Can in Oxford, UK. According to the Centre for Effective Altruism, EA is about “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.” The Arete Fellowship I am part of, on the other hand, is a structured program run by Harvard College Effective Altruism in which Harvard students learn the core concepts of effective altruism. Each week, we are assigned about an hour of reading, then meet in section to discuss it with the other fellows.

After doing the readings for the first week, I realized why so much philosophy is discussed in EA. The word “effective” implies that one must choose between different interventions when donating resources, so that those resources “produce the most good.” Many questions then follow: How should we determine whether an intervention is “good”? How should we convert impact into numbers so that we can determine which intervention does the “most good”? Can we convert impact into numbers at all?

In section, after lengthy debates among the fellows, the facilitator announced that “EA believes in the utilitarian framework, which is the combination of Hedonism, Consequentialism, and Aggregationism.” Aggregationism, according to the syllabus, is “the moral principle that we should maximize what we deem to be good regardless of how that good is distributed among people.” At this point I felt uneasy, because the utilitarian framework, especially aggregationism, ran against my intuition. With the same amount of resources, is giving rich people 10 units of good “more effective” than giving poor people 5 units of good?
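To show where my unease comes from, here is a toy calculation (all names and numbers are invented for illustration, not drawn from any EA material): under pure aggregationism, only the total matters, so the score never depends on who receives the good.

```python
# A toy illustration of aggregationism (all names and numbers invented).
# Each intervention is a list of (recipient, units_of_good) pairs; the
# aggregationist score is simply the total, ignoring who receives it.

def aggregationist_score(intervention):
    """Sum the good produced, regardless of how it is distributed."""
    return sum(units for _, units in intervention)

give_to_rich = [("already well-off recipient", 10)]
give_to_poor = [("family below the poverty line", 5)]

print(aggregationist_score(give_to_rich))  # 10 -> ranked "more effective"
print(aggregationist_score(give_to_poor))  # 5
```

The source of my discomfort is visible right in the code: the score function never reads the first element of each pair.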

When viewing impact as numbers, one thing that came to my mind is my previous research in reinforcement learning. Reinforcement learning is a subfield of machine learning in which agents, such as robots, are trained by giving them rewards. Once the agent reliably obtains the maximum reward, it is assumed to have learned the correct behavior. In reality, agents often learn to exploit the reward function, maximizing reward in ways the designer never intended. Similarly, when laws and policies are enacted, we see humans finding loopholes to circumvent them.
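To make the failure mode concrete, here is a toy sketch of reward exploitation (my own invented example, not from any real training run): an agent rewarded per unit of dirt collected can earn more reward by re-dirtying the room than by actually cleaning it.

```python
# A minimal sketch of reward exploitation (hypothetical toy example).
# The intended behavior is to clean the room; the exploit is to dump
# collected dirt back out and pick it up again for unbounded reward.

def intended_policy(dirt, steps):
    """Clean until the room is empty; reward fires per unit removed."""
    reward = 0
    for _ in range(steps):
        if dirt > 0:
            dirt -= 1
            reward += 1          # +1 per unit of dirt picked up
    return reward

def exploit_policy(dirt, steps):
    """Alternate between picking dirt up and dumping it back out."""
    reward = 0
    for t in range(steps):
        if t % 2 == 0 and dirt > 0:
            dirt -= 1
            reward += 1          # pick up a unit: reward fires
        else:
            dirt += 1            # dump it back: the proxy has no penalty
    return reward

print(intended_policy(dirt=5, steps=20))  # 5: capped once the room is clean
print(exploit_policy(dirt=5, steps=20))   # 10: grows with more steps
```

The proxy reward (“dirt collected”) diverges from the designer’s intent (“a clean room”), just as a numeric proxy for “good” may diverge from what we actually value.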

On the other hand, do the numbers matter at all? This is a much-discussed problem in philosophy. Professor John M. Taurek argued in the widely cited paper “Should the Numbers Count?” that, other things being equal, the numbers should not count. His argument can be summarized with the following hypothetical: Suppose Alice can save either one person (Bob) or a group of five people, but not both. Now, if Bob himself had the ability to save either himself or the group of five, we would think it permissible for Bob to save himself rather than the five. Therefore, it seems morally permissible for Alice, too, to save only Bob and not the group of five.

I would argue that the emphasis on numbers and optimization in EA is counterproductive to its mission. As effectivealtruism.org points out in its FAQ: “Utilitarians are usually enthusiastic about effective altruism. But many effective altruists are not utilitarians.” Jeff McMahan, White’s Professor of Moral Philosophy at the University of Oxford, wrote a response to philosophical critiques of EA, arguing that most critiques “consist almost exclusively of rehearsals of familiar objections to utilitarianism” but that “none of the philosophical critics… rejects [the] goal [of EA]”.

We should put more emphasis on the other meaning of “effective”: excessive resources should not be put into interventions that have been shown to be ineffective. Since EA uses science to analyze social problems, it should follow the same guidelines as scientific research. One important principle is that good hypotheses are those that can be falsified by experiments. For example, the hypothesis “there is life outside of Earth” cannot be falsified (one could only verify it, say by sending astronauts out and finding life, but could never prove it wrong), while the hypothesis “all swans are white” can be falsified, since finding a single black swan would prove it false.

It is almost impossible to falsify the hypothesis “intervention A is ‘less effective’ than intervention B,” because there are so many factors to consider, let alone quantify (such as political benefits). Experiments that falsify the hypothesis “intervention A is ineffective” are much easier to run and more conclusive. I do not deny the benefits EA brings to society, but I believe that shifting the focus from ranking interventions by numbers to proving that individual interventions are effective is crucial to truly reaching EA’s ultimate goal.
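As a sketch of what falsifying “intervention A is ineffective” could look like in practice, consider a hypothetical randomized trial (every number below is simulated and invented for illustration): if outcomes in the treated group are significantly better than in the control group, the ineffectiveness hypothesis is rejected.

```python
# A hypothetical randomized trial testing the null hypothesis
# "intervention A is ineffective" (all data simulated for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated outcomes (e.g., household income after one year):
# the treated group receives intervention A, the control group does not.
control = rng.normal(loc=100.0, scale=15.0, size=500)
treated = rng.normal(loc=105.0, scale=15.0, size=500)  # assumed true effect: +5

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value falsifies "intervention A is ineffective";
# a large one leaves the hypothesis standing.
if p_value < 0.05:
    print("Reject the null: the data are inconsistent with 'A is ineffective'.")
else:
    print("Fail to reject: no evidence here that A is effective.")
```

Unlike a head-to-head ranking of A against B, this single-intervention test needs only one measurable outcome and a control group, which is exactly why I find it the more tractable standard.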

