Effective Altruists are Cute but Wrong

Note: For a more complete derivation, start with Survival of the Moral-ist (and Morality is what other People Want you to do).

Effective Altruists mean well. They want to be moral people. They just don’t quite get what morality is. This leads to amusing results like EAs stressing over whether they should donate all of their money to make animals happy and, if they don’t sacrifice themselves to the “save the cockroaches” fund, whether they are being meta-inconsistent.

The best synthesis of game-theoretic morality and evolutionary morality is that morality is about mutual systems of responsibility toward each other. You have no moral duties toward people (animals, beings, etc.) who have none toward you. Your dog loves you and would sacrifice himself for you, so you have a moral obligation to your dog. A random animal feels no obligation to you and would not help you in even the most dire of situations. You have no moral obligation to it. (Nice people are nice to animals anyway because niceness is a genetic trait, and so nice people are nice to everyone.)
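The game-theoretic picture behind “mutual systems of responsibility” is usually illustrated with the iterated prisoner’s dilemma: reciprocators who extend cooperation only to those who cooperate back outperform one-sided altruism toward agents who never reciprocate. A minimal sketch (the strategy names and the standard payoff values are my own illustration, not from the post):

```python
# Iterated prisoner's dilemma: a toy model of reciprocal obligation.
# Payoffs are the conventional ones: T=5, R=3, P=1, S=0.

PAYOFF = {  # (my move, their move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """An agent that feels no obligation toward you."""
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []  # each side's record of the OTHER side's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Two reciprocators sustain cooperation and each do well; against a
# pure defector, the reciprocator quickly withdraws cooperation.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The point of the sketch is narrow: cooperation directed at agents who reciprocate is self-sustaining, while cooperation directed at agents who never reciprocate is simply a transfer. It is an analogy for the post’s claim, not a derivation of it.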

The EA calculations fail to take into account the opportunity cost of your altruism: if I donate all of my money to other animals, I no longer have money to buy bones for my dog, and my dog will be unhappy. If I spend my excess money on malaria nets for strangers in Africa, then I can’t spend that money on medical treatment for my own children.

If you feel compelled to do something about Problem X, then it’s a good idea to take the EA route and try to do so effectively. If I am concerned about malaria, then of course I should spend my time/money doing whatever is best to fight malaria.

As I mentioned in my post about “the other”, a lot of people use ideas/demands about what’s best for people they have no personal interest in as a bludgeon in political debates with people they strongly dislike. If you are really concerned about far-off others, then by all means, the EA route is better than the “destroying people’s careers via Twitter” route.

But morality, ultimately, is about your relationships with people. EA does not account for this, and so is wrong.

10 thoughts on “Effective Altruists are Cute but Wrong”

  1. My impression of EA is not so much focused on how we can be ever more altruistic, which of course is virtually infinite (and, as you note, debilitating to the donor), but on how we can, given a fixed pile or percentage of resources, donate it more “effectively”. Of course, a very interesting answer (and possible entry point for neoreactionary poison pills) is: maybe we’d be more effective—i.e., at alleviating the burdens of the poor, combating environmental disasters—by not donating at all.


    • There are times when random bad shit happens. Houses catch on fire; children are orphaned by cancer or some random guy texting while driving. Such events among one’s own community, family, friends, etc., are appropriate recipients of charity.

      Necessary forms of risk-taking and innovation are probably more likely when the people taking the risks know their dependents will survive as a result. The families of soldiers who have fallen in battle, for example, should not be left to starve, as this is both dishonorable and makes soldiers shirk their duty.


  2. As a moral anti-realist, I reject claims that my morality is “wrong”. Calling me “cute” is not going to convince me that universal flourishing is a bad idea. You’ll go further if you try to persuade me that the best way to achieve universal flourishing is to advance some expanding circle of mutually cooperating agents or something like that. (I’m not sure if that’s the point you’re trying to make in this post or you’re trying to make some other point.)


  3. Are you deriving moral recommendations from results in biology and game theory? It seems that there is a gap between the moral domain and science, so how do you go from one to the other?

