Effective Altruists mean well. They want to be moral people; they just don’t quite get what morality is. This leads to amusing results, like EAs agonizing over whether they should donate all of their money to make animals happy, and whether they are being meta-inconsistent if they don’t sacrifice themselves to the “save the cockroaches” fund.
The best synthesis of game-theoretic morality and evolutionary morality is that morality is about mutual systems of responsibility toward each other. You have no moral duties toward people (animals, beings, etc.) who have none toward you. Your dog loves you and would sacrifice himself for you, so you have a moral obligation to your dog. A random animal feels no obligation to you and would not help you in even the most dire of situations, so you have no moral obligation to it. (Nice people are nice to animals anyway, because niceness is a genetic trait, and so nice people are nice to everyone.)
The EA calculations fail to account for the opportunity cost of your altruism: if I donate all of my money to other animals, I no longer have money to buy bones for my dog, and my dog will be unhappy. If I spend my excess money on malaria nets for strangers in Africa, then I can’t spend that money on medical treatment for my own children.
If you feel compelled to do something about Problem X, then it’s a good idea to take the EA route and try to do so effectively. If I am concerned about malaria, then of course I should spend my time/money doing whatever is best to fight malaria.
As I mentioned in my post about “the other”, a lot of people use claims about what’s best for people they have no personal interest in merely as a bludgeon in political debates against people they strongly dislike. If you are genuinely concerned about far-off others, then by all means, better the EA route than the “destroying people’s careers via Twitter” route.
But morality, ultimately, is about your relationships with people. EA does not account for this, and so it is wrong.