Crypto Twitter doesn't get to criticize Effective Altruism because of SBF (and vice-versa)
If you don't steelman their argument, you can't expect better treatment for yourself.
A fair number of people on crypto Twitter are taking shots at SBF's other hobby (besides running an insolvent crypto exchange): Effective Altruism (EA).
For those not aware, EA is "a philosophical and social movement that advocates 'using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis'", as defined by one of its earliest and most prominent proponents, William MacAskill.
FTX had its own branch of people working on EA-related projects in the Bahamas and SBF was (on the surface) a big proponent of its merits.
The point of this article isn’t to pitch EA (though, there’s a smidgen of that). Instead, I’m trying to make the point that some people in crypto are making the same mistake with regard to EA as the general public does with crypto.
That mistake is dismissing and denigrating an interesting movement based on the failures of its highly publicized members.
What is Effective Altruism?
I don't pretend to be an expert here. I've read a few of the well-known books and spent a bunch of time watching YouTube videos and listening to podcasts on the topic over the years.
But what I have read (e.g. from William MacAskill's book “Doing Good Better”) does resonate to an extent.
If I had to sum up MacAskill’s key points, they would be:
Today, many charities are ineffective at producing results that improve people’s lives.
We need to have a framework for doing good that is built less on emotion and more on a dispassionate analysis of what will improve the quality of life.
If you take a positive view of what’s possible for humanity in the future if we make the right decisions today, it shifts our priorities on where to allocate resources.
The points above are hardly controversial. However, thinking about them deeply does change how we operate in the world.
As an example, consider the philosopher Peter Singer’s thought experiment about a child drowning in a pond:
Imagine walking by a child drowning in (to make it easy) an extremely shallow pond.
You would be a monster not to help that child for a reason as trivial as, say, not wanting to get your shoes wet. Everyone would think you a psychopath for letting the child drown, and you'd probably struggle to live with that decision afterward.
Peter Singer argues this is what is happening in the world all around us and all the time — and we all know this.
One person dies of hunger every 5 seconds, mostly in poor parts of the world. We know we could do more to help them if we were motivated to do so. Essentially, these people are drowning in a shallow pond and we are not helping them. The difference is that they are far away and the problem feels more abstract.
When a pretty blonde girl from Omaha falls into a well, we'll all tune in to watch the rescue efforts, but a genocide happening across the world is boring enough that we change the channel.
Related to this, studies on empathy find that people will give the most to help, say, one little girl in trouble. Add her brother to the picture and people are slightly less motivated to donate. Add the entire family and donations fall off a cliff. This is why you most often see a single person in charities' marketing materials.
EA argues that we are poorly tuned to care about problems far away from us, especially when we are talking about groups of people, and we should re-calibrate our moral intuitions to better match reality.
We need to be more dispassionate and rigorous about where we allocate resources to helping people. Instead of charity based on what tugs on our emotional heartstrings, we should take a much stricter utilitarian view.
3 ways to get your anti-EA criticism wrong
Utilitarianism with poorly thought-out consequences
There are a few common mistakes I've seen in arguments against effective altruism. The first one I'll call "poorly thought-out utilitarianism".
Example: there are 6 people dying in the hospital, each needing a different organ transplant. A healthy person walks in with all 6 of those organs intact. If we just sacrificed this 1 life, we'd save 6. Per EA and utilitarianism, this would be the right thing to do.
Isn’t this ridiculous? Checkmate.
This conclusion obviously doesn't follow, and the example is just a function of not considering the actual consequences of hospitals operating in this manner.
If we actually lived in a world where this was OK, no one would trust hospitals, because you would risk losing your vital organs every time you walked in.
Our trust in the healthcare system would break down, and that would be far more harmful to society. So even on utilitarian grounds, it's better that you can walk into a hospital without fear and with autonomy over your own body than that the maximum number of lives be saved at that particular moment.
Arguments to the absurd and abstract
I’ll admit to being guilty of this one (jokingly).
The easiest way to set this argument up is by combining a utility function (e.g. the amount of well-being produced by conscious creatures) with expected-value math over enormous or infinite numbers.
Example: you have 1 billion people living lives with a happiness score of 10, producing a total of 10 billion happiness points. By pressing a button, you get a 50/50 chance of either killing everyone or tripling the population.
Should you press that button? Well, by EA logic you should, right? The expected value is 0.5 × 0 + 0.5 × 30 billion = 15 billion happiness points, which is more than 10 billion.
There's an interesting discussion to be had about how much we privilege people alive today over hypothetical people who might live in the future. However, these examples aren't necessarily helpful, since there is no such thing as happiness points, and even if there were, aggregate well-being might not be a linear function of them.
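To make that objection concrete, here's a back-of-the-envelope sketch of the arithmetic. The numbers and the square-root "diminishing returns" curve are purely illustrative assumptions on my part, not anything EA formally defines; the point is only that the "press the button" conclusion depends on treating happiness points as linear and additive.

```python
import math

# Back-of-the-envelope version of the button thought experiment above.
# The numbers and the square-root "diminishing returns" curve are purely
# illustrative assumptions; nothing here is an official EA calculation.

population = 1_000_000_000          # 1 billion people
happiness_each = 10                 # happiness score per person

status_quo = population * happiness_each   # 10 billion happiness points

p_win = p_lose = 0.5                # the button is a 50/50 gamble
points_if_lose = 0                  # everyone dies
points_if_win = 3 * status_quo      # population triples -> 30 billion points

# Naive linear expected value, as in the argument above: press the button.
ev_linear = p_lose * points_if_lose + p_win * points_if_win
print(ev_linear)                    # 15,000,000,000 > 10,000,000,000

# If aggregate value has diminishing returns in raw points (modeled here,
# arbitrarily, with a square root), the same gamble is a bad deal.
def value(points: float) -> float:
    return math.sqrt(points)

ev_concave = p_lose * value(points_if_lose) + p_win * value(points_if_win)
print(value(status_quo), ev_concave)   # ~100,000 vs ~86,600: don't press
```

Same gamble, different aggregation function, opposite answer. That fragility is exactly why these abstract expected-value gotchas don't tell us much about EA in practice.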
Strawmanning from dumb Internet posts
It’s easy to take a screenshot from an absurd conversation on EA forums out of context and present it as the most egregious immorality imaginable.
Here, there are 2 points to consider:
Most stuff written on the Internet is stupid. This definitely includes crypto: go on YouTube and almost everything you find is the cringiest stuff imaginable.
Pushing the boundaries of moral discussion tends to produce some outwardly strange-looking conversations. Moral philosophers may discuss, for example, why it is wrong to kill babies, but this is simply hammering at bedrock, poking at what a moral framework actually implies.
“Steelmanning is the act of taking a view, opinion, or argument and constructing the strongest possible version of it. It is the opposite of strawmanning.” - LessWrong.com
Ever get upset about how people strawman and present crypto in a bad light?
Consider all the scams and horrible things that have happened in our industry. It's not hard to see why people think all of crypto is a scam.
Yet these people are wrong, because they don't engage with the steelmanned argument for why crypto is interesting and has the potential to improve people's lives.
So, for those making fun of EA: if you are incapable of steelmanning the arguments of others, you are not allowed to be upset when others don't do it for you.