
Every day when I check Facebook (ideally only the one time) I see fundraising pleas: people who want me to give money to one charity or another. One guy wants me to fund the construction of a tutoring center in Haiti, another wants me to donate to an organization focused on suicide prevention, and still another wants to use my donation to increase awareness of adolescent mental health issues, and that’s just Facebook. The local public radio station wants my money as well, I get periodic calls and letters from my alma mater asking for money, and as of this writing the most recent email in my inbox is a fundraising letter from Wikipedia. Assuming that I have a limited amount of money (and believe me, I do) how do I decide whom to give that money to? Which of all these causes is the most worthy?

As you might imagine, I am not the first person to ask this question, and more and more philanthropists are asking it as well. It’s my understanding that Bill Gates is very concerned with the question of where his money will do the most good. And there is, in fact, a whole movement dedicated to the question, which has been dubbed effective altruism (EA). EA is closely aligned with the rationalist community, to the point where many people would rather be identified as “effective altruists” than as “rationalists”. This is a good thing; certainly I have fewer misgivings about rationalism in the service of saving and improving lives than about rationalism left to roam free (see my post on antinatalism).

From my perspective, EA’s criticisms of certain previously very common kinds of charitable contributions (their views on what not to do) are at least as valuable as their opinions on what people should be doing. For example, you might have started to hear criticism recently of giving big gifts to already rich universities. And indeed it’s hard to imagine that giving money to Harvard, which already has a $30 billion endowment, is really the best use of anyone’s money.

While the EA movement mostly focuses on money, there is another movement/website called 80,000 hours which focuses on time. 80,000 hours represents the amount of time you’re likely to spend in a profession over the course of your life, and rather than telling you where to put your money, the 80,000 hours website is designed to help you plan your entire working life so as to maximize its altruistic impact.

Of course both of these efforts fall under the more general idea of asking, “What should I worry about? What things are worth my limited time, money and attention, and what things are not?”

If you’re curious, for the effective altruist one of the answers to this question is malaria, at least according to the EA site GiveWell, which ranks charities using EA criteria and has two malaria charities at the top of its list. These are followed by several deworming charities. For the 80,000 hours movement the question is more complicated, since if everyone went into the same profession the point of diminishing returns would probably come very quickly, or at least well before the end of someone’s career. Fortunately they just released a list of careers where they think you could do the most good. Here it is:

  1. AI policy and strategy
  2. AI safety technical research
  3. Grantmaker focused on top problem areas
  4. Work in effective altruist organisations
  5. Operations management in organisations focused on global catastrophic risks and effective altruism
  6. Global priorities researcher
  7. Biorisk strategy and research
  8. China specialists
  9. Earning to give in quantitative trading
  10. Decision-making psychology research and implementation

This is an interesting list, and I remember that it attracted some criticism when it was released. For example, right off the bat you’ll notice that of the ten jobs listed the first two deal with AI. Is working with AI really the single most important career anyone could choose? The next three are what could be called meta-career paths, since they all involve figuring out what other people should worry about and spend money on (for example, setting up a website like 80000hours.org), which might strike some as self-serving. Biorisk strategy and China specialist are interesting; then at number 9 we have the earn-as-much-money-as-possible-and-then-give-it-away option, before finally landing at number 10, which is once again something of a meta option. If nothing else, it’s worth asking whether AI jobs should really occupy the top two slots. Particularly given that, as I pointed out in the last post, there is at least one very smart person (Robin Hanson), who does have a background in AI, and who is confident that AI is most likely two to four centuries away. Meaning, I presume, that he would not put AI in the first and second positions. (If Robin Hanson’s pessimism isn’t enough, look into the recent controversy over algorithmic decision making.) One can only assume that 80000hours.org has some significant “AI will solve everything or destroy everything” bias in their rankings.

Getting back to the question of “What should we be worrying about?”, we have now assembled two answers: we should worry about malaria and AI, and the AI answer is controversial. So for the moment let’s just focus on malaria (though I assume even this is controversial for Malthusians). The way EA is supposed to work, you focus all your charitable time and money where they have the most impact, and when the potential impact of a dollar spent on malaria drops below that of a dollar spent on deworming, you start putting your money there instead. Rinse and repeat. Meaning that, from a certain perspective, not only should we worry about malaria, it should be the only thing we worry about until worrying about malaria becomes less effective than worrying about deworming.
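That allocation rule is essentially a greedy loop over marginal impact. Here’s a minimal sketch in Python, assuming a made-up impact curve with diminishing returns; the decay rate, dollar amounts, and cause names are all invented for illustration and have nothing to do with GiveWell’s actual estimates.

```python
# Toy model of the EA allocation rule: always give the next dollar to
# whichever cause currently offers the highest marginal impact.
# Impact curves and numbers are made up for illustration.

def marginal_impact(base, dollars_given, decay=0.001):
    """Impact of the next dollar, shrinking as a cause gets funded."""
    return base / (1 + decay * dollars_given)

def allocate(budget, causes, step=100):
    """Greedily allocate `budget` in `step`-dollar increments."""
    given = {name: 0 for name in causes}
    for _ in range(budget // step):
        # Pick the cause with the best marginal impact right now.
        best = max(causes, key=lambda c: marginal_impact(causes[c], given[c]))
        given[best] += step
    return given

causes = {"malaria nets": 10.0, "deworming": 7.0}  # impact per first dollar
print(allocate(10_000, causes))
```

With these toy numbers, the first several increments all go to malaria nets, after which the allocation alternates between the two causes as each one’s marginal impact declines, which is exactly the “rinse and repeat” described above.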

As you might imagine, this is not how most people work. Most people worry about a lot of things. Would it be better if we only worried about the most important thing and ignored everything else? Perhaps, but at a minimum the idea that some things are more important to worry about than others is a standard we should apply to all of our worries, a standard we might use to prioritize some of our worries while dismissing others. It’s only fair, at this point, to ask what some of the things I would advise worrying about are. What worries would I recommend prioritizing and what worries would I recommend ignoring? Well, on this question, much like the 80,000 hours people, I will also be exhibiting my biases, but at least I’m telling you that up front.

For me it seems obvious that everyone’s number one priority should be to determine whether there’s an afterlife. If, as most religions claim, this life represents just the tiniest fraction of the totality of existence, that certainly affects your priorities, including what you prioritize worrying about. I know that some people will jump in with the immediate criticism that you can’t be sure about these sorts of things, and that focusing on a world or an existence beyond this one is irresponsible. As to the first point, I think there’s more evidence than the typical atheist or agnostic will acknowledge, and I also think things like Pascal’s Wager are not so easy to dismiss as people assume. As to the second point, I think religions have been a tremendous source of charitable giving and charitable ethics. They do not, perhaps, have the laser-like focus of the effective altruists, and it’s certainly possible that some of their time and money is spent ineffectively, but I have a hard time seeing how the amount of altruism would go up in a world without religion, particularly if you look at the historical record.
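Pascal’s Wager is, at bottom, an expected-value calculation, which is part of why it’s hard to dismiss. A toy version in Python, with the probability and payoffs invented purely for illustration:

```python
# Toy expected-value version of Pascal's Wager. The probability and
# payoffs below are invented for illustration; the structural point is
# that even a small probability of an enormous payoff dominates.

def expected_value(p_god, payoff_if_god, payoff_if_not):
    """Expected payoff given a probability that God exists."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

p = 0.01  # even a deliberately small assumed probability
believe = expected_value(p, payoff_if_god=1_000_000, payoff_if_not=-10)
abstain = expected_value(p, payoff_if_god=0, payoff_if_not=0)
print(believe, abstain)
```

Even at a 1% probability, the large payoff dominates the expected value. That structure, a small chance of a very large outcome swamping the calculation, is the part of the wager that doesn’t go away just by doubting the premise.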

All of this said, if you have decided not to spend any time trying to determine whether there’s an existence beyond this one, that’s certainly your right. Though if you have made that decision, I hope you can at least be honest and admit that it’s an important subject. As some people have pointed out, there could hardly be more important questions than: Where did I come from? Why am I here? Where will I go when I die? I hope you at least considered how important these questions are before ultimately deciding that they couldn’t be answered.

I made the opposite decision, and consequently this is my candidate for the number one thing people should be worried about, above even malaria. And much like a focus on AI, I know this injunction is going to be controversial. Interestingly, as I’ve pointed out before, there’s quite a bit of overlap between the two: one set of people saying, “I hope there is a God,” and another set saying, “I hope we can create a god (and additionally I hope we can make sure it’s friendly).”

Beyond worrying about the answer to life, the universe, and everything, my next big worry is my children. Once again this is controversial. From an EA perspective you’re going to spend a lot of time and money raising a child in a first-world country, money that could, presumably, save hundreds of lives in a third-world country. I did come across an article defending having children from an EA perspective, but it’s telling that it needed a defense in the first place. And the author is quick to point out that his “baby budget” does not interfere with his EA budget.

From a purely intellectual perspective, I understand the math of those who feel that my children represent a misallocation of resources. But beyond that simplistic level it doesn’t make sense to me at all. They may be right about the lives saved, but a society that doesn’t care about reproduction and offspring is a seriously maladapted society (another thing I pointed out in my last post). I’m programmed by millions of years of evolution not only to want to have offspring, but to worry about them as well, and I’m always at least a little bit mystified by people who have no desire to have children, and even more mystified by people who think I shouldn’t want children either.

I have covered a lot of things you might worry about, and so far, with the exception of malaria, everything has carried with it some degree of controversy. Perhaps it might be useful to invert the question and ask what things we should definitely not be worrying about.

The other day I was talking to a friend and he mentioned that he had laid into one of his co-workers for expressing doubt about anthropogenic global warming. Additionally, this co-worker was religious, and my friend suspected that one of the reasons his co-worker didn’t care about global warming, even if it was happening, was that, being religious, he assumed that at some point Christ would return to Earth and fix everything.

This anecdote seems like a good jumping-off point. It combines religion, politics, biases, prioritization, and money. Also, given that he “laid into” his co-worker, I assume that my friend was experiencing a fair amount of worry about his co-worker’s attitude as well. Breaking it all down, we have three obvious candidates for his worry:

  1. He could have been worried about religious myopia. Someone who thinks Jesus will return any day now is going to have very short-term priorities and make choices that might be counterproductive in the long run, including, but not limited to, ignoring global warming.
  2. He could have been worried that his co-worker was an example of some larger group: conservative Americans who don’t believe in global warming. And the reason he laid into his co-worker was not because he hoped to change his mind, but because he was worried by the sheer number of people who are opposed to doing anything about the issue.
  3. It could be that, after a bit of discussion, my friend convinced his co-worker that global warming was important, but my friend worried because he couldn’t get his co-worker to prioritize it anywhere near as high as he himself was prioritizing it.

Let’s take these worries in order. First, are religious people making bad decisions in the short term because they believe that Jesus is going to arrive any day now? I know this is a common belief among the non-religious, but it’s not one I find particularly compelling. I do agree that Christians in general believe that we’re living in the End Times, and that things like the Rapture and the Great Tribulation will be happening soon, with “soon” being broad and loosely defined. The tribulations could start in 100 years, they could start as soon as the next Democrat is elected president (I’m joking, but only a little), or we could already be in them. But I don’t see any evidence that Christians are reacting by throwing their hands up; for example, most of them continue to have children, and at a greater rate than their more secular countrymen. I understand that having children is not directly correlated with caring about the future, but it’s definitely not unconnected either. And those who are really convinced that things are right around the corner are more likely to become preppers or something similar than to descend into a hedonistic, high-carbon-emitting lifestyle. You may disagree with the manner in which they’re choosing to hedge against future risk, but they are doing it.


What about my friend’s second worry, that his co-worker is an example of a large bloc of global warming deniers and that this group will prevent effective action on climate change? Perhaps, but is there any group which is really doing awesome with it? In the course of the conversation with my friend, someone pointed out (there were other people involved at various points) that Bhutan is carbon negative. This is true, and an interesting example. In addition to being carbon negative, the Bhutanese are also, by some measures, the happiest people in the world. How do they do it? Well, there are fewer than a million of them and they live in a country which is 72 percent forest. So Bhutan has pulled it off, but it’s hard to see a path between where the rest of the world is and where Bhutan is. (Maybe if malaria killed nearly everyone?) Which is to say, I don’t think the Bhutan method scales very well. Anybody else? There are the global poor, who do very well on carbon emissions compared to richer populations. But it’s obvious no one is going to agree to voluntarily impoverish themselves, and we’re not particularly keen on keeping those who are currently poor in that state either. On the opposite side, I haven’t seen any evidence that global warming deniers, or populations who lean that way (religious conservatives), emit carbon at a discernibly greater rate than the rest of us. In fact, insofar as wealth is a proxy both for carbon emissions and for a certain globalist/liberal worldview, it wouldn’t surprise me a bit if, globally, concern for global warming actually correlated with increased carbon emissions.

Finally we get to the question of how we should prioritize putting time and money toward mitigating climate change. I’m confident that if it were relatively painless the co-worker would reduce his carbon emissions. Meaning that he probably does have it somewhere on his list of priorities, if only based on the reflected priority it’s given by other people, but not as high on that list as my friend would like. As we saw at the beginning, neither the EA people nor the 80,000 hours people put it in their top ten. And when it was specifically addressed by the website givingwhatwecan.org, they ended up coming to the following conclusion:

The Copenhagen Consensus 2012 panel, a panel of five expert economists that included four Nobel prize winners, ranked research and development efforts on green energy and geoengineering among the top 20 most cost-effective interventions globally, but ranked them below the interventions that our top recommended charities carry out. Our own initial estimates agree, suggesting that the most cost-effective climate change interventions are still several times less effective than the most cost-effective health interventions.

As long-time readers of my blog know, I favor paying attention to things with low probability but high impact. Is it possible global warming fits into this category? Perhaps as an existential risk? Long-time readers will also know that I don’t think global warming is an existential risk. But, for the moment, let’s assume that I’m wrong. Maybe global warming itself isn’t a direct existential threat, but maybe you’re convinced that it will unsettle the world enough that we end up with a nuclear war we otherwise wouldn’t have had. If that’s truly your concern, if you really think climate change is The Existential Threat, then you really need to get serious about it, and you should probably be advocating for things like geoengineering (i.e., spraying something into the air to reflect back more sunlight), because you’re not going to turn the world into Bhutan in the next 32 years (the deadline for carbon neutrality by some estimates), particularly not by laying into your co-workers when their global warming priority is different from yours. (Not only is this too small-scale, it’s also unlikely to work.)

From where I stand, after breaking down the reasons for my friend’s worries, they seem at best ineffectual and at worst misguided, and I remain unconvinced that climate change should be very high on our list of priorities, particularly if it just manifests as somewhat random anger at co-workers. If you are going to worry about it, there are things to be done, but getting after people who don’t have it as their highest priority is probably not one of them. (This is probably good advice for a lot of people.)

In the final analysis, worrying about global warming is understandable, if somewhat quixotic. The combined preferences and activities of 7.2 billion people create a juggernaut that would be difficult to slow down and stop even if you’re Bill Gates or the President of the United States. And here we see the fundamental tension which arises when deciding what to worry about: anything big enough to cause real damage might be too big for anyone to do anything about. Part of the appeal of effective altruism is that it targets those things which are large but tractable, and I confess that the worries expressed in my writing have not always fallen into that category. When it comes right down to it, I have probably fallen into the same trap as my friend, and many of my worries are important but completely intractable. But perhaps by writing about them I’m functioning as a “global priorities researcher”. (Number six on the 80,000 hours list!)

Of course, not all my worries deal with things that are intractable. I already mentioned that I worry about being a good person (e.g., my standing with God, should he exist, and I have decided to hope that he does). And I worry about my children, another tractable problem, though perhaps less tractable than I originally hoped. I may hold forth on a lot of fairly intractable problems, but when you look at my actual expenditure of time and resources, my family and improving my own behavior take up quite a bit of it.

Where does all of this leave us? What should we worry about? It seems obvious we should worry about things we can do something about, and we should worry about things that have some chance of happening. Most people don’t worry about being permanently disabled or dying on their next car trip, and yet that’s far more likely to happen than many of the things people do worry about. We should also worry about large calamities, and we should translate that worry into paying attention to ways we can hedge or insure against those calamities. I had expected to spend some time discussing antifragility and related principles as useful frameworks for worry, but it ended up not fitting in. I do think that modernity has made it especially easy to worry about things which don’t matter and to ignore things that do. Meaning, in the end, I guess the best piece of advice is to think carefully about our worries, because we each have only a limited amount of time and money, and they’re both very easy to waste.


Is it a waste of money to donate to this blog? Well, as I said, think carefully about it. But really all I’m asking for is $1 a month. I think it’s fair to say that’s a very tractable amount…