Month: October 2018

Jockeying for Control of the Airliner


It’s around 1:30 am on June 1, 2009, and Air France Flight 447 from Rio de Janeiro to Paris is flying somewhere over the mid-Atlantic when it runs into the outer edge of a tropical storm system. Unlike some of the other planes in the area, the crew of Flight 447 has not studied the weather patterns or requested to be routed around the storm, but this is not a cause for special concern. They do, however, turn on the plane’s anti-icing system and check the radar.

After determining that the radar hasn’t been set up correctly, they switch it to the correct setting and see that the storm ahead is worse than they thought. They decide to bank left a little bit, and as they do so, a strange aroma floods the cockpit, and the temperature suddenly increases as well. The more experienced pilot in the cockpit, David Robert, explains that both phenomena are due to the extreme weather in the vicinity, and that they are nothing to worry about. Despite this reassurance, the combination of the storm, the smell, the temperature, and some St. Elmo’s fire experienced a few moments before starts to make Pierre-Cédric Bonin, the youngest pilot, nervous.

Right about the same time as all of this is happening, an alarm sounds to indicate that the autopilot has disconnected. This is because the airspeed indicators have iced over. This is apparently the final straw for Bonin, who irrationally starts to pull back on the control stick, which puts the plane into a steep climb. This is a problem for two reasons. One, the air is too warm to provide the lift necessary to climb, which is why they didn’t fly up over the storm in the first place. Two, if you’re climbing and your airspeed drops too low (and recall that they no longer know what their airspeed is) you can stall. And indeed, shortly after this happens, the plane begins to sound a stall warning.

I am shamelessly lifting the description of what happened to Flight 447, nearly verbatim, from a Popular Mechanics article written a couple of years after the fact. You really should read the whole thing, but if you decide not to, their explanation of the stall alarm is particularly good:

Almost as soon as Bonin pulls up into a climb, the plane’s computer reacts. A warning chime alerts the cockpit to the fact that they are leaving their programmed altitude. Then the stall warning sounds. This is a synthesized human voice that repeatedly calls out, “Stall!” in English, followed by a loud and intentionally annoying sound called a “cricket.”

…The Airbus’s stall alarm is designed to be impossible to ignore. Yet for the duration of the flight, none of the pilots will mention it, or acknowledge the possibility that the plane has indeed stalled—even though the word “Stall!” will blare through the cockpit 75 times. Throughout, Bonin will keep pulling back on the stick, the exact opposite of what he must do to recover from the stall.

Of course one of the big questions is: why did they ignore the stall warning so entirely? Well, the plane they’re flying, the Airbus A330, is very advanced, and normally it won’t let you do something like stall the plane. Thus they may have been ignoring the stall warning because they didn’t think it was possible for the plane to stall, and that the warning was spurious. But this is only the case under what’s called “normal law”. When the airspeed indicator freezes up, the plane switches to “alternate law”, and under alternate law a plane can stall. It’s quite possible that Bonin, who still has the controls, has never flown under alternate law, and thus doesn’t realize that there are far fewer restrictions, and that one of the restrictions which has been removed is the one preventing him from doing something to make the plane stall.

Robert notices the rapid ascent and tells Bonin he needs to descend, while at the same time realizing that the situation is serious enough to call the captain, who had left the cockpit a few minutes before to nap. Bonin levels things off a little bit, enough that the stall warning stops sounding, for the moment. But he isn’t actually descending, he’s just ascending less quickly.

At a certain point, despite the slower rate of ascent, the plane has gone as high as it can go, and it starts to fall. Now if at this point Bonin had just taken his hand off the controls, the plane would have picked up speed, the wings would have started generating lift, and they probably would have been okay. What’s even more interesting is that by this point the de-icing system has kicked in enough that the airspeed indicator begins working again. The plane is entirely functional now; there’s nothing wrong with it at all. But it doesn’t revert back to normal law, it’s still in alternate law.

Around 60 seconds after being summoned the captain arrives, and perhaps, if upon arriving he had been able to understand exactly what was happening, this would have been soon enough to save the plane. But he’s missing several key pieces of information. He doesn’t know if they’re ascending or descending, he doesn’t understand that the plane has stalled, he doesn’t understand that it’s falling at a rate of 10,000 feet/minute, and most important of all, Bonin still hasn’t mentioned that he has had the stick back the entire time.

Around this time Robert, understanding that they need to descend, pushes his stick forward. But one of the features of the Airbus A330 is that it averages the inputs of the two control sticks. So even though Robert is pushing his stick forward, Bonin is still pulling back on his. At best this averaging would result in them leveling off, but what actually happens is that the nose of the plane remains high. The plane is still in a stall.
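To make the dynamic concrete, here is a toy sketch of input averaging (my own illustration; the actual A330 flight control laws are obviously far more complicated than a simple average):

```python
# Toy model of dual-sidestick averaging, where pitch input runs from
# -1.0 (stick full forward, nose down) to +1.0 (stick full back, nose up).

def combined_pitch_command(stick_a: float, stick_b: float) -> float:
    """Average the two sidestick inputs into a single pitch command."""
    return (stick_a + stick_b) / 2.0

# Robert pushes full forward while Bonin holds full back:
print(combined_pitch_command(-1.0, +1.0))  # 0.0 -- the inputs cancel,
# so the plane never receives the strong nose-down command needed to
# break the stall.
```

With the inputs canceling each other out, neither pilot gets what he wants, and neither necessarily realizes why.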

Finally, around two minutes after the captain’s arrival, Bonin tells the other two that he’s had the stick back the whole time. The captain, in disbelief, says, “no, no, no, don’t climb!” (“Non, non, non… Ne remonte pas…”) And Robert demands control and puts the plane into a dive. Unfortunately it’s not only too late, but inexplicably, and without warning the other two, Bonin once again pulls his stick all the way back. Meaning that, 40 seconds after finally getting the crucial piece of information, less than three minutes after the captain’s arrival in the cockpit, and seven minutes after losing the airspeed indicator and switching to alternate law, the plane slams into the Atlantic Ocean, killing all 228 people aboard.

Some stories manage to really burrow in deep when you hear them. This was definitely one of those stories. The whole thing is tragic. But the final words of the pilots really bring that tragedy home:

Robert: Putain, on va taper… C’est pas vrai!

Damn it, we’re going to crash… This can’t be happening!

Bonin: Mais qu’est-ce qui se passe?

But what’s happening?

Captain: 10 degrés d’assiette…

Ten degrees of pitch…

They were uttered in that order: one pilot overcome with disbelief, one pilot still not understanding what he had done to cause it all, and one pilot hoping that if he could just understand the details of his situation he could fix it.

As is only appropriate, when a tragedy of this magnitude occurs, people want to understand what happened so they can keep it from happening again. And it’s easy to make a list of things that would have made a difference. Most boil down to more and better training for pilots. But some people, including the author of the article in Popular Mechanics, think the crash of Flight 447 reveals an even deeper issue, one that can’t necessarily be solved by training: an over-reliance on technology.

When I initially read the article, this over-reliance on technology also seemed like the obvious secondary lesson, and I didn’t feel any inclination to dig deeper. Now, several years later, I still worry about becoming too dependent on technology, but over the last few months I began to see how Flight 447 might additionally act as a metaphor for our current situation. Particularly the image of two pilots each trying to move the stick in opposite directions. Perhaps you can immediately see where I’m going, but if not allow me to explain what I mean.

I see the US (and perhaps the larger world) as being similar to the plane. We’ve run into a storm and we’ve lost our bearings a bit. Some people think the way out of the storm is to pull back hard on the stick, while other people think we need to push the stick all the way forward. It’s not clear if the plane is ascending or descending, and while the two sides fight over the issue, it’s possible that what’s really happening is that the plane is falling out of the sky at 10,000 feet/minute, seconds away from slamming into the ocean.

Now you can agree that this is a useful metaphor, but disagree with who the various pilots represent. You may think that Bonin represents people on the right, who have allowed bigotry, xenophobia, racism and fear in general to convince them that something drastic needs to be done, and that pulling back hard on the stick represents the election of Trump, and that no matter how bad Trump gets and no matter how many scandals there are, they just keep pulling back on that stick, negating the attempts of more reasonable people to metaphorically push the stick forward and correct the disastrous course set by Trump and his followers.

On the other hand, Bonin, who was young and inexperienced, could represent the cohort of young and inexperienced people who are so active in political advocacy right now. People who are confident they know exactly what ails the country and equally confident that they know what to do about it, but who have actually fatally misjudged the situation, and rather than pulling back as hard as they can on the stick, should be doing the opposite. Or, failing that, they should recognize that there are more experienced individuals present and defer to them.

If you see a case for either of those situations being reflected in the story of Flight 447, I don’t blame you. I can see where both make a certain amount of sense, but I see yet a third lesson in all of it: a lesson on the need for calm and moderation. Recall that the plane was doing okay. It did lose its airspeed sensor, but if it had continued on the same course, at the same altitude, with de-icing enabled, it would almost certainly have been fine. Twelve other planes followed more or less the same course as Flight 447 and had no problems. Pilots who were put through a re-creation of the situation in a flight simulator also had no problems. The lesson is that the actual circumstances were not that bad; what caused the plane to crash was a misunderstanding of the situation and an over-reaction to those circumstances. And I definitely see a parallel to the over-reaction we see currently.

If your argument is that Trump supporters or social justice warriors have already pulled the stick all the way back, and that now our only choice is to push our stick all the way forward, then I think you may have missed the point. If Bonin had just leveled off when Robert told him to descend, the plane, once again, probably would have been fine. Counterbalancing Bonin’s desire to have the stick all the way back by pushing the other stick all the way forward didn’t work. Matching one extreme with another extreme was a losing strategy.

Even if it’s too late to level off, even if the only thing left to do is put the plane into a dive, pick up speed, and hope you can pull out before you hit the ocean, you still need to convince the other side (Bonin) that this is the correct course of action. If at any point during the final minutes of Flight 447 the other pilots had managed to convince Bonin of the madness of holding the stick back, they might have been okay. Of course neither of the other pilots knew that’s what Bonin was doing, which is an excuse we can’t use. It’s pretty obvious that each side is pushing their stick as far as they can in the direction they think will do the most good.

There was another option: when the captain arrived he could have replaced Bonin. But he didn’t, probably because of the wild gyrations the plane was undergoing. We also have a method of replacing people: we hold elections. And maybe this is stretching the metaphor too far, but I think we’re experiencing our own “wild gyrations,” which makes this a difficult option for us as well. Also, there’s no obviously impartial, more experienced “captain” we can tap to come in and sort things out. Finally, we can only replace certain people every four years.

Interestingly enough, the last time we had a chance to replace someone, back in 2016, there was another plane-related metaphor making the rounds. This metaphor was introduced in an article called The Flight 93 Election. Flight 93 was one of the planes hijacked on 9/11, but before the plane could be used in the same manner as the other three flights, the passengers became aware of what the hijackers intended, and they stormed the cockpit in an attempt to regain control of the plane. Unfortunately this was unsuccessful, and the plane ended up crashing in a field in Pennsylvania, killing everyone aboard, though, thankfully, no one on the ground.

I remember reading that article when it was published. It’s powerful stuff, and I agree with many of the points the author made. And maybe, to combine his metaphor with mine, we’re not only about to plunge into the ocean, but we’re not even one of the pilots. Perhaps, but I’m looking at Flight 447 more as a framework for considering the current situation than as an absolute prophecy with specific matches between people and events and what’s happening now. (Would the icing up of the airspeed indicator be the failure of the polls in 2016?)

For example, let’s turn to a detail I left out of the initial retelling. I mentioned that professional aviators had a difficult time understanding Bonin’s behavior, but he did say one thing in the final few minutes which offers at least a little insight into what he was thinking. While he and Robert were waiting for the captain, Bonin says, “I’m in TOGA, huh?” I’ll let the PM article explain what this means:

Bonin’s statement here offers a crucial window onto his reasoning. TOGA is an acronym for Take Off, Go Around. When a plane is taking off or aborting a landing—”going around”—it must gain both speed and altitude as efficiently as possible. At this critical phase of flight, pilots are trained to increase engine speed to the TOGA level and raise the nose to a certain pitch angle.

Clearly, here Bonin is trying to achieve the same effect: He wants to increase speed and to climb away from danger. But he is not at sea level; he is in the far thinner air of 37,500 feet. The engines generate less thrust here, and the wings generate less lift. Raising the nose to a certain angle of pitch does not result in the same angle of climb, but far less. Indeed, it can—and will—result in a descent.

Unfortunately Robert was apparently focused on getting the captain back to the cockpit and didn’t understand what this statement entailed. He may not have even heard it. But as long as I’m trying to make an extended metaphor out of the event, I think this statement and the underlying mindset are very interesting.

One of the points I make repeatedly is that models and ways of thinking which worked in the past may not work going forward. We are, as Robin Hanson points out (and as I expanded on), engaged in cultural exploration. We’ve reached a place we’ve never been before in terms of technology and wealth. And it’s entirely possible that a way of thinking which is perfectly appropriate at “sea level” may have the exact opposite of its intended effect when we’re at the metaphorical equivalent of 37,000 feet. You could certainly take this to mean that we should abandon the superstitions and prejudices of the past. That religion and traditional values may have worked great at sea level, but we need to abandon them now that we’re at 37,000 feet. But as you can imagine, that’s not the parallel I’m drawing. Rather, I see several lessons that point in the opposite direction.

First, even if we temporarily discard all the metaphorical interpretation I’ve added, most people still see Flight 447 as a cautionary tale of over-reliance on technology. And in the final analysis, the reason it crashed has far more to do with abandoning fundamentals like lift, thrust, and angle of attack than with any over-reliance on the core principles of aviation. I feel confident in saying that if you had shown Charles Lindbergh how to operate the stick and how to increase or decrease engine power, he would not have made the same mistake Bonin did.

Second, to return to more metaphorical territory, I don’t think it’s too much of a stretch to compare climbing and altitude to technology and progress. Normally they’re not only necessary, but definitional. If you don’t have at least some altitude, you’re driving, not flying. But this leads people to believe, like Bonin, that if you run into problems, climbing to an even higher altitude is always the answer, and there may come a time when it’s not. To connect this to our last point, in the case of Flight 447 adding more technology didn’t solve the problem, it caused it.

Third, from a broad perspective there’s an obvious “small-c” conservative bias to the whole story of Flight 447. If they’d just maintained the same heading and altitude they would have almost certainly been fine. If they had been more cautious, and requested a path around the storm, the problems they encountered would have been less likely to occur. Also, as it turns out, this was a case where age and experience mattered, a lot. Finally there’s this passage from the article:

[Robert and Bonin] are failing, essentially, to cooperate. It is not clear to either one of them who is responsible for what, and who is doing what. This is a natural result of having two co-pilots flying the plane. “When you have a captain and a first officer in the cockpit, it’s clear who’s in charge…The captain has command authority. He’s legally responsible for the safety of the flight. When you put two first officers up front, it changes things. You don’t have the sort of traditional discipline imposed on the flight deck when you have a captain.”

It doesn’t get much more conservative than “traditional discipline”. But perhaps you think I’m making too much of these parallels. That’s certainly possible, but I think in basically every domain you examine, you’ll find that in times of crisis long-term “traditional” values perform the best.

In the end, you could argue, with some justification, that we’re not in a crisis, that our metaphorical plane is doing just fine, or that if we are experiencing a little turbulence, it’s nothing to compare with 1968 and nowhere near as bad as it was in the lead-up to the Civil War. To a point I would agree. I don’t think it’s time to storm the cockpit, and I don’t think the plane is falling out of the sky, yet. But if we’re not in a crisis, why has one group been pulling the stick back as hard as they can for as long as I can remember? And I’ve seen them get angry when anyone pointed out that maybe we had climbed high enough, and we should level out for a while. More recently people have stopped trying to convince the other side to stop “climbing”, and have resorted to grabbing their own stick and pushing it as far forward as possible. (And no, that’s not a double entendre, but maybe it should be.)

Perhaps with the two sides pushing as hard as they can in opposite directions we will level out, and everything will be fine, but I wouldn’t count on it. More likely they’ll eventually come to blows as each becomes convinced that the other is going to end up killing everyone.

It would be nice if there were just one right course of action, like there was in the case of Flight 447. A way of understanding the situation that would make it obvious what was wrong, and what needed to be done to solve it. But unfortunately, while there are many parallels, our actual situation is far more complicated than the one faced by Flight 447. They could understand the effects of air thinning out as you fly higher, because other planes had flown at that altitude. We, on the other hand, don’t know what happens at this level of progress and technology. We’re the first civilization to ever “fly this high”. Flight 447 ran into problems because Bonin, at least, was unaware that the controls had shifted from normal law into alternate law when the airspeed indicator froze up, but the captain might have known that, and if not, it was certainly in some manual somewhere. But given the way technology changes civilizations “mid-flight,” so to speak, the rules could have changed for us with, say, the invention of social media, and there is no manual we can consult that would inform us of this fact.

The air is thinning. The world is changing under our feet. Many people are convinced they know exactly what needs to be done. I guess I’m one of them, because I am absolutely convinced that we need to be a lot more cautious and a lot more conservative than we have been.


I heard once that Mark Twain was unable to tell his good stuff from his bad stuff. I sometimes feel like that, but I think this one was pretty good. If you agree, consider donating.


What Should We Worry About?



Every day when I check Facebook (ideally only the one time) I see fundraising pleas. People who want me to give money to one charity or another. One guy wants me to fund the construction of a tutoring center in Haiti, another wants me to donate to an organization focused on suicide prevention, and still another wants to use my donation to increase awareness of adolescent mental health issues, and that’s just Facebook. The local public radio station wants my money as well, I get periodic calls and letters from my alma mater asking for money, and as of this writing the most recent email in my inbox is a fundraising letter from Wikipedia. Assuming that I have a limited amount of money (and believe me, I do) how do I decide who to give that money to? Which of all these causes is the most worthy?

As you might imagine, I am not the first person to ask this question, and more and more philanthropists are asking it as well. It’s my understanding that Bill Gates is very concerned with the question of where his money will do the most good. And there is, in fact, a whole movement dedicated to the question, which has been dubbed effective altruism (EA). EA is closely aligned with the rationalist community, to the point where many people would rather be identified as “effective altruists” than as “rationalists”. This is a good thing; certainly I have fewer misgivings about rationalism in support of saving and improving lives than I have about rationalism left to roam free (see my post on antinatalism).

From my perspective, EA’s criticisms of certain kinds of previously very common charitable contributions (their views on what not to do) are at least as valuable as their opinions on what people should be doing. For example, you might have started to hear criticism recently of giving big gifts to already rich universities. And indeed it’s hard to imagine that giving money to Harvard, which already has a $30 billion endowment, is really the best use of anyone’s money.

While the EA movement mostly focuses on money, there is another movement/website called 80,000 Hours which focuses on time. 80,000 hours represents the amount of time you’re likely to spend working over the course of your life (roughly 40 hours a week, 50 weeks a year, for 40 years), and rather than telling you where to put your money, the 80,000 Hours website is designed to help you plan your entire working life so as to maximize its altruistic impact.

Of course both of these efforts fall under the more general idea of asking, “What should I worry about? What things are worth my limited time, money and attention, and what things are not?”

If you’re curious, for the effective altruist one of the answers to this question is malaria, at least according to the EA site GiveWell, which ranks charities using EA criteria and has two malaria charities at the top of its list. These are followed by several deworming charities. For the 80,000 Hours movement the question is more complicated, since if everyone went into the same profession the point of diminishing returns would probably come very quickly, or at least well before the end of someone’s career. Fortunately they just released a list of careers where they think you could do the most good. Here it is:

  1. AI policy and strategy
  2. AI safety technical research
  3. Grantmaker focused on top problem areas
  4. Work in effective altruist organisations
  5. Operations management in organisations focused on global catastrophic risks and effective altruism
  6. Global priorities researcher
  7. Biorisk strategy and research
  8. China specialists
  9. Earning to give in quantitative trading
  10. Decision-making psychology research and implementation

This is an interesting list, and I remember that it attracted some criticism when it was released. For example, right off the bat you’ll notice that of the ten jobs listed the first two deal with AI. Is working with AI really the single most important career anyone could choose? The next three are what could be called meta-career paths, as they all involve figuring out what other people should worry about and spend money on, for example setting up a website like 80000hours.org, which might strike some as self-serving. Biorisk strategy and China specialist are interesting, then at number 9 we have the earn-as-much-money-as-possible-and-then-give-it-away option, before finally landing at number 10, which is once again something of a meta option. If nothing else, it’s worth asking: should AI jobs really occupy the top two slots? Particularly given that, as I just pointed out in the last post, there is at least one very smart person (Robin Hanson) who does have a background in AI, and who is confident that AI is most likely two to four centuries away. Meaning, I presume, that he would not put AI in the first and second positions. (If Robin Hanson’s pessimism isn’t enough, look into the recent controversy over algorithmic decision making.) One can only assume that 80000hours.org has some significant “AI will solve everything or destroy everything” bias in its rankings.

Getting back to the question of “What should we be worrying about?” We have now assembled two answers to that question: we should worry about malaria and AI, and the AI answer is controversial. So for the moment let’s just focus on malaria (though I assume even this is controversial for Malthusians). The way EA is supposed to work, you focus all your charitable time and money where it has the most impact, and when the potential impact of a dollar spent on malaria drops below that of a dollar spent on deworming, you start putting all your money there. Rinse and repeat. Meaning that from a certain perspective, not only should we worry about malaria, it should be the only thing we worry about until worrying about malaria becomes less effective than worrying about deworming.
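Here’s a minimal sketch of that allocation logic (my own toy model with made-up numbers, not GiveWell’s actual methodology): every marginal dollar goes wherever its impact is currently highest, and as returns diminish, the money naturally shifts to the next cause.

```python
# Toy model of EA-style giving: fund the cause with the highest
# marginal impact, under hypothetical diminishing-returns curves.

def marginal_impact(cause: str, spent: float) -> float:
    """Made-up impact of the next dollar, decaying as spending grows."""
    base = {"malaria": 10.0, "deworming": 7.0}
    return base[cause] / (1.0 + spent / 1_000_000)

def allocate(budget: int, step: int = 1_000) -> dict:
    spending = {"malaria": 0.0, "deworming": 0.0}
    for _ in range(budget // step):
        best = max(spending, key=lambda c: marginal_impact(c, spending[c]))
        spending[best] += step
    return spending

# Early dollars all go to malaria; once its marginal impact falls to
# deworming's level, new dollars start flowing there instead.
print(allocate(2_000_000))
```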

As you might imagine, this is not how most people work. Most people worry about a lot of things. Would it be better if we only worried about the most important thing, and ignored everything else? Perhaps, but at a minimum the idea that some things are more important to worry about than others is a standard we should apply to all of our worries. A standard we might use to prioritize some of our worries while dismissing others. It’s only fair, at this point, to ask what some of the things I would advise worrying about are. What worries would I recommend prioritizing and what worries would I recommend ignoring? Well, on this question, much like the 80,000 Hours people, I will also be exhibiting my biases, but at least I’m telling you that up front.

For me it seems obvious that everyone’s number one priority should be to determine whether there’s an afterlife. If, as most religions claim, this life represents just the tiniest fraction of the totality of existence, that certainly affects your priorities, including prioritizing what to worry about. I know that some people will jump in with the immediate criticism that you can’t be sure about these sorts of things, and that focusing on a world or an existence beyond this one is irresponsible. As to the first point, I think there’s more evidence than the typical atheist or agnostic will acknowledge. I also think things like Pascal’s Wager are not so easy to dismiss as people assume. As to the second point, I think religions have been a tremendous source of charitable giving and charitable ethics. They do not, perhaps, have the laser-like focus of the effective altruists, and it’s certainly possible that some of their time and money is spent ineffectively, but I have a hard time seeing how the amount of altruism goes up in a world without religion. Particularly if you look at the historical record.

All of this said, if you have decided not to spend any time on trying to determine whether there’s an existence beyond this one, that’s certainly your right. Though if you have made that decision I hope you can at least be honest and admit that it’s an important subject. As some people have pointed out there could hardly be more important questions than: Where did I come from? Why am I here? Where will I go when I die? And that you at least considered how important these questions are before ultimately deciding that they couldn’t be answered.

I made the opposite decision and consequently, this is my candidate for the number one thing people should be worried about, above even malaria. And much like a focus on AI, I know this injunction is going to be controversial. And, interestingly, as I’ve pointed out before, there’s quite a bit of overlap between the two. One set of people saying, I hope there is a God, and one set of people saying I hope we can create a god (and additionally I hope we can make sure it’s friendly.)

Beyond worrying about the answer to life, the universe, and everything, my next big worry is my children. Once again this is controversial. From an EA perspective you’re going to spend a lot of time and money raising a child in a first world country, money that could, presumably, save hundreds of lives in a third world country. I did come across an article defending having children from an EA perspective, but it’s telling that it needed a defense in the first place. And the author is quick to point out that his “baby budget” does not interfere with his EA budget.

From a purely intellectual perspective I understand the math of those who feel that my children represent a mis-allocation of resources. But beyond that simplistic level it doesn’t make sense to me at all. They may be right about the lives saved, but a society that doesn’t care about reproduction and offspring is a seriously maladapted society (another thing I pointed out in my last post.) I’m programmed by millions of years of evolution to not only want to have offspring, but to worry about them as well, and I’m always at least a little bit mystified by people who have no desire to have children and even more mystified by people who think I shouldn’t want children either.

I have covered a lot of things you might worry about, and so far, with the exception of malaria, everything has carried with it some degree of controversy. Perhaps it might be useful to invert the question and ask what things we should definitely not be worrying about.

The other day I was talking to a friend and he mentioned that he had laid into one of his co-workers for expressing doubt about anthropogenic global warming. Additionally, this co-worker was religious, and my friend suspected that one of the reasons his co-worker didn’t care about global warming, even if it was happening, was that, being religious, he assumed that at some point Christ would return to Earth and fix everything.

This anecdote seems like a good jumping off point. It combines religion, politics, biases, prioritization, and money. Also, given that he “laid into” his co-worker, I assume that my friend was experiencing a fair amount of worry about his co-worker’s attitude as well. Breaking it all down, we have three obvious candidates for his worry:

  1. He could have been worried about religious myopia. Someone who thinks Jesus will return any day now is going to have very short-term priorities and make choices that might be counterproductive in the long run, including, but not limited to, ignoring global warming.
  2. He could have been worried that his co-worker was an example of some larger group: conservative Americans who don’t believe in global warming. And the reason he laid into his co-worker was not because he hoped to change his mind, but because he’s worried by the sheer number of people who are opposed to doing anything about the issue.
  3. It could be that, after a bit of discussion, my friend convinced his co-worker that global warming was important, but my friend worried because he couldn’t get his co-worker to prioritize it anywhere near as high as he was prioritizing it.

Let’s take these worries in order. First, are religious people making bad decisions in the short term because they believe that Jesus is going to arrive any day now? I know this is a common belief among the non-religious, but it’s not one I find particularly compelling. I do agree that Christians in general believe that we’re living in the End Times, and that things like the Rapture and the Great Tribulation will be happening soon, with “soon” being broad and loosely defined. The tribulations could start in 100 years, they could start as soon as the next Democrat is elected president (I’m joking, but only a little), or we could already be in them. But I don’t see any evidence that Christians are reacting by tossing their hands up; for example, most of them continue to have children, and at a greater rate than their more secular countrymen. I understand that having children is not directly correlated with caring about the future, but it’s definitely not unconnected either. And those who are really convinced that things are right around the corner are more likely to become preppers or something similar than to descend into a hedonistic, high-carbon-emitting lifestyle. You may disagree with the manner in which they’re choosing to hedge against future risk, but they are doing it.


What about my friend’s second worry, that his co-worker is an example of a large block of global warming deniers and that this group will prevent effective action on climate change? Perhaps, but is there any group which is really doing a great job on this front? In the course of the conversation with my friend, someone pointed out (there were other people involved at various points) that Bhutan is carbon negative. This is true, and an interesting example. In addition to being carbon negative, the Bhutanese are also, by some measures, the happiest people in the world. How do they do it? Well, there are less than a million of them and they live in a country which is 72 percent forest. So Bhutan has pulled it off, but it’s hard to see a path between where the rest of the world is and where Bhutan is. (Maybe if malaria killed nearly everyone?) Which is to say, I don’t think the Bhutan method scales very well. Anybody else? There are the global poor, who do very well on carbon emissions compared to richer populations. But it’s obvious no one is going to agree to voluntarily impoverish themselves, and we’re not particularly keen on keeping those who are currently poor in that state either. On the opposite side, I haven’t seen any evidence that global warming deniers, or populations who lean that way (religious conservatives), emit carbon at a discernibly greater rate than the rest of us. In fact, insofar as wealth is a proxy for both carbon emissions and a certain globalist/liberal worldview, it wouldn’t surprise me a bit if, globally, a concern for global warming actually correlates with increased carbon emissions.

Finally we get to the question of how we should prioritize putting time and money toward mitigating climate change. I’m confident that if it were relatively painless the co-worker would reduce his carbon emissions. Meaning that he probably does have it somewhere on his list of priorities, if only based on the reflected priority it’s given by other people, but not as high on that list as my friend would like. As we saw at the beginning, neither the EA nor the 80,000 Hours people put it in their top ten. And when it was specifically addressed by the website givingwhatwecan.org, they ended up coming to the following conclusion:

The Copenhagen Consensus 2012 panel, a panel of five expert economists that included four Nobel prize winners, ranked research and development efforts on green energy and geoengineering among the top 20 most cost-effective interventions globally, but ranked them below the interventions that our top recommended charities carry out. Our own initial estimates agree, suggesting that the most cost-effective climate change interventions are still several times less effective than the most cost-effective health interventions.

As long-time readers of my blog know, I favor paying attention to things with low probability but high impact. Is it possible global warming fits into this category? Perhaps as an existential risk? Long-time readers of my blog will also know that I don’t think global warming is an existential risk. But, for the moment, let’s assume that I’m wrong. Maybe global warming itself isn’t a direct existential threat, but maybe you’re convinced that it will unsettle the world enough that we end up with a nuclear war we otherwise wouldn’t have had. If that’s truly your concern, if you really think climate change is The Existential Threat, then we really need to get serious about it, and you should probably be advocating for things like geoengineering (i.e. spraying something into the air to reflect back more sunlight), because you’re not going to turn the world into Bhutan in the next 32 years (the deadline for carbon neutrality by some estimates), particularly not by laying into your co-workers when their global warming priority is different from yours. (Not only is this too small scale, it’s also unlikely to work.)

From where I stand, after breaking down the reasons for my friend’s worries, they seem at best ineffectual and at worst misguided, and I remain unconvinced that climate change should be very high on our list of priorities, particularly if worrying about it just manifests as somewhat random anger at co-workers. If you are going to worry about it, there are things to be done, but getting after people who don’t have it as their highest priority is probably not one of those things. (This is probably good advice for a lot of people.)

In the final analysis, worrying about global warming is understandable, if somewhat quixotic. The combined preferences and activities of 7.2 billion people create a juggernaut that would be difficult to slow down and stop even if you’re Bill Gates or the President of the United States. And here we see the fundamental tension which arises when deciding what to worry about: anything big enough to cause real damage might be too big for anyone to do anything about. Part of the appeal of effective altruism is that it targets those things which are large but tractable, and I confess that the worries expressed in my writing have not always fallen into that category. When it comes right down to it, I have probably fallen into the same trap as my friend, and many of my worries are important, but completely intractable. But perhaps by writing about them I’m functioning as a “global priorities researcher”. (Number six on the 80,000 Hours list!)

Of course, not all my worries deal with things that are intractable. I already mentioned that I worry about being a good person (e.g. my standing with God, should he exist, and I have decided to hope that he does.) And I worry about my children, another tractable problem, though perhaps less tractable than I originally hoped. I may hold forth on a lot of fairly intractable problems, but when you look at my actual expenditure of time and resources my family and improving my own behavior take up quite a bit of it.

Where does all of this leave us? What should we worry about? It seems obvious we should worry about things we can do something about, and we should worry about things that have some chance of happening. Most people don’t worry about being permanently disabled or dying on their next car trip, and yet that’s far more likely to happen than many of the things people do worry about. We should also worry about large calamities, and we should translate that worry into paying attention to ways we can hedge or insure against those calamities. I had expected to spend some time discussing antifragility and related principles as useful frameworks for worry, but it ended up not fitting in. I do think that modernity has made it especially easy to worry about things which don’t matter and ignore things that do. Meaning, in the end, I guess the best piece of advice is to think carefully about our worries, because we each have only a limited amount of time and money, and they’re both very easy to waste.


Is it a waste of money to donate to this blog? Well, as I said, think carefully about it. But really all I’m asking for is $1 a month. I think it’s fair to say that’s a very tractable amount…


Age of Em: Races and Rain



This last Saturday I was hanging out with a friend of mine that I don’t see very often. This friend has a profound technical interest in AI and has spent many years working on it, though not in any formal capacity. That said, he’s very smart, and my assumption would be that his knowledge runs at least as deep as mine, if not much deeper. (Though I don’t think he’s spent much time on the philosophy of AI, in particular AI risk.) In short, I don’t think I’m exaggerating to call AI a long-term obsession of his.

Part of the reason for this is that he thinks that general AI, a single AI that can do everything a human can do, is only about 10 years away, and that if he wants to make his mark he has to do it now. This prediction of 10 years is about as optimistic as it gets (and indeed it’d be hard to compress the task into much less time than that.) If you conduct a broader survey of experts and aggregate their answers, human-level machine intelligence is more likely than not to be developed by 2060. Though there are certainly AI experts at least as optimistic as my friend and, on the other hand, some who basically think it will never happen. In fact, this might be a good description of the situation, given that some of the data indicates there’s a bimodal distribution in attitudes, with lots of people thinking it’s just around the corner, a lot thinking it’s going to take a very long time, if it ever happens, and few people in the middle.

(Interestingly there are significant cultural differences in predictions with the Chinese average coming in at 2044 and the American average coming in at 2092.)

Just recently, and as promised, I finished Robin Hanson’s book The Age of Em: Work, Love and Life When Robots Rule the Earth, and this whole discussion of AI probability is an important preface to any discussion of Hanson’s book, because Hanson belongs to that category of people who think that human-level machine intelligence is a long way off. And that well before we figure out how to turn a machine into a brain, we’ll figure out how to turn a brain into a machine. Which is to say, he thinks we’ll be able to scan a brain and emulate it on a computer long before we can make a computer brain from scratch.

This idea is often referred to as brain uploading, and it’s been a transhumanist dream for as long as the concept has been around, though normally it sits together with AI in the big-bucket-of-science-fiction-awesomeness we’ll have in the future, without much thought being given to how the two ideas might interact or, more likely, be in competition. One of Hanson’s more important contributions is to point out this competition, and pick brain emulation, or “ems” for short, as the winner. Once you’ve picked a winner, the space of possible futures greatly narrows, to the point where you can make some very interesting and specific predictions. And this is precisely what the Age of Em does. (Though perhaps with a level of precision some might find excessive.)

Having considered Hanson’s position, my friend’s position, and the generic transhumanist position, we are left with four broad views of the future (the fourth of which is essentially my position.)

First, the position of the AI optimists, who believe that human-level machine intelligence is just a matter of time, that computers keep getting faster, algorithms keep getting better, and the domain of things which humans can do better than computers keeps narrowing. I would say that these optimists are less focused on exactly when the human intelligence finish line will be crossed and more focused on the inevitability of crossing that line.

Second, there’s the position of Hanson (and I assume a few others) who mostly agree with the above, but go on to point out (correctly) that there are two races being run. One for creating machine intelligence and one for successfully emulating the human brain. Both are singularities, and they’re betting that the brain emulation finish line is closer than the AI finish line, and accordingly that’s the future we should be preparing for.

Third, there’s the generic transhumanist position, which holds that some kind of singularity is going to happen soon, and that when it does it’s going to be awesome, but which has no strong opinion on whether it will be AI, brain emulation, or some third thing (extensive cybernetic enhancement? Unlimited free energy from fusion power? Aliens?)

Finally, there are those people, myself included, who think something catastrophic will happen which will derail all of these efforts. Perhaps, to extend the analogy, clouds are gathering over the race track, and if it starts to rain all the races will be canceled, even if none of the finish lines have been reached. As I said, this is my position, though it has more to do with the difficulties involved in these efforts than with thinking catastrophe is imminent. Though I think all three of the other camps underestimate the chance of catastrophe as well.

The Age of Em is written to explain and defend the second case. Let’s start our discussion of it by examining Hanson’s argument that we will master brain emulation before we master machine intelligence. I was already familiar with this argument, having encountered it in the Age of Em review on Slate Star Codex, which was also the first time I heard about the book. And then later I heard the argument in a more extended form, when Robin Hanson was the keynote speaker at the 2017 Mormon Transhumanist Association Conference.

Both times I felt like Hanson downplayed the difficulty of brain emulation, and after hearing him speak I got up and asked him about the OpenWorm Project, where they’re trying to model the brain of the C. elegans roundworm, which has a brain of only 302 neurons, so far without much success. Didn’t this indicate, I asked, that modelling the human brain, with its 100 billion neurons, was going to be nearly impossible? I don’t recall exactly what his answer was, but I definitely recall being unsatisfied by it.

Accordingly, one of the things I hoped to get out of reading the book was a more detailed explanation of this assumption, and in particular why he felt brain emulation was closer than machine intelligence. In this I was somewhat disappointed; the book didn’t go into much more detail than Hanson did in his presentation, and I didn’t come across any arguments about emulation in the book which he had left out of it. That said, the book did make a much stronger case for the difficulties involved in machine intelligence, and I got a much clearer sense that Hanson isn’t so much an emulation optimist as he is an AI pessimist.

Since I started with the story of my friend, the AI optimist, it’s worth examining why Hanson is so pessimistic. I’ll allow him to explain:

It turns out that AI experts tend to be much less optimistic when asked about the topic they should know best: the past rate of progress in the AI subfield where they have the most expertise. When I meet other experienced AI experts informally, I am in the habit of asking them how much progress they have seen in their specific AI research subfield in the last 20 years. A median answer is about 5-10% of the progress required to reach human level AI.

He then argues that taking the past rate of progress and extending it forward is a better way of making estimates than having people make wild guesses about the future. And that, using this tactic, we should expect it to take two to four centuries before we have human-level machine intelligence. Perhaps more, since getting to human level in one discipline does not mean that we can easily combine all those disciplines into fully general AI.
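To spell out the arithmetic behind that estimate (my own back-of-the-envelope restatement of Hanson’s numbers, not a formula from the book): if 20 years of work buys 5-10% of the needed progress, then at a constant rate the total time required is

$$T \approx \frac{20\ \text{years}}{0.05\ \text{to}\ 0.10} \approx 200\ \text{to}\ 400\ \text{years}$$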

Though I am similarly pessimistic, in my friend’s defense I should point out that Age of Em was published in 2016, and thus almost certainly written before the stunning accomplishments of AlphaGo and some of the more recent excitement around image processing, both of which may now be said to be “human level”. It may be that, after several eras of AI excitement inevitably followed by AI winters, spring has finally arrived. Only time will tell. But my personal opinion is that there is still one more winter in our future.

I am on record as predicting that brain emulation will not happen in the next 100 years, but Hanson isn’t much more optimistic than I am. He predicts it might take up to 100 years, and the only reason he expects it before AI is that he expects AI to take 200-400 years. Meaning that in the end my actual disagreement with Hanson is pretty minor. Also, I think that the skies are unlikely to remain dry for another 100 years, which means neither race will reach the finish line…

I should also mention that between seeing Hanson’s presentation at the MTA conference and now, my appreciation for his thinking has greatly increased, and I was glad to find that on the issue of emulation difficulty we were more in agreement than I initially thought. Which is not to say that I don’t have my problems with Hanson or with the book.

I think I’ll take a short detour into those criticisms before returning to a discussion of potential futures. The biggest criticism I have concerns the length and detail of the book. Early on he says:

The chance that the exact particular scenario I describe in this book will actually happen just as I describe it is much less than one in a thousand. But scenarios that are similar to true scenarios, even if not exactly the same can still be a relevant guide to action and inference. I expect my analysis to be relevant for a large cloud of different but similar scenarios. In particular, conditional on my key assumptions, I expect at least 30% of the future situations to be usefully informed by my analysis. Unconditionally I expect at least 10%.

To begin with, I think the probabilities he gives suffer from being too confident, and he may be, ironically, doing something similar to AI researchers, whose guesses about the future are more optimistic than a review of past performance would indicate. I think if you looked back through history you’d be hard pressed to name a set of predictions made a hundred years in advance which would meet his 10% standard, let alone his 30% standard. And while I admire him for saying “much less than one in a thousand”, he then goes on to spend a huge amount of time and space getting very detailed about this “much less than one in a thousand” prediction. An example:

Em stories predictably differ from ours in many ways. For example, engaging em stories still tell morality tales, but the moral lessons slant toward those favored by the em world. As the death of any one copy is less of a threat to ems, the fear of imminent personal death less often motivates characters in em stories. Instead such characters more fear mind theft and other economic threats that can force the retirement of entire subclans. Death may perhaps be a more sensible fear for the poorest retirees whose last copy could be erased. While slow retirees might also fear an unstable em civilization, they can usually do little about it.

This was taken from the section on what stories will be like in the Age of Em, from the larger chapter on em society. And hopefully it gives you a taste of the level of detail Hanson goes into in describing this future society, and the number of different subjects he covers while doing so. As a setting bible for an epic series of science fiction novels, this book would be fantastic. But as just a normal non-fiction book one might sit down to read for enlightenment and enjoyment, it got a little tedious.

That’s basically the end of my criticisms, and actually there is a hidden benefit to this enormous amount of detail. It not only describes a potential em society with amazing depth, it also sheds significant light on the third position I mentioned at the beginning: the vague, everything’s-going-to-be-cool transhumanist future. Hanson’s level of detail provides a stark contrast to the ideology of most transhumanists, who have a big-bucket-of-science-fiction-awesomeness that might happen in the future, but little in the way of a coherent vision for how those things all fit together, or whether, as Hanson points out in the case of ems vs. AIs, they even can fit together.

Speaking of the big-bucket-of-science-fiction-awesomeness and transhumanists, I already mentioned Hanson’s keynote at the MTA Conference, and while I hesitate to speculate too strongly, I suspect most MTA members did not think Hanson’s vision of the future was quite as wonderful or as “cool” as the future they imagine. (For myself, as you may have guessed, I came away convinced that this wasn’t a scenario I could ignore, and resolved to read the book.) But of course it could hardly be otherwise. Most historical periods (including our own) seem pretty amazing if you just focus on the high points; it’s when you get into the details and the drudgery of day-to-day existence that they lose their shine. And for all that I wish Hanson had spent more time in other areas (a point I’ll get back to), he does a superlative job of extrapolating even the most quotidian details of em existence.

In further support of my speculation that the average MTA member was not very excited about Hanson’s vision of the future, at their next conference, a year later, the first speaker mentioned Age of Em as an example of technology going too far in the direction of instrumentality. You may be wondering what he meant by that, and thus far, other than a few hints here and there, I haven’t gone into too much detail about what the Age of Em future actually looks like. I’ll only be able to give the briefest of overviews here, but as it turns out much of what we imagine about an AI future applies equally well to an em future. Both AIs and ems share the following broad features:

  1. They can be sped up: Once you’re able to emulate a human brain on a computer you can always turn the speed up. Presumably this would make the “person” being emulated experience time at that new speed. By speeding up the most productive ems, you could get years of work done every day. Hanson suggests the most common speed setting might be 1000 to 1, meaning that for every year of time which passes for normal humans, a thousand subjective years would pass for the most productive ems.
  2. They can be slowed down: You can do the reverse and slow down the rate at which time is experienced by an em. Meaning that rather than ever shutting down an em, you could put them into a very cheap “low resource state”. Perhaps they only experience a day for every month that passes for a normal human. Given how cheap this would be to maintain you could presumably keep these ems “alive” for a very long time.
  3. They can be copied: Because you can copy a virtual brain as many times as you want, not only can you have thousands if not millions of copies of the same individual, you’re also going to choose only the very “best” individuals to copy. This means that the vast majority of brain emulations may be copies of only a thousand or so of the most suitable and talented humans.
  4. Other crazy things: You could create a copy each day to go to “work” and then delete that copy at the end of the day, meaning that the “main” em would experience no actual work. You could take a short break, but by turning up the speed make that short break into a subjective week long vacation. You could make a copy to hear sensitive information, allow that copy to make a decision based on that information, then destroy the copy after it had passed the decision along. And on and on.
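
To make the time arithmetic concrete, here’s a minimal sketch in Python. The function and numbers are mine, for illustration only, though the 1000 to 1 figure is Hanson’s:

```python
def subjective_years(objective_years, speedup):
    """Subjective time experienced by an em running at `speedup` times
    human speed, over `objective_years` of wall-clock time."""
    return objective_years * speedup

# A top em at Hanson's suggested 1000 to 1 setting:
print(subjective_years(1, 1000))    # 1000: a millennium per calendar year

# A slowed-down em experiencing one day per human month:
print(subjective_years(1, 1 / 30))  # ~0.033: about 12 subjective days per year
```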

Presumably at this point you have a pretty good idea of what the MTA speaker meant by going too far in the direction of instrumentality. Also, since culture and progress are going to reside almost exclusively in the domain of the speediest ems, chosen from only a handful of individuals, it’s almost certain that no matter how solid your transhumanist cred, you’re going to be watching this future from the sidelines. (And actually even that analogy is far too optimistic; it will be more like reading a history book, and every morning there’s a new history book.)

The point of all of this is that there is significant risk associated with AI (position 1), and Hanson points out that the benefits of widespread brain emulation will be very unequally distributed (position 2), meaning that the two major hopes of transhumanists both promise futures significantly less utopian than initially expected. We still have the vague big-bucket-of-science-fiction-awesomeness hope (position 3), but I think Hanson has shown that if you subject any individual cool thing to enough scrutiny it will end up having significant drawbacks. The future is probably not going to go how we expect, even if the transhumanists are right about the singularity, and even if we manage to avoid all the catastrophes lying in wait for us (position 4).

The problem with optimistic views of the future (which would include not only the transhumanists, but people like Steven Pinker) is that they’re all based on picking an inflection point somewhere in the not-too-distant past, the point where everything changed. They then ignore all the things which happened before that inflection point and extrapolate what the future will be like based only on what has happened since. But as I mentioned in a previous post, Hanson is of the opinion that current conditions are anomalous, and that extrapolating from them is exactly the wrong thing to do, because they can’t continue. They’re the exception, not the rule. He calls the current period we’re living in “dreamtime” because, for a short time, we’re free from the immediate constraints of survival.

Age of Em covers this idea as well, at slightly greater length than the blog post where he initially introduced it. And when I complain about the book’s length and the time it spends discussing every nook and cranny of em society, I’m mostly complaining about the fact that he could have spent some of that time going into more detail on this idea, the idea of “dreamtime”. His discussion of larger trends is fascinating as well, and, in the end, I would have preferred that Hanson spend most of his time discussing broad scenarios, rather than spending so much on this one, very specific, scenario. Because, as you’ll recall, I’m a believer in the fourth position, that something will derail us in the next 100 years before Hanson’s em predictions are able to come to fruition, and largely because of the things he points out in his more salient (in my opinion) observations about the current “dreamtime”:

We have also, I will argue, become increasingly maladaptive. Our age is a “dreamtime” of behavior that is unprecedentedly maladaptive, both biologically and culturally. Farming environments changed faster than genetic selection could adapt, and the industrial world now changes faster than even cultural selection can adapt. Today, our increased wealth buffers us more from our mistakes, and we have only weak defenses against the super-stimuli of modern food, drugs, music, television, video games and propaganda. The most dramatic demonstration of our maladaptation is the low fertility rate in rich nations today.

This is what I would have liked to hear more about. This is a list of problems that is relevant now, and which, in my opinion at least, seems likely to keep us from ever getting either AI or ems or even just the big-bucket-of-science-fiction-awesomeness. Because in essence what he’s describing are problems of survival, and as I have said over and over again, if you don’t survive you can’t do much of anything else. Brain emulation and AI and science fiction awesomeness all sit at the difficult end of the “stuff you can do” continuum, on top of mere survival. I understand that some exciting races are being run, and that the finish line seems close, but I still think we should pay at least some attention to the gathering storm.


If the phrase “big-bucket-of-science-fiction-awesomeness” made you smile, even a little bit, consider donating. Wordsmithing of that level isn’t cheap. (Okay maybe it is, but still…)


Modern Monetary Theory: It’s the Inflation, Stupid

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


One of the things lacking in modern political discourse is a good-faith attempt to truly understand the other side. Anyone who doubts this need merely look at any of the many political fights over the last few years, including the Kavanaugh nomination I talked about last week. As an antidote to this, several solutions have been offered. The first is what’s called an Ideological Turing Test, proposed several years ago by Bryan Caplan, a noted libertarian economist. His idea was that someone could demonstrate that they truly understood their opponent’s position if they could explain it well enough to be indistinguishable from an actual supporter of the position. Much in the way that a computer could be said to have passed the original Turing Test by being indistinguishable from a human.

Another proposed solution was offered up by Scott Alexander of SlateStarCodex, who urged people to engage in steelmanning. On the internet it’s common to see people strawman their opponent’s argument, which is to offer up the weakest and most ridiculous version of it and attack that. To steelman an argument is the opposite: it’s to offer up the very best version of it.

Both of these are very similar ideas, and both are things I should do more often. It could be argued that last week’s post might have benefited from a little more steelmanning. Though I really think last week there were actually three sides: the two sides that are sure they know what happened (and what should happen now), and a third side which is sure that no one knows what really happened, and that the first two sides are just displaying their built-in political biases, and then attempting to make what little evidence there is seem ironclad. But while I have no desire to go back and revisit last week’s post (okay, I have some desire to do that, but I’m also kind of sick of the topic) I can do better this week. And fortunately this week’s post is more amenable to steelmanning or an Ideological Turing Test as well, because this week, unlike last week, my certainty level is high, but there are people who are just as certain I’m wrong. Accordingly, this week, it’s my intent to discuss one of the opposing arguments, hopefully in a manner which is indistinguishable from an actual supporter.

I suspect I will not do as well as either Caplan or Alexander would hope. Also, if I’m being honest, much of the post will be devoted to showing how, even with this new, updated understanding, I still think they’re wrong, but I hope, at least, to have moved the debate closer to their actual position. Actually “wrong” is not the word I’m looking for. I actually think they may be right in the abstract, but foolish in the implementation. But I’m getting ahead of myself, I haven’t even said what the subject is. This week we’re going to return to talking about the national debt and the federal budget deficit.

I’m not sure where the national debt would rank on my list of “issues I’m interested in” but it would probably be pretty high; I’ve mentioned it quite a few times, perhaps most notably in my post The National Debt in Three Lists of Six Items. Looking back, the first of those six-item lists was a list of reasons why people say we shouldn’t worry about the debt, so it’s not as if I’ve entirely ignored opposing arguments on this subject in the past, but it could certainly be argued that I treated them too flippantly. Perhaps this post will fix that, perhaps not.

To start with, let’s just, ever so briefly, review my position: The national debt is over $20 trillion. This is probably the largest accumulation of money into a single bucket in the history of the world. Insofar as money acts as a proxy for nearly everything, we’ve put, as they say, a lot of our eggs into a single basket. And if this basket/bucket fails in some fashion it would be catastrophic. I’m not sure exactly how it will “fail” but there is significant historical precedent for things failing even when no one could see in advance exactly how it would happen. And that’s being charitable. Currently there are numerous people with equally numerous theories who feel very confident they can see how it will fail. Maybe one of them will turn out to be right, or maybe it will be something no one saw coming. Or maybe nothing will ever go wrong with the debt, but my position is that this is not the way to bet.

If you take a look at the comments on my “Three Lists” post (which unfortunately didn’t make it over to the new site, so you’ll have to go here) you’ll see that Boonton disagrees with me on this, and I’m grateful to him for pushing me on it, because otherwise I might still think those on the other side of this issue are being hopelessly ahistorical, when in reality they’re probably just too optimistic. So what is their position? What are people really saying when they say that the debt, and by extension the deficit, doesn’t matter? Let’s start with the six reasons not to worry I mentioned in that last post. To briefly review:

  1. The government does not have an ironbound debt contract. The size of the debt and the payments change as the economy changes.
  2. The national debt is not money we owe to other people; it’s money we owe to ourselves.
  3. Our debt-to-GDP ratio is not that bad when compared to other countries.
  4. Borrowing money is currently a very good deal. Interest rates are near historic lows.
  5. Our debt is in dollars, and we can print dollars, making it literally impossible to default.
  6. Our assets greatly exceed our liabilities.

To be clear, all of these are pretty good reasons not to be worried about the debt. However, as I said then, I don’t think they’re sufficient. (If you want to know why, you should go back and look at the original post.) Still, they are all essentially true and it’s important not to dismiss them, in particular reason number five, the idea that we can print money. Obviously, if you can print money, then you’ll never run out of it, but the problem is that if you do too much of it, you’ll get inflation, and too much inflation is bad.

I don’t think there’s any serious disagreement with the assertion that too much inflation is bad (though there might be some quibbling over how much is “too much”). High inflation is bad because it wipes out savings, and any benefits which aren’t pegged (or are insufficiently pegged) to inflation. It makes the currency going through inflation less desirable. And, in the most extreme cases, such as the Weimar Republic and Zimbabwe (and currently Venezuela), you can end up in the positive feedback loop of hyperinflation. But for me it all comes down to the fact that too much inflation makes planning for the future hard. It makes doing something today vastly different from doing something later. If you’ll recall, my definition of civilization consists merely of having a low time preference, meaning there’s very little difference between doing something today and doing something in a year. This makes inflation something which eats away at civilization.
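
To put rough numbers on how inflation eats away at savings and low time preference, here’s a quick back-of-the-envelope calculation. The rates are purely illustrative, not predictions:

```python
def purchasing_power(savings, annual_inflation, years):
    """Real value of `savings` after `years` of constant annual inflation."""
    return savings / (1 + annual_inflation) ** years

print(purchasing_power(100_000, 0.02, 10))  # ~82,035: mild 2% inflation
print(purchasing_power(100_000, 0.20, 10))  # ~16,151: 20% inflation, most of it gone
```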

All of the foregoing is to say that inflation is something I am particularly worried about. It is true that inflation does not currently seem to be much of a problem, and if anything we may have too little inflation. But this does not mean that this condition will hold forever. Inflation will eventually be a problem, a problem which I felt the “other side” was dismissing far too hastily. (At least as far as I could tell.)

Such was my understanding of the argument until just recently, when I heard a podcast from Planet Money which completely flipped my understanding. They were interviewing Stephanie Kelton, who is a big proponent of the view that deficits don’t matter, and she made the exact opposite argument: rather than saying that inflation doesn’t matter, she basically said that it was the only thing that mattered. Now I know that this is a weird place to mention this, given that I’m nearly halfway through things, but it was this podcast that made me decide to write a post. (In addition to doing more Ideological Turing Tests/steelmanning in the future I should probably also have shorter intros.) I finally felt I had heard a credible argument for the idea that we shouldn’t worry about the deficit or the national debt, as long as we are worried about inflation.

Kelton was Bernie Sanders’s economic advisor during his presidential run, and is a major player in the Modern Monetary Theory space, which is the best-known framework on the other side of the debt/deficit argument from me (and many, many others). Now I already know that I am unlikely to do the field of MMT justice in only a thousand or so words, so I would urge you to not only listen to the Planet Money podcast (it’s short, only 22 minutes) but to also pay attention if you come across other mentions of MMT. (I’ve seen several just recently, including this one from The Nation.) But for me, the key aha moment came during the podcast when they were talking about taxes:

[The government] taxes because it wants to remove some of the money that it spent into the economy so that it can guard against the risk of inflation.

This is one of the big ideas of Modern Monetary Theory. Taxes are not for spending. Taxes are for fighting inflation. And spending – that isn’t just to buy stuff the government needs, but the power of the keyboard – the spending from that – can be put to use for doing all kinds of good things – to put money into the economy to give it a boost or to help get to full employment.

So, on the one hand, you have the traditional way of thinking about things, which says that government spending is limited by government revenue, which mostly takes the form of taxes, and that if government spending goes above government revenue for too long or by too much some kind of catastrophe will occur.

On the other hand you have the MMT school of thought, which says that government spending is limited only by the amount of inflation it causes, and that taxes only correlate to spending insofar as more taxes can reduce the inflation caused by higher spending. From this it follows that, as they say, budget deficits, and the accumulating debt that results, don’t matter, because they don’t affect the rate of inflation, and that’s all we care about.

There is one other, critical piece of the MMT approach. You not only have to be able to increase the amount of money at will, you also can’t have any debts which are denominated in a currency other than the one you can create. As long as this is the case, they reject, as both unrealistic and unserious, any potential fears of MMT policy leading to hyperinflation like the classic examples of Weimar, Zimbabwe and Venezuela.  Because in each of the cases mentioned, the countries had debts to other countries that were denominated in currencies other than their own. (Weimar owed France money for war reparations and Zimbabwe and Venezuela both had/have debts that are denominated in dollars.)

If things still seem a little nebulous, they offer another way of looking at it in the podcast which may be more concrete. Imagine that the economy has a certain ability to absorb money and turn it into goods and services. The MMT economists compare this to a speed limit. Returning to the podcast:

The speed limit has to do with what economists call real resources. An economy is not just money… If you want to build a hospital, you can’t build it out of money. You need… those IV bags that hang on those sort of rolling coat-rack things…

So say the factory that makes those wheeling coat-rack things is running at, like, half the capacity that it could. Then, if the government decides to place a big order for those coat-rack things, nothing bad really happens. They just buy them at the normal price, put them in the hospital – great.

But what if the factory is at full capacity? Then, the government has to say, hey, sell to our new hospital instead of to your other customers. And to get them to do it, they’ll have to pay more. That is inflation. Prices just went up.

[Kelton] says that’s what the government should think about – not whether they have enough money, but whether there are enough resources in the economy to soak up that money.
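
As a sanity check on my own understanding, here’s a toy version of that speed-limit idea in code. To be clear, this is my sketch of the coat-rack analogy, not a model from the MMT literature, and every parameter is invented:

```python
def unit_price(order, spare_capacity, base_price=1.0, premium=0.10):
    """Toy 'speed limit': orders that fit within the economy's spare
    capacity get filled at the going price; anything beyond capacity
    has to be bid away from other customers, pushing the price up.
    `premium` is the markup per unit of excess demand (invented)."""
    excess = max(0, order - spare_capacity)
    return base_price * (1 + premium * excess)

print(unit_price(order=50, spare_capacity=100))   # 1.0: slack economy, no inflation
print(unit_price(order=150, spare_capacity=100))  # 6.0: prices get bid sharply upward
```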

There is more to MMT than the elements just mentioned, but before I move on I should say that the core idea makes sense. Which is to say I don’t see any mistakes from a theoretical standpoint. And having more money to do the kinds of things we want to do (fund schools, care for the poor, maintain global military hegemony, rescue the states from their pension crises) is obviously appealing. Probably too appealing, and here’s where we get to my criticism of MMT. (I realize this wasn’t the most comprehensive steelmanning, and if anyone thinks I left anything out, I’m looking at you Boonton, please let me know in the comments.)

The first criticism is brought up in the Planet Money podcast itself, and comes from another left-leaning economist, Tom Palley. Palley also feels that mainstream economics is flawed, particularly its obsession with having a balanced budget, and thus there are some elements of MMT he really likes, but he doesn’t think it’s practical to use taxes to fight inflation:

Politics doesn’t work like that. Taxes are very, very contested. No one wants their taxes raised. It’s very hard for politicians to raise taxes. They’re very slow to do it because guess what? They don’t get re-elected if they do.

Kelton has an answer for that: build in automatic changes to taxation as the economy changes, so that you’re not counting on congress to raise taxes when inflation starts going up; it happens automatically. It’s a clever idea, but it’s not necessarily any more politically feasible to pass a law that automatically raises taxes than to pass a law which just raises taxes at the time, and it might, in fact, be a lot more difficult, given that congress doesn’t generally like to give away its power. Also there’s the principle of legislative entrenchment, which means one congress can’t bind a future congress to do anything even if it wants to.
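
For what it’s worth, here’s a crude sketch of what such an automatic rule might look like. This is my guess at the mechanism, not Kelton’s actual proposal, and all the parameters are made up:

```python
def automatic_tax_rate(base_rate, inflation, target=0.02, sensitivity=1.5):
    """Hypothetical automatic stabilizer: push the tax rate up when
    inflation runs above target, and down when it runs below, without
    waiting on congress. Every parameter here is invented."""
    return max(0.0, base_rate + sensitivity * (inflation - target))

print(automatic_tax_rate(0.20, 0.02))  # 0.2: inflation on target, no change
print(automatic_tax_rate(0.20, 0.05))  # ~0.245: taxes tighten automatically
```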

The problem, of course, is that any form of tax increase is difficult, even if it’s in the future, and any form of spending is easy. And if MMT’s only contribution is to make it easier to increase spending and harder to increase taxes, then it will almost certainly end up being viewed as a net negative when the full history of this age is finally written.

For the sake of argument let’s assume that we can effortlessly raise taxes in response to inflation, as effortlessly as the Federal Reserve changes the short-term interest rate. How do we know what level to raise the taxes to? Are we sure we understand inflation, and the enormously complicated chain of incentives and behaviors and chaos that comprises the modern economy, well enough not to dramatically undershoot or overshoot the mark? Let’s just start with inflation: how well do we even understand that? Well, interestingly enough, in the podcast I’ve been referencing they quote Kelton as saying:

…nobody has a good model of inflation right now. And she thinks the government could spend a lot more money right now, and we’d still probably be fine.

(Am I the only one who thinks that first “and” should be a “but” and that the word “probably” is worrisome?)

Maybe this is understood better than I think. Maybe there’s some great way of determining exactly what taxes should be implemented which accounts for tax evasion, and the health of the economy, and all potential black swan events whether positive or negative. But even if we master taxes we would still have the question of what happens to the concepts of debt, deficits, government bonds and interest rates. Do we just junk all of them? This hardly seems possible, not without catastrophic consequences. Perhaps it helps to start by considering something smaller. One big worry that deficit hawks have is that there will be a loss of confidence, and the interest rate the government has to pay on outstanding debt will start rising. This would mean a greater portion of the budget would go to servicing the debt, leaving less available for everything else. (As a point of reference, we currently spend 6% of the budget on interest payments.)
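
A quick calculation shows why this worries the hawks. The numbers below are round, illustrative figures (in the neighborhood of the 2018 ones, but don’t hold me to them), and in reality the cost would rise more gradually, since existing bonds only reprice as they mature:

```python
def interest_share(debt, avg_rate, budget):
    """Fraction of the federal budget consumed by interest on the debt."""
    return debt * avg_rate / budget

# ~$21 trillion of debt against a ~$4 trillion budget, round numbers:
print(interest_share(21e12, 0.012, 4e12))  # ~0.063: roughly today's 6%
print(interest_share(21e12, 0.05, 4e12))   # ~0.26: over a quarter of the budget at 5%
```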

What happens if interest rates start rising under MMT? Do interest payments continue as normal? Do we stop borrowing altogether? What happens to the $21+ trillion we’ve already borrowed? Do we pay it all off in a vast orgy of money creation? I assume not; surely, even if nothing else is, that would have to be inflationary. If we keep everything the same with bonds, but switch to MMT with respect to spending and taxes, does that cause interest rates to rise through a loss of confidence? (I mean, we have just kind of repudiated the whole concept of debt.) But I guess under MMT, as long as inflation is in check, we don’t care how much we’re spending on interest? But does that make rates go up even more in some kind of positive feedback loop?

Maybe I’m missing something obvious, and maybe they have some straightforward plan for all of this; maybe I’ll eventually have an aha moment similar to the one I had with inflation. A quick Google search came up with an explanation that bonds are used under MMT as a way of setting the short-term interest rate, but I’m still not sure how that applies to the behavior of the already outstanding debt. If anyone wants to point me at something on this topic, I’d be grateful.

If they do manage to clear all the hurdles I’ve mentioned thus far, there’s still one final hurdle, which doesn’t need to be cleared now, but will have to be cleared eventually. I mentioned above that the one big caveat of MMT was that all your debts had to be in the same currency as the one you can create. For the moment the dollar is still the world’s reserve currency, which basically means that all debts are denominated in a currency we can create. (This makes us singularly positioned as an MMT candidate.) Now, imagine that we switched over to using MMT as the guiding ideology for federal spending and taxation, and that it works great. What happens to this system when (not if) the dollar loses its place as the world’s reserve currency? Is there some smooth transition back to the old way of doing things? Or does the entire thing explode in a fiery disaster where the living envy the dead? I suspect neither, but this is not something we have any way of knowing, since we’re deep into speculative territory even talking about switching to MMT, let alone a discussion of how we might switch back.

Additionally, one other interesting thing occurs to me. Does switching to MMT hasten the end of the dollar’s status as reserve currency? Are people going to be more hesitant to enter into contracts denominated in dollars if the US government is on record as saying they’re going to create as many dollars as they feel like? It’s hard to see how it wouldn’t, given the already substantial inclination of people to switch to things like bitcoin. An inclination which would only be enhanced by any movement in the direction of MMT.

It should be noted, here at the end, that there is a lot of space between the modern monetary theorists and the people who absolutely insist on a balanced budget, and I’ve only covered a small slice of it. But in many ways people who are “MMT friendly” without directly advocating for it are actually harder for me to understand. These people seem to be saying that the debt will matter at some point, but that despite being over $21 trillion and over 100% of GDP, that point is not yet. The MMTers at least have a theory for why it will never matter, and it’s definitely theoretically interesting. But practically, I think it’s a horrible idea.

Perhaps the biggest problem is one I keep coming back to. For a system to work it has to, on some level, make sense to the average person (or the average congressperson, which might be an even lower bar), particularly in light of the fact that we’ve given that “average person” the power to vote. It’s possible that the understanding of the masses won’t matter in our post-democratic future, when the AI overlords realize that debt and deficit are silly biological fallacies, but until that time comes, no matter how much you try, you’re never going to convince the average person that $21 trillion of debt doesn’t matter, and on this point, I think they’re right.


If your own budget is balanced and you’re running a surplus (I know, pretty rare these days) then consider donating.