Month: May 2017

Is Pornography a Supernormal Stimulus?

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:


Or download the MP3


Recently I read a fascinating book review on my favorite blog, SlateStarCodex. Scott was reviewing The Hungry Brain, by Stephan J. Guyenet. His review intrigued me enough that I immediately bought the book and started reading it. If you’re interested in the neurology of eating, and why we overeat, I would definitely recommend it. That said, it is not primarily about how to lose weight; it’s more about the brain’s system for determining whether we’re full or not, and how the modern world has created an environment which overwhelms that system. As I said, the book is very intriguing, particularly in the way that it rejects the idea of a balanced diet, but I’m not going to spend any time on that. Instead I’d like to focus on a concept the book brings up sort of in passing, though at the same time it could be said to be the overarching theme of the book as well: the concept of supernormal stimuli.

Guyenet introduces the subject by relating the findings of a study which had been conducted on the nesting habits of ringed plovers. The two scientists conducting the study discovered that the birds preferred exaggerated artificial eggs to their own eggs. Guyenet summarizes the results as follows:

A typical ringed plover egg is light brown with dark brown spots. When Koehler and Zagarus offered the birds artificial eggs that had a white background and larger, darker spots, they readily abandoned their own eggs in favor of the decoy. Similar experiments with oystercatchers and herring gulls showed that they prefer larger eggs, to the point where they will leave their own eggs in favor of absurdly large artificial eggs that exceed the size of their own bodies.

You can see why he introduces this topic in a book about overeating, particularly the ways in which the modern world has created what might be called supernormal food. But technology has not only changed the food we eat, it’s changed nearly everything about our lives when compared to those of our distant ancestors. And it’s those other changes that I want to examine.

The term “supernormal stimuli” was coined later by another scientist to describe stimuli like the absurdly large eggs: things which are better than anything found in nature, but which paradoxically produce worse outcomes. It’s obviously bad for a bird if she spends all of her time sitting on an artificial egg as big as she is, rather than sitting on her own eggs.

As I said, I’m more interested in looking at the role of the supernormal beyond the obvious areas of food and birds’ eggs, and Guyenet himself acknowledges the potentially wider application of the phenomenon:

It seems likely that certain human innovations, such as pornography, gambling, video games and junk foods are all supernormal stimuli for the human brain.

The whole concept is fascinating to me, and I can imagine all manner of things it might explain. For instance, Guyenet mentions pornography, and those of you who’ve been listening to this podcast for a while know I have a deep distrust of the conventional wisdom on the subject of pornography, so let’s start there. How might it be, as Guyenet suggested, a supernormal stimulus?

Well, first let’s step back and examine why there are supernormal stimuli in the first place. It all stems from the fact that certain things just don’t occur in nature, primarily because they’re impossible or at least extremely rare. Consequently there was never any evolutionary pressure to protect against these nonexistent things. As Guyenet points out in the book, in the case of food, there was never any danger of people regularly having 1,000-calorie meals two or three times a day, seven days a week, for years on end. The food supply just wasn’t that stable. And thus the body has very little in the way of defense against gaining weight on that kind of diet. In a similar fashion there was never any danger of a bird abandoning her eggs for eggs as big as she was, because she could never lay those eggs in the first place. Which is to say there’s no evolutionary backstop against this kind of thing. There’s no innate protection against going too far in one direction.

From an evolutionary perspective, the rule “bigger is better” worked because scientists were never sneaking into birds’ nests and putting in massive artificial eggs. Now it is true that cuckoos get other birds to raise their larger eggs by exploiting these preferences, and the book goes into detail on that, but that doesn’t make the situation any less problematic; it just shows that organisms can’t even fully protect against natural supernormal stimulation. And if they can’t manage that, how much worse are they going to be at protecting against artificial supernormal stimulation?

With this explanation in place I think the idea that pornography is a supernormal stimulus should be self-evident. But if you remain unconvinced I’ll spell it out. In essence, “Life is a game of turning energy into kids,” another quote from the book, which Guyenet borrows from anthropologist Herman Pontzer. And whereas overeating is supernormal on the energy side of this game, pornography might be supernormal on the kids side. Just as birds have evolved to really want to sit on eggs, humans have evolved to really want to have sex, since both increase the number of offspring they have. But just as birds will sit on large artificial eggs in preference to sitting on actual eggs, it’s very likely that humans will watch large amounts of artificial sex in preference to having actual sex.

There are of course arguments which could be made against this assertion. You could argue that humans are different from birds. This is undoubtedly true, but given the enormous demand for pornography, is there really any evidence that humans are any less stimulated when it comes to sex than birds are when it comes to sitting on eggs?

You might also argue that even if pornography has exactly the effect I described, it’s a good thing, because we’re better off with fewer kids. Perhaps, but as I pointed out in a previous post, most developed countries already have below-replacement fertility, and the generation of people raised on internet pornography is only just starting to hit peak childbearing age.

Finally, you might argue that watching people have sex is not the same as having sex. Once again that’s certainly true, but there is also some large segment of the population for whom it’s obviously close enough, and getting closer. Pornography is only getting more realistic, which means its potential as a supernormal stimulus is only going to increase.

The other day I was taking a break and ended up watching a clip from the Conan O’Brien show where he was doing his “Clueless Gamer” segment. In this particular edition they had him playing a VR game on the Oculus Rift. In the segment, once he finds out that it’s a virtual reality game, literally the next thing out of his mouth is that VR is for sex. Now, I don’t think that Conan is personally longing for a world of VR sex, but there are lots of people out there who are. And given the prominence of pornography on the internet, can there be any argument that once technology gets to a certain point, pornography will be equally ubiquitous in virtual reality?

As I said, once this happens, and if past technological progress is any guide, virtual reality sex will become increasingly indistinguishable from the real thing. And any arguments about whether pornography is voyeuristic as opposed to participatory will become increasingly moot. When this happens we can hope that we have some baseline level of morality which will kick in, and that taboos against VR sex will keep it from becoming widespread. But I’ve seen no reason to hope that this will be the case. Thus far modernity has done a remarkable and quite thorough job of knocking down taboos and sidelining nearly everything which resembles traditional morality. Which makes it very difficult for me to imagine that VR sex will be the one place where we finally hold the line.

As usual when discussing pornography there is the standard assembly of people who are ready to defend it, and VR pornography is no exception to this. Just a cursory search turns up the following article from TechCrunch which discusses worries about greater realism and where we are informed that:

  • That any worries about VR pornography being too realistic just mean that we need greater “porn literacy”.
  • That worries about VR pornography should be viewed in the same light as worries that bicycles would turn women into lesbians.
  • That “the fear of VR porn is simply more technophobia as we’ve seen so many times in the past.”
  • That being able to use VR to switch your own gender will allow people to “open up brave new dimensions to their own sexuality and sensuality.”

These are all bold predictions for a technology that’s barely in its infancy. And are they really going to put forth the argument that providing something virtually indistinguishable from actual sex is the same thing as bicycle riding, and therefore any worries should be dismissed? This is where I think the framework of the supernormal stimulus really comes in handy. Worries about VR pornography map very well onto the analogy of the bird egg we started with. VR pornography replaces the stimulus for some deep evolutionary drive with something artificially supercharged. In the example of the bicycles, what deep evolutionary drive were they supposed to be stimulating? The need to go down hills fast? And what are bicycles an artificially supercharged version of? Walking? In other words, I think that specific point from the article is definitely an apples-to-oranges comparison.

I have no problem granting that there has been technophobia in the past which later proved to be ungrounded. One common example I hear frequently mentioned was the fear that people would asphyxiate on the first trains because of their high speed (over 20 mph). And if it will make you feel better, I have no problem admitting that this was an example of ungrounded technophobia. But if I’m going to admit this, then I think it’s only fair, on the other side, for those pointing out past overreactions to admit instances where fear of technology fell far short of reality. Prior to World War I lots of people worried about aerial bombardment (which didn’t really come into its own until World War II), but how many people worried about the carnage which could be inflicted by more advanced artillery and the machine gun? And for those who did fear aerial bombardment, it turns out that they were just a little bit premature.

All of this is to say that yes, it is certainly possible that, as the article claims, worries about VR pornography are overblown. But it appears more likely that people who want to draw analogies between these worries and past instances of technophobia are missing important differences, and further, that not all previous instances of people being afraid of technology have ended up being groundless. Sometimes we have every reason to worry about technology, and we may in fact underestimate how bad it is.

If opponents can’t rely on historical analogies to dismiss the idea that pornography, and specifically VR pornography, is a harmful supernormal stimulus, perhaps they can fall back on the data? Here I think the opponents continue to be on shaky ground, though it’s hard to get a good sense of the data. Pornography is one of those very divisive issues where it’s hard to separate facts from opinion and anecdote, but one of the most common ideas I came across was to compare pornography to alcohol:

For some people alcohol simply has the effect of making them more relaxed, letting them have more fun. For other people it’s true that alcohol can increase the likelihood that somebody will behave in a violent way.

But if I simply make the overall generalisation alcohol causes violence or leads to violence, you’d probably say that’s glossing over a lot of the nuances.

Similarly with pornography, for some people, it may be viewed as a positive aspect of their life and does not lead them in any way to engage in any form of anti-social behaviour. For some people who do have several other risk factors, it can add fuel to the fire.

OK, so pornography is alcohol? As you can imagine, this does nothing to make me feel better about things. First, as a Mormon, I’m also pretty opposed to alcohol. Second, notice that we’re not even talking about VR pornography, which may be to normal pornography what opiates are to alcohol. Finally, if it is alcohol, could we at least do a better job of keeping it away from kids? The few attempts at this which have been made have been dismissed as puritanical at worst and unworkable at best.

In the end the data has enough ambiguity that it will probably support whichever position you came in with (which is true for most things). But even if the data showed that pornography had a positive effect (which some people think it does), there would still be reasons to doubt that conclusion. When it comes to pornography we’re dealing with a very short time horizon during which the impact could be discernible. If, as I suggest, the more realistic the pornography the greater its potential damage, then we’ve had essentially no time to evaluate the effects of VR porn, and even video pornography has only been widely available on the internet for about ten years. We’re thus in a situation where, on the one hand, there’s not a lot of good experimental short-term data, and on the other hand, it hasn’t been nearly long enough to have any idea of the societal impacts.

And of course this is something I come back to over and over again. People dismiss a danger based on the experience of a few short years, when some things take decades if not longer to play out.

I had initially intended to use pornography as just one example of supernormal stimuli among many, but apparently I had more to say on the subject than I thought. Still, it might be useful before we end to look at one more potential example. I’ve already talked about virtual reality, and even though I worry that pornography will be a big part of that (some people estimate it will be the third largest category), the biggest use for VR will be video games. And incidentally, video games are another thing mentioned by Guyenet, in the quote at the beginning of this post, as a potential supernormal stimulus.

This ties into many of the themes of this blog. For one, virtual reality might be a step in the direction of transhumanism, and as I am mostly opposed to transhumanism, this is one more thing to add to the list. Secondly, there are some people who believe that Fermi’s Paradox can be explained by VR: that intelligent species get to a point where they have no need to explore or expand because they can simulate all the exploration (and anything else) they desire. And finally, it gets back to the issues of community and struggle, both of which, I would argue, video games provide a poor facsimile of.

Discussing video games brings up one of the symptoms of a supernormal stimulus, one which I haven’t discussed yet, but which could apply to pornography, food, and video games: addiction. I didn’t bring it up previously because people generally don’t talk in terms of an addiction to food (it’s hard to view something you need to live as a possible addiction), though if you read Guyenet’s book it’s easy to see how people with leptin deficiency might easily be classified as food addicts. People also dispute whether there’s such a thing as pornography addiction (though I don’t), and there’s plenty of harm attributed to pornography without bringing addiction into it. But when it comes to video games, addiction and excessive time spent are generally viewed as the primary harm.

And as it turns out, for all of these things, but perhaps especially for video games, the addiction is the primary evidence of their status as supernormal stimuli. In our distant past there were figurative buttons which evolved to indicate a situation that was extremely advantageous from the standpoint of survival and reproduction. In nature these buttons were pressed infrequently, and most of the time they were associated with tangible rewards. Technology has allowed us to find these buttons, and then mash them continually for as long as we want.

These buttons can convince us we’re doing something useful by giving us virtual rewards which feel real (also known as operant conditioning). They can convince us we’re actually struggling by letting us overcome fake challenges. And they can convince us that we’re engaged socially even though we’re just yelling at strangers. And this is the problem with all of this: how do we know we’re not sitting on a giant fake egg while the real eggs rot and spoil in the sun? How do we recognize these supernormal stimuli as traps and avoid them, when there are powerful inbuilt urges convincing us that Twinkies are better than real food, pornography is better than real sex, and video games are better than real life?

You may disagree with how bad any one of these things is, or how big of a problem they represent, or whether they are in fact examples of supernormal stimuli. But I don’t think you can argue with the existence of supernormal stimuli, nor with the motivation for people to use technology to continue turning up the dial on their power and effect. As I said at the very beginning, I think the concept of the supernormal stimulus has wide-ranging applications and consequences for our modern world, and it’s definitely a subject I intend to revisit. Technology has gotten to the point where I believe there are all manner of supernormal creations, and if we fail to recognize the “super” part of that equation, and continue to think that all of this is normal, the consequences could be much larger and much worse than we imagine.


I’m working on figuring out how to make my donation appeal a supernormal stimulus, but until then pretend that it is, and imagine you experience an overwhelming desire to give me money.


Job Automation, or Can You Recognize a Singularity When You’re In It?

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Over the last few months, it seems that regardless of the topic I’m writing on, it has some connection, however tenuous, to job automation. In fact, just last week I adapted the apocryphal Trotsky quote to declare, “You may not be interested in job automation, but job automation is interested in you.” On reflection I may have misstated things, because actually everyone is interested in job automation, they just don’t know it. Do you care about inequality? Then you’re interested in job automation. Do you worry about the opiate epidemic? Then you’re interested in job automation. Do you desire to prevent suicide by making people feel like they’re needed? Then you’re interested in job automation. Do you use money? Does that money come from a job? Then you’re interested in job automation. Specifically in whether your job will be automated, because if it is, you won’t have it anymore.

As for myself, I’m not merely interested in job automation, I’m worried about it, and in this I am not alone. It doesn’t take much looking to find articles describing the decimation of every job from truck driver to attorney, or even articles which claim that no job is safe. But not everyone shares these concerns, and whether they do depends a lot on how they view something called the Luddite Fallacy. You’ve probably heard of the Luddites, those English textile workers who smashed weaving machines between 1811 and 1816, and if you have, you can probably guess what the Luddite Fallacy is. But in short, Luddites believed that technology destroyed jobs (actually that’s not quite what they believed, but it doesn’t matter). Many people believe that this is a fallacy, that technology doesn’t destroy jobs. It may get rid of old jobs, but it opens up new, and presumably better, jobs.

Farming is the biggest example of this fallacy in action. In 1790, farmers composed 90% of the US labor force; currently they’re only 2%. Where did the 98% of people who used to be farmers end up? They’re not all unemployed, that’s for sure. Which means that the technology which put nearly all of the farmers out of work did not actually result in any long-term job loss. And the jobs which have replaced farming are all probably better. This is the heart of things for people who subscribe to the Luddite Fallacy: the idea that the vast majority of jobs which currently exist were created when labor and capital were freed up as technology eliminated old jobs. And farmers aren’t the only example of this.

More or less, this is the argument in favor of the fallacy, in support of the idea that you don’t have to worry about technology putting people out of work. And people who think the Luddite Fallacy still applies aren’t worried about job automation, because they have faith that new jobs will emerge. And just as in the past, when farmers became clerks and clerks became accountants, as accounting is automated, accountants will become programmers, and when at last computers can program themselves, programmers will become musicians or artists or writers of obscure, vaguely LDS, apocalyptic blogs.

The Luddite Fallacy is a strong argument, backed up by lots of historical evidence. The only problem is, just because that’s how it worked in the past doesn’t mean there’s some law saying it has to continue to work that way. And I think it’s becoming increasingly apparent that it won’t.

Recently the Economist had an article on this very subject, and they brought up the historical example of horses being replaced by automobiles. As they themselves point out, the analogy can be taken too far (a point they mention right after they discuss the number of horses who left the workforce by heading to the glue factory). But the example nevertheless holds some valuable lessons.

The first lesson we can learn from the history of the horse’s replacement is that horses were indispensable for thousands of years, until suddenly they weren’t. By this I mean that the transition was very rapid (it took about 50 years) and its full magnitude was only obvious in retrospect. What does this mean for job automation? To start with, if it’s going to happen, then 50 years is probably the longest it will take (since technology moves a lot faster these days). Additionally, it’s very likely that the process has already begun, and we’ll only be able to definitively identify the starting point in retrospect. Just looking at self-driving cars, I can remember the first DARPA Grand Challenge in 2004, when not a single car finished the course, and now look at how far we’ve come in just 13 years.

The second lesson we can learn concerns the economics of the situation. Normally the Luddite Fallacy kicks in because technology frees up workers and money which can be put to other uses. This is exactly what happened with horses. The advent of tractors and automobiles freed up capital, and it freed up a lot of horses. Anyone who wanted a horse had access to plenty of cheap horses. And yet that didn’t help. As the article describes it:

The market worked to ease the transition. As demand for traditional horse-work fell, so did horse prices, by about 80% between 1910 and 1950. This drop slowed the pace of mechanisation in agriculture, but only by a little. Even at lower costs, too few new niches appeared to absorb the workless ungulates. Lower prices eventually made it uneconomical for many owners to keep them. Horses, so to speak, left the labour force, in some cases through sale to meat or glue factories. As the numbers of working horses and mules in America fell from about 21m in 1918 to only 3m or so in 1960, the decline was mirrored in the overall horse population.

In other words, there will certainly be a time when robots will be able to do certain jobs but humans will still be cheaper and more plentiful, and as with horses, that will slow automation down, “but only by a little.” And yes, as I already mentioned, the analogy can be taken too far; I am not suggesting that surplus humans will suffer a fate similar to surplus ungulates (gotta love that word). But with inequality a big problem, and one which is getting bigger, we obviously can’t afford even a 10% reduction in real wages, to say nothing of an 80% reduction. And that’s while the transition is still in progress!

When most people think about this problem, they are mostly concerned with unemployment, or more specifically with how people will pay the bills or even feed themselves if they have no job and no way to make money. Job automation has the potential to create massive unemployment, and some will argue that this process has already started, or that in any event the true unemployment level is much higher than the official figure because many people have stopped looking for work. Also, while the official figures are near levels not seen since the dotcom boom, they mask growing inequality, significant underemployment, an explosion in homelessness, and increased localized poverty.

Thus far, whatever the true rate of unemployment, and whatever weight we want to give to the other factors I mentioned, only a small fraction of our current problems come from robots stealing people’s jobs. A significant part comes from manufacturing jobs which have moved to other countries. (In the article they estimate that trade with China has cost the US 2 million jobs.) In theory, these jobs have been replaced by other, better jobs in a process similar to the one described by the Luddite Fallacy, but it’s becoming increasingly obvious, both because of growing inequality and because of underemployment, that when it comes to trade and technology the jobs aren’t necessarily better. Even people who are very much in favor of both free trade and technology will admit that manufacturing jobs have largely been replaced with jobs in the service sector. For the unskilled worker, not only do these jobs not pay as much as manufacturing jobs, they also appear to be less fulfilling.

We may see this very same thing with job automation, only worse. So far the jobs I’ve mentioned specifically have been attorney, accountant, and truck driver. The first two are high-paying white-collar jobs, and the third is one of the most common jobs in the entire country. So we’re not seeing a situation where job automation applies to just a few specialized niches, or where it starts with the lowest-paying jobs and moves up. In fact it would appear to be the exact opposite. You know what robots are, so far, terrible at? Folding towels. I assume they are also pretty bad at making beds and cleaning bathrooms, particularly if they have to do all three of those things. In other words there might still be plenty of jobs in housekeeping for the foreseeable future, but obviously this is not the future people had in mind.

As I’ve said, I’m not the only person who’s worried about this. A search on the internet uncovers all manner of panic about the coming apocalypse of job automation, but where I hope to be different is in pointing out that job automation is not something that may happen in the future and may be bad. It’s something that’s happening right now, and it’s definitely bad. This is not to say that I’m the first person to say job automation is already happening, nor am I the first person to say that it’s bad. Where I do hope to be different is in pointing out some ways in which it’s bad that aren’t generally considered, tying it into larger societal trends, and most of all pointing out how job automation is a singularity which we don’t recognize as such because we’re in the middle of it. For those who may need a reminder, I’m using the term singularity as shorthand for a massive, technologically driven change in society, one which creates a world completely different from the world which came before.

The vast majority of people don’t look at job automation as a singularity. They view it as a threat to their employment, and worry that if they don’t have a job they won’t have the money to eat and pay the bills, and they’ll end up part of the swelling population of homeless people I mentioned earlier. But if the only problem is the lack of money, what if we fixed that problem? What if everyone had enough money even if they weren’t working? Many people see the irresistible tide of job automation on the horizon, and their solution is something called a guaranteed basic income. This is an amount of money everyone gets regardless of need and regardless of whether they’re working. The theory is that if everyone were guaranteed enough money to live on, we could face our jobless future and our coming robot overlords without fear.

Currently this idea has a lot of problems. For one, even if you took all the money the federal government spends on everything and gave it to each individual, you’d still only end up with about $11,000 per person per year. Which is better than nothing, and probably (though just barely) enough to live on, particularly if you had a group of people pooling their money, like a family. But it’s still pretty small, and you only get this amount if you stop all other spending, meaning no defense, no national parks, no FTC, no FDA, no federal research, etc. More commonly people propose taking just the money that’s currently being spent on entitlement programs and dividing that up among just the adults (not everyone). That still gets you to around $11,000 per adult, which is the same inadequate amount I just mentioned, but with an additional penalty for having children, which may or may not be a problem.
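
Since the claim rests on simple division, here’s a minimal sketch of the arithmetic. The four input figures below are my own rough, rounded 2017-era assumptions, not numbers from any official source, so plug in your own if you don’t like mine:

    # Back-of-the-envelope check on the guaranteed basic income figures above.
    # All four inputs are rough 2017-era assumptions; adjust as you see fit.
    total_federal_spending = 3.9e12  # assumed: roughly $3.9 trillion/year, all programs
    entitlement_spending = 2.5e12    # assumed: roughly $2.5 trillion/year on entitlements
    population = 325e6               # assumed: roughly 325 million people
    adults = 250e6                   # assumed: roughly 250 million adults

    # Option 1: redirect everything the federal government spends, to everyone.
    per_person = total_federal_spending / population   # about $12,000/year
    # Option 2: redirect just entitlement spending, to adults only.
    per_adult = entitlement_spending / adults          # about $10,000/year

    print(f"All spending, per person: ${per_person:,.0f}/year")
    print(f"Entitlements only, per adult: ${per_adult:,.0f}/year")

Both options land in the same ballpark as the $11,000 figure above, and note that the first only works if every other federal program is zeroed out, while the second merely cannibalizes the existing safety net.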

As you can imagine, there are some objections to this plan. If you think the government already spends too much money, then this program is unlikely to appeal to you, though it does have some surprising libertarian backers. But there are definitely people who worry that this is just thinly veiled communism, and that it will lead to a nation of welfare recipients with no incentive to do anything. That while it might make the jobless future slightly less unfair, in the end it will just accelerate the decline.

On the other hand there are the futurists, who imagine that a guaranteed basic income is the first step towards a post-scarcity future where everyone can have whatever they want. (Think Star Trek.) Not only is the income part important but, as you might imagine, job automation plays a big role in visions of a post-scarcity future. The whole reason people worry about robots and AI stealing jobs is that they will eventually be cheaper than humans. And as technology improves, what starts out being a little bit cheaper eventually becomes enormously cheaper. This is where the idea, some would even say the inevitability, of the post-scarcity future comes from. These individuals at least recognize we may be heading for a singularity; they just think that it’s in the future and it’s going to be awesome, while I think it’s here already and it’s going to be depressing.

All of this is to say that there are lots of ways to imagine job automation going really well or really poorly in the future, but that’s the key word: the “future.” In all such cases people imagine an endpoint, either a world full of happy people with no responsibilities other than enjoying themselves, or a world full of extraneous people who’ve been made obsolete by job automation. But of course neither of these two futures is going to happen in an instant, even though they’re both singularities of a sort. And that’s the problem: singularities are difficult to detect when you’re in them. I often talk about the internet being a soft singularity, and yet, as Louis C.K. points out in his famous bit about airplane wi-fi, we quickly forget how amazing the internet is. In a similar fashion, people can imagine that job automation will be a singularity, but they can’t imagine that it already is a singularity, that we are in the middle of it, or that it might be part of a larger singularity.

But I can hear you complaining that while I have repeatedly declared that it’s a singularity, I haven’t given any reasons for that assertion, and that’s a fair point. In short, it all ties back into a previous post of mine. As I said at the beginning, it has seemed recently that no matter what I’m writing about, it ties back into job automation. The post where this connection was the most subtle, and yet at the same time the most frightening, was the one I wrote about the book Tribe by Sebastian Junger.

Junger spent most of the book talking about how modern life has robbed individuals of a strong community and the opportunity to struggle for something important. He mostly focused on war, because of his background as a war correspondent with time in Sarajevo, but as I was reading the book it was obvious that all the points he was making could be applied equally well to people without a job. And this is why it’s a singularity, and this is also what most people are missing. The guaranteed basic income people, along with everyone else who wants to throw money at the problem, assume that if they give everyone enough to live on, it won’t matter if people don’t have jobs. The post-scarcity people take this a step further and assume that if people have all the things money can buy, then they won’t care about anything else. But I am positive that both groups vastly underestimate human complexity. They also underestimate the magnitude of the change; as Junger demonstrated, there’s a lot more wrong with the world than just job automation, but it fits into the same pattern.

Everyone looks around and assumes that what they see is normal. The modern world is not normal, not even close. If you were to take the average human experience over the whole of history, then the experience we’re having is 20 standard deviations from normal. This is not to say that it’s not better. I’m sure in most ways it is, but when you’re living through things, it’s difficult to realize that what we’re experiencing is multiple singularities, all overlapping and all ongoing: the singularity of industrialization, of global trade, of fossil fuel extraction, of the internet, and finally, underlying them all, the singularity of what it means to be human. As it turns out, job automation is just a small part of this last singularity. What do humans do? For most of human history humans hunted and gathered; then, for ten thousand more years, up until 1790, most humans farmed; and then, for a short period of time, most humans worked in factories. But the key thing is that humans worked! And if that work goes away, if there is nothing left for the vast majority of humans to do, what does that look like? That’s the singularity I’m talking about. That’s the singularity we’re in the middle of.

As I pointed out in my previous post, as warfare has changed, the rates of suicide and PTSD have skyrocketed. Obviously having a job is not a struggle on the same level as going to war, but it is similar. As it goes away, are we going to see similar depression, similar despair, and similar increases in suicide? I think the evidence that we’re already in the middle of this crisis is all around us. There are a lot of disaffected people, formerly useful members of society, who have stopped looking for work and who have decided that a life addicted to opioids is the best thing they can do with their time. This leads directly to the recent surge in Deaths of Despair I also talked about in that post, which we’re seeing on top of the skyrocketing rates of suicide and PTSD. The vast majority of these deaths occur among people who no longer feel useful, in part for the reasons outlined by Junger, and in part because they either no longer have a job or no longer feel their job is important.

In closing, much of what I write is very long-term, though based on some of the feedback I get, that’s not always clear. To be clear, I do not think the world will end tomorrow, or even soon, or necessarily that it will ever end. Rather, I hope to push people to be aware that the future is unpredictable, and that it’s best to be prepared for anything. And also, as we have seen with job automation and the corresponding increase in despair, in some areas the future is already happening.


I am reliably informed that the job of donating to this blog has not been automated, you still have to do it manually.


Catastrophe or Singularity? Neither? Both?

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


One of the central themes of this blog has been that the modern world is faced with two possible outcomes: societal collapse or technological singularity. For those of you just joining us, who may not know, a technological singularity is some advancement which completely remakes the world. The term is most often used in reference to creating an artificial intelligence smarter than the smartest human, but it could also be something like discovering immortality. This is the possible future where technology (hopefully) makes everything alright. But it’s not the only possibility; we’re faced with salvation on one hand and disaster on the other.

This dichotomy was in fact the subject of my very first post. And in that post I said:

Which will it be? Will we be saved by a technological singularity or wiped out by a nuclear war? (Perhaps you will argue that there’s no reason why it couldn’t be both. Or maybe instead you prefer to argue that it will be neither. I don’t think both or neither are realistic possibilities, though my reasoning for that conclusion will have to wait for a future post.)

Once again, in my ongoing effort to catch up on past promises, this is that future post. It’s finally time to fulfill the commitment I made at the very beginning and answer the question: why can’t it be both or neither?

Let’s start with the possibility that we might experience both at the same time. And right off the bat we have to decide what that would even look like. The first thing that pops into my head is the movie Elysium, Neill Blomkamp’s follow-up to District 9. In this movie you have a collapsed civilization on the planet’s surface and a civilization in orbit that has experienced, at a minimum, a singularity in terms of space habitation and health (they have machines that can cure all diseases). At first glance this appears to meet the standard of both a collapse and a singularity happening at the same time and coexisting. That said, it is fiction. And while I don’t think that should immediately render it useless, it is a big strike against it.

As you may recall, I wrote previously about people mistaking fiction for history. But for the moment let’s assume that this exact situation could happen, that one possibility for the future is a situation identical to the one in the movie. Even here we have to decide what our core values are before we can definitively declare that this is a situation where both things have occurred. Or, more specifically, we have to define our terms.

Most people assume that a singularity, when it comes, will impact everyone. I’ve often said that the internet is an example of a “soft” singularity, and indeed one of its defining characteristics is that it has impacted the life of nearly everyone on the planet. Even if less than half of all people use the internet, I think it’s safe to assume that even non-users have experienced its effects. Also, since the number of internet users continues to rapidly increase, it could be argued that it’s a singularity which is still spreading. Whereas in Elysium (and other dystopias) there is no spread. Things are static or getting worse, and for whatever reason the singularity is denied to the vast majority of people. (And if I understand the ending of the movie correctly, it’s being denied just out of spite.) Which is to say that if you think a singularity has to have universal impact, Elysium is not a singularity.

If, on the other hand, you view collapse as a condition where technological progress stops, then Elysium is not a story of collapse. Technology has continued to advance. Humanity has left the Earth, and there appears to be nothing special stopping it from going even farther. This is where core values really come into play.

I’ve discussed the idea of core values previously, and when I did, I mentioned a friend of mine whose core value is for intelligence to escape this gravity well. Elysium either qualifies, or is well on its way to qualifying, for this success condition. Which means that if you’re my friend, Elysium isn’t a story of collapse, it’s a story of triumph.

You may feel that I’ve been cheating, and that what I’m really saying is that collapse and singularity are fundamentally contradictory terms, and that’s why you can’t have both. I will admit that there is a certain amount of truth to that, but as you can see, a lot depends on what your “win” condition is. As another example of this: if you’re on the opposite side of the fence, and your core values incline you to hope for a deindustrialized, back-to-nature future, then one person’s collapse could be your win condition.

You may wonder why I’m harping on a subject of such limited utility, and further, using a mediocre movie to illustrate my point. I imagine that before we even began, all of you were already on board with the idea that you can’t have both a technological singularity and a societal collapse. And I imagine this doesn’t merely apply to readers of this blog, but that most people agree you can’t have both, despite a talented performance from Matt Damon which attempts to convince them otherwise. But in spite of the obviousness of this conclusion, I still think there’s some fuzzy thinking on the subject.

Allow me to explain. If, as I asserted in my last post, all societies collapse, and if the only hope we have for avoiding collapse is some sort of technological singularity, then we are, as I have said from the very beginning, in a race between the two. Now, of course, structuring things as a race completely leaves out any possibility of salvation through religion, but this post is primarily directed at people who discount that possibility. If you are one of those people, and you agree that it’s a race, then you should either be working on some potential singularity or be spending all of your efforts on reducing the fragility of society, so that someone else has as long as possible to stumble upon the singularity, whatever that ends up being.

I admit that the group I just described isn’t large, but it may be larger than you think. As evidence of this I offer up some of the recent articles on Silicon Valley Preppers. Recall that we are looking for people who believe that a collapse is possible but don’t otherwise behave as if we’re in a race in which only one outcome can prevail. In other words, if, like these people, you believe a collapse could happen, you definitely shouldn’t be working on ways to make it more likely by increasing inequality and fomenting division and anger, which seems to have been the primary occupation of most of these wealthy preppers. On top of this, they appear to be preparing for something very similar to the scenario portrayed in Elysium.

Tell me if this description doesn’t come pretty close to the mark.

I was greeted by Larry Hall, the C.E.O. of the Survival Condo Project, a fifteen-story luxury apartment complex built in an underground Atlas missile silo….“It’s true relaxation for the ultra-wealthy,” he said. “They can come out here, they know there are armed guards outside. The kids can run around.” …In 2008, he paid three hundred thousand dollars for the silo and finished construction in December, 2012, at a cost of nearly twenty million dollars. He created twelve private apartments: full-floor units were advertised at three million dollars; a half-floor was half the price. He has sold every unit, except one for himself, he said…. In a crisis, his swat-team-style trucks (“the Pit-Bull VX, armored up to fifty-calibre”) will pick up any owner within four hundred miles. Residents with private planes can land in Salina, about thirty miles away.

A remote, guarded luxury enclave where they can wait out the collapse of the planet? This seems pretty on the money, and don’t even get me started on Peter Thiel’s island.

Far be it from me to criticize someone for being prepared for the worst, though in this particular case I’m not sure that fleeing to a rich enclave will be as good a tactic as they think. John Michael Greer, whom I quote frequently, is fond of pointing out that every time some treasure seeker finds gold coins which have been buried, it’s evidence of a rich prepper from history whose plans failed. Where my criticism rests is on the fact that these preppers seem to spend hardly any resources on decreasing the fragility of the society we already have.

Reading these prepper stories, you find examples of people from Reddit and Twitch and Facebook. What do any of these endeavors do that makes society less fragile? At best they’re neutral, but an argument could definitely be made that all three of these websites contribute to an increase in divisiveness, and by extension actually increase the risk of collapse. But, as I already alluded to, beyond their endeavors, these people are emblematic of the sort of inequality that appears to be at the heart of much of the current tension.

As a final point, if these people don’t believe that a societal collapse and a technological singularity are mutually exclusive, what do they imagine the world will look like when they emerge from their bunkers? I see lots of evidence of how they’re going to keep themselves alive, but how do they plan to keep technology, and more importantly infrastructure, alive?

A few years ago I read a fascinating book about the collapse of Rome. From what I gathered, it has become fashionable to de-emphasize the Western Roman Empire as an entity, an entity which ended in 476 when the final emperor was deposed. Instead, these days some people like to view what came after 476 as very similar to what came before, only with a different group of people in charge, but with very little else changing. This book was written to refute that idea, and to re-emphasize the catastrophic nature of the end of Rome. One of the more interesting arguments against the idea of a smooth transition concerned the quality of pottery after the fall. Essentially, before the fall you had high-quality pottery, made in a few locations, which could be found all over the empire. Afterwards you had low-quality, locally made pottery that was lightly fired and therefore especially fragile: a huge difference in quality.

It should go without saying that a future collapse could have very little in common with the collapse of Rome, but if the former Romans couldn’t even maintain the technology for making quality pottery, what makes us think that we’ll be able to preserve multi-billion-dollar microchip fabrication plants, or the electrical grid, or even anything made of concrete?

The point is, if there is a collapse, I don’t think it’s going to be anything like the scenario Silicon Valley Preppers have in their heads.

And now, for the other half of the post, we finally turn to the more interesting scenario: that we end up with neither. That somehow we avoid the fate of all previous civilizations and don’t collapse, but also, despite having all the time in the world to create some sort of singularity, we don’t manage to do that either.

At first glance I would argue that the “neither” scenario is even more unlikely than the “both” scenario, but this may put me in the minority, which is, I suppose, understandable. People have a hard time imagining any future that isn’t just an extension of the present they already inhabit. People may claim that they can imagine a post-apocalyptic future, but really they’re just replaying scenes from The Road or Terminator 2 (returning to theaters in 3D this summer!). As an example, take anyone living in Europe in 1906: was there a single person who could have imagined what the next 40 years would bring? The two World Wars? The collapse of so many governments? The atomic bomb? And lest you think I’m only focused on the negative, take any American living in 1976. Could any of them have imagined the next 40 years? Particularly in the realm of electronics and the internet. Which is just to say, as I’ve said so often, predicting the future is hard. People are far more likely to imagine a future very similar to the present, which means no collapses or singularities.

It’s not merely that they dismiss potential singularities because they don’t fit with how they imagine the future; it’s that they aren’t even aware of the possibility of a technological singularity. (This is particularly true for people living in less developed countries.) Even if they have heard of it, there’s a good chance they’ll dismiss it as a strange technological religion, complete with a prophet, a rapture, and a chosen people. This attitude is not only found among those with no knowledge of AI; some AI researchers are among its harshest critics. (My own opinion is more nuanced.)

All of this is to say that many people who opt for neither have no concept of a technological singularity, or what it might look like, or what it might do to jobs. Though to adapt my favorite apocryphal quote from Trotsky: you may not be interested in job automation, but job automation is interested in you.

The same lack of information and the same present-day bias in thinking apply equally well to the other end of the spectrum and the idea of society collapsing, but on top of that you have to add in the optimism bias most humans have. This is the difference between the 1906 Europeans and the 1976 Americans. The former would not be willing to spend any time considering what was actually going to happen, even if you could describe it to them in exact detail, while the latter would happily spend as much time as you could spare listening to you talk about the future.

In other words, most people default to the assumption that neither will happen, not because they have carefully weighed both options, but because they have more pressing things to think about.

As I said at the start, I don’t think it can be neither, and I would put the probability of that well below the probability of an eventual singularity. But that is not to say that I think a singularity is very likely either. (If you’ve been reading this blog for any length of time you know that I’m essentially on “Team Collapse.”)

My doubts exist in spite of the fact that I know quite a bit about what the expectations are, and about the current state of the technology. All of the possible singularities I’ve encountered have significant problems, and this is setting aside my previously mentioned religious objections to most of them. To just go through a few of the big ones and give a brief overview:

  • Artificial Intelligence: We obviously already have some reasonably good artificial intelligence, but for it to be a singularity it would have to be generalized, self-improving, smarter than we are, and conscious. I think the last of those is the hardest; even if it turns out that the materialists are totally right (and a lot of very smart, non-religious people think that they aren’t), we’re not even close to solving the problem.
  • Brain uploading: I talked about this in the post I did about Robin Hanson and the MTA conference, but in essence, all of the objections about consciousness are still present here. And, as I mentioned there, if we can’t even accurately model a species with 302 neurons, how do we ever model or replicate a species with over 100 billion?
  • Fusion Power: This would be a big deal, big enough to count as a singularity, but not the game changer that some of the other things would be. Also, as I pointed out in a previous post, if we’re going to keep growing, at a certain point power isn’t the problem, heat is (see the sketch after this list).
  • Extraterrestrial colonies: Perhaps the most realistic of the singularities, at least in the short term, but like fusion, not as much of a game changer as people would hope. Refer to my previous post for a full breakdown of why this is harder than people think, but in short, unless we can find some place that’s livable and turns a net profit, long-term extraterrestrial colonies are unsustainable.
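
On the heat point in the fusion item above, here’s a minimal back-of-the-envelope sketch. The round numbers, roughly 18 terawatts of current world power use, a steady 2.3% annual growth rate, and roughly 174,000 terawatts of sunlight striking the Earth, are assumptions of mine rather than precise figures:

    import math

    # Assumed round numbers, not precise figures:
    current_use_tw = 18.0        # world power consumption today, in terawatts
    growth_rate = 0.023          # assumed steady annual growth in energy use
    solar_input_tw = 174_000.0   # total sunlight striking the Earth, in terawatts

    # Essentially all the energy we use ends up as waste heat, so solve
    # current_use_tw * (1 + growth_rate) ** years = solar_input_tw for years:
    years = math.log(solar_input_tw / current_use_tw) / math.log(1 + growth_rate)
    print(f"~{years:.0f} years")  # roughly 400 years

The exact inputs barely matter; at any steady growth rate the crossover arrives within centuries, and the Earth would grow dangerously hot well before waste heat actually rivaled sunlight. That’s the sense in which heat, not power generation, is the binding constraint.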

In other words, while most people reject the idea of a singularity because they’re not familiar with the concept, even if they were familiar, they might, very reasonably, choose to reject it all the same.

You may think at this point that I’ve painted myself into a corner. For those keeping score at home, I’ve argued against both, I’ve argued against neither, and I’ve argued against a singularity all by itself. (I think they call that a naked singularity. No? That’s something else?) Which leaves me with just collapse. If we don’t collapse I’m wrong, and all the people who can neither understand the singularity nor imagine a catastrophe will be vindicated. In other words, I’ve left myself in the position of having to show that civilization is doomed.

I’d like to think I went a long way towards that in my last post, but this time I’d like to approach it from another angle. The previous post pointed out the many ways in which our current civilization is similar to other civilizations which have collapsed. And while those attributes are something to keep an eye on, even if we were doing great, even if there were no comparisons to be drawn between our civilization and previous civilizations in the years before their collapse, there is still a whole host of external black swans, any one of which would be catastrophic.

As we close out the post, let’s examine a half dozen potential catastrophes, every one of which has to be avoided in the coming years:

1- Global Nuclear War: Whether that be Russia vs. the US, or China’s peaceful rise proving impossible, or some new actor entirely.

2- Environmental Collapse: Which could be runaway global warming, or a human-caused mass extinction, or overpopulation.

3- Energy Issues: Can alternative energy replace carbon-based energy? Will the oil run out? Is our energy use going to continue to grow exponentially?

4- Financial Collapse: I previously mentioned the modern world’s high levels of connectivity, which mean one financial black swan can bring down the entire system, as almost happened in 2008.

5- Natural Disasters: These include everything from supervolcanoes, to giant solar storms, to impact by a comet.

6- Plagues: This could be something similar to the Spanish Flu pandemic, or it could be something completely artificial, an act of bioterrorism for example.

Of course this list is by no means exhaustive. Also remember that we don’t merely have to avoid these catastrophes for the next few decades; we have to avoid them forever, particularly if there’s no singularity on the horizon.

Where is the world headed? What should we do? I know I have expressed doubts about the transhumanists, and about people like Elon Musk, but at least these individuals are thinking about the future. Most people don’t. They assume tomorrow will be pretty much like today, and that their kids will have a life very similar to theirs. Maybe that’s so, and maybe it’s not, but if the singularity or the collapse doesn’t happen during the life of your children, or of their children, it will happen during the lives of someone’s children. And it won’t be both, and it won’t be neither. I hope it’s some kind of wonderful singularity, but we should prepare for it to be a devastating catastrophe.

I repeat what I’ve said from the very beginning. We’re in a race between societal collapse and a technological singularity. And I think collapse is in the lead.


If you’re interested in ways to prevent collapse you should consider donating. It won’t stop the collapse of civilization, but it might stop the collapse of the blog.


Time Preference and the Survival of Civilizations

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


In my ongoing quest to catch up on those topics I promised to revisit someday but never have, in this post I'm turning my attention to a statement I made all the way back in July of last year. (As I said, I've been negligent about keeping my promises.) Back then, as an aside on the topic of taboos, I said:

Of course this takes us down another rabbit hole of assuming that the survival of a civilization is the primary goal, as opposed to liberty or safety or happiness, etc. And we will definitely explore that in a future post, but for now, let it suffice to say that a civilization which can’t survive, can’t do much of anything else.

Well, this is that future post and it’s time to talk about Civilization! With a capital C! And no, not the classic Sid Meier’s game of the same name. Though that is a great game.

To begin with though, in timing that can only be evidence of the righteousness of my cause (that's sarcasm, by the way), I recently listened to several interesting podcasts that directly tied into this topic. (By the way, you all know that you can get this blog as a podcast, right?) The first was a podcast titled Here Are The Signs That A Civilization Is About To Collapse. I confess it wasn't as comprehensive as I had hoped, but their guest, Arthur Demarest, brought up some very interesting points. And if he had a book on civilizational collapse I would have bought it in a heartbeat, but it appears that his books are all academically oriented and mostly focused on the Mayans. In any case, here are some of the points that dovetail well with things I have already talked about:

  1. Civilization allows increasing complexity and connectivity, resulting in increased efficiency. But this connectivity and complexity also increase the fragility of the system. Demarest gave the example of a slowdown in China causing pizza parlors to close in Chile.
  2. This complexity also leads to increased maintenance costs, and overhead. And eventually maintenance expands to the point where there’s very little room for innovation and no flexibility to unwind any of the complexity.
  3. When civilizations get in trouble they often end up doubling down on whatever got them in trouble in the first place. Demarest gives the example of the Mayans who built ever more elaborate temples as collapse threatened, in an effort to prop up the rulers.
  4. A civilization’s strength can often end up being the cause of its downfall.
  5. As things intensify, thinking becomes more and more short-term.
  6. Observations that the current period is the greatest ever often act as a warning that the civilization has already peaked, and the collapse is in progress.

As you may notice, we already check most if not all of these boxes, and I've already talked about all of them in one form or another. But more importantly, what he also points out, and what should be obvious, is that all civilizations collapse. Now you may argue that all we can say for sure is that every previous civilization has collapsed; ours may be different. This is indeed possible. But I think, for a variety of reasons which I mention again and again, that it's safer to assume that we aren't different. If we do make this completely reasonable and precautionary assumption, then the only questions which remain are: when is the current civilization going to collapse, and is there anything we can do to extend its life?

I mentioned that I had listened, coincidentally, and by virtue of the righteousness of my cause (once again sarcasm), to several podcasts which spoke to this issue. The second of these podcasts was Dan Carlin's Common Sense. In the most recent episode he spent the first half of the program talking about the increasing hostility that exists between the two halves of the country, and specifically the hostility between the Antifas (short for anti-fascists) and the hardcore Trump supporters. Carlin mentioned videos of the violence that has been erupting at demonstrations and counter-demonstrations all over the country. I would link to some of these videos, but it's hard to find any that aren't edited in a nakedly partisan fashion by one side or the other. But they're easy enough to find if you do a search.

This is not a new phenomenon; we've had violence since election day, and I already spent an entire post talking about it. But Carlin frames things in an interesting way. He asks us to imagine that we were elected president, and that our only goal was to heal the divisions that exist in the country. How would we do it? What policy would we implement that would bring the country back together again?

Carlin accurately points out that there's no anti-racist policy you could pass that would suddenly make everything all better. In fact it could be argued that we already have lots of anti-racist policies, and that rather than helping, they might be making things worse. In my previous post I pushed for greater federalism, which is less a policy than a roll-back of a lot of previous policies. But as Carlin points out, this is probably infeasible. First, because that's just not how government works: governments don't ever voluntarily become less powerful. And second, because there's not a lot of support for the idea even if the government were predisposed to let it happen.

Carlin spends the second half of the podcast talking about the Syrian missile strike. And in a common theme, this discussion flows into his criticism of the ever-expanding power of the executive. As you probably all know, only Congress has the power to declare war, and it last used that power in 1942 when it declared war on Bulgaria, Hungary and Romania. Since then, although it hasn't used that power, the President has generally still sought congressional approval for military action, what Carlin calls the fig leaf. He points out that Trump didn't even do that. These days if someone dares to mention that this all might be unconstitutional, they are viewed as being very much on the fringe. But Carlin, like me, is grateful when people bring it up, because at least it's being talked about.

As I said, executive overreach and expansion is a common theme for Carlin, and one of the points he always returns to is that whatever tools you give your guy when he's President are going to be used by the other side when they eventually get the presidency back. And this touches on the central idea that I want to explore, the idea that unites the two halves of Carlin's podcast: short-term thinking. Both the current political crisis and the expansion of the presidency are examples of this short-term thinking, and exactly the kind of thing that Demarest was talking about when he described historical civilizations which have collapsed.

As an extreme example of what I mean, let me turn to one final recent podcast, the episode on Nukes from Radiolab. In the episode they examine the nuclear chain of command to determine if there are any checks on the ability of a US President to unilaterally launch a nuclear strike. That is, launch a nuclear strike without getting anyone else's permission. And the depressing conclusion they come to is that there are effectively no checks. This is not to say that someone couldn't disobey the order in that situation, but it's hard to imagine such insubordination would hit 100%. In other words, if Trump really wants to launch an ICBM, ICBMs will be launched.

But, for me, this is an issue which goes beyond Trump, and it's scary basically regardless of who's president. It's also a classic example of short-term thinking. At some point it became clear that in the event of a Soviet first strike there would be no time for a committee to assemble or multiple people to be called, and in that moment, and based on this very narrow scenario, it was decided that sole control of the nuclear arsenal would be given to the President. If I remember the episode correctly, this policy really firmed up during the Kennedy administration (and if you couldn't trust Kennedy, who could you trust?).

One could potentially understand this rationale for vesting all of this power in the President, even if you don't agree with it. But no thought was given to what should be done if the Cold War ever ended, and indeed when it did end, nothing changed. No effort was even made to restrict this control to just the scenario of responding to a Soviet first strike. As it stands, the President can launch missiles entirely at his discretion and for any reason whatsoever.

One would think that if Trump is as dangerous and unstable as people claim, they would be doing everything in their power to limit his ability to unilaterally start a nuclear war. That, at a minimum, they would limit the President's authority over nuclear weapons so that it applied only in situations where another country attacked us first. (I'm not sure how broad to make the standard of proof in this case, but even if it was fairly expansive we'd still be in a much better position than we are now.) Instead, as of this writing, such a concern is nowhere to be found. Rather, the headlines are about another GOP stab at a health bill, or how much the FBI director may have influenced the election, or the sentencing of a woman who laughed at Jeff Sessions (the Attorney General).

Perhaps all of these issues will end up being of long-term importance, though that seems unlikely, particularly for the story about the protestor laughing at Sessions. And even the story about the FBI director concerns something that already happened, and is therefore essentially unchangeable. It's even harder to imagine how any of the issues currently in the news have more long-term importance than the President's singular control of the nuclear arsenal. And that's just one example of long-term dangers being overwhelmed by short-term worries.

You might argue at this point that the stories I mentioned are not unique to this moment in history, that people have been focused on their immediate needs and wants, to the exclusion of longer-term concerns, for hundreds if not thousands of years. I don't agree with this argument; I do think historically it has been different. And as a counterexample I offer up the American Civil War, where the focus may have been almost too long-term. But even if I'm wrong, and historically people were every bit as short-term in their outlook as they are now, the stakes today are astronomically greater.

I wanted to focus on short-term thinking because it all builds up to my favorite definition of what civilization is. You may have noticed that we've come all this way without even clearly defining what we're talking about, and I want to rectify that. Civilization is nothing more or less than low time preference. What's time preference? It's the amount of weight you give to something happening now versus in the future. As the term is commonly used it mostly relates to economics: how much more valuable is $1000 today than $1000 in a month or a year? If $1000 today is the same to you as $1000 in three months, then you have a time preference of zero. If you're a loan shark and you want someone to pay you $2000 next week in exchange for $1000 today, then you have a very high time preference, and are consequently engaging in what may be described as an uncivilized transaction, or at least a low-trust transaction. But of course trust is a big part of civilization.
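To make that arithmetic concrete, here's a minimal sketch in Python. The dollar figures are the ones from the paragraph above; the function names, and the framing of time preference as a compounding discount rate, are my own illustration rather than anything from the podcasts or books under discussion.

```python
# A minimal sketch of time preference expressed as a discount rate.

def present_value(future_amount, periodic_rate, periods):
    """How much a future payment is worth today, given how steeply
    you discount the future (i.e., your time preference)."""
    return future_amount / (1 + periodic_rate) ** periods

def implied_periodic_rate(amount_now, amount_later, periods):
    """The per-period discount rate implied by treating amount_now
    today as equivalent to amount_later after the given periods."""
    return (amount_later / amount_now) ** (1 / periods) - 1

# Zero time preference: $1000 in three months is worth exactly $1000 today.
print(present_value(1000, 0.0, 3))   # -> 1000.0

# The loan shark: $1000 today in exchange for $2000 next week.
weekly = implied_periodic_rate(1000, 2000, 1)
print(weekly)                        # -> 1.0, i.e. 100% per week

# Compounded over 52 weeks, that implies an astronomical annualized rate:
print((1 + weekly) ** 52 - 1)        # -> roughly 4.5e15, or about 2**52
```

The annualized number isn't meant to be precise; the point is that at a loan shark's implied discount rate the future effectively ceases to exist, which is exactly what "very high time preference" means.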

Outside of economics, having a low time preference allows people to plan for the future, to build infrastructure, to establish institutions and perhaps most importantly to rely on the operation of the law, having faith that it’s not important to get justice right this second if you will get justice eventually. Perhaps you can see why I worry about what’s happening right now.

On the other hand, it can easily be seen that corruption, the cancer of civilization, is a high time preference activity: people would rather take a bribe right now, because they have no trust in what the future will bring. When people talk about institutions, the rule of law, societal trust, and even the absence of violence, they're talking about low time preference. And let's all agree right now that it's a little bit confusing for "high" to be bad and "low" to be good.

Everything I've said so far is necessary to show that short-termism isn't a symptom of the decline of civilization; it IS the decline of civilization. But of course things can look fine for quite a while, because of the low time preference which existed up until this point. Meaning that those who came before us invested a lot in the future (because of their low time preference), and we can reap the benefits of those investments for a long time before it finally catches up with us.

Way back in the beginning of this post I stated that if you assume that our civilization is going to eventually collapse, then the only questions we're left with are: when, and is there any way to delay that collapse? I think I've already answered the question of "when?" (Not immediately, but sooner than most people think.) And now we need to look at the question, "What can we do to slow it down?" A simple, but somewhat impractical, answer would be to lower our time preference. But as you can imagine, this exhortation is unlikely to appear on a protest sign any time soon. (Perhaps I'll try it out if we ever have a demonstration in Salt Lake City.) But if we can't get people to lower their time preference directly, perhaps we can do it indirectly.

If you were to use the term sacrifice in place of low time preference, you would not be far from the mark. And restating the entire problem as "We need greater sacrifice" is something people understand, and it just might also make a good protest sign. But stating the solution this way just makes the scope of the problem all the more apparent. Because the last thing any of the people who are currently angry want to be told is that they need to sacrifice more.

It is, as far as I can tell, the exact opposite. All of the interested parties, left and right, rich and poor, minority and non-minority, citizen and immigrant, feel that they have sacrificed enough, that now is the time for them to "get what they deserve." Obviously not every poor person or every minority feels this way, but those who do feel this way are the ones who are out on the streets. And once again it all comes back to time preference. No one wants to wait 10 years for something. No one is content to see their children finally get the rights they've been protesting for (if they even have children), and no one wants to wait four years for the next election.

All of this is not to say that people are entirely unwilling to sacrifice. People make sacrifices all the time for the things they want. But what I'm calling for, if we want to postpone collapse, is sacrifice specifically for civilization, which is, I admit, a fairly nebulous endeavor. But I think it starts with identifying what civilization is, and how it's imperiled. Which is, in part, the point of this post. (In fact, I fully expect all protesting and unrest to stop once it's published.)

Joking aside, I fear there is no simple solution even once you've identified the problem, and it may in fact be that there is nothing we can do to delay the end at this point. To return to Carlin's question about what policies you might implement if you were made President and your one goal was to heal the country: I do think that creating some shared struggle we could all sacrifice for would be a good plan, as good as any, and maybe even the best plan. Which is not to say that it would succeed. And this hypothetical still relies on getting someone like that elected, which also doesn't seem very likely. In other words, things may already be too far gone.

One of my biggest reasons for pessimism is that I don't think people see any connection between the unrest we're currently experiencing (both here and abroad) and the weakening of civilization, and more specifically the country. But there are really only three possibilities: the massive anger which exists can strengthen the country, it can weaken it, or it can have no effect. If you think it's making the country stronger (or even having no effect), I'd love to hear your reasoning. But I think any sober assessment would have to conclude that it can't be strengthening the country, and it can't be having no effect; therefore it must be weakening it. Leaving only the question of by how much.

None of this is to claim that anger about Trump, or alternatively support for Trump (or any of the other issues), will single-handedly bring down the country. But it's all part of a generalized trend towards higher and higher time preference, towards wanting justice and change right now. And I understand, of course, that the differences of opinion which have split the country are real and consequential. But what is the end game? What is the presidential policy that will make it all better? What are people willing to sacrifice? To repeat a quote from Oliver Wendell Holmes that I used in a previous post:

Between two groups that want to make inconsistent kinds of world I see no remedy but force.

It's a dangerous road we're on, and I would argue that as thinking gets more and more short-term, the survival of civilization is at stake. And it's at stake precisely because long-term thinking and planning is what civilization is.

To come back to the assertion that started this all off, the assertion that I promised to return to: a civilization which can't survive can't do much of anything else. Of course at one level this is just a tautology. But at another level it ends up being a question of whether certain things can exist together. Can Trump supporters and Trump opponents live in the same country? Can a country give you everything you think you deserve right now, and yet still be solvent in 100 years? Can you have a system which is really good at reducing violence (as Pinker points out) but never abuses its power?

It's entirely possible that the answer to all of those questions is yes. And I hope that's the case. I hope that my worries are premature. I hope that, similar to the unrest in the late 60s/early 70s, things will peak and then dissipate, and that it will happen without a Kent State shooting, or worse. But I also know that civilization takes sacrifice, it takes compromise, and, however unsexy and dorky this sounds, it takes a low time preference.


You may have considered donating, but never gotten around to it. Perhaps because you have a low time preference and you assume that a dollar someday is as good as a dollar now. Well, on this one issue I have a very high time preference, so consider donating now.