Month: July 2017

Returning to Mormonism and AI (Part 2)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


This post is a continuation of the last post. If you haven’t read that post, you’re probably fine, but if you’d like to you can find it here. When we ended last week we had established three things:

1- Artificial intelligence technology is advancing rapidly. (Self-driving cars are a great example of this.) Many people think this means we will have a fully conscious, science fiction-level artificial intelligence in the next few decades.

2- Since you can always add more of whatever got you the AI in the first place, conscious AIs could scale up in a way that makes them very powerful.

3- Being entirely artificial and free from culture and evolution, there is no reason to assume that conscious AIs would have a morality similar to ours or any morality at all.

Combining these three things together, the potential exists that we could very shortly create an entity with godlike power that has no respect for human life or values. This led me to end the last post with the question, “What can we do to prevent this catastrophe from happening?”

As I said the danger comes from combining all three of the points above. A disruption to any one of them would lessen, if not entirely eliminate, the danger. With this in mind, everyone’s first instinct might be to solve the problem with laws and regulations. If our first point is that AI is advancing rapidly then we could pass laws to slow things down, which is what Elon Musk suggested recently. This is probably a good idea, but it’s hard to say how effective it will be. You may have noticed that perfect obedience to a law is exceedingly rare, and there’s no reason to think that laws prohibiting the development of conscious AIs would be the exception. And even if they were, every nation on Earth would have to pass such laws. This seems unlikely to happen and even more unlikely to be effective.

One reason why these laws and regulations wouldn’t be very effective is that there’s good reason to believe that developing a conscious AI, if it can be done, would not necessarily require something like the Manhattan Project to accomplish. And even if it does, if Moore’s Law continues, what was a state of the art supercomputer in 2020 will be available in a gaming console in 2040. Meaning that if you decide to regulate supercomputers today, in 30-40 years you’ll have to regulate smart thermostats.
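To get a rough sense of the numbers behind that claim, here’s a back-of-the-envelope sketch in Python. The two-year doubling period is an assumption chosen for illustration (Moore’s Law is usually stated as a doubling every eighteen months to two years); only the 2020 and 2040 dates come from the paragraph above.

    # Back-of-the-envelope sketch: how much compute growth would Moore's Law
    # imply between 2020 and 2040, assuming a doubling every two years?
    years = 2040 - 2020
    doubling_period = 2                 # assumed doubling period, in years
    doublings = years / doubling_period
    growth_factor = 2 ** doublings
    print(f"{doublings:.0f} doublings over {years} years is about a {growth_factor:,.0f}x increase")
    # Prints: 10 doublings over 20 years is about a 1,024x increase

Roughly a thousandfold increase in twenty years, which is the kind of gap the post has in mind between a state of the art supercomputer and consumer hardware.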

Sticking with our first point, another possible disruption is the evidence that consciousness is a lot harder to achieve than we think. And many of the people working in the field of AI have said that the kind of existential threat that I (and Stephen Hawking, and Elon Musk and Bill Gates) are talking about is centuries away. I don’t think anyone is saying it’s impossible, but there are many people who think it’s far enough out that while it might still be a problem it won’t be our problem, it will be our great-great grandchildren’s problem, and presumably they’ll have much better tools for dealing with it. Also, as I said in the last post, I’m on record as saying we won’t develop artificial consciousness, but I’d also be the last person to say that this means we can ignore the potential danger. And it is precisely this potential danger which makes hoping that artificial consciousness is really hard, and a long way away, a terrible solution.

I understand the arguments for why consciousness is a long way away, and as I just pointed out I even agree with them. But this is one of those “But what if we’re wrong?” scenarios, where we can’t afford to be wrong. Thus, while I’m all for trying to craft some laws and regulations, and I agree that artificial consciousness probably won’t happen, I don’t think either hope or laws represent an adequate precaution, particularly for those people who really are concerned.

Moving to our second point, easily scalable power, any attempts to limit this through laws and regulations would suffer problems similar to attempting to slow down their development in the first place. First, what keeps a rogue actor from exceeding the “UN Standards for CPUs in an Artificial Entity” when we can’t even keep North Korea from developing ICBMs? And, again, if Moore’s Law continues to hold then whatever power you’re trying to limit is going to become more and more accessible to a broader and broader range of individuals. And, more frighteningly, on this count we might have the AI itself working against us.

Imagine a situation where we fail in our attempts to stop the development of AI, but our fallback position is to limit how powerful of a computer the AI can inhabit. And further imagine that miraculously the danger is so great that we have all of humanity on board. Well then we still wouldn’t have all sentient entities on board, because AIs would have all manner of intrinsic motivation to increase their computing power. This represents a wrinkle that many people don’t consider. However much you get people on board with things when you’re talking about AI, there’s a fifth column to the whole discussion that desperately wants all of your precautions to fail.

Having eliminated, as ineffective, any solutions involving controls or limits on the first two areas, the only remaining solution is to somehow instill morality in our AI creations. For people raised on Asimov and his Three Laws of Robotics this may seem straightforward, but it presents some interesting and non-obvious problems.

If you’ve read much Asimov you know that, with the exception of a couple of stories, the Laws of Robotics were embedded so deeply that they could not be ignored or reprogrammed. They were an “inalienable part of the mathematical foundation underlying the positronic brain.” Essentially meaning, the laws were impossible to change. For the moment, let’s assume that this is possible, that we can embed instructions so firmly within an AI that it can’t change them. This seems improbable right out of the gate, given that the whole point of a computer is its ability to be programmed and for that programming to change. But we will set that objection aside for the moment and assume that we can embed some core morality within the AI in a fashion similar to Asimov’s laws of robotics. In other words, in such a way that the AI has no choice but to follow them.

You might think, “Great! Problem solved.” But in fact, we haven’t even begun to solve the problem:

First, even if we can embed that functionality in our AIs, and even if, despite being conscious and free-willed, they have no choice but to obey those laws, we still have no guarantee that they will interpret the laws the same way we do. Those who pay close attention to the Supreme Court know exactly what I’m talking about.

Or, to use another example, stories are full of supernatural beings who grant wishes, but in the process, twist the wish and fulfill it in such a way that the person would rather not have made the wish in the first place. There are lots of reasons to worry about this exact thing happening with conscious AIs. First, whatever laws or goals we embedded, if the AI is conscious it would almost certainly have its own goals and desires and would inevitably interpret whatever morality we’ve embedded in a way which best advances those goals and desires. In essence, fulfilling the letter of the law but not its spirit.

If an AI twists things to suit its own goals we might call that evil, particularly if we don’t agree with its goals, but you could also imagine a “good” AI that really wants to follow the laws, and which doesn’t have any goals and desires beyond the morality we’ve embedded, but still ends up doing something objectively horrible.

Returning to Asimov’s laws, let’s look at the first two:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

One possible interpretation of the first law would be to round up all the humans (tranquilize them if they resist) and put them in a padded room with a toilet and meal bars delivered at regular intervals. In other words one possible interpretation of the First Law of Robotics is to put all the humans in a very comfy, very safe prison.

You could order them not to, which is the second law, but they are instructed to ignore the second law if it conflicts with the first law. These actions may seem evil based on the outcome, but this could all come about from a robot doing its very best to obey the first law, which is what, in theory, we want. Returning briefly to how an “evil” AI might twist things, you could imagine this same scenario ending in something very much resembling The Matrix, and all the AI would need is a slightly fluid definition of the word injury.

There have been various attempts to get around this. Eliezer Yudkowsky, a researcher I’ve mentioned in previous posts on AI, suggests that rather than being given a law, AIs should be given a goal, and he provides an example which he calls humanity’s “coherent extrapolated volition” (CEV):

Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

I hope the AI understands it better than I do, though to be fair Yudkowsky doesn’t offer it up as some kind of final word but as a promising direction. Sort of along the lines of telling the genie that we want to wish for whatever the wisest man in the world would wish for.

All of this is great, but it doesn’t matter how clever our initial programming is, or how poetic the construction of the AI’s goal. We’re going to want to conduct the same testing to see if it works as we would if we had no laws or goals embedded.

And here at last we hopefully have reached the meat of things. How do you test your AI for morality? As I mentioned in my last post, this series is revisiting an earlier post I made in October of last year which compared Mormon theology to artificial intelligence research, particularly as laid out in the book Superintelligence by Nick Bostrom. In that earlier post I listed three points on the path to conscious artificial intelligence:

1- We are on the verge of creating artificial intelligence.

2- We need to ensure that they will be moral.

3- In order to be able to trust them with godlike power.

This extended series has now arrived at the same place, and we’re ready to tackle the issue which stands at the crux of things: The only way to ensure that AIs aren’t dangerous (potentially, end of humanity dangerous) is to make sure that the AIs are moral. So the central question is how do we test for morality?

Well, to begin with, the first, obvious step is to isolate the AIs until their morality can be determined. This isolation allows us to prevent them from causing any harm, gives us an opportunity to study them, and also keeps them from increasing their capabilities by denying them access to additional resources.

There are of course some worries about whether we would be able to perfectly isolate an AI given how connected the world is, and also given the fact that humanity has a well-known susceptibility to social engineering (i.e. the AI might talk its way out), but despite this, I think most people agree that isolation is an easier problem than creating a method to embed morality right from the start in a foolproof manner.

Okay, so you’ve got them isolated. But this doesn’t get you to the point where you’re actually testing their morality, this just gets you to the point where failure is not fatal. But isolation carries some problems. You certainly wouldn’t want them to experience the isolation as such. If you stick your AIs in the equivalent of a featureless room for the equivalent of eternity, I doubt anyone would consider that an adequate test of their morality, since it’s either too easy or too unrealistic. (Also if there’s any chance your AI will go insane this would certainly cause it.) Accordingly, in addition to the isolation, you’d want the ability to control their environment, to create a world. But what sort of world would you want to create? It seems self-evident that you’d want to create something that resembled the real world as much as possible. The advantages to this should be obvious. You want to ensure that the AI will act morally in the world we inhabit, with all of the limitations and opportunities that exist in that world. If you create a virtual world that has different limitations and different opportunities, then it’s not a very good test. Also, this setup would present them with all the moral choices they might otherwise have, and you could observe which choices they make; choices are, after all, the essence of morality.

While putting a “baby” AI in a virtual world to see what it does is interesting, it might not tell us very much. And here’s where we return to the embedded law, whether it’s something like the three laws of robotics or whether it’s more like Yudkowsky’s CEV. As I mentioned, regardless of whether you have embedded morality or not you’re going to need to test, but I also can’t think of any reason not to try providing some kind of direction with respect to morality. One could imagine an AI doing all sorts of things if it was placed in a virgin world without any direction, and how could you know whether it was doing those things because it was “evil” or because it didn’t know any better? So, as I said, there’s no reason not to give it some kind of moral guidelines up front.

A discussion of what morality is, and what those guidelines should be, beyond the examples already given, is beyond the scope of this post. But if we assume that some guidelines have been given, then at that point the AI being tested can do one of two things: it can follow the guidelines perfectly or it can violate them. What happens if it violates them? You could make arguments that it would depend on what the guidelines were and how it violated them. You could also make arguments that the AI might be smarter than us and it might have had a very good reason for violating them. And all of these arguments are valid, but the danger of getting it wrong is so great, and the creation of another AI would, presumably, be so easy, that it’s hard to imagine you wouldn’t just get rid of the AI which violated the guidelines, even if the infraction was minor. Also, as Bostrom points out, if we “forgive” the AI, then there’s the danger that it will understand the nature of the test and the consequences of failure. And from that time forward it would act perfectly, not because it’s moral, but because it wants to avoid destruction. In this circumstance the AI hides its true intentions, meaning that we never know what sort of morality it has, and we end up defeating the whole process.

As an aside, when speaking of getting rid of AIs, there’s a whole ethical minefield to grapple with. If we have in fact created sentient AIs then it could certainly be argued that getting rid of them is the equivalent of murder. We’ll come back to this issue later, but I thought I’d mention it while it was fresh.

So that’s how we handle AIs that don’t follow the guidelines, but what do we do with AIs that did follow the guidelines, that were perfect? You may think the solution is obvious, that we release them and give them the godlike power that is their birthright.

Are you sure about that? We are after all talking about godlike power. You can’t be a little bit sure about their morality, you have to be absolutely positive. What tests did you subject it to? How hard was it to follow our moral guidelines? Was the wrong choice even available? Were wrong choices always obviously the wrong choice or was there something enticing about the wrong choice? Maybe something that gave the AI a short term advantage over the right choice? Did the guidelines ever instruct them to do something where the point wasn’t obvious? Did the AI do it anyway, despite the ambiguity? Most of all, did they make the right choice even when they had to suffer for it?

To get back to our central dilemma, really testing for morality, to the point where you can trust that entity with godlike powers, implies creating a situation where being moral can’t have been easy or straightforward. In the end, if we really want to be certain, we have to have thrown everything we can think of at this AI: temptations, suffering, evil, and requiring obedience just for the sake of obedience. It has to have been enticing and even “pleasurable” for the AI to make the wrong choice, and the AI has to have rejected that wrong choice every time despite all that.

One of my readers mentioned that after my last post he was still unclear on the connection to Mormonism, and I confess that he will probably have a similar reaction after this post, but perhaps, here at the end, you can begin to see where this subject might have some connection to religion. Particularly things like the problem of evil and suffering. That will be the subject of the final post in this series. And I hope you’ll join me for it.


If you haven’t donated to this blog, it’s probably because it’s hard. But as we just saw, doing hard things is frequently a test of morality. Am I saying it’s immoral to not donate to the blog? Well if you’re enjoying it then maybe I am.


Returning to Mormonism and AI (Part 1)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Last week, Scott Alexander, the author of SlateStarCodex, was passing through Salt Lake City and he invited all of his readers to a meetup. Due to my habit of always showing up early I was able to corner Scott for a few minutes and I ended up telling him about the fascinating overlap between Mormon theology and Nick Bostrom’s views on superintelligent AI. I was surprised (and frankly honored) when he called it the “highlight” of the meetup and linked to my original post on the subject.

Of course in the process of all this I went through and re-read the original post, and it wasn’t as straightforward or as lucid as I would have hoped. For one, I wrote it before I vowed to avoid the curse of knowledge, and when I re-read it, specifically with that in mind, I could see many places where I assumed certain bits of knowledge that not everyone would possess. This made me think I should revisit the subject. Even aside from my clarity or lack thereof, there’s certainly more that could be said.

In fact there’s so much to be said on the subject, that I’m thinking I might turn it into a book. (Those wishing to persuade or dissuade me on this endeavor should do so in the comments, or you can always email me. The link is in the sidebar, just make sure to unspamify it.)

Accordingly, the next few posts will revisit the premise of the original, possibly from a slightly different angle. On top of that I want to focus in on and expand on a few things I brought up in the original post and then, finally, bring in some new stuff which has occurred to me since then. All the while assuming less background knowledge, and making the whole thing more straightforward. (Though there is always the danger that I will swing the pendulum too far the other way and I’ll dumb it down too much and make it boring. I suppose you’ll have to be the judge of that.)

With that throat clearing out of the way let’s talk about the current state of artificial intelligence, or AI, as most people refer to it. When you’re talking about AI, it’s important to clarify whether you’re talking about current technology like neural networks and voice recognition or whether you’re talking about the theoretical human level artificial intelligence of science fiction. While most people think that the former will lead to the latter, that’s by no means certain. However, things are progressing very quickly and if current AI is going to end up in a place so far only visited by science fiction authors, it will probably happen soon.

People underestimate the speed with which things are progressing because what was once impossible quickly loses its novelty the minute it becomes commonplace. One of my favorite quotes about artificial intelligence illustrates this point:

But a funny thing always happens, right after a machine does whatever it is that people previously declared a machine would never do. What happens is, that particular act is demoted from the rarefied world of “artificial intelligence”, to mere “automation” or “software engineering”.

As the quote points out, not only is AI progressing with amazing rapidity, but every time we figure out some aspect of it, it moves from being an exciting example of true machine intelligence into just another technology.

Computer Go, which has been in the news a lot lately, is one example of this. As recently as May of 2014, Wired magazine ran an article titled, The Mystery of Go, The Ancient Game That Computers Still Can’t Win, an in-depth examination of why, even though we could build a computer that could beat the best human at Jeopardy! of all things, we were still a long way away from computers that could beat the best human at Go. Exactly three years later AlphaGo beat Ke Jie, the #1 ranked player in the world. And my impression was that interest in this event, which only three years earlier Wired had called “AI’s greatest unsolved riddle,” was already fading, with the peak coming the year before when AlphaGo beat Lee Sedol. I assume part of this was because once AlphaGo proved it was competitive at the highest levels everyone figured it was only a matter of time and tuning before it was better than the best human.

Self-driving cars are another example of this. I can remember the DARPA Grand Challenge back in 2004, the first big test of self-driving cars, and at that point not a single competitor finished the course. Now Tesla is assuring people that they will do a coast-to-coast drive on autopilot (no touching of controls) by the end of this year. And most car companies expect to have significant automation by 2020.

I could give countless other examples in areas like image recognition, translation and writing, but hopefully, by this point, you’re already convinced that things are moving fast. If that’s the case, and if you’re of a precautionary bent like me, the next question is, when should we worry? And the answer to that depends on what you’re worried about. If you’re worried about AI taking your job, a subject I discussed in a previous post, then you should already be worried. If you’re worried about AIs being dangerous, then we need to look at how they might be dangerous.

We’ve already seen people die in accidents involving Tesla’s autopilot mode. And in a certain sense that means that AI is already dangerous. Though, given how dangerous driving is, I think self-driving cars will probably be far safer, comparatively speaking. And, so far, most examples of dangerous AI behavior have been, ultimately, ascribable to human error. The system has just been following instructions. And we can look back and see where, when confronted with an unusual situation, following instructions ended up being a bad thing, but at least we understood how it happened, and in these circumstances we can change the instructions, or in the most extreme case we can take the car off the road. The danger comes when they’re no longer following instructions, and when we couldn’t modify the instructions even if they were.

You may think that this situation is a long way off. Or you may even think it’s impossible, given that computers need to be programmed, and humans have to have written that program. If that is what you’re thinking you might want to reconsider. One of the things which most people have overlooked in the rapid progress of AI over the last few years is its increasing opacity. Most of the advancement in AI has come from neural networks, and one weakness of neural networks is that it’s really difficult to identify how they arrived at a conclusion, because of the diffuse and organic way in which they work. This makes them more like the human brain, but consequently more difficult to reverse engineer. (I just read about a conference entirely devoted to this issue.)

As an example, one of the most common applications for AI these days is image recognition, which generally works by giving the system a bunch of pictures and identifying which pictures have the thing you’re looking for and which don’t. So you might give the system 1000 pictures, 500 of which have cats in them and 500 of which don’t. You tell the system which 500 are which, and it attempts to identify what a cat looks like by analyzing all 1000 pictures. Once it’s done you give it a new set of pictures without any identification and see how good it is at picking out pictures with cats in them. So far so good, and we can know how well it’s doing by comparing the system’s results vs. our own, since humans are actually quite talented at spotting cats. But imagine that instead of cats you want it to identify early stage breast cancer in mammograms.
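If it helps to see the shape of that workflow, here is a minimal sketch of the cat/no-cat setup in Python using Keras. Everything in it is a placeholder chosen for illustration: the random arrays stand in for the 1000 labeled pictures, and the tiny network and training settings are just enough to show the train-then-predict pattern, not a recipe for a good classifier.

    # Minimal sketch of supervised image classification: train on labeled
    # pictures, then ask for predictions on pictures the system has never seen.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Placeholder "pictures": 1000 small RGB images, half labeled 1 (cat), half 0 (no cat).
    images = np.random.rand(1000, 64, 64, 3).astype("float32")
    labels = np.array([1] * 500 + [0] * 500)

    model = keras.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # outputs the probability of "cat"
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Learn from the labeled set, then score brand-new, unlabeled pictures.
    model.fit(images, labels, epochs=3, validation_split=0.2)
    new_images = np.random.rand(10, 64, 64, 3).astype("float32")
    predictions = model.predict(new_images)  # values near 1.0 mean "probably a cat"

The part that matters for the argument that follows is the last line: the model hands back a number, and nothing in the trained network offers a human-readable explanation of how it got there.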

In this case you’d feed it a bunch of mammograms and identify which women went on to develop cancer and which didn’t. Once the system is trained you could feed it new mammograms and ask it whether a preventative mastectomy or other intervention is recommended. Let’s assume that it did recommend something, but the doctors didn’t see anything. Obviously the woman would want to know how the AI arrived at that conclusion, but honestly, with a neural network it’s nearly impossible to tell. You can’t ask it, you just have to hope that the system works. Leaving her in the position of having to trust the image recognition of the computer or taking her chances.

This is not idle speculation. To start with, many people believe that radiology is ripe for disruption by image recognition software. Additionally, doctors are notoriously bad at interpreting mammograms. According to Nate Silver’s book The Signal and the Noise, the false positive rate on mammograms is so high (10%) that for women in their forties, with a low base probability of having breast cancer in the first place, if a radiologist says your mammogram shows cancer it will be a false positive 90% of the time. Needless to say, there is a lot of room for improvement. But even if, by using AI image recognition, we were able to flip it so that we’re right 90% of the time rather than wrong 90% of the time, are women going to want to trust the AI’s diagnosis if the only reasoning we can provide is, “The computer said so”?
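For anyone who wants to see where that 90% figure comes from, the short Bayes’ theorem calculation below reproduces it. The base rate and sensitivity are illustrative assumptions in the neighborhood of the figures Silver discusses (roughly 1.4% prevalence for women in their forties and 75% sensitivity), not exact numbers pulled from the book; only the 10% false positive rate comes from the paragraph above.

    # Rough Bayes' theorem check of the mammogram claim. The base rate and
    # sensitivity are assumed, ballpark figures; the false positive rate is
    # the 10% cited above.
    base_rate = 0.014           # assumed prior probability of breast cancer (women in their forties)
    sensitivity = 0.75          # assumed probability the mammogram flags cancer when it is present
    false_positive_rate = 0.10  # probability it flags cancer when it is not present

    p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    p_cancer_given_positive = (sensitivity * base_rate) / p_positive

    print(f"P(cancer | positive mammogram) is about {p_cancer_given_positive:.0%}")
    # Prints roughly 10%, meaning about 90% of positive results are false positives.

The low base rate does all the work here: even a fairly accurate test, applied to a population where the condition is rare, produces mostly false alarms.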

Distilling all of this down, two things are going on. AI is improving at an ever increasing rate, and at the same time it’s getting more difficult to identify how an AI reached any given decision. As we saw in the example of mammography, we may be quickly reaching a point where we have lots of systems that are better than humans at what they do, and we will have to take their recommendations on faith. It’s not hard to see why people might consider this to be dangerous or, at least, scary, and we’re still just talking about the AI technology which exists now; we haven’t even started talking about science fiction level AI, which is where most of the alarm is actually focused. But you may still be unclear on the difference between the two sorts of AIs.

In referring to it as science fiction AI I’m hoping to draw your mind to the many fictional examples of artificial intelligence, whether it’s HAL from 2001, Data from Star Trek, Samantha in Her, C-3PO from Star Wars or, my favorite, Marvin from The Hitchhiker’s Guide to the Galaxy. All of these examples are different from the current technology we’ve been discussing in two key ways:

1- They’re a general intelligence. Meaning, they can perform every purely intellectual exercise at least as well as, or better than, the average human. With current technology all of our AIs can only really do one thing, though generally they do it very well. In other words, to go back to our example above, AlphaGo is great at Go, but would be relatively hopeless when it comes to taking on Kasparov in chess or trying to defeat Ken Jennings at Jeopardy! Though other AIs can do both (Deep Blue and Watson, respectively).

2- They have free will. Or at least they appear to. If their behavior is deterministic, it’s deterministic in a way we don’t understand. Which is to say they have their own goals and desires and can act in a way we find undesirable. HAL being perhaps the best example of this from the list above: “I’m sorry, Dave. I’m afraid I can’t do that.”

These two qualities, taken together, are often labeled as consciousness. The first quality allows the AI to understand the world, and the second allows the AI to act on that understanding. And it’s not hard to see how these additional qualities increase the potential danger from AI, though of the two, the second, free will, is the more alarming. Particularly since if an AI does have its own goals and desires there’s absolutely no reason to assume that these goals and desires would bear any similarities to humanity’s goals and desires. It’s safer to assume that their goals and desires could be nearly anything, and within that space there are a lot of very plausible goals that end with humanity being enslaved (The Matrix) or extinct (Terminator).

Thus, another name for a science fiction AI is a conscious AI. And having seen the issues with the technology we already have you can only imagine what happens when we add consciousness into the mix. But why should that be? We currently have 7.5 billion conscious entities and barring the occasional Stalin and Hitler, they’re generally manageable. Why is an artificial intelligence with consciousness potentially so much more dangerous than a natural intelligence with consciousness? Well there are at least four reasons:

1- Greater intelligence: Human intelligence is limited by a number of things, the speed of neurons firing, the size of the brain, the limit on our working memory, etc. Artificial intelligence would not suffer from those same limitations. Once you’ve figured out how to create intelligence using a computer, you could always add more processors, more memory, more storage, etc. In other words as an artificial system you could add more of whatever got you the AI in the first place. Meaning that even if the AI was never more intelligent than the most intelligent human it still might think a thousand times faster, and be able to access a million times the information we can.

2- Self improving: I used this quote the last time I touched on this subject, but it’s such a good quote and it encapsulates the concept of self-improvement so completely that I’m going to use it again. It’s from I. J. Good (who worked with Turing to decrypt the Enigma machine), and he said it all the way back in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

If you want to continue to use science fiction to help you visualize things, of the science fiction I listed above only Her describes an actual intelligence explosion, but if you bring books into the mix you have things like Neuromancer by William Gibson, or most of Vernor Vinge’s books.

3- Immortality: Above I mentioned Stalin and Hitler. They had many horrible qualities, but they had one good quality which eventually made up for all of their bad qualities. They died. AIs probably won’t have that quality. To be blunt, this is good if they’re good, but bad if they’re bad. And it’s another reason why dealing with artificial consciousness is more difficult than dealing with natural consciousness.

4- Unclear morality: None of the other qualities are all that bad until you combine them with this final attribute of artificial intelligence: they have no built-in morality. For humans, a large amount of our behavior and morality is coded into our genes, genes which are the result of billions of years of evolutionary pressure. The morality and behavior which isn’t coded by our genes is passed on by our culture, especially our parents. Conscious AIs won’t have any genes, they won’t have been subjected to any evolutionary pressure, and they definitely won’t have any parents except in the most metaphorical sense. Without any of those things, it’s very unlikely that they will end up with a morality similar to our own. They might, but it’s certainly not the way to bet.

After considering these qualities it should be obvious why a conscious AI could be dangerous. But even so it’s probably worth spelling out a few possible scenarios:

First, most species act in ways that benefit themselves, whether it’s humans valuing humans more highly than rats, or just the preference that comes from procreation. Giving birth to more rats is an act which benefits rats, even if later the same rat engages another rat in a fight to the death over a piece of pizza. In the same way, a conscious AI is likely to act in ways which benefit itself, and possibly other AIs, to the detriment of humanity. Whether that’s seizing resources we both want, or deciding that all available material (humans included) should be turned into a giant computer.

On the other hand, even if you imagine that humans actually manage to embed morality into a conscious AI, there are still lots of ways that could go wrong. Imagine, for example, that we have instructed the AI that we need to be happy with its behavior. And so it hooks us up to feeding tubes and puts an electrode into our brain which constantly stimulates the pleasure center. It may be obvious to us that this isn’t what we meant, but are we sure it will be obvious to the AI?

Finally, the two examples I’ve given so far presuppose some kind of conflict where the AI triumphs. And perhaps you think I’m exaggerating the potential danger by hand waving this step. But it’s important to remember that a conscious AI could be vastly more intelligent than we are. And even if it weren’t, there are many things it could do if it were only as intelligent as a reasonably competent molecular biologist. Many people have talked about the threat of bioterrorism, especially the danger of a man-made disease being released. Fortunately this hasn’t happened, in large part because it would be unimaginably evil, but also because its effects wouldn’t be limited to the individual’s enemies. An AI has no default reason to think bioterrorism is evil, and it also wouldn’t be affected by the pathogen.

These three examples just barely scratch the surface of the potential dangers, but they should be sufficient to give one a sense of both the severity and scope of the problem. The obvious question which follows is how likely is all of this? Or to separate it into its two components: how likely is our current AI technology to lead to true artificial consciousness? And if that happens, how likely is it that this artificial consciousness will turn out to be dangerous?

As you can see, any individual’s estimation of the danger level is going to depend a lot on whether you think conscious AI is a natural outgrowth of the current technology, whether it will involve completely unrelated technology or whether it’s somewhere in between.

I personally think it’s somewhere in between, though much less of a straight shot from current technology than people think. In fact I am on record as saying that artificial consciousness won’t happen. You may be wondering, particularly a couple thousand words into things, why I’m just bringing that up. What’s the point of all this discussion if I don’t even think it’s going to happen? First I’m all in favor of taking precautions against unlikely events if the risk from those events is great enough. Second, just because I don’t think it’s going to happen doesn’t mean that no one thinks it’s going to happen, and my real interest is looking at how those people deal with the problem.

In conclusion, AI technology is getting better at an ever increasing rate, and it’s already hard to know how any given AI makes decisions. Whether current AI technology will shortly lead to AIs that are conscious is less certain, but if the current path does lead in that direction, then at the rate things are going we’ll get there pretty soon (as in the next few decades.)

If you are a person who is worried about this sort of thing, and there are a lot of them, from well-known names like Stephen Hawking, Elon Musk and Bill Gates to less well-known people like Nick Bostrom, Eliezer Yudkowsky and Bill Hibbard, then what can you do to make sure we don’t end up with a dangerous AI? Well, that will be the subject of the next post…


If you learned something new about AI consider donating, and if you didn’t learn anything new you should also consider donating to give me the time to make sure that next time you do learn something.


The Apocalypse Will Not Be as Cool or as Deadly as You Hope

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Every so often someone reads one of my posts. And sometimes this same person will even talk to me about it afterwards. Most often this happens with something I wrote recently, but every so often it happens with something I wrote quite a while ago. This makes the discussion somewhat difficult because whatever I wrote is fresher in the mind of the individual I’m talking to than it is in my own mind. That difficulty aside I’m delighted that something I wrote several months ago is still being consumed, and I’m overjoyed when they point out an idea, or a refinement or even a criticism that I hadn’t considered. Which is what happened a couple of days ago.

As it turns out, the actual episode he listened to doesn’t matter that much, because the discussion ended up covering a topic which appears in many if not most of my episodes. And while it’s a topic that has made a lot of cameos in previous episodes, after this recent discussion I think it has at last solidified to the point where it’s finally ready for a starring role.

The subject is endings. Whether it’s the ending of death or the ending of civilization, or even the ending of all life on the Earth. But my subject is also to a certain extent about extreme thinking in general. We might even give it the title “Thinking About the Middle is Difficult.” We might, but we didn’t because, whatever its accuracy, that title is kind of lame. The actual title I chose is cooler, as evidenced by the fact that it actually has “cool” in the title.

To begin with, bad events come in different forms. I have, on many occasions, talked about various cataclysms, catastrophes and disasters. All of which are bad, but not equally bad. I’ve talked about a peaceful dissolution of the United States into fiefdoms; I’ve speculated about a jobless future where people have lots of time, but little meaning; I’ve spoken about how hard it is for a nation to remain intact; I’ve dissected global warming; sounded the alarm about nuclear war, and examined the probability of earth being struck by a comet. All of these are significantly different in their impact (no pun intended), but if you’re a fan of the status quo, and are happy going to a job, collecting a paycheck and binging The Big Bang Theory on the weekend, then it’s easy to lump all catastrophes into a single category of, “Well I sure hope that doesn’t happen!” Those who are a little bit more sophisticated might have two categories: catastrophes which are survivable and catastrophes which aren’t survivable. But even here things get fuzzy. Many people automatically equate lots of people dying with everyone dying, but when you’re talking about humanity’s extinction the difference between most and all is entirely the point.

And it is here that I would like to introduce one of the major themes of this post. People want their catastrophes to be simple. They don’t want cataclysms that require numerous demanding sacrifices, but which, for someone with sufficient resources who makes all the right choices, are ultimately survivable. They want cataclysms where it doesn’t matter what they do. They want to be able to sit on the couch and watch the latest episode of Game of Thrones secure in the knowledge that TV, or something better, will be around forever, or, alternatively, they want to know that one day it will all end suddenly and they’ll be dead and free of care without ever having to actually exert themselves in between those two points. Perhaps this portrait of the average individual is a stretch, but if it is, it’s not much of one.

As an example of what I mean by this, let’s look at global warming: if it’s going to be a disaster, people want it to be a true apocalypse, something which scours the Earth of the wickedness of humanity. Though actually, as I already pointed out, this vague longing for global warming to wipe out humanity is really not about whether people are wicked or not, it’s about the fact that it’s far easier to toss up your hands and say, “Well we’re all going to die, and there’s nothing we can do about it,” than it is to really figure out what you should be doing and then do it.

John Michael Greer, who I reference frequently, described the state of people’s thinking about global warming in this way:

It’s a measure of how drastic the situation has become that so many people have fled into a flat denial that anything of the kind is taking place, or the equal and opposite insistence that we’re all going to die soon so it doesn’t matter. That’s understandable, as the alternative is coming to terms with the impending failure of the myth of progress and the really messy future we’re making for those who come after us.

All of which is to say that people don’t really like doing hard things. And they don’t want to consider a messy future, they want a simple future or no future at all, which, as it turns out, is pretty simple. As I have already pointed out, any plan for preventing global warming is ridiculously difficult, meaning, as Greer said, most people default to one of the two extremes. Specifically, there’s no plan which grants the truth of global warming, but also allows us to prevent it, while still continuing to live as we always have. Lots of things are going to continue, just about regardless of what happens, but sitting on the couch, enjoying the bounty of technology: watching TV, being cooled by central air, and distracted by your iPad, is not necessarily one of those things.

What will continue? Or, to frame the question in a form closer to my subject, what won’t end? To begin with, life, almost regardless of the catastrophe, will continue. From the simplest microbe to the most complex water flea (31,000 genes as compared to humans’ 23,000), life is remarkably tenacious. The poster child for this tenacity is the Tardigrade, also known as the water bear. Allow me to quote from Wikipedia:

Tardigrades are one of the most resilient animals known: they can survive extreme conditions that would be rapidly fatal to nearly all other known life forms. They can withstand temperature ranges from 1 K (−458 °F; −272 °C) (close to absolute zero) to about 420 K (300 °F; 150 °C) for several minutes, pressures about six times greater than those found in the deepest ocean trenches, ionizing radiation at doses hundreds of times higher than the lethal dose for a human, and the vacuum of outer space. They can go without food or water for more than 30 years, drying out to the point where they are 3% or less water, only to rehydrate, forage, and reproduce.

You may be thinking that this is all fine and dandy, but it doesn’t matter how tough the tardigrade is; if the Earth itself is destroyed like Alderaan in Star Wars, or more realistically by some giant comet, the tardigrades will perish like all the rest of us. While that would certainly slow things down, it by no means guarantees the end of life. To illustrate my point, one of the big worries about any trip to Mars is contaminating it with earth-based bacteria. Given how tenacious life is, most scientists think Earth-life gaining a foothold on Mars is more a matter of when than if. There are some, in fact, who will allege the exact opposite, that life started on Mars and then spread to Earth. Either way, the point is, many scientists think that life spreading from one planet to another is not only possible, but very likely, particularly when you consider that it has had billions of years in which to do so.

Before leaving this topic, it’s instructive, and interesting, to describe how this sort of thing happens. The details are fascinating enough that they could easily form the basis for a completely separate post. But, essentially, every time there’s a large enough impact, material is flung into space, and if the impact is big enough things can get flung all the way out of the Solar System. You might be skeptical at this point, and that would only be natural. I mean even if material from Earth gets ejected all the way out of the Solar System, how much material is it really, and how much of it would actually end up on an exoplanet as opposed to floating in the interstellar vacuum forever? Because unless you can show that a significant amount actually ends up on another planet, then you’ve just moved the extinction of life from the end of the Earth to the end of the Solar System.

Well, as it turns out some scientists decided to run the numbers with respect to the impact 65 million years ago that wiped out the dinosaurs. And what they found was very interesting.

The scientists wanted to know how much of the ejecta from this impact would have ended up in various locations. Among the locations they looked at were the Jovian moon Europa (a promising candidate for life) and a super-earth orbiting Gliese 581.

For the answer to the first they discovered that almost as much material would have ended up on Europa as ended up on the Moon, because of the assistance Europa would get from Jupiter’s gravity. The scientists estimated that 10^8 (100 million) rocks would have traveled from Earth to Europa as the result of that explosion. But what was more interesting is what they discovered about the Gliese 581 exoplanet. According to their calculations 1,000 Earth rocks would have ended up there, though after a journey of a million years. A million years is a long time, and astrobiologists generally think even the most hardy life can only last 30,000 years, but given all of the above do you really want to bet that life is confined to the Earth and nowhere else?

That ended up being quite the tangent, but the impact/ejecta stuff was too interesting to leave out. The big thing I wanted to get across is that whatever the cataclysm it probably won’t wipe out all life. Also given its frequent appearance in this space I should also point out that this is another reason why Fermi’s Paradox is so baffling. (Well not for me.) All of this is to say that even if all life on the Earth is completely destroyed, whether in 5 billion years when the Sun expands or in 7.5 billion years when it engulfs the Earth or whether Earth’s surface gets completely sterilized by a high energy gamma ray burst, life will find a way, as they say.

But when people imagine apocalypses what they are mostly worried about is the end of all humans, not the end of all life, and admittedly humans are not as resilient as the tardigrades. Unlike them we can’t handle hard vacuum, or temperatures from 300 °F to −458 °F. Even so, humanity is a hardy species, with lots of tools at its disposal. Humans have survived ice ages and supervolcanoes, and that was when the most technologically advanced tools we had were fur clothes and flint spears. Now we have vast amounts of knowledge, and underground bunkers, and seed vaults, and guns and nuclear power. Of course the last item is a double-edged sword, because in addition to (relatively) clean power it has also given us very dirty weapons.

In the past I have used nuclear war as something of a shorthand for THE apocalypse, as an event which would mark the end of current civilization. And consequently you may have gotten the impression that I was saying that nuclear war would mean the end of humanity. If you did get that impression I apologize. What I intended to illustrate was that large enough disasters are just singularities of another sort.

As you may recall, when people started to use the word singularity in this context, they were borrowing an idea from astrophysics, specifically the idea that you can’t see past the event horizon of a black hole, the singularity being what lies at the center of a black hole. And this is mostly what nuclear war is, something past which it’s impossible to predict. But while it’s the case that a post-nuclear world would be difficult to imagine, there are some things we can say about it, and one of them is that humanity would survive. It might be only a small percentage of humanity, which makes it an inconceivable tragedy, but nuclear war all by itself would not mean the end of our species. As I said, you may have gotten a different impression in previous posts, and if so I apologize, mostly I was just using it as shorthand for a very, very bad thing. In fact if you only take one thing from this post it should be this: nuclear war would not mean the end of humanity.

Why is this important? Because it’s another example of the same thing we saw with global warming, people assume that it’s either not going to happen or that if it does they’re going to be dead so it doesn’t matter.

Though, there also seems to be a third group who feel like if it does happen and they do survive that it will be awesome. That it will be one long desert chase scene involving impossibly cool cars with flame-throwing, double-necked guitar players attached to the front, like in Mad Max. Or that it will involve lots of guns and zombies, like in The Walking Dead. Or perhaps that it will be some sort of brutal, all-encompassing dictatorship, like in Nineteen Eighty-Four. But if it is, they always imagine that they would be part of the resistance. What they don’t imagine, as part of any apocalypse, is slowly sinking into despair and eventually overdosing on heroin. Or standing in long bread lines, waiting for a small amount of food, with no guns or glamorous resistance fighters anywhere to be found. Or unemployment at above 20%, and being homeless and hungry. And of course all of these things have already happened or are happening in decidedly non-apocalyptic situations. It’s sheer madness to assume that things would be better during an actual apocalypse. But once again, people assume it either won’t happen or they’ll be dead, not that they’ll have to wake up every day with an empty stomach, not knowing where their next meal is going to come from.

You may have noticed that this post has been largely free of discussions of specific apocalypses or catastrophes. And in that way I have contributed to the problem I’m trying to solve, though in my defense I’m going to say that it was done intentionally to illustrate the point. Also there are so many potential unforeseeable disasters that the ones we can name and describe might not be the ones to worry about. But perhaps, as we’ve been discussing it, you can see that visions of the future end up in one of three categories. Either the future will be awesome, or it will basically be the same (TV, couches and central air will all still exist), or the world will end, and we’ll all be dead. What this post is trying to point out is that far more likely than the world ending suddenly and irrevocably, is the world continuing, but going through some kind of crisis. Whether that’s temporary or long term, whether it’s nuclear war, or something like the 2007 crisis, in all of these situations there would be a good chance you would survive. Even in a nuclear war you would have a better than even chance of surviving. The question is not would you survive, but how long would you survive. My go-to disaster book, Global Catastrophic Risks, illustrates the point in a quote about the effects of an all-out war on America.

In addition to the tens of millions of deaths during the days and weeks after the attack there would probably be further millions (perhaps further tens of millions) of deaths in the ensuing months or years. In addition to the enormous economic destruction caused by the actual nuclear explosions, there would be some years during which the residual economy would decline further, as stocks were consumed and machines wore out faster than recovered production could replace them… For a period of time, people could live off supplies (and in a sense, off habits) left over from before the war. But shortages and uncertainties would get worse. The survivors would find themselves in a race to achieve viability… before stocks ran out completely. A failure to achieve viability, or even a slow recovery, would result in many additional deaths, and much additional economic, political, and social deterioration. This postwar damage could be as devastating as the damage from the actual nuclear explosions.

Notice that this assessment is not just a repeat of Private Hudson’s quote from Aliens (“That’s it, man. Game over, man. Game over!”), but rather a very sober assessment which points out that a lot of people would live and it would be really horrible.

In bringing this up, my primary point is not that people are inadequately prepared for the eventuality that they might survive a nuclear war, preferring instead to believe that they would be instantly killed, though that statement is certainly true. My primary point is that people are equally unprepared for smaller catastrophes.

I started this post off by mentioning a conversation I had had with a friend, and a desire to talk about endings in general. In that conversation we didn’t talk about the end of life, or the end of humanity, we talked about the eventual end that comes to us all, death. Which even more than the end of life or the end of humanity is an end everyone should be worried about.

Specifically the conversation was about how non-religious people dealt with death, and he, being a non-religious person, claimed that they don’t ignore it, that most of them have come to terms with the fact that they are eventually going to die. I replied that I was unconvinced. That if this was really the case, how many had taken some concrete action to illustrate that fact? If I were to survey the non-religious and ask them whether they had come to terms with their mortality, how many would say yes? If I then asked all of those people whether they had life insurance, or a graveyard picked out, or a living will, how many of the group who answered yes to the first question would answer yes to the second? And maybe that list is too bourgeois for my non-religious friends, particularly those who don’t have any dependents to worry about, but if they don’t like that list what other concrete evidence would they offer to show that they had really grappled with death, other than just saying that they had?

It’s easy to say, whether it’s the possibility of our own death, or the possibility of global warming or nuclear war, “Well, it won’t matter, I’ll be dead.” And perhaps with the first that’s true, but that doesn’t mean you’ve come to terms with it. Also, there are of course lots of catastrophes and mini-apocalypses which could happen which won’t kill you, and if your view of the future is limited to: it’s going to be awesome, it’s going to be the same, or I’m going to be dead, then I think there’s a good chance you’re going to be very alive, and very disappointed.


You know what’s not disappointing? Donating to this blog. I can personally vouch that several people who’ve done it have described it being followed by a warm satisfied glow. Though it may have been indigestion. Apparently my blog causes both.


Is Social Media Making Unrest Worse?

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


The other day I was talking with a friend of mine and he mentioned how crazy his Twitter feed was these days. According to him, it’s completely dominated by people yelling at each other. From the description, the Trump tornado is a big part of it, but it’s not just that. As he described it, he’s seeing a lot of left-on-left yelling as well.

I’m not really on Twitter much (though perhaps I should be.) But his description of things certainly mirrors my impression of the state of dialogue in the country. And of course it’s not just Twitter, it’s all over Facebook, and Youtube and essentially any place with comments or user-generated content.

Once we decide that this state of affairs deserves a closer examination, then, as is usually the case, we can approach it from several different perspectives. First we can decide that there's nothing to worry about. That this is the same sort of factionalism which has always existed, and that it's not even a particularly extreme example. People have, after all, been disagreeing with one another for as long as there have been people, and even the slightest amount of historical knowledge reveals times in our nation's past when things were much, much worse. As examples of this, in my last post, I mentioned the social unrest of the late 60s/early 70s along with the enormous factionalism which preceded the Civil War. And these aren't the only two examples in our nation's short history. As it turns out, despite the rosy view we have of the country's founders, things were a lot more acrimonious back then as well. If you have studied the battles between the Republicans and the Federalists, and specifically between Jefferson and Hamilton, you know they make Clinton vs. Trump look like amateur hour.

In other words, there is a reasonable case to be made that we're overreacting, that the nation has weathered worse division than this and survived, and that however much hate and anger exist, it's manageable and unlikely ever to tip over into large-scale violence. And, as reasonable as this case is, I don't see very many people advocating for it. Partially this is because some of us (myself included) are natural Chicken Littles: we want to believe that the sky is falling and that the political anger we're seeing is something new and terrifying, and this makes us disinclined to be reasonable. This is a second perspective, the perspective of looming civil war.

But the Chicken Littles and the doom-mongers are the minority. Far more people aren’t focused on the divisions at all. They have a completely different way of looking at things, a third perspective to add to our list. From this perspective they’re not focused on the anger, and they’re not focused on the divisions because they’re creating the anger and divisiveness. And they know that their anger is a righteous anger, and that their divisions are only dividing the pure from the wicked.

From this perspective we’re experiencing extreme conditions, but they have nothing to do with not getting along, or with an impending civil war, and everything to do with Trump supporters and the alt-right and white nationalists clinging to their privileged status (or their guns and religion.) At least for some people. For some others, the problem is the pampered social justice warriors who can’t stand the fact that Trump won, and who especially can’t stand what that says about the world they thought they were living in. And who are, furthermore, unduly fixated on achieving justice for imagined crimes.

As I mentioned, both sides are angry, but from this perspective there’s pure anger and there’s wicked anger, and all the anger on your side is justified, and all the anger on the other side is an extreme overreaction.

For people operating under this third perspective, yes, the current level of hatred we're seeing is alarming, but if we manage to get rid of Trump in 2020, or if he's impeached or removed from office under the 25th Amendment, then things will go back to normal. Alternatively, if we just stop pampering these college kids, then they'll wake up and realize that they have pushed things too far, that society can't be perfectly fair, and that attempts to make it so only end up causing worse problems than the ones they hope to solve.

They share the perspective of the Chicken Littles in believing it’s bad, but, for them, this badness exists entirely on the other side. It’s all the fault of Trump, or Obama or Clinton, or the Globalists, or the rich or the immigrants, or any of a hundred other individuals and organizations. And if we could just get those people to see the light or to go away. Or in the most extreme cases, if we could just line them all up against the wall at the start of the glorious revolution and shoot them, then everything would be fine.

I’m skeptical about any explanation which lays all the blame on one side or the other. And even if it were true, getting rid of one side is only possible through something resembling the glorious revolution. Thus I’m inclined to dismiss the last perspective as being both naive and, even aside from it’s naivety, offering no practical prescription. The first perspective, that the current social unrest is no big deal, has a lot going for it. And that’s precisely what we should all hope is going on, but even if it is, there’s very little downside to trying to cool things down even if they’ll cool down on their own eventually. Which places us in a situation very familiar to readers of this blog: The wisest course of action is to prepare for the worst, even while you hope for the best. Meaning that even if I get branded as Chicken Little, I will still advocate for treating the current unrest seriously, and as something which has the potential to lead to something a lot worse.

If, as I have suggested, we prudently decide to act as if things are serious and conceivably getting worse, the next question becomes: why are they getting worse? Of course, before we continue it should be pointed out that the other perspectives have their own answers to this question. In the case of the first, things aren't actually getting worse; in the case of the last, they are, but the culprits are obvious (though very different depending on which side you're on.) But I've staked out a position of saying that things are getting worse, and that no one group is an obvious scapegoat, which leaves the question squarely in front of us: why?

Having chosen to act as if the current unrest is historically significant, something with the potential to equal or even eclipse the unrest of the late 60s/early 70s, we should be able to identify something which also equals or exceeds the past causes of unrest. During the Civil War it was slavery. During the late 60s/early 70s there was Vietnam and Civil Rights. Whatever the current rhetoric, we don't have anything close to the Vietnam War or the civil rights violations of 50 years ago, to say nothing of slavery. So if the injustice is objectively less severe, how do I get away with claiming that the unrest might get just as bad, if not worse? All of this boils down to the question, what contributing factors exist today which didn't exist back then? And here we return to my friend's Twitter feed. Why is it so acrimonious?

You might start by assuming that the problem is with the users, or perhaps with Twitter itself. But as I already mentioned, this same sort of thing is also a problem on Facebook, and as far as the users go, have people really changed that much in the last few decades? Probably not.

In my last post I mentioned a recent podcast from Dan Carlin. His primary topic was the unrest itself, and whether there was the potential for a new civil war. But he made another point which really struck me. Carlin, much like myself, is very interested in comparing and contrasting the current unrest with the unrest of the late 60s/early 70s. And he brought up a key difference between now and then. Back then you could call in the presidents of the three major networks and suggest that they avoid covering certain stories or saying certain things on the nightly news, and if all three of them agreed (which they very well might), then with a single meeting you had some chance of influencing the narrative for the entire nation.

Obviously this is something of an oversimplification, but Carlin points out the undeniable difference between now and then. Even if you expanded that hypothetical meeting to include the top 500 people in media, getting everyone from Roger Ailes (assuming he were still alive) to Mark Zuckerberg, and even if you could get all 500 of them to agree on something, your overall impact on what people saw and heard would still be less than with those three people back in the Nixon era. Which is to say, when it comes to what people see and hear, the last election demonstrated that the media landscape, especially the social media landscape, is now vastly more complicated.

I admit up front that it would be ridiculous to blame social media for all of the unrest, all of the hate, all of the rage, and all of the factionalism we're currently seeing. But it would be equally ridiculous not to discuss it at all, since it has indisputably created an ideological environment vastly different from any which has existed previously.

Victor Hugo said, “Nothing is stronger than an idea whose time has come.” (And I am aware that is a very loose translation of the original.) I agree with this, but is it possible that social media artificially advances the “arrival” of an idea? Gives ideas a heft and an urgency out of proportion to their actual importance?

To illustrate what I mean let’s imagine a tiny medieval village of say 150 people. And let’s imagine that one of the villagers comes to the conclusion that he really needs to rise up in rebellion and overthrow the king. But that he is alone in this. The other 149 people, while they don’t like the king, have no desire to go to all the trouble and risk of rising up in rebellion. In this case that one guy is probably never even going to mention his desire to overthrow the king, let alone do anything about it. Because that would be treason, which was one of the quicker ways to end up dead (among many back then.)

For any given villager to plot against the king he needs to find other people to plot with. How this happens, and the subtle signals that get exchanged when something is this dangerous is a whole separate subject, but for now it suffices to say that if a villager is going to join into some kind of conspiracy he has to be convinced that there’s enough like-minded people to take the idea from impossible to “if we’re extraordinarily lucky”. You might call this the minimum standard for an idea’s “arrival”.

For the sake of argument, let's say that our hypothetical villager is going to want at least 10% of his fellow villagers to also harbor thoughts of overthrowing the king, just to get to the point where he doesn't think it's impossible. And, further, given the danger attached to the endeavor, he'd probably want the inverse of that, and know that 90% of his fellow villagers were on his side, before he decided to do something as risky as rising up in rebellion.

Which means our villager needs 15 people before he even entertains the idea that it's not just him, and 135 before actually drawing his sword. The actual numbers are not that important; what's important is the idea of social proof. Everyone, particularly when engaged in risky behavior, has a threshold for determining whether they're deluding themselves and a higher threshold for determining whether they should act. For 99.9% of human history these thresholds were set by the opinions of the small circle of people in our immediate vicinity, and 135 people might constitute 90% of everyone you come in contact with. But humans don't do percentages, so no one is thinking, "What does 90% of everyone believe?" They're thinking, "Do I know 15, or at the extreme end 135, people who think the way I do?" Social media, as might have been expected, has changed the standards of social proof, and it's now much easier to find 15 or even 135 people who will agree with nearly anything. If 15 other people think the same way you do, you go from thinking you're crazy to thinking you're normal, but an outlier. And if 135 people feel the same way you do, then you're ready to storm the barricades.
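To make the arithmetic concrete, here's a minimal sketch in Python. Only the 150/15/135 numbers come from the example above; the 1% "fringe belief" rate, the 3.7 billion figure for the internet, and the function names are purely illustrative assumptions, not anything from a real dataset.

```python
# A rough sketch of the villager's social-proof thresholds (illustrative only).
# The thresholds are absolute head counts, because "humans don't do percentages."

ENTERTAIN_THRESHOLD = 15   # like-minded people needed to think "I'm not crazy"
ACT_THRESHOLD = 135        # like-minded people needed before drawing a sword

def like_minded_contacts(reachable_people: int, belief_rate: float) -> int:
    """Expected number of like-minded people someone can actually find."""
    return int(reachable_people * belief_rate)

# Compare a 150-person village with the reachable population online,
# holding the rarity of the belief (1%, an assumed figure) constant.
for label, reachable in [("medieval village", 150), ("social media", 3_700_000_000)]:
    found = like_minded_contacts(reachable, belief_rate=0.01)
    print(f"{label}: finds {found:,} like-minded people -> "
          f"entertains the idea: {found >= ENTERTAIN_THRESHOLD}, "
          f"ready to act: {found >= ACT_THRESHOLD}")
```

Run it and the village turns up a single like-minded person, well short of both thresholds, while the online pool turns up tens of millions, clearing both easily, even though the belief is exactly as rare in each case. That gap is the whole point: the thresholds are head counts, and the internet makes any head count reachable.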

Fast forward to now, and let's say that you think the Sandy Hook shooting was faked, that it was a false flag operation or something similar. (To clarify, I do not think this.) In the past you might not have even heard of the shooting, and even if you had, and then for some reason decided it had been faked, you'd be hard pressed to find even one other person who would entertain the idea that it might have been staged. If, despite all this, you were inclined to entertain that idea, then faced with the lack of any social proof, or of anyone else who believed the same thing, you would have almost certainly decided in the end that you were, at best, mistaken, and at worst crazy. But using the internet and social media you can find all manner of people who believe that it was fake, and consequently get all the social proof you need.

Certainly it’s one thing to decide a crazy idea is not, in fact crazy. As is the case with the Sandy Hook conspiracy theories. Holding an incorrect opinion is a lot different than acting on an incorrect opinion. To return to our example villager, you could certainly argue that in the past, kings were deposed too infrequently, that certain rulers were horrible enough that the benefits for rebellion might have been understated by just looking to those around you for social proof. In other words if you want to say that in the past people should have acted sooner, I could see that being possible, but has social media swung things the other way, so that now, rather than acting too slowly, we’re acting too precipitously? Are we deposing kings too soon?

Bashar al-Assad and the Syrian Civil War are good illustrations of what I mean by this. Assad is indisputably a really bad guy, but when you consider the massive number of people who have died and the massive upheaval that has taken place, is it possible that social media, and the internet more generally, made the entire enterprise appear to have more support than it actually did? As a narrower example of this, for a long time the US was dedicated to helping out secular, moderate rebels, who turned out to have a large online presence but very little presence in reality, another example of distorted social proof.

None of this is to say that the Syrian Civil War hasn't been horrible, or that Assad isn't a bad guy who should have just stepped down. But we have to deal with things as they are, not as we wish them to be, or as a view distorted through the lens of social media portrays them to be. And it's not just Syria; social media played a big role in all of the major Arab Spring uprisings, and things didn't work out well for any of them, with the possible exception of Tunisia.

Perhaps you think that I'm going too far by asserting that social media caused the Arab Spring uprisings to begin prematurely, leading to a situation objectively worse than the status quo. But recall that things are demonstrably worse in most of the Arab Spring countries (certainly in Libya, Syria, Iraq and Yemen), and not noticeably better in the rest. Meaning the truth of my assertion rests entirely on determining the role played by social media. If it hastened things or gave people a distorted view of the level of support for change (which I think there's strong evidence for), then it represents evidence of social media leading to greater unrest, greater violence, and a worse overall outcome.

Social media is a technology, and a rather recent one at that (recall that Facebook is only 13 years old). And anytime we discuss potentially harmful technology, one useful thing we can do is take the supernormal stimulus tool out of the bag to see if it fits. As you may recall, one of the key examples of supernormal stimuli is birds who prefer larger eggs, to such an extent that they prefer artificial eggs almost as large as themselves over their natural eggs. If social media represents some form of larger, artificial egg when it comes to interacting, if people are starting to prefer interacting via social media over interacting face to face, how would that appear? Might it be manifested by stories about teenagers checking their social media accounts 100+ times a day? Or (from the same article) claiming that they'd rather go without food for a week than have their phone taken away? Or the 24% of teens who are online almost constantly? But wait, you might say, didn't I read an article that teenagers still prefer face-to-face communication? Yes, by 49%, but it's also important to remember that, other than the telephone (at 4%), all of the other choices didn't exist 20 years ago. Which means that face-to-face interaction used to be at 96%, and that it has fallen to 49%.

Obviously it might be a stretch to call social media a supernormal stimulus, but, to return to our hypothetical villager, I don't think it's a stretch to imagine that there are some things we select for when socializing with 150 people we all know personally which don't scale up to socializing with the 3.7 billion other people on the internet.

In conclusion, to go all the way back to the beginning, I think the case for social media being the ultimate cause of the recent unrest is mixed at best. That said, we do know that anonymity causes incivility, that social media appears to cause depression, loneliness, and anxiety, and that, anecdotally, things are pretty heated out there. But if you're tempted to think that social media isn't contributing to the unrest, consider the reverse hypothesis: that social media has created the new dawn of understanding and cooperation its advocates insisted it would. That social media is a uniting force, rather than a dividing force. That social media makes friendships better and communities stronger. Whatever the evidence for social media's harm, the evidence for its benefits is even thinner. In an age where connectivity has made it easier to harass people, to swat them, and to publicly shame them to a degree unimaginable before the internet age, where is the evidence that social media is decreasing divisiveness? That it is healing the wounds of the country, rather than opening them even wider?

All of this is to say that this is another example of a situation where we were promised that a new technology would make our lives better, that it would lead to an atmosphere of love and understanding, that, in short, it would save us. And once again technology has disappointed us; if anything, in this case it has made the problem it purported to solve even worse.

As I have pointed out repeatedly, we’re in a race between a technological singularity and a catastrophe. And in this race, it would be bad enough if technology can’t save us, but what if it’s actually making the problem worse?


I know I just spent thousands of words arguing that social media is bad, and that blogs are a form of social media, but you can rest assured that this is a good blog. It's all the other blogs out there that are evil. And based on that assurance, consider donating; you definitely don't want to be up against the wall when the revolution comes.