

This last Saturday I was hanging out with a friend of mine whom I don’t see very often. This friend has a profound technical interest in AI and has spent many years working on it, though not in any formal capacity. That said, he’s very smart, and my assumption would be that his knowledge runs at least as deep as mine, if not much deeper. (Though I don’t think he’s spent much time on the philosophy of AI, in particular AI risk.) In short, I don’t think I’m exaggerating to call AI a long-term obsession of his.

Part of the reason for this is that he thinks that general AI, a single AI that can do everything a human can do, is only about 10 years away, and if he wants to make his mark he has to do it now. This prediction of 10 years is about as optimistic as it gets (and indeed it’d be hard to compress the task into much less time than that). If you conduct a broader survey of experts and aggregate their answers, Human Level Machine Intelligence is more likely than not to be developed by 2060. Though there are certainly AI experts at least as optimistic as my friend and, on the other hand, some who basically think it will never happen. In fact, this might be a good description of the situation, given that some of the data indicates there’s a bimodal distribution in attitudes: lots of people thinking it’s just around the corner, a lot thinking it’s going to take a very long time (if it ever happens), and few people in the middle.

(Interestingly there are significant cultural differences in predictions with the Chinese average coming in at 2044 and the American average coming in at 2092.)

Just recently, and as promised, I finished Robin Hanson’s book The Age of Em: Work, Love and Life When Robots Rule the Earth, and this whole discussion of AI probability is an important preface to any discussion of Hanson’s book, because Hanson belongs to that category of people who think that human level machine intelligence is a long way off. And that well before we figure out how to turn a machine into a brain, we’ll figure out how to turn a brain into a machine. Which is to say, he thinks we’ll be able to scan a brain and emulate it on a computer long before we can make a computer brain from scratch.

This idea is often referred to as brain uploading, and it’s been a transhumanist dream for as long as the concept has been around, though normally it sits together with AI in the big-bucket-of-science-fiction-awesomeness we’ll have in the future without much thought being given to how the two ideas might interact or, more likely, be in competition. One of Hanson’s more important contributions is to point out this competition, and pick brain emulation, or “ems”, for short, as the winner. Once you’ve picked a winner, the space of possible futures greatly narrows to the point where you can make some very interesting and specific predictions. And this is precisely what the Age of Em does. (Though perhaps with a level of precision some might find excessive.)

Having considered Hanson’s position, my friend’s position, and the generic transhumanist position, we are left with four broad views of the future (the fourth of which is essentially my position).

First, the position of the AI optimists, who believe that human level machine intelligence is just a matter of time, that computers keep getting faster, algorithms keep getting better, and the domain of things which humans can do better than computers keeps narrowing. I would say that these optimists are less focused on exactly when the human intelligence finish line will be crossed and more focused on the inevitability of crossing that line.

Second, there’s the position of Hanson (and I assume a few others) who mostly agree with the above, but go on to point out (correctly) that there are two races being run. One for creating machine intelligence and one for successfully emulating the human brain. Both are singularities, and they’re betting that the brain emulation finish line is closer than the AI finish line, and accordingly that’s the future we should be preparing for.

Third, there’s the generic transhumanist position, which holds that some kind of singularity is going to happen soon, and when it does it’s going to be awesome, but which has no strong opinion on whether it will be AI, brain emulation, or some third thing (extensive cybernetic enhancement? Unlimited free energy from fusion power? Aliens?)

Finally there are those people, myself included, who think something catastrophic will happen which will derail all of these efforts. Perhaps, to extend the analogy, clouds are gathering over the race track, and if it starts to rain all the races will be canceled even if none of the finish lines have been reached. As I said this is my position, though it has more to do with the difficulties involved in these efforts, than in thinking catastrophe is imminent. Though I think all three of the other camps underestimate the chance of catastrophe as well.

The Age of Em is written to explain and defend the second case. Let’s start our discussion of it by examining Hanson’s argument that we will master brain emulation before we master machine intelligence. I was already familiar with this argument having encountered it in the Age of Em review on Slate Star Codex, which was also the first time I heard about the book. And then later, I heard the argument, in a more extended form when Robin Hanson was the keynote speaker at the 2017 Mormon Transhumanist Association Conference.

Both times I felt like Hanson downplayed the difficulty of brain emulation, and after hearing him speak I got up and asked him about the OpenWorm Project, where they’re trying to model the brain of the C. elegans roundworm, which has a nervous system of only 302 neurons, so far without much success. Didn’t this indicate, I asked, that modeling the human brain, with its 100 billion neurons, was going to be nearly impossible? I don’t recall exactly what his answer was, but I definitely recall being unsatisfied by it.

Accordingly, one of the things I hoped to get out of reading the book was a more detailed explanation of this assumption, and in particular why he felt brain emulation was closer than machine intelligence. In this I was somewhat disappointed. I wouldn’t say that the book went into much more detail than Hanson did in his presentation. I didn’t come across any arguments about emulation in the book which Hanson left out of his presentation. That said, the book did make a much stronger case for the difficulties involved in machine intelligence, and I got a much clearer sense that Hanson isn’t so much an emulation optimist as he is an AI pessimist.

Since I started with the story of my friend, the AI optimist, it’s worth examining why Hanson is so pessimistic. I’ll allow him to explain:

It turns out that AI experts tend to be much less optimistic when asked about the topic they should know best: the past rate of progress in the AI subfield where they have the most expertise. When I meet other experienced AI experts informally, I am in the habit of asking them how much progress they have seen in their specific AI research subfield in the last 20 years. A median answer is about 5-10% of the progress required to reach human level AI.

He then argues that taking the past rate of progress and extending it forward is a better way of making estimations than having people make wild guesses about the future. And, that using this tactic, we should expect it to take two to four centuries before we have human level machine intelligence. Perhaps more, since getting to human level in one discipline does not mean that we can easily combine all those disciplines into fully general AI.

Though I am similarly pessimistic, in my friend’s defense I should point out that Age of Em was published in 2016, and thus almost certainly written before the stunning accomplishments of AlphaGo and some of the more recent excitement around image processing, both of which may now be said to be “human level”. It may be that, after several eras of AI excitement inevitably followed by AI winters, spring has finally arrived. Only time will tell. But my personal opinion is that there is still one more winter in our future.

I am on record as predicting that brain emulation will not happen in the next 100 years, and Hanson isn’t much more optimistic than I am: he predicts it might take up to 100 years, and the only reason he expects it before AI is that he expects AI to take 200-400 years. Meaning that in the end my actual disagreement with Hanson is pretty minor. Also I think that the skies are unlikely to remain dry for another 100 years, which means neither race will reach the finish line…

I should also mention that between seeing Hanson’s presentation at the MTA conference and now, my appreciation for his thinking has greatly increased, and I was glad to find that on the issue of emulation difficulty we were more in agreement than I initially thought. Which is not to say that I don’t have my problems with Hanson or with the book.

I think I’ll take a short detour into those criticisms before returning to a discussion of potential futures. The biggest criticism I have concerns the length and detail of the book. Early on he says:

The chance that the exact particular scenario I describe in this book will actually happen just as I describe it is much less than one in a thousand. But scenarios that are similar to true scenarios, even if not exactly the same can still be a relevant guide to action and inference. I expect my analysis to be relevant for a large cloud of different but similar scenarios. In particular, conditional on my key assumptions, I expect at least 30% of the future situations to be usefully informed by my analysis. Unconditionally I expect at least 10%.

To begin with, I think the probabilities he gives suffer from being too confident, and he may be, ironically, doing something similar to AI researchers, whose guesses about the future are more optimistic than a review of past performance would indicate. I think if you looked back through history you’d be hard pressed to name a set of predictions made a hundred years in advance which would meet his 10% standard, let alone his 30% standard. And while I admire him for saying “much less than one in a thousand,” he then goes on to spend a huge amount of time and space getting very detailed about this “much less than one in a thousand” prediction. An example:

Em stories predictably differ from ours in many ways. For example, engaging em stories still tell morality tales, but the moral lessons slant toward those favored by the em world. As the death of any one copy is less of a threat to ems, the fear of imminent personal death less often motivates characters in em stories. Instead such characters more fear mind theft and other economic threats that can force the retirement of entire subclans. Death may perhaps be a more sensible fear for the poorest retirees whose last copy could be erased. While slow retirees might also fear an unstable em civilization, they can usually do little about it.

This was taken from the section on what stories will be like in the Age of Em, from the larger chapter on em society. And hopefully it gives you a taste of the level of detail Hanson goes into in describing this future society, and the number of different subjects he covers while doing so. As a setting bible for an epic series of science fiction novels, this book would be fantastic. But as just a normal non-fiction book one might sit down to read for enlightenment and enjoyment, it got a little tedious.

That’s basically the end of my criticisms, and actually there is a hidden benefit to this enormous amount of detail. It not only describes a potential em society with amazing depth, it also sheds significant light on the third position I mentioned at the beginning: the vague, everything’s-going-to-be-cool transhumanist future. Hanson’s level of detail provides a stark contrast to the ideology of most transhumanists, who have a big-bucket-of-science-fiction-awesomeness that might happen in the future but little in the way of a coherent vision for how its contents fit together, or whether, as Hanson points out in the case of ems vs. AIs, they even can fit together.

Speaking of big-bucket-of-science-fiction-awesomeness, and transhumanists, I already mentioned Hanson’s keynote at the MTA Conference, and while I hesitate to speculate too strongly, I suspect most MTA members did not think Hanson’s vision of the future was quite as wonderful or as “cool” as the future they imagine. (For myself, as you may have guessed, I came away convinced that this wasn’t a scenario I could ignore, and resolved to read the book.) But of course it could hardly be otherwise. Most historical periods (including our own) seem pretty amazing if you just focus on the high points, it’s when you get into the details and the drudgery of the day to day existence that they lose their shine. And for all that I wish that Hanson had spent more time in other areas (a point I’ll get back to) he does a superlative job of extrapolating even the most quotidian details of em existence.

In further support of my speculation that the average MTA member was not very excited about Hanson’s vision of the future, at their next conference, a year later, the first speaker mentioned Age of Em as an example of technology going too far in the direction of instrumentality. You may be wondering what he meant by that, and thus far, other than a few hints here and there, I haven’t gone into too much detail about what the Age of Em future actually looks like. I’ll only be able to give the briefest of overviews here, but as it turns out much of what we imagine about an AI future applies equally well in an em future. Both AIs and ems share the following broad features:

  1. They can be sped up: Once you’re able to emulate a human brain on a computer you can always turn the speed up. Presumably this would make the “person” being emulated experience time at that new speed. By speeding up the most productive ems, you could get years of work done every day. Hanson suggests the most common speed setting might be 1000 to 1, meaning that for every year of time which passes for normal humans, a thousand subjective years would pass for the most productive ems.
  2. They can be slowed down: You can do the reverse and slow down the rate at which time is experienced by an em. Meaning that rather than ever shutting down an em, you could put them into a very cheap “low resource state”. Perhaps they only experience a day for every month that passes for a normal human. Given how cheap this would be to maintain you could presumably keep these ems “alive” for a very long time.
  3. They can be copied: Because you can copy a virtual brain as many times as you want, not only can you have thousands if not millions of copies of the same individual, you’re also going to only choose the very “best” individual to copy. This means that the vast majority of brain emulations may be copies of only a thousand or so of the most suitable and talented humans.
  4. Other crazy things: You could create a copy each day to go to “work” and then delete that copy at the end of the day, meaning that the “main” em would experience no actual work. You could take a short break, but by turning up the speed make that short break into a subjective week long vacation. You could make a copy to hear sensitive information, allow that copy to make a decision based on that information, then destroy the copy after it had passed the decision along. And on and on.
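The speed arithmetic in points 1 and 2 is simple enough to sketch. This toy snippet is mine, not Hanson’s; only the 1000-to-1 speedup and the day-per-month retirement rate come from the discussion above, and everything else is just unit conversion:

```python
def subjective_days(objective_days, speedup):
    """Days experienced by an em running at `speedup` times real time."""
    return objective_days * speedup

# A fast em at 1000:1 packs 1000 subjective days (nearly three
# subjective years) into each objective day:
fast_day = subjective_days(1, speedup=1000)

# One objective year at that speed is a subjective millennium:
fast_years = subjective_days(365, speedup=1000) / 365

# A slowed retiree experiencing roughly one day per objective month:
slow_month = subjective_days(30, speedup=1 / 30)
```

At 1000:1, the “every morning there’s a new history book” line later in this post falls out directly: each of our days holds almost three years of em history.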

Presumably at this point you have a pretty good idea of what the MTA speaker meant by going too far in the direction of instrumentality. Also since culture and progress are going to reside almost exclusively in the domain of the speediest ems, chosen from only a handful of individuals, it’s almost certain that no matter how solid your transhumanist cred, you’re going to be watching this future from the sidelines. (And actually even that analogy is far too optimistic, it will be more like reading a history book, and every morning there’s a new history book.)

The point of all of this is that there is significant risk associated with AI (position 1). Hanson points out that the benefits of widespread brain emulation will be very unequally distributed (position 2). Meaning that the two major hopes of transhumanists both promise futures significantly less utopian than initially expected. We still have the vague big-bucket-of-science-fiction-awesomeness hope (position 3). But I think Hanson has shown that if you subject any individual cool thing to enough scrutiny it will end up having significant drawbacks. The future is probably not going to go how we expect even if the transhumanists are right about the singularity, and even if we manage to avoid all the catastrophes lying in wait for us (position 4).

The problem with optimistic views of the future (which would include not only the transhumanists, but people like Steven Pinker) is that they’re all based on picking an inflection point somewhere in the not too distant past. The point where everything changed. They then ignore all the things which happened before that inflection point and extrapolate what the future will be like based only on what has happened since. But as I mentioned in a previous post, Hanson is of the opinion that current conditions are anomalous, and that extrapolating from them is exactly the wrong thing to do because they can’t continue. They’re the exception, not the rule. He calls the current period we’re living in “dreamtime” because, for a short time we’re free from the immediate constraints of survival.

Age of Em covers this idea as well, and at slightly greater length than the blogpost where he initially introduced it. When I complain about the book’s length and the time it spends discussing every nook and cranny of em society, I’m mostly complaining about the fact that he could have spent some of that going into more detail on this idea of “dreamtime”. His discussion of larger trends is fascinating as well. And, in the end, I would have preferred for Hanson to have spent most of his time discussing broad scenarios, rather than spending so much on this one, very specific, scenario. Because, as you’ll recall, I’m a believer in the fourth position, that something will derail us in the next 100 years before Hanson’s em predictions are able to come to fruition, and largely because of the things he points out in his more salient (in my opinion) observations about the current “dreamtime”.

We have also, I will argue, become increasingly maladaptive. Our age is a “dreamtime” of behavior that is unprecedentedly maladaptive, both biologically and culturally. Farming environments changed faster than genetic selection could adapt, and the industrial world now changes faster than even cultural selection can adapt. Today, our increased wealth buffers us more from our mistakes, and we have only weak defenses against the super-stimuli of modern food, drugs, music, television, video games and propaganda. The most dramatic demonstration of our maladaptation is the low fertility rate in rich nations today.

This is what I would have liked to hear more about. This is a list of problems that is relevant now. And which, in my opinion at least, seem likely to keep us from ever getting either AI or ems or even just the big-bucket-of-science-fiction-awesomeness. Because in essence what he’s describing are problems of survival, and as I have said over and over again, if you don’t survive you can’t do much of anything else. And brain emulation and AI and science fiction awesomeness all sit at the difficult end of the “stuff you can do” continuum, well beyond mere survival. I understand that some exciting races are being run, and that the finish line seems close, but I still think we should pay at least some attention to the gathering storm.


If the phrase “big-bucket-of-science-fiction-awesomeness” made you smile, even a little bit, consider donating. Wordsmithing of that level isn’t cheap. (Okay maybe it is, but still…)