Tom Sittler: "Assumptions of arguments for existential risk reduction"

By Tom Sittler, Director

Created: 2017-01-27. Major revision 2017-02-10. 

Read and comment on the Google Document version of this post here.

2017-05-19. Minor correction. In a previous version of this post, I mistakenly used "entire lifespan of humanity" when I actually meant "approximately 1-billion-year lifespan", which caused confusion. Thanks to Jan Kulveit for pointing this out.

Summary:

I review an informal argument for existential risk reduction as the top priority. I argue that the informal argument, or at least some renditions of it, is vulnerable to two objections: (i) the far future may not be good, and our estimates of whether it will be good rest on very weak evidence; (ii) reductions in existential risk over the next century are much less valuable than equivalent increases in the probability that humanity will have a very long future.

Should we maxipok? The assumptions of arguments for existential risk reduction

I often hear existential risk reduction proposed as the top priority with the following simple argument:

1)    The magnitude of expected loss from existential catastrophe is astronomical. As Nick Bostrom writes, “If we suppose with Parfit that our planet will remain habitable for at least another billion years, and we assume that at least one billion people could live on it sustainably, then the potential exist for at least 10^16 human lives of normal duration. [...] However, the relevant figure is not how many people could live on Earth but how many descendants we could have in total. One lower bound of the number of biological human life-years in the future accessible universe (based on current cosmological estimates) is 10^34 years.”

2)    Therefore, even small reductions in existential risk have higher expected value than any imaginable intervention to help people now. Bostrom again: “Even if we use the most conservative of these estimates, which entirely ignores the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.” (The arithmetic behind this step is spelled out just after this list.)

3)    So, at the current margin, we should “maxipok”: “Maximize the probability of an "OK outcome," where an OK outcome is any outcome that avoids existential catastrophe.”
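To make the arithmetic in step 2) explicit: one millionth of one percentage point is 10^-6 × 10^-2 = 10^-8 of the probability, so the expected gain is at least

10^-8 × 10^16 lives = 10^8 lives = 100 × 10^6 lives,

i.e. a hundred times the value of a million human lives, using only the conservative 10^16 figure from step 1).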

Suppose we grant the assumption that future lives are as valuable as present lives.

I believe the simple argument has other unstated assumptions. I suspect that the most sophisticated existential risk reduction advocates are well aware of these, but that informal renditions of their views, especially within the effective altruism community, tend to overlook them.

Objections to the simple argument:

The future may not be good

The descendants of current humans, or other beings they cause to come into existence (like artificial minds or animals), may not have lives that are good on balance. Agriculture is about 10,000 years old. Writing is 5,000 years old. If we assume that future humans, over the course of a billion years or more, will have lives at least as good as those of current humans, or that the quality of their lives will keep improving at the rate it has in the past, we are extrapolating from a very small sample: 10,000 years is 10^-5, or 0.001%, of a billion years. (On the other hand, we must also account for the possibility that future humans will have lives unimaginably better than ours.)

One might reply that we should instead steer the future towards more happiness (see this for example). But this appears massively less tractable than extinction risk reduction. Reducing the risk of extinction over the next century is a plausible lever for affecting the far future because if we go extinct, there definitely won’t be a far future. On the other hand, the proposed causal chain between our actions now and happier lives in hundreds of millions of years is much less direct, to the point where I find it very implausible. Steering the far future is a lot harder than avoiding extinction over the next century.

Reductions in existential risk over the next century are much less valuable than equivalent increases in the probability that humanity will have a very long future

Small reductions in the risk of existential catastrophe over the entire 1-billion-year potential lifespan of humanity are much more difficult to achieve than small reductions in the risk of existential catastrophe over the next century. The simple argument risks being misleading by not making this distinction sufficiently clear, leading us to believe that, say, a 0.01% reduction in the risk of extinction from nuclear weapons over 2017-2117 makes it 0.01% more likely that humanity’s 1-billion-year potential lifespan will be allowed to unfold.


To get across the intuition, I’ve made a simple two-period model of the future of humanity. E_n is the probability of extinction in period n. Then the expected length of humanity (EV) is shown below. Assuming that E_2 does not depend on E_1 (which I find reasonable, at the scale of hundreds of millions of years), the derivative of EV with respect to a reduction in E_1 is also shown below. The naive view (i.e. the misleading interpretation of the simple argument) ignores the risk of extinction in period 2.
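One way to write the model down, writing L_1 and L_2 for the lengths of the two periods (say, the next century and the remaining potential lifespan of roughly a billion years):

EV = (1 - E_1)L_1 + (1 - E_1)(1 - E_2)L_2

dEV/d(-E_1) = L_1 + (1 - E_2)L_2

On this formulation, the naive view credits a reduction in E_1 with the full L_1 + L_2, i.e. it drops the (1 - E_2) discount on the far-future term.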


What’s the upshot?

I’m currently struggling with the consequences of the first objection for the Oxford Prioritisation Project’s decision. I see two options:

  • If the future is likely enough to be bad, do not prioritise existential risk reduction. (Also, do not try to increase x-risk, for reasons of cooperation.) But I have no idea how to decide this. Formally, of course, you should develop a probability distribution over the (average?) goodness of future lives, and support existential risk reduction only if a sufficient amount (half?) of the probability mass lies above ‘lives well worth living’. But I don’t know how to even begin thinking about that adequately. (A minimal sketch of that calculation is just after this list.)
  • If my earlier reply (that steering the far future is much less tractable than reducing extinction risk) is mistaken, and there are plausible interventions to steer the far future, then focus on these rather than on existential risk reduction. The Foundational Research Institute appears to have this view.
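Here is a minimal, purely illustrative sketch of that first-bullet calculation in Python; the distribution, the ‘well worth living’ threshold, and the required probability mass are placeholders, not estimates:

import random

random.seed(0)
# Hypothetical distribution over the average goodness of future lives,
# on a scale where 0 is neutral and 1 is 'lives well worth living'.
samples = [random.gauss(0.5, 1.0) for _ in range(100_000)]

threshold = 1.0        # 'lives well worth living' (placeholder)
required_mass = 0.5    # the "(half?)" above (placeholder)

mass_above = sum(s > threshold for s in samples) / len(samples)
prioritise_x_risk_reduction = mass_above >= required_mass

print(f"P(average goodness > threshold) = {mass_above:.2f}")
print("prioritise existential risk reduction:", prioritise_x_risk_reduction)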

The second objection should be easier to integrate:

  • I’d like to encourage others on the Oxford Prioritisation Project team to build an n-period extension of my toy model and plug some numbers into it; a rough sketch of one possible extension is below.
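As a starting point, here is one possible shape for such an extension, in Python. The number of periods, their lengths, and the per-period extinction probabilities are illustrative placeholders, not estimates:

def expected_length(extinction_probs, period_lengths):
    """Expected total lifespan: each period's length weighted by the
    probability of surviving through it and all earlier periods."""
    survival = 1.0
    total = 0.0
    for extinction_prob, length in zip(extinction_probs, period_lengths):
        survival *= 1.0 - extinction_prob  # P(still around at the end of this period)
        total += survival * length
    return total

# Placeholder numbers: a century-long first period, then nine equal periods
# covering the rest of a ~1-billion-year potential lifespan.
probs = [0.10] + [0.05] * 9
lengths = [100] + [111_111_100] * 9  # years; sums to 10^9

baseline = expected_length(probs, lengths)

# Expected years gained from a 0.01 percentage point cut in period-1 risk.
reduced = [probs[0] - 0.0001] + probs[1:]
gain = expected_length(reduced, lengths) - baseline

# The naive view credits the cut with the full remaining lifespan,
# ignoring extinction risk in later periods.
naive_gain = 0.0001 * sum(lengths)
print(f"modelled gain: {gain:,.0f} expected years")
print(f"naive gain:    {naive_gain:,.0f} expected years")

Plugging in different numbers shows how much the value of near-term risk reduction depends on the extinction risk assumed for later periods.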