Alice and Bob on existential risk

By Tom Sittler, Director

These are notes from a conversation on 2017-03-14 in which two Oxford Prioritisation Project members discussed focus areas, and in particular the tractability of existential risk reduction. They talk about hard questions openly, without worrying about explaining their views clearly or backing them up to the usual standard of rigour of the Oxford Prioritisation Project blog. As a result, I (Tom) have decided to anonymise the conversation notes.

Bob:

Alice’s focus area ranking is:

1.    Global health/development = existential risk reduction (indifferent)

2.    Mental health

3.    Animal farming

Alice’s organisation ranking is:

1.    Against Malaria Foundation

2.    Schistosomiasis Control Initiative

3.    Drugs for Neglected Diseases Initiative

4.    StrongMinds

5.    Abdul Latif Jameel Poverty Action Lab

6.    Future of Humanity Institute

7.    Centre for Effective Altruism

8.    Good Food Institute

How can these be consistent? If x-risk is a good focus area, why are x-risk organisations so low in your ranking?

Alice: Tractability is very low. In light of that, I think my true focus area ranking is:

1.    Global health

2.    Mental health

3.    Animals

4.    Existential risk

Alice: Really unsure where to put meta. Lots of uncertainty. And I’m not sure the multiplier estimates we saw are really unbiased.

Bob: OK, but you could take all that into account and estimate the expected value. Imagine you had to donate the £10,000 now: what would you do?

Alice: In that case, I would rank the focus areas like this:

1.     Global health

2.    Mental health

3.    Meta

4.    Animals

5.    Existential risk

Bob: So let’s talk about the tractability of x-risk. [describes Matheny, Jason. 2007. “Reducing the Risk of Human Extinction”, Risk Analysis 27(5)]

Alice: If they can save 12 million people with 20 billion dollars, that’s more cost-effective than AMF. But it doesn’t seem like there’s an organisation doing that exact thing, and the organisations working on x-risk may not be as cost-effective.
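As a very rough sketch of the comparison Alice is making here: the $20 billion and 12 million figures are the ones quoted from Matheny, while the ~$3,500-per-life figure for AMF is an assumed placeholder (GiveWell’s estimates vary from year to year), not a number from the conversation.

```python
# Back-of-the-envelope version of Alice's comparison. The AMF figure is an
# assumed placeholder (GiveWell's estimates vary); the $20bn and 12m figures
# are the ones quoted from Matheny (2007).

matheny_cost_usd = 20e9        # $20 billion programme cost
matheny_lives_saved = 12e6     # 12 million expected lives saved
amf_cost_per_life_usd = 3_500  # assumed placeholder, not from the conversation

cost_per_life_matheny = matheny_cost_usd / matheny_lives_saved
print(f"Matheny-style programme: ~${cost_per_life_matheny:,.0f} per life saved")  # ~$1,667
print(f"AMF (assumed figure):    ~${amf_cost_per_life_usd:,.0f} per life saved")
```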

Bob: People typically think anthropogenic risks (AI, biorisk, nukes) are much higher this century than natural risks (asteroids…). [summarises a talk by Toby Ord]

Alice: Yes, but it may be much easier (more cost-effective) to reduce natural risks than technological risks.

Bob: How much more?

Bob: What’s your intuition on the anti-nuclear movement (broadly construed) during the Cold War? What’s your subjective probability that we have it to thank for being alive today?

Alice: Very low. But I’m not knowledgeable.

Bob: Haha, none of us know anything. But I think a bad number may still be better than no number.

Alice: I should read If It’s Worth Doing, It’s Worth Doing With Made-up Statistics.

Alice: My prior for the tractability of existential risk reduction is very low. And that is an argument for finding out more.

Bob: But what if you couldn’t find out more, and had to make the decision now? Use expected value.

Alice: In that case I would just donate to AMF.

Bob: Of course, there’s some set of beliefs that’s consistent with those actions being expected-utility maximising, but I’m not sure they are your actual beliefs. Maybe (implicitly) you are using some kind of satisficing model, saying “I won’t donate to anything that does not have evidence at least as good as this threshold”.

Bob: As per Toby’s paper, the value of x-risk reduction now depends on whether there will be lots more risk in the future.

Alice: I would assume the risk in every period after we solve the AI problem will be similar in magnitude to today’s, and will come from independent sources, because we are not good at predicting the future and can’t really assume that AI will solve every future problem and that risks will be lower.
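A minimal sketch of the point at stake here, contrasting the constant-risk assumption Alice describes with the low-future-risk scenario discussed next. All the numbers are illustrative assumptions, not estimates from the conversation.

```python
# Illustrative only: why the value of reducing x-risk now depends on how much
# risk remains in later periods (the point from Toby Ord's paper that Bob
# mentions). Numbers are assumptions, not estimates from the conversation.

def expected_future_centuries(risk_per_century: float) -> float:
    """Expected further centuries survived, assuming an independent,
    constant extinction risk each century (geometric distribution)."""
    p = risk_per_century
    return (1 - p) / p

# Alice's assumption: risk stays roughly constant (say 2% per century).
print(expected_future_centuries(0.02))    # ~49 centuries of expected future

# Carol's scenario: risk drops to near zero after this century, so averting
# extinction now protects a vastly longer expected future.
print(expected_future_centuries(0.0001))  # ~9,999 centuries of expected future
```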

Bob: What about Carol’s argument that AI trumps other technologies, will bring us to a stable equilibrium, and will keep risks low for a long time, or forever?

Alice: I’m not sure we are good enough at predicting the nature of future technologies and whether AI will really take care of everything. Perhaps we should look at how good we were historically at predicting future technologies. We’d probably want to assign credences to these views and ask experts.

Bob: I agree that Carol’s view is very overconfident.

Alice: So let’s talk about your ranking. You put Global Priorities Research at the top. I guess that could be right, it seems very neglected. I haven’t thought about it much.

Bob: So you’re optimistic about prioritisation research but pessimistic about x-risk reduction? That is, when Owen Cotton-Barratt works on reducing x-risk in his office, he’s really ineffective, but when he works on prioritisation he’s effective?

Bob: Say we could make a clone of Owen for £4M. How good would that be?

Alice: I don’t really know what they do. I guess there are Owen’s models, which seem like a pretty important contribution. On the other hand, it seems like we keep encountering hard questions that we can’t find answers to, so maybe CEA researchers haven’t worked on them, and I don’t know if that’s because 1) they don’t have enough people, 2) they have to spend time on other stuff like fundraising, or 3) it’s just too hard to produce such research.

Bob: Owen may have more in his head than he writes down. Same with Toby.

Alice: Maybe they haven’t done this because the prioritisation research community is still small.

Bob: What about other people who are trying to do global prioritisation? The Copenhagen Consensus comes to mind. There must be more people trying to do this, like some economists. OpenPhil did a little bit of it when they picked their focus areas, but they are fully funded.

Alice: So tell me more about why you think x-risk is sufficiently tractable.

Bob: My intuition is that these are (hard) problems that humanity can solve. Let’s say the probability of extinction is 2% over the next century. It seems reasonable to me that if x-risk became as much of a big deal politically as climate change is now, we could halve that risk (at least).
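A rough illustration of what Bob’s numbers would imply, counting only people alive today. The 2% baseline and the halving are the numbers Bob states; the world population figure is an assumption, not something from the conversation.

```python
# Rough illustration of Bob's intuition. The population figure is an
# assumption; the 2% baseline and the halving are the numbers Bob states.

baseline_risk = 0.02   # Bob's assumed extinction risk over the next century
reduced_risk = 0.01    # after a climate-change-scale political effort
population = 7.5e9     # approximate world population (present people only)

expected_present_lives_saved = (baseline_risk - reduced_risk) * population
print(f"~{expected_present_lives_saved:,.0f} expected present lives saved")  # ~75,000,000
```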

Alice: This kind of argument that provides a rough estimate is okay, but we’d also want concrete arguments about whether the things people are working on are useful. Delilah shared a table by the Open Philanthropy Project comparing different x-risks, which ended up ranking biosecurity on top.

Bob: I have no strong views on AI vs. other risks. I mostly take it from authority.

Bob: OK, so let’s talk about concrete things that people are doing. Do you think Bostrom publishing Superintelligence has reduced expected x-risk?

Alice: I think at this early stage what’s more important than single pieces of research is to raise awareness, e.g. publishing books like Superintelligence, the FLI conference, getting tech companies to care about it and have multiplier effects, and influencing policymakers (e.g. Owen talked to the UK Parliament about existential risk).