Alice and Bob on big-picture worldviews

By Tom Sittler, Director

These are notes from a conversation on 2017-03-02 in which two Oxford Prioritisation Project members shared their big-picture worldviews. They discuss hard questions openly, without worrying about explaining or rigorously supporting their views to the usual standard of the Oxford Prioritisation Project blog. For that reason, I (Tom) have decided to anonymise the notes.

We're centralising all discussion on the Effective Altruism forum. To discuss this post, please comment there.


Astronomical Waste

-       Alice: best formulation: we want to use the energy of the stars in a way whose value is neither zero nor negative

-       Bob: agree that we want this, but it does not follow that humans continuing to exist is great

-       Alice: Would you be happy with human extinction, then?

-       Bob: no, because evolution and wild-animal suffering would continue, so I’m happy to have humans stick around

-       Alice: What are some kinds of good extinction events then?

-       Bob: paperclip maximisers (assuming there are no nearby aliens)


What percentage of AGI-space consists of paperclip maximisers?

-       Alice: my unfiltered intuition is that much of it is something simple like “fill everything with X”, where X is probably not sentient

-       Bob: agree; to produce lots of suffering, an AGI needs a fairly human-like utility function that leads it to run simulations or create many sentient beings.


Alice: Is donating to MIRI good, then, since its main effect might be to decrease the probability mass on paperclip maximisers, and thus increase both very good and very bad outcomes?

Bob: definitely a worry

Alice: ideally you would like to donate to “MIRI2”, a MIRI (more) focused on avoiding the bad outcomes. Is your thought process something like “since I can’t get that, I might as well do value spreading”?

Bob: I see them as complementary, since we want to influence the values of the people who end up making AI happen.

Alice: complementary, but which one is more valuable at the current margin?

Bob: very hard; what are the relative funding levels? Approximately 10m for AI safety, maybe 100-200m for animal welfare? I think this suggests AI safety is better on the margin, but with very low confidence.

Alice: your main reason for supporting GFI is value spreading. What’s the broad worldview behind your favouring value spreading?

Bob: we may be at the start of continuing exponential growth; thus what happens now is important (in particular, influencing AI).

It seems unlikely that value-spreading was responsible for the good we have at the moment (Enlightenment values might just come naturally from a new environment). So maybe value-spreading is hard.

Alice: Intuition: most of the successful value-spreaders were politicians, philosophers, and artists. However, a counter-analogy might be slave-replacing machines that then caused values against slavery (suggesting the machine-makers were the true cause of the value change).

Alice: Let me give an important part of my worldview. If we zoom out really far (on the 100-million-year timescale), then the start of civilisation and now are effectively the same point. The big lever we have for affecting the far future is (preventing) extinction. We’re clueless about whether and what kind of effect trajectory changes will have.

Bob: If we’re clueless about the effects of attempting trajectory changes, we’re also clueless about the sign of the future. Why prevent extinction, then?

Alice: We’re relatively more clueless about the effects of trajectory changes than about the sign of the future, because a trajectory change requires a world model *plus* a causal explanation of how the change makes things better. I concede that we might still be very clueless about the sign of the future, but that consideration does not feed much into my actions, because I’m emotionally reluctant to let us go extinct.

Bob: I tend to think that certain actions can be positive in a wide variety of future worlds, and so can be reliably good. Maybe something like spreading cooperative agents, which is helpful whether things go well or badly.

Alice: So there seems to be a difference in how clueless we think we are about affecting the far future (except through preventing extinction).

Alice: One might still want to do value-spreading for, say, the next 1,000 years, about which one is less clueless. This would be saying that the most important decision is whether to reduce or increase x-risk, but we’re too clueless about that, so we should do something medium-term (though still not short-term like GiveDirectly).

Bob: Do you think AI researchers can influence the future? Are we less clueless here?

Alice: My inclination is to say no, because in the very long term AI may not be the big thing forever. Imagine prehistoric EAs. After discovering fire, they think they now have a huge amount of control over the far future. So they think it’s really important to convince their fellow tribe members of their values, for instance that it’s better to kill caribou humanely. But they would turn out to have been very wrong, even just hundreds of thousands of years later (not millions).

Bob: I want to think about whether I claim to have beliefs about the effects of actions over hundreds of millions of years, or whether my model of the far future stops at about 10,000 years. As in Alice’s point above, it may still make sense to optimise how things go within this shorter, clueful horizon.

Alice: I think a big weakness of my worldview is that I’m unsure about the sign of the future, but since this is unpleasant to think about, and hard to make any kind of progress on, I tend not to give proper attention to this consideration.
