How much does work in AI safety help the world? Probability distribution version

By Tom Sittler, Director

2017-04-26

We're centralising all discussion on the Effective Altruism forum. To discuss this post, please comment there.

The Global Priorities Project (GPP) has developed a model that quantifies the impact of adding a researcher to the field of AI safety. Quoting from GPP:

There’s been some discussion lately about whether we can make estimates of how likely efforts to mitigate existential risk from AI are to succeed and about what reasonable estimates of that probability might be. In a recent conversation between the two of us, Daniel mentioned that he didn’t have a good way to estimate the probability that joining the AI safety research community would actually avert existential catastrophe. Though it would be hard to be certain about this probability, it would be nice to have a principled back-of-the-envelope method for approximating it. Owen actually has a rough method based on the one he used in his article Allocating risk mitigation across time, but he never spelled it out.

I found this model (moderately) useful and turned it into a Guesstimate model, which you can view here. You can write to me privately and I’ll share my inputs with you.

[Screenshot: Guesstimate model]
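
To make the shape of such a calculation concrete, here is a minimal Monte Carlo sketch in Python of a Guesstimate-style estimate. To be clear, this is neither GPP's model nor my Guesstimate model: the decomposition and all input ranges below are hypothetical placeholders, included only to show how uncertainty over the inputs propagates to a distribution over the final probability.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# Hypothetical placeholder inputs -- not the values from the GPP or Guesstimate model.
# Probability of AI-driven existential catastrophe absent further safety work.
p_catastrophe = rng.uniform(0.01, 0.20, N)

# Probability that the safety research community averts the catastrophe,
# conditional on it otherwise occurring.
p_field_solves = rng.uniform(0.05, 0.50, N)

# Size of the research field over the relevant period (researcher-careers).
field_size = rng.lognormal(mean=np.log(500), sigma=0.5, size=N)

# Marginal share of the field's success attributable to one extra researcher.
# A 1/n share is a crude proxy; diminishing (e.g. logarithmic) returns to
# effort would give a smaller figure for large fields.
marginal_share = 1.0 / field_size

# Probability that one additional researcher averts existential catastrophe.
p_avert = p_catastrophe * p_field_solves * marginal_share

print(f"mean:         {p_avert.mean():.2e}")
print(f"median:       {np.median(p_avert):.2e}")
print(f"90% interval: [{np.quantile(p_avert, 0.05):.2e}, "
      f"{np.quantile(p_avert, 0.95):.2e}]")
```

Guesstimate performs essentially this kind of sampling behind its cells; the point of the probability distribution version is that the output is an interval rather than a point estimate.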

Have other people found this model useful? Why or why not? What would be your inputs into the model?