Qays Langan-Dathi: “Should we cover global catastrophic risks at all? If we do, what are the main risks to consider?”

By Tom Sittler, Director

Read and comment on the Google Document version of the post here

This is part of a series of posts about our progress in the first five days of the Oxford Prioritisation Project. 

Qays called our attention to a summary spreadsheet of the Open Philanthropy Project’s current priorities within Global Catastrophic Risks. The top priorities were biosecurity, geoengineering, geomagnetic storms, and potential risks from artificial intelligence. The spreadsheet describes the highest-damage scenario for each risk, as well as possible philanthropic interventions to mitigate it.

This served as the starting point for an extremely interesting discussion. Qays made a new and, to me, surprising point: the importance and uncrowdedness of existential risks may make them a plausible focus area even if we don’t care about humanity’s long-term future, and only want to avoid suffering and death for those already in existence. This surprised me because, of the people I know personally who advocate for existential risk reduction as an altruistic priority, all do so on the strength of long-term-future arguments. On the other hand, the view did not strike me as entirely implausible. I asked Qays to work on formalising that intuition with a quantitative estimate.
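
To make the shape of that intuition concrete, here is a minimal back-of-the-envelope sketch. This is not Qays’s actual estimate: every input below is an illustrative placeholder assumption, not a figure from the Project or from the Open Philanthropy Project. The point is only that, under even modest assumptions, the expected number of presently existing lives saved can make risk reduction look competitive on cost-effectiveness without invoking future generations.

```python
# Back-of-the-envelope sketch: value of existential risk reduction counting
# only people alive today. All numbers are illustrative placeholders.

CURRENT_POPULATION = 7.5e9       # approximate world population (people)
BASELINE_RISK = 0.01             # assumed probability of an extinction-level
                                 # catastrophe this century (placeholder)
RELATIVE_RISK_REDUCTION = 0.01   # assumed fraction of that risk a funded
                                 # programme could remove (placeholder)
PROGRAMME_COST = 1e9             # assumed programme cost in USD (placeholder)

# Expected deaths averted among people alive today, ignoring all future lives.
expected_deaths_averted = (
    CURRENT_POPULATION * BASELINE_RISK * RELATIVE_RISK_REDUCTION
)
cost_per_death_averted = PROGRAMME_COST / expected_deaths_averted

print(f"Expected present-person deaths averted: {expected_deaths_averted:,.0f}")
print(f"Cost per death averted: ${cost_per_death_averted:,.0f}")
```

With these placeholder inputs the sketch gives roughly 750,000 expected deaths averted at about $1,300 each, which is the intuition Qays’s quantitative estimate would need to test against more careful numbers.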