A model of Animal Charity Evaluators

Summary. We describe a simple simulation model for the recommendations of a charity evaluator like Animal Charity Evaluators (ACE). In this model, the charity evaluator is unsure about the true impacts of the charities in a fixed pool, and can reduce its uncertainty by performing costly research, thereby improving the quality of its recommendation (in expectation). Better recommendations lead to better utilisation of the money moved by ACE. We also describe how we converted the model’s output, which is measured in chicken years averted / $, into “Human-equivalent well-being-adjusted life-years” (HEWALYs) / $.
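To illustrate the core mechanism (this is a minimal sketch, not ACE's or our actual model), the following toy Monte Carlo simulation assumes charity impacts drawn from a normal prior and models research as reducing the noise in the evaluator's estimates; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_impact_of_recommendation(noise_sd, n_charities=5, n_trials=20000):
    """Monte Carlo estimate of the true impact of the charity the evaluator
    recommends, given a chosen level of measurement noise. More research
    means a lower noise_sd, which means better recommendations."""
    # True (unknown) impacts of the charity pool, one row per simulated world.
    true_impact = rng.normal(loc=1.0, scale=1.0, size=(n_trials, n_charities))
    # Noisy estimates the evaluator observes after doing its research.
    estimate = true_impact + rng.normal(scale=noise_sd, size=(n_trials, n_charities))
    # Recommend the charity with the highest estimate; score its *true* impact.
    picked = np.argmax(estimate, axis=1)
    return true_impact[np.arange(n_trials), picked].mean()

careful = expected_impact_of_recommendation(noise_sd=0.2)  # lots of research
hasty = expected_impact_of_recommendation(noise_sd=2.0)    # little research
```

Under these assumptions, `careful` exceeds `hasty`: reducing uncertainty improves the expected true impact of the recommended charity, which is the effect the full model quantifies (before weighing it against the cost of the research).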

A model of StrongMinds

Summary: In 2016 James Snowden of the Centre for Effective Altruism built a quantitative model estimating the impact of StrongMinds. In order to measure our uncertainty about the estimate, we built a detailed, annotated translation of the model in Guesstimate (a “spreadsheet for things that aren’t certain”), which can be found here. This post acts as an appendix to the Guesstimate model.

Turning a list of numbers into a probability density function

Suppose you have a large list of n random samples from a continuous distribution, and you want to approximate the probability density function of the distribution.

You can’t just take the ith sample for all i and calculate the frequency of each value in the list. Your samples are from a continuous distribution, so each value is likely to be present only once in the sample. If you used this procedure, your probability density function would take the value 1/n at each of the n sampled points and zero everywhere else, telling you nothing about the shape of the underlying distribution.
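One standard fix is kernel density estimation: instead of counting exact values, place a smooth Gaussian “bump” on every sample and average them. A minimal sketch (the bandwidth rule and test distribution are illustrative choices, not part of the original post):

```python
import numpy as np

def gaussian_kde_pdf(samples, xs, bandwidth=None):
    """Approximate the pdf at points xs by centring a Gaussian kernel of
    width `bandwidth` on every sample and averaging the kernels."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if bandwidth is None:
        # Silverman's rule of thumb gives a reasonable default bandwidth.
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)
    # One Gaussian kernel per sample, evaluated at every query point.
    diffs = (np.asarray(xs, dtype=float)[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(42)
samples = rng.normal(loc=0.0, scale=1.0, size=5000)  # pretend these came from Guesstimate
xs = np.linspace(-4, 4, 9)
density = gaussian_kde_pdf(samples, xs)
```

With 5,000 standard-normal samples, the estimated density near 0 comes out close to the true value of about 0.40, and falls away in the tails, recovering the bell shape that the naive frequency-counting procedure cannot see.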

Alice and Bob on existential risk

These are notes from a conversation on 2017-03-14 in which two Oxford Prioritisation Project members discussed focus areas, and in particular the tractability of existential risk reduction. They talk about hard questions openly, without worrying about explaining or rigorously backing their views to the usual standard of the Oxford Prioritisation Project blog. As a result, I (Tom) have decided to anonymise the conversation notes.

Modelling the Good Food Institute

We have attempted to build a quantitative model to estimate the impact of the Good Food Institute (GFI). We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals. In this post, I explain some of the modelling approaches we tried, and why we are not satisfied with them. This post assumes good background knowledge about GFI; you can read more at Animal Charity Evaluators.

Alice and Bob on big-picture worldviews

Astronomical Waste

-       Alice: best formulation: we want to use the energy of the stars in a way whose value is neither zero nor negative

-       Bob: agree that we want this, but it does not follow that humans continuing to exist is great

-       Alice: Would you be happy with human extinction, then?

-       Bob: no, because evolution and wild-animal suffering would continue, so happy with having humans stick around

Daniel May: "Should we make a grant to a meta-charity?"

I introduce the concept of meta-charity, discuss some considerations for OxPrio, and look into how meta-charities evaluate their impact, and the reliability of these figures for our purposes (finding the most cost-effective organisation to donate £10,000 to today). I then look into the room for more funding for a few meta-charities, and finally conclude that meta-charities are worth pursuing seriously.

Qays Langan-Dathi: "AI Safety"

In conclusion, my answer to my main question is yes: there is a good chance that AI risk prevention is the most cost-effective focus area for saving lives, whether or not future human lives are counted. The probability of transformative AI arriving is well above negligible, and the risk of it turning catastrophic is taken seriously by many leading researchers, so the area excels on the importance criterion; because it is a recent development, it is also highly neglected. The main problem is tractability: we cannot yet ascertain how much a donation would actually change the probability of existential risk. That is something I intend to estimate more accurately in a future update. As it stands, though, AI risk prevention looks very promising.

Final decision: Version 0

On February 19, we reached what we call version 0. Version 0 was a self-imposed deadline, five weeks into the project, for producing a minimum viable product, i.e. the name of a grantee and some justification for it. 
If we exclude the rankings from the three team members who did not submit a current view document, the result is a tie between the Machine Intelligence Research Institute and StrongMinds (this can be computed easily using the procedure described here).