Sindy Li: “Should we simply defer to GiveWell within the area of global health and development?”

By Tom Sittler

This is part of a series of posts about our progress in the first five days of the Oxford Prioritisation Project. 

Read and comment on the Google Document version of this post here.

Sindy's research memorandum:

As a starting point I have mostly been consulting the website of GiveWell, a nonprofit “dedicated to finding outstanding giving opportunities and publishing the full details of our analysis to help donors decide where to give”. I briefly examined how they arrive at their conclusions to see whether I agree with their approach.

In their current list of top charities they recommend charities working on malaria prevention, deworming and cash transfers. How do they arrive at these recommendations? On their process page, they explain that they first decided to focus on the global poor because 1) they are much poorer than the poor in rich countries, and 2) interventions that help the global poor have more and better evidence of effectiveness, and are also more cost-effective, than those that help the poor in rich countries. They focus on global health because, from reading the relevant literature, they conclude that health is the area of international aid with the strongest track record. Within global health they identify priority interventions that are “backed by strong expert recommendations and/or evidence bases (while also being relatively cost-effective)”. They then search for charities working on these interventions (they have identified hundreds) and examine promising ones in more depth to decide whether they meet GiveWell’s criteria (more on this below). The most outstanding ones are investigated more intensively, including through conversations with charity representatives, partners and funders, and sometimes site visits. They are listed as “top charities” once all staff agree. GiveWell follows up closely with its top charities and publishes both positive and negative updates.

In addition, they maintain a list of “Other Charities Worthy of Special Recognition” (on the same page as the top charities): charities supporting programs that may be extremely cost-effective according to some evidence, but in whose impact GiveWell is less confident, because the evidence is weaker, the track record shorter, or the monitoring information thinner than for top charities.

Their criteria for identifying top charities are:

     Evidence: This asks the question “does this intervention have an effect?” To answer it, they examine rigorous evidence of the intervention’s impact from the academic literature, ideally from multiple studies that are likely to have external validity. They also look at charity-specific data, since charities usually implement their programs differently than in the academic studies.

     Cost-effectiveness: To allocate limited resources among charities they need to estimate how cost-effective each one is, that is, how much “good” it does per dollar. Since different charities try to improve different outcomes, they develop a single measure that is comparable across charities, such as “cost per disability-adjusted life-year (DALY) averted”. Their cost-effectiveness analysis arrives at these numbers using various sources and types of inputs, including: effect sizes found in academic studies; adjustments for concerns about external validity and for differences between how the program was implemented in the study and by the charity; the charity’s unit cost (taking into account all its costs); and more subjective values such as discount rates and the conversion rate between an increase in consumption and a DALY averted. For these subjective values, their Excel sheet lists the inputs from each staff member (and the median), as well as suggested values with explanations.

     Room for more funding: It is important to know not only which charity is the most cost-effective but also what additional funds would accomplish, given what the charity will receive anyway. They investigate this by talking to the charity, relevant experts, and others.

     Transparency: They believe that more transparent charities are likely to 1) have higher-quality evidence (i.e. less uncertainty around it; more on this below), and 2) be able to notice problems and communicate them to GiveWell when they occur. GiveWell engages closely with potential and current top charities while examining and following up with them, and reports publicly on positive and negative developments. Charities’ responses to this engagement give GiveWell an idea of their transparency.
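As a toy illustration of the median-of-staff-inputs step in the cost-effectiveness criterion, the sketch below aggregates hypothetical subjective parameters and computes a discounted cost per DALY-equivalent. All numbers are made up for illustration; they are not GiveWell’s actual inputs, and the calculation is far simpler than their real spreadsheet.

```python
from statistics import median

# Hypothetical staff inputs for two subjective parameters
# (illustrative only; not GiveWell's actual values).
discount_rates = [0.02, 0.03, 0.04, 0.04, 0.05]
dalys_per_doubling_consumption = [0.5, 0.8, 1.0, 1.2, 1.5]

# GiveWell-style aggregation: take the median across staff.
r = median(discount_rates)
k = median(dalys_per_doubling_consumption)

# Toy program: doubles one person's consumption for 10 years,
# at a unit cost of $1,000 per person.
cost_per_person = 1_000
years = 10

# Present value of DALY-equivalents averted, discounted at rate r.
dalys_averted = sum(k / (1 + r) ** t for t in range(years))
cost_per_daly = cost_per_person / dalys_averted

print(f"median discount rate: {r:.2f}")
print(f"cost per DALY-equivalent averted: ${cost_per_daly:,.0f}")
```

Entering your own values in place of the medians, as GiveWell’s sheet allows, amounts to changing the input lists and re-running the calculation.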

How I feel about their method and conclusions:

First, GiveWell gives me the impression of being very rigorous and thorough in their research process: they examine their own thinking, communicate it, and respond to questions and criticisms on their blog. They are also very open about their mistakes. Even aside from examining the substance of their work, this gives me some confidence in their conclusions.

Below are some specific comments:

     Global poor as the population to focus on: This seems reasonable from a utilitarian point of view. I am sympathetic to the views that many places in the world need improving, not just the poorest ones (even if poor people in rich countries are much better off than the global poor), and that one has some obligation to help disadvantaged members of one’s own society (in addition to the most disadvantaged in the whole world), although I am not sure how to justify these views. The Open Philanthropy Project, a collaboration between GiveWell and the foundation Good Ventures, looks at more diverse areas of improving the world, including U.S. policy and global catastrophic risk. Given this, it seems reasonable and desirable that GiveWell itself focuses on global poverty, due to returns to specialization. They, as well as the Global Priorities Project, also seem to think about how to allocate money across causes, but I am not familiar with that work.

     Area (global health) and interventions they focus on: This seems mostly reasonable. However:

     It is possible that GiveWell has neglected some potentially promising interventions or charities (for which evidence exists). This does not seem likely given the search process described above. However, two pieces of evidence point the other way: 1) they added some new charities, including one working on a new intervention, to their 2016 recommendations; 2) intervention reports for many potential programs they identify are either out of date or absent. On 1), though, GiveWell paid attention to GiveDirectly early on and ended up listing it as a top charity, which suggests they stay up to date on potentially effective interventions even outside global health. On 2), I would be less concerned if I knew why this is the case and found the reason convincing (e.g. they have some reasonable way of estimating each program’s expected payoff to decide whether and when to examine it in depth).

     It is possible that some potentially highly impactful interventions lack evidence, which is why they are not currently recommended by GiveWell. I wonder whether GiveWell or other experts have intuitions about which interventions are likely to fall into this category (here we have to rely on educated guesses, since by definition there is little evidence), and whether the potentially high impact justifies collecting more evidence (i.e. more research). I do not have good intuitions about this, either for the interventions they list or for others.

     Similarly, among the “Other Charities Worthy of Special Recognition”, some may have a higher expected payoff (in light of the limited evidence) yet lack evidence, and it may be worth allocating money to investigating them.

     (Their Incubation Grants program is an attempt to address the above two points, but perhaps an incomplete one.)

     When asked why they focus on direct aid rather than the root causes of poverty, they say: “Root-causes-based approaches are, in our view, the kind of speculative and long-term undertakings that are best suited to highly engaged donors”; “Our top charities aren't the only great charities, … but we believe they are the best bet for a low-information donor looking for a verifiably strong chance to do good”; and “We think it's appropriate for donors to focus on the problems they're best at helping with, recognizing that they aren't the only people who are working toward positive change.” These all seem reasonable, but it may be worth looking into arguments in the other direction and evidence for some of these more speculative programs (e.g. those that aim at improving governance and accountability in developing countries; I suppose they have looked into this, but they do not post anything about it). Such programs may not be as cost-effective at poverty reduction as the top charities, but could be worth considering as an area similar to those covered by the Open Philanthropy Project.

     Criterion -- evidence: I am not an expert in global health, but as an economist I am quite happy with the way they evaluate evidence (how they assess individual studies and review the literature).

     Criterion -- cost-effectiveness: As they point out, their cost-effectiveness estimates are 1) sensitive to assumptions, 2) simplified (because much of the needed information is missing), and at the same time 3) complex (enough for some mistakes to potentially go unnoticed). In addition to requiring information they may not have, the analysis also requires subjective value judgements (e.g. the discount rate, and the trade-off between increasing consumption and averting a DALY); for these they have staff input their own values and take the median (you can also input your own to see whether the results change much). I wonder whether in such cases we should instead try to derive a “suggested value” from some moral framework (or several) that many people accept, or from empirically estimated preferences (revealed or reported) of the relevant population, rather than relying on something subjective (I am not sure).

     Even though there are many concerns about the cost-effectiveness analysis itself, their ways of addressing them seem reassuring. First, they argue that expected values should not be taken too literally, and that the quality of the evidence and the availability of independent lines of evidence matter -- I find this argument, based on Bayesian statistics, intuitive and reasonable. They also use a threshold approach, “preferring charities whose cost-effectiveness is above a certain level but not distinguishing past that level”, and put more weight on other types of evidence. Specifically, they consider “anything under $5,000 per life saved (or equivalent, according to one's subjective values about how to compare other sorts of impacts to lives saved) to be excellent cost-effectiveness”. This is a good idea, but in practice I am a bit confused about how it is used: in their Excel sheet, after taking medians of the values input by all staff, the resulting “Cost Per Life Saved Equivalent” for the top charities ranges between $901 and $6,971, and for both the median and the majority of staff, the figure for GiveDirectly is above $5,000.

     On one particular intervention: Among the interventions carried out by their top charities, deworming seems the most problematic to me, mostly because the huge long-term income gains come from a 10-year follow-up of a randomized trial in Kenya, but 1) the increase in income is only statistically significant for the 16% who engage in wage work (and I have not looked at the paper to see how deworming changes selection into wage work), and 2) studies of short-term health benefits do not find much, which makes the mechanism hard to make sense of. Another follow-up study on the Kenyan sample found gains in the cognitive ability of treated children’s siblings, and a natural-experiment study using historical data on deworming in the American South also found income gains; both help corroborate the long-term income result in Kenya. GiveWell also applies large discounts to this result in their cost-effectiveness analysis, saying “Deworming might have huge impact, but might have close to zero impact”. Given all this, I am still a bit unsure about their conclusion; I probably just need to think more about it, given the higher uncertainty than for other interventions.
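The Bayesian argument mentioned above can be made concrete with a toy normal-normal update (my own sketch, with made-up numbers, not GiveWell’s actual model): a noisy cost-effectiveness estimate is shrunk toward a skeptical prior, and the noisier the estimate, the more heavily it is discounted.

```python
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Normal-normal conjugate update: precision-weighted average
    of the prior mean and the noisy estimate."""
    prior_precision = 1 / prior_var
    estimate_precision = 1 / estimate_var
    return (prior_mean * prior_precision + estimate * estimate_precision) / (
        prior_precision + estimate_precision
    )

# Units: hypothetical "lives saved per $10,000 donated".
# Skeptical prior: typical interventions cluster around 1.
PRIOR_MEAN, PRIOR_VAR = 1.0, 1.0

# The same headline estimate (5x the prior mean), backed by
# evidence of different quality (different variance).
strong_evidence = posterior_mean(PRIOR_MEAN, PRIOR_VAR, 5.0, estimate_var=1.0)
weak_evidence = posterior_mean(PRIOR_MEAN, PRIOR_VAR, 5.0, estimate_var=9.0)

print(f"posterior with strong evidence: {strong_evidence:.2f}")  # 3.00
print(f"posterior with weak evidence:   {weak_evidence:.2f}")    # 1.40
```

On this stylized picture, a “safe bet” corresponds to the low-variance case: even when its headline estimate is less spectacular, the posterior can favour it over a speculative intervention whose estimate is noisier.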

Overall, my conclusion is that GiveWell’s process for identifying the best giving opportunities for low-information donors seems sound. I am mostly happy to trust their recommendations, but I have a few questions and concerns about: 1) whether it is worthwhile to gather more evidence for some interventions with high potential but insufficient evidence, 2) the cost-effectiveness analysis, and 3) the effect of deworming.

In the subsequent discussion, three possible topics for further investigation were raised:

1)    Are GiveWell interventions in a meaningful sense neglecting the root causes of poverty?

2)    Is it likely that funding research into neglected tropical diseases would dominate GiveWell top interventions?

3)    Once we take into account the strength of evidence, by updating on a prior distribution over the impact of interventions, is it plausible that funding a ‘safe bet’ like the Against Malaria Foundation would be better than funding more speculative interventions? How, in detail, would this depend on our prior?