Konstantin Sietzy: current view, StrongMinds

This is part of a series of posts where each research fellow describes their current reasons for favouring a certain recipient of our final donation, as described further in the full post on Version 0.

Name: Konstantin Sietzy
Date: 2017-02-21

My current best guess is that we should donate to intervention/organisation:
StrongMinds

My cost-effectiveness estimate for the intervention/organisation is:
< $40 per DALY averted.
I consider $40/DALY an upper bound, for reasons detailed previously (range of values: $34-$404 per DALY averted, spanning StrongMinds' internal estimate and the CEA estimate; assuming Michael Plant's view that depression is underrated by a factor of 10-18 relative to other causes, dividing the least favourable cost estimate by the most conservative factor leaves $404 / 10 ≈ $40 as a worst case).
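To make the adjustment explicit, here is a minimal back-of-the-envelope sketch (my own working, not a figure from StrongMinds or the CEA; all numbers are taken from the estimate above):

```python
# Sanity check on the $40/DALY upper bound quoted above.
# Figures are those cited in this post; the adjustment logic is a sketch only.

cost_per_daly_range = (34, 404)      # $/DALY averted: StrongMinds internal vs. CEA estimate
underrating_factor_range = (10, 18)  # Michael Plant's factor by which depression is underrated

# Worst case: least favourable cost estimate, most conservative underrating factor.
worst_case = max(cost_per_daly_range) / min(underrating_factor_range)
print(f"Worst-case adjusted cost: ${worst_case:.0f}/DALY")  # ≈ $40/DALY
```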

What would change my mind:
My runner-up is MIRI. Following a conversation with Tom at last week's meeting, I substantially updated my thinking towards MIRI given its vast estimated effectiveness. Three reasons made me revert to keeping StrongMinds as my top recommendation; conversely, good arguments against these would serve to change my mind. This should be viewed in conjunction with my previous statements about what would change my mind on StrongMinds per se, and reflects specifically the comparison of StrongMinds vs. MIRI. Please find my previous thoughts on StrongMinds here. Please find the rationale for each of the three criticisms mirrored in this section here.

  1. Signalling: The signalling argument is obviously contingent on a belief that OxPrio should, and could, actually attach value to information generation, over and above direct value. This warrants a group discussion, and I would likely update if someone made a convincing argument for why it should not, or in line with the group majority if people’s unanchored confidence estimates indicated that it could not.

    An additional consideration that could change my mind would be a convincing argument that EA in general funds too many global health interventions relative to AI safety. If this were the case, then we should not reinforce it by encouraging a shift within the global health bucket, but should instead use OxPrio's signalling value to encourage a shift from the global health bucket to the AI safety bucket. One argument for this might be that highly effective global health charities in general have exceeded their room for more funding (RFMF) while AI safety organisations in general face budget shortfalls (I believe MIRI failed to meet its donation targets last year).

  2. Accuracy and appropriateness of the MIRI cost-effectiveness estimate: I have not been able to verify the rationale behind the estimated effectiveness of adding one additional AI safety researcher, and Tom's initial description left me somewhat sceptical of it. I will undertake to rectify that as soon as possible.

  3. Future lives / risk aversion: this is the argument I am least confident in. I plan to revisit the paper on donor risk aversion we discussed in weeks 1 and 2 and update my view afterwards.

My current best guess for a runner-up is:
MIRI - see reasoning in the 'What would change my mind' section above.