Tom Sittler: current view, Machine Intelligence Research Institute

Name: Tom Sittler
Date: 2017-02-08

My current best guess is that we should donate to intervention/organisation:
Machine Intelligence Research Institute (MIRI)

My cost-effectiveness estimate for the intervention/organisation is:
I use the Global Priorities Project’s (GPP) tool for ‘quantifying AI safety’.

Without looking at the drop-downs (to avoid being primed), I estimate the different parameters of the model.

  • the total existential risk associated with developing highly capable AI systems: 10% (based on hard-to-verbalise intuition, very loosely based on this)
  • the size of the research community working on safety by the time we develop those potentially risky AI systems:

    • From its website, MIRI has 15 people.

    • From its website, my guess is that FHI has 20 full-time equivalents.

    • I guess DeepMind has 300 people based on (1, 2, 3); supposing 5% of them work on safety, that’s 15 people.

    • So the current size of the community is ~50 people.

    • As a wild guess, I suppose the eventual community will be 100 times larger, at 5,000 people (the arithmetic is sketched in code after this list). With more time, I would like to sanity-check this number by looking at the size of the nuclear security community in 1985.

  • the effect of adding a researcher to the community now (and for their career), in terms of the total number of researchers that will be added to the eventual community: 3

  • the percentage of bad scenarios we should expect to avert if, in a heroic effort, we managed to double the total amount of work that would be done on AI safety: 10%
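To make the arithmetic behind these inputs explicit, here is a minimal Python sketch (the variable names are mine; every number is one of the guesses listed above, not an output of GPP’s tool):

```python
# Community-size arithmetic from the list above (all numbers are my guesses).
miri_staff = 15               # from MIRI's website
fhi_ftes = 20                 # my guess from FHI's website
deepmind_safety = 300 * 0.05  # ~300 DeepMind staff, supposing 5% work on safety -> 15

current_community = miri_staff + fhi_ftes + deepmind_safety  # ~50 people
eventual_community = current_community * 100                 # wild guess: 100x growth -> 5,000 people

print(current_community, eventual_community)  # 50.0 5000.0
```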

Output of the model:
Then adding a career (of typical quality for the area) to the field now adds about 3 people to the field total, which is 0.1% of the total. This is about 0.1% of the amount that would be needed to double the field, so we should expect it to avert about 0.01% of the bad outcomes, which is a total chance of 1 in 100,000 of averting existential catastrophe.
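As a sanity check, here is the same chain of multiplications in Python, using the unrounded numbers (the output’s 0.1% is 3/5,000 ≈ 0.06% rounded up, which is where the 1 in 100,000 comes from; the unrounded product is closer to 6 in a million):

```python
# Reproducing the model output's chain of reasoning with the inputs above.
eventual_community = 5_000   # estimated eventual size of the safety field
researchers_added = 3        # counterfactual researchers added by one career now
averted_if_doubled = 0.10    # share of bad scenarios averted by doubling the field
total_ai_risk = 0.10         # total existential risk from advanced AI

fraction_of_doubling = researchers_added / eventual_community        # ~0.0006, i.e. ~0.06%
p_avert = fraction_of_doubling * averted_if_doubled * total_ai_risk  # chance one career averts catastrophe

print(fraction_of_doubling, p_avert)  # ~0.0006 and ~6e-06
```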

A (very naive) guess is that the cost of getting someone to join the AI safety community for the rest of their career corresponds to 40 years of salary for a software engineer. Let’s say we pay them £50,000 per year (see 4); the total cost is then £2,000,000.

So with a £10,000 donation, which covers 0.5% of this cost, we could provide 0.5% of a 1 in 100,000 chance of averting catastrophe, or about 0.5 * 10^-7.
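The same step as a quick Python sketch (the £10,000 figure is the one implied by the 0.5% share above):

```python
# Cost of one researcher-career and the slice of it a £10,000 donation buys.
annual_salary = 50_000     # £ per year, software-engineer salary guess
career_years = 40
cost_per_career = annual_salary * career_years   # £2,000,000

p_avert_per_career = 1e-5                        # ~1 in 100,000, from the model output
donation = 10_000                                # £, donation size implied by the 0.5% share
donation_share = donation / cost_per_career      # 0.005, i.e. 0.5% of a career

p_avert_per_donation = donation_share * p_avert_per_career
print(cost_per_career, p_avert_per_donation)     # 2000000 5e-08, i.e. 0.5 * 10^-7
```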

Read more about MIRI:

What would change my mind:

  • In response to my concern (1) noted here, either:

    • A convincing way to cash out (1), i.e. an argument that takes my concern from “I don’t know what to do about this possibility” to “we should not focus on reducing existential risks”. This would cause me to move to a runner-up outside this focus area.

    • A plausible argument for funding another type of AI research, with a (convincing) research agenda focussed on reducing the probability of a bad future (S-risks).

  • If, among academic AI researchers who believe that AI safety research is a good idea in practice, 70% believe that MIRI’s work is not a good way to pursue it, and this is not the case for FHI’s work, then donate to FHI.

  • If, after spending 2 hours understanding Michael Dickens’ model in more depth and putting in my best-guess numbers, the model shows AI safety beaten by some other intervention by more than a 50% margin, I don’t promise I’ll change my mind, but I will update away from MIRI and seriously look into the other intervention.