Qays Langan-Dathi: "AI Safety"

Is it worthwhile for us to look further into donating to AI risk research?

By Qays Langan-Dathi

Summarised answer

In conclusion, my answer to the main question is yes. There is a good chance that AI risk prevention is the most cost-effective focus area for saving lives, whether or not we count future human lives. The probability of advanced AI arriving is far from negligible, and the risk of it turning catastrophic is taken seriously by many leading researchers, so it scores highly on the importance criterion. On top of that, because it is such a recent area of concern, it is also highly neglected. The main problem is tractability: we cannot ascertain how much we could actually change the probability of existential risk even if we did donate. That is something I intend to estimate more accurately in a future update. As it stands, though, AI risk prevention looks very promising.

Introduction

Whilst there are many worthwhile interventions available for us to donate to in areas such as health, a more neglected and possibly more important area to look into is global catastrophic risks. Of the main global catastrophic risks being considered, possibly the most likely is the risk from artificial intelligence, so in this blog post I would like to explore what the risk is, what we can do to prevent it, and finally whether it is worthwhile compared to other focus areas.

What is AI risk and why is it important?

Currently we have systems capable of weak/narrow AI, which can complete complex but limited tasks: for example the chess program Fritz, which, whilst better than any human at chess, is limited to that one task. The risk we will be discussing comes from strong/general intelligence, in which the AI would be able to surpass humans in nearly every cognitive task. How can AI pose an existential risk to humanity? Contrary to most popular science fiction, the biggest risk from strong AI is not that it would gain malevolent emotions and aim to eliminate humanity, but that it might do so as a destructive sub-goal while pursuing a benign main goal. A machine may give you what you asked for, not necessarily what you want: a driverless car might take you where you want to go literally “as fast as it can”, with no regard for whether it breaks the speed limit, makes you vomit or has the police chasing you. Another well-known example is the paperclip problem, in which an AI tasked with creating as many paperclips as possible for a paperclip factory takes this as its sole goal and uses up all available resources, up to and including the atoms in human bodies, to create more paperclips. So, to summarise, the main concern is that the goals of an AI may not be aligned with our goals.

It is also important to note that advances in AI development would bring great benefits for humanity. Just a few examples: speeding up science and thereby improving medical procedures, improving methods of obtaining sustainable energy, and even optimising aid relief. Laying the groundwork for safe AI procedures would therefore help benevolent AI to be accepted into everyday use earlier, and so may in fact save lives.

We must also take into account that this is an existential risk, and so if we value future lives then this can make the calculations weigh heavily in favour of preventing AI risk.

A useful table on the FLI website (https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/) summarises quite well the main myths and facts you need to know.


What is the likelihood that AI risk will even happen?

Now this may all seem like it is in the realm of science fiction, but will it really happen? The short answer is that we cannot be certain, but there have been a number of surveys of experts in this field to estimate when, and whether, AI will become reality.

An important recent survey is Müller & Bostrom (2014), which polled the 100 most-cited living AI scientists (the “TOP100” group). The experts were asked: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for HLMI to exist?”

Responses were:

[Table of respondents’ estimated years for a 10% / 50% / 90% probability of HLMI not reproduced here.]

29 of the 100 responded, but this is still the best estimate we can come up with right now, and the authors seem to plan to run surveys like this regularly. Note also that HLMI stands for high-level machine intelligence, defined as an AI system “that can carry out most human professions at least as well as a typical human”.


The graph of survey results at http://aiimpacts.org/ai-timeline-surveys/ shows that the most recent surveys put a 50% probability on human-level AI arriving by 2050 at the latest.

However, a key point is that even if you think AI is still many decades away, it is still important to start safety research early, because many of the problems are hard enough that they may take decades to solve. One could argue that the researchers who eventually create AI will simply make sure it is safe as they go, and that there is therefore no need to fund anything now. The problem with this is that if there is an arms race, organisations may forgo safety for speed.

So overall, the only thing I can say with certainty is that we really are uncertain about when AI will be developed; however, going by the median of experts’ opinions, the probability of it happening in the next century is at least non-negligible.

Tractability

Tractability is one of the main issues when deciding whether to fund AI risk prevention, because the timescale on which AI will be created is very long, unknown, and has a large variance. One key question is whether it is best to fund now or to prioritise funding later. At first it seems logical to wait until the picture of what AI will look like, and how things will play out, becomes clearer before spending time and money. However, there are several arguments which point towards funding now. One is that without groundwork done now, later work may head in completely the wrong direction, and the earlier this is discovered the more profitable it is in the long run. Furthermore, as I have said before, it is far better to have completed enough safety research before AI is ready than for AI to be released on the world unprepared.

This article here summarises the main now vs later points rather well: https://www.fhi.ox.ac.uk/the-timing-of-labour-aimed-at-reducing-existential-risk/

 

Another factor is how government regulation may play out. Work done on this now may help spread cooperation; otherwise too much may be learned separately, which could lead to an arms race in the future. Increased awareness as soon as possible is also a good thing: this is a topic which can easily be exaggerated or misrepresented by the media, and if a lot is known about it early on it may be easier for governments to regulate AI technology sensibly, allowing it to do good earlier. And since so much is unknown, it may even be worthwhile to support interventions which aim to work out when and how governments could best regulate AI development.

Overall we cannot really judge when AI will be developed, and so cannot measure how much effect our work would achieve. However, for AI to be developed there are things that must happen first, and those are things we can work on now; the point is to be ready before AI is, so it is clearly beneficial to start preparations early rather than risk what could happen if we started too late.

Neglectedness

Research into AI risk is a rather recent research area, and as such there may be many suitable opportunities for funding. The attention it receives is steadily increasing, however, so there is a chance that funding may become saturated. There are multiple organisations working on AI risk, most of them quite new and still in a growth stage. For example, MIRI (the Machine Intelligence Research Institute) is currently fundraising (https://intelligence.org/2016/09/16/miris-2016-fundraiser/#targets) in order to expand its research team, and has collected $546,165 as of writing towards its $750,000 basic target, with growth and stretch targets at $1,000,000 and $1,250,000 respectively. Clearly they believe they can put more money to good use, which suggests there are opportunities if we did in fact donate our £10,000. Open Phil goes so far as to say: “I consider this cause to be highly neglected, particularly by philanthropists, and I see major gaps in the relevant fields that a philanthropist could potentially help to address.” (http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity)

 

Which organisations are there working on AI risk?

The big ones are:

MIRI (Machine Intelligence Research Institute)

FHI (Future of Humanity Institute)

CSER (Centre for the Study of Existential Risk)

FLI (Future of Life Institute)

AI100 (the One Hundred Year Study on Artificial Intelligence)

This article from the Effective Altruism Forum has a good description of the big organisations: http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/%3f

What they all have in common is that they are relatively new compared to many charities working in other focus areas, e.g. health. Furthermore, most of them are still in a growth stage, so donations at this stage would definitely be appreciated.

A simple quantitative estimate

Here is a quick calculation which I have done to compare donating to AI research with donating to AMF (the Against Malaria Foundation):

A rough estimate of what we can do

Maximum lives saved = population × risk of extinction = np, where n is the population and p the risk (the maximum assumes 100% efficiency of whatever intervention we choose to donate to, i.e. the risk is eliminated entirely)

= 7,500,000,000 × 0.05 (population from http://www.worldometers.info/world-population/ ; risk of extinction before 2100 from the FHI 2008 survey, via https://en.wikipedia.org/wiki/Global_catastrophic_risk)

= 375,000,000, i.e. a mean estimate of 375 million lives.
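To make the arithmetic easy to reproduce, here is a minimal sketch of the calculation above in Python; the variable names are my own, and the figures are simply the population and risk estimates already quoted.

```python
# Back-of-the-envelope upper bound: lives saved if an intervention
# removed the AI extinction risk entirely (the 100% efficiency assumption).
population = 7_500_000_000   # n, current world population (worldometers.info)
extinction_risk = 0.05       # p, FHI 2008 estimate of extinction risk before 2100

max_lives_saved = population * extinction_risk
print(f"Maximum lives saved: {max_lives_saved:,.0f}")  # 375,000,000
```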

Now this is a nice estimate, but it has some big flaws. Firstly, our donation will certainly not eliminate the risk of extinction completely. Secondly, we do not know how the risk reduction scales with the amount donated, i.e. what our particular £10,000 would actually buy. I will address this now.

What we can deduce? (comparison to GiveWell top charities)

I shall show that existential risks are at least worthy of further investigation, if not the complete focus of the project.

According to: http://uk.businessinsider.com/the-worlds-best-charity-can-save-a-life-for-333706-and-thats-a-steal-2015-7?r=US&IR=T

AMF needs $3,337.06 per life saved. (Note that I have not been able to find GiveWell quoting this number themselves, but we shall only use it for estimation purposes anyway.)

We have £10,000; for simplicity let us treat this as $10,000, and since AMF saves one life per $3,337.06 on average, we can estimate that they would save approximately 3 lives per $10,000.

From this I would like to calculate how large a change in existential risk our donation would need to produce for donating to AI risk to be more worthwhile than donating to AMF.

Mean lives lost before donation = np (same as previous calculation)

Mean lives lost after intervention = np_2

For this charity to be as worthwhile as AMF it would need to save 3 lives with $10,000.

=> np-np_2 = 3

=> p_2 = (np-3)/n

=> p_2 = 0.0499999996

So the difference between p and p_2 is: p- p_2 = 4 x 10^-10.

This is an incredibly small number, so it seems extremely feasible that we could make this much difference even with something as simple as raising awareness of existential risks. The case becomes even stronger if we consider animal suffering comparable to human suffering: there are an estimated 1.04311 × 10^13 wild animals on Earth (http://reducing-suffering.org/how-many-wild-animals-are-there/), excluding insects and counting only fish, reptiles, mammals and birds (although there is an argument that preventing future animal deaths may not be positively valued, given the extent of suffering in the wild). I have also used a low estimate for the human population, since the figure should really be a mean of the population between now and 2100, which is roughly 10 billion.

One shortcoming is that humans could be in just as much trouble from some other existential risk anyway, in which case the benefit would be wasted; but the same applies to any other charity we might choose. So, in conclusion, if we can justify that our £10,000 would make a 4 × 10^-10 difference to AI existential risk, then AI risk is at least better than AMF; and since AMF is a rather high benchmark, I would propose it would be a front runner compared to most everything else as well.
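For completeness, here is a minimal sketch of the break-even arithmetic above, including the animal-inclusive variant; the variable names are my own, and all figures are the ones quoted in the text.

```python
# Break-even sketch: how much must a $10,000 donation reduce extinction
# risk (p - p_2) to match AMF's estimated lives saved per $10,000?
donation = 10_000                   # treating our £10,000 as roughly $10,000
cost_per_life_amf = 3_337.06        # Business Insider figure quoted above
amf_lives = donation / cost_per_life_amf          # ~3 lives

n_humans = 7_500_000_000            # n, human population
p = 0.05                            # extinction risk before 2100

# n * (p - p_2) = amf_lives  =>  required reduction p - p_2 = amf_lives / n
required_reduction = amf_lives / n_humans
p_2 = p - required_reduction
print(f"AMF lives per $10,000:   {amf_lives:.2f}")           # ~3.00
print(f"Required risk reduction: {required_reduction:.1e}")  # ~4.0e-10
print(f"p_2 after intervention:  {p_2:.10f}")                # 0.0499999996

# Extension: counting wild vertebrates (fish, reptiles, mammals, birds)
# as morally comparable lowers the required reduction even further.
n_animals = 1.04311e13
print(f"Including animals: {amf_lives / (n_humans + n_animals):.1e}")  # ~2.9e-13
```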

Sources used in calculation

Number of animals - http://reducing-suffering.org/how-many-wild-animals-are-there/

Human population (present and future): http://www.worldometers.info/world-population/

Table of existential risk probabilities: https://en.wikipedia.org/wiki/Global_catastrophic_risk

GiveWell top charity, dollars per life saved: http://uk.businessinsider.com/the-worlds-best-charity-can-save-a-life-for-333706-and-thats-a-steal-2015-7?r=US&IR=T

 

Is it worthwhile for us to look further into donating to AI risk prevention?

In conclusion, my answer to the main question is yes. There is a good chance that AI risk prevention is the most cost-effective focus area for saving lives, whether or not we count future human lives. The probability of advanced AI arriving is far from negligible, and the risk of it turning catastrophic is taken seriously by many leading researchers, so it scores highly on the importance criterion. On top of that, because it is such a recent area of concern, it is also highly neglected. The main problem is tractability: we cannot ascertain how much we could actually change the probability of existential risk even if we did donate. That is something I intend to estimate more accurately in a future update. As it stands, though, AI risk prevention looks very promising.

What would change my mind?

My mind would be changed by doing calculations into how much we may be able to affect the probability of existential risk from AI and finding that the effect is so small that it is not cost-effective compared to our other considered focus areas.

My mind would also be changed if I found that there is not enough evidence that the candidate organisations would be able to make good use of our £10,000, or if I developed any other doubts about the AI risk organisations.

 

Sources

https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

http://www.openphilanthropy.org/research/cause-reports/ai-risk#footnote41_nihfcto

https://www.technologyreview.com/s/602410/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/

http://aiimpacts.org/ai-timeline-surveys/

http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines#sourcesPTAI

https://www.fhi.ox.ac.uk/the-timing-of-labour-aimed-at-reducing-existential-risk/