By Tom Sittler, Director. In order to discuss this post, please visit its counterpart on the EA Forum.
On February 19, we reached what we call version 0. Version 0 was a self-imposed deadline, five weeks into the project, for producing a minimum viable product, i.e. the name of a grantee and some justification for it. One way we thought of it was to pretend that the whole project was only five weeks long. (That’s not quite right, since for the final decision our aim is to reach a consensus, while for version 0 we did not attempt this and simply produced an ordinal ranking of potential grantees.)
On February 15, I asked everyone to submit a ranking of candidate grantees for version -1. Any grantee name could be submitted. Naturally, not every team member would rank every grantee. Our voting rule, the Schulze method (a Condorcet extension), takes unranked candidates to mean that the voter (i) strictly prefers all ranked to all unranked candidates, and (ii) is indifferent among all unranked candidates.
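For readers unfamiliar with the rule, the treatment of unranked candidates can be made concrete in code. The sketch below is a minimal Python implementation of the Schulze method under that convention; it is an illustration of the idea, not the script we actually used to tally votes, and the candidate names in the usage note are invented.

```python
def schulze_ranking(ballots, candidates):
    """Rank candidates with the Schulze method.

    Each ballot is a list of candidates from most to least preferred.
    Candidates missing from a ballot are treated as tied with each
    other and ranked strictly below every candidate the voter listed.
    """
    # d[x][y]: number of voters strictly preferring x to y
    d = {x: {y: 0 for y in candidates} for x in candidates}
    for ballot in ballots:
        rank = {c: i for i, c in enumerate(ballot)}
        unranked = len(ballot)  # all unranked candidates share this rank
        for x in candidates:
            for y in candidates:
                if x != y and rank.get(x, unranked) < rank.get(y, unranked):
                    d[x][y] += 1

    # p[x][y]: strength of the strongest path from x to y
    # (initialised with direct pairwise victories only)
    p = {x: {y: d[x][y] if d[x][y] > d[y][x] else 0 for y in candidates}
         for x in candidates}
    # Floyd-Warshall-style widening: allow paths through intermediate i
    for i in candidates:
        for j in candidates:
            if j == i:
                continue
            for k in candidates:
                if k in (i, j):
                    continue
                p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))

    # Score each candidate by how many rivals it beats via strongest paths,
    # then sort; tied scores produce the ties discussed below.
    wins = {x: sum(1 for y in candidates if x != y and p[x][y] > p[y][x])
            for x in candidates}
    return sorted(candidates, key=lambda c: -wins[c])
```

For example, `schulze_ranking([["AMF"], ["AMF"], ["MIRI", "GFI"]], ["AMF", "MIRI", "GFI"])` ranks AMF first: the third voter’s omission of AMF counts as a strict preference for MIRI and GFI over it, but the first two ballots outweigh that.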
Along with the version -1 ranking, we each produced a document defending our (current) top ranked grantee, and saying what would change our minds. We call these “current view” documents.
We then had four days until version 0. The plan was to spend these reading and thinking about each other’s views, each of us attempting to make updates towards the truth and helping others to do so; in particular, I hoped that the “what would change my mind” section of our current views would help suggest ways to update.
In fact, no team member changed their view between version -1 and version 0 (although some team members changed their view subsequently). This was disappointing, and turned into a major learning point for me.
The bottom line for version 0: the Against Malaria Foundation, the Machine Intelligence Research Institute, and the Good Food Institute were tied for first place. The large number of ties is a result of the small number of voters and of our voting rule. You can see the full results here; they give you each team member’s full ranking, and a global ranking under various Condorcet voting rules. See also the regularly updated rankings spreadsheet here.
If we exclude the rankings from the three team members who did not submit a current view document, the result is a tie between the Machine Intelligence Research Institute and StrongMinds (this can be computed easily using the procedure described here).
Each research fellow has outlined their reasoning for choosing a certain top charity in separate blog posts:
Konstantin Sietzy, StrongMinds (Konstantin’s top choice was originally StrongMinds. For version 0 he changed his ranking to having MIRI on top, but shortly after that changed his mind back to StrongMinds.)
Lessons learned from version 0
Version 0 was an important step, and I’m glad I decided on this self-imposed deadline at the start of the project. Forcing team members to produce an explicit ranking was good. Hiding behind vague pronouncements was no longer an option. Instead, team members had to actually develop a view of their own.
Taking a more direct stab at the final decision, however, revealed some challenges.
Problems with the epistemic atmosphere
Perceptions of authority vs introspection
Version 0 produced some evidence of a tendency within the group to defer to perceived authority rather than to consult one’s own beliefs. What especially alerted me to this was the fact that MIRI and GFI ranked so highly despite little previous discussion of them in full-team meetings.
This is a difficult problem to solve. Perhaps people do not feel that the Oxford Prioritisation Project is a sufficiently safe environment to develop or express their own views. Perhaps they anchored on the first few proposed rankings, which were those of the most confident team members.
Things I plan to try in order to solve this problem include:
As director, presenting the most compelling reasons against my current view and in favour of another
Giving positive reinforcement when someone challenges one of the more confident members of the group
Challenging assertions that appear to simply defer to perceived authority while claiming to be the result of introspection
There is a bias towards discussing fun topics
Sometimes, members of the team, myself included, found ourselves discussing topics that are entertaining, or that serve to show off our knowledge, rather than ones that help us prioritise between grantees. At times, even though we were ostensibly discussing version 0, we talked about topics that had no chance of affecting our respective rankings.
On the positive side, we usually rectify this when it’s explicitly pointed out.
We are reluctant to give true reasons for our beliefs
Our “current view” documents include a section called “what would change my mind”. I encourage people to be as specific as possible by saying precisely which (operationalised) new pieces of evidence would cause them to make which changes in their ranking.
Team members have struggled to find “I would change my mind if”-statements that actually reflect their beliefs and are truly useful for resolving disagreement. (In other words, it’s difficult to find and communicate the true cruxes of one’s view.)
The following examples of the problem don’t use real names and each character is a mixture of different traits I have witnessed on the team.
Alice says she’d change her mind about AwesomeCharity (her current top choice) if she found some randomised evaluations showing that their intervention is not as cost-effective as we thought, and other interventions in the same area as AwesomeCharity are better. But this is not the crux of Alice’s view. Alice finds it difficult to say which hypothetical pieces of evidence would change her view more radically, to a grantee working in a completely different area.
Carol mentions she’d change her mind if she found evidence “X”. Someone asks: “What would it look like, concretely, for you to get up one day and find X?” The subsequent discussion reveals that Carol already has good reasons to expect that X will never materialise.
It feels as though we are not exploring the space of possible grantees systematically. Anyone on the Oxford Prioritisation Project can submit a promising potential grantee. But rather than being efficient, I suspect this process may have led to some path-dependency. Random factors early in the project determined which organisations were first proposed, and the rest of the project so far has been, to an extent, dependent on these arbitrary initial conditions. There are likely to be grantees we are not considering, for no good reason.
Does nobody have beliefs?
My overall sense from the above is that people don’t have beliefs. What I mean by that is not that people will give you a blank stare if you ask them what their beliefs are. They will generate something to say. What I mean is that the process that generates the verbal statement is not one of looking at one’s models of the world. It’s something else, perhaps a combination of anchoring, deferring to authority, asking one’s System 1 which answer would most raise one’s status within the group, etc.
What might be going on here? I suspect that most of us don’t actually have models of the world. We don’t have the beliefs about empirical facts, causal chains, and counterfactuals, that would constitute a model. Or perhaps we do have some models, but these are so unsophisticated that we are embarrassed to reveal them.
Having models is not binary, of course. One can have models of different levels of sophistication and predictive power, and the extent to which one consciously consults one’s implicit models also varies.
Trying to improve
I looked at each team member’s ranking, and asked them to compare their top choice (X) with the highest-ranked organisation in their ranking from a different focus area than their top choice (Y). For example, if Alice’s ranking is:
Animal Charity Evaluators
Good Food Institute
GiveDirectly
I would ask Alice to compare Animal Charity Evaluators (X) to GiveDirectly (Y). The goal of this aspect of the exercise was to force us to prioritise across focus areas.
The exact questions I asked were:
You ranked X above Y. What would have made you rank Y above X?
What made you in fact rank X above Y?
You have a credence p that X is better than Y, and a credence 1-p that Y is better than X. What is p? In other words, how confident are you?
The catch, however, is that everyone had to answer these questions without accessing the internet or their own past writings. I suspect that in the project so far, we have not been developing our own models in part because we simply took the most convincing-looking answer we could find by searching through what other people have written. This aspect of the exercise was a mechanism to force us to introspect, to look at our own beliefs/models.
My hope was that this introspection exercise might jar some of us into realising we don’t in fact have beliefs to the extent that we thought we did. This could be the first step towards building models.
Building quantitative models
The next step was to build some models. By models, I do not necessarily mean quantitative models. A model is anything that allows you to make predictions about the world. However, the easiest way to force oneself to make one’s models unambiguous is to use numbers. Mathematics doesn’t allow ambiguity.
Therefore, we repeated the “no lookups” exercise for model-building. I asked each team member to choose a metric measuring the impact of their top ranked organisation, and then to build a very simple model estimating this metric, without using the internet or their previous writings.
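To make the exercise concrete, here is an example of the kind of very simple model I had in mind: a back-of-the-envelope estimate of cost per life saved for a hypothetical bednet charity. Every number below is a placeholder invented for illustration, not a real figure for any organisation or any team member’s actual model.

```python
# Illustrative back-of-the-envelope model: cost per life saved for a
# hypothetical bednet charity. ALL NUMBERS ARE MADE-UP PLACEHOLDERS.

cost_per_net = 5.0                    # USD per net delivered (assumed)
people_protected_per_net = 1.8        # average sleepers per net (assumed)
annual_mortality_per_person = 0.001   # baseline malaria deaths/person/year (assumed)
relative_risk_reduction = 0.5         # assumed effect of sleeping under a net
years_of_protection = 2.0             # assumed useful lifetime of a net

# Expected deaths averted by one net over its lifetime
deaths_averted_per_net = (people_protected_per_net
                          * annual_mortality_per_person
                          * relative_risk_reduction
                          * years_of_protection)

cost_per_life_saved = cost_per_net / deaths_averted_per_net
print(f"Cost per life saved: ${cost_per_life_saved:,.0f}")
```

Even a toy model like this forces each assumption into the open, which is exactly what working without the internet or past writings was meant to achieve: the numbers one writes down are one’s own beliefs, not someone else’s.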
At the end of this session, we discussed our models, and talked about the experience of working purely from what we have stored in our memory. For next time, we will be iterating on the two documents we produced (the introspection exercise and the simple quantitative model). We’ll publish the results on the blog.