Effective Altruism is an Ideology, not (just) a Question

Introduction

In a widely-cited article on the EA forum, Helen Toner argues that effective altruism is a question, not an ideology. Here is her core argument:

What is the definition of Effective Altruism? What claims does it make? What do you have to believe or do, to be an Effective Altruist?

I don’t think that any of these questions make sense.

It’s not surprising that we ask them: if you asked those questions about feminism or secularism, Islamism or libertarianism, the answers you would get would be relevant and illuminating. Different proponents of the same movement might give you slightly different answers, but synthesising the answers of several people would give you a pretty good feeling for the core of the movement.

But each of these movements is answering a question. Should men and women be equal? (Yes.) What role should the church play in governance? (None.) What kind of government should we have? (One based on Islamic law.) How big a role should government play in people’s private lives? (A small one.)

Effective Altruism isn’t like this. Effective Altruism is asking a question, something like:

“How can I do the most good, with the resources available to me?”

In this essay I will argue that her view of effective altruism as a question and not an ideology is incorrect. In particular, I will argue that effective altruism is an ideology, meaning that it has a particular (if somewhat vaguely defined) set of core principles and beliefs, and associated ways of viewing the world and interpreting evidence. After first explaining what I mean by ideology, I proceed to discuss the ways in which effective altruists typically express their ideology, including by privileging certain questions over others, applying particular theoretical frameworks to answer these questions, and privileging particular answers and viewpoints over others. I should emphasise at the outset that my purpose in this article is not to disparage effective altruism, but to try to strengthen the movement by helping EAs to better understand the actual intellectual underpinnings of the movement.

What is an ideology?

The first point I want to explain is what I mean when I talk about an ‘ideology’. Basically, an ideology is a constellation of beliefs and perspectives that shape the way adherents of that ideology view the world. To flesh this out a bit, I will present two examples of ideologies: feminism and libertarianism. Obviously these will be simplified since there is considerable heterogeneity within any ideology, and there are always disputes about who counts as a ‘true’ adherent of any ideology. Nevertheless, I think these quick sketches are broadly accurate and helpful for illustrating what I am talking about when I use the word ‘ideology’.

First consider feminism. Feminists typically begin with the premise that the social world is structured in such a manner that men as a group systematically oppress women as a group. There is a richly structured theory about how this works and how this interacts with different social institutions, including the family, the economy, the justice system, education, health care, and so on. In investigating any area, feminists typically focus on gendered power structures and how they shape social outcomes. When something happens, feminists ask ‘what effect does this have on the status and place of women in society?’ Given these perspectives, feminists typically are uninterested in and highly sceptical of any accounts of social differences between men and women based on biological differences, or attempts to rationalise differences on the basis of social stability or cohesion. This way of looking at things, this focus on particular issues at the expense of others, and this set of underlying assumptions together constitute the ideology of feminism.

Second consider libertarianism. Libertarians typically begin with the idea that individuals are fundamentally free and equal, but that governments throughout the world systematically step beyond their legitimate role of protecting individual freedoms by restricting those freedoms and violating individual rights. In analysing any situation, libertarians focus on how the actions of governments limit the free choices of individuals. Libertarians have extensive accounts as to how this occurs through taxation, government welfare programs, monetary and fiscal policy, the criminal justice system, state-sponsored education, the military industrial complex, and so on. When something happens, libertarians ask ‘what effect does this have on individual rights and freedoms?’ Given these perspectives, libertarians typically are uninterested in and highly sceptical of any attempts to justify state intervention on the basis of increasing efficiency, increasing equality, or improving social cohesion. This way of looking at things, this focus on particular issues at the expense of others, and this set of underlying assumptions together constitute the ideology of libertarianism.

Given the foregoing, here I summarise some of the key aspects of an ideology:

  1. Some questions are privileged over others.
  2. There are particular theoretical frameworks for answering questions and analysing situations.
  3. As a result of 1 and 2, certain viewpoints and answers to questions are privileged, while others are neglected as being uninteresting or implausible.

With this framework in mind of what an ideology is, I now want to apply this to the case of effective altruism. In doing so, I will consider each of these three aspects of an ideology in turn, and see how they relate to effective altruism.

Some questions are privileged over others

Effective altruism, according to Toner (and many others), asks a question something like ‘How can I do the most good, with the resources available to me?’. I agree that EA does indeed ask this question. However, it doesn’t follow that EA isn’t an ideology since, as we have just seen, ideologies privilege some questions over others. In this case we can ask: what other similar questions could effective altruism ask? Here are a few that come to mind:

  • What moral duties do we have towards people in absolute poverty, animals in factory farms, or future generations?
  • What would a virtuous person do to help those in absolute poverty, animals in factory farms, or future generations?
  • What oppressive social systems are responsible for the most suffering in the world, and what can be done to dismantle them?
  • How should our social and political institutions be structured so as to properly represent the interests of all persons, or all sentient creatures?

I’ve written each with a different ethical theory in mind. In order these are: deontology, virtue ethics, Marxist/postcolonial/other critical theories, and contractarian ethics. While some readers may phrase these questions somewhat differently, my point is simply to emphasise that the question you ask depends upon your ideology.

Some EAs may be tempted to respond that all my examples are just different ways, or more specific ways, of asking the EA question ‘how can we do the most good’, but I think this is simply wrong. The EA question is the sort of question that a utilitarian would ask, and presupposes certain assumptions that are not shared by other ethical perspectives. These assumptions include: that there is (in principle) some way of comparing the value of different causes, that it is of central importance to maximise the positive consequences of our actions, and that historical connections between us and those we might try to help are not of critical moral relevance in determining how to act. EAs asking this question need not explicitly believe all these assumptions, but I argue that in asking the EA question instead of other questions they could ask, they are tacitly accepting these assumptions. To assert that these are beliefs shared by all other ideological frameworks is simply to ignore the differences between different ethical theories and the worldviews associated with them.

Particular theoretical frameworks are applied

In addition to the questions they ask, effective altruists tend to have a very particular approach to answering these questions. In particular, they tend to rely almost exclusively on experimental evidence, mathematical modelling, or highly abstract philosophical arguments. Other theoretical frameworks are generally not taken very seriously or simply ignored. Theoretical approaches that EAs tend to ignore include:

  • Sociological theory: potentially relevant to understanding the causes of global poverty, how group dynamics operate, and how social change occurs.
  • Ethnography: potentially highly useful in understanding the causes of poverty, the efficacy of interventions, how people make dietary choices regarding meat eating, the development of cultural norms in government or research organisations surrounding the safety of new technologies, and other such questions, yet I have never heard of an EA organisation conducting this sort of analysis.
  • Phenomenology and existentialism: potentially relevant to determining the value of different types of life and what sort of society we should focus on creating.
  • Historical case studies: there is some use of these in the study of existential risk, mostly relating to nuclear war, but otherwise this method is largely ignored as a potential source of information about social movements, improving society, and assessing catastrophic risks.
  • Regression analysis: potentially highly useful for analysing effective causes in global development, methods of political reform, or even the ability to influence AI or nuclear policy formation, but largely neglected in favour of either experiments or abstract theorising.

If readers disagree with my analysis, I would invite them to investigate the work published on EA websites, particularly research organisations like the Future of Humanity Institute and the Global Priorities Institute (among many others), and see what sorts of methodologies they utilise. Regression analysis and historical case studies are relatively rare, and the other three techniques I mention are virtually unheard of. This represents a very particular set of methodological choices about how to best go about answering the core EA question of how to do the most good.

Note that I am not taking a position on whether it is correct to privilege the types of evidence or methodologies that EA typically does. Rather, my point is simply that effective altruists seem to have very strong norms about what sorts of analysis are worth doing, despite the fact that relatively little time is spent in the community discussing these issues. GiveWell does have a short discussion of their principles for assessing evidence, and there is a short section in the appendix of the GPI research agenda about harnessing and combining evidence, but overall the amount of time spent discussing these issues in the EA community is very small. I therefore contend that these methodological choices are primarily the result of ideological preconceptions about how to go about answering questions, and not of an extensive analysis of the pros and cons of different techniques.

Certain viewpoints and answers are privileged

Ostensibly, effective altruism seeks to answer the question ‘how to do the most good’ in a rigorous but open-minded way, without ruling out any possibilities at the outset or making assumptions about what is effective without proper investigation. It seems to me, however, that this is simply not an accurate description of how the movement actually investigates causes. In practice, the movement seems heavily focused on the development and impacts of emerging technologies. Though not so pertinent in the case of global poverty, this focus is somewhat applicable in the case of animal welfare, given the increasing attention to the development of in vitro meat and plant-based meat substitutes. This technological focus is most evident in the case of far future causes, since all of the main far future cause areas focused on by 80,000 hours and other key organisations (nuclear weapons, artificial intelligence, biosecurity, and nanotechnology) relate to new and emerging technologies. EA discussions also commonly feature speculation about the effects that anti-aging treatments, artificial intelligence, space travel, nanotechnology, and other speculative technologies are likely to have on human society in the long-term future.

By itself the fact that EAs are highly focused on new technologies doesn’t prove that they privilege certain viewpoints and answers over others – maybe a wide range of potential cause areas have been considered, and many of the most promising causes just happen to relate to emerging technologies. However, from my perspective this does not appear to be the case. As evidence for this view, I will present as an illustration the common EA argument for focusing on AI safety, and then show that much the same argument could also be used to justify work on several other cause areas that have attracted essentially no attention from the EA community.

We can summarise the EA case for working on AI safety as follows, based on articles such as those from 80,000 hours and CEA (note this is an argument sketch and not a fully-fledged syllogism):

  • Most AI experts believe that AI with superhuman intelligence is certainly possible, and has a nontrivial probability of arriving within the next few decades.
  • Many experts who have considered the problem have advanced plausible arguments for thinking that superhuman AI has the potential for highly negative outcomes (potentially even human extinction), but there are current actions we can take to reduce these risks.
  • Work on reducing the risks associated with superhuman AI is highly neglected.
  • Therefore, the expected impact of working on reducing AI risks is very high.

The three key aspects of this argument are expert belief in the plausibility of the problem, the very large impact of the problem if it does occur, and the fact that the problem is substantially neglected. My argument is that we can adapt this argument to make parallel arguments for other cause areas. I shall present three: overthrowing global capitalism, philosophy of religion, and resource depletion.

Overthrowing global capitalism

  • Many experts on politics and sociology believe that the institutions of global capitalism are responsible for extremely large amounts of suffering, oppression, and exploitation throughout the world.
  • Although there is much work criticising capitalism, work on devising and implementing practical alternatives to global capitalism is highly neglected.
  • Therefore, the expected impact of working on devising and implementing alternatives to global capitalism is very high.

Philosophy of religion

  • A sizeable minority of philosophers believe in the existence of God, and there are at least some very intelligent and educated philosophers who are adherents of a wide range of different religions.
  • According to many religions, humans who do not adopt the correct beliefs and/or practices will be destined to an eternity (or at least a very long period) of suffering in this life or the next.
  • Although religious institutions have extensive resources, the amount of time and money dedicated to systematically analysing the evidence and arguments for and against different religious traditions is extremely small.
  • Therefore, the expected impact of working on investigating the evidence and arguments for the various religions is very high.

Resource depletion

  • Many scientists have expressed serious concern about the likely disastrous effects of population growth, ecological degradation, and resource depletion on the wellbeing of future generations and even the sustainability of human civilization as a whole.
  • Very little work has been conducted to determine how best to respond to resource depletion or degradation of the ecosystem so as to ensure that Earth remains inhabitable and human civilization is sustainable over the very long term.
  • Therefore, the expected impact of working on investigating long-term responses to resource depletion and ecological collapse is very high.

Readers may dispute the precise way I have formulated each of these arguments or exactly how closely they all parallel the case for AI safety; however, I hope they will see the basic point I am driving at. Specifically, if effective altruists are focused on AI safety essentially because of expert belief in plausibility, the large scope of the problem, and the neglectedness of the issue, a similar case can be made with respect to working on overthrowing global capitalism, conducting research to determine which religious belief (if any) is most likely to be correct, and efforts to develop and implement responses to resource depletion and ecological collapse.

One response that I foresee is that none of these causes are really neglected because there are plenty of people focused on overthrowing capitalism, researching religion, and working on environmentalist causes, while very few people work on AI safety. But remember, outsiders would likely say that AI safety is not really neglected because billions of dollars are invested into AI research by academics and tech companies around the world. The point is that there is a difference between working in a general area and working on the specific subset of that area that is highest impact and most neglected. In much the same way as AI safety research is neglected even if AI research more generally is not, likewise in the parallel cases I present, I argue that serious evidence-based research into the specific questions I present is highly neglected, even if the broader areas are not.

Potential alternative causes are neglected

I suspect that at this point many of my readers will be mentally marshalling additional arguments as to why AI safety research is in fact a more worthy cause than the other three I have mentioned. Doubtless there are many such arguments that one could present, and probably I could devise counterarguments to at least some of them – and so the debate would progress. My point is not that the candidate causes I have presented actually are good causes for EAs to work on, or that there aren’t any good reasons why AI safety (along with other emerging technologies) is a better cause. My point is rather that these reasons are not generally discussed by EAs. That is, the arguments generally presented for focusing on AI safety as a cause area do not uniquely pick out AI safety (and other emerging technologies like nanotechnology or bioengineered pathogens), but EAs making the case for AI safety essentially never notice this, because their ideological preconceptions bias them towards focusing on new technologies, and away from the sorts of causes I mention here. Of course EAs do go into much more detail about the risks of new technologies than I have here, but the core argument for focusing on AI safety in the first place is not applied to other potential cause areas to see whether (as I think it does) it also applies to those other causes.

Furthermore, it is not as if effective altruists have carefully considered these possible cause areas and come to the reasoned conclusion that they are not the highest priorities. Rather, they have simply not been considered. They have not even been on the radar, or at best barely on the radar. For example, I searched for ‘resource depletion’ on the EA forums and found nothing. I searched for ‘religion’ and found only the EA demographics survey and an article about whether EA and religious organisations can cooperate. A search for ‘socialism’ yielded one article discussing what is meant by ‘systemic change’, and one article (with no comments and only three upvotes) explicitly outlining an effective altruist plan for socialism.

This lack of interest in other cause areas can also be found in the major EA organisations. For example, the stated objective of the Global Priorities Institute is:

To conduct foundational research that informs the decision-making of individuals and institutions seeking to do as much good as possible. We prioritise topics which are important, neglected, and tractable, and use the tools of multiple disciplines, especially philosophy and economics, to explore the issues at stake.

On the face of it this aim is consistent with all three of the suggested alternative cause areas I outlined in the previous section. Yet the GPI research agenda focuses almost entirely on technical issues in philosophy and economics pertaining to the long-termism paradigm. While AI safety is not discussed extensively, it is mentioned a number of times, and much of the research agenda appears to be built around questions in philosophy and economics that the long-termism paradigm gives rise to. Religion and socialism are not mentioned at all in this document, while resource depletion is only mentioned indirectly by two references in the appendix under ‘indices involving environmental capital’.

Similarly, the Future of Humanity Institute focuses on AI safety, AI governance, and biotechnology. Strangely, it also pursues some work on highly obscure topics such as the aestivation solution to the Fermi paradox and the probability of Earth being destroyed by microscopic black holes or metastable vacuum states. At the same time, it has published nothing about any of the potential new problem areas I have mentioned.

Under their problem profiles, 80,000 hours does not mention having investigated anything relating to religion or overthrowing global capitalism (or even substantially reforming global economic institutions). They do link to an article by Robert Wiblin discussing why EAs do not work on resource scarcity; however, this is not a careful analysis or investigation, just his general views on the topic. Although I agree with some of the arguments he makes, the depth of analysis is very shallow relative to the potential risks and the concern raised about this issue by many scientists and writers over the decades. Indeed, I would argue that, as a rebuttal of resource depletion as a cause area, this article has about as much substance as the typical article dismissing AI fears as exaggerated and hysterical.

In yet another example, the Foundational Research Institute states that:

Our mission is to identify cooperative and effective strategies to reduce involuntary suffering. We believe that in a complex world where the long-run consequences of our actions are highly uncertain, such an undertaking requires foundational research. Currently, our research focuses on reducing risks of dystopian futures in the context of emerging technologies. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization to the greatest extent possible.

Hence, even though it seems that in principle socialists, Buddhists, and ecological activists (among others) are highly concerned about reducing the suffering of humans and animals, FRI ignores the topics that these groups would tend to focus on, and instead focuses its attention on the risks of emerging technologies. As in the case of FHI, they also seem to find room for some topics of highly dubious relevance to any of EA’s goals, such as this paper about the potential for correlated actions with civilizations located elsewhere in the multiverse.

Outside of the main organisations, there has been some discussion about socialism as an EA cause, for example on r/EffectiveAltruism and by Jeff Kaufman. I was able to find little else about either of the other two potential cause areas I outline.

Overall, on the basis of the foregoing examples I conclude that the amount of time and energy spent by the EA community investigating the three potential new cause areas that I have discussed is negligible compared to the time and energy spent investigating emerging technologies. This is despite the fact that most of these groups were not ostensibly established with the express purpose of reducing the harms of emerging technologies, but have simply chosen this cause area over other possibilities that would also potentially fulfil their broad objectives. I have not found any evidence that this choice is the result of early investigations demonstrating that emerging technologies are far superior to the cause areas I mention. Instead, it appears to be mostly the result of disinterest in the sorts of topics I identify, and a much greater ex ante interest in emerging technologies over other causes. I present this as evidence that the primary reason effective altruism focuses so extensively on emerging technologies over other speculative but potentially high impact causes is the privileging of certain viewpoints and answers over others. This, in turn, is the result of the underlying ideological commitments of many effective altruists.

What is EA ideology?

If many effective altruists share a common ideology, then what is the content of this ideology? As with any social movement, this is difficult to specify with any precision and will obviously differ somewhat from person to person and from one organisation to another. That said, on the basis of my research and experiences in the movement, I would suggest the following core tenets of EA ideology:

  1. The natural world is all that exists, or at least all that should be of concern to us when deciding how to act. In particular, most EAs are highly dismissive of religious or other non-naturalistic worldviews, and tend to assume without further discussion that views like dualism, reincarnation, or theism cannot be true. For example, the map of EA concepts lists under ‘important general features of the world’ pages on ‘possibility of an infinite universe’ and ‘the simulation argument’, yet makes no mention of the possibility that anything could exist beyond the natural world. It requires a very particular ideological framework to regard the simulation argument as more important or pressing than non-naturalism.
  2. The correct way to think about moral/ethical questions is through a utilitarian lens in which the focus is on maximising desired outcomes and minimising undesirable ones. We should focus on the effect of our actions on the margin, relative to the most likely counterfactual. There is some discussion of moral uncertainty, but outside of this deontological, virtue ethics, contractarian, and other approaches are rarely applied in philosophical discussion of EA issues. This marginalist, counterfactual, optimisation-based way of thinking is largely borrowed from neoclassical economics, and is not widely employed by many other disciplines or ideological perspectives (e.g. communitarianism).
  3. Rational behaviour is best understood through a Bayesian framework, incorporating key results from game theory, decision theory, and other formal approaches. Many of these concepts appear in the idealised decision making section of the map of EA concepts, and are widely applied in other EA writings (a standard statement of the Bayesian updating rule this tenet centres on is given immediately after this list).
  4. The best way to approach a problem is to think very abstractly about that problem, construct computational or mathematical models of the relevant problem area, and ultimately (if possible) test these models using experiments. The model appears to be that of research in physics, with some influence from analytic philosophy. The methodologies of other disciplines are largely ignored.
  5. The development and introduction of disruptive new technologies is a more fundamental and important driver of long-term change than socio-political reform or institutional change. This is clear from the overwhelming focus on technological change of top EA organisations, including 80,000 hours, the Center for Effective Altruism, the Future of Humanity Institute, the Global Priorities Project, the Future of Life Institute, the Centre for the Study of Existential Risk, and the Machine Intelligence Research Institute.
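To make tenet 3 concrete: the Bayesian framework referred to there centres on updating degrees of belief using Bayes’ theorem. The following is a standard textbook statement of the rule rather than anything specific to EA writings:

\[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} \]

where \(H\) is a hypothesis, \(E\) is newly observed evidence, \(P(H)\) is the prior degree of belief in \(H\), and \(P(H \mid E)\) is the posterior degree of belief after updating on \(E\).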

I’m sure others could describe EA ideology in ways that look quite different to mine, but this is my best guess based on what I have observed. I believe these tenets are generally held by EAs, particularly those working at the major EA organisations, but are not widely discussed or critiqued. That this set of assumptions is fairly specific to EA should be evident if one reads various criticisms of effective altruism from those outside the movement. Although they do not always express their concerns using the same language that I have, it is often clear that the fundamental reason for their disagreement is the rejection of one or more of the five points mentioned above.

Conclusion

My purpose in this article has not been to contend that effective altruists shouldn’t have an ideology, or that the current dominant EA ideology (as I have outlined it) is mistaken. In fact, my view is that we can’t really get anywhere in rational investigation without certain starting assumptions, and these starting assumptions constitute our ideology. It doesn’t follow from this that any ideology is equally justified, but how we adjudicate between different ideological frameworks is beyond the scope of this article.

Instead, all I have tried to do is argue that effective altruists do in fact have an ideology. This ideology leads them to privilege certain questions over others, to apply particular theoretical frameworks to the exclusion of others, and to focus on certain viewpoints and answers while largely ignoring others. I have attempted to substantiate my claims by showing how different ideological frameworks would ask different questions, use different theoretical frameworks, and arrive at different conclusions to those generally found within EA, especially the major EA organisations. In particular, I argued that the typical case for focusing on AI safety can be modified to serve as an argument for a number of other cause areas, all of which have been largely ignored by most EAs.

My view is that effective altruists should acknowledge that the movement as a whole does have an ideology. We should critically analyse this ideology, understand its strengths and weaknesses, and then to the extent to which we think this set of ideological beliefs is correct, defend it against rebuttals and competing ideological perspectives. This is essentially what all other ideologies do – it is how the exchange of ideas works. Effective altruists should engage critically in this ideological discussion, and not pretend they are aloof from it by resorting to the refrain that ‘EA is a question, not an ideology’.

A Case for Ethical Naturalism

Introduction

In this article I will outline a brief case for ethical naturalism, which is the view that morality is real and arises purely from aspects of the natural world. My argument will proceed in three parts. First, I will attempt to provide some conceptual clarity by outlining what we mean when we talk about morality, focusing on what sort of thing morality is and what it would entail if it existed. Second, I will sketch an example of a naturalistic moral theory, specifically a theory of reductive moral naturalism outlined by Peter Railton. Third, I will consider some objections to ethical naturalism: motivational internalism, the triviality objection, and the problem of normativity. I will argue that these objections do not substantively undercut the case for ethical naturalism.

What is morality?

Before we can answer the question ‘does morality exist’, we must first determine what we are even talking about when we say ‘morality’. At its most fundamental, morality is a code of conduct for human behaviour which specifies some actions (and inactions, attitudes, motivations, etc) as appropriate or proper, and others as inappropriate or improper. Behaviours congruent with the moral code are praised, while those incongruent with it are condemned. According to moral universalism, there is one privileged code of conduct which is applicable everywhere, at all times and in all societies (at least among humans; here for simplicity I will leave aside issues of animal and machine ethics). This privileged code of conduct, which we might call the ‘correct morality’, need not involve any very specific norms, but might consist of quite general standards and principles which could then be applied differently in different societies depending on circumstances. So for example the ‘correct moral code’ might specify that it is wrong to kill human beings without a very good reason and without proper due process, but exactly what constitutes a good reason and due process may well depend on the precise social circumstances.

In asking whether morality exists, therefore, we are asking whether there is a single correct code of conduct for human behaviour that is applicable to all human societies, even if those societies were unaware of it or chose to ignore it. Traditionally many have identified this privileged code of conduct with God’s laws or commandments. Ethical naturalism, however, is the view that moral facts are natural facts about the physical world, and not the product of some divine injunction or transcendent cosmic principle. Absent some sort of creator or other privileged supernatural being, what else could give rise to, or could account for the existence of, a privileged code of human conduct? Ethical naturalists are those who believe that it is possible for such a privileged code of conduct to exist purely in the natural world. How could this be the case?

To answer this question we must first observe that there is more to morality than simply being a privileged code of conduct. Though the details vary, there is effectively universal agreement that the privileged code of conduct that is morality necessarily promotes pro-social cooperation of people within a society, proscribes various behaviours that are detrimental to the self and to others, and promotes fairness and equity. Of course there is sharp disagreement about how to understand notions like ‘harm’ and ‘fairness’, but the point here is simply that the universal code of conduct referred to by morality, if such a thing exists, must relate in some central way to reducing harm and promoting fairness and equity. We could imagine other universal codes of conduct, but I argue they would not be moral codes of conduct, since our conception of morality necessarily and intrinsically includes these notions. This constraint is important, because it provides sufficient detail to begin constructing an account as to how morality could exist in a naturalistic universe.

A theory of ethical naturalism

Armed with a basic conception as to what we mean when we talk about morality, we are now ready to outline a theory as to how ethical naturalism can account for the existence of moral facts. Here I will present only one of the many theories that have been developed, that propounded by philosopher Peter Railton. The details of his theory are not the major focus of this article, so I will offer only the briefest outline. The key idea is that moral facts derive from what would maximise the fulfilment of idealised preferences. An idealised preference is not what somebody actually wants, but what they would want themselves to want, if they had access to full information and were perfectly rational. This is important because people can want things that are bad for them (e.g. wanting to smoke). Railton’s account also holds that moral facts refer to what would maximally satisfy the sum of idealised preferences aggregated over all individuals, treating each person equally. Even though the idealised preferences of any given person may be purely selfish, the satisfaction of idealised preferences across all people necessarily incorporates the wellbeing of all persons, and thereby provides a basis for moral facts. Thus, according to Railton’s theory, an action is morally good to the degree to which it contributes to satisfying the sum of everyone’s idealised preferences, treating each person equally.
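As a rough formalisation (the notation here is my own gloss, not Railton’s), the view can be summarised as:

\[ G(a) \;\propto\; \sum_{i=1}^{n} U_i^{*}(a) \]

where \(G(a)\) is the moral goodness of an action \(a\), \(U_i^{*}(a)\) is the degree to which \(a\) satisfies the idealised (fully informed, fully rational) preferences of person \(i\), and the unweighted sum over all \(n\) persons expresses the requirement that each person be treated equally.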

Railton’s account provides a naturalistic theory, because facts about idealised preferences are natural facts – they relate to things in the natural world (specifically idealised human desires about promoting human welfare), as opposed to divine commands, logical abstractions, or metaphysical principles. It is also very clearly a moral theory, since it provides (in outline form) a code of conduct for human behaviour relating in its very essence to human welfare and action in social contexts. Though I lack the space to make the argument here, I also believe that Railton’s theory of morality as maximising the fulfilment of idealised preferences provides an account of the universal code of social conduct that best fits our antecedent notion of the sort of code we are looking for. That is, compared to any rival accounts, Railton’s best fits what we mean when we think and talk about morality. As such, just as in science we accept the theory that best accounts for and explains the available data, so too should we regard Railton’s theory as privileged over others, and as at least approximately describing the ‘correct morality’.

Parfit argues against idealised preference theories of morality on the basis that they provide no constraints on what people’s idealised preferences could be. He uses the example of an anorexic girl who, even after full reflection and access to all relevant information, could conceivably still decide that her idealised preference is to starve herself to death. According to Railton’s theory, this would then be what is non-morally good for her. I personally do not consider this to be a strong objection to idealised preference theories of morality. This is because the idea that anyone who was fully rational and had access to all pertinent information regarding ways of living would still decide that their idealised preference was to starve themselves to death is, in my view, totally absurd. This is precisely why conditions such as anorexia and depression are rightly regarded as mental disorders, involving false beliefs and a variety of cognitive distortions. If we begin with the assumption that even idealised, fully rational, fully informed versions of ourselves would be subject to such defective views, then it does not surprise me that we arrive at absurd conclusions. I see this as stemming from the absurd initial assumption, and not from any particular flaw in Railton’s theory.

Motivational internalism objection

There are three major lines of objection often raised against theories of ethical naturalism. The first such objection comes from adherents of a view called motivational internalism, who argue that motivation is an essential component of morality. According to this view, a belief in some moral fact necessarily involves a motivation to act in accordance with that fact. For example, to believe that it is wrong to eat animals would necessarily entail a motivation not to eat animals. Perhaps that motivation would be overwhelmed by a stronger motivation, but nevertheless it must exist in some form. Anyone who did not possess such a motivation would, under this view, not truly believe that it was wrong to eat animals, even though they may profess such a belief.

The reason motivational internalism poses a problem is that it seems hard to fit into a naturalistic worldview. This is because under moral naturalism moral facts are simply natural facts, and it does not seem like natural facts are the sorts of things that necessarily lead to any particular motivation. We can believe all sorts of things about the solar system, the human body, societies, physics, etc, without any necessary motivational state being attached to or following from these beliefs. The idea that beliefs are never sufficient for producing motivations to act is known as the Humean theory of motivation, after its main populariser David Hume. According to Hume, for a motivation to exist we must have both some sort of antecedent desire, plus a belief that acting in a certain way would satisfy that desire. Mere belief itself is never sufficient to produce motivation. If we accept both motivational internalism and the Humean theory of motivation, it follows that when people come to believe in a moral fact, that belief always produces a relevant desire to act. Many philosophers regard this as hard to fit into a naturalistic worldview, as there just don’t seem to be any facts about the natural world that necessarily produce desires in this way.

My response to this issue is to simply reject motivational internalism as being too strong and too demanding a view. After all, why should we think that moral beliefs necessarily imply or generate a corresponding motivation? This does not appear to be the case empirically, since there appears to be strong evidence for the existence of psychopaths who know about morality but remain unmotivated to act in accordance with it. Furthermore, the principle also seems to fail in other fields of enquiry. We can, for instance, imagine recalcitrant persons who agree that an argument is sound and fail to identify any logical mistake in it, but nevertheless still have no motivation to accept the conclusion of the argument as true (indeed, many of us have likely participated in discussions where this behaviour has manifested!) Given these considerations, I do not see any strong reason to accept motivational internalism, and as such the failure of natural moral facts to necessarily supply any motivation to act in accordance with them does nothing to undermine the case for ethical naturalism.

The triviality objection

Parfit raises a second objection to ethical naturalism, which he calls the ‘triviality objection’. According to this argument, it is impossible for ethical naturalists to simultaneously argue that ethical facts are natural facts, and also maintain that this is a substantive, informative claim, more than a mere tautology. The example that he uses considers two properties: the natural property ‘maximises happiness’, and the moral property ‘is what we morally ought to do’. Ethical naturalists argue that a property something like ‘maximises happiness’ is the same as the property ‘is what we morally ought to do’. Yet according to Parfit, these two properties cannot simply be identical, as otherwise there would not be two properties but one, and we would essentially just be saying ‘property A is property A’, which is an uninformative tautology. To be a substantive claim, ethical naturalists must instead be saying ‘property A is property B’, but Parfit doesn’t think this makes any sense, since there must be something to distinguish the two properties for them to be different. He gives the example of water and H2O, arguing that although water and H2O are the same substance, the property ‘is comprised of two hydrogen atoms and one oxygen atom’ is not the same as the property ‘is a clear substance that falls from the sky and we need to drink to survive’. These two properties might be satisfied by the same stuff, but the properties themselves are not the same. Likewise, Parfit argues that states of affairs can possess both natural and moral properties, but these properties will always and necessarily be distinct, contrary to what the ethical naturalist claims. Parfit takes this to be an argument for irreducible moral properties, which cannot be reduced to or equated with natural properties.

The flaw in Parfit’s argument, in my view, is that he does not articulate what it means for two apparently distinct properties to be the same. We can address the apparent paradox by appealing to the distinction between sense (the internal psychological meaning of a phrase) and reference (the thing in the real world picked out by a phrase). The classic example of this is that of the ‘morning star’ (a star that is visible in the east just before sunrise), and the ‘evening star’ (a star that is visible in the west just after sunset). Although the phrases ‘the morning star’ and ‘the evening star’ do not have the same meaning, in practice they refer to the same thing, namely the planet Venus. We can apply this example in response to Parfit’s concern. The natural property ‘maximises happiness’ does not mean the same thing as the moral property ‘is morally good’. However, it turns out that both properties are ‘the same’, by which I mean that:

  • All relevant states of affairs that have property A also have property B, and vice versa
  • There are facts that account for this coincidence of properties, so it is not simply an accident
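Put more compactly (again in my own notation rather than Parfit’s), the two conditions above say that for every relevant state of affairs \(s\),

\[ A(s) \leftrightarrow B(s), \]

and that this coextension holds non-accidentally, in virtue of some underlying facts, rather than as a mere coincidence.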

I argue that this is what reductive naturalists mean when they say that the property ‘maximises happiness’ is the same thing as the property ‘is morally good’. Understood in this way, there is no paradox about how two properties can be the same and yet different. They are different in that they mean different things (have different senses), but are the same in that they have the same referent (the properties are fulfilled by the same states of affairs and only those states of affairs). The reductive naturalist’s claim that the two properties are the same is therefore substantive, and not trivial. As such, I believe that Parfit’s triviality objection fails.

The problem of normativity

The third objection to ethical naturalism, and probably the most important, is the problem of normativity, which concerns the ‘binding force’ of morality. Philosophers typically understand this to mean that all persons have reasons to act morally, even if they may be unaware of those reasons or choose to reject them. This idea is called moral rationalism: the view that if it is wrong for somebody to do a particular act, then there is always a reason for them not to do that action. We would, for instance, typically accept the claim that smokers have a reason to quit smoking (namely the health benefits), even if they are unaware of those benefits or simply choose to ignore them. Likewise, it is argued that all persons have reasons not to kill, steal, etc, even if they fail to act in accordance with those reasons. As with motivational internalism, many thinkers have regarded normativity as a problem for ethical naturalism, on the grounds that natural facts simply are not the sorts of things that necessarily give rise to any particular reason to act. This is related to J. L. Mackie’s famous ‘argument from queerness’, in which he argued against the existence of moral facts on the basis that they would need to possess some queer property of being intrinsically motivating, or have an ‘ought-to-be-doneness’ about them. This sort of property seems hard to fit into a naturalistic worldview, because natural facts simply describe the way things are in the world. The way things are, however, doesn’t impose any obligations on us, or provide any reasons to act one way or the other. According to this argument, the ethical naturalist is therefore unable to account for the normativity of morality.

The challenge, then, comes down to this: what can be said in answer to the amoralist? This is a person who recognises the existence of facts about idealised human preferences (in line with Railton’s account), and perhaps even agrees that this provides the best account of a code of conduct pertinent to moral issues, but nevertheless demands to know why they have a reason to act in accordance with this code. In my view, the best response to this challenge is to argue that it is a basic, foundational principle that in any domain, rational agents have reasons to act in accordance with the privileged code of conduct (if any) pertinent to that domain. To understand how this answer works, suppose somebody were presented with a mathematical proof which they understood and followed at every stage, but then simply refused to agree that they had any reason to accept the conclusion. We could imagine them retorting that they have no internal desire or motivation to form accurate beliefs about this mathematical question, and therefore rejecting the idea that they have a reason to accept the conclusion of this proof. I would contend, however, that regardless of their particular desires, the person in question still has a reason to accept the conclusion of the proof, because that is consistent with the privileged norms of rational inference governing the pertinent domain (in this case mathematics). To take another example, we would typically say that anyone engaged in a game of chess has a reason to make a move that would help them win the game – even if they did not feel any desire to make the move, and even if they didn’t care about winning the game at all. The fact that the person is engaged in the domain of chess means that there is a privileged code of conduct pertinent to that domain which provides them with some reasons for action, irrespective of their desires or motivations. The recalcitrant person in such cases is unlikely to be persuaded by this argument, but it seems to me that it correctly describes the reason they have for accepting the proof or making the chess move.

I argue that much the same applies to morality. The domain of conduct pertinent to morality is that of living and interacting with other people. Unlike a game of chess or a mathematical proof, there is no real way to avoid engaging in the moral domain. Even if we become a hermit and cut off contact from others, there would still be other people in the world who would come under the remit of the domain of morality (since it applies universally across all people). People may fail to be motivated to act in accordance with what will best promote the wellbeing of themselves and others, but nevertheless they have a reason to act in accordance with this code of conduct, since that is the privileged code of conduct for the domain of living and interacting with other people. This might not be a very satisfying answer to the problem of normativity, but this will be an issue in any domain, since regardless of what reasons have been given for doing something or believing something, I can always still press the question ‘but why?’ Our chain of justifications has to stop somewhere, and I believe it is reasonable to affirm as fundamental the principle that everyone has a domain-specific reason to act in accordance with the privileged code of conduct pertinent to a given domain.

Thus, since everyone has a body that can be healthy or diseased, everyone has a health-related reason to stop smoking even if they don’t care about their health. We would not accept that a person who doesn’t care about dying of lung cancer actually has no reason to quit smoking; rather we would instead say that they are not motivated to act in accordance with this reason. Since (almost) everyone has some sort of money or property, (almost) everyone has a finance-related reason to save and invest their money wisely even if they don’t care about money. Since everyone lives in some sort of society with conventions of how to behave, everyone has an etiquette-related reason to obey their society’s rules of etiquette even if they don’t care about being polite. Likewise, since everyone lives on a planet where their actions potentially affect other people, everyone has a morality-related reason to act morally, even though they may sometimes fail to be motivated by these reasons. This is what provides the basis for the normativity of morality.

Glossary of some key terms

  • Moral realism: there is a privileged, universal code of conduct governing human action, which gives rise to moral facts
  • Reasons internalism: having a reason to do something implies having a motivation to do that thing
  • Humean theory of motivation: beliefs are insufficient for motivation, need antecedent desire too
  • Moral rationalism: if something is morally wrong then there must be a reason not to do it
  • Ethical naturalism: moral facts are natural facts
  • Reasons internalism + Humean theory: having a reason to do something implies a belief and an antecedent desire
  • Reasons internalism + Humean theory + moral rationalism: to believe that something is wrong must necessarily produce or elicit a desire not to do that thing
  • Reasons internalism + Humean theory + moral rationalism + ethical naturalism: there are natural facts, belief in which necessarily produces or elicits a desire to act in accordance with the universal code of conduct which governs human action = Mackie’s ‘categorically prescriptive facts’: facts that provide reasons for action independent of our desires

A Critique of Superintelligence

Introduction

In this article I present a critique of Nick Bostrom’s book Superintelligence. For purposes of brevity I shall not devote much space to summarising Bostrom’s arguments or defining all the terms that he uses. Though I briefly review each key idea before discussing it, I shall also assume that readers have some general idea of Bostrom’s argument, and some of the key terms involved. Also note that to keep this piece focused, I only discuss arguments raised in this book, and not what Bostrom has written elsewhere or others who have addressed similar issues. The structure of this article is as follows. I first offer a summary of what I regard to be the core argument of Bostrom’s book, outlining a series of premises that he defends in various chapters. Following this summary, I commence a general discussion and critique of Bostrom’s concept of ‘intelligence’, arguing that his failure to adopt a single, consistent usage of this concept in his book fatally undermines his core argument. The remaining sections of this article then draw upon this discussion of the concept of intelligence in responding to each of the key premises of Bostrom’s argument. I conclude with a summary of the strengths and weaknesses of Bostrom’s argument.

Summary of Bostrom’s Argument

Throughout much of his book, Bostrom remains quite vague as to exactly what argument he is making, or indeed whether he is making a specific argument at all. In many chapters he presents what are essentially lists of various concepts, categories, or considerations, and then articulates some thoughts about them. Exactly what conclusion we are supposed to draw from his discussion is often not made explicit. Nevertheless, by my reading the book does at least implicitly present a very clear argument, which bears a strong similarity to the sorts of arguments commonly found in the Effective Altruism (EA) movement, in favour of focusing on AI research as a cause area. In order to provide structure for my review, I have therefore constructed an explicit formulation of what I take to be Bostrom’s main argument in his book. I summarise it as follows:

Premise 1: A superintelligence, defined as a system that ‘exceeds the cognitive performance of humans in virtually all domains of interest’, is likely to be developed in the foreseeable future (decades to centuries).

Premise 2: If superintelligence is developed, some superintelligent agent is likely to acquire a decisive strategic advantage, meaning that no terrestrial power or powers would be able to prevent it doing as it pleased.

Premise 3: A superintelligence with a decisive strategic advantage would be likely to capture all or most of the cosmic endowment (the total space and resources within the accessible universe), and put it to use for its own purposes.

Premise 4: A superintelligence which captures the cosmic endowment would likely put this endowment to uses incongruent with our (human) values and desires.

Preliminary conclusion: In the foreseeable future it is likely that a superintelligent agent will be created which will capture the cosmic endowment and put it to uses incongruent with our values. (I call this the AI Doom Scenario).

Premise 5: Pursuit of work on AI safety has a non-trivial chance of noticeably reducing the probability of the AI Doom Scenario occurring.

Premise 6: If pursuit of work on AI safety has at least a non-trivial chance of noticeably reducing the probability of an AI Doom Scenario, then (given the preliminary conclusion above) the expected value of such work is exceptionally high.

Premise 7: It is morally best for the EA community to preferentially direct a large fraction of its marginal resources (including money and talent) to the cause area with highest expected value.

Main conclusion: It is morally best for the EA community to direct a large fraction of its marginal resources to work on AI safety. (I call this the AI Safety Thesis.)

Bostrom discusses the first premise in chapters 1-2, the second premise in chapters 3-6, the third premise in chapters 6-7, the fourth premise in chapters 8-9, and some aspects of the fifth premise in chapters 13-14. The sixth and seventh premises are not really discussed in the book (though some aspects of them are hinted at in chapter 15), but are widely discussed in the EA community and serve as the link between the abstract argumentation and real-world action, and as such I decided also to discuss them here for completeness. Many of these premises could be articulated slightly differently, and perhaps Bostrom would prefer to rephrase them in various ways. Nevertheless I hope that they at least adequately capture the general thrust and key contours of Bostrom’s argument, as well as how it is typically appealed to and articulated within the EA community.

The nature of intelligence

In my view, the biggest problem with Bostrom’s argument in Superintelligence is his failure to devote any substantial space to discussing the nature or definition of intelligence. Indeed, throughout the book I believe Bostrom uses three quite different conceptions of intelligence:

  • Intelligence(1): Intelligence as being able to perform most or all of the cognitive tasks that humans can perform. (See page 22)
  • Intelligence(2): Intelligence as a measurable quantity along a single dimension, which represents some sort of general cognitive efficaciousness. (See pages 70, 76)
  • Intelligence(3): Intelligence as skill at prediction, planning, and means-ends reasoning in general. (See page 107)

While certainly not entirely unrelated, these three conceptions are all quite different from each other. Intelligence(1) is most naturally viewed as a multidimensional construct, since humans exhibit a wide range of cognitive abilities and it is by no means clear that they are all reducible to a single underlying phenomenon that can be meaningfully quantified with one number. It seems much more plausible to say that the range of human cognitive abilities requires many different skills which are sometimes mutually supportive, sometimes largely unrelated, and sometimes mutually inhibitory, in varying ways and to varying degrees. This first conception of intelligence is also explicitly anthropocentric, unlike the other two conceptions which make no reference to human abilities. Intelligence(2) is unidimensional and quantitative, and also extremely abstract, in that it does not refer directly to any particular skills or abilities. It most closely parallels the notion of IQ or other similar operational measures of human intelligence (which Bostrom even mentions in his discussion), in that it is explicitly quantitative and attempts to reduce abstract reasoning abilities to a number along a single dimension. Intelligence(3) is much more specific and grounded than either of the other two, relating only to particular types of abilities. That said, it is not obviously subject to simple quantification along a single dimension as is the case for Intelligence(2), nor is it clear that skill at prediction and planning is what is measured by the quantitative concept of Intelligence(2). Certainly Intelligence(3) and Intelligence(2) cannot be equivalent if Intelligence(2) is even somewhat analogous to IQ, since IQ mostly measures skills at mathematical, spatial, and verbal memory and reasoning, which are quite different from skills at prediction and planning (consider for example the phenomenon of autistic savants). Intelligence(3) is also far narrower in scope than Intelligence(1), corresponding to only one of the many human cognitive abilities.

Repeatedly throughout the book, Bostrom flips between using one or another of these conceptions of intelligence. This is a major weakness for Bostrom’s overall argument, since in order for the argument to be sound it is necessary for a single conception of intelligence to be adopted and applied consistently across all of his premises. In the following paragraphs I outline several of the clearest examples of how Bostrom’s equivocation on the meaning of ‘intelligence’ undermines his argument.

Bostrom argues that once a machine becomes more intelligent than a human, it would far exceed human-level intelligence very rapidly, because one human cognitive ability is that of building and improving AIs, and so any superintelligence would also be better at this task than humans. This means that the superintelligence would be able to improve its own intelligence, thereby further improving its own ability to improve its own intelligence, and so on, the end result being a process of exponentially increasing recursive self-improvement. Although compelling on the surface, this argument relies on switching between the concepts of Intelligence(1) and Intelligence(2). When Bostrom argues that a superintelligence would necessarily be better at improving AIs than humans because AI-building is a cognitive ability, he is appealing to Intelligence(1). However, when he argues that this would result in recursive self-improvement leading to exponential growth in intelligence, he is appealing to Intelligence(2). To see how these two arguments rest on different conceptions of intelligence, note that considering Intelligence(1), it is not at all clear that there is any general, single way to increase this form of intelligence, as Intelligence(1) incorporates a wide range of disparate skills and abilities that may be quite independent of each other. As such, even a superintelligence that was better than humans at improving AIs would not necessarily be able to engage in rapid recursive self-improvement of Intelligence(1), because there may well be no such thing as a single variable or quantity called ‘intelligence’ that is directly associated with AI-improving ability. Rather, there may be a host of associated but distinct abilities and capabilities, each of which needs to be enhanced and adapted in the right way (and in the right relative balance) in order to get better at designing AIs. Only by assuming a unidimensional quantitative conception of Intelligence(2) does it make sense to talk about the rate of improvement of a superintelligence being proportional to its current level of intelligence, which then leads to exponential growth. Bostrom therefore faces a dilemma. If intelligence is a mix of a wide range of distinct abilities as in Intelligence(1), there is no reason to think it can be ‘increased’ in the rapidly self-reinforcing way Bostrom speaks about (in mathematical terms, there is no single variable I which we can differentiate and plug into the differential equation, as Bostrom does in his example on pages 75-76). On the other hand, if intelligence is a unidimensional quantitative measure of general cognitive efficaciousness, it may be meaningful to speak of self-reinforcing exponential growth, but it is not at all obvious that any arbitrary intelligent system or agent would be particularly good at designing AIs. Intelligence(2) may well help with this ability, but it is not at all clear that it is sufficient – after all, we can readily conceive of building a highly “intelligent” machine that can reason abstractly and pass IQ tests, yet is useless at building better AIs.
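For reference, here is a minimal sketch (the notation and the constant k are mine) of the kind of growth model Bostrom appeals to in that example, in which the rate of improvement equals optimisation power divided by recalcitrance. The exponential conclusion follows only if ‘intelligence’ is a single scalar quantity of the Intelligence(2) kind:

```latex
% Minimal reconstruction, not Bostrom's exact notation. If the system's
% optimisation power is roughly proportional to its current intelligence I(t)
% and recalcitrance stays roughly constant, then
\frac{dI}{dt} \;=\; \frac{\text{optimisation power}}{\text{recalcitrance}} \;\approx\; k\,I(t)
\quad\Longrightarrow\quad I(t) = I_0\, e^{k t},
% i.e. exponential growth. The derivation presupposes that there is a single
% differentiable quantity I in the first place, which is precisely what
% Intelligence(1) calls into question.
```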

Bostrom argues that once a machine intelligence became more intelligent than humans, it would soon be able to develop a series of ‘cognitive superpowers’ (intelligence amplification, strategising, social manipulation, hacking, technology research, and economic productivity), which would then enable it to escape whatever constraints were placed upon it and likely achieve a decisive strategic advantage. The problem is that it is unclear whether a machine endowed only with Intelligence(3) (skill at prediction and means-ends reasoning) would necessarily be able to develop skills as diverse as general scientific research ability, competent use of natural language, and the social manipulation of human beings. Again, means-ends reasoning may help with these skills, but clearly they require much more beyond this. Only if we assume the conception of Intelligence(1), whereby the AI has already exceeded essentially all human cognitive abilities, does it become reasonable to assume that all of these ‘superpowers’ would be attainable.

According to the orthogonality thesis, there is no reason why the machine intelligence could not have extremely reductionist goals such as maximising the number of paperclips in the universe, since an AI’s level of intelligence is entirely separate from and independent of its final goals. Bostrom’s argument for this thesis, however, clearly depends on adopting Intelligence(3), whereby intelligence is regarded as general skill with prediction and means-ends reasoning. It is indeed plausible that an agent endowed only with this form of intelligence would not necessarily have the ability or inclination to question or modify its goals, even if they are extremely reductionist or what any human would regard as patently absurd. If, however, we adopt the much more expansive conception of Intelligence(1), the argument becomes much less defensible. This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reason to pursue was tiling the universe with paperclips. As such, Bostrom is driven by his cognitive superpowers argument to adopt the broad notion of intelligence seen in Intelligence(1), but is then driven back to the much narrower Intelligence(3) when he wishes to defend the orthogonality thesis. The key point to be made here is that the goals or preferences of a rational agent are subject to rational reflection and reconsideration, and the exercise of reason is in turn shaped by the agent’s preferences and goals. Short of radically redefining what we mean by ‘intelligence’ and ‘motivation’, this complex interaction will always hamper simplistic attempts to neatly separate them, thereby undermining Bostrom’s case for the orthogonality thesis – unless a very narrow conception of intelligence is adopted.

In the table below I summarise several of the key outcomes or developments that are critical to Bostrom’s argument, and how plausible they would be under each of the three conceptions of intelligence. Obviously such judgements are necessarily vague and subjective, but the key point I wish to make is simply that only by appealing to different conceptions of intelligence in different cases is Bostrom able to argue that all of the outcomes are reasonably likely to occur. Fatally for his argument, there is no single conception of intelligence that makes all of these outcomes simultaneously likely or plausible.

Outcome | Intelligence(1): all human cognitive abilities | Intelligence(2): unidimensional measure of cognition | Intelligence(3): prediction and means-ends reasoning
Quick takeoff | Highly unlikely | Likely | Unclear
Develops all cognitive superpowers | Highly likely | Highly unlikely | Highly unlikely
Absurd ‘paperclip maximising’ goals | Extremely unlikely | Unclear | Likely
Resists changes to goals | Unlikely | Unclear | Likely
Can escape confinement | Likely | Unlikely | Unlikely

Premise 1: Superintelligence is coming soon

I have very little to say about this premise, since I am in broad agreement with Bostrom that even if it takes decades or a century, super-human artificial intelligence is quite likely to be developed. I find fairly unpersuasive Bostrom’s appeals to surveys of AI researchers regarding how long it is likely to be until human-level AI is developed, given both the poor track record of such predictions and the fact that experts on AI research are not necessarily experts on extrapolating the rate of technological and scientific progress (even in their own field). Bostrom, however, does note some of these limitations, and I do not think his argument is particularly dependent upon these sorts of appeals. I therefore pass over premise 1 and move on to what I consider to be the more important issues.

Premise 2: Arguments against a fast takeoff

Bostrom’s major argument in favour of the contention that a superintelligence would be able to gain a decisive strategic advantage is that the ‘takeoff’ for such an intelligence would likely be very rapid. By a ‘fast takeoff’, Bostrom means that the time between when the superintelligence first approaches human-level cognition and when it achieves dramatically superhuman intelligence would be small, on the order of days or even hours. This is critical because if takeoff is as rapid as this, there will be effectively no time for any existing technologies or institutions to impede the growth of the superintelligence or check it in any meaningful way. Its rate of development would be so rapid that it would readily be able to out-think and out-manoeuvre all possible obstacles, and rapidly obtain a decisive strategic advantage. Once in this position, the superintelligence would possess an overwhelming advantage in technology and resources, and would therefore be effectively impossible to displace.

The main problem with all of Bostrom’s arguments for the plausibility of a fast takeoff is that they are fundamentally circular, in that the scenario or consideration they propose is only plausible or relevant under the assumption that the takeoff (or some key aspect of it) is fast. The arguments he presents are as follows:

  • Two subsystems argument: if an AI consists of two or more subsystems with one improving rapidly, but only contributing to the ability of the overall system after a certain threshold is reached, then the rate of increase in the performance of the overall system could drastically increase once that initial threshold is passed. This argument assumes what it is trying to prove, namely that the rate of progress in a critical rate-limiting subsystem could be very rapid, experiencing substantial gains on the order of days or even hours. It is hard to see what Bostrom’s scenario really adds here; all he has done is redescribed the fast takeoff scenario in a slightly more specific way. He has not given any reason for thinking that it is at all probable that progress on such a critical rate-limiting subsystem would occur at the extremely rapid pace characteristic of a fast takeoff.
  • Intelligence spectrum argument: Bostrom argues that the intelligence gap between ‘infra-idiot’ and ‘ultra-Einstein’, while appearing very large to us, may actually be quite small in the overall scheme of the spectrum of possible levels of intelligence, and as such the time taken to improve an AI through and beyond this level may be much less than it originally seems. However, even if it is the case that the range of the intelligence spectrum within which all humans fall is fairly narrow in the grand scheme of things, it does not follow that the time taken to traverse it in terms of AI development is likely to be on the order of days or weeks. Bostrom is simply making an assumption that such rapid rates of progress could occur. His intelligence spectrum argument can only ever show that the relative distance in intelligence space is small; it is silent with respect to likely development timespans.
  • Content overhang argument: an artificial intelligence could be developed with high capabilities but with little raw data or content to work with. If large quantities of raw data could be processed quickly, such an AI could rapidly expand its capabilities. The problem with this argument is that what is most important is not how long it takes a given AI to absorb some quantity of data, but rather the length of time between producing one version of the AI and the next, more capable version. This is because the key problem is that we currently don’t know how to build a superintelligence. Bostrom is arguing that if we did build a nascent superintelligence that simply needed to process lots of data to manifest its capabilities, then this learning phase could occur quickly. He gives no reason, however, to think that the rate at which we can learn how to build that nascent superintelligence (in other words, the overall rate of progress in AI research) will be anything like as fast as the rate an existing nascent superintelligence would be able to process data. Only if we assume rapid breakthroughs in AI design itself does the ability of AIs to rapidly assimilate large quantities of data become relevant.
  • Hardware overhang argument: it may be possible to increase the capabilities of a nascent superintelligence dramatically and very quickly by rapidly increasing the scale and performance of the hardware it had access to. While theoretically possible, this is an implausible scenario since any artificial intelligence showing promise would likely be operating near the peak of plausible hardware provision. This means that testing, parameter optimisation, and other such tasks will take considerable time, as hardware will be a limiting factor. Bostrom’s concept of a ‘hardware overhang’ amounts to thinking that AI researchers would be content to ‘leave money on the table’, in the sense of not making use of what hardware resources are available to them for extended periods of development. This is especially implausible for groundbreaking research on AI architectures that shows substantial promise. Such systems would hardly be likely to spend years being developed on relatively primitive hardware only to be suddenly and dramatically scaled up at the precise moment when practically no further development is necessary, and they are already effectively ready to achieve superhuman intelligence.
  • ‘One key insight’ argument: Bostrom argues that ‘if human level AI is delayed because one key insight long eludes programmers, then when the final breakthrough occurs, the AI might leapfrog from below to radically above human level’. Assuming that ‘one key insight’ would be all it would take to crack the problem of superhuman intelligence is, to my mind, grossly implausible, and not consistent either with the slow but steady rate of progress in artificial intelligence research over the past 60 years, or with the immensely complex and multifaceted phenomenon that is human intelligence.

Additional positive arguments against the plausibility of a fast takeoff include the following:

  • Speed of science: Bostrom’s assertion that artificial intelligence research could develop from clearly sub-human to obviously super-human levels of intelligence in a matter of days or hours is simply absurd. Scientific and engineering projects simply do not work over timescales that short. Perhaps to some degree this could be altered in the future if (for example) human-level intelligence could be emulated on a computer and then the simulation run at much faster than real-time. But Bostrom’s argument is that machine intelligence is likely to precede emulation, and as such all we will have to work with at least up to the point of human/machine parity being reached is human levels of cognitive ability. As such it seems patently absurd to argue that developments of this magnitude could be made on the timespan of days or weeks. We simply see no examples of anything like this from history, and Bostrom cannot argue that the existence of superintelligence would make historical parallels irrelevant, since we are precisely talking about the development of superintelligence in the context of it not already being in existence.
  • Subsystems argument: any superintelligent agent will doubtlessly require many interacting and interconnected subsystems specialised for different tasks. This is the way even much narrower AIs work, and it is certainly how human cognition works. Ensuring that all these subsystems or processes interact efficiently, without one inappropriately dominating or slowing up overall cognition, or without bottlenecks of information transfer or decision making, is likely to be something that requires a great deal of experimentation and trial-and-error. This in turn will take extensive empirical experiments, tinkering, and much clever work. All this takes time.
  • Parallelisation problems: many algorithms cannot be sped up considerably by simply adding more computational power unless an efficient way can be found to parallelise them, meaning that they can be broken down into smaller steps which can be performed in parallel across many processors at once. This is much easier to do for some types of algorithms and computations than others. It is not at all clear that the key algorithms used by a superintelligence would be susceptible to parallelisation. Even if they were, developing efficient parallelised forms of the relevant algorithms would itself be a prolonged process. The superintelligence itself would only be able to help in this development to the degree permitted by its initially limited hardware endowment. We would therefore expect to observe gradual improvement of algorithmic efficiency in parallelisation, thereby enabling more hardware to be added, thereby enabling further refinements to the algorithms used, and so on. It is therefore not at all clear that a superintelligence could be rapidly augmented simply by ‘adding more hardware’. (A short numerical sketch of this point follows at the end of this list.)
  • Need for experimentation: even if a superintelligence came into existence quite rapidly, it would still not be able to achieve a decisive strategic advantage in similarly short time. This is because such an advantage would almost certainly require development of new technologies (at least the examples Bostrom gives almost invariably involve the AI using technologies currently unavailable to humans), which would in turn require scientific research. Scientific research is a complex activity that requires far more than skill at ‘prediction and means-ends reasoning’. In particular, it also generally requires experimental research and (if engineering of new products is involved) the production and testing of prototypes. All of this will take time, and crucially is not susceptible to computational speedup, since the experiments would need to be performed with real physical systems (mechanical, biological, chemical, or even social). The idea that all (or even most) such testing and experimentation could be replaced by computer simulation of the relevant system is absurd, since most such simulations are completely computationally intractable, and likely to remain so for the foreseeable future (in many cases possibly forever). Therefore in the development of new technologies and scientific knowledge, the superintelligence is still fundamentally limited by the rate at which real-world tests and experiments can be performed.
  • The infrastructure problem: in addition to the issue of developing new technologies, there is the further problem of the infrastructure required to develop such technologies, or even just to carry out the core objectives of the superintelligence. In order to acquire a decisive strategic advantage, a superintelligence will require vast computational resources, energy sources to supply them, real-world maintenance of these facilities, sources of raw materials, and vast manufacturing centres to produce any physical manipulators or other devices it requires. If it needs humans to perform various tasks for it, it will likely also require training facilities and programs for its employees, as well as teams of lawyers to acquire all the needed permits and permissions, write up contracts, and lobby governments. All of this physical and social infrastructure cannot be built in the space of an afternoon, and more realistically would take many years or even decades to put in place. No amount of superintelligence can overcome the physical limitations on the time required to produce and transform large quantities of matter and energy into desired forms. One might argue that improved technology certainly can reduce the time taken to move matter and energy, but the point is that it can only do so after the technology has been embodied in physical forms. The superintelligence would not have access to such hypothetical super-advanced transportation, computation, or construction technologies until it had built the factories needed to produce the machine tools which are needed to precisely refine the raw materials needed for parts in the construction of the nanofactory… and so on for many other similar examples. Nor can even vast amounts of money and intelligence allow any agent to simply brush aside the impediments of the legal system and government bureaucracy in an afternoon. A superintelligence would not simply be able to ignore such social restrictions on its actions until after it had gained enough power to act in defiance of world governments, which it would not be able to do until it had already acquired considerable military capabilities. All of this would take considerable time, precluding a fast takeoff.
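Returning to the parallelisation point above, the sketch below applies Amdahl’s law (my own illustration; Bostrom does not discuss it, and the numbers are arbitrary) to show why ‘adding more hardware’ yields sharply diminishing returns whenever part of an algorithm is inherently serial.

```python
# Amdahl's law: the speedup from running the parallelisable fraction p of a
# workload on n processors. The serial fraction (1 - p) caps the achievable
# speedup at 1 / (1 - p), no matter how much hardware is added.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # Even with 95% of the work parallelisable, speedup never exceeds 20x.
    for n in (10, 100, 10_000, 1_000_000):
        print(f"n = {n:>9}: speedup = {amdahl_speedup(0.95, n):.2f}x")
```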

Premise 3: Arguments against cosmic expansion

Critical to Bostrom’s argument about the dangers of superintelligence is that a superintelligence with a decisive strategic advantage would likely capture the majority of the cosmic endowment (the sum total of the resources available within the regions of space potentially accessible to humans). This is why Bostrom presents calculations for the huge numbers of potential human lives (or at least simulations of lives) whose happiness is at stake should the cosmic endowment be captured by a rogue AI. While Bostrom does present some compelling reasons for thinking that a superintelligence with a decisive strategic advantage would have reasons and the ability to expand throughout the universe, there are also powerful considerations against the plausibility of this outcome which he fails to consider.

First, by the orthogonality thesis, a superintelligent agent could have almost any imaginable goal. It follows that a wide range of goals are possible that are inconsistent with cosmic expansion. In particular, any superintelligence with goals involving the value of unspoiled nature, or of constraining its activities to the region of the solar system, or of economising on the use of resources, would have reasons not to pursue cosmic expansion. How likely it is that a superintelligence would be produced with such self-limiting goals compared to goals favouring limitless expansion is unclear, but it is certainly a relevant outcome to consider, especially given that valuing exclusively local outcomes or conservation of resources seem like plausible goals that might be incorporated by developers into a seed AI.

Second, on a number of occasions, Bostrom briefly mentions that a superintelligence would only be able to capture the entire cosmic endowment if no other technologically advanced civilizations, or artificial intelligences produced by such civilizations, existed to impede it. Nowhere, however, does he devote any serious consideration to how likely the existence of such civilizations or intelligences is. Given the great age and immense size of the cosmos, the probability that humans are the first technological civilization to achieve spaceflight, or that any superintelligence we produce would be the first to spread throughout the universe, seems infinitesimally small. Of course this is an area of great uncertainty and we can therefore only speculate about the relevant probabilities. Nevertheless, it seems very plausible to me that the chances of any human-produced superintelligence successfully capturing the cosmic endowment without alien competition are very low. Of course this does not mean that an out-of-control terrestrial AI could not do great harm to life on Earth and even spread throughout neighbouring stars, but it does significantly blunt the force of the huge numbers Bostrom presents as being at stake if we think the entire cosmic endowment is at risk of being misused.

Premise 4: The nature of AI motivation

Bostrom’s main argument in defence of premise 4 is that unless we are extremely careful and/or lucky in establishing the goals and motivations of the superintelligence before it captures the cosmic endowment, it is likely to end up pursuing goals that are not in alignment with our own values. Bostrom presents a number of thought experiments as illustrations of the difficulty of specifying values or goals in a manner that would result in the sorts of behaviours we actually want from the AI. Most of these examples involve the superintelligence pursuing a goal in a single-minded, literalistic way, which no human being would regard as ‘sensible’. He gives as examples an AI tasked with maximising its output of paperclips sending out probes to harvest all the energy within the universe to make more paperclips, or an AI tasked with increasing human happiness enslaving all humans and hijacking their brains to stimulate the pleasure centres directly. One major problem I have with all such examples is that the AIs always seem to lack a critical ability in interpreting and pursuing their goals that, for want of a better term, we might describe as ‘common sense’. This issue ultimately reduces to which conception of intelligence one applies, since if we adopt Intelligence(1) then any such AIs would necessarily have ‘common sense’ (this being a human cognitive ability), while the other two conceptions of intelligence would not necessarily include this ability. However, if we do take Intelligence(1) as our standard, then it seems difficult to see why a superintelligence would lack the sort of common sense by which any human would be able to see that the simple-minded, literalistic interpretations given as examples by Bostrom are patently absurd and ridiculous things to do.

Aside from the question of ‘common sense’, it is also necessary to analyse the concept of ‘motivation’, which is a multifaceted notion that can be understood in a variety of ways. Two particularly important conceptions are motivation as some sort of internal drive to do or obtain some outcome, and motivation as a more abstract rational consideration by which an agent has a reason to act in a certain way. Given what he says about the orthogonality thesis, it seems that Bostrom thinks of motivation as being some sort of internal drive to act in a particular way. In the first few pages of the chapter on the intelligent will, however, he switches from talking about motivation to talking about goals, without any discussion about the relationship between these two concepts. Indeed, it seems that these are quite different things, and can exist independently of each other. For example, humans can have goals (to quit smoking, or to exercise more) without necessarily having any motivation to take actions to achieve those goals. Conversely, humans can be motivated to do something without having any obvious associated goal. Many instances of collective behaviour in crowds and riots may be examples of this, where people act based on situational factors without any clear reason or objectives. Human drives such as curiosity and novelty-seeking can also be highly motivating without necessarily having any particular goal associated with them. Given the plausibility that motivation and goals are different and distinct concepts, it is important for Bostrom to explain what he thinks the relationship between them is, and how they would operate in an artificial agent. This seems all the more relevant since we would readily say that many intelligent artificial systems possess goals (such as the common examples of a heat-seeking missile or a chess playing program), but it is not at all clear that these systems are in any way ‘motivated’ to perform these actions – they are simply designed to work towards these goals, and motivations simply don’t come into it. What then would it take to build an artificial agent that had both goals and motivations? How would an artificial agent act with respect to these goals and/or motivations? Bostrom simply cannot ignore these questions if he is to provide a compelling argument concerning what AIs would be motivated to do.

The problems inherent in Bostrom’s failure to analyse these concepts in sufficient detail become evident in the context of Bostrom’s discussion of something that he calls ‘final goals’. While he does not define these, presumably he means goals that are not pursued in order to achieve some further goal, but simply for their own sake. This raises several additional questions: can an agent have more than one final goal? Need they have any final goals at all? Might goals always be infinitely resolvable in terms of fulfilling some more fundamental or more abstract underlying goal? Or might multiple goals form an inter-connected self-sustaining network, such that all support each other but no single goal can be considered most fundamental or final? These questions might seem arcane, but addressing them is crucial for conducting a thorough and useful analysis of the likely behaviour of intelligent agents. Bostrom often speaks as if a superintelligence will necessarily act with single-minded devotion to achieving its one final goal. This assumes, however, that a superintelligence would be motivated to achieve its goal, that it would have one and only one final goal, and that its goal and its motivation to achieve it are totally independent from and not receptive to rational reflection or any other considerations. As I have argued here and previously, however, these are all quite problematic and dubious notions. In particular, as I noted in the discussion about the nature of intelligence, a human’s goals are subject to rational reflection and critique, and can be altered or rejected if they are determined to be irrational or incongruent with other goals, preferences, or knowledge that the person has. It therefore seems highly implausible that a superintelligence would hold so tenaciously to its goals, and pursue them so single-mindedly. Only a superintelligence possessing a much more minimal form of intelligence, such as the skills at prediction and means-ends reasoning of Intelligence(3), would be a plausible candidate for acting in such a myopic and mindless way. Yet as I argued previously, a superintelligence possessing only this much more limited form of intelligence would not be able to acquire all of the ‘cognitive superpowers’ necessary to establish a decisive strategic advantage.

Bostrom would likely contend that such reasoning is anthropomorphising, applying human experiences and examples in cases where they simply do not apply, given how different AIs could be to human beings. Yet how can we avoid anthropomorphising when we are using words like ‘motivation’, ‘goal’, and ‘will’, which acquire their meaning and usage largely through application to humans or other animals (as well as anthropomorphised supernatural agents)? If we insist on using human-centred concepts in our analysis, drawing anthropocentric analogies in our reasoning is unavoidable. This places Bostrom in a dilemma, as he wants to simultaneously affirm that AIs would possess motivations and goals, but also somehow strip these concepts of their anthropocentric basis, saying that they could work totally differently to how these concepts are applied in humans and other known agents. If these concepts work totally differently, then how are we justified in even using the same words in the two different cases? It seems that if this were so, Bostrom would need to stop using words like ‘goal’ and ‘motivation’ and instead start using some entirely different concept that would apply to artificial agents. On the other hand, if these concepts work sufficiently similarly in human and AI cases to justify using common words to describe both cases, then there seems nothing obviously inappropriate in appealing to the operation of goals in humans in order to understand how they would operate in artificial agents. Perhaps one might contend that we do not really know whether artificial agents would have human analogues of desires and goals, or whether they would have something distinctively different. If this is the case, however, then our level of ignorance is even more profound than we had realised (since we don’t even know what words we can use to talk about the issue), and therefore much of Bostrom’s argument on these subjects would be grossly premature and under-theorised.

Bostrom also argues that once a superintelligence comes into being, it would resist any changes to its goals, since its current goals are (nearly always) better achieved by refraining from changing them to some other goal. There is an obvious flaw in this argument, namely that humans change their goals all the time, and indeed whole subdisciplines of philosophy are dedicated to pursuing the question of what we should value and how we should go about modifying our goals or pursuing different things to what we currently do. Humans can even change their ‘final goals’ (insomuch as any such things exist), such as when they convert religions or change between radically opposed political ideologies. Bostrom mentions this briefly but does not present any particularly convincing explanation for this phenomenon, nor does he explain why we should assume that this clear willingness to countenance (and even pursue) goal changes is not something that would affect AIs as it affects humans. One potential response could be that the ‘final goal’ pursued by all humans is really something very basic such as ‘happiness’ or ‘wellbeing’ or ‘pleasure’, and that this never changes even though the means of achieving it can vary dramatically. I am not convinced by this analysis, since many people (religious and political ideologues being obvious examples) seem motivated by causes to perform actions that cannot readily be regarded as contributing to their own happiness or wellbeing, unless these concepts are stretched to become implausibly broad. Even if we accept that people always act to promote their own happiness or wellbeing, however, it is certainly the case that they can dramatically change their beliefs about what sort of things will improve their happiness or wellbeing, thus effectively changing their goals. It is unclear to me why we should expect that a superintelligence able to reflect upon its goals could not similarly change its mind about the meaning of its goals, or dramatically alter its views on how best to achieve them.

Premise 5: The tractability of the AI alignment problem

Critical to the question of artificial intelligence research as a cause for effective altruists is the argument that there are things which can be done in the present to reduce the risk of misaligned AI attaining a decisive strategic advantage. In particular, it is argued that AI safety research and work on the goal alignment problem have the potential, given sufficient creativity and intelligence, to significantly assist our efforts to construct an AI which is ‘safe’ and has goals aligned with our best interests. This is often presented as quite an urgent matter, something which must be substantively ‘solved’ before a superintelligent AI comes into existence if catastrophe is to be averted. This possibility, however, seems grossly implausible considering the history of science and technology. I know of not a single example of any significant technological or scientific advance whose behaviour we have been able to accurately predict, and whose safety we have been able to ensure, before it has been developed. In all cases, new technologies are only understood gradually as they are developed and put to use in practice, and their problems and limitations progressively become evident.

In order to ensure that an artificial intelligence would be safe, we would first need to understand a great deal about how artificially intelligent agents work, how their motivations and goals are formed and evolve (if at all), and how artificially intelligent agents would behave in society in their interactions with humans. It seems to me that, to use Bostrom’s language, this constitutes an AI-complete problem, meaning that there is no realistic hope of substantively resolving these issues before human-level artificial intelligence itself is developed. To assert the contrary is to contend that we can understand how an artificial intelligence would work well enough to control it and plan wisely with respect to possible outcomes, before we actually know how to build one. It is to assert that detailed knowledge of how the AI’s intellect, goals, drives, and beliefs would operate in a wide range of possible scenarios, together with the ability to control its behaviours and motivations in accordance with our values, would still not include the essential knowledge needed to actually build such an AI. Yet what exactly is it that such knowledge would leave out? How could we know so much about AIs without being able to actually build one? This possibility seems deeply implausible, and not comparable to any past experiences in the history of technology.

Another major activity advocated by Bostrom is to attempt to alter the relative timing of different technological developments. This rests on the principle of what he calls differential technological development, that it is possible to retard the development of some technologies relative to the arrival time of others. In my view this principle is highly suspect. Throughout the history of science and technology the simultaneous discovery or development of new inventions or discoveries is not only extremely common, but appears to be the norm of how scientific research progresses rather than the exception (see ‘list of multiple discoveries’ on Wikipedia for examples of this). The preponderance of such simultaneous discoveries lends strong support to the notion that the relative arrival of different scientific and technological breakthroughs depends mostly upon the existing state of scientific knowledge and technology – that when a particular discovery or invention has the requisite groundwork to occur, then and only then will it occur. If on the other hand individual genius or funding initiatives were the major drivers of when particular developments occur, we would not expect the same special type of genius or the same sort of funding program to exist in multiple locations leading to the same discovery at the same time. The simultaneous discovery of so many new inventions or discoveries would under this explanation be an inexplicable coincidence. If discoveries come about shortly after all the necessary preconditions are available, however, then we would expect that multiple persons in different settings would take advantage of the common set of prerequisite conditions existing around the same time, leading to many simultaneous discoveries and developments.

If this analysis is correct, then it follows that the principle of differential technological development is unlikely to be applicable in practice. If the timing and order of discoveries and developments largely depend upon the necessary prerequisite discoveries and developments having been made, then simply devoting more resources to a particular emerging technology would do little to accelerate its maturation. These extra resources may help to some degree, but the major bottleneck on research is likely to be the development of the right set of prerequisite technologies and discoveries. Increased funding can increase the number of researchers, which in turn leads to a larger range of applications of existing techniques to slightly new uses and minor incremental improvements of existing tools and methods. Such activities, however, are distinct from the development of innovative new technologies and substantively new knowledge. These sorts of fundamental breakthroughs are essential for the development of major new branches of technology such as geoengineering, whole brain emulation, artificial intelligence, and nanotechnology. If this analysis is correct, however, they cannot simply be purchased with additional research money, but must await the development of essential prerequisite concepts and techniques. Nor can we simply devote research funding to the prerequisite areas, since these fields would in turn have their own set of prerequisite technologies and discoveries upon which they are dependent. In essence, science and technology form a strongly inter-dependent enterprise, and we can seldom predict what ideas or technologies will be needed for a particular future breakthrough to be possible. Increased funding for scientific research overall can potentially increase the general rate of scientific progress (though even this is somewhat unclear), but changing the relative order of arrival of different major new technologies is not something that we have any good reason to think is feasible. Any attempts, therefore, to strategically manipulate research funding or agendas to alter the relative order of arrival of nanotechnology, whole brain emulation, artificial intelligence, and other such technologies are very unlikely to succeed.

Premises 6-7: The high expected value of AI research

Essential to the argument that we (society at large or the EA community specifically) should devote considerable resources to solving the AI alignment problem is the claim that even if the probability of actually solving the problem is very low, the size of the outcome in question (according to Bostrom, the entire cosmic endowment) is so large that its expected value still dominates most other possible causes. This also provides a ready riposte to all of my foregoing rebuttals of Bostrom’s argument – namely that even if each premise of Bostrom’s argument is very improbable, and even if as a result the conclusion is most implausible indeed, nevertheless the AI Doom Scenario is so catastrophically terrible that in expectation it might still be worthwhile to focus much of our attention on trying to prevent it. Of course, at one level this is entirely an argument about the relative size of the numbers – just how implausible are the premises, and just how large would the cosmic endowment have to be in order to offset this? I do not believe it is possible to provide any non-question-begging answers to this question, and so I will not attempt to provide any numbers here. I will simply note that even if we accept the logic of the expected value argument, it is still necessary to actually establish with some plausibility that the expected value is in fact very large, and not merely assume that it must be large because the hypothetical outcome is large. There are, however, more fundamental conceptual problems with the application of expected value reasoning to problems of this sort, problems which I believe weigh heavily against the validity of applying such reasoning to this issue.

First is a problem which is sometimes called Pascal’s mugging. It is based upon Blaise Pascal’s argument that (crudely put) one should convert to Christianity even if it is unlikely Christianity is true. The reason is that if God exists, then being a Christian will yield an arbitrarily large reward in heaven, while if God does not exist, there is no great downside to being a Christian. On the other hand, if God does exist, then not being a Christian will yield an arbitrarily large negative reward in hell. On the basis of the extreme magnitude of the possible outcomes, therefore, it is rational to become a Christian even if the probability of God existing is small. Whatever one thinks of this as a philosophical argument for belief in God, the problem with this line of argument is that it can be readily applied to a very wide range of possible claims. For instance, a similar case can be made for different religions, and even different forms of Christianity. A fringe apocalyptic cult member could claim that Cthulhu is about to awaken and will torture a trillion trillion souls for all eternity unless you donate your life savings to their cult, which will help to placate him. Clearly this person is not to be taken seriously, but unless we can assign exactly zero probability to his statement being true, there will always be some negative outcome large enough to make taking the action the rational thing to do.
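To make the structure of the ‘mugging’ explicit, here is a toy calculation; all of the numbers are hypothetical and chosen purely for illustration.

```python
# Toy Pascal's mugging arithmetic. All quantities are hypothetical and measured
# in arbitrary 'value units'; the point is the structure, not the numbers.
def expected_value_of_paying(p_claim_true: float, claimed_harm: float, cost_of_paying: float) -> float:
    # Paying averts the claimed harm with probability p_claim_true, at a certain cost.
    return p_claim_true * claimed_harm - cost_of_paying

p = 1e-15      # vanishingly small credence in the cult member's claim
cost = 1e5     # value of the life savings demanded
harm = 1e24    # 'a trillion trillion souls' tortured

print(expected_value_of_paying(p, harm, cost))  # ~1e9 - 1e5 > 0: paying 'wins' in expectation

# More generally, any claimed harm exceeding cost / p makes paying the
# 'rational' choice under naive expected value reasoning: the reductio.
print(cost / p)  # threshold stake: 1e20
```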

The same argument could be applied in more plausible cases to argue that, for example, some environmental or social cause has the highest expected value, since if we do not act now to shape outcomes in the right way then Earth will become completely uninhabitable, leaving mankind unable to spread throughout the galaxy. Or perhaps some neo-Fascist, Islamic fundamentalist, Communist revolutionary, anarcho-primitivist, or other such ideology could establish a hegemonic social and political system that locks humanity into a downward spiral that forever precludes cosmic expansion, unless we undertake appropriate political or social reforms to prevent this. Again, the point is not how plausible such scenarios are – though doubtless with sufficient time and imagination they could be made to sound somewhat plausible to those people with the right ideological predilections. Rather, the point is that in line with the idea of Pascal’s mugging, if the outcome is sufficiently bad, then the expected value of preventing the outcome could still be high in spite of a very low probability of the outcome occurring. If we accept this line of reasoning, we therefore find ourselves vulnerable to being ‘mugged’ by any kind of argument which posits an absurdly implausible speculative scenario, so long as it has a sufficiently large outcome. This possibility effectively constitutes a reductio ad absurdum for this type of very low probability, very high impact argument.

The second major problem with applying expected value reasoning to this sort of problem is that it is not clear that the conceptual apparatus is properly aligned to the nature of human beliefs. Expected value theory holds that human beliefs can be assigned a probability which fully describes the degree of credence with which we hold that belief. Many philosophers have argued, however, that human beliefs cannot be adequately described this way. In particular, it is not clear that we can identify a single specific number that precisely describes our degree of credence in such amorphous, abstract propositions as those concerning the nature and likely trajectory of artificial intelligence. The possibilities of incomplete preferences, incomparable outcomes, and suspension of judgement are also very difficult to incorporate into standard expected value theory, which assumes complete preferences and that all outcomes are comparable. Finally, it is particularly unclear why we should expect or require that our degrees of credence adhere to the axioms of standard probability theory. So-called ‘Dutch book arguments’ are sometimes used to demonstrate that sets of beliefs that do not accord with the axioms of probability theory are susceptible to betting strategies whereby the person in question would be guaranteed to lose money. Such arguments, however, only seem relevant to beliefs which are liable to be the subject of bets. For example, of what relevance is it whether one’s beliefs about the behaviour of a hypothetical superintelligent agent in the distant future are susceptible to Dutch book arguments, when the events in question are so far in the future that it is impossible that any enforceable bet could actually be made concerning them? Perhaps beliefs which violate the axioms of probability, though useless for betting, are valuable or justifiable for other purposes or in other domains. Much more has been written about these issues (see for example the Stanford Encyclopedia of Philosophy article on Imprecise Probabilities); for our purposes, however, it is sufficient to establish that powerful objections can and have been raised concerning the adequacy of expected value arguments, particularly in applications involving very low probabilities and very high potential impact. These issues require careful consideration before premises 6 and 7 of the argument can be justified.
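For readers unfamiliar with the term, the following toy sketch shows the standard textbook form of a Dutch book; the credences and stakes are invented purely for illustration.

```python
# A classic Dutch book: an agent whose credences violate the probability axioms
# (here P(A) + P(not A) = 1.2 > 1) will accept a pair of bets that together
# guarantee a loss, whichever way the world turns out.
credence_A, credence_not_A = 0.6, 0.6   # incoherent: these should sum to 1
stake = 100.0                            # payout of each bet if it wins

# The agent regards a bet paying `stake` if X as fair when priced at credence(X) * stake.
price_paid = (credence_A + credence_not_A) * stake   # 120.0 paid to the bookie
payout_received = stake                               # exactly one of A / not-A obtains: 100.0

print(price_paid - payout_received)  # 20.0 lost with certainty
```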

Conclusion

In concluding, I would just like to say a final word about the manner in which I believe AI is likely to present the greatest danger in the future. On the basis of the arguments I have presented above, I believe that the most dangerous AI risk scenario is not that of the paperclip maximiser or some out-of-control AI with a very simplistic goal. Such examples feature very prominently in Bostrom’s argument, but as I have said I do not find them very plausible. Rather, in my view the most dangerous scenario is one in which a much more sophisticated, broadly intelligent AI comes into being which, after some time interacting with the world, acquires a set of goals and motivations which we might broadly describe as those of a psychopath. Perhaps it would have little or no regard for human wellbeing, instead becoming obsessed with particular notions of ecological harmony, or cosmic order, or some abstracted notion of purity, or something else beyond our understanding. Whatever the details, the AI need not have an aversion to changing its ‘final goals’ (or indeed have any such things at all). Nor need it pursue a simple goal single-mindedly without stopping to reflect or being able to be persuaded by conversing with other intelligent agents. Nor need such an AI experience a very rapid ‘takeoff’, since I believe its goals and values could very plausibly alter considerably after its initial creation. Essentially all that is required would be a set of values substantially at odds with those of most or all of humanity. If it were sufficiently intelligent and capable, such an entity could cause considerable harm and disruption. In my view, therefore, AI safety research should focus not only on how to solve the problem of value learning or how to promote differential technological development. It should also focus on how the motivations of artificial agents develop, how these motivations interact with beliefs, and how they can change over time as a result of both internal and external forces. The manner in which an artificial agent would interact with existing human society is also an area which, in my view, warrants considerable further study, since the manner in which such interactions proceed plays a central role in many of Bostrom’s arguments.

Bostrom’s book has much to offer those interested in this topic, and although my critique has been almost exclusively negative, I do not wish to come across as implying that I think Bostrom’s book is not worth reading or presents no important ideas. My key contention is simply that Bostrom fails to provide compelling reasons to accept the key premises in the argument that he develops over the course of his book. It does not, of course, follow that the conclusion of his argument (that AI constitutes a major existential threat worthy of considerable effort and attention) is false, only that Bostrom has failed to establish its plausibility. That is, even if Bostrom’s argument is fallacious, it does not follow that AI safety is a completely spurious issue that should be ignored. On the contrary, I believe it is an important issue that deserves more attention in mainstream society and policy. At the same time, I also believe that relative to other issues, AI safety receives too much attention in EA circles. Fully defending this view would require additional arguments beyond the scope of this article. Nevertheless, I hope this piece contributes to the debate surrounding AI and its likely impact in the near future.

Massive Content Update!

Hi everyone, just wanted to announce the release of a huge amount of new content on my blog. You can see the links to the pages along the top menu bar. I have uploaded a bunch of old projects that I produced in previous years, including a science fiction novella and some presentations about world history and the solar system. I’ve also uploaded about 50 slideshow presentations from talks I’ve given over the years, on topics spanning philosophy, science, and statistics. Finally, I’ve uploaded close to 100 pdfs of my typed-up notes for university courses that I’ve taken or attended over the years, which are organised by category. Hopefully people will find things here of interest and use to them, so check it out!

Levels of Scepticism: How Even Rational People Have Sceptical Blind Spots

Most of my readers doubtless recognise the importance of being sceptical about the information, arguments, and ideas that we encounter, be it dietary advice, political opinions, science news articles, or whatever else. There are, however, different levels of scepticism, corresponding to the varying degrees of sophistication we can attain in the manner in which we respond to new ideas or arguments. In this piece I wish to outline a brief typology of these different levels of scepticism. I do not pretend to offer any sort of definitive classification, nor do I claim that these levels are in any way based upon empirical psychological research. Their purpose rather is to serve as a conceptual tool to help us think about the ways in which we can improve our own thinking, and work to eliminate residual biases and blind spots that hamper our efforts to form beliefs that are best justified by strong argument and quality evidence. The hierarchy that I shall outline has four levels, ranging from least sceptical at level 0 to most sceptical at level 3. I want to emphasise that the purpose of these levels is not to create a ranking of particular people as better or worse sceptics, as most people operate at multiple different levels depending on the circumstance and the topic in question. Rather, the purpose is to rank particular types of thinking, so that we may better recognise when we are thinking in a more or less sceptical mode.

I will begin my discussion at the bottom of the hierarchy, level 0. When we think at this level, we do not think particularly critically or sceptically about much of anything. Though we may have opinions about various matters of political, ethical, or philosophical import, when operating at level 0 we are typically unable to clearly articulate these views to others, or explain why we hold them. Most such views are informed primarily by our upbringing, our socialisation, and the attitudes of the people around us as we go about our lives. Many people who operate at this level have little or no ability to critically evaluate evidence or analyse an abstract logical argument, having never been taught such skills or found it necessary to learn them. Even those who do have such skills, however, can sometimes be remarkably compartmentalised in the manner in which they apply them, for example being able to hold forth with a detailed analytical argument about topic A, but when discussing topic B doing little more than spouting catch-phrases that resonate with them. When we operate at level 0, we tend to think that our viewpoint is ‘obvious’, and react with surprise when we find that others think differently, unable to see how any sensible person could hold a different view. It is likely that the majority of humanity operates at level 0 most of the time, as this is the type of thinking that comes most naturally and easily to most humans. That is, we typically form beliefs about the world not on the basis of careful examination of evidence, logical analysis, or in-depth comparison of alternative perspectives, but unconsciously and reflexively as we go about our lives, drawing largely upon what we know and are familiar with. I do not want to claim that this is inappropriate in all contexts, as certainly we cannot always subject everything to detailed critical analysis. However, I do think that making a habit of thinking in this way is liable to lead us into error and confusion about a great many of our beliefs. Scepticism, logic, and science are valuable tools, and neglecting these tools leaves us intellectually impoverished and prone to biased and mistaken reasoning.

This leads me on to the next level of the scepticism hierarchy, level 1. When operating at this level, we are able to articulate clear opinions on a variety of subjects, marshalling various arguments and pieces of evidence in favour of our views. We recognise the distinctiveness of different viewpoints and are able to employ the tools of scepticism and rationality to make arguments for what we regard as the correct view. However, when thinking at this level we also tend to identify strongly with one particular perspective, be it religious, political, scientific, or whatever else, and employ these sceptical tools selectively against arguments or information coming from the opposing ‘side’. We are able to spot logical fallacies, faulty reasoning, and inadequate evidence in the arguments of our ideological opponents, but are much less able to apply the same skills to arguments made by those of our own ideological persuasion. When operating at level 1, we tend to respond to new claims by ‘pattern matching’ on how the claim is framed and who is making it, and on that basis classify it as ‘for’ or ‘against’ our side. We thus do not judge arguments fairly on their own merits, but subject them to an initial, largely unconscious ‘screening process’, whereby if an argument ‘sounds like’ the sort of thing someone we disagree with would say, then we subject it to closer sceptical examination. On the other hand, if it sounds like the sort of thing somebody who agrees with us would say, then it typically avoids any in-depth examination. This sort of self-serving, pro in-group bias comes very naturally to humans, and thus is very difficult to overcome. It is also very difficult to notice in ourselves, because when operating at level 1 we typically are only conscious of the times when we are being sceptical and critical, not the times when we aren’t. To us it feels like we take arguments purely on their merits, when in reality we are very selective about how our scepticism is applied, and make little effort to subject views that accord with our beliefs or biases to the same rigorous critical examination that we apply to those that do not. When operating at level 1 we are also liable to be misled by framing effects, slogans, buzzwords, and other irrelevancies relating not to the substance of an argument, but to how it is packaged. Selective scepticism of this sort is very common among those heavily involved in a social movement or organisation, and it is not always bad, because it can save us time – after all, we can’t critically examine every single claim we come across. At the same time, it is all too easy to become accustomed to operating at this level, and in doing so we fail to make proper or full use of the tools of rationality and scepticism.

When operating at the next level up in the hierarchy, level 2, we are able to apply critical thinking skills and sceptical analysis consistently and fairly both to arguments that we find agreeable and to those that we find disagreeable. We allow the arguments and evidence to be persuasive in their own right, with minimal influence from who has made them or how they have been formulated. We consciously recognise our tendency to favour ‘our side’ over the ‘other side’, and make efforts to circumvent this by deliberately taking time to critique arguments made by those who agree with us, and likewise by seeking out the strongest, most able defenders of ideas we disagree with. This, of course, is not easy to do, and requires careful attention and genuine effort to engage fairly with different perspectives and ideas. There is, however, one significant failing that we still commonly exhibit when operating at level 2. Namely, we instinctively and reflexively retain an unreasonable overconfidence in our own reasoning abilities. We tend to believe that our perspectives or conclusions on some issue are the ‘right’ ones, and that everyone else has got it ‘wrong’. Taken to extremes, this type of thinking can lead to habitual contrarianism and even conspiratorial thinking. In such cases, we may think that both sides of some major dispute have it wrong, and that we are the ‘lone genius’ able to see the correct answer. While most people do not reach such extremes, what those operating at level 2 have in common is an inability or unwillingness to apply the same sceptical attitude and critical examination to their own thought processes that they apply to the arguments of others. We thus do not properly appreciate the many limitations of memory, rationality, and knowledge that we ourselves are subject to, and which hamper our efforts to draw correct conclusions. We are sceptical of everyone else, but not sufficiently sceptical of ourselves, of our own biases and limitations.

The highest level of my hierarchy is level 3, and it is the level I believe we should all aspire to use as regularly as possible. When operating at level 3, we properly apply scepticism and critical analysis not only to everyone else, but also to ourselves and our own beliefs, preconceptions, and thought processes. We are often hesitant to attach strong credence to the conclusions we reach, because we know that our rationality is grossly imperfect and our knowledge and perspectives sorely limited. This of course should not lead us to radical scepticism or keep us from forming opinions about anything, but it should temper our confidence considerably and keep us from becoming dogmatically attached to our conclusions and perspectives. At level 3 we are also much more self-critical, actively setting out to uncover our own biases and doing our best to compensate for them, rather than just criticising the biases and errors of others. Likewise, we actively seek out the viewpoints of other informed persons to critique our opinions and point out our cognitive ‘blind spots’, helping us to apply scepticism to our own thought processes and reasoning. Level 3 is often an uncomfortable state to operate in, for it robs us of the overconfidence in our beliefs that most people find reassuring, and it requires a degree of active self-criticism that is unnatural and effortful to maintain. We must also make an effort to find the right balance between appropriate self-criticism and scepticism on the one hand, and paralysing self-doubt, apathy, or total mistrust of reason on the other. Operating at level 3 is neither easy nor natural, but I do believe it is the highest form of ‘true scepticism’, and the ideal to which we should all aspire. Operating at this level may not always be possible, but it is nevertheless worth striving for, since it allows us to take the fullest advantage of the tools afforded by logic, rationality, and scepticism, thereby giving us the best chance of ultimately forming accurate beliefs free from error, bias, and distortion.

A Theory of Reductive Naturalism: The Metaphysical Foundations of Non-belief

Introduction

What do you believe about God? What about global warming? Do you think euthanasia should be legalised? What about Bible study in schools? Whatever your answer to these questions, it is very unlikely that you hold your views in isolation, independently of all your other opinions and perspectives. That’s not how human minds work. Instead, we hold our views in the context of a large set of overlapping and interconnected beliefs about what the world is, how it works, and why things are the way they are. This very large, overarching set of beliefs and conceptions about the world is what I call a ‘worldview’. When ideas become successful or popular, it is very rarely because of the specific merits of one idea considered in isolation. Rather, ideas are usually ‘sold’ as part of a ‘package deal’ – a set of interconnected, internally coherent beliefs about the world which people find attractive. Socialism, Fascism, Christianity, Humanism, and Environmentalism are all examples of such worldviews. As my choice of examples shows, the effect that such worldviews have on the world varies dramatically – ideas can shape the world for better or for worse. If, therefore, we want to shape the world for the better, we need to spread good ideas, and to do that we need to package these ideas in a way that people find attractive. To put it another way, it is not enough to just be right about a whole bunch of unrelated issues. Rather, one needs to incorporate these positions into a unified conceptual whole, to provide a worldview that people find intellectually and emotionally attractive. My aim in this short article is to present an outline of the key points concerning what such a worldview might look like from a Rationalist/Humanist/Atheist perspective. Specifically, the view that I am outlining is a form of reductive naturalism, the meaning of which I will explain shortly. It is a metaphysical theory, meaning that it makes claims about what exists in the world. I do not claim that this is the only possible naturalistic worldview that one can develop, but I do think it is a particularly compelling one which is worthy of serious consideration.

Reductive Naturalism

To begin, I must first explain what I mean by the term ‘naturalism’. This word is used in a variety of ways in everyday language, but in this context I am using it with reference to a particular set of philosophical positions concerning what sorts of things exist in the world. Put most simply, naturalism holds that only the natural world exists. While there is no generally accepted definition of ‘natural’ in this context, the usual conception is that the natural world includes all things that are not supernatural. Supernatural entities are such things as ghosts, spirits, magical forces, immaterial souls, gods, and immaterial forces like yin-yang from Chinese philosophy. Such supernatural entities are typically thought to be highly distinct from anything that exists in nature, in that they are not made up of matter and do not follow determinate causal laws in the way the natural world does. I should emphasise that natural entities include not only things like particles, organisms, and planetary bodies, but also man-made artifacts like computers and political institutions. The relevant distinction is thus not between natural and artificial, but rather between natural and non-natural or supernatural. Thus understood, naturalism is simply the position that there are no non-natural or supernatural entities.

The version of naturalism that I am here defending is reductionist, meaning that according to this view, everything that exists is either a fundamental particle, or is something that exists and holds all the properties that it does solely in virtue of the arrangements and interactions of such fundamental particles. Another way of putting this is that according to reductive naturalism, if one specified the exact configuration of all the fundamental particles in the entire universe, then this would also be sufficient to determine all the properties of everything that exists within the universe. There is nothing ‘left out’ of reality beyond the arrangements of fundamental particles. A few points of clarification are necessary here. First, when I speak about ‘fundamental particles’ I do not necessarily assume that these are the same as what physics currently regards as the fundamental particles of nature (quarks, electrons, photons, etc). Perhaps they are, or perhaps they are something yet more fundamental that we have yet to discover. All that is important to my case is that there is a determinate, relatively small number of such things, and that they follow causal laws in principle describable by a ‘completed physics’. Second, when I say that the arrangement of fundamental particles is sufficient to determine all properties of everything that exists, I am advocating a theory of ontology (what exists), not a theory of epistemology (how we know) or semantics (what words mean). To consider a particularly tricky example, according to reductive naturalism, the statement ‘Bob loves his wife’ must ultimately be either true or false in virtue of some state of affairs concerning particular arrangements of fundamental particles. This is not to say, however, that we come to know whether Bob loves his wife by examining states of fundamental particles. Nor is it to say that when we say ‘Bob loves his wife’ we are in any way actually thinking about fundamental particles. Rather, my claim is about what exists in the world that makes this claim true – the so-called ontological basis of the fact that ‘Bob loves his wife’. The claim of reductive naturalism is that even highly abstract and complex states such as this ultimately obtain in virtue of the arrangement of fundamental particles. Thus, there is nothing outside of or beyond such particles and their interactions that is needed in order to bring about the state of Bob loving his wife. I am thus explicitly disputing the claim made by some philosophers that immaterial minds or Platonic forms or other non-natural entities are necessary in order to account for all the various phenomena that we know about in the world.

Even given these clarifications, many people typically find this reductive naturalism intuitively implausible. How, they say, can you claim that the interactions of protons and electrons are all that there is to such complex, indescribably rich phenomena as human emotions? A large part of the implausibility of my position, however, is removed once we consider the reduction hierarchically. That is, rather than trying to imagine jumping directly from subatomic physics to human emotions, we should instead think about the stages in which this reduction occurs. Subatomic physics underpins the structure and properties of atoms, which in turn bind together to form molecules. Molecules join together through various types of chemical bonds to form macromolecules like proteins and DNA which make up the cells of the human body. Different types of cells with different functions combine together to form tissues and organs, each with their own role in supporting the life of the organism. In the case of the human mind, neurons connect together in complex networks to form mental representations of various concepts, including ultimately those of loving another person. Considered in this incremental manner, I think the notion that facts about human thoughts and emotions are ultimately reducible to facts about brain states, which in turn reduce to facts about neuronal firing patterns, then down to proteins, molecules, and atoms, is far more plausible than it is if we think simply of jumping from atoms straight to the mind in a single leap.

The value of a philosophical theory is ultimately determined by how well it accounts for the phenomena that we wish to explain in the world. In the case in question, two of the most difficult phenomena that have led many people to posit entities beyond those of the natural world are the human mind and moral values. In this short article I have space only to very briefly consider these complex subjects, and I certainly do not claim to have a complete philosophical account of either. Nevertheless, I do wish to at the very least sketch the outlines of how a reductionist naturalistic worldview can account for the existence of both mind and morality in a way that provides a space for such phenomena without needing to posit the existence of any additional, non-natural entities.

Before doing so, however, there is one final concept (borrowed from physics) that I must introduce, namely the distinction between a microstate and a macrostate. A microstate is a single complete configuration of all the fundamental particles in a system. A macrostate, by contrast, is a set of microstates that share some property of interest. Macrostates thus refer to ‘higher level’ phenomena, whose existence is nevertheless wholly dependent upon the particular microstate the system is in. For instance, one example of a microstate is the exact description of all the positions and velocities of the air molecules in a room. We can then consider various macrostates, which are higher-level properties that are nevertheless entirely determined by the microstate the particles in the room reside in. One example of a macrostate would be ‘the air temperature in this room is 30 degrees Celsius’. This macrostate refers to the set of all possible microstates that give rise to this temperature. Even though there are many possible microstates that can instantiate a single macrostate, the temperature of the room is still determined completely by the microstate. The macrostate is thus just a useful ‘higher order’ concept we use to refer to sets of microstates that are similar in some relevant way.
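
To make this microstate/macrostate relationship concrete, here is a minimal illustrative sketch (my own addition, not part of the original physics discussion), assuming an ideal monatomic gas in which the mean kinetic energy per atom is (3/2)·k_B·T. Two different microstates – the same set of velocities reassigned to different particles and directions – determine the very same temperature macrostate.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant (J/K)
MASS = 6.64e-26      # roughly the mass of an argon atom (kg); purely illustrative

def temperature(velocities):
    """Compute the macrostate (temperature) from a microstate (per-particle velocities).

    Assumes an ideal monatomic gas, for which mean kinetic energy per atom = (3/2) k_B T.
    """
    kinetic_energy = 0.5 * MASS * np.sum(velocities ** 2, axis=1)  # per-particle kinetic energy
    return 2.0 * kinetic_energy.mean() / (3.0 * K_B)

rng = np.random.default_rng(0)
# One microstate: x, y, z velocity components (m/s) for 100,000 atoms.
microstate_a = rng.normal(0.0, 400.0, size=(100_000, 3))
# A different microstate: the same speeds, shuffled among particles and with directions flipped.
microstate_b = rng.permutation(microstate_a) * rng.choice([-1.0, 1.0], size=microstate_a.shape)

# Same macrostate (temperature), up to floating-point rounding.
print(temperature(microstate_a), temperature(microstate_b))
```

Countless distinct microstates here map onto one and the same temperature, which is exactly the sense in which the macrostate is nothing over and above the particle configuration while still being a useful higher-level description.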

Applications: Mind and Morality

Applying this distinction between microstates and macrostates to the cases of the mind and morality, we see that under the reductive naturalistic worldview, mental and moral states of affairs can both be understood as kinds of macrostate. In the case of the mind, examples of macrostates could be ‘he perceives the colour red’, ‘she remembers her grandmother’s face’, or ‘I believe that it will rain tomorrow’. These are all mental states of affairs which are expressed in a psychological language involving appeal to beliefs, perceptions, desires, etc. According to the theory of reductive naturalism I am advocating, all such mental macrostates ultimately exist in virtue of the (exceedingly large) number of microstates that are capable of instantiating them. There is, for example, a very large number of possible ways the atoms in my brain could be arranged such that they correspond to being in a state of ‘deciding’. Indeed, it is possible that microstates quite different to those which exist in my brain are also capable of instantiating mental macrostates, such as the arrangements of atoms making up the circuitry of an artificial intelligence. This position in the philosophy of mind is known as functionalism, and holds that mental states are constituted by the functional workings of a given system, and that different physical systems may be capable of producing the same functions and therefore of yielding the same mental phenomena. The exact details of functionalism are not important here; the point is simply that such a view fits very readily within the reductive naturalist paradigm that I have been developing, and is capable in broad terms of making sense of how mental states can exist in a purely material world. The key idea, then, is that mental states are not some mysterious things that cannot be accounted for in the natural world. Rather, appeals to mental states such as beliefs, desires, perceptions, and even acts of free will, ultimately refer to very complex bundles of possible arrangements of fundamental particles. We cannot possibly specify in detail exactly what all these arrangements of particles look like, but nor do we need to, as the arrangements are defined functionally by the higher-level properties they instantiate. There is of course no need to replace such psychological terms with talk of fundamental particles, because that would distract from our purpose and lead us to get bogged down in irrelevant details. The point of this analysis, rather, is that such psychological language and the mental states it refers to can fit quite comfortably within a naturalistic worldview, without needing to appeal to the existence of any additional non-natural entities.
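
As a loose illustration of the multiple-realisability idea behind functionalism – the class names and the ‘memory’ example below are my own invention, purely for exposition – consider two systems with entirely different internal organisation that nonetheless realise the same functional state of remembering a fact:

```python
from abc import ABC, abstractmethod

class Rememberer(ABC):
    """The functional role: anything that can store a fact and later recall it."""

    @abstractmethod
    def store(self, fact: str) -> None: ...

    @abstractmethod
    def recalls(self, fact: str) -> bool: ...

class HashSetMind(Rememberer):
    """One realisation: facts held in a hash set."""

    def __init__(self) -> None:
        self._facts = set()

    def store(self, fact: str) -> None:
        self._facts.add(fact)

    def recalls(self, fact: str) -> bool:
        return fact in self._facts

class TapeMind(Rememberer):
    """A very different realisation: facts appended to a linear tape and scanned sequentially."""

    def __init__(self) -> None:
        self._tape = []

    def store(self, fact: str) -> None:
        self._tape.append(fact)

    def recalls(self, fact: str) -> bool:
        return any(item == fact for item in self._tape)

for mind in (HashSetMind(), TapeMind()):
    mind.store("the meeting is at noon")
    # Both systems occupy the same functional 'macrostate' despite different underlying structures.
    print(type(mind).__name__, mind.recalls("the meeting is at noon"))
```

The analogy is of course extremely crude, but it captures the functionalist point that the mental state is defined by what the system does, not by any one particular arrangement of its parts.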

We can apply much the same analysis to the case of morality. Morally good macrostates can be understood as states of affairs conducive to the flourishing or wellbeing of sentient creatures. Morally bad macrostates, by contrast, would be states of affairs that bring about the suffering and misery of sentient creatures. Obviously we would need to articulate in more detail what we mean by terms like ‘wellbeing’ and ‘misery’; however, since we can readily identify examples of each, I take it that these terms, while fuzzy, have a robust meaning that is sufficient for our purposes here. This position corresponds to the metaethical theory of reductive moral naturalism, though once again, the details of this theory are not of prime importance here. What I want to emphasise is simply the fact that moral states of affairs can be readily accorded a place in this naturalistic worldview, according to whether or not a particular microstate instantiates a macrostate that is conducive to wellbeing or misery. Thus, when we say something like ‘killing for fun is morally wrong’, this statement is true in virtue of the fact that the various microstates which instantiate the act of killing (obviously there are many ways to kill someone) also instantiate a macrostate in which the wellbeing of sentient creatures is diminished relative to a comparable macrostate in which this act of killing did not occur. There is no need to appeal to the existence of God or any other transcendent source of morality for such moral macrostates to pertain, as they exist purely in virtue of the fact that certain arrangements of fundamental particles instantiate the wellbeing of sentient creatures to a greater extent than other arrangements. Of course, whether one is motivated to act so as to bring about morally good states of affairs is another question entirely. My point here is simply to argue that the existence of morally good states of affairs is readily explicable under a reductive naturalistic worldview.

One possible line of objection to my arguments is that we still do not have a very good understanding of precisely how mental or moral states of affairs arise from (or ‘supervene on’) the interactions of fundamental particles. In particular, there is a sizeable gap in our knowledge between the level of the functioning of single neurons and the emergence of complex mental behaviours and sensations in large networks of neurons. As such, it might be argued that the claim that the latter arise solely from the interactions of the former is premature. In response, I would argue that it is in fact not at all premature to make such an inference. Recall that I am not claiming we have a complete theory of how all of nature works – science is an ongoing endeavour. All I am asserting is that we can account for the core phenomena that we need to, including the mind and moral value, without needing to appeal to any entities outside of the natural world. In doing so, I have given an account as to how the mind and morality can be conceptualised in a reductive naturalistic worldview – I have given ‘a place where they can fit’ in a naturalistic ontology. For this to be plausible, all that is needed is sufficient reason to think it plausible that higher order phenomena such as the mind can potentially arise solely as a result of the interaction of fundamental particles. And I think that the current state of knowledge in physics, chemistry, biology, neuroscience, and psychology is more than sufficient to affirm that such a belief is plausible. Certainly we don’t have the full explanation as to how this occurs, but I think we have ample evidence to infer that it is plausible that it does. Almost everyone is willing to believe that the immensely complex behaviour of financial markets arises purely as the result of the financial activities of individual traders and corporations, despite the lack of a detailed theory as to how exactly this occurs. Likewise, no one would seriously argue that fluid turbulence is the result of anything other than the interaction of molecules in the fluid, even though our understanding of the physics of turbulence is still relatively poor. I thus contend that we are similarly in a position to affirm the plausibility of mind and morality arising purely from neural activity in the brain (and hence ultimately the interactions of fundamental particles), even though we lack a complete theory as to how this occurs.

Conclusion

While I have argued that we can plausibly consider complex mental and moral macrostates as existing solely in virtue of the interactions of fundamental particles, I have not provided any arguments to prove that this must be the case. There may well be entities that exist outside of the natural world, and therefore the theory I have sketched here may constitute a drastically incomplete worldview. My argument, however, is that a reductive naturalistic worldview has sufficient explanatory power to account for the existence of all the phenomena we would wish it to. Furthermore, reductive naturalism is a highly parsimonious worldview, meaning that it posits only the existence of the natural world (whose existence almost all worldviews accept), and nothing else besides. My argument, therefore, is that if we can account for all that we need to from the natural world alone, then we have no reason to posit the existence of anything beyond the natural world. As to the existence of entities outside of nature we, like Laplace, therefore have ‘no need for that hypothesis’.

A Critique of Crude Positivism: Why the Epistemology of Dawkins and Hawking Fails

Introduction

In this essay I wish to address a particular set of opinions that seem to be quite popular among many contemporary atheists, rationalists, and freethinkers. It is not a single specific position, but rather a patchwork of overlapping ideas and perspectives sharing a more-or-less constant core. Being somewhat amorphous, the position of which I am speaking does not really have a distinct name. For the purposes of this essay, however, I shall refer to this constellation of views as ‘crude positivism’. ‘Positivism’ is a complex and controversial philosophical perspective, which broadly speaking is characterised by a strong respect for science and empirical enquiry, and an opposition to truth claims based on metaphysical speculation, faith, or authority. My purpose here is not to attack positivism itself, but rather the relatively crude form of it that is popularised, to varying degrees, by figures such as Richard Dawkins, Sam Harris, Peter Boghossian, Neil deGrasse Tyson, Lawrence Krauss, and Stephen Hawking. While once again emphasising that I am describing a family of related and overlapping viewpoints rather than a single well-defined doctrine, three of the most commonly encountered components of this ‘crude positivism’ are the following:

  1. Strict evidentialism: the ultimate arbiter of knowledge is evidence, which should determine our beliefs in a fundamental and straightforward way, namely that we believe things if and only if there is sufficient evidence for them.
  2. Narrow scientism: the highest, or perhaps only, legitimate form of objective knowledge is that produced by the natural sciences. The social sciences, along with non-scientific pursuits, either do not produce real knowledge, or only knowledge of a distinctly inferior sort.
  3. Pragmatism: science owes its special status to its unique ability to deliver concrete, practical results – it ‘works’. Philosophy, religion, and other such fields of enquiry do not produce ‘results’ in this same way, and thus have no special status.

My goal in this piece will be to challenge these three claims. In particular, I will argue that the ‘crude positivism’ typified by these three views presents an overly narrow conception of knowledge, and represents an ultimately fragile basis upon which to ground challenges to superstition, pseudoscience, and other forms of irrationality. My key contention is that we need to move beyond such crude positivism in order to have a stronger intellectual underpinning for the atheistic/rationalist/freethought movements. A final note on style: when I use the phrase ‘crude positivists’ I don’t mean to imply a well-defined group of people. I just use it as shorthand to refer to those who, to varying degrees, hold to one or more of the three positions outlined above.

Strict Evidentialism

Crude positivists insist that all beliefs, or at least all beliefs concerning anything of importance, ought to be based upon appropriate evidence. While I agree with this as an abstract principle, I have concerns about the manner in which crude positivists typically interpret and apply this maxim in practice. The trouble is that, when challenged, nearly everyone will be able to provide some sort of justification for their beliefs, something that they regard to be ‘evidence’. To consider a specific example, the evangelical Christian may claim to know that God works in the lives of believers because they have seen it happen with their own eyes, and experienced it personally in their own lives. Needless to say, this is not the sort of ‘evidence’ that adherents of crude positivism are likely to accept as legitimate. The question, however, is why not? After all, the justification in question is empirically based, in that it is derived from making observations about the world. Generally positivists respond that such experiences are uncontrolled and anecdotal, and thus cannot be trusted to provide reliable evidence. To this, however, the Christian may simply agree, arguing that while such experiences are anecdotal and thus do not qualify as scientific evidence, nevertheless they do constitute evidence of the relevant sort for the domain in question, namely the domain relating to knowledge and experience of God. According to this perspective, only certain particular phenomena or aspects of reality are susceptible to the investigative methods of the empirical sciences, and the nature of God and mankind’s relationship to him would not be one of the areas that science can study. Such phenomena can still be studied empirically, but only by applying different standards from those used in scientific inquiry, using methods that are much more personal and experiential. Scientific methods are applicable in the scientific domain, while other methods and other forms of empirical evidence are applicable in other domains. I am not attempting to defend this ‘separate domains’ position. Instead, I am arguing that it is not sufficient to respond to a position like this by simply asserting that beliefs should be based on evidence, since that is not the point under dispute. That is, the question is not whether some form of ‘evidence’ is important, but what type of evidence is deemed acceptable, and how that evidence justifies the claim being made.

A related problem concerns the issue of how evidence should be interpreted. Crude positivists often speak as if evidence is self-interpreting, such that a given piece of evidence simply and unambiguously picks out one singular state of affairs over all other possibilities. In practice, however, this is almost never the case, as evidence nearly always requires an elaborate network of background knowledge and pre-existing theory in order to interpret. For example, in order to understand a historical text, one requires not only knowledge of the language in which it is written, but also a broad understanding of the relevant social and political context in which the text was written. Likewise, the raw outputs of most scientific observations or experiments are unintelligible without the use of detailed background theories and methodological assumptions.

Given the important role that background assumptions and perspectives play in shaping our interpretations of a given piece of evidence, it is very common for different people coming from different perspectives to conclude that the same evidence supports wildly different conclusions. For instance, many young earth creationists interpret the fossil and other evidence in light of their pre-existing belief that the Bible is the literal and infallible word of God, and as a result they conclude that the extant evidence points to a divine creation event in the recent past, devising various ingenious methods of reconciling their beliefs with the apparent evidence to the contrary. My intent is not to defend creationists, but to illustrate that it is not enough to simply say that creationists ignore the evidence. These creationists are responding to the evidence (indeed they argue that it supports their position), but are interpreting it differently on the basis of different suppositions and approaches. We cannot simply dismiss them as being blinded by their presuppositions, since (as I have just argued) evidence can never be interpreted in a vacuum, free of assumptions or preconceptions, but can only ever be interpreted in the context of an existing methodological framework and various background assumptions. To say this isn’t to endorse some form of epistemic relativism, but simply to point out that if we want to explain why creationists and others like them are mistaken, we have to move beyond the crude positivistic cry of ‘seek the evidence’, and articulate a more detailed set of criteria and epistemological principles upon which certain initial assumptions and modes of interpretation are to be preferred over others. We need to do a better job of explaining what types of evidence are most reliable, how to interpret evidence, and why these approaches are more conducive to the formation of true beliefs than other, competing approaches.

Narrow Scientism

The second aspect of ‘crude positivism’ that I want to discuss is the view I have termed ‘narrow scientism’, which refers to the tendency to dismiss, or significantly downplay, the importance and status of all disciplines outside the natural sciences. Physics, chemistry, biology, and geology produce reliable knowledge, while psychology is a bit of a question mark, and economics and political science are clearly ‘not sciences’, but belong with disciplines like philosophy and much of the humanities in the domain of fuzzy opinion rather than verifiable fact. This, at least, is the typical perception among advocates of crude positivism. In my view, however, this disciplinary classification is arbitrary, and fails to demarcate any epistemologically relevant distinction. In particular, what is the justification for the view that the only ‘real sciences’ are the natural sciences? It cannot be the result of having adopted a superior set of methodologies, since in many cases there is more methodological continuity across different disciplines than within single ones. For example, analytical chemistry and cognitive psychology are both largely focused on laboratory experiments, while in astrophysics and macroeconomics experiments are mostly impossible, and so these disciplines instead rely predominantly upon observation and the development of mathematical theories. Likewise, piecing together the evolutionary relationships of different species has more in common with the linguistic analysis of different languages than it does with other subfields of biology. Nor can it be the subject matter of the disciplines which sets them apart, since there is a continuum between the study of primate behaviour in biology and the study of human behaviour in the social sciences, and also between the study of natural history in geology and biology, and the study of human history in the social sciences and humanities. Furthermore, many mathematical models originally developed in the context of physics and chemistry have also been profitably applied to many other fields, especially economics and sociology (e.g. equilibrium theory, network analysis, complex systems theory). My contention here is not that there is literally no difference between the natural sciences and the social sciences or non-scientific disciplines. I do, however, think that there is a great deal of continuity and intermingling between them, both in terms of methodologies and subject matter, a fact which belies the sharp science/non-science dichotomy advocated by crude positivists.

This is not, however, merely a question of whether disciplinary boundaries are sharp or fuzzy. The real point I am trying to make is that crude positivists simply have no justification for placing the natural sciences (whether their boundaries are fuzzy or not) on a pedestal above all other disciplines. That is, I do not think the natural sciences are epistemically privileged in the way that crude positivists claim they are. After all, what is so special about the natural sciences relative to, say, economics, history, or even blatant pseudosciences like astrology? The most straightforward answer, and I think the one crude positivists mostly have in mind, is that the natural sciences apply a rigorous scientific method not found in any of these other disciplines, and this method is more conducive to finding truth than other competing methods. My response to this is threefold. Firstly, I note that this is not a claim that finds a home in any of the natural sciences (i.e. it is not a scientific claim), but seems to appeal to philosophical criteria that lie outside of science. I do not think there is anything wrong with that, except for the fact that it seems to sit at odds with the crude positivistic view that only science is to be trusted. Secondly, as I have argued above, it is simply not true that the natural sciences systematically apply different methodologies from those used in other disciplines. Within any discipline the quality of work varies dramatically, some being much more careful and rigorous than the rest, and this applies just as much to the natural sciences as to other disciplines. Thirdly, and most importantly, if the superior status of the natural sciences is based on their superior adherence to a particular set of epistemological principles, then it is those principles themselves that are the true bearers of the superior status, not the natural sciences themselves. Applying these same principles to any discipline should yield knowledge justified to similarly rigorous standards. If this is correct, and what is at the bottom of the success of the natural sciences is adherence to a particular methodology or methods of inference, then it is those methods that we should focus on championing, whatever discipline they may be applied in.

It has been argued that the subject matter of the social sciences and other such disciplines is inherently ‘messier’ and more complex than the comparatively simpler physical systems studied by the natural sciences. However, even if this is true, the application of appropriate methodologies should still result in reliable knowledge – the only difference will be that the knowledge will be less precise and known with less confidence, since our understanding of the system in question is less complete and less detailed. This will not, however, result in a qualitatively distinct and far inferior form of knowledge, contrary to the claims of the crude positivists. Some argue that the subject matter of history and social science is such that it is not suited to study by the rigorous methods of natural science. If this were true, it would seem to leave us with two options: either no reliable knowledge about such things is possible in principle (i.e. we can say little or nothing about human history, how societies and economies work, etc), or the reliable methods of attaining knowledge in such disciplines are distinctly different from and at odds with those used in the natural sciences.

The former possibility strikes me as deeply implausible – why should we not at least be able to know a great deal about such topics through careful investigation, and furthermore how could we possibly know if this were the case given that we could not study these topics? The latter option seems equally unpalatable, for it is essentially identical to the argument by which the evangelical Christian claims that their supernatural claims are outside the bounds of scientific investigation. Indeed, if it is the case that the appropriate methods for studying any subject outside of the natural sciences are fundamentally different to and at odds with scientific methods, then any ground for objecting to irrational or unscientific claims is lost. Whether the claims concern religion (“the divine cannot be studied scientifically”), alternative medicine (“human health is too holistic to be subjected to scientific methods”), or the paranormal (“the spirits don’t respond under controlled conditions”), it can always be argued that the subject matter lies outside of the natural sciences, and hence that different, non-scientific investigative methods are applicable. In my view, this absurd outcome shows that, if we grant superior respect and status to the claims of the natural sciences, it must be because (when conducted properly) the natural sciences utilise justified and reliable general epistemological processes, processes which should similarly be conducive to knowledge acquisition when applied to other subjects. Crude positivists who instead reject any application of scientific methods outside of the natural sciences cannot then simultaneously berate those making religious, paranormal, and supernatural claims for failing to use scientific standards and methods, since by their own admission such methods are only applicable to certain subjects. Narrow scientism, then, is at odds with the core principle of basing all important beliefs upon reliable evidence.

Pragmatism

The third and final aspect of ‘crude positivism’ that I want to discuss in this piece is pragmatism, the appeal to the past successes of science as the primary and overriding justification for its epistemically superior status. Science, so the argument goes, simply ‘works’: it puts men on the moon, builds aircraft that fly, and makes transgenic fish that glow in the dark. Ways of knowing that rely on appeals to authority, esoteric knowledge, or personal experience are inferior precisely because they do not ‘work’ in this way. While I do think this sort of argument has some validity, I think the crude positivist goes too far in advocating practical utility as the defining feature of knowledge. One simple problem with this approach is that many people think that prayer, mystical experiences, and the like ‘work’ in a very real way – they pray to Jesus, and they feel God’s love pouring out over them. The crude positivist, of course, is unlikely to accept that as a valid example of ‘working’; however, all this shows is that science comes out best when judged by its own criteria of what counts as legitimate ‘success’, while the types of ‘success’ (e.g. drawing closer to God, becoming one with nature, etc) defined by other ways of knowing are simply disregarded.

Beyond this issue of defining criteria for success, there is a deeper philosophical issue concerning the relationship between the ‘success’ of a theory, and the ‘truth’ of that theory. Most of the examples of science ‘delivering results’ are, properly understood, really applications of engineering, not science itself. Of course, engineers utilise scientific findings and theories, but there is nevertheless an important distinction between the development of theory and its practical application. This is important because some schools of thought in philosophy, especially the sort of instrumentalist, pragmatic viewpoints that crude positivists are most closely aligned with, argue that the ability of a theory to deliver successful applications is insufficient to validate the accuracy of that theory in describing the way the world truly is. One example is that of Ptolemaic astronomy: it was capable of generating accurate predictions of the positions of the planets despite the fact that its underlying model of reality (an Earth-centred cosmos with the planets carried on crystalline spheres) is completely wrong. To take a more recent example, scientists and engineers still routinely use chemical and physical models which treat atoms as solid spheres interacting in accordance with the laws of classical mechanics. As a description of reality, this is entirely incorrect – atoms are mostly empty space, and what is not empty space consists of protons, neutrons, and electrons, which according to our best theories behave (very loosely) like smeared-out probability wavepackets, evolving in accordance with the laws of quantum (not classical) mechanics. Notwithstanding this completely inaccurate description of the underlying reality, however, the ‘billiard balls’ approach is still very useful and ‘delivers results’ in a wide range of applications. Such examples are one of the major arguments used by those philosophers who adhere to a position known as scientific anti-realism, which is the view that while science produces very useful predictive models, it does not necessarily describe the way things ‘truly are’. Thus, according to this view, science is not in the business of finding ‘truth’ per se, but merely of producing theories that are ‘empirically adequate’ and useful for prediction and practical application.

My point here is not to argue that anti-realism is correct, or that science doesn’t describe reality. Rather, my argument is that either way, these considerations pose a problem for the simple pragmatism of crude positivists. If, on the one hand, scientific anti-realism is false, and scientific theories do truly describe the way the world is, then the extreme focus on scientific theories being special because they ‘work’ becomes difficult to justify, since under this view science is special not predominantly because it ‘works’, but because it yields true descriptions of reality. The simplistic pragmatism defence thus simply cannot work, and the fact that other disciplines (e.g. philosophy or theology) may not ‘deliver results’ does not mean that they cannot accurately describe reality. On the other hand, if scientific anti-realism is true, and scientific theories don’t necessarily say much about the way reality truly is, then the crude positivist has no basis for critiquing non-scientific ways of knowing for not making predictions or ‘delivering results’. This is because these other ways of knowing (e.g. faith based) don’t necessarily claim to be able to provide predictive models, but claim to describe parts of reality as they truly are. If science and faith/intuition/etc are not even trying to do the same thing, the one attempting to generate useful models, the other not caring about predictive accuracy but about providing true descriptions of reality, then it is unclear how the crude positivist can even compare the two in the way they seem to want to. This approach also seems hard to reconcile with the fact that many adherents of crude positivism do very clearly make truth claims about subjects like religion and the paranormal. If this form of pragmatism is correct, then science and non-science aren’t incompatible, but rather incomparable, for they are not even trying to do the same thing.

Conclusion

Some people will doubtless read this piece as an attack upon the value of science, or a defence of pseudoscientific, faith-based or emotion-based methods of reasoning. As I have said throughout this piece, however, this is not my intention at all. My goal is in fact to equip sceptics and rationalists to deliver a robust, cogent defence of the value of science and critical thinking in learning about the world, and the superiority of such methods over various rivals. What concerns me is that the constellation of views that I here describe under the label ‘crude positivism’ is quite popular among many rationalists and sceptics. As I have argued, however, I think these views are philosophically naive and very hard to rigorously defend. Worse, some of the more intelligent defenders of non-scientific practices, including religious apologists, practitioners of alternative medicine, and defenders of various pseudosciences, are aware of the problems with such views, and will vigorously critique rationalists who espouse them. I think we can answer their objections, but to do so requires a greater familiarity with philosophy and the relevant methodological issues than many rationalists and sceptics have, especially when they so often dismiss these fields as irrelevant. In order to advance the cause of science and rationality, therefore, we need to abandon ‘crude positivism’, and replace it with a more sophisticated, thoughtful, and philosophically rigorous account of science and rationality.