Extinction Comes First: Traditional LD and the Nuclear Arsenals Topic

With the current Lincoln-Douglas topic dealing directly with nuclear weapons, it makes sense that a lot of debaters will be making arguments about existential risk. Extinction impacts aren’t all that common in traditional LD, which is why I decided to write a brief explanation of how extinction impacts function in round, offer some advice on how to beat them, and provide some cards for those of you who want to run them.

Before I discuss how extinction arguments function, it’s important that you understand impact calculus (how to weigh impacts). Generally, impact calc compares impacts based on four criteria:

  1. Magnitude – How large or severe an impact is. For example, Ebola vs. the flu.
  2. Scope – How many people are affected by the impact. For example, an earthquake in Los Angeles vs. an earthquake in Youngstown.
  3. Probability – How likely the impact is to happen. For example, World War 3 vs. an increase in terror attacks.
  4. Time Frame – When the impact will occur and how long it will last. For example, immediate harm to the poor vs. climate change.
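These criteria don’t reduce to a single formula, but a crude sketch can show how they interact. The weighing function and all of the numbers below are our own illustration, not a standard model from debate theory:

```python
from dataclasses import dataclass

@dataclass
class Impact:
    name: str
    magnitude: float    # severity per person affected (0-1, illustrative scale)
    scope: int          # number of people affected
    probability: float  # chance the impact occurs (0-1)
    years_out: float    # rough time frame until the impact hits

def expected_harm(i: Impact, discount: float = 0.0) -> float:
    """Crude expected-harm heuristic: severity x scope x probability,
    optionally discounted for impacts further in the future."""
    return i.magnitude * i.scope * i.probability / (1 + discount * i.years_out)

# Made-up numbers purely for illustration:
ww3 = Impact("World War 3", magnitude=1.0, scope=2_000_000_000,
             probability=0.000001, years_out=10)
terror = Impact("More terror attacks", magnitude=0.8, scope=10_000,
                probability=0.3, years_out=1)

# With a low enough probability, even a huge-magnitude impact can
# lose the weighing debate:
print(expected_harm(ww3) < expected_harm(terror))  # True given these numbers
```

The whole extinction-first debate is, in effect, an argument over whether any probability is “low enough” once the magnitude is large enough.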

This isn’t meant to be a detailed explanation of impact calculus (we do have a video planned for that), but it should give you enough information to understand the rest of this article.

When we talk about extinction, we’re typically dealing with a very high magnitude, very low probability impact. What’s unique about the nuclear topic in LD (and to an extent the fossil fuel topic) is that extinction is actually somewhat plausible, whereas normally in traditional LD, extinction impacts are not viewed as serious or legitimate arguments. I’m not saying extinction is likely or that it will definitely happen, but it also feels wrong to totally dismiss the idea that nuclear weapons have the capacity to kill everyone on earth. Because of this, running arguments about minimizing existential risk makes sense. It’s not weird, and it doesn’t feel forced.

So then, let’s examine the argument that extinction comes first. One of the most commonly read authors on the subject is Nick Bostrom, and specifically this card:

Bostrom 11

Even if we use the most conservative of these estimates, which entirely ignores the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^18 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least ten times the value of a billion human lives. The more technologically comprehensive estimate of 10^54 human-brain-emulation subjective life-years (or 10^52 lives of ordinary length) makes the same point even more starkly. Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilization a mere 1% chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.

To summarize the argument: Bostrom contends that when we count how many human lives would be lost in an existential catastrophe, we can’t stop at the number of humans currently alive. Instead, we have to include all future generations of humans in our calculus; in other words, we look to the maximum possible size of the human race when calculating the magnitude of extinction. This gives us an EXTREMELY large-magnitude impact. He then goes a step further and calculates the value of reducing existential risk in terms of human lives, which is also a huge number. This directly ties even a tiny reduction in the probability of extinction to an enormous impact.
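To make the arithmetic concrete, here is the conservative version of Bostrom’s calculation. The numbers come from the card itself; writing it out as explicit arithmetic is our own illustration:

```python
# Conservative estimate from the card: an existential catastrophe costs
# at least 10^18 potential future human lives.
future_lives = 10**18

# "One millionth of one percentage point" = (1 / 1_000_000) * (1 / 100)
# = a 1 / 10**8 reduction in existential risk.
expected_lives_saved = future_lives // 10**8

print(expected_lives_saved)  # 10_000_000_000 -> ten billion, i.e. "at least
                             # ten times the value of a billion human lives"
```

This is why attacking probability alone struggles against Bostrom: any nonzero probability reduction, multiplied by a magnitude this large, still yields an enormous expected value.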

The simplest way to get around this is to read a non-consequentialist framework and win the framework-level clash. For the sake of this article, though, we’re going to assume you’re not doing that.

The typical response to high-magnitude, low-probability impacts is to focus on probability: explain that the impact is unlikely to happen and that the judge should prefer your higher-probability impacts. Of course, this doesn’t work as well when probability is tied to magnitude via Bostrom’s math. Still, in most traditional LD rounds, attacking probability will be sufficient; arguing that low-probability impacts shouldn’t be weighed will probably be enough to beat Bostrom (again, in traditional LD). To make that argument, you could use the following card:

RESCHER 2003 (Nicholas, Prof of Philosophy at the University of Pittsburgh, Sensible Decisions: Issues of Rational Decision in Personal Choice and Public Policy, p. 49-50)

On this issue there is a systemic disagreement between probabilists working on theory-oriented issues in mathematics or natural science and decision theorists who work on practical decision-oriented issues relating to human affairs. The former take the line that small numbers are small numbers and must be taken into account as such—that is, the small quantities they actually are. The latter tend to take the view that small probabilities represent extremely remote prospects and can be written off. (De minimis non curat lex, as the old precept has it: in human affairs there is no need to bother with trifles.) When something is about as probable as a thousand fair dice when tossed a thousand times coming up all sixes, then, so it is held, we can pretty well forget about it as worthy of concern. As a matter of practical policy, we operate with probabilities on the principle that when x ≤ ε, then x = 0. We take the line that in our human dealings in real-life situations a sufficiently remote possibility can—for all sensible purposes—be viewed as being of probability zero. Accordingly, such remote possibilities can simply be dismissed, and the outcomes with which they are associated can accordingly be set aside. And in “the real world” people do in fact seem to be prepared to treat certain probabilities as effectively zero, taking certain sufficiently improbable eventualities as no longer representing real possibilities. Here an extremely improbable event is seen as something we can simply write off as being outside the range of appropriate concern, something we can dismiss for all practical purposes. As one writer on insurance puts it: [P]eople…refuse to worry about losses whose probability is below some threshold. Probabilities below the threshold are treated as though they were zero.
No doubt, remote-possibility events having such a minute possibility can happen in some sense of the term, but this “can” functions somewhat figuratively—it is no longer seen as something that presents a realistic prospect.
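Rescher’s de minimis rule (when x ≤ ε, then x = 0) is simple enough to state directly. The threshold value below is our own placeholder; Rescher doesn’t specify a number:

```python
EPSILON = 1e-9  # illustrative threshold; Rescher gives no specific figure

def practical_probability(p: float) -> float:
    """Treat sufficiently remote possibilities as having probability zero."""
    return 0.0 if p <= EPSILON else p

# A thousand dice all coming up sixes gets written off entirely...
print(practical_probability(1e-50))  # 0.0
# ...while an ordinary risk is left alone.
print(practical_probability(0.05))   # 0.05
```

Note how this rule directly collides with Bostrom: under the de minimis view, the tiny risk reduction simply rounds to zero before it ever gets multiplied by the huge magnitude.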

The strongest response from a consequentialist perspective is to challenge the premise that unborn hordes of people matter in a utilitarian calculus. Essentially, this limits the future generations that Bostrom includes in his calculus. This in and of itself minimizes your opponent’s impact (i.e., it reduces the scope of a nuclear war), but it is mostly useful for breaking down the logic behind Bostrom’s numbers. You can then argue that Bostrom’s final figures (e.g., a hundred billion times a billion human lives) are problematically derived and thus shouldn’t apply.

Another response to extinction comes first is that we can’t know the future, meaning any action we take could carry some degree of existential risk. A framework focused on reducing existential risk therefore leads to paralysis. Of course, your opponent would likely respond that doing nothing also increases existential risk, but that only underscores the problem: we can’t know for sure how likely any action is to lead to extinction. The implication is that nothing can guide action, especially in terms of what a government ought to do, again leading to policy paralysis. Fried [1] explains:

[1] Fried, prof. of law at Harvard, 1994 (Charles, “Absolutism and its Critics”, p. 170)

“Absoluteness means that the consequences are overwhelmingly bad. Then, not only are we forbidden to do anything, for anything carries with it a risk, we are indeed required to do nothing. So this interpretation is actually a prescription for paralysis. This norm, by virtue of this view of its absoluteness, takes over the whole of our moral life. This situation opens the possibility of insolvable contradictions within any system containing more than one absolute norm. Now, deontological systems avoid the paralysis and contradiction of this interpretation.”

In addition, you could argue that under a standard of minimizing existential risk, anything is justifiable or permissible if it *somehow* leads to even the slightest reduction in existential risk. Even an action that kills over a million people would be justified under their framework, so long as it minimizes existential risk. Basically, because the standard only looks at the impact of extinction, actions we take to be objectively wrong (e.g., genocide or slavery), for either categorical or consequentialist reasons, could be correct under this standard given even a tenuous link to reducing the risk of extinction.

In conclusion, arguments about existential risk rely on a very high magnitude to force everyone to overlook their low probability. The way to counter them is to challenge the premise that produces such high-magnitude impacts, point out the problematic implications of this style of thinking, and, finally, turn extinction-first logic to your own advantage.
