How To Indict Pieces of Evidence
No matter what event you compete in – LD, PF, or Policy – you likely use a good chunk of evidence. Some debaters rely on evidence more than others, but almost everyone uses it to some extent. Because of that, you’ve likely been in rounds where your opponent has a piece of evidence that sounds really good. Almost too good to be true. How do you respond? How do you quickly find flaws in their evidence? In this article, I’m going to walk you through some of the basic ways you can find flaws in, or indict, evidence.
But before that, a disclaimer: please don’t do this in every round to every piece of your opponent’s evidence. Most judges are okay with you indicting one or two sources, but if you make the whole round about it, judges really, really do not like to see pointless evidence debates. These tips are designed to promote thoughtful, detailed debate, not to score cheap wins. Use them strategically and responsibly.
1.) Recency
The most obvious way to poke holes in evidence is through recency. If your opponent has a piece of evidence from 5 years ago, that’s still somewhat recent. But if their evidence is from 1978, that’s a really long time ago, and its age could affect its applicability. However, if you’re going to argue that your opponent’s evidence is old and the judge shouldn’t look to it, you must give a reason why its being old makes it irrelevant. Maybe a major policy change occurred after the study was done, or maybe a significant and relevant event took place. Whatever it is, you should give some reason why the evidence being old makes it bad evidence.
For example:
“In my opponent’s second contention, they make the argument that American military intervention decreases terrorism. They give you a card from Smith 1982 that finds a 15% increase in US military presence leads to a 1% decrease in terrorism. However, the problem with their evidence is that it is from almost 40 years ago, and this matters because the US has become militarily involved in a number of locations since this study was conducted, like Iraq and Afghanistan, which has affected the relationship between US military presence and terrorism.”
2.) Sample Size & Data Inclusion
Another way to indict a piece of evidence is to look at the source’s sample size, as well as what was included in the sample – that is, how the data was accounted for in the study. In my experience, this tends to be the most useful approach for debaters, for two reasons: first, it is where most problems with evidence arise, and second, it is the easiest for debaters to pinpoint in round. There are a number of ways to critique the data that studies use. These include:
- The initial sample size is too small
- Let’s say a study is analyzing the health effects of a new product, yet it only tests that product on 25 people. That’s a really small sample, and trying to extrapolate trends from such a small sample is statistically unreliable – a handful of unusual participants can swing the result.
- The initial sample data does not include important/relevant data
- This response could be used when a study leaves out data you think might be relevant. For example, a study may analyze the effect military aid has on terrorism but exclude specific kinds of aid, such as arms sales – maybe it only looks at cash transfers. Leaving out such an important category of data could really skew the results.
- The sample period is too short
- Use this response when your opponent’s study examines the data over a really short timeframe. For example, a study could analyze the benefits of a Universal Basic Income but only look at its effects over a span of 6 months. This discounts any long-term benefits or harms.
- The sample period leaves out important developments
- Let’s say an opponent’s piece of evidence looks at the relationship between nuclear weapons and conflict from 1950–1990. In theory, it’s solid – it has 40 years of data. However, it discounts the last 30 years, in which different countries have acquired nuclear weapons and modern technology has changed what conflict looks like, so the relationship could be totally different today. This isn’t about recency (tip #1) but, rather, about leaving out important historical or empirical context.
- The study shows correlation, not causation
- Studies often cannot establish causality, and simply show correlational relationships between variables. Correlation refers to whether variables are linearly related and change together. Causation refers to a “cause and effect” relationship: a change in one variable causes a particular change in the other. Take an example provided to us by the NSDA: “Consider this argument about climate change, made by our friends at the Church of the Flying Spaghetti Monster: “You may be interested to know that global warming, earthquakes, hurricanes, and other natural disasters are a direct effect of the shrinking numbers of Pirates since the 1800s … . As you can see, there is a statistically significant inverse relationship between pirates and global temperature.” Since the 19th century, global temperatures have risen and (remember our logical connectors now), the number of pirates has declined. So, does it then follow that global warming is due to the decline in the number of pirates?” However, I would advise against simply saying “their study is correlation, not causation!” and moving on. Instead, offer a reason why this matters. Perhaps, because the study is only correlational, it ignores a confounding variable that actually drives both trends and changes the conclusions.
- The way the study acquired their data is faulty
- Let’s say a study is trying to figure out the effects fossil fuels have on human health. To do this, the researchers email 1,000 random people across the US and ask whether they have experienced any health problems related to air pollution, such as asthma. While in theory this is a valid question, the way the study goes about it could lead to inaccurate results, because each recipient chooses whether to respond, which means the data is self-selected. Self-selected data may be skewed because individuals who feel personally affected by the topic or question are more likely to respond. There are a number of other critiques you could make about how a study acquires its data.
- Data may be missing
- Sometimes, studies simply have missing data. Many researchers rely on publicly available data or on institutions that archive data, and records can get lost. Take, for example, this study: “Our dependent variables are count variables (number of US citizens killed in terrorist attacks and number of terror incidents involving Americans). For all reported results, the negative binomial estimate is more reliable than the Poisson model, because the sample variance of the number of killings and incidents exceeds its sample mean by a factor of approximately 35 and 7, respectively. In robustness tests, we also used a variant of the negative binomial called the zero-inflated negative binomial. We compute standard errors adjusted for clustering on the terrorists’ home country, though the variations in killings and incidents over time are large and clustering is therefore of minor importance. Our sample covers the period 1978 to 2005 and up to 149 countries. Due to missing data on the explanatory variables not all possible observations can be included in the analysis. We do not include year dummies to account for trends in foreign terror on Americans, but our results remain fully robust if we do.” The study outright says that data is missing, which, it goes on to note, excludes possible observations from the analysis.
- Sometimes, the study will just tell you
- This is really, really common. The point above includes a great example of this one, too. Oftentimes, studies are forthright about possible errors or issues with their analysis, typically in their data and/or research-model sections. With some quick scanning of the paper, the evidence may indict itself for you.
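The “correlation, not causation” point above is easy to demonstrate for yourself. Here is a minimal sketch in plain Python, using made-up numbers (the pirate and temperature figures are hypothetical, not real data): any two series that merely trend in opposite directions will show a strong inverse correlation, even with zero causal connection.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient: +1 = perfect positive linear
    relationship, -1 = perfect negative, 0 = no linear relationship."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical yearly figures: pirates steadily decline while
# global temperature steadily rises. Neither causes the other.
years = list(range(10))
pirates = [1000 - 90 * t for t in years]        # falling trend
temperature = [14.0 + 0.05 * t for t in years]  # rising trend

r = pearson(pirates, temperature)
print(round(r, 3))  # -1.0: a "perfect" inverse correlation, zero causation
```

The takeaway for round: a big correlation coefficient by itself proves nothing about cause and effect, so push your opponent on what third variable (here, simply the passage of time) could be driving both trends.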
3.) Google The Source/Concept
A really effective yet easy way to find problems with evidence is to do some research of your own. You can search either the article/study itself or the concept it discusses. Oftentimes, this alone can turn up responses.
For example, one of the common arguments on the current Jan/Feb LD topic is that eliminating nuclear weapons will increase the chance of countries using biological/chemical weapons. This is typically supported by the “Poor Man’s Atomic Bomb” argument. But when you Google that specific phrase, the first search result is “The myth of biological weapons as the poor man’s atomic bomb”. So sometimes preliminary research alone can lead you to papers that indict a particular study, source, or idea.
If you’re not having any luck with a preliminary search, you can try searching the evidence/article title or the concept with phrases such as:
- “The problems with”
- “The myth of”
- “A response to”
- “Criticisms of”
- “A critique of”
- “An analysis of”
Using key phrases like these while searching for indictments of studies, concepts, or authors can be very helpful.
For example, if one wanted to respond to LD frameworks that centered around John Rawls’ theory of justice, then searching “A critique of John Rawls” can help debaters get started in finding flaws with Rawls’ theory.
These are some of my top tips for indicting pieces of evidence. I hope you found them helpful and that you now approach evidence even more critically!