Component fallacies

Component fallacies are errors in inductive or deductive reasoning, or in syllogisms whose terms fail to overlap.

Begging the Question (also called Petitio Principii; this term is sometimes used interchangeably with Circular Reasoning): If writers assume as evidence for their argument the very conclusion they are attempting to prove, they engage in the fallacy of begging the question. The most common form of this fallacy occurs when the first claim is loaded with the very conclusion one has yet to prove. For instance, suppose a particular student group states, “Useless courses like English 101 should be dropped from the college’s curriculum.” The members of the student group then immediately move on in the argument, illustrating that spending money on a useless course is something nobody wants. Yes, we all agree that spending money on useless courses is a bad thing. However, those students never did prove that English 101 was itself a useless course–they merely “begged the question” and moved on to the next “safe” part of the argument, skipping over the part that’s the real controversy, the heart of the matter, the most important component. Begging the question is often hidden in the form of a complex question (see below).

Circular Reasoning is closely related to begging the question. Often writers using this fallacy will take one idea and phrase it in two statements. The assertions differ sufficiently to obscure the fact that the same proposition occurs as both a premise and a conclusion. The speaker or author then tries to “prove” his or her assertion by merely repeating it in different words. Richard Whately wrote in Elements of Logic (London 1826): “To allow every man unbounded freedom of speech must always be on the whole, advantageous to the state; for it is highly conducive to the interest of the community that each individual should enjoy a liberty perfectly unlimited of expressing his sentiments.” Obviously the premise is not logically irrelevant to the conclusion, for if the premise is true, the conclusion must also be true. The premise is, however, useless in proving the conclusion, since it merely restates it. In the example, the author is repeating the same point in different words and then attempting to “prove” the first assertion with the second one. A more complex but equally fallacious type of circular reasoning is to create a circular chain of reasoning like this one: “God exists.” “How do you know that God exists?” “The Bible says so.” “Why should I believe the Bible?” “Because it’s the inspired word of God.” If we draw this out as a chart, it looks like this:

[Figure: the three claims drawn as a circular chain, each one supported only by the next]

The so-called “final proof” relies on unproven evidence set forth initially as the subject of debate. Basically, the argument goes in an endless circle, with each step of the argument relying on a previous one, which in turn relies on the first argument yet to be proven. Surely God deserves a more intelligible argument than the circular reasoning proposed in this example!
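
The chain above can even be checked mechanically. In this minimal Python sketch (the claim strings are paraphrases of the example, chosen for illustration), each claim points to the claim offered in its support; following the pointers and watching for a repeat detects the circle:

```python
# The justification chain from the example, as a tiny directed graph
# (claim -> the claim offered in its support).
supports = {
    "God exists": "The Bible says so",
    "The Bible says so": "The Bible is the inspired word of God",
    "The Bible is the inspired word of God": "God exists",
}

def is_circular(claim, seen=None):
    """Walk the chain of support; the argument is circular if any claim recurs."""
    seen = set() if seen is None else seen
    if claim in seen:
        return True          # we are back where we started: a circle
    if claim not in supports:
        return False         # the chain bottoms out in independent evidence
    seen.add(claim)
    return is_circular(supports[claim], seen)

print(is_circular("God exists"))   # True: every step leans on the unproven first claim
```

A sound argument would bottom out in a claim that needs no further support inside the chain, in which case the walk terminates and the function returns False.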

Hasty Generalization (Dicto Simpliciter, also called “Jumping to Conclusions” or “Converse Accident”): Mistaken use of inductive reasoning when there are too few samples to prove a point. Example: “Susan failed Biology 101. Herman failed Biology 101. Egbert failed Biology 101. I therefore conclude that most students who take Biology 101 will fail it.” In understanding and characterizing general situations, a logician cannot normally examine every single example. However, the examples used in inductive reasoning should be typical of the problem or situation at hand. Maybe Susan, Herman, and Egbert are exceptionally poor students. Maybe they were sick and missed too many lectures that term to pass. If a logician wants to make the case that most students will fail Biology 101, she should (a) get a very large sample–at least one larger than three–or (b) if that isn’t possible, go out of her way to prove to the reader that her three samples are somehow representative of the norm. If a logician considers only exceptional or dramatic cases and generalizes a rule that fits these alone, the author commits the fallacy of hasty generalization.
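
Why a sample of three is so treacherous can be shown with a quick simulation. This is a sketch under an assumed true failure rate of 10% (the figure is invented purely for illustration, not taken from the text): tiny samples routinely land far from the truth, while large ones almost never do.

```python
import random

random.seed(42)  # reproducible illustration

TRUE_FAIL_RATE = 0.10   # assumed: only 10% of Biology 101 students actually fail

def observed_fail_rate(n):
    """Fraction of failures seen in a random sample of n students."""
    return sum(random.random() < TRUE_FAIL_RATE for _ in range(n)) / n

trials = 1000
small = [observed_fail_rate(3) for _ in range(trials)]    # three students, like the example
large = [observed_fail_rate(300) for _ in range(trials)]

# How often is each estimate off by more than 20 percentage points?
off_small = sum(abs(r - TRUE_FAIL_RATE) > 0.20 for r in small) / trials
off_large = sum(abs(r - TRUE_FAIL_RATE) > 0.20 for r in large) / trials
print(f"wildly wrong with n=3:   {off_small:.0%}")
print(f"wildly wrong with n=300: {off_large:.0%}")
```

With three students, a single unlucky failure already pushes the estimate to 33%, so a large fraction of the tiny samples miss badly; the 300-student samples essentially never do.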

One common type of hasty generalization is the Fallacy of Accident. This error occurs when one applies a general rule to a particular case when accidental circumstances render the general rule inapplicable. For example, in Plato’s Republic, Plato finds an exception to the general rule that one should return what one has borrowed: “Suppose that a friend when in his right mind has deposited arms with me and asks for them when he is not in his right mind. Ought I to give the weapons back to him? No one would say that I ought or that I should be right in doing so. . . .” What is true in general may not be true universally and without qualification. So remember, generalizations are bad. All of them. Every single last one. Except, of course, for those that are not.

Another common example of this fallacy is the misleading statistic. Suppose an individual argues that women must be incompetent drivers, and he points out that last Tuesday at the Department of Motor Vehicles, 50% of the women who took the driving test failed. That would seem to be compelling evidence from the way the statistic is set forth. However, if only two women took the test that day, the results would be far less clear-cut. Incidentally, the comic strip Dilbert makes much of an incompetent manager who cannot perceive misleading statistics. The manager does a statistical study of when employees call in sick and cannot come to work during the five-day work week. He becomes furious to learn that 40% of office “sick-days” occur on Mondays (20%) and Fridays (20%)–just in time to create a three-day weekend. Suspecting fraud, he decides to punish his workers. The irony, of course, is that these two days compose 40% of a five-day work week, so the numbers are completely average. Similar nonsense emerges when parents or teachers complain that “50% of students perform at or below the national average on standardized tests in mathematics and verbal aptitude.” Of course they do! The very nature of an average implies that!
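
The manager's arithmetic can be laid out in a few lines. The raw counts below are invented to match the strip's 20%/20% figures; the point is that two days out of five account for 40% of anything that is spread evenly:

```python
# Hypothetical sick-day tallies: the counts are made up, evenly spread
# across the week, to match the 20%-per-day figures in the example.
sick_days = {"Mon": 20, "Tue": 20, "Wed": 20, "Thu": 20, "Fri": 20}

total = sum(sick_days.values())
mon_fri_share = (sick_days["Mon"] + sick_days["Fri"]) / total
baseline = 2 / 5   # any two days of a five-day week, by chance alone

print(f"Monday + Friday: {mon_fri_share:.0%} of sick days")   # 40%
print(f"Expected if sick days were uniform: {baseline:.0%}")  # 40%
```

The "alarming" 40% exactly matches the uniform baseline, so the statistic supplies no evidence of fraud at all.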

False Cause: This fallacy establishes a cause/effect relationship that does not exist. There are various Latin names for various analyses of the fallacy. The two most common include these types:

(1) Non Causa Pro Causa (Literally, “Not the cause for a cause”): A general, catch-all category for mistaking a false cause of an event for the real cause.

(2) Post Hoc, Ergo Propter Hoc (Literally: “After this, therefore because of this”): This type of false cause occurs when the writer mistakenly assumes that, because the first event preceded the second event, the first event must have caused the later one. Sometimes it does, but sometimes it doesn’t. It is the honest writer’s job to establish clearly that connection rather than merely assert it exists. Example: “A black cat crossed my path at noon. An hour later, my mother had a heart attack. Because the first event occurred earlier, it must have caused the bad luck later.” This is how superstitions begin.

The most common examples are arguments that viewing a particular movie or show, or listening to a particular type of music, “caused” the listener to perform an antisocial act–to snort coke, shoot classmates, or take up a life of crime. These may be potential suspects for the cause, but the mere fact that an individual did these acts and subsequently behaved in a certain way does not yet conclusively rule out other causes. Perhaps the listener had an abusive home-life or school-life, suffered from a chemical imbalance leading to depression and paranoia, or made a bad choice in his companions. Other potential causes must be examined before asserting that only one event or circumstance alone earlier in time caused an event or behavior later. For more information, see correlation and causation.

Irrelevant Conclusion (Ignoratio Elenchi): This fallacy occurs when a rhetorician adapts an argument purporting to establish a particular conclusion and directs it to prove a different conclusion. For example, when a particular proposal for housing legislation is under consideration, a legislator may argue that decent housing for all people is desirable. Everyone, presumably, will agree. However, the question at hand concerns a particular measure. The question really isn’t, “Is it good to have decent housing?” The question really is, “Will this particular measure actually provide it or is there a better alternative?” This type of fallacy is a common one in student papers when students use a shared assumption–such as the fact that decent housing is a desirable thing to have–and then spend the bulk of their essays focused on that fact rather than the real question at issue. It’s similar to begging the question, above.

One of the most common forms of Ignoratio Elenchi is the “Red Herring.” A red herring is a deliberate attempt to change the subject or divert the argument from the real question at issue to some side-point; for instance, “Senator Jones should not be held accountable for cheating on his income tax. After all, there are other senators who have done far worse things.” Another example: “I should not pay a fine for reckless driving. There are many other people on the street who are dangerous criminals and rapists, and the police should be chasing them, not harassing a decent tax-paying citizen like me.” Certainly, worse criminals do exist, but that is another issue! The questions at hand are (1) did the speaker drive recklessly, and (2) should he pay a fine for it?

Another similar example of the red herring is the fallacy known as Tu Quoque (Latin for “And you too!”), which asserts that the advice or argument must be false simply because the person presenting the advice doesn’t consistently follow it herself. For instance, “Susan the yoga instructor claims that a low-fat diet and exercise are good for you–but I saw her last week pigging out on Oreos, so her argument must be a load of hogwash.” Or, “Reverend Jeremias claims that theft is wrong, but how can theft be wrong if Jeremias himself admits he stole objects when he was a child?” Or “Thomas Jefferson made many arguments about equality and liberty for all Americans, but he himself kept slaves, so we can dismiss any thoughts he had on those topics.”

Straw Man Argument: A subtype of the red herring, this fallacy includes any lame attempt to “prove” an argument by overstating, exaggerating, or over-simplifying the arguments of the opposing side. Such an approach is building a straw man argument. The name comes from the idea of a boxer or fighter who meticulously fashions a false opponent out of straw, like a scarecrow, and then easily knocks it over in the ring before his admiring audience. His “victory” is a hollow mockery, of course, because the straw-stuffed opponent is incapable of fighting back. When a writer makes a cartoon-like caricature of the opposing argument, ignoring the real or subtle points of contention, and then proceeds to knock down each “fake” point one-by-one, he has created a straw man argument.

For instance, one speaker might be engaged in a debate concerning welfare. The opponent argues, “Tennessee should increase funding to unemployed single mothers during the first year after childbirth because they need sufficient money to provide medical care for their newborn children.” The second speaker retorts, “My opponent believes that some parasites who don’t work should get a free ride from the tax money of hard-working honest citizens. I’ll show you why he’s wrong . . .” In this example, the second speaker is engaging in a straw man strategy, distorting the opposition’s statement about medical care for newborn children into an oversimplified form so he can more easily appear to “win.” However, the second speaker is only defeating a dummy-argument rather than honestly engaging in the real nuances of the debate.

Non Sequitur (literally, “It does not follow”): A non sequitur is any argument that does not follow from the previous statements. Usually what has happened is that the writer leaped from A to B and then jumped to D, leaving out step C of an argument she thought through in her head but did not put down on paper. The phrase is applicable in general to any type of logical fallacy, but logicians use the term particularly in reference to syllogistic errors such as the undistributed middle term, non causa pro causa, and ignoratio elenchi. A common example would be an argument along these lines: “Giving up our nuclear arsenal in the 1980s weakened the United States’ military. Giving up nuclear weaponry also weakened China in the 1990s. For this reason, it is wrong to try to outlaw pistols and rifles in the United States today.” There’s obviously a step or two missing here.

The “Slippery Slope” Fallacy (also called “The Camel’s Nose Fallacy”) is a non sequitur in which the speaker argues that, once the first step is undertaken, a second or third step will inevitably follow, much like the way one step on a slippery incline will cause a person to fall and slide all the way to the bottom. It is also called “the Camel’s Nose Fallacy” because of the image of a sheik who fears letting his camel stick its nose into his tent on a cold night: once the beast sticks in its nose, it will inevitably stick in its head, and then its neck, and eventually its whole body. However, this sort of thinking does not allow for any possibility of stopping the process. It simply assumes that, once the nose is in, the rest must follow–that the sheik can’t stop the progression once it has begun–and thus the argument is a logical fallacy. For instance, one might argue, “If we allow the government to infringe upon our right to privacy on the Internet, it will then feel free to infringe upon our privacy on the telephone. After that, FBI agents will be reading our mail. Then they will be placing cameras in our houses. We must not let any governmental agency interfere with our Internet communications, or privacy will completely vanish in the United States.” Such thinking is fallacious; no logical proof has been provided yet that infringement in one area will necessarily lead to infringement in another, any more than a person buying a single can of Coca-Cola in a grocery store would inevitably go on to buy every item available in the store, helpless to stop herself. So remember to avoid the slippery slope fallacy; once you use one, you may find yourself using more and more logical fallacies.

Either/Or Fallacy (also called “the Black-and-White Fallacy,” “Excluded Middle,” “False Dilemma,” or “False Dichotomy”): This fallacy occurs when a writer builds an argument upon the assumption that there are only two choices or possible outcomes when actually there are several. Outcomes are seldom so simple. This fallacy most frequently appears in connection to sweeping generalizations: “Either we must ban X or the American way of life will collapse.” “We go to war with Canada, or else Canada will eventually grow in population and overwhelm the United States.” “Either you drink Burpsy Cola, or you will have no friends and no social life.” Either you must avoid either/or fallacies, or everyone will think you are foolish.

Faulty Analogy: Relying only on comparisons to prove a point rather than arguing deductively and inductively. For example, “education is like cake; a small amount tastes sweet, but eat too much and your teeth will rot out. Likewise, more than two years of education is bad for a student.” The analogy is only acceptable to the degree a reader thinks that education is similar to cake. As you can see, faulty analogies are like flimsy wood, and just as no carpenter would build a house out of flimsy wood, no writer should ever construct an argument out of flimsy material.

Undistributed Middle Term: A specific type of error in deductive reasoning in which the middle term of a syllogism–the term appearing in both premises–is never distributed, so the minor and major terms might or might not overlap. Consider these two examples: (1) “All reptiles are cold-blooded. All snakes are reptiles. Therefore, all snakes are cold-blooded.” In the first example, the middle term “reptiles” is distributed: the first premise says something about every reptile, so it successfully links “snakes” to “things-that-are-cold-blooded,” and the conclusion follows. (2) “All snails are cold-blooded. All snakes are cold-blooded. Therefore, all snails are snakes.” In the second example, the middle term “things-that-are-cold-blooded” is never distributed: each premise merely places a group inside that category, and nothing guarantees that the two groups overlap. Sometimes, equivocation (see below) leads to an undistributed middle term.
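
The difference between the two syllogisms can be made concrete with sets. In this sketch (the animal names in each set are invented for illustration), the valid syllogism's conclusion is forced by the premises, while the invalid one's premises both hold even though its conclusion is false:

```python
# Toy universe for the two syllogisms; the set members are made up for illustration.
snakes       = {"cobra", "python"}
reptiles     = {"cobra", "python", "iguana"}
cold_blooded = {"cobra", "python", "iguana", "snail", "trout"}
snails       = {"snail"}

# (1) Valid: the middle term "reptiles" links the other two terms.
assert snakes <= reptiles          # All snakes are reptiles.
assert reptiles <= cold_blooded    # All reptiles are cold-blooded.
assert snakes <= cold_blooded      # The conclusion is forced.

# (2) Invalid: both premises hold in this universe...
assert snails <= cold_blooded      # All snails are cold-blooded.
assert snakes <= cold_blooded      # All snakes are cold-blooded.
assert not snails <= snakes        # ...yet "all snails are snakes" is false.

print("premises of (2) are true while its conclusion is false: the form is invalid")
```

One such counterexample is enough: a form whose premises can all be true while its conclusion is false proves nothing, no matter how plausible its conclusion sounds.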

Contradictory Premises (also known as a logical paradox): Establishing a premise in such a way that it contradicts another, earlier premise. For instance, “If God can do anything, he can make a stone so heavy that he can’t lift it.” The first premise establishes a deity with the irresistible capacity to move any object. The second premise establishes an object impervious to any movement. If a force capable of moving anything exists, then by definition an immovable object cannot exist, and vice versa.

Closely related is the fallacy of Special Pleading, in which the writer creates a universal principle, then insists, for some reason, that the principle does not apply to the issue at hand. For instance, “Everything must have a source or creator. Therefore God must exist and he must have created the world. What? Who created God? Well, God is eternal and unchanging–He has no source or creator.” In such an assertion, either God must have His own source or creator, or else the universal principle that everything has a source or creator must be set aside–the person making the argument can’t have it both ways.
