Causal & Inductive Fallacies

Causality Intro
Non Causa Pro Causa
Post Hoc Ergo Propter Hoc
– Causal Oversimplification
– Neglecting a Common Cause
– Confusing Necessary with Sufficient Condition
– Fallacy of Slippery Slope
– Gambler’s Fallacy

Induction Intro
– Hasty Generalization (Converse Accident)
– Fallacy of Accident
– Fallacy of the Average
(Generic Propositions)
– Cherry Picking
– P-Hacking
(Data Dredging)

Causality Introduction

Two things can be related in different ways. . .

It’s our task to avoid relating P and Q in erroneous ways.

Efficient causality, which is one type of causality, deals with an agent that brings about the change or existence of something. It’s important to remember that there is an act involved. A causal power is at work that brings about the change. The efficient agent has the power to produce a change in another being, and that being itself is receptive to that causal power at work in it.

Note that causality is not primarily about the sequence of events.

Warning!
Falsely believing that temporal sequence alone establishes causation can lead us into the post hoc ergo propter hoc fallacy.

Causation is principally about agents, not events.

Temporal sequence helps us discover cause-and-effect relations, but is not sufficient. I walk down the street. You then start following me. Am I the efficient cause of you walking? No, obviously not! You might be following me, but I’m not the one making your legs move.

Generally, we discover causality in our experience by empirical reflection. As children we first notice that we have a causal power over our own bodies. We notice, by analogy, that non-rational animals also have this. And then, by lesser analogy, we notice that inanimate objects influence each other by their causal powers.

To claim P causes Q requires some understanding, however partial, of the relationship between P and Q. Empirical observation and scientific testing can help with this. Sometimes the “link” is obvious, as when the wind blows a small package down the street. A burst of wind makes its way through the street. A small package is along its path. And that small package is moved as the air moves it.

Warning!
We are more likely to avoid the non causa pro causa fallacy when we don’t “jump to conclusions” too quickly. Adequately locating a cause requires understanding the mechanism by which one thing causes another.

Non Causa Pro Causa (“not the cause for the cause”)

A “false cause fallacy” reaches the conclusion that P is the cause of Q when there is little to no evidence that it in fact is. Sometimes it is obvious when the non causa pro causa fallacy has been committed; other times it is not.

Example A: “You shouldn’t have black cats in the house; it’s bad luck.”
Edward Arnold walks under a ladder. A black cat walks by him.
He falls down the staircase. Ergo, the ladder and cat are to blame.

(from the 1937 screwball comedy film Easy Living)

Edward Arnold’s superstitious character might lead him to that fallacious conclusion!

Example B: “Security guards increase crime in the store. Once we hired more security guards, a greater number of thefts have been witnessed.”

The conclusion is the first sentence. It’s a causal inference that’s not warranted. Nor is it even evident that crime has gone up; an increase in witnessed thefts doesn’t imply an increase in thefts.

Post Hoc Ergo Propter Hoc
(“after this, therefore because of this”)

This is a subset of the non causa pro causa fallacy. We search for a cause of Q, notice that P comes before Q, and then conclude that P must have caused Q. It’s the “after this, therefore because of it” fallacy. Logically concluding that P is the cause of Q requires more than just finding out that P comes before Q.

Example A: Luis Alberni’s character kisses a rabbit’s foot. After that, he gets his wish. Ergo, the rabbit’s foot worked.

(from the 1937 screwball comedy film Easy Living)

It’s a silly example, but at least the fallacy is apparent.

It’s not always so apparent, though; the fallacy is easy to commit in the social sciences.

Example B: After taxes increased, wealth was observed to increase. Ergo, increasing taxes produced greater wealth.

Post hoc ergo propter hoc. Is that a plausible inference? No!

Consider the possibilities. Wealth might have increased because of the increased taxes. Or it might have increased despite the increased taxes. Alternatively, there may be no relation between wealth and taxation whatsoever.

We need to answer the following question. . .
What would have been the situation if there was no increase in taxation?


Such questions require economic theory. As economist Ludwig von Mises wrote in Theory and History: “historical experience is always the experience of complex phenomena, of the joint effects brought about by the operation of a multiplicity of elements” (p. 208).

Post hoc reasoning can thereby easily go wrong.
And it’s not something, as a rule, to use in economics.

Human action is not physics or chemistry, where we can merely collect data on past events to find invariant causal relations: “the distinctive mark of what we call the human sphere or history or, better, the realm of human action is the absence of such universally prevailing regularity” (p. 4).

There’s no science that will predict the course of human history!

However, economics as a science can reveal what is true of any purposeful human action in general. That is, it can qualitatively reveal that all action presupposes goals, that doing one thing comes at the expense of doing another, and so on. Hence, how human actions will be qualitatively modified by specific policies can be derived, answering (some) counterfactual questions.


Answering the counterfactual question — what would have happened without that tax increase? — requires economic theory. (Wealth could have been higher!)

Causal Oversimplification
(Fallacy of the Single Cause)

Falsely assuming that only one cause is present when there are actually multiple causes is to commit the causal oversimplification fallacy. As von Mises noted above, for example, a complex social phenomenon usually has multiple causes, not a single cause.

Example A: Being healthy only requires a good diet.

It’s necessary to eat well, though not sufficient. We’d better exercise, too!

Example B: U.S. economic growth is due to high technological knowledge.

Knowledge about technology helps! But a capital structure is also required to make use of it.

Also, consider various social trends. Rising divorce rates, e.g., are likely due to several causes, not a single one. Or consider blaming all cultural or racial disparities today on “racism.” A single cause is highly unlikely to explain differences between groups of diverse peoples.

Neglecting a Common Cause

Correlation between two things is not enough to conclude that there must be a cause-and-effect relation between them. A third thing might be the cause of both.

Example A: John starts the day with a headache. At work he develops the chills and begins shivering. It seems the headache caused the chills and shivering.

There’s correlation between the two. Yet both, of course, were likely caused by a fever.

Incidentally, man’s study of physics has uncovered that many things that seemed unrelated are related. Consider how an apple falling and the Moon orbiting the Earth have a common cause: Earth’s gravity (an attraction between two masses).
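To see how a hidden common cause manufactures correlation, here is a minimal simulation sketch (all the probabilities are invented for illustration): a hidden “fever” variable raises the chance of both headaches and chills, and the two symptoms end up strongly correlated even though neither causes the other.

```python
import random

random.seed(42)

def simulate_day():
    """One person-day: fever is the hidden common cause of both symptoms."""
    fever = random.random() < 0.2                       # 20% chance of fever (invented)
    headache = random.random() < (0.8 if fever else 0.1)
    chills = random.random() < (0.7 if fever else 0.05)
    return headache, chills

days = [simulate_day() for _ in range(100_000)]
p_chills = sum(c for _, c in days) / len(days)
with_headache = [c for h, c in days if h]
p_chills_given_headache = sum(with_headache) / len(with_headache)

print(f"P(chills)            = {p_chills:.2f}")
print(f"P(chills | headache) = {p_chills_given_headache:.2f}")
# Chills are far more common among those with headaches -- a strong
# correlation -- yet headaches never cause chills in this model.
```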

Confusing Necessary with Sufficient Condition

P is sufficient for Q.
P is necessary for Q.
Those are not equivalent statements!

Review the page on conditional propositions.
“P is necessary for Q” is the same as “if Q, then P.”

Necessity demands that without P being the case, Q cannot be the case. For example, oxygen is necessary for fire: without oxygen, there will be no fire. P being sufficient for Q doesn’t demand that; if P is sufficient for Q, other things might also bring about Q. For example, rain is sufficient for there to be clouds, though it’s not necessary.

Example A: “My flowers died, even though I gave them water!”

This causal inference confuses a necessary condition with a sufficient condition. Yes, I need to water my flowers. But other things are necessary! Did they get enough sunlight, etc.?

Example B: “I can only buy that expensive condo if I inherit a million dollars.”

Inheriting a million dollars may be sufficient, but it’s not necessary, because there are other ways to obtain that money. I may win the lottery. My stocks might dramatically increase in value. A patron might want to support the author of AmateurLogician.com.
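Since “P is sufficient for Q” amounts to “if P, then Q,” while “P is necessary for Q” amounts to “if Q, then P,” the difference can be checked mechanically with a truth table. A minimal sketch:

```python
def implies(p: bool, q: bool) -> bool:
    """Material conditional: 'if p, then q' is false only when p is true and q is false."""
    return (not p) or q

print("P     Q     | P->Q (sufficiency)   Q->P (necessity)")
for p in (True, False):
    for q in (True, False):
        print(f"{p!s:<5} {q!s:<5} | {implies(p, q)!s:<20} {implies(q, p)!s}")
# The columns disagree exactly when P and Q differ in truth value,
# so "P is sufficient for Q" and "P is necessary for Q" are not equivalent.
```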

Fallacy of Slippery Slope

Causal series are all around us. P1 causes P2, P2 causes P3, and P3 causes P4. This series is justified only insofar as each step in the series is justified. That series may be “accidentally” ordered or “essentially” ordered.

If the former, each causal relation is “an isolated unit.” That is, when P3 causes P4, P3 does not depend on P2 as it exercises its causal powers to produce P4. The classic example is a father begetting a son, and that son growing up to beget his own son. The son’s causal abilities to beget in the present don’t depend upon his father’s presence. His father might be deceased.

If the latter, each causal relation is not isolated into totally independent units. That’s because when P3 causes P4, P3’s causal activity presently depends upon P2’s causal activity, and P2’s causal activity presently depends upon P1’s causal activity. The classic example is a mind that moves a hand, a hand that moves a stick, and a stick that moves a stone across the pavement. The hand, stick, and stone “participate in” the mind’s causality. For without the mind, as the first cause, there would be no series. The stick moves the stone through the hand’s influence, and the hand moves through the mind’s influence.

This causal series might also be thought of in terms of events.
One event “leads” to another event, and that “leads” to another event, etc.
Regardless, each step needs to be justified to justify the series as a whole.

The slippery slope fallacy occurs when a causal series is asserted without each step being justified. There is a seemingly innocuous starting point, and then the causal series ends with disastrous consequences.

Example A:

Jim Hacker – “My new policy is to withhold all honors from all civil servants in this department who do not make a cut in their budgets. . .”

Sir Humphrey Appleby – “. . .It’s out of the question. . . It’s the beginning of the end. The thin end of a wedge. … Where will it end? The abolition of the monarchy?”

(From Yes, Minister, “Doing the Honours.”)

The slippery slope fallacy is also called the “thin end of a wedge.” Sir Humphrey nowhere justifies each step in the causal series that ends with the “abolition of the monarchy.”

Remember, not all causal series that have an innocuous starting point and end in disaster are fallacious. If each step in the series is justified, then it’s a legitimate series.

The Gambler’s Fallacy

Let’s introduce this fallacy with a quote (and example) from economist Ludwig von Mises. . .

Example A: “A surgeon tells a patient who considers submitting himself to an operation that thirty out of every hundred undergoing such an operation die. If the patient asks whether this number of deaths is already full, he has misunderstood the sense of the doctor’s statement. He has fallen prey to the error known as the ‘gambler’s fallacy.’”

(from Human Action, p. 110)

The patient erroneously thinks there’s a dependent causal link between the long-run statistical distribution and his individual chances. He thinks past outcomes affect future outcomes, including his own.

An insurance company has data about a whole class of people that submit to the operation, for example, but nothing else is known about individual persons except that they are members of that class. (This is what von Mises calls “class probability.”)

To return to von Mises. . .

Example B: The patient is like “the roulette player who concludes from a run of ten red in succession that the probability of the next turn being black is now greater than it was before the run …” (ibid).

Each outcome of the roulette wheel is random if the previous outcome doesn’t affect the future outcome.

That’s how a “fair” roulette wheel will behave.

A ball landing on red several times in a row doesn’t increase the chance that it will next land on black.

Formally, this is an invalid argument: (1) this roulette wheel is “fair” or “unbiased”; (2) this wheel landed on red several times in a row; (3) therefore, it is more likely to land on black now.

No, it’s just as likely to land on black as it was before!
(The wheel is “fair” or “unbiased.” That premise blocks the conclusion.)

What is Randomness?

Something is random if there’s no systematic pattern, which implies there’s no regularity. Roulette is random, for example: no gambler can develop a “gambling system” to beat the house. And, finally, there must be independence from one trial to the next.

Think again about the roulette wheel. One reason it is “random” is that one trial is independent of the next. If it were not, and the previous trial influenced the next, then a gambler could develop a “gambling system.”
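A quick simulation makes the independence point concrete. This sketch assumes a simplified “fair” wheel with only red and black (no green zero) and checks how often black follows ten straight reds:

```python
import random

random.seed(0)

SPINS = 2_000_000
run_of_reds = 0        # current streak of consecutive reds
opportunities = 0      # spins that immediately follow ten straight reds
blacks_after_run = 0

for _ in range(SPINS):
    color = random.choice("RB")   # simplified wheel: red or black, 50/50
    if run_of_reds >= 10:         # this spin follows ten straight reds
        opportunities += 1
        blacks_after_run += color == "B"
    run_of_reds = run_of_reds + 1 if color == "R" else 0

print(f"Spins following ten straight reds: {opportunities}")
print(f"P(black | ten straight reds) ≈ {blacks_after_run / opportunities:.3f}")
# The estimate hovers around 0.5: the streak gives black no edge at all.
```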

Induction Intro

Misusing Inductive and Deductive Reasoning:
Fallacy of Converse Accident
Fallacy of Accident

Induction is the process of deriving a more general proposition (“all of S is P”) from a more particular proposition (“some of S is P”). It infers a universal law from individual cases.

Contradictory Opposition:
If “some S is not P” is true, then “all S is P” is false.

A derivation is falsified if there is at least one counterexample. Consider the square of opposition: if “some of S is not P” is true, it must be false that “all of S is P.” Equivalently, if “all of S is P” is true, then it must be false that “some of S is not P.”

If “some swans are not white” is discovered to be true, then it must be false that “all swans are white.” (Common sense!)
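In code, the universal proposition corresponds to all() and its contradictory to any(); one counterexample flips the universal to false. A tiny sketch with invented observations:

```python
# Invented observations: one black swan among the sightings.
swans = ["white", "white", "black", "white"]

all_swans_white = all(s == "white" for s in swans)      # "All S is P"
some_swan_not_white = any(s != "white" for s in swans)  # "Some S is not P"

print(all_swans_white)       # False: the universal is falsified
print(some_swan_not_white)   # True: one counterexample suffices
# Contradictory opposition: the two propositions always have opposite truth values.
assert all_swans_white == (not some_swan_not_white)
```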

Review “Inductive Abstraction” & “Inductive Syllogisms.”

Hasty Generalization
(Fallacy of Converse Accident)

An induction gone wrong is usually a hasty generalization. It’s also been called the fallacy of converse accident. Particular instances are observed, and then it’s incorrectly — i.e., hastily — concluded that what holds for them holds for all cases. These particular instances are not representative of the group as a whole.

Example A: Driving through a city’s neighborhood revealed one house after another abandoned. The city must be mostly abandoned.

Example B: Look at those obnoxious and stupid protestors! They can’t even spell words correctly on their signs. All liberals are obnoxious and stupid.

Example C: I’ve just started with one gutter ball. I must be terrible at bowling.

We’re more likely to avoid a hasty generalization by (1) observing a large number of instances and (2) observing a variety of instances.

Fallacy of Accident

The fallacy of accident is not an inductive fallacy. It’s worth including here, however, as a contrast with the fallacy of converse accident.

What’s the fallacy of accident? It’s the misapplication of a general principle to a particular case that disregards the special features of that particular case. That is, there may be exceptions to the general principle that are not taken into account.

A proposition might have an implicit domain. Consider these two examples. . .

Example A: “The customer is always right!”
A customer demanded a 95% price cut. So, the price must be cut by 95%?

Example B: “Water freezes at 32 degrees Fahrenheit.”
So, this ocean water must freeze at 32 degrees Fahrenheit!

Example A is a slogan. It’s meant to guide a business’s practices to promote customer satisfaction. Literally interpreted, it’s a dumb slogan, as anyone who has worked in retail knows full well. A customer can be a thief, rude, dishonest, and have inane demands.

Example B is more interesting. The initial proposition’s domain is implicitly restricted to fresh water. It doesn’t include ocean water with salt (which freezes at 28.4 degrees Fahrenheit). It is evident that we cannot infer the freezing temperature outside of the implicit domain of the premise.

Hence, both examples engage in the fallacy of accident!

Example C: “Freedom of speech is absolute.”
Therefore, you have a right to scream “fire” in a crowded theater.

Example C is interesting, too. Not all political philosophies support the major premise. Suppose a given political philosophy does. Generally, it doesn’t cover every situation categorically. There are exceptions to the rule.

To be sure, the major premise could be modified to make it more precise. It could include its “implicit domain” by telling us when exactly freedom of speech is absolute. Alternatively, it might spell out the exceptions to the rule.

That’s no easy task, however! Many ethical propositions can be hard to generalize because particular situations often provide exceptions.

This is a point Aristotle made. He was far more concerned with virtuous character formation than with general ethical rules. Yet he saw that the more specific a generalized rule is, the more exceptions it will have. There is no set of propositions that will tell you precisely what to do in any unique situation. A generalized rule that holds in all circumstances will tend to be vague.

However, in the natural law tradition, it’s not as difficult to formulate ethical propositions on what not to do. An intrinsic evil is not to be committed, period.

Fallacy of the Average
and Generic Propositions

What we’ll call the “fallacy of the average” is not an inductive fallacy. It provides us with a cautionary tale about an improper use of induction to disprove generic propositions.

What’s a generic proposition? It’s not a categorical proposition that will fit into the usual A, I, E, or O classification. A generic proposition deals with what’s usual, average, or normal.

Take this example: “Men are taller than women.”
Given that generic proposition, is the following argument any good?

Example A: “It’s claimed that men are taller than women, but Haley is a woman who is much taller than Paul. Don’t believe in the stereotype!”

Here we have the “fallacy of the average.” This single data point doesn’t contradict the fact that, on average, men are taller than women. That remains a true statement! As a generic proposition, it doesn’t affirm that “all men are taller than women.” If we had that kind of categorical proposition, then a single data point would be enough to falsify it.

It would be a hasty generalization to conclude that “all men are taller than women” from observing various single data points that confirmed it. But there’s no hasty generalization behind this generic proposition, since a significantly large and varied set of data points does confirm that men tend to be taller than women.

Example B: It’s claimed that family households with both a mother and father tend to be better environments to raise children in, yet this single mother has done a great job raising a child. So have many others! Therefore, this “conservative” view is old-fashioned.

The generic proposition hasn’t been falsified by such counterexamples. There’s a massive amount of statistical data revealing that single-mother households are associated with emotional and behavioral problems and greater poverty.

Arguing against this generic proposition requires showing that the data and/or inferences from that data are bad.

Remember, an average is a central tendency of a distribution of data points. The three most common measures of central tendency are the mean (a.k.a. the average), median, and mode.

A distribution of data points usually will have some dispersion or variability. Individual men and women, for example, will have different heights. An average captures the central tendency; it doesn’t deny dispersion or variability.
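A short simulation shows how a true generic proposition coexists with many individual exceptions. (The means and spreads below are rough illustrative values, not official statistics.)

```python
import random

random.seed(1)

# Illustrative parameters only: heights in cm with some dispersion.
men = [random.gauss(175, 7) for _ in range(10_000)]
women = [random.gauss(162, 6) for _ in range(10_000)]

print(f"Average man:   {sum(men) / len(men):.1f} cm")
print(f"Average woman: {sum(women) / len(women):.1f} cm")

# Count random man/woman pairs where the woman is the taller one.
pairs = list(zip(men, women))
exceptions = sum(w > m for m, w in pairs)
print(f"Woman taller in {exceptions / len(pairs):.1%} of pairs")
# The generic claim ("men are taller than women") holds of the averages,
# yet plenty of individual pairs run the other way. Single counterexamples
# don't falsify a generic proposition.
```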

By the way, there’s an intuitive sense that the sum of the deviations from the average must equal zero. It’s implied in the concept of average. After all, the average is kind of a “balance” of all the data scores.

We can “prove” that algebraically. . .
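Letting $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ be the average of the data points $x_1, \ldots, x_n$, the deviations cancel exactly:

$$\sum_{i=1}^{n} (x_i - \bar{x}) \;=\; \sum_{i=1}^{n} x_i \;-\; n\bar{x} \;=\; n\bar{x} \;-\; n\bar{x} \;=\; 0.$$

Each deviation above the average is offset by deviations below it: the “balance” just described.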

Cherry Picking
(Suppressing Evidence)

Biases can divert us from a dispassionate appraisal. Consider a hasty generalization being made from a biased selection of data points. The inference is clearly invalid. Similarly, in a genuine appraisal of social policy, for example, nearly any policy will benefit some people and hurt other people. Someone who argues for policy X but only mentions those who benefit from X may well be cherry picking the data.

Philosopher Michael Huemer gives the example of affirmative action.

“If I want to convince you, say, that affirmative action is bad, I might search for cases where affirmative action was tried and it didn’t work or it had harmful effects. If I want to convince you that it’s good, I search for cases where it really helped someone.  Of course both kinds of cases exist – it’s a big society, full of millions of people!”

(Knowledge, Reality, and Value, p. 56).

Thomas Sowell, economist and social theorist, once said that there are no absolute solutions, only trade-offs. Any fair evaluation of affirmative action has to look at both pro and con cases to compare them. Pretend you’re writing an essay on the topic for a college English composition course. Argue your thesis! In that argument address the counter-arguments. Use examples that not only support your conclusion but the opposite conclusion. No one can then legitimately claim you’re cherry picking.

P-Hacking
(Data Dredging)

“Correlation does not imply causation” is a popular — and, in this case, true! — expression. Two things being observed together is not enough to establish that one is the cause of the other. We can claim that when there is causation, there is also correlation. Nevertheless, the converse cannot be asserted categorically; that is, it’s not true that when there is correlation, there is necessarily causation.

A strong positive correlation between two variables doesn’t necessarily mean that there’s a causal relationship between them.

Here’s a “silly” example: if we looked at the statistics of those who are divorced, one variable everyone would have in common is that they were formerly married. Obviously, we couldn’t validly infer this variable caused divorce. It would be associated, and in that sense correlated 100%, yet not be a cause.

Correlation might mean nothing of consequence!

Spurious Correlations is a website by Tyler Vigen. It provides several examples of statistical correlations that mean nothing. For example, what does the “Number of people who drowned by falling into a pool” have to do with “Films Nicolas Cage appeared in”?

At the same time, philosophy professor Michael Huemer warns us against using the slogan “Correlation doesn’t imply causation” in an unthinking way. He writes, “Students learn the slogan in college and think it’s sophisticated, but it’s kind of simplistic” (p. 53). At a certain point, when the correlation is strong enough, it usually becomes improbable that the relation is just a coincidence.

Similarly, Professor Huemer warns, the post hoc fallacy can be thought of in a simplistic way. We shouldn’t be misled to believe that B following A is irrelevant in determining a causal relation.

P-hacking or data dredging consists of running several statistical tests to dig up some correlation in a large data set. Just by chance, some correlation will likely be found that passes the test of being “statistically significant.”
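A sketch of data dredging on pure noise: generate a few dozen random variables with no real relationships, test every pair for correlation, and count how many come out “significant.” (The 0.05 threshold and the sample sizes are conventional illustrative choices; this uses SciPy’s ordinary Pearson correlation test.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_vars, n_obs = 40, 50
data = rng.normal(size=(n_vars, n_obs))   # pure noise: no real relationships

significant = []
tests = 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[i], data[j])
        tests += 1
        if p < 0.05:
            significant.append((i, j, r))

print(f"{tests} tests run, {len(significant)} 'significant' at p < 0.05")
print(f"Expected by chance alone: about {0.05 * tests:.0f}")
# With 780 tests on pure noise, roughly 39 spurious "findings" appear.
# Report only those, and the dredging is invisible to the reader.
```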

Huemer references John Ioannidis’ famous article “Why Most Published Research Findings Are False.” False positive results, Dr. Ioannidis argues, are widespread in medical research papers, and these papers often don’t have conditions that can be replicated for retesting. (We should also be careful with educational and economic research.)

Huemer also references (in his book Understanding Knowledge, pp. 346-7):
– David H. Freedman, “Lies, Damned Lies, and Medical Science.”
– Brian Nosek, et al. “Estimating the Reproducibility of Psychological Science.”
– Monya Baker, “1,500 Scientists Lift the Lid on Reproducibility.”

Essentially, a false positive result is a “Type I” error. Inferential statistics, which was once rightly called “the theory of errors,” runs tests on a hypothesis: the null hypothesis. Such tests have a “level of significance,” which is the probability of making a Type I error. A commonly chosen level of significance is 0.05.

This means that about 5% of tests of true null hypotheses will end in a Type I error.

A Type I error occurs when a statistician rejects the null hypothesis when it is in fact true. The null hypothesis states that there will be no significant difference between what’s experimentally observed and what would occur by pure chance. The statistician, however, believes that there is such a difference. With a Type I error, he errs in his judgment, since the difference is really insignificant.

In any case, take a large data set. It can be totally random. Still, there’s a good chance of finding some (spurious) correlations if we look closely enough. Tyler Vigen gives us funny examples.

“There are three kinds of lies: lies, damned lies, and statistics.”

Warning: Let’s be careful out there!

What does p value mean?

Basically, it’s the observed significance level or probability value. It’s part of a common approach to hypothesis testing in inferential statistics. A low p value is generally taken as evidence against the null hypothesis. For instance, there’s a low probability that the given sample mean would be as extreme as it is if the null were true. Conversely, a high p value indicates we would expect such a sample mean to appear; i.e., it would not be improbable if the null were in fact true.

Warning! With that said, the p value is not the probability that the hypothesis is true or false given the evidence. It’s rather the probability of observing, e.g., a given sample mean or something more extreme by chance if the null hypothesis is true.
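The definition can be checked by brute force. This sketch (all numbers invented) assumes a null hypothesis under which the population is normal with mean 100, supposes we observed a sample mean of 103 from n = 30, and estimates the p value as the fraction of simulated null-world samples whose means land at least that far from 100:

```python
import random

random.seed(3)

# Null hypothesis (illustrative): population is Normal(mean=100, sd=15).
# We observed a sample of n=30 with mean 103. How surprising is that under the null?
n, observed_mean, null_mean, sd = 30, 103.0, 100.0, 15.0

def sample_mean():
    """Mean of one simulated sample drawn from the null-hypothesis world."""
    return sum(random.gauss(null_mean, sd) for _ in range(n)) / n

trials = 100_000
extreme = sum(abs(sample_mean() - null_mean) >= abs(observed_mean - null_mean)
              for _ in range(trials))
print(f"Simulated two-sided p value ≈ {extreme / trials:.3f}")
# This is the probability of a sample mean at least as unusual as ours
# *given* the null -- not the probability that the null is true.
```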

Statistics is a huge topic! We should consult experts, when needed, for help.
