
Inductive Syllogisms

Support the Mission to Revive Logic?
Buy The Amateur Logician a Cup of Coffee!

This tutorial is free; however, it took effort, time, and money to put together. I would greatly appreciate any contributions to keep this website project running and expanding. Thank you!

— George Wick



Inductive Syllogisms

* Inductive Analogy
* Universal Generalization
* Statistical Generalization
* Statistical Syllogism
* Analogy Syllogism

Certainty?

It may be human nature to desire “certainty.” Knowledge has, traditionally, been defined as a true and justified belief. This definition traces back to Plato’s Theaetetus.

As rational animals, embodied and emotional beings, we experience “certainty” as a subjective mental state of firm assent, free of reasonable doubt, and grounded in an objective motive that ties us to the truth.

Deduction brings certainty, at least to the extent that a valid argument with true premises will produce a true conclusion. Induction, at the very least, gives us likely truths. Given how much of our knowledge is inductive, perhaps we usually won’t obtain “certainty,” but something that is “probably true” is sufficient grounds for a rational belief.

Inductive Analogy and Universal Generalization

An “inductive analogy” makes a predictive inference about the next instance of something. For example, if someone has felt nauseous every time he has eaten fish, then the next time he eats fish he will likely feel nauseous again.

A “universal generalization” makes an inference to the effect that all members of a certain group have a particular attribute. It is thus more sweeping or broad in its conclusion than an “inductive analogy.”


E.g., it might be that all swans so far seen have been white; ergo, it seems reasonable to conclude that all swans are white. This conclusion, as a “universal generalization,” is not merely that the next swan will be white; rather, it infers that all swans in the world are white, period. Swan #1 is white, Swan #2 is white, …, Swan #n is white; ergo, all swans are white.

This famous example shows the fallibility of induction.

It was perhaps a reasonable argument until a black swan was discovered in Australia. Once we add to our data set the premise that swan #(n+1) is black, we can no longer conclude that all swans are white. Logicians call this feature of induction “non-monotonic”: adding new premises can undermine a conclusion the earlier premises supported.

To avoid errors, we want a (1) large sample size and, most importantly, (2) a representative sample. While there was a large sample size of swans, the trouble was the lack of a representative sample. The observations of swans were too narrowly confined geographically.
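The non-monotonic character of the swan example can be sketched in a few lines of code. This is a toy illustration with invented data: a universal generalization stands only so long as no counterexample appears.

```python
# Toy sketch of non-monotonic inference (hypothetical data).
# The generalization "all swans are white" is supported only while
# every observed swan is white; one counterexample defeats it.

def supports_all_white(observations):
    """Return True if the data still supports 'all swans are white'."""
    return all(color == "white" for color in observations)

observed = ["white"] * 1000           # a large sample of European swans
print(supports_all_white(observed))   # True: the generalization looks strong

observed.append("black")              # a black swan is found in Australia
print(supports_all_white(observed))   # False: one new premise undermines it
```

Note that the large sample size did nothing to protect the conclusion; only a representative sample could have.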

Statistical Generalization and Statistical Syllogism

Inferential statistics concerns making conclusions about a population based on sample statistics. It generalizes, as a result, from the particular to the general. But since a sample is only a fraction of the population, these samples must be representative of the population as a whole. The only way to make the “leap,” in a manner of speaking, from a sample to a population is if that sample mirrors it.

To reiterate, a large and varied sample is absolutely essential for any sound inference about the population at large. Statisticians emphasize the importance of randomness: there should be no “pattern,” and every member must be equally likely to be selected. Otherwise, our sample will “distort” the population.

Constructing a statistical sample follows these general requirements: (1) in selecting from a population, each member has an equal chance of being selected; (2) any selection of one member doesn’t affect that of another; and (3) all possible selection outcomes can theoretically happen.
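The three requirements above describe a simple random sample, which Python's standard library can draw directly. The population here is invented for illustration; `random.sample` draws without replacement, giving every member an equal chance and making every subset of the chosen size a possible outcome.

```python
# Sketch of a simple random sample satisfying the requirements:
# equal chance of selection, no fixed pattern, all outcomes possible.
import random

population = list(range(10_000))   # hypothetical population of member IDs

# Draw 100 members without replacement; each member is equally likely
# to appear, and every 100-member subset is a possible result.
sample = random.sample(population, k=100)

print(len(sample), len(set(sample)))   # 100 distinct members
```

Strict independence of selections would call for sampling *with* replacement (`random.choices`), but for a small sample from a large population the difference is negligible.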

All of this requires a deeper study of statistics.

Strictly speaking, a sample doesn’t need to be random for us to derive some meaning from it; we simply prefer a randomly selected sample. Statisticians can also produce valuable “cluster sampling.” Selecting in “clusters” introduces a bias (though individual selections are random within a given cluster), but the clusters are designed to mimic the population.

When we are making an inference about all members of a population, based on a sample, we have a “statistical generalization.” For example, since 98% of the widgets sampled in the factory work, it likely follows that about 98% of all the widgets in the factory work.
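The widget example can be simulated as follows. The factory, its size, and its true working rate are all invented numbers: we draw a random sample and generalize the sample proportion to the whole population.

```python
# Minimal sketch of a statistical generalization (illustrative numbers).
import random

random.seed(42)
# Hypothetical factory: 10,000 widgets, 98% of which actually work.
factory = [True] * 9_800 + [False] * 200
random.shuffle(factory)

sample = factory[:500]                    # a simple random sample of 500
sample_rate = sum(sample) / len(sample)   # observed working rate

# Statistical generalization: infer the population rate from the sample.
print(f"about {sample_rate:.0%} of all widgets likely work")
```

With a representative sample, the sample rate tends to fall close to the true population rate of 98%.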

In a “statistical syllogism” we derive a conclusion about a particular member of a population. For example, since 98% of widgets in the factory work, this particular widget will likely work. The major premise was discovered through statistical techniques.

98% is obviously greater than 50%; ergo, it is a good major premise in a “statistical syllogism.” However, there is another requirement for this type of argument: the percentage must be less than 100%, for otherwise the argument would be deductive rather than inductive.

There’s no generally agreed-upon formulation of these arguments. For instance, another formal way to present the “statistical syllogism” is as follows: X percent of S is P; T is an S; ergo, it is X percent probable that T is P.

Notice the word “probable.”

According to the “frequentist interpretation of probability,” when we repeat an experiment many times, the probability of an event is identified with its long-run relative frequency. So, if we pick something at random, what is chosen will, in the long run, tend to match the relative frequencies of the underlying data.
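The frequentist idea can be shown with a short simulation. The underlying probability here is an assumption chosen to echo the widget example; the observed relative frequency converges toward it as the number of trials grows.

```python
# Sketch of the frequentist interpretation: long-run relative frequency
# approaches the underlying probability (toy simulation).
import random

random.seed(0)
P_WORKS = 0.98                  # assumed underlying probability
trials = 100_000

hits = sum(random.random() < P_WORKS for _ in range(trials))
frequency = hits / trials

print(round(frequency, 3))      # close to 0.98 in the long run
```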

Analogy Syllogism

*(Assuming Relevancy!)

It’s not uncommon to resort to an “analogy syllogism,” that is, an argument by analogy. For example, “I enjoy ice cream. It’s very similar to gelato. Ergo, I’ll probably enjoy gelato.”

There’s always a comparison between two or more things in an analogy. A similarity is found! It’s not simply a matter of counting similarities; more importantly, it’s a matter of finding relevant similarities so as to be able to draw the conclusion.

First, other things being equal, the greater the number of similar attributes, the more likely the analogy is sound. Second, and most importantly, the more relevant these similarities, the more likely the analogy is sound.

A large number of similarities means next to nothing, if those are not relevant. But what is “relevant”? It depends upon the particulars; though, generally, they should have some kind of causal connection or association.

For example, “Mary and Joe were in the same logic course. Both got an A+ on the final exam. They have the same height and weight. Mary got sick last night. Ergo, Joe probably got sick last night.” Clearly, this is a bad argument! I could add, if I wished, one hundred additional similarities between Mary and Joe that have no relevance whatsoever to getting sick. Such “similarities” would do nothing to make the argument stronger.

Third, other things being equal, the argument becomes stronger if the similarity is shared among multiple entities. Hence, if entities S and T each have attributes 1 through n, and entity U has attributes 1 through n minus 1, then it is likely that U also has attribute n.
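The three criteria can be caricatured in a toy scoring function. The rule and its inputs are invented purely for illustration; the point is only that irrelevant similarities, however numerous, add no strength.

```python
# Toy illustration of the three analogy criteria: shared attributes,
# relevance, and number of entities all figure in, but relevance
# dominates. The scoring rule itself is invented for illustration.

def analogy_strength(shared, relevance, n_entities):
    """shared: count of shared attributes; relevance: weight in [0, 1];
    n_entities: entities exhibiting the full attribute set."""
    return shared * relevance * n_entities

# Many irrelevant similarities (Mary and Joe's height, grades, ...)
weak = analogy_strength(shared=100, relevance=0.0, n_entities=2)

# Few but causally relevant similarities (e.g., ate the same spoiled food)
strong = analogy_strength(shared=2, relevance=0.9, n_entities=2)

print(weak, strong)   # a hundred irrelevant similarities count for nothing
```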

Statistical versus Analogical Arguments
Consider these two examples!

Example 1:
“Most coffee roasteries sell coffee beans. This is a coffee roastery. Ergo, it probably sells coffee beans.”

Example 2: “The majority of things true of coffee roastery S is true of coffee roastery T. Coffee roastery S sells coffee beans. Ergo, coffee roastery T probably sells coffee beans.”

Example 1 is statistical: it references “most” (that is, the majority of) coffee roasteries. Example 2 is analogical: there’s a comparison, with one roastery compared to another to draw a probable inference.

© Copyright 2024. AmateurLogician.com. All Rights Reserved.