Deduction is a special case of induction
There is a long-standing picture in philosophy that treats deductive logic as something pristine: a flawless engine of necessity, floating above the messy business of evidence and experience. But once you start pulling on that thread, you quickly find that deduction’s aura of independence isn’t as unshakeable as it looks. In fact, with a bit of pressure, deduction begins to look like a special, idealised case of induction.

Let’s walk through that idea with a concrete example.
Start with a classic deductive argument:
If it is raining, the ground is wet.
It is raining.
Therefore the ground is wet.
In formal logic, this is unimpeachable. The conclusion follows with necessity. Now convert the same structure into a probabilistic version:
\(P(W\mid R)=0.98\): rain almost always wets the ground.
\(P(R)=0.60\): a \(60\%\) chance that it’s raining.
\(P(W\mid \neg R)=0.10\): sometimes the ground gets wet without rain.
We can use the law of total probability to calculate \(P(W)\):
\[ P(W) = P(W|R)P(R) + P(W|\neg R)P(\neg R) \]
This gives \(P(W)=0.628\). Notice what has happened: the conclusion is no longer guaranteed. Instead, it carries a quantified degree of support.
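Plugging in the numbers makes this concrete. Here is a quick sketch in plain Python (the variable names are mine):

```python
# Law of total probability: P(W) = P(W|R)P(R) + P(W|¬R)P(¬R)
p_w_given_r = 0.98      # P(W|R): rain almost always wets the ground
p_r = 0.60              # P(R): prior probability of rain
p_w_given_not_r = 0.10  # P(W|¬R): wet ground without rain

p_w = p_w_given_r * p_r + p_w_given_not_r * (1 - p_r)
print(round(p_w, 3))  # 0.628
```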
Now turn the knobs. Set \(P(W\mid R)=1\) (rain necessitates wetness) and observe \(R\). Since we have observed \(R\), the relevant probability is simply \(P(W\mid R)=1\): the ground is wet with certainty. Deduction reappears, not as a different kind of reasoning but as the extremal case in which inductive probabilities have converged to absolute confidence.
The apparent necessity of deduction, then, is a particular case within a probabilistic framework. On this view, deductive logic is not an independent, a priori entity but an idealised subset of inductive logic. The transition between the two is continuous: inductive reasoning assigns degrees of belief (probabilities) to propositions, and deduction emerges as the limit in which the probability of a conclusion, given its premises, reaches 1 (or, for falsehood, 0).
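The continuity claim can be checked numerically: as the two knobs \(P(W\mid R)\) and \(P(R)\) are turned toward certainty, \(P(W)\) climbs smoothly to 1. A small sketch (the intermediate parameter values are illustrative):

```python
def p_w(p_w_given_r, p_r, p_w_given_not_r=0.10):
    """Law of total probability for P(W)."""
    return p_w_given_r * p_r + p_w_given_not_r * (1 - p_r)

# Turn the knobs toward the deductive limit: P(W|R) -> 1, P(R) -> 1.
for p_wr, pr in [(0.98, 0.60), (0.99, 0.90), (0.999, 0.99), (1.0, 1.0)]:
    print(f"P(W|R)={p_wr}, P(R)={pr} -> P(W)={p_w(p_wr, pr):.3f}")
```

At the endpoint \(P(W\mid R)=P(R)=1\), the output reaches exactly 1: deduction as the limiting case.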
The picture becomes even clearer when you look at a fallacious argument, like affirming the consequent:
If it is raining, the ground is wet.
The ground is wet.
Therefore it is raining.
Deduction says this is invalid. Probabilistic reasoning shows why. Here are the numbers again:
\(P(W\mid R)=0.98\): rain almost always wets the ground.
\(P(R)=0.60\): a \(60\%\) chance that it’s raining.
\(P(W\mid \neg R)=0.10\): sometimes the ground gets wet without rain.
This time we need to calculate \(P(R|W)\). For this we first need \(P(W)\), which, as we already know, is \(0.628\). Now we use Bayes’s theorem:
\[ P(R|W) = \frac{P(W|R)P(R)}{P(W)} = \frac{0.98 \times 0.6}{0.628} \]
This gives us \(P(R|W) = 0.936\), which is, of course, short of 1. But the key insight comes when we set \(P(W|R) = 1\) and observe \(W\), mirroring the deductive argument. Here’s Bayes’s theorem again, this time with \(P(W)\) spelled out:
\[ P(R|W) = \frac{P(W|R)P(R)}{P(W|R)P(R) + P(W|\neg R)P(\neg R)} \]
We can rewrite it as
\[ P(R|W) = 1- \frac{P(W|\neg R)P(\neg R)}{P(W|R)P(R) + P(W|\neg R)P(\neg R)} \]
We still don’t get \(P(R\mid W)=1\) unless we also assume \(P(W\mid \neg R)=0\), that is, that rain is the only possible cause of wetness. This additional assumption is what makes affirming the consequent fallacious: observing the consequent doesn’t prove the antecedent unless the antecedent is the unique cause. We could of course set \(P(R) = 1\) to make the maths work (because then \(P(\neg R) = 0\)), but that would be a petitio principii (assuming that it is raining in order to prove that it is raining).
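The same arithmetic in code makes the point vivid (a sketch; the function name is mine):

```python
def p_r_given_w(p_w_given_r, p_r, p_w_given_not_r):
    """Bayes's theorem for P(R|W), with P(W) expanded by total probability."""
    numer = p_w_given_r * p_r
    return numer / (numer + p_w_given_not_r * (1 - p_r))

print(round(p_r_given_w(0.98, 0.60, 0.10), 3))  # 0.936: strong, not certain

# Even P(W|R) = 1 is not enough; certainty also requires P(W|¬R) = 0.
print(p_r_given_w(1.0, 0.60, 0.10))  # still short of 1
print(p_r_given_w(1.0, 0.60, 0.0))   # 1.0
```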
Here’s a modus tollens for good measure:
If it is raining, the ground is wet (\(P(W|R) = 1\)).
The ground is not wet.1
Therefore it is not raining (\(P(R|\neg W) = 0\)).
By Bayes’s theorem:
\[ P(R|\neg W) = \frac{P(\neg W|R)P(R)}{P(\neg W)} \]
Since \(P(W|R)=1\), \(P(\neg W|R) = 0\). The equation becomes
\[ P(R|\neg W) = \frac{0}{P(\neg W)} = 0 \]
This is exactly the conclusion of our modus tollens.
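A small numerical check confirms it (the function name is mine):

```python
def p_r_given_not_w(p_w_given_r, p_r, p_w_given_not_r):
    """Posterior P(R|¬W) by Bayes's theorem."""
    p_not_w_given_r = 1 - p_w_given_r  # 0 when P(W|R) = 1
    p_not_w = p_not_w_given_r * p_r + (1 - p_w_given_not_r) * (1 - p_r)
    return p_not_w_given_r * p_r / p_not_w

# With P(W|R) = 1 the numerator vanishes, whatever the prior P(R) is.
print(p_r_given_not_w(1.0, 0.60, 0.10))  # 0.0
```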
This reinforces a broader philosophical view. If truth is inductively grounded — if our sense of what counts as a ‘valid’ inference is shaped by biological evolution, social learning, and repeated exposure to patterns that work — then deductive logic begins to look like an abstraction we carved out of those long-run inductive regularities. It’s a polished model of our reasoning habits, not a metaphysical fixture of the universe.
On this view, the usual distinction between induction and deduction starts to look artificial. What we call deduction is simply inductive reasoning that has stabilised so thoroughly that we treat its patterns as inviolable. When the probabilities converge to 1, the boundary between the two collapses.
The result is a unified picture of reasoning: one engine, not two. Deduction is what induction looks like under ideal conditions, a process rendered frictionless, certain and precise. But the underlying source of its authority is the same: experience distilled into stable expectations, and expectations shaped into structured rules. I have written more about this here.
Footnotes
Premise (2) should be understood as conditioning on the observed event \(\neg W\), not as asserting \(P(W) = 0\). Setting \(P(W) = 0\) globally would contradict premise (1): if \(P(W|R) = 1\) and \(P(W) = 0\), then \(P(R)\) must also equal zero, collapsing the entire model. The correct interpretation treats \(\neg W\) as evidence we condition on, preserving the distinction between prior probabilities (the model) and observations (the evidence). We then compute the posterior \(P(R|\neg W)\) by Bayesian updating. In other words, \(P(W)\) needs to be treated as a prior distribution here for Bayes’s theorem to work.↩︎