The Paradox of the Ravens

A Short Course in Bayesian Philosophy

 

In The Problems of Philosophy, Bertrand Russell points out that men, and for that matter, animals, are strongly inclined to make predictions about future events based upon what has transpired in the past. We are tempted to believe that the sun will rise in the morning in no small part because it has risen every morning prior. However, as Russell points out, the chicken who has been fed every morning by the farmer will undoubtedly be shocked when, instead, the farmer wrings its neck.

Thus, we should want to develop, as Russell puts it, 'more refined views as to the uniformity of nature'; that is, we should want to devise a scheme that will somehow gauge the predictive value of the evidence we observe. For instance, we would aspire to describe just how much (if at all) the observation of a sunrise confirms the hypothesis 'the sun always rises in the morning'.

Everything we will be talking about takes place in a 'Bayesian philosophy of science' setting, so before we begin we should elaborate on what this means and give a few definitions, without being too technical.

An introduction to Bayesian philosophy

The central idea will be the supposition that given a statement S about the nature of things, one should be able to assign a 'truth value' p(S) to S (with 0 ≤ p(S) ≤ 1) which is intended to measure the degree of certainty that one has that S accurately describes the true nature of things. It is worth mentioning that we intend p to be a genuine probability measure, satisfying the usual axioms. The mathematician will take note; others, read onward.

Examples I would be willing to assign a high truth value to the statement "I exist", a somewhat lower value to the statement "other people exist", and probably a lower value yet to the statement "an American flag is planted on the moon". Personally, when in 'Bayesian mode', I tend to reserve truth value 1 for statements about mathematics and logical tautologies. A Bayesian could, conceivably, outdo even our pal Immanuel Kant and not even cede truth value 1 to 'sensory-perceptive'-type statements, due to the Uncertainty Principle and what not.

One idea/problem that will occur frequently deals with how to 'update' one's 'personal probabilities'. That is, given a piece of evidence E and a statement S, how will/should p(S) change (if at all) given the evidence E? We will write p(S|E) in lieu of "the revised truth value of S given the evidence E". One will usually read p(S|E) as the probability of S given E. Again, p(*|E) should be a legitimate conditional probability in the mathematical sense.

Examples Given two fair coins, I would assign truth value 1/4 to the statement "both coins will land heads when flipped" (an exercise in elementary probability---there are 4 equally likely outcomes). Given that I observe the 'first' coin is a head, I would revise the truth value of this statement to be 1/2; given the 'first' coin is a tail, I would revise said probability to 0 (in the first case, 'it's all up to the second coin', and in the second case, observation of a tail shoots the hypothesis to hell). Given the 'evidence' that today is Wednesday, I would probably make no revision; I do not believe there is any correlation between day of the week and coin odds.
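The coin arithmetic above can be checked by brute force. The following sketch (my own illustration, purely for the reader's amusement) enumerates the four equally likely outcomes and counts:

```python
from fractions import Fraction

# Sample space for two fair coins: four equally likely outcomes.
outcomes = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

def p(event, given=None):
    """Probability of `event`, conditioned on `given` when one is supplied."""
    pool = [o for o in outcomes if given is None or given(o)]
    return Fraction(sum(1 for o in pool if event(o)), len(pool))

both_heads = lambda o: o == ("H", "H")
first_heads = lambda o: o[0] == "H"
first_tails = lambda o: o[0] == "T"

print(p(both_heads))                     # 1/4
print(p(both_heads, given=first_heads))  # 1/2
print(p(both_heads, given=first_tails))  # 0
```

Conditioning simply shrinks the pool of outcomes under consideration, which is exactly the revision described above.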

The crux of the matter deals with how to properly update these probabilities. Without care, one can run into deep trouble. Consider the following examples.

Example 1 I will credit this to Swinburne (1971). Consider the statement "All mice live in locations other than my house". Now, one might say that as I go walking the earth (outside of my house) and observe mice in locations other than my house, I should increase my degree of belief in the given statement. However, if I return home from my earth-walking and see a swarm of mice 3 inches from my doorstep, this piece of evidence seems to conform to the original hypothesis, but I may very well say to myself "I have cause to doubt the truth of the original statement" (due to all the mice in close proximity to my house). My treatment of the Ravens Paradox will deal with issues like this.

Example 2 For a more mathematical example, I will direct the reader to examine a problem of 'updating' in my treatment of The Monty Hall Problem. Again, the theme is a non-intuitive implication of a piece of evidence.
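For readers who do not wish to chase the reference, the flavor of the problem can be seen in a quick simulation (a sketch of my own devising, not taken from that treatment): a prize sits behind one of three doors, the host opens a losing door you did not pick, and switching wins roughly twice as often as staying.

```python
import random

def monty_trial(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that hides no car and was not picked.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
stay = sum(monty_trial(False, rng) for _ in range(n)) / n
switch = sum(monty_trial(True, rng) for _ in range(n)) / n
print(f"stay wins ~{stay:.3f}, switch wins ~{switch:.3f}")  # roughly 1/3 vs 2/3
```

The counter-intuitive point is precisely one of updating: the host's choice of door is itself evidence, and a naive '50-50' revision gets it wrong.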

Example 3 Suppose we are taken into a dimly lit room and told the room contains a hidden object that is either white or off-white. We are suddenly, unexpectedly, given a brief glance of the object. How do we update, that is how can we quantify our updating? Good question....

I invite the reader to cook up more examples for his/her own amusement.

The Ravens Paradox

This brings us to the Paradox of the Ravens, which might be stated succinctly as follows:

The three following principles of confirmation are not consistent with one another:

Principle 1 The observation that an object possesses properties A and B should confirm the proposition that "all objects of type A also possess property B".

Principle 2 If a proposition P is confirmed by evidence, that evidence should also confirm any proposition that is logically equivalent to P.

Principle 3 Observation of non-black non-ravens should not confirm the hypothesis that "if an object is a raven, then it is black".

We think of a piece of evidence E as confirming S if p(S|E) > p(S). Informally, the evidence 'boosts' our confidence in the truthfulness of S.
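To make this definition concrete, here is a toy Bayesian computation (the numbers are entirely my own, chosen for illustration): give H a prior of 1/2, and suppose that under the alternative hypothesis only half of all ravens are black. Then observing that a randomly sampled raven is black, call it E, boosts the truth value of H.

```python
from fractions import Fraction

# Toy model: prior p(H) = 1/2; under H every raven is black,
# under not-H only half of ravens are black.
p_H = Fraction(1, 2)
p_E_given_H = Fraction(1)        # all ravens black under H
p_E_given_notH = Fraction(1, 2)  # half black under the alternative

# Total probability of the evidence, then Bayes' theorem.
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)
p_H_given_E = p_E_given_H * p_H / p_E

print(p_H_given_E)        # 2/3
print(p_H_given_E > p_H)  # True: E confirms H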

Henceforth, let us refer to the hypothesis "if an object is a raven, then it is black" as the "Ravens Hypothesis", or, simply, H.

It is worth mentioning that Principle 1 is often referred to as Nicod's condition.

One's intuition might dictate that all three of these principles seem perfectly plausible. As our good friend Mr. Russell points out, we as humans seem to be very drawn to the narcotic that is Nicod's condition; it seems reasonable that observation of a black raven should confirm the Ravens Hypothesis. Likewise, Principle 3 looks enticing; 'why should observation of a yellow pencil confirm the Ravens Hypothesis?' one might ask. And, seemingly, no reasonable person would reject Principle 2. However, herein lies the problem. Since the statement "all ravens are black" is logically equivalent to the statement H', "all non-black objects are non-ravens", we see the paradox (to elucidate: by Principle 1, observation of a non-black non-raven confirms H'; since H' is equivalent to H, Principle 2 demands that this observation also confirm H, contradicting Principle 3).

(As an aside, the logical principle at work here is the principle of contraposition, i.e., that the statement 'if A then B' is equivalent to the statement 'if not B, then not A'. As an illustration, let us suppose that the statement 'all cities named Chicago are in Illinois' is true. One can convince oneself that the statement 'if a city is not in Illinois, it is not named Chicago' must be true also.)
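The contraposition equivalence can even be verified mechanically; this small sketch (mine, for the reader's amusement) checks it on all four truth assignments.

```python
from itertools import product

def implies(a, b):
    # Material implication: "if a then b" fails only when a holds and b does not.
    return (not a) or b

# "if A then B" and "if not B then not A" agree on every truth assignment.
for a, b in product([True, False], repeat=2):
    assert implies(a, b) == implies(not b, not a)
print("contraposition holds on all four assignments")
```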

It is worth mentioning that this really is a paradox in the according-to-Webster's sense. That is, there is an internal inconsistency here. Non-mathematicians often misuse the word "paradox". For instance, a familiar problem in physics, which deals with relativistic effects and time dilation, is often referred to as the "Twins paradox". However, upon close inspection, one realizes that there is no paradox in sight; a lack of understanding of the nature of relativistic effects might cause one to pause in confusion, but no inconsistency exists in the aforementioned problem.

However, we digress.

There have been many 'solutions' proposed to the Ravens Paradox. Many by knowledgeable people, many not. As for what we mean by a 'solution' to this problem, we will be somewhat minimalistic; we will be satisfied if we can identify which of Principles 1 and 3 is false, and provide supporting evidence one way or the other.

But here is where things get somewhat technical. If you wish, you may read my summary of the Ravens Paradox. Some knowledge of elementary probability will be needed to understand the technical details, but laymen might find some of the examples and commentary interesting.