The Logic of Scientific Discovery by Karl R. Popper

My rating: 4 of 5 stars

We do not know: we can only guess.

Karl Popper originally wrote Logik der Forschung (The Logic of Research) in 1934. This original version—published in haste to secure an academic position and escape the threat of Nazism (Popper was of Jewish descent)—was heavily condensed at the publisher’s request; and because of this, and because it remained untranslated from the German, the book did not receive the attention it deserved. That attention had to wait until 1959, when Popper finally released a revised and expanded English translation. Yet the condensation and subsequent expansion have left their mark on the book. Popper makes his most famous point within the first few dozen pages; much of the rest is given over to dead controversies, criticisms and rejoinders, technical appendices, and extended footnotes. It does not make for the most graceful reading experience.

This hardly matters, however, since it is here that Popper put forward what has arguably become the most famous concept in the philosophy of science: falsification.

This term is widely used; but its original justification is not, I believe, widely understood. Popper’s doctrine must be understood as a response to inductivism. In 1620 Francis Bacon released his brilliant Novum Organum. Its title alludes to Aristotle’s Organon, a collection of logical treatises mainly concerned with how to make accurate deductions. This Aristotelian method—centered on the syllogism, which derives conclusions from given premises—dominated the study of nature for millennia, with precious little to show for it. Francis Bacon hoped to change all that with his new doctrine of induction. Instead of beginning with premises (‘All men are mortal’) and reasoning to conclusions (‘Socrates is mortal’), the investigator must begin with experiences (‘Socrates died,’ ‘Plato died,’ etc.) and then generalize a conclusion (‘All men are mortal’). This was how science was to proceed: from the specific to the general.

This seemed all fine and dandy until, in 1739, David Hume published his A Treatise of Human Nature, in which he explained his infamous ‘problem of induction.’ Here is the idea. If you see one, two, three… a dozen… a thousand… a million white swans, and not a single black one, it is still illogical to conclude “All swans are white.” Even if you investigated every swan in the world but one, and they all proved white, you still could not conclude with certainty that the last one would be white. Aside from modus tollens (arguing from the falsity of a specific instance to the falsity of a generalization), there is no logically justifiable way to proceed from the specific to the general. To this argument, many are tempted to respond: “But we know from experience that induction works. We generalize all the time.” Yet this is to use induction to prove that induction works, which is circular. Hume’s problem of induction has proven to be a stumbling block for philosophers ever since.

In the early part of the 20th century, the doctrine of logical positivism arose in the philosophical world, particularly in the ‘Vienna Circle’. This had many proponents and many forms, but the basic idea, as explained by A.J. Ayer, is the following. The meaning of a sentence consists in its method of verification; and verification is performed through experience. Thus the sentence “The cat is on the mat” can be verified by looking at the mat; it is a meaningful utterance. But the sentence “The world is composed of mind” cannot be verified by any experience; it is meaningless. Using this doctrine the positivists hoped to eliminate all metaphysics. Unfortunately, however, the doctrine also eliminates human knowledge, since, as Hume showed, generalizations can never be verified. No experience corresponds, for example, to the statement: “Gravitational attraction is proportional to the product of the masses and inversely proportional to the square of the distance between them,” since this is an unlimitedly general statement, and experiences are always particular.
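To see why no experience could verify such a statement, it helps to write the law out. Newton’s law of universal gravitation (the standard formulation, not an equation given in the book) quantifies over every pair of masses at every distance, while any observation concerns only one pair at one moment:

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

Here \(m_1\) and \(m_2\) are the two masses, \(r\) the distance between them, and \(G\) the gravitational constant. A measurement can confirm a particular instance of this equation, but never the unbounded generalization itself.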

Karl Popper’s falsificationism is meant to solve this problem. First, it is important to note that Popper is not, like the positivists, proposing a criterion of ‘meaning’. That is to say, for Popper, unfalsifiable statements can still be meaningful; they just do not tell us anything about the world. Indeed, he continually notes how metaphysical ideas (such as Kepler’s idea that circles are more ‘perfect’ than other shapes) have inspired and guided scientists. This is itself an important distinction, because it prevents him from falling into the same paradox as the positivists. For if only statements with empirical content have meaning, then the statement “only statements with empirical content have meaning” is itself meaningless. Popper, for his part, regarded himself as an enemy of linguistic philosophy and considered the problem of epistemology quite distinct from language analysis.

To return to falsification, Popper’s fundamental insight is that verification and falsification are not symmetrical. While no general statement can be proved from a specific instance, a general statement can indeed be disproved by a specific instance. A thousand white swans do not prove that all swans are white; but one black swan disproves it. (This is the aforementioned modus tollens.) All this may seem trivial; but as Popper realized, it changes the nature of scientific knowledge as we know it. For science, then, is far from what Bacon imagined it to be—a carefully sifted catalogue of experiences, a collection of well-founded generalizations—and is rather a collection of theories which spring, as it were, from the imagination of the scientist, in the hope of uniting several observed phenomena under one hypothesis. Or to put it more bluntly: a good scientific theory is a guess that has not yet proved wrong.
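The asymmetry can be put schematically (my notation, not Popper’s). Let \(H\) be the hypothesis “all swans are white” and \(O\) an observation statement it entails (“this swan is white”). Inferring \(H\) from \(O\) commits the fallacy of affirming the consequent, while refuting \(H\) from \(\neg O\) is a valid modus tollens:

```latex
\text{Invalid (affirming the consequent):}\quad H \to O,\quad O,\quad \therefore\ H
```

```latex
\text{Valid (modus tollens):}\quad H \to O,\quad \neg O,\quad \therefore\ \neg H
```

No number of true observation statements licenses the first inference; a single false one suffices for the second.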

With his central doctrine established, Popper goes on to the technicalities. He discusses what composes the ‘range’ or ‘scope’ of a theory, and how some theories can be said to encompass others. He provides an admirable justification for Occam’s Razor—the preference for simpler over more complex explanations—since theories with fewer parameters are more easily falsified and thus, in his view, more informative. The biggest section is given over to probability. I admit that I had some difficulty following his argument at times, but the gist of his point is that probability must be interpreted ‘objectively,’ as frequency distributions, rather than ‘subjectively,’ as degrees of certainty, in order to be falsifiable; and also that the statistical results of experiments must be reproducible in order to avoid the possibility of statistical flukes.
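Popper’s parameter-counting argument for simplicity can be illustrated with a curve-fitting sketch (my own example, though Popper uses similar geometrical cases). Compare a linear and a quadratic hypothesis:

```latex
\underbrace{y = ax + b}_{\text{2 parameters: 3 points can refute it}}
\qquad
\underbrace{y = ax^2 + bx + c}_{\text{3 parameters: 4 points needed to refute it}}
```

Two data points fix the line, so a third can already contradict it; the parabola can always be fitted to any three points, and only a fourth can refute it. The simpler theory forbids more possible observations, and on Popper’s account is therefore richer in empirical content.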

All this leads up to a strangely combative section on quantum mechanics. Popper was apparently in the same camp as Einstein, and was put off by Heisenberg’s uncertainty principle. Like Einstein, Popper was a realist and did not like the idea that a particle’s properties could be actually undetermined; he wanted to see the uncertainty of quantum mechanics as a byproduct of measurement or of ‘hidden variables’—not as representing something real about the universe. And like Einstein (though less famously) Popper proposed an experiment to decide the issue. The original experiment, as described in this book, was soon shown to be flawed; but a revised version was finally conducted in 1999, after Popper’s death. Though the experiment agreed with Popper’s prediction (showing that measuring one of a pair of entangled photons does not affect the other), it had no bearing on Heisenberg’s uncertainty principle, which forbids simultaneously precise measurements of conjugate properties of a single particle, not measurements on a pair of particles.

Incidentally, it is difficult to see why Popper is so uncomfortable with the uncertainty principle. Given his own dogma of falsifiability, the belief that nature is inherently deterministic (and that probabilistic theories simply reflect gaps in our knowledge) should itself be discarded as metaphysical. This is just one example of how Popper’s personality was out of harmony with his own doctrines. An advocate of the open society, he was famously authoritarian in his private life, which left him increasingly isolated. This is neither here nor there, but it is an interesting comment on the human animal.

Popper’s doctrine, like all great ideas, has proven both influential and controversial. For my part I think falsification a huge advance over Bacon’s induction or the positivists’ verification. And despite the complications, I think falsifiability a crucial test for distinguishing not only science from pseudo-science, but all dependable knowledge from myth. For pseudo-science and myth alike distinguish themselves by fitting the available data admirably while resisting falsification. Freud’s theories, for example, can accommodate themselves to any set of facts we throw at them; likewise for intelligent design, belief in supernatural beings, or conspiracy theories. All of these seem to explain everything—and in a way they do, since they fit the observable data—but really explain nothing, since they can accommodate any new observation.

There are some difficulties with falsification, of course. The first is observation. For what we observe, or even what we count as an ‘observation’, is colored by our background beliefs. Whether to regard a dot in the sky as a plane, a UFO, or an angel is shaped by the beliefs we already hold; thus it is possible to disregard observations that run counter to our theories, rather than falsifying the theories. What is more, theories never exist in isolation, but in an entire context of beliefs; so if one prediction is definitively falsified, it can still be unclear what we must change in our interconnected edifice of theories. Further, it is rare for experimental predictions to agree exactly with results; usually they are approximately correct. But where do we draw the line between falsification and approximate correctness? And last, if we formulate a theory which withstands test after test, predicting their results with extreme accuracy time and again, must we still regard the theory as a provisional guess?

To give Popper credit, he responds to all of these points in this work, though perhaps not with enough discussion. And none of these criticisms changes the fact that so much of the philosophy of science written after Popper has taken his work as a starting point, whether attempting to amplify, modify, or (dare I say it?) falsify his claims. For my part, though I was often bored by the dry style and baffled by the technical explanations, I found myself admiring Popper’s careful methodology: responding to criticisms, making fine distinctions, building up his system piece by piece. Here is a philosopher deeply committed to the ideal of rational argument and deeply engaged with understanding the world. I am excited to read more.

View all my reviews
