Review: Almagest

The Almagest: Introduction to the Mathematics of the Heavens by Ptolemy

My rating: 4 of 5 stars

… it is not fitting even to judge what is simple in itself in heavenly things on the basis of things that seem to be simple among us.

In my abysmal ignorance, I had for years assumed that tracking the orbits of the sun and planets would be straightforward. All you needed was a starting location, a direction, and the daily speed—and, with some simple arithmetic and a bit of graph paper, it would be clear as day. Attempting to read Ptolemy has revealed the magnitude of my error. Charting the heavenly bodies is a deviously complicated affair; and Ptolemy’s solution must rank as one of the greatest intellectual accomplishments of antiquity—fully comparable with the great scientific achievements of the European Enlightenment. Indeed, Otto Neugebauer, the preeminent scholar of ancient astronomy, went so far as to say:

One can perfectly well understand the ‘Principia’ without much knowledge of earlier astronomy but one cannot read a single chapter in Copernicus or Kepler without a thorough knowledge of Ptolemy’s “Almagest”. Up to Newton all astronomy consists in modifications, however ingenious, of Hellenistic astronomy.

With more hope than sense, I cracked open my copy of The Great Books of the Western World, which has a full translation of the Almagest in the 16th volume. Immediately repulsed by the text, I then acquired a students’ edition of the book published by the Green Lion Press. This proved to be an excellent choice. Through introductions, preliminaries, footnotes, and appendices—not to mention generous omissions—this edition attempts to make Ptolemy accessible to a diligent college student. Even so, for someone with my background, attaining a thorough knowledge of this text would still require months of dedicated study, with a teacher as a guide. For the text is difficult in numerous ways.

Most obviously, this book is full of mathematical proofs and calculations, which are not exactly my strong suit. Ptolemy’s mathematical language—relying on the Greek geometrical method—will be unfamiliar to students who have not read some Euclid; and even if it is familiar, it proves cumbrous for the sorts of calculations demanded by the subject. To make matters worse, Ptolemy employs the sexagesimal system (based on multiples of 60) for fractions, so all his numbers must be converted into our decimals for calculation (a conversion sketched in the short snippet below). What is more, even the names of the months Ptolemy uses are different, bearing their Egyptian forms (Thoth, Phaöphi, Athur, etc.), since Ptolemy was an Alexandrian Greek. Yet even if we put all these technical obstacles to the side, we are left with Ptolemy’s oddly infelicitous prose, which the translator describes thus:

In general, there is a sort of opacity, even awkwardness, to Ptolemy’s writing, especially when he is providing a larger frame for a topic or presenting a philosophical discussion.

Thus, even in the non-technical parts of the book, Ptolemy’s writing tends to be headache-inducing. All this combines to form an unremitting slog. Since my interest in this book was amateurish, I skimmed and skipped liberally. Yet this text is so rich that, even proceeding in such a dilettantish fashion, I managed to learn a great deal.
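As an aside, the sexagesimal notation is less forbidding than it sounds. Here is a minimal sketch in Python (my own illustration; nothing like it appears in the text) of the conversion mentioned earlier: a value written 23;51,20 means 23 + 51/60 + 20/60², which happens to be Ptolemy’s figure for the obliquity of the ecliptic.

```python
def sexagesimal_to_decimal(whole, *fractions):
    """Convert a sexagesimal value (a whole part plus base-60
    fractional places) into an ordinary decimal number."""
    value = float(whole)
    for place, digit in enumerate(fractions, start=1):
        value += digit / 60 ** place
    return value

# Ptolemy's obliquity of the ecliptic, 23;51,20 degrees:
print(sexagesimal_to_decimal(23, 51, 20))  # 23.8555... degrees
```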

Ptolemy’s Almagest, like Euclid’s Elements, proved so comprehensive and conclusive when it was published that it rendered nearly all previous astronomical work obsolete or superfluous. For this reason, we know little about Ptolemy’s predecessors, since there was little point in preserving their work after Ptolemy summed it up in such magnificent fashion. As a result it is unclear how much of this book is original and how much is simply adapted. As Ptolemy himself admits, he owes a substantial debt to the astronomer Hipparchus, who lived around 200 years earlier. Yet it does seem that Ptolemy originated the novel way of accounting for the planets’ position and speed, which he puts forth in later books.

Ptolemy begins by explaining the method by which he will measure chords; this leads him to construct one of the most precise trigonometric tables of antiquity. Later, Ptolemy goes on to produce several proofs in spherical trigonometry, which allow him to measure arcs and angles on the surface of a sphere, making this book an important source for Greek trigonometry as well as astronomy. Ptolemy also employs Menelaus’ Theorem, which uses the fixed proportions of a triangle to establish ratios. (From this I see that triangles are marvelously useful shapes, since they are the only polygon which is rigid—that is, the angles cannot be altered without also changing the ratios of the sides, and vice versa. This is also, by the way, what makes triangles such strong structural components.)
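For the curious, the relation between Ptolemy’s chords and the modern sine function is simple, and a few lines of Python (my own gloss, with invented names) can stand in for his enormous table: in a circle of radius 60, Ptolemy’s standard, the chord subtending a central angle θ is 2 · 60 · sin(θ/2).

```python
import math

R = 60  # Ptolemy's standard radius

def chord(theta_degrees):
    """Length of the chord subtending the given central angle
    in a circle of radius 60 (Ptolemy's crd, in modern terms)."""
    return 2 * R * math.sin(math.radians(theta_degrees) / 2)

print(chord(60))  # 60.0 -- the side of an inscribed hexagon equals the radius
print(chord(90))  # ~84.85 -- the side of an inscribed square, 60 * sqrt(2)
```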

Ptolemy gets down to business by analyzing the sun’s motion. This is tricky for several reasons. For one, the sun does not travel parallel to the “fixed stars” (so called because the stars do not change position relative to one another), but rather at an angle, which Ptolemy calculates to be around 23 degrees. We now know this is due to earth’s axial tilt; for Ptolemy it was the obliquity of the ecliptic (the angle of the sun’s path). Also, the angle at which the sun travels through the sky (straight overhead or nearer the horizon) is determined by one’s latitude; this also determines the seasonal shifts in day-length; and during these shifts, the sun rises at different points on the horizon. To add to these already daunting variables, the sun also shifts in speed during the course of the year. And finally, Ptolemy had to factor in the precession of the equinoxes—the equinoxes’ gradual westward drift along the ecliptic from year to year.

The planets turn out to be even more complex. For they all exhibit anomalies in their orbits which entail further complications. Venus, for example, not only speeds up and slows down, but also seems to go forwards and backwards along its orbit. This leads Ptolemy to the adoption of epicycles—little circles which travel along the greater circle, called the “deferent,” of the planet’s orbit. But to preserve the circular motion of the deferent, Ptolemy must place its center (called the “eccentric”) away from earth, in empty space. Then, Ptolemy introduces another imaginary point, around which the planet travels with constant angular velocity: this is called the “equant,” and it too lies in empty space. Thus the planet’s motion was circular around one point (the eccentric) and uniform around another (the equant), neither of which coincides with earth (so much for geocentric astronomy). In addition to all this, the orbit of Venus is not exactly parallel with the sun’s orbit, but tilted, and its tilt wobbles throughout the year. For Ptolemy to account for all this using only the most primitive observational instruments and without the use of calculus or analytic geometry is an extraordinary feat of patience, vision, and drudgery.
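To make the geometry concrete, here is a toy sketch of a deferent-plus-epicycle model. The parameters are invented for illustration and bear no relation to Ptolemy’s actual values; the path traced is the epitrochoid mentioned further below, complete with retrograde loops.

```python
import math

def planet_position(t, R=10.0, r=3.0, w_def=1.0, w_epi=5.0):
    """Toy deferent-and-epicycle model: the epicycle's center rides the
    deferent at angular speed w_def, while the planet rides the epicycle
    around that moving center at angular speed w_epi."""
    cx = R * math.cos(w_def * t)      # epicycle's center, on the deferent
    cy = R * math.sin(w_def * t)
    x = cx + r * math.cos(w_epi * t)  # the planet, riding the epicycle
    y = cy + r * math.sin(w_epi * t)
    return x, y

# Sampled over one circuit of the deferent, the path periodically loops
# backward: the retrograde motion the epicycles were invented to save.
path = [planet_position(t / 100) for t in range(629)]
```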

Even after writing all this, I am not giving a fair picture of the scope of Ptolemy’s achievement. This book also includes an extensive star catalogue, with the location and brightness of over one thousand stars observable with the naked eye. He argues strongly for earth’s sphericity (so much for a flat earth) and even offers a calculation of earth’s diameter (which was 28% too small). Ptolemy also calculates the distance from the earth to the moon, using the lunar parallax (the difference in the moon’s apparent position when seen from different places on earth), which comes out to the quite accurate figure of 59 earth radii. And all of this is set forth in dry, sometimes baffling prose, accompanied by pages of proofs and tables. One can see why later generations of astronomers thought there was little to add to Ptolemy’s achievement, and why Arabic translators dubbed it “the greatest” (from which we get the English name).
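The parallax reasoning, at least in modern trigonometric dress, fits in a few lines. This back-of-envelope sketch is not Ptolemy’s actual procedure, and the parallax figure is the modern mean value:

```python
import math

parallax_deg = 0.97  # roughly the moon's mean horizontal parallax, in degrees

# If the earth's radius subtends about a degree as seen from the moon,
# the moon's distance in earth radii is roughly 1 / sin(parallax).
distance_earth_radii = 1 / math.sin(math.radians(parallax_deg))
print(distance_earth_radii)  # ~59 earth radii, matching Ptolemy's figure
```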

A direct acquaintance with Ptolemy belies his popular image as a metaphysical pseudo-scientist, foolishly clinging to a geocentric model, using ad-hoc epicycles to account for deviations in his theories. To the contrary, Ptolemy scarcely ever touches on metaphysical or philosophical arguments, preferring to stay in the precise world of figures and proofs. And if science consists in predicting phenomena, then Ptolemy’s system was clearly the best scientific theory around for its range and accuracy. Indeed, a waggish philosopher might dismiss the whole question of whether the sun or the earth was at the “center” as entirely metaphysical (is it falsifiable?). Certainly it was not mere prejudice that kept Ptolemy’s system alive for so long.

Admittedly, Ptolemy does occasionally include airy metaphysical statements:

We propose to demonstrate that, just as for the sun and moon, all the apparent anomalistic motions of the five planets are produced through uniform, circular motions; these are proper to the nature of what is divine, but foreign to disorder and variability.

Yet notions of perfection seem hard to justify, even within Ptolemy’s own theory. The combined motions of the deferent and the epicycle do not make a circle, but a wavy shape called an epitrochoid. And the complex world of interlocking, overlapping, slanted circles—centered on imaginary points, riddled with deviations and anomalies—hardly fits the stereotypical image of an orderly Ptolemaic world.

It must be said that Ptolemy’s system, however comprehensive, does leave some questions tantalizingly unanswered. For example, why do Mercury and Venus stay within a definite distance from the sun, and travel along at the same average speed as the sun? And why are the anomalies of the “outer planets” (Mars, Jupiter, Saturn) sometimes related to the sun’s motion, and sometimes not? All this is very easy to explain in a heliocentric model, but rather baffling in a geocentric one; and Ptolemy does not even attempt an explanation. Even so, I think any reader of this volume must come to the conclusion that this is a massive achievement—and a lasting testament to the heights of brilliance and obscurity that a single mind can reach.

View all my reviews

Review: The New Organon

The New Organon by Francis Bacon

My rating: 4 of 5 stars

Since I’ve lately read Aristotle’s original, I thought I’d go ahead and read Bacon’s New Organon. The title more or less says it all. For this book is an attempt to recast the method of the sciences in a better mold. Whereas Aristotle spends pages and pages enumerating the various types of syllogisms, Bacon dismisses it all with one wave of the hand—away with such scholarly nonsense! Because Aristotle is so single-mindedly deductive, his scientific research came to naught; or, as Bacon puts it, “Aristotle, who made his natural philosophy a mere bond servant to his logic, thereby [rendered] it contentious and well-nigh useless.”

What is needed is not deduction—which draws trivial conclusions from absurd premises—but induction. More specifically, what is needed is a great number of experiments, the results of which the careful scientist can sort into air-tight conclusions. Down with the syllogism; up with experiment. Down with the schoolmen; up with the scientists.

In my (admittedly snotty) review of Bacon’s Essays, I remarked that he would have done better to have written a work entirely in aphorisms. Little did I know that Bacon did just that, and it is this book. Whatever his defects as a politician or a philosopher, Bacon is the undisputed master of the pithy, punchy maxim. In fact, his writing style can be almost sickening, so dense is it with aphorism, so rich is it with metaphor, so replete is it with compressed thought.

In the first part of his New Organon all of the defects of Bacon’s style are absent, and all of his strengths are present in full force. Indeed, if this work consisted of only the first part, it would have merited five stars, for it is a tour de force. Bacon systematically goes through all of the errors the human mind is prone to when investigating nature, leaving no stone unturned and no vices unexamined, damning them all in epigram after epigram. The reader hardly has time to catch his breath from one astonishing insight, when Bacon is on to another.

Among these insights are, of course, Bacon’s famous four idols. We have the Idols of the Tribe, which consist of the errors humans are wont to make by virtue of their humanity. For our eyes, our ears, and our very minds distort reality in a systematic way—something earlier philosophers had, so far as I know, neglected to account for. We have then the Idols of the Cave, which are the foibles of the individual person, over and above the common limitations of our species. These may include pet theories, preferences, accidents of background, and peculiarities of taste. And then finally we have the Idols of the Market Place, which are caused by the deceptive nature of language and words, as well as the Idols of the Theater, which consist of the various dogmas present in the universities and schools.

Bacon also displays a remarkable insight into psychology. He points out that humans are pattern-seeking animals, which leads us sometimes to see patterns which aren’t there: “The human understanding is of its own nature prone to suppose the existence of more order and regularity in the world than it finds.” Bacon also draws the distinction, made so memorable in Isaiah Berlin’s essay, between foxes and hedgehogs: “… some minds are stronger and apter to mark the differences of things, others to mark their resemblances.” And he gives, in terms no psychologist could fault, a description of confirmation bias:

The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.

Part two, on the other hand, is a tedious, rambling affair, which makes the patient reader almost forget the greatness of the first half. Here, Bacon moves on from condemning the errors of others to setting up his own system. In his opinion, scientific enquiry is a simple matter of tabulation: make a table of every situation in which a given phenomenon is always found, and then make a table of every situation in which a given phenomenon is never found; finally, make a table of every situation in which said phenomenon is sometimes found, shake well, and out comes your answer.

The modern reader will not recognize the scientific method in this process. For we now know that Bacon’s induction is not sufficient. (Though he does use his method to draw an accurate conclusion about the nature of heat: “Heat is a motion, expansive, restrained, and acting in its strife upon the smaller particles of bodies.”) What Bacon describes is more or less what we’d now call ‘natural history’, a gathering up of facts and a noting of regularities. But the scientific method proper requires the framing of hypotheses. The hypothesis is key, because it determines what facts need to be collected, and what relationship those facts will have with the theory in question. Otherwise, the buzzing world of facts is too lush and fecund to tabulate; there are simply too many facts. Furthermore, Bacon makes the somewhat naïve—though excusable, I think—assumption that a fact is simply a fact, whereas we now know that facts are basically meaningless unless contextualized; and, in science, it is the theory in question which contextualizes said facts.

The importance of hypotheses also makes deduction far more important than Bacon acknowledges. For the aspiring experimentalist must often go through a long chain of deductive reasoning before he can determine what experiment should be performed in order to test a theory. In short, science relies on both deductive and inductive methods, and the relationship of theory to data is far more intertwined than Bacon apparently thinks. (As a side note, I’d also like to point out that Bacon wasn’t much of a scientist himself; he brings up the Copernican view of the heliocentric solar system many times, only to dismiss it as ridiculous, and also seems curiously unaware of the other scientific advances of his day.)

In a review of David Hume’s Enquiry Concerning the Principles of Morals, I somewhat impertinently remarked that the English love examples—or, to use a more English word, instances. I hope not to offend any English readers, but Bacon confirms me in this prejudice—for the vast bulk of this work is a tedious enumeration of twenty-seven (yes, that’s almost thirty) types of ‘instances’ to be found in nature. Needless to say, this long and dry list of the different sorts of instances makes for both dull reading and bad philosophy, for I doubt any scientist in the history of the world ever made progress by sorting his results into one of Bacon’s categories.

So the brilliant, brash, and brazen beginning of this book fizzles out into pedantry that, ironically enough, rivals even Aristotle’s original Organon. So, to repeat myself, the title of this book more or less says it all.

View all my reviews

Review: The Structure of Scientific Revolutions

The Structure of Scientific Revolutions by Thomas S. Kuhn

My rating: 5 of 5 stars

Observation and experience can and must drastically restrict the range of admissible scientific belief, else there would be no science. But they cannot alone determine a particular body of such belief. An apparently arbitrary element, compounded of personal and historical accident, is always a formative ingredient of the beliefs espoused by a given scientific community at a given time.

This is one of those wonderfully rich classics, touching on many disparate fields and putting forward ideas that have become permanent fixtures of our mental furniture. Kuhn synthesizes insights from history, sociology, psychology, and philosophy into a novel conception of science—one which, though seemingly nobody agrees with it in full, has become remarkably influential. Indeed, this book made such an impact that the contemporary reader may have difficulty seeing why it was so controversial in the first place.

Kuhn’s fundamental conception is of the paradigm. A paradigm is a research program that defines a discipline, perhaps briefly, perhaps for centuries. This is not only a dominant theory, but a set of experimental methodologies, ontological commitments, and shared assumptions about standards of evidence and explanation. These paradigms usually trace their existence to a breakthrough work, such as Newton’s Principia or Lavoisier’s Elements; and they persist until the research program is thrown into crisis through stubborn anomalies (phenomena that cannot be accounted for within the theory). At this point a new paradigm may arise and replace the old one, such as the switch from Newton’s to Einstein’s system.

Though Kuhn is often spoken of as responding to Popper, I believe his book is really aimed at undermining the old positivistic conception of science, according to which science consists of a body of verified statements, which discoveries and innovations cause gradually to grow. What this view leaves out is the interconnection and interdependence between these beliefs, and the reciprocal relationship between theory and observation. Our background orients our vision, telling us where to look and what to look for; and we naturally do our best to integrate a new phenomenon into our preexisting web of beliefs. Thus we may extend, refine, and elaborate our vision of the world without undermining any of our fundamental theories. This is what Kuhn describes as “normal science.”

During a period of “normal science” it may be true that scientific knowledge gradually accumulates. But when the dominant paradigm reaches a crisis, and the community finds itself unable to accommodate certain persistent observations, a new paradigm may take over. This cannot be described as a mere quantitative increase in knowledge, but is a qualitative shift in vision. New terms are introduced, older ones redefined; previous discoveries are reinterpreted and given a new meaning; and in general the web of connections between facts and theories is expanded and rearranged. This is Kuhn’s famous “paradigm shift.” And since the new paradigm so reorients our vision, it will be impossible to directly compare it with the older one; it will be as if practitioners from the two paradigms speak different languages or inhabit different worlds.

This scandalized some, and delighted others, and for the same reason: that Kuhn seemed to be arguing that scientific knowledge is socially solipsistic. That is to say that scientific “truth” was only true because it was given credence by the scientific community. Thus no paradigm can be said to be objectively “better” than another, and science cannot be said to really “advance.” Science was reduced to a series of fashionable ideas.

Scientists were understandably peeved by the notion, and social scientists concomitantly delighted, since it meant their discipline was at the crux of scientific knowledge. But Kuhn repeatedly denied being a relativist, and I think the text bears him out. It must be said, however, that Kuhn does not guard against this relativistic interpretation of his work as much as, in retrospect, he should have. I believe this was because Kuhn’s primary aim was to undermine the positivistic, gradualist account of science—which had been all but universally held—and not to replace it with a fully worked-out theory of scientific progress of his own. (And this is ironic, since Kuhn himself argues that an old paradigm is never abandoned until a new paradigm takes its place.)

Though Kuhn does say a good deal about this, I think he could have emphasized more strongly the ways that paradigms contribute positively to reliable scientific knowledge. For we simply cannot look on the world as neutral observers; and even if we could, we would not be any the wiser for it. The very process of learning involves limiting possibilities. This is literally what happens to our brains as we grow up: the confused mass of neural connections is pruned, leaving only the ones which have proven useful in our environment. If our brains did not quickly and efficiently analyze environmental stimuli into familiar categories, we could hardly survive a day. The world would be a swirling, jumbled chaos.

Reducing ambiguities is so important to our survival that I think one of the primary functions of human culture is to further eliminate possibilities. For humans, being born with considerable behavioral flexibility, must learn to become inflexible, so to speak, in order to live effectively in a group. All communication presupposes a large degree of agreement among the members of a community; and since we are born lacking this, we must be taught fairly rigid sets of assumptions in order to create the necessary accord. In science this process is performed in a much more formalized way, but nevertheless its end is the same: to allow communication and cooperation via a shared language and a shared view of the world.

Yet this is no argument for epistemological relativism, any more than the existence of incompatible moral systems is an argument for moral relativism. While people commonly call themselves cultural relativists when it comes to morals, few people are really willing to argue that, say, unprovoked violence is morally praiseworthy in certain situations. What people mean by calling themselves relativists is that they are pluralists: they acknowledge that incompatible social arrangements can nevertheless be equally ethical. Whether a society has private property or holds everything in common, whether it is monogamous or polygamous, whether burping is considered polite or rude—these may vary, and yet create coherent, mutually incompatible, ethical systems. Furthermore, acknowledging the possibility of equally valid ethical systems also does not rule out the possibility of moral progress, as any given ethical system may contain flaws (such as refusing to respect certain categories of people) that can be corrected over time.

I believe that Kuhn would argue that scientific cultures may be thought of in the same pluralistic way: paradigms can be improved, and incompatible paradigms can nevertheless both have some validity. Acknowledging this does not force one to abandon the concept of “knowledge,” any more than acknowledging cultural differences in etiquette forces one to abandon the concept of “politeness.”

Thus accepting Kuhn’s position does not force one to embrace epistemological relativism—or, at least, not the strong variety, which reduces knowledge merely to widespread belief. I would go further, and argue that Kuhn’s account of science—or at least elements of his account—can be made to articulate even with the system of his reputed nemesis, Karl Popper. For both conceptions have the scientist beginning, not with observations and facts, but with certain arbitrary assumptions and expectations. This may sound unpromising; but these assumptions and expectations, by orienting our vision, allow us to realize when we are mistaken, and to revise our theories. The Baconian inductivist or the logical positivist, by beginning with a raw mass of data, has little idea how to make sense of it and thus no basis upon which to judge whether an observation is anomalous or not.

This is not where the resemblance ends. According to both Kuhn and Popper (though the former describes while the latter prescribes), when we are revising our theories we should if possible modify or discard the least fundamental part, while leaving the underlying paradigm unchanged. This is Kuhn’s “normal science.” So when irregularities were observed in Uranus’ orbit, scientists could either have discarded Newton’s theories (fundamental to the discipline) or the assumption that Uranus was the furthest planet in the solar system (a superficial fact); obviously the latter was preferable, and this led to the discovery of Neptune. Science could not survive if scientists too willingly overturned the discoveries and theories of their discipline. A certain amount of stubbornness is a virtue in learning.

Obviously, the two thinkers also disagree about much. One issue is whether two paradigms can be directly compared or definitively tested. Popper envisions conclusive experiments whose outcome can unambiguously decide whether one paradigm or another is to be preferred. There are some difficulties with this view, however, which Kuhn points out. One is that different paradigms may attach very different importance to certain phenomena. Thus for Galileo (to use Kuhn’s example) a pendulum is a prime exemplar of motion, while to an Aristotelian a pendulum is a highly complex secondary phenomenon, unfit to demonstrate the fundamental properties of motion. Another difficulty in comparing theories is that terms may be defined differently. Einstein said that massive objects bend space, but Newtonian space is not a thing at all and so cannot be bent.

Granting the difficulties of comparing different paradigms, I nevertheless think that Kuhn is mistaken in his insistence that they are as separate as two languages. I believe his argument rests, in part, on his conceiving of a paradigm as beginning with definitions of fundamental terms (such as “space” or “time”) which are circular (such as “time is that measured by clocks,” etc.); so that comparing two paradigms would be like comparing Euclidean and non-Euclidean geometry to see which is more “true,” though both are equally true to their own axioms (while mutually incompatible). Yet such terms in science do not merely define, but denote phenomena in our experience. Thus (to continue the example) while Euclidean and non-Euclidean geometries may both be equally valid according to their premises, they may not be equally valid according to how they describe our experience.

Kuhn’s response to this would be, I believe, that we cannot have neutral experiences, but all our observations are already theory-laden. While this is true, it is also true that theory does not totally determine our vision; and clever experimenters can often, I believe, devise tests that can differentiate between paradigms to most practitioners’ satisfaction. Nevertheless, as both Kuhn and Popper would admit, the decision to abandon one theory for another can never be a wholly rational affair, since there is no way of telling whether the old paradigm could, with sufficient ingenuity, be made to accommodate the anomalous data; and in any case a strange phenomenon can always be tabled as a perplexing but unimportant deviation for future researchers to tackle. This is how an Aristotelian would view Galileo’s pendulum, I believe.

Yet this fact—that there can be no objective, fool-proof criteria for switching paradigms—is no reason to despair. We are not prophets; every decision we take involves the risk that it will not pan out; and in this respect science is no different. What makes science special is not that it is purely rational or wholly objective, but that our guesses are systematically checked against our experience and debated within a community of dedicated inquirers. All knowledge contains an imaginative and thus an arbitrary element; but this does not mean that anything goes. To use a comparison, a painter working on a portrait will have to make innumerable little decisions during her work; and yet—provided the painter is working within a tradition that values literal realism—her work will be judged, not for the taste displayed, but for the perceived accuracy. Just so, science is not different from other cultural realms in lacking arbitrary elements, but in the shared values that determine how the final result is judged.

I think that Kuhn would assent to this; and I think it was only the widespread belief that science was as objective, asocial, and unimaginative as a camera taking a photograph that led him to emphasize the social and arbitrary aspects of science so strongly. This is why, contrary to his expectations, so many people read his work as advocating total relativism.

It should be said, however, that Kuhn’s position does alter how we normally think of “truth.” In this I also find him strikingly close to his reputed nemesis, Popper. For here is the Austrian philosopher on the quest for truth:

Science never pursues the illusory aim of making its answers final, or even probable. Its advance is, rather, towards the infinite yet attainable aim of ever discovering new, deeper, and more general problems, and of subjecting its ever tentative answers to ever renewed and ever more rigorous tests.

And here is what his American counterpart has to say:

Later scientific theories are better than earlier ones for solving puzzles in the often quite different environments to which they are applied. That is not a relativist’s position, and it displays the sense in which I am a convinced believer in scientific progress.

Here is another juxtaposition. Popper says:

Science is not a system of certain, or well-established, statements; nor is it a system which steadily advances towards a state of finality. Our science is not knowledge (episteme): it can never claim to have attained truth, or even a substitute for it, such as probability. … We do not know: we can only guess. And our guesses are guided by the unscientific, the metaphysical (though biologically explicable) faith in laws, in regularities which we can uncover—discover.

And Kuhn:

One often hears that successive theories grow ever closer to, or approximate more and more closely to, the truth… Perhaps there is some other way of salvaging the notion of ‘truth’ for application to whole theories, but this one will not do. There is, I think, no theory-independent way to reconstruct phrases like ‘really there’; the notion of a match between the ontology of a theory and its ‘real’ counterpart in nature now seems to me illusive in principle.

Though there are important differences, to me it is striking how similar their accounts of scientific progress are: the ever-increasing expansion of problems, or puzzles, that the scientist may investigate. And both thinkers are careful to point out that this expansion cannot be understood as an approach towards an ultimate “true” explanation of everything, and I think their reasons for saying so are related. For since Popper begins with theories, and Kuhn with paradigms—both of which stem from the imagination of scientists—their accounts of knowledge can never be wholly “objective,” but must contain the aforementioned arbitrary element. This necessarily leaves open the possibility that an incompatible theory may yet do an equal or better job in making sense of an observation, or that a heretofore undiscovered phenomenon may violate the theory. And this being so, we can never say that we have reached an “ultimate” explanation, where our theory can be taken as a perfect mirror of reality.

I do not think this notion jeopardizes the scientific enterprise. To the contrary, I think that science is distinguished from older, metaphysical sorts of enquiry in that it is always open-ended, and makes no claim to possessing absolute “truth.” It is this very corrigibility of science that is its strength.

This review has already gone on for far too long, and much of it has been spent riding my own hobby-horse without evaluating the book. Yet I think it is a testament to Kuhn’s work that it is still so rich and suggestive, even after many of its insights have been absorbed into the culture. Though I have tried to defend Kuhn from accusations of relativism and of undermining science, one must admit that this book has many flaws. One is Kuhn’s firm line between “normal” science and paradigm shifts. In his model, the first consists of mere puzzle-solving while the second involves a radical break with the past. But I think experience does not bear out this hard dichotomy; discoveries and innovations may be revolutionary to different degrees, which I think undermines Kuhn’s picture of science evolving as a punctuated equilibrium.

Another weakness of Kuhn’s work is that it does not do justice to the way that empirical discoveries may cause unanticipated theoretical revolutions. In his model, major theoretical innovations are the products of brilliant practitioners who see the field in a new way. But this does not accurately describe what happened when, say, DNA was discovered. Watson and Crick worked within the known chemical paradigm, and operated like proper Popperians in brainstorming and eliminating possibilities based on the evidence. And yet the discovery of DNA’s double helix, while not overturning any major theoretical paradigms, nevertheless had such far-reaching implications that it caused a revolution in the field. Kuhn has little to say about events like this, which shows that his model is overly simplistic.

I must end here, after thrashing about ineffectually in multiple disciplines in which I am not even the rankest amateur. What I hoped to re-capture in this review was the intellectual excitement I felt while reading this little volume. In somewhat dry (though not technical) academic prose, Kuhn caused a revolution still forceful enough to make me dizzy.

View all my reviews

Review: The Logic of Scientific Discovery

The Logic of Scientific Discovery by Karl R. Popper

My rating: 4 of 5 stars

We do not know: we can only guess.

Karl Popper originally wrote Logik der Forschung (The Logic of Research) in 1934. This original version—published in haste to secure an academic position and escape the threat of Nazism (Popper was of Jewish descent)—was heavily condensed at the publisher’s request; and because of this, and because it remained untranslated from the German, the book did not receive the attention it deserved. This had to wait until 1959, when Popper finally released a revised and expanded English translation. Yet this condensation and subsequent expansion have left their mark on the book. Popper makes his most famous point within the first few dozen pages; and much of the rest of the book is given over to dead controversies, criticisms and rejoinders, technical appendices, and extended footnotes. It does not make for the most graceful reading experience.

This hardly matters, however, since it is here that Popper put forward what has arguably become the most famous concept in the philosophy of science: falsification.

This term is widely used; but its original justification is not, I believe, widely understood. Popper’s doctrine must be understood as a response to inductivism. Now, in 1620 Francis Bacon released his brilliant Novum Organum. Its title alludes to Aristotle’s Organon, a collection of logical treatises, mainly focusing on how to make accurate deductions. This Aristotelian method—characterized by syllogisms: deriving conclusions from given premises—dominated the study of nature for millennia, with precious little to show for it. Francis Bacon hoped to change all that with his new doctrine of induction. Instead of beginning with premises (‘All men are mortal’), and reasoning to conclusions (‘Socrates is mortal’), the investigator must begin with experiences (‘Socrates died,’ ‘Plato died,’ etc.) and then generalize a conclusion (‘All men are mortal’). This was how science was to proceed: from the specific to the general.

This seemed all fine and dandy until, in 1739, David Hume published his Treatise of Human Nature, in which he explained his infamous ‘problem of induction.’ Here is the idea. If you see one, two, three… a dozen… a thousand… a million white swans, and not a single black one, it is still illogical to conclude “All swans are white.” Even if you investigated every swan in the world but one, and they all proved white, you still could not conclude with certainty that the last one would be white. Aside from modus tollens (using a negative particular to disprove a general statement), there is no logically justifiable way to proceed from the specific to the general. To this argument, many are tempted to respond: “But we know from experience that induction works. We generalize all the time.” Yet this is to use induction to prove that induction works, which is circular. Hume’s problem of induction has proven to be a stumbling block for philosophers ever since.

In the early part of the 20th century, the doctrine of logical positivism arose in the philosophical world, particularly in the ‘Vienna Circle’. This had many proponents and many forms, but the basic idea, as explained by A.J. Ayer, is the following. The meaning of a sentence is equivalent to its verification; and verification is performed through experience. Thus the sentence “The cat is on the mat” can be verified by looking at the mat; it is a meaningful utterance. But the sentence “The world is composed of mind” cannot be verified by any experience; it is meaningless. Using this doctrine the positivists hoped to eliminate all metaphysics. Unfortunately, however, the doctrine also eliminates human knowledge, since, as Hume showed, generalizations can never be verified. No experience corresponds, for example, to the statement “Gravitational attraction is proportional to the product of the masses and the inverse square of the distance between them,” since this is an unlimitedly general statement, and experiences are always particular.

Karl Popper’s falsificationism is meant to solve this problem. First, it is important to note that Popper is not, like the positivists, proposing a criterion of ‘meaning’. That is to say that, for Popper, unfalsifiable statements can still be meaningful; they just do not tell us anything about the world. Indeed, he continually notes how metaphysical ideas (such as Kepler’s idea that circles are more ‘perfect’ than other shapes) have inspired and guided scientists. This is itself an important distinction because it prevents him from falling into the same paradox as the positivists. For if only the statements with empirical content have meaning, then the statement “only the statements with empirical content have meaning” is itself meaningless. Popper, for his part, regarded himself as the enemy of linguistic philosophy and considered the problem of epistemology quite distinct from language analysis.

To return to falsification, Popper’s fundamental insight is that verification and falsification are not symmetrical. While no general statement can be proved using a specific instance, a general statement can indeed be disproved with a specific instance. A thousand white swans do not prove that all swans are white; but one black swan disproves it. (This is the aforementioned modus tollens.) All this may seem trivial; but as Popper realized, it changes the nature of scientific knowledge as we know it. For science, then, is far from what Bacon imagined it to be—a carefully sifted catalogue of experiences, a collection of well-founded generalizations—and is rather a collection of theories which spring up, as it were, from the imagination of the scientist in the hopes of uniting several observed phenomena under one hypothesis. Or to put it more bluntly: a good scientific theory is a guess that does not prove wrong.

With his central doctrine established, Popper goes on to the technicalities. He discusses what composes the ‘range’ or ‘scope’ of a theory, and how some theories can be said to encompass others. He provides an admirable justification for Occam’s Razor—the preference for simpler over more complex explanations—since theories with fewer parameters are more easily falsified and thus, in his view, more informative. The biggest section is given over to probability. I admit that I had some difficulty following his argument at times, but the gist of his point is that probability must be interpreted ‘objectively,’ as frequency distributions, rather than ‘subjectively,’ as degrees of certainty, in order to be falsifiable; and also that the statistical results of experiments must be reproducible in order to avoid the possibility of statistical flukes.

All this leads up to a strangely combative section on quantum mechanics. Popper apparently was in the same camp as Einstein, and was put off by Heisenberg’s uncertainty principle. Like Einstein, Popper was a realist and did not like the idea that a particle’s properties could be actually undetermined; he wanted to see the uncertainty of quantum mechanics as a byproduct of measurement or of ‘hidden variables’—not as representing something real about the universe. And like Einstein (though less famously) Popper proposed an experiment to decide the issue. The original experiment, as described in this book, was soon shown to be flawed; but a revised experiment was finally conducted in 1999, after Popper’s death. Though the experiment agreed with Popper’s prediction (showing that measuring an entangled photon does not affect its pair), it had no bearing on Heisenberg’s uncertainty principle, which forbids arbitrarily precise simultaneous measurements of conjugate properties of a single particle, not of a pair of particles.

Incidentally, it is difficult to see why Popper is so uncomfortable with the uncertainty principle. Given his own dogma of falsifiability, the belief that nature is inherently deterministic (and that probabilistic theories are simply the result of a lack of our own knowledge) should be discarded as metaphysical. This is just one example of how Popper’s personality was out of harmony with his own doctrines. An advocate of the open society, he was famously authoritarian in his private life, which led to his own alienation. This is neither here nor there, but it is an interesting comment on the human animal.

Popper’s doctrine, like all great ideas, has proven both influential and controversial. For my part I think falsification a huge advance over Bacon’s induction or the positivists’ verification. And despite the complications, I think that falsifiability is a crucial test to distinguish, not only science from pseudo-science, but all dependable knowledge from myth. For both pseudo-science and myth generally distinguish themselves by admirably fitting the data set, but resisting falsification. Freud’s theories, for example, can accommodate themselves to any set of facts we throw at them; likewise for intelligent design, belief in supernatural beings, or conspiracy theories. All of these seem to explain everything—and in a way they do, since they fit the observable data—but really explain nothing, since they can accommodate any new observation.

There are some difficulties with falsification, of course. The first is observation. For what we observe, or even what we count as an ‘observation’, is colored by our background beliefs. Whether to regard a dot in the sky as a plane, a UFO, or an angel is shaped by the beliefs we already hold; thus it is possible to disregard observations that run counter to our theories, rather than falsifying the theories. What is more, theories never exist in isolation, but in an entire context of beliefs; so if one prediction is definitively falsified, it can still be unclear what we must change in our interconnected edifice of theories. Further, it is rare for experimental predictions to agree exactly with results; usually they are approximately correct. But where do we draw the line between falsification and approximate correctness? And last, if we formulate a theory which withstands test after test, predicting their results with extreme accuracy time and again, must we still regard the theory as a provisional guess?

To give Popper credit, he responds to all of these points in this work, though perhaps not with enough discussion. But none of these criticisms alters the fact that so much of the philosophy of science written after Popper has taken his work as a starting point, either attempting to amplify, modify, or (dare I say it?) falsify his claims. For my part, though I was often bored by the dry style and baffled by the technical explanations, I found myself admiring Popper’s careful methodology: responding to criticisms, making fine distinctions, building up his system piece by piece. Here is a philosopher deeply committed to the ideal of rational argument and deeply engaged with understanding the world. I am excited to read more.

View all my reviews

Quotes & Commentary #60: Santayana

We read nature as the English used to read Latin, pronouncing it like English, but understanding it very well.

—George Santayana

This simile about the relation between human knowledge and material fact expresses a deep truth: to understand nature we must, so to speak, translate it into human terms.

All knowledge of the world must begin with sensations. All empirical knowledge derives, ultimately, from events we perceive with our five senses. But I think it is a mistake to take, as the phenomenalists do, these sensations for reality itself. To the contrary, I think that human experience is of a fundamentally different sort from material reality.

The relationship between my moving finger and the movement of the string I pluck is direct: cause and effect. The relationship that holds between the vibrations in the air caused by the guitar string and the sound of the guitar we perceive is, however, not so direct. For conscious sensations are not physical events. You cannot, even in principle, describe the subjective sensation of guitar music using physical terms like acceleration, mass, or charge.

The brain represents the physical stimulus it receives, transforming it into a sensation, much like a composer represents human emotions using notes, harmonies, and rhythms—that is, arbitrarily. There is no essential relationship between sadness and a minor melody; they are only associated through culture and habit. Likewise, the conscious perception of guitar strings is only associated with the vibrations in the air through consistent representation: every time the brain hears a guitar, it creates the same subjective sensation. But the fact remains that the vibrations and the sensation, if they could be compared, would have nothing in common, just as sadness and minor melodies have nothing in common.

I must pause here to note a partial exception. In his Essay Concerning Human Understanding, John Locke notoriously makes the distinction between primary and secondary qualities. The latter are things like color, taste, smell, and sound, which are wholly subjective; the former are things like size, position, number, and shape: qualities that are inherent in the object and independent of the perceiving mind. Berkeley criticized this distinction; he thought that all reality was sensation, and thus there was no basis for distinguishing primary from secondary—both exist only in human experience. Kant, on the other hand, thought that reality in-itself could not, in principle, be described using any terms from human experience; and thus primary and secondary qualities were both wholly subjective.

Yet I persist in thinking that Locke was rather close to the truth. But the point must be qualified. As Einstein showed, our intuitive notions of speed, position, time, and size are only approximately correct at the human scale, and break down in situations of extreme speed or gravity. And we have had the same experience with regard to quantum physics, discovering that even our notions of location and number can be wholly inaccurate on the smallest of scales. Besides these physical considerations, any anthropologist will be full of anecdotes of cultures that conceive of space and time differently; and psychologists will note that our perception of position and shape differs markedly from that of a rat or a bat, for example.

All this being granted, I think that Locke was right in distinguishing primary from secondary qualities. Indeed, this is simply the difference between quantifiable and unquantifiable qualities. By this I mean that a person could give an abstract representation of the various sizes and locations of objects in a room; but no such abstract representation could be given of a scent. The very fact that our notions of these primary qualities could be proven wrong by physicists proves that they are categorically distinct. A person may occasionally make a mistake in identifying a color or a scent, but all of humanity could never be wrong in that way. Scientists cannot, in other words, show us what red “really looks like,” in the same way that scientists can and have shown us how space really behaves.

Nevertheless, we have discovered, through rigorous experiment and hypothesis, that even these apparently “primary qualities”—supposedly independent of the perceiving mind—are really crude notions that are only approximately correct on the scale of human life. This is no surprise. We evolved these capacities of perception to navigate the world, not to imagine black holes or understand electrons. Thus even our most accurate perceptions of the world are only quasi-correct; and there is no reason why another being, adapted to different circumstances, might represent and understand the same facts quite differently.

It seems clear from this description that our sensations have only an indicative truth, not a literal one. We can rely on our sensations to navigate the world, but that does not mean they show us the direct truth. The senses are poets, as Santayana said, and show us reality dressed in allegory. We humans must use our senses, since that is all we have; but in the grand scheme of reality, what can be seen, heard, or touched may be only a minuscule portion of what really exists—and, as scientists have discovered, that is actually the case.

To put these discoveries to one side for a moment, there are other compelling reasons to suspect that sensations are not open windows to reality. One obvious reason is that any sensation, if too intense, becomes simply pain. Pressure, light, sound, or heat, while all separate feelings at normal intensities, all become pain when intensified beyond the tolerance of our bodies. But does anybody suspect that all reality becomes literal pain when too severe? When intensified still further, sensation ceases altogether with death. Yet are we to suppose that the stimulus of the fatal blow ceases, too, when it becomes unperceivable?

Of course, nobody makes these mistakes except phenomenalists. And when this is combined with other everyday experiences—such as our ability to increase our range of sight using microscopes and telescopes, or the ability of dogs to hear and smell things that humans cannot—it becomes very clear that our sensations, far from having any cosmic privilege, represent only a limited portion of reality, and do not represent the truth literally.

What we have discovered about the world since the scientific revolution only confirms this notion. Our senses were shaped by evolution to allow us to navigate in a certain environment. Thus, we can see only a small portion of the electromagnetic spectrum—a portion that strongly penetrates our atmosphere. Likewise with every other sense: each is calibrated to the sorts of intensities and stimuli that would aid us in our struggle to survive on the surface of the earth.

There is nothing superstitious, therefore, or even remarkable, in believing that the building blocks of reality are invisible to human sensation. Molecules, atoms, protons, quarks—all of these are essential components of our best physical theories, and thus have as much warrant to be believed as the sun and stars. At the human scale, of course, there is a strong epistemological difference: they form components of physical theories; and these theories help us to make sense of experience, rather than constitute experience itself.

But that does not make them any less real. Indeed, our notion of an atom may be closer to nature than our visible image of an apple, since we know for sure that the actual apple is not, fundamentally, as it appears to human sight, while our idea of atoms may indeed give a literally accurate view of nature. Indeed, the view of sensations that I have put forward virtually demands that the truth of nature, whatever it is, be remote from human experience, since human experience is not a literal representation of reality.

This leads to some awkwardness. For if scientific truth is to be abstract—a theorem or an equation remote from daily reality—then what makes it any better than a religious belief? Isn’t what separates scientific knowledge from superstitious fancy the fact that the former is empirical while the latter is not?

But this difficulty is only apparent. Santayana aptly summarized the difference thus: “Mythical thinking has its roots in reality, but, like a plant, touches the ground only at one end. It stands unmoved and flowers wantonly into the air, transmuting into unexpected and richer forms the substances it sucks from the soil.” That is to say that, though religious ideas may take their building blocks from daily life, the final product—the religious dogma—is not fundamentally about daily life; it is more like a poem that inspires our imaginations and may influence our lives, but is not literally borne out in lived experience.

A scientific theory, on the other hand, is borne out in this way: “Science is a bridge touching experience at both ends, over which practical thought may travel from act to act, from perception to perception.” Though a physical theory, for example, is never itself perceived—we never “see” Einstein’s relativity—using it leads to perceivable predictions, such as the deviation of a planet’s orbit. This is the basis of experiment and the essence of science itself. Indeed, I think that this is an essential quality of all valid human knowledge, scientific or not: that it is borne out in experience.

Like quantum physics, superstitious notions and supernatural doctrines all concern things that are, in principle, unperceivable; but the difference is that, in quantum physics, the unperceivable elements predict perceivable events with rigorous precision. Superstitious notions, though in principle they have empirical results, are usually whimsical in their operation. The devil may appear or he may not, and the theory of demonic interference does not tell us when, how, or why—which gives it no explanatory value. Supernatural notions, such as those concerning God or angels or heaven, are either reserved for another world, or their operation on this world is entirely too vague to be confirmed or falsified.

So long as the theory touches experience at both ends, so to speak, it is valid. The theory itself is not and cannot be tangible. The fact that our most accurate knowledge involves belief in unperceivable things, in other words, does not make it either metaphysical or supernatural. As Santayana said, “if belief in the existence of hidden parts and movements in nature be metaphysics, then the kitchen-maid is a metaphysician whenever she peels a potato.”

Richard Feynman made almost the same point when he observed that our notion of “inside” is really just a way of making sense of a succession of perceptions. We never actually perceive the “inside” of an apple, for example, since by slicing it all we do is create a new surface. This surface may, for all we know, pop into existence in that moment. But by imagining that there is an “inside” to the apple, unperceived but equally real, we make sense of an otherwise confusing sequence of perceptions. Scientific theories—and all valid knowledge in general—do essentially the same thing: they organize our experience by positing an unperceived, and unperceivable, structure to reality.

Thus humanity’s attempt to understand nature is very accurately compared to an Englishman reading Latin with a London accent. Though we muddle the form of nature through our perception and our conception, by paying attention to the regularities of experience we may learn to understand nature quite well.

Review: The Beautiful Brain

Review: The Beautiful Brain
Beautiful Brain: The Drawings of Santiago Ramon y Cajal

Beautiful Brain: The Drawings of Santiago Ramon y Cajal by Larry W. Swanson

My rating: 4 of 5 stars

Like the entomologist in pursuit of brightly colored butterflies, my attention hunted, in the flower garden of the gray matter, cells with delicate and elegant forms, the mysterious butterflies of the soul, the beating of whose wings may someday—who knows?—clarify the secret of mental life.

I love walking around cathedrals because they are sublime examples of vital art. I say “vital” because the art is not just seen, but lived through. Every inch of a cathedral has at least two levels of significance: aesthetic and theological. Beauty, in other words, walks hand in hand with a certain view of the world. Indeed, beauty is an essential part of this view of the world, and thus facts and feelings are blended together into one seamlessly intelligible whole: a philosophy made manifest in stone.

The situation that pertains today is quite different. It is not that our present view of the world is inherently less beautiful; but that the vital link between the visual arts and our view of the world has been severed. Apropos of this, I often think of one of Richard Feynman’s anecdotes. He once gave a tour of a factory to a group of artists, trying to explain modern technology to them. The artists, in turn, were supposed to incorporate what they learned into a piece for an exhibition. But, as Feynman notes, almost none of the pieces really had anything to do with the technology. Art and science had tried to make contact, and failed.

This is why I am so intrigued by the anatomical drawings of Santiago Ramón y Cajal. For here we see a successful unification, revealing the same duality of significance as in a cathedral: his drawings instruct and enchant at once.

Though relatively obscure in the anglophone world, Cajal is certainly one of the most important scientists in history. He is justly considered to be the father of neuroscience. Cajal’s research into the fine structures of the brain laid the foundation for the discipline. At a time when neurons were only a hypothesis, Cajal not only convinced the scientific world of their existence (as against the reticular theory), but documented several different types of neurons, describing their fine structure—nucleus, axon, and dendrites—and the flow of information within and between nerve cells.

As we can see in his Advice to a Young Investigator, Cajal in his adulthood became a passionate advocate for scientific research. But he did not always wish to be a scientist. As a child he was far more interested in painting; it was only the pressure of his father, a doctor, that turned him in the direction of research. And as this book shows, he never really gave up his artistic ambition; he only channelled it in another direction.

Research in Cajal’s day was far simpler. Instead of a team of scientists working with a high-powered MRI, we have the lonely investigator hunched over a microscope. The task was no easier for being simpler, however. Besides patience, ingenuity, and a logical mind—the traits of any good scientist—a microanatomist back then needed a prodigious visual acumen. The task was to see properly: to extract a sensible figure from the blurry and chaotic images under the microscope. To meet this challenge Cajal not only had to create new methods—staining the neurons to make them more visible—but to train his eye. And in both he proved a master.

He would often spend hours at the microscope, looking and looking without taking any notes. His analytic mind was not only at work during these periods, making guesses about cell functions and deductions about information flow, but also his visual imagination: he had to hold the cell’s form within his mind, see the cells in context and in isolation, since the fine details of their structure were highly suggestive of their behavior and purpose. His drawings were the final expression of his visual process: “A graphic representation of the object observed guarantees the exactness of the observation itself.” For Cajal, as for Leonardo da Vinci, drawing was a form of thinking.

Though long since superseded by subsequent research, Cajal’s drawings have maintained their appeal, both as diagrams and as works of art. With the aid of a short caption—ably provided by Eric Newman in this volume—the drawings spring to life as records of scientific research. They summarize complex processes, structures, and relations with brilliant clarity, making the essential point graspable in an instant.

Purely as drawings they are no less brilliant. The twisting and sprawling forms of neurons; the chaotic lattices of interconnected cells; the elegant architecture of our sensory organs—all this possesses an otherworldly beauty. The brain, such an intimate part of ourselves, is revealed to be intensely alien. One is naturally reminded of the surrealists by these dreamlike landscapes; and indeed Lorca and Dalí were both aware of Cajal’s work. Yet Cajal’s drawings are perhaps more fantastic than anything the surrealists ever produced, all the more bizarre for being true.

Even the names of these drawings wouldn’t be out of place in a modern gallery: “Cuneate nucleus of a kitten,” “Neurons in the midbrain of a sixteen-day-old trout,” “Axons in the Purkinje neurons in the cerebellum of a drowned man.” Science can be arrestingly poetic.

One of the functions of art is to help us to understand ourselves. The science of the brain, in a much different way, aims to do the same thing. It seems wholly right, then, that these two enterprises should unite in Cajal, the artistic investigator of our nervous system. And this volume is an ideal place to witness his accomplishment. The large, glossy images are beautiful. The commentary frames and explains, but does not distract. The essays on Cajal’s life and art are concise and incisive, and are supplemented by an essay on modern brain imaging that brings the book up to date. It is a cathedral of a book.

View all my reviews

Review: Advice to a Young Investigator

Review: Advice to a Young Investigator

Reglas y consejos sobre investigación científica. Los tónicos de la voluntad.

Reglas y consejos sobre investigación científica. Los tónicos de la voluntad. by Santiago Ramón y Cajal

My rating: 4 of 5 stars

Books, like people, we respect and admire for their good qualities, but we only love them for some of their defects.

Santiago Ramón y Cajal has a fair claim to being the greatest scientist to hail from Spain. I have heard him called the “Darwin of Neuroscience”: his research and discoveries are foundational to our knowledge of the brain. When he won the Nobel Prize in 1906 it was for his work using nerve stains to differentiate neurons. At the time, you see, the existence of nerve cells was still highly controversial; Camillo Golgi, with whom Ramón y Cajal shared the Nobel, was a supporter of the reticular theory, which held that the nervous system was one continuous object.

Aside from being an excellent scientist, Ramón y Cajal was also a man of letters and a passionate teacher. These three aptitudes combined to produce this charming book. Its prosaic title is normally translated into English—inaccurately but more appealingly—as Advice to a Young Investigator. The book originated as lectures delivered at the Real Academia de Ciencias Exactas, Físicas y Naturales in 1897 and published the next year by his colleague. The lectures consist of warm and frank advice to students embarking on a scientific career.

Ramón y Cajal is wonderfully optimistic when it comes to the scientific enterprise. Like the philosopher Susan Haack, he thinks that science follows no special logic or method, but is only based on sharpened common sense. Thus one need not be a genius to make a valuable contribution. Indeed, for him, intelligence is much overrated. Focus, dedication, and perseverance are what really separate the successes from the failures. He goes on to diagnose several infirmities of the will that prevent young and promising students from accomplishing anything in the scientific field. Among these is the megalófilo, a type exemplified by the character Casaubon in Middlemarch, who can never finish taking notes and doing research in time to actually write his book.

While much of Ramón y Cajal’s advice is timeless, this book is also very much of a time and a place. He advises his young students to buy their own equipment and to work at home—something that would be impractical today, not least because laboratory equipment has grown so much in complexity and expense. He even advises his students on finding the right wife (over-cultured women are to be avoided). More seriously, these lectures are marked by the crisis of 1898, when Spain lost the Spanish-American War and the feeling of cultural degeneration was widespread. Ramón y Cajal is painfully aware that Spain lagged behind the other Western countries in scientific research, and much of these lectures is aimed at alleviating specifically Spanish shortcomings.

On every one of these pages, Ramón y Cajal’s fierce dedication to the scientific enterprise, his conviction that science is noble, useful, and necessary, and his desire to see the spirit of inquiry spread far and wide are expressed with a pungent wit. The reader cannot fail to be infected with the same zeal to expand the bounds of human knowledge, and with admiration for such an exemplary scientist.

View all my reviews

Review: Opticks

Review: Opticks
Opticks

Opticks by Isaac Newton

My rating: 4 of 5 stars

My Design in this Book is not to explain the Properties of Light by Hypotheses, but to propose and prove them by Reason and Experiment

Newton’s masterwork is, unquestionably, his Principia. But it is neither an easy nor a pleasant book to read. Luckily, the great scientist wrote a far more accessible volume that is scarcely less important: the Opticks.

The majority of this text is given over to descriptions of experiments. To the modern reader—and I suspect to the historical reader as well—these sections are remarkably dry. In simple yet exact language, Newton painstakingly describes the setup and results of experiment after experiment, most of them conducted in his darkened chamber, with the window covered up except for a small opening to let in the sunlight. Yet even if this doesn’t make for a thrilling read, it is impossible not to be astounded at the depth of care, the keenness of observation, and the subtle brilliance Newton displays. Using the most basic equipment (his most advanced tool is the prism), Newton tweezes light apart, making an enormous contribution both to experimental science and to the field of optics.

At the time, the discovery that white light could be decomposed into a rainbow of colors, and that this rainbow could be recombined back into white light, must have seemed as momentous as the discovery of the Higgs boson. And indeed, even the modern reader might catch a glimpse of this excitement as she watches Newton carefully set up his prism in front of his beam of light, tweaking every variable, adjusting every parameter, measuring everything that could be measured, and describing in elegant prose everything that could not.

Whence it follows, that the colorifick Dispositions of Rays are also connate with them, and immutable; and by consequence, that all the Productions and Appearances of Colours in the World are derived, not from any physical Change caused in Light by Refraction or Reflexion, but only from the various Mixtures or Separations of Rays, by virtue of their different Refrangibility or Reflexibility. And in this respect the Science of Colours becomes a Speculation as truly mathematical as any other part of Opticks.

Because I had recently read Feynman’s QED, one thing in particular caught my attention. Here is the problem: When you have one surface of glass, even if most of the light passes through it, some of the light is reflected; and you can roughly gauge what portion of the light does one or the other. Let us say that on a typical surface of glass, 4% of the light is reflected. Now we add another surface of glass behind the first. According to common sense, 8% of the light should be reflected, right? Wrong. Now the amount of light which is reflected varies between 0% and 16%, depending on the distance between the two surfaces. This is truly bizarre; for it seems that the mere presence of a second surface of glass alters the reflectiveness of the first. But how does the light “know” there is a second surface of glass? It seems the light is somehow affected before it comes into contact with either surface.
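Out of curiosity, here is a minimal sketch of the arithmetic behind that 0% to 16% range, using the two-path amplitude picture from Feynman’s QED: each surface contributes a reflection amplitude of about 0.2 (4% on its own), and the two paths interfere with a phase that depends on the gap. The wavelength and the other particulars below are illustrative assumptions of mine, not figures from Newton or Feynman.

```python
import numpy as np

# Two-path interference sketch (after Feynman's QED, simplified):
# the front surface reflects with amplitude -0.2 (a half-cycle phase flip),
# the back surface with +0.2, delayed by the round trip across the gap d.
r = 0.2
wavelength = 500e-9               # green light, in meters (assumed)
k = 2 * np.pi / wavelength        # wavenumber

for d in np.linspace(0, wavelength / 2, 5):
    amplitude = -r + r * np.exp(1j * 2 * k * d)   # sum the two paths
    probability = abs(amplitude) ** 2
    print(f"gap {d * 1e9:6.1f} nm -> reflection {probability:.1%}")
```

The output runs from 0% at zero gap up to 16% at a quarter-wavelength and back down again, just as described; the common-sense 8% appears only at the crossover points.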

Newton was aware of this awkward problem, and he came up with his theory of “fits of easy reflection or transmission” to explain this phenomenon. But this “theory” was merely to say that the glass, for some unknown reason, sometimes lets light through, and sometimes reflects it. In other words, it was hardly a theory at all.

Every Ray of Light in its passage through any refracting Surface is put into a certain transient Constitution or State, which in the progress of the Ray returns at equal Intervals, and disposes the Ray at every return to be easily transmitted through the next refracting Surface, and between the returns to be easily reflected by it.

Also fascinating to the modern reader is the strange dual conception of light as waves and as particles in this work, which cannot help but remind us of the quantum view. The wave theory makes it easy to account for the different refrangibility of the different colors of light (i.e. the different colors refract at different angles in a prism).

Do not several sorts of Rays make Vibrations of several bignesses, which according to their bignesses excite Sensations of several Colours, much after the manner that the Vibrations of the Air, according to their several bignesses excite Sensations of several sounds. And particularly do not the most refrangible Rays excite the shortest Vibrations for making a Sensation of deep violet, the least refrangible the largest for making a Sensation of deep red, and the several intermediate bignesses to make Sensations of the several intermediate Colours?

To this notion of vibrations, Newton adds the “corpuscular” theory of light, which held (in opposition to his contemporary, Christiaan Huygens) that light was composed of small particles. This theory must have been attractive to Newton because it fit into his previous work in physics. It explained why beams of light, like other solid bodies, travel in straight lines (cf. Newton’s first law), and reflect off surfaces at angles equal to their angles of incidence (cf. Newton’s third law).

Are not the Rays of Light very small Bodies emitted from shining Substances? For such Bodies will pass through uniform Mediums in right Lines without bending into the shadow, which is the Nature of the Rays of Light. They will also be capable of several Properties, and be able to conserve their Properties unchanged in passing through several Mediums, which is another condition of the Rays of Light.

As a side note, despite some problems with the corpuscular theory of light, it came to be accepted for a long while, until the phenomenon of interference gave seemingly decisive weight to the wave theory. (Light, like water waves, will interfere with itself, creating characteristic patterns; cf. the famous double-slit experiment.) The wave theory was reinforced with Maxwell’s equations, which treated light as just another electro-magnetic wave. It was, in fact, Einstein who brought back the viability of the corpuscular theory, when he suggested the idea that light might come in packets to explain the photoelectric effect. (Blue light, when shined on certain metals, will cause an electric current, while red light will not. Why not?)

All this tinkering with light is good fun. But the real treat, at least for the lay reader, comes in the final section, where Newton speculates on many of the unsolved scientific problems of his day. His mind is roving and vast; and even if most of his speculations have turned out to be incorrect, it is stunning simply to witness him at work. Newton realizes, for example, that radiation can travel without a medium (like air), and can heat objects even in a vacuum. (And thank goodness for that, for how else would the earth be warmed by the sun?) But from this fact he incorrectly deduces that there must be some more subtle medium that remains (like the famous ether).

If in two large tall cylindrical Vessels of Glass inverted, two little Thermometers be suspended so as not to touch the Vessels, and the Air be drawn out of one of these Vessels, and these Vessels thus prepared be carried out of a cold place into a warm one; the Thermometer in vacuo will grow warm as much, and almost as soon as the Thermometer that is not in vacuo. And when the Vessels are carried back into the cold place, the Thermometer in vacuo will grow cold almost as soon as the other Thermometer. Is not the Heat of the warm Room convey’d through the Vacuum by the Vibrations of a much subtiler Medium than Air, which after the Air was drawn out remained in the Vacuum?

Yet for all Newton’s perspicacity, the most touching section was a list of questions Newton asks, as if to himself, that he cannot hope to answer. It seems that even the most brilliant among us are stunned into silence by the vast mystery of the cosmos:

What is there in places almost empty of Matter, and whence is it that the Sun and Planets gravitate towards one another, without dense Matter between them? Whence is it that Nature doth nothing in vain; and whence arises all that Order and Beauty which we see in the World? To what end are Comets, and whence is it that Planets move all one and the same way in Orbs concentrick, while Comets move all manner of ways in Orbs very excentrick; and what hinders the fix’d Stars from falling upon one another? How came the Bodies of animals to be contrived with so much Art, and for what ends were their several Parts? Was the Eye contrived without Skill in Opticks, and the Ear without Knowledge of Sounds? How do the Motions of the Body follow from the Will, and whence is the Instinct in Animals?

View all my reviews

Review: Aristotle’s Physics

Review: Aristotle’s Physics

Physics

Physics by Aristotle

My rating: 4 of 5 stars

Of all the ancient thinkers that medieval Christians could have embraced, it always struck me as pretty remarkable that Aristotle was chosen. Of course, ‘chosen’ isn’t the right word; rather, it was something of a historical coincidence, since Aristotle’s works were available in Latin translation, while those of Plato were not.

Nonetheless, Aristotle strikes me as a particularly difficult thinker to build a monotheistic worldview around. There is simply nothing mystical about him. His feet are planted firmly on the ground, and his eyes are level with the horizon. Whereas mystics see the unity of everything, Aristotle divides up the world into neat parcels, providing lists of definitions and categories wherever he turns. Whereas mystics tend to scorn human knowledge, Aristotle was apparently very optimistic about the potential reach of the human mind—since he so manifestly did his best to know everything.

The only thing that I can find remotely mystical is Aristotle’s love of systems. Aristotle does not like loose ends; he wants his categories to be exhaustive, and his investigations complete. And, like a mystic, Aristotle is very confident about the reach of a priori knowledge, while his investigations of empirical reality—though admittedly impressive—are paltry in comparison with his penchant for logical deduction. At the very least, Aristotle is wont to draw many more conclusions from a limited set of observations than most moderns are comfortable with.

I admit, in the past I have had a hard time appreciating his writing. His style was dry; his arguments, perfunctory. I often wondered: What did so many people see in him? His tremendous influence seemed absurd after one read his works. How could he have seemed so convincing for so long?

I know from experience that when I find a respected author ludicrous, the fault is often my own. Seeking a remedy, I decided that I would read more Aristotle; more specifically, I would read enough Aristotle until I learned to appreciate him. In the words of Stephen Stills, “If you can’t be with the one you love, love the one you’re with.” I decided I would stick with Aristotle until I loved him. I still don’t love Aristotle; but, after reading this book, I have a much deeper respect for the man. For this book really is remarkable.

Hardly a sentence in this book can be accepted as accurate. In fact, from our point of view, Aristotle’s project was doomed from the start. He is investigating physical reality, but is doing so without conducting experiments; in other words, his method is purely deductive, starting from a few assumptions, most of which are wrong. Much of what Aristotle says might even seem silly—such as his dictum that “we always assume the presence in nature of the better.” Another great portion of this work is taken up by thoroughly uninteresting and unconvincing investigations, such as the definitions of ‘together’, ‘apart’, ‘touch’, ‘continuous’, and all of the different types of motions—all of which seem products of a pedantic brain rather than qualities of nature.

But the good in this work far outweighs the bad. For Aristotle commences the first intellectually rigorous investigations (the first, at least, so far as I know) of the basic properties of nature—space, time, cause, motion, and the origins of the universe. I find Aristotle’s inquiry into time particularly fascinating, for I am not aware of any comparably meticulous investigations of time by later philosophers.

I was particularly impressed with Aristotle’s attempt to overcome Zeno’s paradoxes (a series of thought experiments which ‘prove’ that motion and change are impossible). Aristotle defines and re-defines time—struggling with how it can be divided, and with the exact nature of the present moment—and tries many different angles of attack. What is even more interesting, Aristotle fails in his task, and even falls into Zeno’s intellectual trap by unwittingly accepting Zeno’s assumptions.
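For contrast, the standard modern escape from Zeno’s trap (an escape unavailable to Aristotle) is the observation that infinitely many steps can sum to a finite quantity. A quick illustration of my own, not anything in the Physics:

```python
# Zeno's dichotomy: to cross a room you must first cover half the distance,
# then half the remainder, and so on -- infinitely many steps. But the steps
# form a geometric series whose partial sums converge to a finite total.
total = 0.0
for n in range(1, 51):
    total += 0.5 ** n             # 1/2 + 1/4 + 1/8 + ...
print(f"sum of the first 50 steps: {total:.15f}")   # just under 1
```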

Aristotle’s attempts to tackle space were almost equally fascinating. Once again we see the magnificent mind of Aristotle struggling to define something of the highest degree of abstractness. In fact, I challenge anyone reading this to come up with a good definition of space. It’s hard. The paradox (at least, the apparent paradox) is that space has some qualities of matter—extension, volume, dimensions—without having any mass. It seems, at first sight at least, like empty space should be simply nothing, yet space itself has certain definite qualities—and anything that has qualities is, by definition, something. However, these qualities only emerge when one imagines a thing in space, for we never, in our day to day lives, encounter space itself, devoid of all content. But how could something with no mass have the quality of extension?

Aristotle also displays an admirable—though perhaps naïve—tendency to trust experience. His refutation of the thinkers who argue that (a) everything is always in motion, or (b) everything is always at rest, is merely to point out that day-to-day experience refutes both. And Aristotle at least knows—since it is so remarkably obvious to those with eyes—that Zeno must have committed some error. Even if his attacks on the paradoxes do not succeed, therefore, one can at least praise the effort.

To the student of modern physics, this book may present some interesting contrasts. We have learned, through painstaking experience, that the most productive questions to ask of nature begin with “how” rather than “why.” Of course, the two words are often interchangeable; but notice that “why” attributes a motive to something, whereas “how” is motiveless.

Aristotle seeks to understand nature in the same way that one might understand a friend. In a word, he seeks teleological explanations. He assumes both that nature works with a purpose, and that the workings of nature are roughly accessible to common sense, with some logical rigor thrown in. On its face, this is not necessarily a bad assumption. Indeed, it took a lot of time for us humans to realize it was incorrect. In any case, it must be admitted that Aristotle at least seeks to understand far more than we moderns do; for Aristotle seeks, so to speak, to get inside the ‘mind’ of nature, understanding the purpose for everything.

Perhaps now I can see what the medieval Christians found in Aristotle. The assumption that nature works with a purpose certainly meshes well with the belief in an omnipotent creator God. And the assumption that knowledge is accessible through common sense and simple logical deductions is reasonable if one believes that the world was created for us. To the modern reader, the Physics might be far less impressive than to the medievals. But it is always worthwhile to witness the inner workings of such a brilliant mind; and, of all the Aristotle I have so far read, none so clearly shows his thought process, his mind at work, as this.

View all my reviews

Review: Dialogue Concerning the Two Chief World Systems

Review: Dialogue Concerning the Two Chief World Systems

Dialogue Concerning the Two Chief World Systems

Dialogue Concerning the Two Chief World Systems by Galileo Galilei

My rating: 4 of 5 stars

I should think that anyone who considered it more reasonable for the whole universe to move in order to let the earth remain fixed would be more irrational than one who should climb to the top of your cupola just to get a view of the city and its environs, and then demand that the whole countryside should revolve around him so that he would not have to take the trouble to turn his head.

It often seems hard to justify reading old works of science. After all, science continually advances; pioneering works today will be obsolete tomorrow. As a friend of mine said when he saw me reading this, “That shit’s outdated.” And it’s true: this shit is outdated.

Well, for one thing, understanding the history of the development of a theory often aids in the understanding of the theory. Look at any given technical discipline today, and it’s overwhelming; you are presented with such an imposing edifice of knowledge that it seems impossible. Yet even the largest oak was once an acorn, and even the most frightening equation was once an idle speculation. Case in point: Achieving a modern understanding of planetary orbits would require mastery of Einstein’s theories—no mean feat. Flip back the pages in history, however, and you will end up here, at this delightful dialogue by a nettlesome Italian scientist, as accessible a book as ever you could hope for.

This book is rich and rewarding, but for some unexpected reasons. What will strike most modern readers, I suspect, is how plausible the Ptolemaic worldview appears in this dialogue. To us alive today, who have seen the earth in photographs, the notion that the earth is the center of the universe seems absurd. But back then, it was plain common sense, and for good reason. Galileo’s fictional Aristotelian philosopher, Simplicio, puts forward many arguments for the immobility of the earth, some merely silly, but many very sensible and convincing. Indeed, I often felt like I had to take Simplicio’s side, as Galileo subjects the good Ptolemaic philosopher to much abuse.

I’d like to think that I would have sensed the force of the Copernican system if I were alive back then. But really, I doubt it. If the earth was moving, why wouldn’t things you throw into the air land to the west of you? Wouldn’t we feel ourselves in motion? Wouldn’t cannonballs travel much further one way than another? Wouldn’t we be thrown off into space? Galileo’s answer to all of these questions is the principle of inertia: all inertial (non-accelerating) frames of reference are equivalent. That is, an experiment will look the same whether it’s performed on a ship at constant velocity or on dry land.

(In reality, the surface of the earth is non-inertial, since it is undergoing acceleration due to its constant spinning motion. Indeed the only reason we don’t fly off is because of gravity, not because of inertia as Galileo argues. But for practical purposes the earth’s surface can be treated as an inertial reference frame.)
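To make the principle concrete, here is a back-of-the-envelope sketch, with numbers of my own invention, of why a ball tossed straight up on a uniformly moving ship lands back at the sailor’s feet:

```python
# Galilean invariance, back of the envelope: a ball tossed straight up from
# the deck of a ship moving at constant velocity shares the ship's horizontal
# velocity, so ball and deck cover the same ground while the ball is aloft.
g = 9.81        # m/s^2, gravitational acceleration
v_ship = 10.0   # m/s, ship's constant horizontal speed (assumed)
v_up = 5.0      # m/s, vertical speed of the toss (assumed)

t_flight = 2 * v_up / g          # time to go up and come back down
x_ball = v_ship * t_flight       # horizontal distance covered by the ball
x_deck = v_ship * t_flight       # horizontal distance covered by the deck

print(f"flight time: {t_flight:.2f} s")
print(f"landing offset on deck: {x_ball - x_deck:.2f} m")   # 0.00 m
```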

Because this simple principle is the key to so many of Galileo’s arguments, the final section of this book is trebly strange. In the last few pages of this dialogue, Galileo triumphantly puts forward his erroneous theory of the tides as if it were the final nail in Ptolemy’s coffin. Galileo’s theory was that the tides were caused by the movement of the earth, like water sloshing around a bowl on a spinning Lazy Susan. But if this was what really caused the tides, then Galileo’s principle of inertia would fall apart; since if the earth’s movement could move the oceans, couldn’t it also push us humans around? It’s amazing that Galileo didn’t mind this inconsistency. It’s as if Darwin ended On the Origin of Species with an argument that ducks were the direct descendants of daffodils.

Yet for all the many quirks and flaws in this work, for all the many digressions—and there are quite a few—it still shines. Galileo is a strong writer and a superlative thinker; following the train of his thought is an adventure in itself. But of course this work, like all works of science, is not ultimately about the mind of one man; it is about the natural world. And if you are like me, this book will make you think of the sun, the moon, the planets, and the stars in the sky; will remind you that your world is spinning like a top, and that the very ground we stand on is flying through the dark of space, shielded by a wisp of clouds; and that the firmament up above, something we often forget, is a window into the cosmos itself—you will think about all this, and decide that maybe this shit isn’t so outdated after all.

View all my reviews