Review: Sidereal Messenger

Sidereus Nuncius, or The Sidereal Messenger by Galileo Galilei

My rating: 4 of 5 stars

A most excellent and kind service has been performed by those who defend from envy the great deeds of excellent men and have taken it upon themselves to preserve from oblivion and ruin names deserving of immortality.

This book (more of a pamphlet, really) is proof that you do not need to write many pages to make a lasting contribution to science. For it was in this little book that Galileo set forth his observations made through his newly improved telescope. In 50-odd pages, with some accompanying diagrams and etchings, Galileo quickly asserts the roughness of the Moon’s surface, avers the existence of many more stars than can be seen with the naked eye, and—the grand climax—announces the existence of the moons of Jupiter. Suddenly the universe seemed far bigger, and stranger, than it had before.

The actual text of Sidereus Nuncius does not make for exciting reading. To establish his credibility, Galileo includes a blow-by-blow account of his observations of the moons of Jupiter, charting their nightly appearance. The section on our Moon is admittedly more compelling, as Galileo describes the irregularities he observed as the sun passed over its surface. Even so, this edition is immeasurably improved by the substantial commentary provided by Albert Van Helden, who gives us the necessary historical background to understand why the book was so controversial, and who charts the aftermath of its publication.

Though Galileo is sometimes mistakenly credited with inventing the telescope, spyglasses were widely available at the time; what Galileo did was improve his telescope far beyond the magnification commonly available. The result was that, for a significant span of time, Galileo was the only person on the planet with the technology to closely and accurately observe the heavens. The advantage was not lost on him, and he made sure that he published before he got scooped. In another shrewd move, he named the newly-discovered moons of Jupiter after the Grand Duke Cosimo II and his brothers, for which they were known as the Medicean Stars (back then, the term “star” meant any celestial object). This earned him patronage and protection.

Galileo’s findings were controversial because none of them aligned with the predictions of Aristotelian physics and Ptolemaic astronomy. According to the accepted view, the heavens were pure and incorruptible, devoid of change or imperfection. Thus it was jarring to find the moon’s surface bumpy, scarred, and mountainous, just like Earth’s. Even more troublesome were the Galilean moons. In the orthodox view the Earth was the only center of orbit; and one of the strongest objections against Copernicus’s system was that it included two centers, the Sun and the Earth (for the Moon). Galileo’s finding of an additional center of orbit meant that this objection ceased to carry any weight, since in any case we must posit multiple centers. Understandably there was a lot of skepticism at first, with some scholars doubting the efficacy of Galileo’s new instrument. But as other telescopes caught up with Galileo’s, and new anomalies were added to the mix—the phases of Venus and the odd shape of Saturn—his observations achieved widespread acceptance.

Though philosophers and historians of science often emphasize the advance of theory, I find this text a compelling example of the power of pure observation. For Galileo’s breakthrough relied, not on any new theory, but on new technology, extending the reach of his senses. He had no optical theory to guide him as he tinkered with his telescope, relying instead on simple trial-and-error. And though theory plays a role in any observation, some of Galileo’s findings—such as that the Milky Way is made of many small stars clustered together—are as close to simple acts of vision as possible. Even if Copernicus’s theory had not been available as an alternative paradigm, it seems likely to me that advances in the power of telescopes would have thrown the old worldview into a crisis. This goes to show that observational technology is integral to scientific progress.

It is also curious to note the moral dimension of Galileo’s discovery. Now, the Ptolemaic system is commonly lambasted as narcissistically anthropocentric, placing humans at the center of it all. Yet it is worth pointing out that, in the Ptolemaic system, the heavens are regarded as pure and perfect, and everything below the moon as corruptible and imperfect (from which we get the term “sublunary”). Indeed, Dante placed the circles of paradise on the moon and the planets. So arguably, by making Earth the equal of the other planets, the new astronomy actually raised the dignity of our humble abode. In any case, I think that it is simplistic to characterize the switch from geocentricity to heliocentricity as a tale of declining hubris. The medieval Christians were hardly swollen with pride by their cosmic importance.

As you can see, this is a fascinating little volume that amply rewards the little time spent reading it. Van Helden has done a terrific job in making this scientific classic accessible.

Review: Almagest

The Almagest: Introduction to the Mathematics of the Heavens by Ptolemy

… it is not fitting even to judge what is simple in itself in heavenly things on the basis of things that seem to be simple among us.

In my abysmal ignorance, I had for years assumed that tracking the orbits of the sun and planets would be straightforward. All you needed was a starting location, a direction, and the daily speed—and, with some simple arithmetic and a bit of graph paper, it would be clear as day. Attempting to read Ptolemy has revealed the magnitude of my error. Charting the heavenly bodies is a deviously complicated affair; and Ptolemy’s solution must rank as one of the greatest intellectual accomplishments of antiquity—fully comparable with the great scientific achievements of the European Enlightenment. Indeed, Otto Neugebauer, the preeminent scholar of ancient astronomy, went so far as to say:

One can perfectly well understand the ‘Principia’ without much knowledge of earlier astronomy but one cannot read a single chapter in Copernicus or Kepler without a thorough knowledge of Ptolemy’s “Almagest”. Up to Newton all astronomy consists in modifications, however ingenious, of Hellenistic astronomy.

With more hope than sense, I cracked open my copy of The Great Books of the Western World, which has a full translation of the Almagest in the 16th volume. Immediately repulsed by that text, I acquired instead a students’ edition of the book published by the Green Lion Press. This proved to be an excellent choice. Through introductions, preliminaries, footnotes, and appendices—not to mention generous omissions—this edition attempts to make Ptolemy accessible to a diligent college student. Even so, someone with my background would still require months of dedicated study, with a teacher as a guide, to attain a thorough knowledge of this text. For the text is difficult in numerous ways.

Most obviously, this book is full of mathematical proofs and calculations, which are not exactly my strong suit. Ptolemy’s mathematical language—relying on the Greek geometrical method—will be unfamiliar to students who have not read some Euclid; and even if it is familiar, it proves cumbrous for the sorts of calculations demanded by the subject. To make matters worse, Ptolemy employs the sexagesimal system (based on multiples of 60) for fractions; so all his numbers must be converted into our decimals for calculation (a conversion sketched in the short snippet below). What is more, even the months bear their Egyptian names (Thoth, Phaöphi, Athur, etc.), since Ptolemy was an Alexandrian Greek. Yet even if we put these technical obstacles to the side, we are left with Ptolemy’s oddly infelicitous prose, which the translator describes thus:

In general, there is a sort of opacity, even awkwardness, to Ptolemy’s writing, especially when he is providing a larger frame for a topic or presenting a philosophical discussion.

Thus, even in the non-technical parts of the book, Ptolemy’s writing tends to be headache-inducing. All this combines to form an unremitting slog. Since my interest in this book was amateurish, I skimmed and skipped liberally. Yet this text is so rich that, even proceeding in such a dilettantish fashion, I managed to learn a great deal.
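
As for that sexagesimal arithmetic: converting one of Ptolemy’s base-60 fractions into a decimal is mercifully simple. Here is a minimal sketch in Python (the function is my own illustration, not anything in the text; the example value, 23;51,20, is Ptolemy’s figure for the obliquity of the ecliptic):

```python
def sexagesimal_to_decimal(whole, *fractions):
    """Convert a base-60 number such as 23;51,20 into a decimal.
    Each successive fractional place is worth a power of 1/60."""
    return whole + sum(d / 60 ** (i + 1) for i, d in enumerate(fractions))

# Ptolemy's obliquity of the ecliptic, 23;51,20 degrees:
print(sexagesimal_to_decimal(23, 51, 20))  # -> 23.85555...
```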

Ptolemy’s Almagest, like Euclid’s Elements, proved so comprehensive and conclusive when it was published that it rendered nearly all previous astronomical work obsolete or superfluous. For this reason, we know little about Ptolemy’s predecessors, since there was little point in preserving their work after Ptolemy summed it up in such magnificent fashion. As a result it is unclear how much of this book is original and how much is simply adapted. As Ptolemy himself admits, he owes a substantial debt to the astronomer Hipparchus, who lived around 200 years earlier. Yet it seems that the method of accounting for the planets’ positions and speeds, which he puts forth in the later books, originated with Ptolemy.

Ptolemy begins by explaining the method by which he will measure chords; this leads him to construct one of the most precise trigonometric tables of antiquity. Later, Ptolemy goes on to produce several proofs in spherical trigonometry, which allow him to measure distances on the inside of a sphere, making this book an important source for Greek trigonometry as well as astronomy. Ptolemy also employs Menelaus’ Theorem, which likewise uses the fixed proportions of a triangle to establish ratios. From this I see that triangles are marvelously useful shapes, since they are the only rigid polygon—that is, the angles cannot be altered without also changing the ratio of the sides, and vice versa. This is also, by the way, what makes triangles such strong structural components.
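
For the curious, Ptolemy’s chord function is easy to relate to the modern sine (the notation here is mine, not his). He tabulated chords in a circle of radius 60, so that

\[
\operatorname{crd}\theta = 2R\sin\frac{\theta}{2}, \qquad R = 60,
\]

which makes his chord table, in effect, a sine table in disguise.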

Ptolemy gets down to business in analyzing the sun’s motion. This is tricky for several reasons. For one, the sun does not travel parallel to the “fixed stars” (so called because the stars do not change position relative to one another), but rather at an angle, which Ptolemy calculates to be around 23 degrees. We now know this is due to earth’s axial tilt, but for Ptolemy it was the obliquity of the ecliptic. Also, the angle at which the sun travels through the sky is determined by one’s latitude; this also determines the seasonal shifts in day-length; and during these shifts, the sun rises at different points on the horizon. To add to these already daunting variables, the sun also shifts in speed during the course of the year. And finally, Ptolemy had to factor in the precession of the equinoxes—the equinoctial points’ gradual westward drift along the ecliptic from year to year.

The planets turn out to be even more complex. For they all exhibit anomalies in their orbits which entail further complications. Venus, for example, not only speeds up and slows down, but also seems to go forwards and backwards along its orbit. This leads Ptolemy to the adoption of epicycles—little circles which travel along the greater circle, called the “deferent,” of the planet’s orbit. But to preserve the circular motion of the deferent, Ptolemy must place its center (called the “eccentric”) away from earth. Then, Ptolemy introduces another imaginary circle, around which the planet travels with constant velocity: the center of this circle is called the “equant.” Thus the planet’s motion was circular around one point (the eccentric) and constant around another (the equant), neither of which coincides with earth. In addition to all this, the orbit of Venus is not exactly parallel with the sun’s orbit, but tilted, and its tilt wobbles throughout the year. For Ptolemy to account for all this using only the most primitive instruments, and without the use of calculus or analytic geometry, is an extraordinary feat of patience, vision, and drudgery.
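
To make the geometry concrete, here is a minimal sketch of the deferent-and-epicycle construction in Python. The radii and periods are illustrative placeholders (not Ptolemy’s parameters), and I omit the eccentric and equant entirely:

```python
import math

def apparent_position(t, R=60.0, r=40.0, T_def=365.25, T_epi=584.0):
    """Toy deferent-and-epicycle model: the epicycle's center rides the
    deferent with period T_def, while the planet rides the epicycle
    with period T_epi. All parameters are illustrative, not Ptolemy's."""
    a = 2 * math.pi * t / T_def  # angle of the epicycle's center on the deferent
    b = 2 * math.pi * t / T_epi  # angle of the planet on the epicycle
    return (R * math.cos(a) + r * math.cos(b),
            R * math.sin(a) + r * math.sin(b))

# Sampled over a few years, the path periodically loops back on itself,
# reproducing the apparent retrograde motion the epicycle was invented
# to explain.
path = [apparent_position(day) for day in range(0, 3 * 365, 5)]
```

The curve this traces is not a circle but the “wavy shape” discussed further below.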

Even after writing all this, I am not giving a fair picture of the scope of Ptolemy’s achievement. This book also includes an extensive star catalogue, with the location and brightness of over one thousand stars. He argues strongly for earth’s sphericity and even offers a calculation of earth’s diameter (which was 28% too small). Ptolemy also calculates the distance from the earth to the moon, using the lunar parallax (the difference in the moon’s appearance when seen from different positions on earth), which comes out to the quite accurate figure of 59 earth radii. And all of this is set forth in dry, sometimes baffling prose, accompanied by pages of proofs and tables. One can see why later generations of astronomers thought there was little to add to Ptolemy’s achievement, and why Arabic translators dubbed it “the greatest” (from which we get the English name).
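
The parallax argument is simple enough to check on the back of an envelope. Taking a round modern figure of about 57 arcminutes for the moon’s horizontal parallax (my number, not Ptolemy’s data), the distance follows from a single right triangle:

\[
d \approx \frac{R_\oplus}{\sin p} \approx \frac{R_\oplus}{\sin 57'} \approx 60\,R_\oplus,
\]

remarkably close to Ptolemy’s 59 earth radii.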

A direct acquaintance with Ptolemy belies his popular image as a metaphysical pseudo-scientist, foolishly clinging to a geocentric model, using ad-hoc epicycles to account for deviations in his theories. To the contrary, Ptolemy scarcely ever touches on metaphysical or philosophical arguments, preferring to stay in the precise world of figures and proofs. And if science consists in predicting phenomena, then Ptolemy’s system was clearly the best scientific theory around for its range and accuracy. Indeed, a waggish philosopher might dismiss the whole question of whether the sun or the earth was at the “center” as entirely metaphysical (is it falsifiable?). Certainly it was not mere prejudice that kept Ptolemy’s system alive.

Admittedly, Ptolemy does occasionally include airy metaphysical statements:

We propose to demonstrate that, just as for the sun and moon, all the apparent anomalistic motions of the five planets are produced through uniform, circular motions; these are proper to the nature of what is divine, but foreign to disorder and variability.

Yet notions of perfection seem hard to justify, even within Ptolemy’s own theory. For the combined motion of the deferent and the epicycle does not make a circle, but a wavy shape called an epitrochoid. And the complex world of interlocking, overlapping, slanted circles—centered on imaginary points, riddled with deviations and anomalies—hardly fits the stereotypical image of an orderly Ptolemaic world.

It must be said that Ptolemy’s system, however comprehensive, does leave some questions tantalizingly unanswered. For example, why do Mercury and Venus stay within a definite distance from the sun, and travel along at the same average speed as the sun? And why are the anomalies of the “outer planets” (Mars, Jupiter, Saturn) sometimes related to the sun’s motion, and sometimes not? All this is very easy to explain in a heliocentric model, but rather baffling in a geocentric one; and Ptolemy does not even attempt an explanation. Even so, I think any reader of this volume must come to the conclusion that this is a massive achievement—and a lasting testament to the heights of brilliance and obscurity that a single mind can reach.

Review: Autobiography (Darwin)

The Autobiography of Charles Darwin, 1809–82 by Charles Darwin

My rating: 4 of 5 stars

I have attempted to write the following account of myself, as if I were a dead man in another world looking back at my own life. Nor have I found this difficult, for life is nearly over with me. I have taken no pains about my style of writing.

This is the quintessential scientific autobiography, a brief and charming book that Darwin wrote “for nearly an hour on most afternoons” for a little over two months. Originally published in 1887—five years after the naturalist’s death—it was somewhat censored, the more controversial religious opinions being taken out. It was only in 1958, to celebrate the centennial of The Origin of Species, that the full version was restored, edited by one of Darwin’s granddaughters, Nora Barlow.

The religious opinions that Darwin expresses are, nowadays, not enough to raise eyebrows. In short, his travels and his research slowly eroded his faith until all that remained was an untroubled agnosticism. What is interesting is that Darwin attributes to this loss of faith a further loss of sensitivity to grand natural scenes; apparently, in later life he found himself unable to experience the sublime. His scientific work, meanwhile, caused him to lose his appreciation for music, pictures, and poetry, which he heartily regrets: “My mind seems to have become a kind of machine for grinding general laws out of large collections of facts,” he says, and attributes to this the fact that “for many years I cannot endure to read a line of poetry.”

The most striking and lovable of Darwin’s qualities is his humility. He notes his lack of facility with foreign languages (which partially caused him to refuse Marx’s offer to dedicate Kapital to him), his terrible ear for music, his difficulty with writing, his incompetence in mathematics, and repeatedly laments his lack of higher aesthetic sensitivities. His explanation for his great scientific breakthrough is merely a talent for observation and dogged persistence. He even ends the book by saying: “With such moderate abilities as I possess, it is truly surprising that I should have influenced to a considerable extent the belief of scientific men on some important points.” It is remarkable that such a modest and retiring man should have stirred up one of the greatest revolutions in Western thought. Few thinkers have been more averse to controversy.

This little book also offers some reflection on the development of his theory—with the oft-quoted paragraph about reading Malthus—as well as several good portraits of contemporary thinkers. But the autobiography is not nearly as full as one might expect, since Darwin skips over his voyage on the Beagle (he had already written an excellent book about it) and since the second half of his life was extremely uneventful. For Darwin developed a mysterious ailment that kept him mostly house-bound, so much so that he did not even go to his father’s funeral. The explanation eluded doctors in his time and has resisted firm diagnosis ever since. But the consensus seems to be that it was at least in part psychological. It did give Darwin a convenient excuse to avoid society and focus on his work.

The final portrait which emerges is that of a scrupulous, methodical, honest, plainspoken, diffident, and level-headed fellow. It is easy to imagine him as a retiring uncle or a reserved high school teacher. That such a man, through a combination of genius and circumstance—and do not forget that he almost did not go on that famous voyage—could scandalize the public and make a fundamental contribution to our picture of the universe, is perhaps the greatest argument that ever was against the eccentric genius trope.

Review: The Structure of Scientific Revolutions

The Structure of Scientific Revolutions by Thomas S. Kuhn

My rating: 5 of 5 stars

Observation and experience can and must drastically restrict the range of admissible scientific belief, else there would be no science. But they cannot alone determine a particular body of such belief. An apparently arbitrary element, compounded of personal and historical accident, is always a formative ingredient of the beliefs espoused by a given scientific community at a given time.

This is one of those wonderfully rich classics, touching on many disparate fields and putting forward ideas that have become permanent fixtures of our mental furniture. Kuhn synthesizes insights from history, sociology, psychology, and philosophy into a novel conception of science—one which, though seemingly nobody agrees with it in full, has become remarkably influential. Indeed, this book made such an impact that the contemporary reader may have difficulty seeing why it was so controversial in the first place.

Kuhn’s fundamental conception is of the paradigm. A paradigm is a research program that defines a discipline, perhaps briefly, perhaps for centuries. This is not only a dominant theory, but a set of experimental methodologies, ontological commitments, and shared assumptions about standards of evidence and explanation. These paradigms usually trace their existence to a breakthrough work, such as Newton’s Principia or Lavoisier’s Elements; and they persist until the research program is thrown into crisis through stubborn anomalies (phenomena that cannot be accounted for within the theory). At this point a new paradigm may arise and replace the old one, such as the switch from Newton’s to Einstein’s system.

Though Kuhn is often spoken of as responding to Popper, I believe his book is really aimed at undermining the old positivistic conception of science, according to which science consists of a body of verified statements, and discoveries and innovations cause this body of statements to gradually grow. What this view leaves out is the interconnection and interdependence between these beliefs, and the reciprocal relationship between theory and observation. Our background orients our vision, telling us where to look and what to look for; and we naturally do our best to integrate a new phenomenon into our preexisting web of beliefs. Thus we may extend, refine, and elaborate our vision of the world without undermining any of our fundamental theories. This is what Kuhn describes as “normal science.”

During a period of “normal science” it may be true that scientific knowledge gradually accumulates. But when the dominant paradigm reaches a crisis, and the community finds itself unable to accommodate certain persistent observations, a new paradigm may take over. This cannot be described as a mere quantitative increase in knowledge, but is a qualitative shift in vision. New terms are introduced, older ones redefined; previous discoveries are reinterpreted and given a new meaning; and in general the web of connections between facts and theories is expanded and rearranged. This is Kuhn’s famous “paradigm shift.” And since the new paradigm so reorients our vision, it will be impossible to directly compare it with the older one; it will be as if practitioners from the two paradigms speak different languages or inhabit different worlds.

This scandalized some, and delighted others, and for the same reason: that Kuhn seemed to be arguing that scientific knowledge is socially solipsistic. That is to say that scientific “truth” was only true because it was given credence by the scientific community. Thus no paradigm can be said to be objectively “better” than another, and science cannot be said to really “advance.” Science was reduced to a series of fashionable ideas.

Scientists were understandably peeved by the notion, and social scientists concomitantly delighted, since it meant their discipline was at the crux of scientific knowledge. But Kuhn repeatedly denied being a relativist, and I think the text bears him out. It must be said, however, that Kuhn does not guard against this relativistic interpretation of his work as much as, in retrospect, he should have. I believe this was because Kuhn’s primary aim was to undermine the positivistic, gradualist account of science—then almost universally held—and not to replace it with a fully worked-out theory of scientific progress himself. (And this is ironic, since Kuhn himself argues that an old paradigm is never abandoned until a new paradigm takes its place.)

Though Kuhn does say a good deal about this, I think he could have emphasized more strongly the ways that paradigms contribute positively to reliable scientific knowledge. For we simply cannot look on the world as neutral observers; and even if we could, we would not be any the wiser for it. The very process of learning involves limiting possibilities. This is literally what happens to our brains as we grow up: the confused mass of neural connections is pruned, leaving only the ones which have proven useful in our environment. If our brains did not quickly and efficiently analyze environmental stimuli into familiar categories, we could hardly survive a day. The world would be a swirling, jumbled chaos.

Reducing ambiguities is so important to our survival that I think one of the primary functions of human culture is to further eliminate possibilities. For humans, being born with considerable behavioral flexibility, must learn to become inflexible, so to speak, in order to live effectively in a group. All communication presupposes a large degree of agreement among members of a community; and since we are born lacking this, we must be taught fairly rigid sets of assumptions in order to create the necessary accord. In science this process is performed in a much more formalized way, but nevertheless its end is the same: to allow communication and cooperation via a shared language and a shared view of the world.

Yet this is no argument for epistemological relativism, any more than the existence of incompatible moral systems is an argument for moral relativism. While people commonly call themselves cultural relativists when it comes to morals, few people are really willing to argue that, say, unprovoked violence is morally praiseworthy in certain situations. What people mean by calling themselves relativists is that they are pluralists: they acknowledge that incompatible social arrangements can nevertheless be equally ethical. Whether a society has private property or holds everything in common, whether it is monogamous or polygamous, whether burping is considered polite or rude—these may vary, and yet create coherent, mutually incompatible, ethical systems. Furthermore, acknowledging the possibility of equally valid ethical systems also does not rule out the possibility of moral progress, as any given ethical system may contain flaws (such as refusing to respect certain categories of people) that can be corrected over time.

I believe that Kuhn would argue that scientific cultures may be thought of in the same pluralistic way: paradigms can be improved, and incompatible paradigms can nevertheless both have some validity. Acknowledging this does not force one to abandon the concept of “knowledge,” any more than acknowledging cultural differences in etiquette forces one to abandon the concept of “politeness.”

Thus accepting Kuhn’s position does not force one to embrace epistemological relativism—or at least not the strong variety, which reduces knowledge merely to widespread belief. I would go further, and argue that Kuhn’s account of science—or at least elements of his account—can be made to articulate even with the system of his reputed nemesis, Karl Popper. For both conceptions have the scientist beginning, not with observations and facts, but with certain arbitrary assumptions and expectations. This may sound unpromising; but these assumptions and expectations, by orienting our vision, allow us to realize when we are mistaken, and to revise our theories. The Baconian inductivist or the logical positivist, by beginning with a raw mass of data, has little idea how to make sense of it and thus no basis upon which to judge whether an observation is anomalous or not.

This is not where the resemblance ends. According to both Kuhn and Popper (though the former is describing while the latter is prescribing), when we are revising our theories we should if possible modify or discard the least fundamental part, while leaving the underlying paradigm unchanged. This is Kuhn’s “normal science.” So when irregularities were observed in Uranus’ orbit, scientists could either have discarded Newton’s theories (fundamental to the discipline) or the theory that Uranus was the furthest planet in the solar system (a superficial fact); obviously the latter was preferable, and this led to the discovery of Neptune. Science could not survive if scientists too willingly overturned the discoveries and theories of their discipline. A certain amount of stubbornness is a virtue in learning.

Obviously, the two thinkers also disagree about much. One issue is whether two paradigms can be directly compared or definitively tested. Popper envisions conclusive experiments whose outcome can unambiguously decide whether one paradigm or another is to be preferred. There are some difficulties with this view, however, which Kuhn points out. One is that different paradigms may attach very different importance to certain phenomena. Thus for Galileo (to use Kuhn’s example) a pendulum is a prime exemplar of motion, while to an Aristotelian a pendulum is a highly complex secondary phenomenon, unfit to demonstrate the fundamental properties of motion. Another difficulty in comparing theories is that terms may be defined differently. Einstein said that massive objects bend space, but Newtonian space is not a thing at all and so cannot be bent.

Granting the difficulties of comparing different paradigms, I nevertheless think that Kuhn is mistaken in his insistence that they are as separate as two languages. I believe his argument rests, in part, on his conceiving of a paradigm as beginning with definitions of fundamental terms (such as “space” or “time”) which are circular (such as “time is that which is measured by clocks,” etc.); so that comparing two paradigms would be like comparing Euclidean and non-Euclidean geometry to see which is more “true,” though both are equally true to their own axioms (while mutually incompatible). Yet such terms in science do not merely define, but denote phenomena in our experience. Thus (to continue the example) while Euclidean and non-Euclidean geometries may both be equally valid according to their premises, they may not be equally valid according to how they describe our experience.

Kuhn’s response to this would be, I believe, that we cannot have neutral experiences, but all our observations are already theory-laden. While this is true, it is also true that theory does not totally determine our vision; and clever experimenters can often, I believe, devise tests that can differentiate between paradigms to most practitioners’ satisfaction. Nevertheless, as both Kuhn and Popper would admit, the decision to abandon one theory for another can never be a wholly rational affair, since there is no way of telling whether the old paradigm could, with sufficient ingenuity, be made to accommodate the anomalous data; and in any case a strange phenomenon can always be tabled as a perplexing but unimportant deviation for future researchers to tackle. This is how an Aristotelian would view Galileo’s pendulum, I believe.

Yet this fact—that there can be no objective, fool-proof criteria for switching paradigms—is no reason to despair. We are not prophets; every decision we take involves the risk that it will not pan out; and in this respect science is no different. What makes science special is not that it is purely rational or wholly objective, but that our guesses are systematically checked against our experience and debated within a community of dedicated inquirers. All knowledge contains an imaginative and thus an arbitrary element; but this does not mean that anything goes. To use a comparison, a painter working on a portrait will have to make innumerable little decisions during her work; and yet—provided the painter is working within a tradition that values literal realism—her work will be judged, not for the taste displayed, but for the perceived accuracy. Just so, science is not different from other cultural realms in lacking arbitrary elements, but in the shared values that determine how the final result is judged.

I think that Kuhn would assent to this; and I think it was only the widespread belief that science was as objective, asocial, and unimaginative as a camera taking a photograph that led him to emphasize the social and arbitrary aspects of science so strongly. This is why, contrary to his expectations, so many people read his work as advocating total relativism.

It should be said, however, that Kuhn’s position does alter how we normally think of “truth.” In this I also find him strikingly close to his reputed nemesis, Popper. For here is the Austrian philosopher on the quest for truth:

Science never pursues the illusory aim of making its answers final, or even probable. Its advance is, rather, towards the infinite yet attainable aim of ever discovering new, deeper, and more general problems, and of subjecting its ever tentative answers to ever renewed and ever more rigorous tests.

And here is what his American counterpart has to say:

Later scientific theories are better than earlier ones for solving puzzles in the often quite different environments to which they are applied. That is not a relativist’s position, and it displays the sense in which I am a convinced believer in scientific progress.

Here is another juxtaposition. Popper says:

Science is not a system of certain, or well-established, statements; nor is it a system which steadily advances towards a state of finality. Our science is not knowledge (episteme): it can never claim to have attained truth, or even a substitute for it, such as probability. … We do not know: we can only guess. And our guesses are guided by the unscientific, the metaphysical (though biologically explicable) faith in laws, in regularities which we can uncover—discover.

And Kuhn:

One often hears that successive theories grow ever closer to, or approximate more and more closely to, the truth… Perhaps there is some other way of salvaging the notion of ‘truth’ for application to whole theories, but this one will not do. There is, I think, no theory-independent way to reconstruct phrases like ‘really there’; the notion of a match between the ontology of a theory and its ‘real’ counterpart in nature now seems to me illusive in principle.

Though there are important differences, to me it is striking how similar their accounts of scientific progress are: the ever-increasing expansion of problems, or puzzles, that the scientist may investigate. And both thinkers are careful to point out that this expansion cannot be understood as an approach towards an ultimate “true” explanation of everything; and I think their reasons for saying so are related. For since Popper begins with theories, and Kuhn with paradigms—both of which stem from the imagination of scientists—their accounts of knowledge can never be wholly “objective,” but must contain the aforementioned arbitrary element. This necessarily leaves open the possibility that an incompatible theory may yet do an equal or better job in making sense of an observation, or that a heretofore undiscovered phenomenon may violate the theory. And this being so, we can never say that we have reached an “ultimate” explanation, where our theory can be taken as a perfect mirror of reality.

I do not think this notion jeopardizes the scientific enterprise. To the contrary, I think that science is distinguished from older, metaphysical sorts of enquiry in that it is always open-ended, and makes no claim to possessing absolute “truth.” It is this very corrigibility of science that is its strength.

This review has already gone on for far too long, and much of it has been spent riding my own hobby-horse without evaluating the book. Yet I think it is a testament to Kuhn’s work that it is still so rich and suggestive, even after many of its insights have been absorbed into the culture. Though I have tried to defend Kuhn from accusations of relativism or undermining science, anyone must admit that this book has many flaws. One is Kuhn’s firm line between “normal” science and paradigm shifts. In his model, the first consists of mere puzzle-solving while the second involves a radical break with the past. But I think experience does not bear out this hard dichotomy; discoveries and innovations may be revolutionary to different degrees, which I think undermines Kuhn’s picture of science evolving as a punctuated equilibrium.

Another weakness of Kuhn’s work is that it does not do justice to the way that empirical discoveries may cause unanticipated theoretical revolutions. In his model, major theoretical innovations are the products of brilliant practitioners who see the field in a new way. But this does not accurately describe what happened when, say, DNA was discovered. Watson and Crick worked within the known chemical paradigm, and operated like proper Popperians in brainstorming and eliminating possibilities based on the evidence. And yet the discovery of DNA’s double helix, while not overturning any major theoretical paradigms, nevertheless had such far-reaching implications that it caused a revolution in the field. Kuhn has little to say about events like this, which shows that his model is overly simplistic.

I must end here, after thrashing about ineffectually in multiple disciplines in which I am not even the rankest amateur. What I hoped to re-capture in this review was the intellectual excitement I felt while reading this little volume. In somewhat dry (though not technical) academic prose, Kuhn caused a revolution that is still forceful enough to make me dizzy.

Review: The Logic of Scientific Discovery

The Logic of Scientific Discovery by Karl R. Popper

My rating: 4 of 5 stars

We do not know: we can only guess.

Karl Popper originally wrote Logik der Forschung (The Logic of Research) in 1934. This original version—published in haste to secure an academic position and escape the threat of Nazism (Popper was of Jewish descent)—was heavily condensed at the publisher’s request; and because of this, and because it remained untranslated from the German, the book did not receive the attention it deserved. This had to wait until 1959, when Popper finally released a revised and expanded English translation. Yet this condensation and subsequent expansion have left their mark on the book. Popper makes his most famous point within the first few dozen pages; and much of the rest of the book is given over to dead controversies, criticisms and rejoinders, technical appendices, and extended footnotes. It does not make for the most graceful reading experience.

This hardly matters, however, since it is here that Popper put forward what has arguably become the most famous concept in the philosophy of science: falsification.

This term is widely used; but its original justification is not, I believe, widely understood. Popper’s doctrine must be understood as a response to inductivism. Now, in 1620 Francis Bacon released his brilliant Novum Organum. Its title alludes to Aristotle’s Organon, a collection of logical treatises, mainly focusing on how to make accurate deductions. This Aristotelian method—built around the syllogism, deriving conclusions from given premises—dominated the study of nature for millennia, with precious little to show for it. Francis Bacon hoped to change all that with his new doctrine of induction. Instead of beginning with premises (‘All men are mortal’), and reasoning to conclusions (‘Socrates is mortal’), the investigator must begin with experiences (‘Socrates died,’ ‘Plato died,’ etc.) and then generalize a conclusion (‘All men are mortal’). This was how science was to proceed: from the specific to the general.

This seemed all fine and dandy until, in 1739, David Hume published his Treatise of Human Nature, in which he explained his infamous ‘problem of induction.’ Here is the idea. If you see one, two, three… a dozen… a thousand… a million white swans, and not a single black one, it is still illogical to conclude “All swans are white.” Even if you investigated every swan in the world but one, and they all proved white, you still could not conclude with certainty that the last one would be white. Aside from modus tollens (inferring a negative general from a negative specific), there is no logically justifiable way to proceed from the specific to the general. To this argument, many are tempted to respond: “But we know from experience that induction works. We generalize all the time.” Yet this is to use induction to prove that induction works, which is circular. Hume’s problem of induction has proven to be a stumbling block for philosophers ever since.

In the early parts of the 20th century, the doctrine of logical positivism arose in the philosophical world, particularly in the ‘Vienna Circle’. This had many proponents and many forms, but the basic idea, as explained by A.J. Ayer, is the following. The meaning of a sentence is equivalent to its verification; and verification is performed through experience. Thus the sentence “The cat is on the mat” can be verified by looking at the mat; it is a meaningful utterance. But the sentence “The world is composed of mind” cannot be verified by any experience; it is meaningless. Using this doctrine the positivists hoped to eliminate all metaphysics. Unfortunately, however, the doctrine also eliminates human knowledge, since, as Hume showed, generalizations can never be verified. No experience corresponds, for example, to the statement: “Gravitation is proportional to the product of the masses and the inverse square of the distance,” since this is an unlimitedly general statement, and experiences are always particular.

Karl Popper’s falsificationism is meant to solve this problem. First, it is important to note that Popper is not, like the positivists, proposing a criterion of ‘meaning’. That is to say that, for Popper, unfalsifiable statements can still be meaningful; they just do not tell us anything about the world. Indeed, he continually notes how metaphysical ideas (such as Kepler’s idea that circles are more ‘perfect’ than other shapes) have inspired and guided scientists. This is itself an important distinction because it prevents him from falling into the same paradox as the positivists. For if only the statements with empirical content have meaning, then the statement “only the statements with empirical content have meaning” is itself meaningless. Popper, for his part, regarded himself as the enemy of linguistic philosophy and considered the problem of epistemology quite distinct from language analysis.

To return to falsification, Popper’s fundamental insight is that verification and falsification are not symmetrical. While no general statement can be proved using a specific instance, a general statement can indeed be disproved with a specific instance. A thousand white swans do not prove that all swans are white; but one black swan disproves it. (This is the aforementioned modus tollens.) All this may seem trivial; but as Popper realized, it changes the nature of scientific knowledge as we know it. For science, then, is far from what Bacon imagined it to be—a carefully sifted catalogue of experiences, a collection of well-founded generalizations—and is rather a collection of theories which spring up, as it were, from the imagination of the scientist in the hopes of uniting several observed phenomena under one hypothesis. Or to put it more bluntly: a good scientific theory is a guess that has not yet been proved wrong.
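
Put schematically (the notation is mine, not Popper’s), the asymmetry looks like this:

\[
W(s_1),\ W(s_2),\ \dots,\ W(s_n) \;\not\vdash\; \forall x\,\bigl(S(x)\rightarrow W(x)\bigr)
\]
\[
\exists x\,\bigl(S(x)\wedge\neg W(x)\bigr) \;\vdash\; \neg\,\forall x\,\bigl(S(x)\rightarrow W(x)\bigr)
\]

No finite list of white swans entails the generalization; a single black swan formally refutes it.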

With his central doctrine established, Popper goes on to the technicalities. He discusses what composes the ‘range’ or ‘scope’ of a theory, and how some theories can be said to encompass others. He provides an admirable justification for Occam’s Razor—the preference for simpler over more complex explanations—since theories with fewer parameters are more easily falsified and thus, in his view, more informative. The biggest section is given over to probability. I admit that I had some difficulty following his argument at times, but the gist of his point is that probability must be interpreted ‘objectively,’ as frequency distributions, rather than ‘subjectively,’ as degrees of certainty, in order to be falsifiable; and also that the statistical results of experiments must be reproducible in order to avoid the possibility of statistical flukes.

All this leads up to a strangely combative section on quantum mechanics. Popper apparently was in the same camp as Einstein, and was put off by Heisenberg’s uncertainty principle. Like Einstein, Popper was a realist and did not like the idea that a particle’s properties could be actually undetermined; he wanted to see the uncertainty of quantum mechanics as a byproduct of measurement or of ‘hidden variables’—not as representing something real about the universe. And like Einstein (though less famously) Popper proposed an experiment to decide the issue. The original experiment, as described in this book, was soon shown to be flawed; but a revised experiment was finally conducted in 1999, after Popper’s death. Though the experiment agreed with Popper’s prediction (showing that measuring an entangled photon does not affect its pair), it had no bearing on Heisenberg’s uncertainty principle, which restricts arbitrarily precise measurements on a single particle, not a pair of particles.

Incidentally, it is difficult to see why Popper is so uncomfortable with the uncertainty principle. Given his own dogma of falsifiability, the belief that nature is inherently deterministic (and that probabilistic theories are simply the result of a lack of our own knowledge) should be discarded as metaphysical. This is just one example of how Popper’s personality was out of harmony with his own doctrines. An advocate of the open society, he was famously authoritarian in his private life, which led to his own alienation. This is neither here nor there, but it is an interesting comment on the human animal.

Popper’s doctrine, like all great ideas, has proven both influential and controversial. For my part I think falsification a huge advance over Bacon’s induction or the positivists’ verification. And despite the complications, I think that falsifiability is a crucial test to distinguish, not only science from pseudo-science, but all dependable knowledge from myth. For both pseudo-science and myth generally distinguish themselves by admirably fitting the data set, but resisting falsification. Freud’s theories, for example, can accommodate themselves to any set of facts we throw at them; likewise for intelligent design, belief in supernatural beings, or conspiracy theories. All of these seem to explain everything—and in a way they do, since they fit the observable data—but really explain nothing, since they can accommodate any new observation.

There are some difficulties with falsification, of course. The first is observation. For what we observe, or even what we count as an ‘observation’, is colored by our background beliefs. Whether to regard a dot in the sky as a plane, a UFO, or an angel is shaped by the beliefs we already hold; thus it is possible to disregard observations that run counter to our theories, rather than falsifying the theories. What is more, theories never exist in isolation, but in an entire context of beliefs; so if one prediction is definitively falsified, it can still be unclear what we must change in our interconnected edifice of theories. Further, it is rare for experimental predictions to agree exactly with results; usually they are approximately correct. But where do we draw the line between falsification and approximate correctness? And last, if we formulate a theory which withstands test after test, predicting their results with extreme accuracy time and again, must we still regard the theory as a provisional guess?

To give Popper credit, he responds to all of these points in this work, though perhaps not with enough discussion. And none of these criticisms changes the fact that so much of the philosophy of science written after Popper has taken his work as a starting point, either attempting to amplify, modify, or (dare I say it?) falsify his claims. For my part, though I was often bored by the dry style and baffled by the technical explanations, I found myself admiring Popper’s careful methodology: responding to criticisms, making fine distinctions, building up his system piece by piece. Here is a philosopher deeply committed to the ideal of rational argument and deeply engaged with understanding the world. I am excited to read more.

Review: The Beautiful Brain

Beautiful Brain: The Drawings of Santiago Ramon y Cajal by Larry W. Swanson

My rating: 4 of 5 stars

Like the entomologist in pursuit of brightly colored butterflies, my attention hunted, in the flower garden of the gray matter, cells with delicate and elegant forms, the mysterious butterflies of the soul, the beating of whose wings may someday—who knows?—clarify the secret of mental life.

I love walking around cathedrals because they are sublime examples of vital art. I say “vital” because the art is not just seen, but lived through. Every inch of a cathedral has at least two levels of significance: aesthetic and theological. Beauty, in other words, walks hand in hand with a certain view of the world. Indeed, beauty is an essential part of this view of the world, and thus facts and feelings are blended together into one seamlessly intelligible whole: a philosophy made manifest in stone.

The situation that pertains today is quite different. It is not that our present view of the world is inherently less beautiful; but that the vital link between the visual arts and our view of the world has been severed. Apropos of this, I often think of one of Richard Feynman’s anecdotes. He once gave a tour of a factory to a group of artists, trying to explain modern technology to them. The artists, in turn, were supposed to incorporate what they learned into a piece for an exhibition. But, as Feynman notes, almost none of the pieces really had anything to do with the technology. Art and science had tried to make contact, and failed.

This is why I am so intrigued by the anatomical drawings of Santiago Ramón y Cajal. For here we see a successful unification, revealing the same duality of significance as in a cathedral: his drawings instruct and enchant at once.

Though relatively obscure in the anglophone world, Cajal is certainly one of the most important scientists of history. He is justly considered to be the father of neuroscience. Cajal’s research into the fine structures of the brain laid the foundation for the discipline. At a time when neurons were only a hypothesis, Cajal not only convinced the scientific world of their existence (as against the reticular theory), but documented several different types of neurons, describing their fine structure—nucleus, axon, and dendrites—and the flow of information within and between nerve cells.

As we can see in his Advice to a Young Investigator, Cajal in his adulthood became a passionate advocate for scientific research. But he did not always wish to be a scientist. As a child he was far more interested in painting; it was only the pressure of his father, a doctor, which turned him in the direction of research. And as this book shows, he never really gave up his artistic ambition; he only channelled it into another direction.

Research in Cajal’s day was far simpler. Instead of a team of scientists working with a high-powered MRI, we have the lonely investigator hunched over a microscope. The task was no easier for being simpler, however. Besides patience, ingenuity, and a logical mind—the traits of any good scientist—a microanatomist back then needed prodigious visual acumen. The task was to see properly: to extract a sensible figure from the blurry and chaotic images under the microscope. To meet this challenge Cajal not only had to create new methods—staining the neurons to make them more visible—but also to train his eye. And in both he proved a master.

He would often spend hours at the microscope, looking and looking without taking any notes. Not only was his analytic mind at work during these periods, making guesses about cell functions and deductions about information flow, but also his visual imagination: he had to hold the cell’s form within his mind, seeing the cells in context and in isolation, since the fine details of their structure were highly suggestive of their behavior and purpose. His drawings were the final expression of his visual process: “A graphic representation of the object observed guarantees the exactness of the observation itself.” For Cajal, as for Leonardo da Vinci, drawing was a form of thinking.

Though long since superseded by subsequent research, Cajal’s drawings have maintained their appeal, both as diagrams and as works of art. With the aid of a short caption—ably provided by Eric Newman in this volume—the drawings spring to life as records of scientific research. They summarize complex processes, structures, and relations with brilliant clarity, making the essential point graspable in an instant.

Purely as drawings they are no less brilliant. The twisting and sprawling forms of neurons; the chaotic lattices of interconnected cells; the elegant architecture of our sensory organs—all this possesses an otherworldly beauty. The brain, such an intimate part of ourselves, is revealed to be intensely alien. One is naturally reminded of the surrealists by these dreamlike landscapes; and indeed Lorca and Dalí were both aware of Cajal’s work. Yet Cajal’s drawings are perhaps more fantastic than anything the surrealists ever produced, all the more bizarre for being true.

Even the names of these drawings wouldn’t be out of place in a modern gallery: “Cuneate nucleus of a kitten,” “Neurons in the midbrain of a sixteen-day-old trout,” “Axons in the Purkinje neurons in the cerebellum of a drowned man.” Science can be arrestingly poetic.

One of the functions of art is to help us to understand ourselves. The science of the brain, in a much different way, aims to do the same thing. It seems wholly right, then, that these two enterprises should unite in Cajal, the artistic investigator of our nervous system. And this volume is an ideal place to witness his accomplishment. The large, glossy images are beautiful. The commentary frames and explains, but does not distract. The essays on Cajal’s life and art are concise and incisive, and are supplemented by an essay on modern brain imaging that brings the book up to date. It is a cathedral of a book.

Review: Advice to a Young Investigator

Reglas y consejos sobre investigación científica. Los tónicos de la voluntad. by Santiago Ramón y Cajal

My rating: 4 of 5 stars

Books, like people, we respect and admire for their good qualities, but we only love them for some of their defects.

Santiago Ramón y Cajal has a fair claim to being the greatest scientist to hail from Spain. I have heard him called the “Darwin of Neuroscience”: his research and discoveries are foundational to our knowledge of the brain. When he won the Nobel Prize in 1906 it was for his work using nerve stains to differentiate neurons. At the time, you see, the existence of nerve cells was still highly controversial; Camillo Golgi, with whom Ramón y Cajal shared the Nobel, was a supporter of the reticular theory, which held that the nervous system was one continuous object.

Aside from being an excellent scientist, Ramón y Cajal was also a man of letters and a passionate teacher. These three aptitudes combined to produce this charming book, whose prosaic title is normally translated into English—inaccurately but more appealingly—as Advice to a Young Investigator. Its chapters originated as lectures delivered at the Real Academia de Ciencias Exactas, Físicas y Naturales in 1897, and were published the next year by a colleague. They consist of warm and frank advice to students embarking on a scientific career.

Ramón y Cajal is wonderfully optimistic when it comes to the scientific enterprise. Like the philosopher Susan Haack, he thinks that science follows no special logic or method, but is only based on sharpened common sense. Thus one need not be a genius to make a valuable contribution. Indeed, for him, intelligence is much overrated. Focus, dedication, and perseverance are what really separate the successes from the failures. He goes on to diagnose several infirmities of the will that prevent young and promising students from accomplishing anything in the scientific field. Among these is the megalófilo (roughly, the “lover of grand things”), a type exemplified by the character Casaubon in Middlemarch, who cannot finish taking notes and doing research in time to actually write his book.

While much of Ramón y Cajal’s advice is timeless, this book is also very much of a time and a place. He advises his young students to buy their own equipment and to work at home—something that would be impractical today, not least because laboratory equipment has since grown so much in complexity and expense. He even advises his students on finding the right wife (over-cultured women are to be avoided). More seriously, these lectures are marked by the crisis of 1898, when Spain lost the Spanish-American War and the feeling of cultural degeneration was widespread. Ramón y Cajal is painfully aware that Spain lagged behind the other Western countries in scientific research, and much of these lectures is aimed at alleviating specifically Spanish shortcomings.

On every one of these pages, Ramón y Cajal’s fierce dedication to the scientific enterprise, his conviction that science is noble, useful, and necessary, and his desire to see the spirit of inquiry spread far and wide are expressed with a pungent wit. It cannot fail to infect the reader with the same zeal to expand the bounds of human knowledge, and with admiration for such an exemplary scientist.

View all my reviews

Review: Opticks

Review: Opticks

OpticksOpticks by Isaac Newton

My rating: 4 of 5 stars


My Design in this Book is not to explain the Properties of Light by Hypotheses, but to propose and prove them by Reason and Experiments.

I’ve long wanted to read Newton’s Principia, but its reputation intimidates me. Everyone seems to agree that it is intensely difficult, and I’m sorry to say I haven’t worked up enough nerve to face it yet. But I did still want to read Newton; so as soon as I learned about this book, Newton’s more popular and accessible volume, I snatched it up and happily dug in.

The majority of this text is given over to descriptions of experiments. To the modern reader—and I suspect to the historical reader as well—these sections are remarkably dry. In simple yet exact language, Newton painstakingly describes the setup and results of experiment after experiment, most of them conducted in his darkened chamber, with the window covered up except for a small opening to let in the sunlight. Yet even if this doesn’t make for a thrilling read, it is impossible not to be astounded at the depth of care, the keenness of observation, and the subtle brilliance Newton displays. Using the most basic equipment (his most advanced tool is the prism), Newton almost literally tweezes light apart, making an enormous contribution both to experimental science and to the field of optics.

At the time, the discovery that white light could be decomposed into a rainbow of colors, and that this rainbow could be recombined back into white light, must have seemed as momentous as the discovery of the Higgs boson. And indeed, even the modern reader might catch a glimpse of this excitement as she watches Newton carefully set up his prism in front of his beam of light, tweaking every variable, adjusting every parameter, measuring everything that could be measured, and describing in elegant prose everything that couldn’t.

Whence it follows, that the colorifick Dispositions of Rays are also connate with them, and immutable; and by consequence, that all the Productions and Appearances of Colours in the World are derived, not from any physical Change caused in Light by Refraction or Reflexion, but only from the various Mixtures or Separations of Rays, by virtue of their different Refrangibility or Reflexibility. And in this respect the Science of Colours becomes a Speculation as truly mathematical as any other part of Opticks.

Because I had recently read Feynman’s QED, one thing in particular caught my attention. Here’s the problem: When you have one surface of glass, even if most of the light passes through it, some of the light is reflected; and you can roughly gauge what portion of light does one or the other. Let’s say on a typical surface of glass, 4% of light is reflected. Now we add another surface of glass behind the first. According to common sense, 8% of the light should be reflected, right? Wrong. Now the amount of light which is reflected varies between 0% and 16%, depending on the distance between the two surfaces. This is truly bizarre; for it seems that the mere presence of a second surface of glass alters the reflectiveness of the first. But how does the light “know” there is a second surface of glass? It seems the light somehow is affected before it comes into contact with either surface.
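If you want to see where the 0% and 16% come from, here is a toy “arrow-adding” calculation in the spirit of Feynman’s QED. It is a minimal sketch under assumed numbers (a reflection amplitude of 0.2 per surface, so 4% alone, and green light at 500 nm), not real optics engineering:

```python
import numpy as np

# Toy model: each surface contributes a reflection amplitude of magnitude r
# (so r**2 = 4% for one surface alone); the rear surface's "arrow" is
# rotated by the round-trip phase through the gap. The wavelength and
# amplitude are illustrative assumptions, not figures from the book.
r = 0.2                       # amplitude per surface (4% intensity alone)
wavelength = 500e-9           # assumed wavelength: green light, in meters

for gap in np.linspace(0, wavelength / 2, 6):
    delta = 4 * np.pi * gap / wavelength      # round-trip phase difference
    amplitude = -r + r * np.exp(1j * delta)   # front (flipped) + rear arrows
    reflectance = abs(amplitude) ** 2         # probability of reflection
    print(f"gap = {gap * 1e9:5.0f} nm -> reflected: {reflectance:.1%}")
```

At zero gap the front arrow (flipped on reflection) and the rear arrow cancel exactly, giving 0%; at a quarter-wavelength gap they align, giving the full 16%.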

Well, Newton was aware of this awkward problem. He famously comes up with his theory of “fits of easy reflection or transmission” to explain this phenomenon. But this “theory” was merely to say that the glass, for some unknown reason, sometimes lets light through, and sometimes reflects it. In other words, it was hardly a theory at all.

Every Ray of Light in its passage through any refracting Surface is put into a certain transient Constitution or State, which in the progress of the Ray returns at equal Intervals, and disposes the Ray at every return to be easily transmitted through the next refracting Surface, and between the returns to be easily reflected by it.

Also fascinating to the modern reader is the strange dual conception of light as waves and as particles in this work, which can’t help but remind us of the quantum view. The wave theory makes it easy to account for the different refrangibility of the different colors of light (i.e. the different colors refract at different angles in a prism).

Do not several sorts of Rays make Vibrations of several bignesses, which according to their bignesses excite Sensations of several Colours, much after the manner that the Vibrations of the Air, according to their several bignesses excite Sensations of several Sounds? And particularly do not the most refrangible Rays excite the shortest Vibrations for making a Sensation of deep violet, the least refrangible the largest for making a Sensation of deep red, and the several intermediate bignesses to make Sensations of the several intermediate Colours?

To this notion of vibrations, Newton adds the “corpuscular” theory of light, which held (in opposition to his contemporary, Christiaan Huygens) that light was composed of small particles. This theory must have been attractive to Newton because it fit into his previous work in physics. It explained why beams of light, like other solid bodies, travel in straight lines (cf. Newton’s first law), and reflect off surfaces at angles equal to their angles of incidence (cf. Newton’s third law).

Are not the Rays of Light very small Bodies emitted from shining Substances? For such Bodies will pass through uniform Mediums in right Lines without bending into the shadow, which is the Nature of the Rays of Light. They will also be capable of several Properties, and be able to conserve their Properties unchanged in passing through several Mediums, which is another condition of the Rays of Light.

As a side note, despite some problems with the corpuscular theory of light, it came to be accepted for a long while, until the phenomenon of interference gave seemingly decisive weight to the wave theory. (Light, like water waves, will interfere with itself, creating characteristic patterns; cf. the famous double-slit experiment.) The wave theory was reinforced by Maxwell’s equations, which treated light as just another electromagnetic wave. It was, in fact, Einstein who revived the corpuscular theory, when he suggested that light might come in packets to explain the photoelectric effect. (Blue light, when shined on certain metals, will cause an electric current, while red light won’t. Why not?)
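Einstein’s resolution is that each electron is kicked loose by a single photon, whose energy depends only on the light’s frequency. A quick back-of-the-envelope check makes the point; the 2.3 eV work function is my illustrative assumption, roughly that of sodium:

```python
# Einstein's photoelectric relation: an electron escapes only if one
# photon's energy h*c/lambda exceeds the metal's work function.
h = 6.626e-34         # Planck's constant (J*s)
c = 2.998e8           # speed of light (m/s)
eV = 1.602e-19        # joules per electron-volt

work_function = 2.3   # eV, illustrative assumption (roughly sodium)

for name, wavelength in [("blue", 450e-9), ("red", 700e-9)]:
    photon_energy = h * c / wavelength / eV   # energy of one photon, in eV
    print(f"{name}: {photon_energy:.2f} eV -> "
          f"electrons ejected: {photon_energy > work_function}")
```

However intense the red beam, no individual red photon carries enough energy; brightness only adds more photons, not more energy per photon.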

All this tinkering with light is good fun, especially if you’re a physicist (which I’m not). But the real treat, at least for the lay reader, comes in the final section, where Newton speculates on many of the unsolved scientific problems of his day. His mind is roving and vast; and even if most of his speculations have turned out to be incorrect, it’s stunning simply to witness him at work. For example, Newton realizes that radiation can travel without a medium (like air), and can heat objects even in a vacuum. (And thank goodness for that, for how else would the earth be warmed by the sun?) But from this fact he incorrectly deduces that there must be some more subtle medium that remains (like the famous ether).

If in two large tall cylindrical Vessels of Glass inverted, two little Thermometers be suspended so as not to touch the Vessels, and the Air be drawn out of one of these Vessels, and these Vessels thus prepared be carried out of a cold place into a warm one; the Thermometer in vacuo will grow warm as much, and almost as soon as the Thermometer that is not in vacuo. And when the Vessels are carried back into the cold place, the Thermometer in vacuo will grow cold almost as soon as the other Thermometer. Is not the Heat of the warm Room convey’d through the Vacuum by the Vibrations of a much subtiler Medium than Air, which after the Air was drawn out remained in the Vacuum?

Yet for all Newton’s perspicacity, the most touching section is a list of questions Newton asks, as if to himself, that he cannot hope to answer. It seems that even the most brilliant among us are stunned into silence by the vast mystery of the cosmos:

What is there in places almost empty of Matter, and whence is it that the Sun and Planets gravitate towards one another, without dense Matter between them? Whence is it that Nature doth nothing in vain; and whence arises all that Order and Beauty which we see in the World? To what end are Comets, and whence is it that Planets move all one and the same way in Orbs concentrick, while Comets move all manner of ways in Orbs very excentrick; and what hinders the fix’d Stars from falling upon one another? How came the Bodies of animals to be contrived with so much Art, and for what ends were their several Parts? Was the Eye contrived without Skill in Opticks, and the Ear without Knowledge of Sounds? How do the Motions of the Body follow from the Will, and whence is the Instinct in Animals?

View all my reviews

Review: Aristotle’s Physics

Review: Aristotle’s Physics

PhysicsPhysics by Aristotle

My rating: 4 of 5 stars

Of all the ancient thinkers that medieval Christians could have embraced, it always struck me as pretty remarkable that Aristotle was chosen. Of course, ‘chosen’ isn’t the right word; rather, it was something of a historical coincidence, since Aristotle’s works were available in Latin translation, while those of Plato were not.

Nonetheless, Aristotle strikes me as a particularly difficult thinker to build a monotheistic worldview around. There’s simply nothing mystical about him. His feet are planted firmly on the ground, and his eyes are level with the horizon. Whereas mystics see the unity of everything, Aristotle divides up the world into neat parcels, providing lists of definitions and categories wherever he turns. Whereas mystics tend to scorn human knowledge, Aristotle was apparently very optimistic about the potential reach of the human mind—since he so manifestly did his best to know everything.

The only thing that I can find remotely mystical is Aristotle’s love of systems. Aristotle does not like loose ends; he wants his categories to be exhaustive, and his investigations complete. And, like a mystic, Aristotle is very confident about the reach of a priori knowledge, while his investigations of empirical reality—though admittedly impressive—are paltry in comparison with his penchant for logical deduction. At the very least, Aristotle is wont to draw many more conclusions from a limited set of observations than most moderns are comfortable with.

I admit, in the past I’ve had a hard time appreciating his writing. His style was dry; his arguments, perfunctory. I often wondered: What did so many people see in him? His tremendous influence seemed absurd after one read his works. How could he have seemed so convincing for so long?

I know from experience that when I find a respected author ludicrous, the fault is often my own. So, seeking a remedy, I decided that I would read more Aristotle; more specifically, I would read enough Aristotle until I learned to appreciate him. For overexposure can often engender a change of heart; in the words of Stephen Stills, “If you can’t be with the one you love, love the one you’re with.” So I decided I would stick with Aristotle until I loved him. I still don’t love Aristotle, but, after reading this book, I have a much deeper respect for the man. For this book really is remarkable.

As Bertrand Russell pointed out (though it didn’t need a mind as penetrating as Russell’s to do so), hardly a sentence in this book can be accepted as accurate. In fact, from our point of view, Aristotle’s project was doomed from the start. He is investigating physical reality, but is doing so without conducting experiments; in other words, his method is purely deductive, starting from a few assumptions, most of which are wrong. Much of what Aristotle says might even seem silly—such as his dictum that “we always assume the presence in nature of the better.” Another great portion of this work is taken up by thoroughly uninteresting and unconvincing investigations, such as the definitions of ‘together’, ‘apart’, ‘touch’, ‘continuous’, and all of the different types of motions—all of which seem products of a pedantic brain rather than qualities of nature.

But the good in this work far outweighs the bad. For Aristotle commences the first (so far as I know) intellectually rigorous investigations of the basic properties of nature—space, time, cause, motion, and the origins of the universe. I find Aristotle’s inquiry into time particularly fascinating, for I can’t recall any comparably meticulous investigation of time by the later philosophers I’ve read. Of course, Aristotle’s investigation of ‘time’ can more properly be called an investigation of the human experience of time, but we need not fault Aristotle for not thinking there’s a difference.

I was particularly impressed with Aristotle’s attempt to overcome Zeno’s paradoxes. He defines and re-defines time—struggling with how it can be divided, and with the exact nature of the present moment—and tries many different angles of attack. And what’s even more interesting is that Aristotle fails in his task, and even falls into Zeno’s intellectual trap by unwittingly accepting Zeno’s assumptions.

Aristotle’s attempts to tackle space were almost equally fascinating; for there, we once again see the magnificent mind of Aristotle struggling to define something of the highest degree of abstractness. In fact, I challenge anyone reading this to come up with a good definition of space. It’s hard, right? The paradox (at least, the apparent paradox) is that space has some qualities of matter—extension, volume, dimensions—without having any mass. It seems, at first sight at least, as if empty space should be simply nothing; yet space itself has certain definite qualities—and anything that has qualities is, by definition, something. However, these qualities only emerge when one imagines a thing in space, for we never, in our day-to-day lives, encounter space itself, devoid of all content. But how could something with no mass have the quality of extension?

As is probably obvious by now, I am in no way a physicist—and, for that matter, neither was Aristotle; but his attempt is still interesting.

Aristotle does also display an admirable—though perhaps naïve—tendency to trust experience. For his refutation of the thinkers who argue that (a) everything is always in motion, or (b) everything is always at rest, is merely to point out that day-to-day experience refutes both. And Aristotle at least knows—since it is so remarkably obvious to those with eyes—that Zeno must have committed some error; so even if his attacks on the paradoxes don’t succeed, one can at least praise the effort.

To the student of modern physics, this book may present some interesting contrasts. We have learned, through painstaking experience, that the most productive questions to ask of nature begin with “how” rather than “why.” Of course, the two words are often interchangeable; but notice that “why” attributes a motive to something, whereas “how” is motiveless. Aristotle seeks to understand nature in the same way that one might understand a friend. In a word, he seeks teleological explanations. He assumes both that nature works with a purpose, and that the workings of nature are roughly accessible to common sense, with some logical rigor thrown in. A priori, this isn’t necessarily a bad assumption; in fact, it took a long time for us humans to realize it was incorrect. In any case, it must be admitted that Aristotle at least seeks to understand far more than we moderns do; for Aristotle seeks, so to speak, to get inside the ‘mind’ of nature, understanding the purpose of everything, whereas modern scientific knowledge is primarily descriptive.

Perhaps now I can see what the medieval Christians found in Aristotle. The assumption that nature works with a purpose certainly meshes well with the belief in an omnipotent creator God. And the assumption that knowledge is accessible through common sense and simple logical deductions is reasonable if one believes that the world was created for us. To the modern reader, the Physics might be far less impressive than to the medievals. But it is always worthwhile to witness the inner workings of such a brilliant mind; and, of all the Aristotle I’ve so far read, none so clearly show Aristotle’s thought process, none so clearly show his mind at work, as this.

View all my reviews

Review: Dialogue Concerning the Two Chief World Systems

Review: Dialogue Concerning the Two Chief World Systems

Dialogue Concerning the Two Chief World SystemsDialogue Concerning the Two Chief World Systems by Galileo Galilei

My rating: 4 of 5 stars

I should think that anyone who considered it more reasonable for the whole universe to move in order to let the earth remain fixed would be more irrational than one who should climb to the top of your cupola just to get a view of the city and its environs, and then demand that the whole countryside should revolve around him so that he would not have to take the trouble to turn his head.

It often seems hard to justify reading old works of science. After all, science continually advances; pioneering works today will be obsolete tomorrow. As a friend of mine said when he saw me reading this, “That shit’s outdated.” And it’s true: this shit is outdated.

Well, for one thing, understanding the history of the development of a theory often aids in the understanding of the theory. Look at any given technical discipline today, and it’s overwhelming; you are presented with such an imposing edifice of knowledge that it seems impossible. Yet even the largest oak was once an acorn, and even the most frightening equation was once an idle speculation. Case in point: Achieving a modern understanding of planetary orbits would require mastery of Einstein’s theories—no mean feat. Flip back the pages in history, however, and you will end up here, at this delightful dialogue by a nettlesome Italian scientist, as accessible a book as ever you could hope for.

This book is rich and rewarding, but for some unexpected reasons. What will strike most modern readers, I suspect, is how plausible the Ptolemaic worldview appears in this dialogue. To us alive today, who have seen the earth in photographs, the notion that the earth is the center of the universe seems absurd. But back then, it was plain common sense, and for good reason. Galileo’s fictional Aristotelian philosopher, Simplicio, puts forward many arguments for the immobility of the earth, some merely silly, but many very sensible and convincing. Indeed, I often felt like I had to take Simplicio’s side, as Galileo subjects the good Ptolemaic philosopher to much abuse.

I’d like to think that I would have sensed the force of the Copernican system if I were alive back then. But really, I doubt it. If the earth was moving, why wouldn’t things you throw into the air land to the west of you? Wouldn’t we feel ourselves in motion? Wouldn’t cannonballs travel much further one way than another? Wouldn’t we be thrown off into space? Galileo’s answer to all of these questions is the principle of inertia: all inertial (non-accelerating) frames of reference are equivalent. That is, an experiment will look the same whether it’s performed on a ship at constant velocity or on dry land.

(In reality, the surface of the earth is non-inertial, since it is undergoing acceleration due to its constant spinning motion. Indeed the only reason we don’t fly off is because of gravity, not because of inertia as Galileo argues. But for practical purposes the earth’s surface can be treated as an inertial reference frame.)
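To put a number on “practical purposes,” here is a quick calculation of my own (not Galileo’s) comparing the acceleration due to the earth’s spin with ordinary gravity:

```python
import math

# How non-inertial is the earth's surface? Compare the centripetal
# acceleration of a point on the equator with gravitational acceleration g.
earth_radius = 6.371e6               # meters
sidereal_day = 86164.0               # seconds per full rotation
omega = 2 * math.pi / sidereal_day   # angular velocity, rad/s

centripetal = omega ** 2 * earth_radius
print(f"centripetal acceleration: {centripetal:.3f} m/s^2")   # ~0.034
print(f"as a fraction of g:       {centripetal / 9.81:.2%}")  # ~0.35%
```

About a third of a percent of g, far too small for Galileo’s contemporaries to feel, which is why the moving earth hides itself so well in everyday experience.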

Because this simple principle is the key to so many of Galileo’s arguments, the final section of this book is trebly strange. In the last few pages of this dialogue, Galileo triumphantly puts forward his erroneous theory of the tides as if it were the final nail in Ptolemy’s coffin. Galileo’s theory was that the tides were caused by the movement of the earth, like water sloshing around a bowl on a spinning Lazy Susan. But if this were what really caused the tides, then Galileo’s principle of inertia would fall apart; for if the earth’s movement could slosh the oceans about, couldn’t it also push us humans around? It’s amazing that Galileo didn’t mind this inconsistency. It’s as if Darwin had ended On the Origin of Species with an argument that ducks were the direct descendants of daffodils.

Yet for all the many quirks and flaws in this work, for all the many digressions—and there are quite a few—it still shines. Galileo is a strong writer and a superlative thinker; following along the train of his thoughts is an adventure in itself. But of course this work, like all works of science, is not ultimately about the mind of one man; it is about the natural world. And if you are like me, this book will make you think of the sun, the moon, the planets, and the stars in the sky; will remind you that your world is spinning like a top, and that the very ground we stand on is flying through the dark of space, shielded by a wisp of clouds; and that the firmament up above, something we often forget, is a window into the cosmos itself—you will think about all this, and decide that maybe this shit isn’t so outdated after all.

View all my reviews