Review: Autobiography (Darwin)

The Autobiography of Charles Darwin, 1809–82 by Charles Darwin

My rating: 4 of 5 stars

I have attempted to write the following account of myself, as if I were a dead man in another world looking back at my own life. Nor have I found this difficult, for life is nearly over with me. I have taken no pains about my style of writing.

This is the quintessential scientific autobiography, a brief and charming book that Darwin wrote “for nearly an hour on most afternoons” for a little over two months. Originally published in 1887—five years after the naturalist’s death—it was somewhat censored, the more controversial religious opinions being taken out. It was only in 1958, to celebrate the centennial of The Origin of Species, that the full version was restored, edited by one of Darwin’s granddaughters, Nora Barlow.

The religious opinions that Darwin expresses are, nowadays, not enough to raise eyebrows. In short, his travels and his research slowly eroded his faith until all that remained was an untroubled agnosticism. What is interesting is that Darwin traces to this loss of faith a further loss of sensitivity to grand natural scenes: apparently, in later life he found himself unable to experience the sublime. His scientific work likewise cost him his appreciation for music, pictures, and poetry, which he heartily regrets: “My mind seems to have become a kind of machine for grinding general laws out of large collections of facts,” he says, and attributes to this the fact that “for many years I cannot endure to read a line of poetry.”

The most striking and lovable of Darwin’s qualities is his humility. He notes his lack of facility with foreign languages (which partially caused him to refuse Marx’s offer to dedicate Kapital to him), his terrible ear for music, his difficulty with writing, his incompetence in mathematics, and repeatedly laments his lack of higher aesthetic sensitivities. His explanation for his great scientific breakthrough is merely a talent for observation and dogged persistence. He even ends the book by saying: “With such moderate abilities as I possess, it is truly surprising that thus I should have influenced to a considerable extent the beliefs of scientific men on some important point.” It is remarkable that such a modest and retiring man should have stirred up one of the greatest revolutions in Western thought. Few thinkers have been more averse to controversy.

This little book also offers some reflection on the development of his theory—with the oft-quoted paragraph about reading Malthus—as well as several good portraits of contemporary thinkers. But the autobiography is not nearly as full as one might expect, since Darwin skips over his voyage on the Beagle (he had already written an excellent book about it) and since the second half of his life was extremely uneventful. For Darwin developed a mysterious ailment that kept him mostly house-bound, so much so that he did not even go to his father’s funeral. The explanation eluded doctors in his time and has resisted firm diagnosis ever since. But the consensus seems to be that it was at least in part psychological. It did give Darwin a convenient excuse to avoid society and focus on his work.

The final portrait which emerges is that of a scrupulous, methodical, honest, plainspoken, diffident, and level-headed fellow. It is easy to imagine him as a retiring uncle or a reserved high school teacher. That such a man, through a combination of genius and circumstance—and do not forget that he almost did not go on that famous voyage—could scandalize the public and make a fundamental contribution to our picture of the universe, is perhaps the greatest argument that ever was against the eccentric genius trope.

View all my reviews

Review: The Structure of Scientific Revolutions

The Structure of Scientific Revolutions by Thomas S. Kuhn

My rating: 5 of 5 stars

Observation and experience can and must drastically restrict the range of admissible scientific belief, else there would be no science. But they cannot alone determine a particular body of such belief. An apparently arbitrary element, compounded of personal and historical accident, is always a formative ingredient of the beliefs espoused by a given scientific community at a given time.

This is one of those wonderfully rich classics, touching on many disparate fields and putting forward ideas that have become permanent fixtures of our mental furniture. Kuhn synthesizes insights from history, sociology, psychology, and philosophy into a novel conception of science—one which, despite seemingly nobody agreeing with it, has become remarkably influential. Indeed, this book made such an impact that the contemporary reader may have difficulty seeing why it was so controversial in the first place.

Kuhn’s fundamental conception is of the paradigm. A paradigm is a research program that defines a discipline, perhaps briefly, perhaps for centuries. This is not only a dominant theory, but a set of experimental methodologies, ontological commitments, and shared assumptions about standards of evidence and explanation. These paradigms usually trace their existence to a breakthrough work, such as Newton’s Principia or Lavoisier’s Elements; and they persist until the research program is thrown into crisis through stubborn anomalies (phenomena that cannot be accounted for within the theory). At this point a new paradigm may arise and replace the old one, such as the switch from Newton’s to Einstein’s system.

Though Kuhn is often spoken of as responding to Popper, I believe his book is really aimed at undermining the old positivistic conception of science: where science consists of a body of verified statements, and discoveries and innovations cause this body of statements to gradually grow. What this view leaves out is the interconnection and interdependence between these beliefs, and the reciprocal relationship between theory and observation. Our background orients our vision, telling us where to look and what to look for; and we naturally do our best to integrate a new phenomenon into our preexisting web of beliefs. Thus we may extend, refine, and elaborate our vision of the world without undermining any of our fundamental theories. This is what Kuhn describes as “normal science.”

During a period of “normal science” it may be true that scientific knowledge gradually accumulates. But when the dominant paradigm reaches a crisis, and the community finds itself unable to accommodate certain persistent observations, a new paradigm may take over. This cannot be described as a mere quantitative increase in knowledge, but is a qualitative shift in vision. New terms are introduced, older ones redefined; previous discoveries are reinterpreted and given a new meaning; and in general the web of connections between facts and theories is expanded and rearranged. This is Kuhn’s famous “paradigm shift.” And since the new paradigm so reorients our vision, it will be impossible to directly compare it with the older one; it will be as if practitioners from the two paradigms speak different languages or inhabit different worlds.

This scandalized some, and delighted others, and for the same reason: that Kuhn seemed to be arguing that scientific knowledge is socially solipsistic. That is to say that scientific “truth” was only true because it was given credence by the scientific community. Thus no paradigm can be said to be objectively “better” than another, and science cannot be said to really “advance.” Science was reduced to a series of fashionable ideas.

Scientists were understandably peeved by the notion, and social scientists concomitantly delighted, since it meant their discipline was at the crux of scientific knowledge. But Kuhn repeatedly denied being a relativist, and I think the text bears him out. It must be said, however, that Kuhn does not guard against this relativistic interpretation of his work as much as, in retrospect, he should have. I believe this was because Kuhn’s primary aim was to undermine the positivistic, gradualist account of science—which was fairly universally held in the past—and not to replace it with a fully worked-out theory of scientific progress himself. (And this is ironic since Kuhn himself argues that an old paradigm is never abandoned until a new paradigm takes its place.)

Though Kuhn does say a good deal about this, I think he could have emphasized more strongly the ways that paradigms contribute positively to reliable scientific knowledge. For we simply cannot look on the world as neutral observers; and even if we could, we would not be any the wiser for it. The very process of learning involves limiting possibilities. This is literally what happens to our brains as we grow up: the confused mass of neural connections is pruned, leaving only the ones which have proven useful in our environment. If our brains did not quickly and efficiently analyze environmental stimuli into familiar categories, we could hardly survive a day. The world would be a swirling, jumbled chaos.

Reducing ambiguities is so important to our survival that I think one of the primary functions of human culture is to further eliminate possibilities. For humans, being born with considerable behavioral flexibility, must learn to become inflexible, so to speak, in order to live effectively in a group. All communication presupposes a large degree of agreement within members of a community; and since we are born lacking this, we must be taught fairly rigid sets of assumptions in order to create the necessary accord. In science this process is performed in a much more formalized way, but nevertheless its end is the same: to allow communication and cooperation via a shared language and a shared view of the world.

Yet this is no argument for epistemological relativism, any more than the existence of incompatible moral systems is an argument for moral relativism. While people commonly call themselves cultural relativists when it comes to morals, few people are really willing to argue that, say, unprovoked violence is morally praiseworthy in certain situations. What people mean by calling themselves relativists is that they are pluralists: they acknowledge that incompatible social arrangements can nevertheless be equally ethical. Whether a society has private property or holds everything in common, whether it is monogamous or polygamous, whether burping is considered polite or rude—these may vary, and yet create coherent, mutually incompatible, ethical systems. Furthermore, acknowledging the possibility of equally valid ethical systems also does not rule out the possibility of moral progress, as any given ethical system may contain flaws (such as refusing to respect certain categories of people) that can be corrected over time.

I believe that Kuhn would argue that scientific cultures may be thought of in the same pluralistic way: paradigms can be improved, and incompatible paradigms can nevertheless both have some validity. Acknowledging this does not force one to abandon the concept of “knowledge,” any more than acknowledging cultural differences in etiquette forces one to abandon the concept of “politeness.”

Thus accepting Kuhn’s position does not force one to embrace epistemological relativism—or, at least not the strong variety, which reduces knowledge merely to widespread belief. I would go further, and argue that Kuhn’s account of science—or at least elements of his account—can be made to articulate even with the system of his reputed nemesis, Karl Popper. For both conceptions have the scientist beginning, not with observations and facts, but with certain arbitrary assumptions and expectations. This may sound unpromising; but these assumptions and expectations, by orienting our vision, allow us to realize when we are mistaken, and to revise our theories. The Baconian inductivist or the logical positivist, by beginning with a raw mass of data, has little idea how to make sense of it and thus no basis upon which to judge whether an observation is anomalous or not.

This is not where the resemblance ends. According to both Kuhn and Popper (though the former is describing while the latter is prescribing), when we are revising our theories we should, if possible, modify or discard the least fundamental part while leaving the underlying paradigm unchanged. This is Kuhn’s “normal science.” So when irregularities were observed in Uranus’ orbit, scientists could have discarded either Newton’s theories (fundamental to the discipline) or the assumption that Uranus was the furthest planet in the solar system (a superficial fact); obviously the latter was preferable, and this led to the discovery of Neptune. Science could not survive if scientists too willingly overturned the discoveries and theories of their discipline. A certain amount of stubbornness is a virtue in learning.

Obviously, the two thinkers also disagree about much. One issue is whether two paradigms can be directly compared or definitively tested. Popper envisions conclusive experiments whose outcome can unambiguously decide whether one paradigm or another is to be preferred. There are some difficulties to this view, however, which Kuhn points out. One is that different paradigms may attach very different importance to certain phenomena. Thus for Galileo (to use Kuhn’s example) a pendulum is a prime exemplar of motion, while to an Aristotelian a pendulum is a highly complex secondary phenomenon, unfit to demonstrate the fundamental properties of motion. Another difficulty in comparing theories is that terms may be defined differently. Einstein said that massive objects bend space, but Newtonian space is not a thing at all and so cannot be bent.

Granting the difficulties of comparing different paradigms, I nevertheless think that Kuhn is mistaken in his insistence that they are as separate as two languages. I believe his argument rests, in part, on his conceiving of a paradigm as beginning with definitions of fundamental terms (such as “space” or “time”) which are circular (such as “time is that which is measured by clocks,” etc.); so that comparing two paradigms would be like comparing Euclidean and non-Euclidean geometry to see which is more “true,” though both are equally true to their own axioms (while mutually incompatible). Yet such terms in science do not merely define, but denote phenomena in our experience. Thus (to continue the example) while Euclidean and non-Euclidean geometries may both be equally valid according to their premises, they may not be equally valid according to how they describe our experience.

Kuhn’s response to this would be, I believe, that we cannot have neutral experiences, but all our observations are already theory-laden. While this is true, it is also true that theory does not totally determine our vision; and clever experimenters can often, I believe, devise tests that can differentiate between paradigms to most practitioners’ satisfaction. Nevertheless, as both Kuhn and Popper would admit, the decision to abandon one theory for another can never be a wholly rational affair, since there is no way of telling whether the old paradigm could, with sufficient ingenuity, be made to accommodate the anomalous data; and in any case a strange phenomenon can always be tabled as a perplexing but unimportant deviation for future researchers to tackle. This is how an Aristotelian would view Galileo’s pendulum, I believe.

Yet this fact—that there can be no objective, fool-proof criteria for switching paradigms—is no reason to despair. We are not prophets; every decision we take involves risk that it will not pan out; and in this respect science is no different. What makes science special is not that it is purely rational or wholly objective, but that our guesses are systematically checked against our experience and debated within a community of dedicated inquirers. All knowledge contains an imaginative and thus an arbitrary element; but this does not mean that anything goes. To use a comparison, a painter working on a portrait will have to make innumerable little decisions during her work; and yet—provided the painter is working within a tradition that values literal realism—her work will be judged, not for the taste displayed, but for the perceived accuracy. Just so, science is not different from other cultural realms in lacking arbitrary elements, but in the shared values that determine how the final result is judged.

I think that Kuhn would assent to this; and I think it was only the widespread belief that science was as objective, asocial, and unimaginative as a camera taking a photograph that led him to emphasize the social and arbitrary aspects of science so strongly. This is why, contrary to his expectations, so many people read his work as advocating total relativism.

It should be said, however, that Kuhn’s position does alter how we normally think of “truth.” In this I also find him strikingly close to his reputed nemesis, Popper. For here is the Austrian philosopher on the quest for truth:

Science never pursues the illusory aim of making its answers final, or even probable. Its advance is, rather, towards the infinite yet attainable aim of ever discovering new, deeper, and more general problems, and of subjecting its ever tentative answers to ever renewed and ever more rigorous tests.

And here is what his American counterpart has to say:

Later scientific theories are better than earlier ones for solving puzzles in the often quite different environments to which they are applied. That is not a relativist’s position, and it displays the sense in which I am a convinced believer in scientific progress.

Here is another juxtaposition. Popper says:

Science is not a system of certain, or well-established, statements; nor is it a system which steadily advances towards a state of finality. Our science is not knowledge (episteme): it can never claim to have attained truth, or even a substitute for it, such as probability. … We do not know: we can only guess. And our guesses are guided by the unscientific, the metaphysical (though biologically explicable) faith in laws, in regularities which we can uncover—discover.

And Kuhn:

One often hears that successive theories grow ever closer to, or approximate more and more closely to, the truth… Perhaps there is some other way of salvaging the notion of ‘truth’ for application to whole theories, but this one will not do. There is, I think, no theory-independent way to reconstruct phrases like ‘really there’; the notion of a match between the ontology of a theory and its ‘real’ counterpart in nature now seems to me illusive in principle.

Though there are important differences, to me it is striking how similar their accounts of scientific progress are: the ever-increasing expansion of problems, or puzzles, that the scientist may investigate. And both thinkers are careful to point out that this expansion cannot be understood as an approach towards an ultimate “true” explanation of everything, and I think their reasons for saying so are related. For since Popper begins with theories, and Kuhn with paradigms—both of which stem from the imagination of scientists—their accounts of knowledge can never be wholly “objective,” but must contain the aforementioned arbitrary element. This necessarily leaves open the possibility that an incompatible theory may yet do an equal or better job in making sense of an observation, or that a heretofore undiscovered phenomenon may violate the theory. And this being so, we can never say that we have reached an “ultimate” explanation, where our theory can be taken as a perfect mirror of reality.

I do not think this notion jeopardizes the scientific enterprise. To the contrary, I think that science is distinguished from older, metaphysical sorts of enquiry in that it is always open-ended, and makes no claim to possessing absolute “truth.” It is this very corrigibility of science that is its strength.

This review has already gone on for far too long, and much of it has been spent riding my own hobby-horse without evaluating the book. Yet I think it is a testament to Kuhn’s work that it is still so rich and suggestive, even after many of its insights have been absorbed into the culture. Though I have tried to defend Kuhn from accusations of relativism or undermining science, anyone must admit that this book has many flaws. One is Kuhn’s firm line between “normal” science and paradigm shifts. In his model, the first consists of mere puzzle-solving while the second involves a radical break with the past. But I think experience does not bear out this hard dichotomy; discoveries and innovations may be revolutionary to different degrees, which I think undermines Kuhn’s picture of science evolving as a punctuated equilibrium.

Another weakness of Kuhn’s work is that it does not do justice to the way that empirical discoveries may cause unanticipated theoretical revolutions. In his model, major theoretical innovations are the products of brilliant practitioners who see the field in a new way. But this does not accurately describe what happened when, say, DNA was discovered. Watson and Crick worked within the known chemical paradigm, and operated like proper Popperians in brainstorming and eliminating possibilities based on the evidence. And yet the discovery of DNA’s double helix, while not overturning any major theoretical paradigms, nevertheless had such far-reaching implications that it caused a revolution in the field. Kuhn has little to say about events like this, which shows that his model is overly simplistic.

I must end here, after thrashing about ineffectually in multiple disciplines in which I am not even the rankest amateur. What I hoped to recapture in this review was the intellectual excitement I felt while reading this little volume. In somewhat dry (though not technical) academic prose, Kuhn caused a revolution that is still forceful enough to make me dizzy.

View all my reviews

Review: The Logic of Scientific Discovery

The Logic of Scientific Discovery by Karl R. Popper

My rating: 4 of 5 stars

We do not know: we can only guess.

Karl Popper originally wrote Logik der Forschung (The Logic of Research) in 1934. This original version—published in haste to secure an academic position and escape the threat of Nazism (Popper was of Jewish descent)—was heavily condensed at the publisher’s request; and because of this, and because it remained untranslated from the German, the book did not receive the attention it deserved. This had to wait until 1959, when Popper finally released a revised and expanded English translation. Yet this condensation and subsequent expansion have left their mark on the book. Popper makes his most famous point within the first few dozen pages; and much of the rest of the book is given over to dead controversies, criticisms and rejoinders, technical appendices, and extended footnotes. It does not make for the most graceful reading experience.

This hardly matters, however, since it is here that Popper put forward what has arguably become the most famous concept in the philosophy of science: falsification.

This term is widely used; but its original justification is not, I believe, widely understood. Popper’s doctrine must be understood as a response to inductivism. Now, in 1620 Francis Bacon released his brilliant Novum Organum. Its title alludes to Aristotle’s Organon, a collection of logical treatises, mainly focusing on how to make accurate deductions. This Aristotelian method—built around the syllogism: deriving conclusions from given premises—dominated the study of nature for millennia, with precious little to show for it. Francis Bacon hoped to change all that with his new doctrine of induction. Instead of beginning with premises (‘All men are mortal’), and reasoning to conclusions (‘Socrates is mortal’), the investigator must begin with experiences (‘Socrates died,’ ‘Plato died,’ etc.) and then generalize a conclusion (‘All men are mortal’). This was how science was to proceed: from the specific to the general.

This seemed all fine and dandy until, in 1739, David Hume published his Treatise of Human Nature, in which he explained his infamous ‘problem of induction.’ Here is the idea. If you see one, two, three… a dozen… a thousand… a million white swans, and not a single black one, it is still illogical to conclude “All swans are white.” Even if you investigated every swan in the world but one, and they all proved white, you still could not conclude with certainty that the last one would be white. Aside from modus tollens (concluding from a negative specific to a negative general), there is no logically justifiable way to proceed from the specific to the general. To this argument, many are tempted to respond: “But we know from experience that induction works. We generalize all the time.” Yet this is to use induction to prove that induction works, which is circular. Hume’s problem of induction has proven to be a stumbling block for philosophers ever since.

In the early part of the 20th century, the doctrine of logical positivism arose in the philosophical world, particularly in the ‘Vienna Circle’. This had many proponents and many forms, but the basic idea, as explained by A.J. Ayer, is the following. The meaning of a sentence is equivalent to its verification; and verification is performed through experience. Thus the sentence “The cat is on the mat” can be verified by looking at the mat; it is a meaningful utterance. But the sentence “The world is composed of mind” cannot be verified by any experience; it is meaningless. Using this doctrine the positivists hoped to eliminate all metaphysics. Unfortunately, however, the doctrine also eliminates human knowledge, since, as Hume showed, generalizations can never be verified. No experience corresponds, for example, to the statement: “The force of gravity is proportional to the product of the masses and the inverse square of the distance between them,” since this is an unlimitedly general statement, and experiences are always particular.

Karl Popper’s falsificationism is meant to solve this problem. First, it is important to note that Popper is not, like the positivists, proposing a criterion of ‘meaning’. That is to say that, for Popper, unfalsifiable statements can still be meaningful; they just do not tell us anything about the world. Indeed, he continually notes how metaphysical ideas (such as Kepler’s idea that circles are more ‘perfect’ than other shapes) have inspired and guided scientists. This is itself an important distinction because it prevents him from falling into the same paradox as the positivists. For if only the statements with empirical content have meaning, then the statement “only the statements with empirical content have meaning” is itself meaningless. Popper, for his part, regarded himself as the enemy of linguistic philosophy and considered the problem of epistemology quite distinct from language analysis.

To return to falsification, Popper’s fundamental insight is that verification and falsification are not symmetrical. While no general statement can be proved using a specific instance, a general statement can indeed be disproved with a specific instance. A thousand white swans does not prove all swans are white; but one black swan disproves it. (This is the aforementioned modus tollens.) All this may seem trivial; but as Popper realized, this changes the nature of scientific knowledge as we know it. For science, then, is far from what Bacon imagined it to be—a carefully sifted catalogue of experiences, a collection of well-founded generalizations—and is rather a collection of theories which spring up, as it were, from the imagination of the scientist in the hopes of uniting several observed phenomena under one hypothesis. Or to put it more bluntly: a good scientific theory is a guess that does not prove wrong.
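
To put the asymmetry in schematic form (my own notation, not Popper’s): let T be a universal theory and O an observation it predicts. Then the “verifying” inference commits the fallacy of affirming the consequent, while the falsifying inference is an ordinary modus tollens:

```latex
% Schematic of the verification/falsification asymmetry (my notation, not Popper's).
% Left: "verification" -- invalid (affirming the consequent).
% Right: falsification -- valid (modus tollens).
\[
\frac{T \rightarrow O, \quad O}{\therefore\; T}\ \text{(invalid)}
\qquad\qquad
\frac{T \rightarrow O, \quad \neg O}{\therefore\; \neg T}\ \text{(valid)}
\]
```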

With his central doctrine established, Popper goes on to the technicalities. He discusses what composes the ‘range’ or ‘scope’ of a theory, and how some theories can be said to encompass others. He provides an admirable justification for Occam’s Razor—the preference for simpler over more complex explanations—since theories with fewer parameters are more easily falsified and thus, in his view, more informative. The biggest section is given over to probability. I admit that I had some difficulty following his argument at times, but the gist of his point is that probability must be interpreted ‘objectively,’ as frequency distributions, rather than ‘subjectively,’ as degrees of certainty, in order to be falsifiable; and also that the statistical results of experiments must be reproducible in order to avoid the possibility of statistical flukes.
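
A toy illustration of the point about simplicity may help (my own example, not Popper’s): a model with as many free parameters as data points can accommodate any observations whatsoever, and so forbids nothing, while a two-parameter line makes a claim the data could actually have refuted. The sketch below assumes NumPy and uses made-up data.

```python
import numpy as np

# A toy sketch (my example, not Popper's) of why fewer parameters mean easier
# falsification: a quartic through five points can reproduce ANY five points
# exactly and so forbids nothing, while the straight line leaves residuals
# that could have refuted it.
rng = np.random.default_rng(0)
x = np.arange(5.0)
y = rng.normal(size=5)                 # arbitrary made-up "observations"

line = np.polyfit(x, y, deg=1)         # 2 free parameters
quartic = np.polyfit(x, y, deg=4)      # 5 free parameters (one per data point)

print("line residuals:   ", np.round(y - np.polyval(line, x), 3))
print("quartic residuals:", np.round(y - np.polyval(quartic, x), 3))  # ~zero
```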

All this leads up to a strangely combative section on quantum mechanics. Popper apparently was in the same camp as Einstein, and was put off by Heisenberg’s uncertainty principle. Like Einstein, Popper was a realist and did not like the idea that a particle’s properties could be actually undetermined; he wanted to see the uncertainty of quantum mechanics as a byproduct of measurement or of ‘hidden variables’—not as representing something real about the universe. And like Einstein (though less famously) Popper proposed an experiment to decide the issue. The original experiment, as described in this book, was soon shown to be flawed; but a revised experiment was finally conducted in 1999, after Popper’s death. Though the experiment agreed with Popper’s prediction (showing that measuring an entangled photon does not affect its pair), it had no bearing on Heisenberg’s uncertainty principle, which restricts arbitrarily precise measurements on a single particle, not a pair of particles.

Incidentally, it is difficult to see why Popper is so uncomfortable with the uncertainty principle. Given his own dogma of falsifiability, the belief that nature is inherently deterministic (and that probabilistic theories are simply the result of a lack of our own knowledge) should be discarded as metaphysical. This is just one example of how Popper’s personality was out of harmony with his own doctrines. An advocate of the open society, he was famously authoritarian in his private life, which led to his own alienation. This is neither here nor there, but it is an interesting comment on the human animal.

Popper’s doctrine, like all great ideas, has proven both influential and controversial. For my part I think falsification a huge advance over Bacon’s induction or the positivists’ verification. And despite the complications, I think that falsifiability is a crucial test to distinguish, not only science from pseudo-science, but all dependable knowledge from myth. For both pseudo-science and myth generally distinguish themselves by admirably fitting the data set, but resisting falsification. Freud’s theories, for example, can accommodate themselves to any set of facts we throw at them; likewise for intelligent design, belief in supernatural beings, or conspiracy theories. All of these seem to explain everything—and in a way they do, since they fit the observable data—but really explain nothing, since they can accommodate any new observation.

There are some difficulties with falsification, of course. The first is observation. For what we observe, or even what we count as an ‘observation’, is colored by our background beliefs. Whether to regard a dot in the sky as a plane, a UFO, or an angel is shaped by the beliefs we already hold; thus it is possible to disregard observations that run counter to our theories, rather than falsifying the theories. What is more, theories never exist in isolation, but in an entire context of beliefs; so if one prediction is definitively falsified, it can still be unclear what we must change in our interconnected edifice of theories. Further, it is rare for experimental predictions to agree exactly with results; usually they are approximately correct. But where do we draw the line between falsification and approximate correctness? And last, if we formulate a theory which withstands test after test, predicting their results with extreme accuracy time and again, must we still regard the theory as a provisional guess?

To give Popper credit, he responds to all of these points in this work, though perhaps not with enough discussion. But none of these criticisms changes the fact that so much of the philosophy of science written after Popper has taken his work as a starting point, either attempting to amplify, modify, or (dare I say it?) falsify his claims. For my part, though I was often bored by the dry style and baffled by the technical explanations, I found myself admiring Popper’s careful methodology: responding to criticisms, making fine distinctions, building up his system piece by piece. Here is a philosopher deeply committed to the ideal of rational argument and deeply engaged with understanding the world. I am excited to read more.

View all my reviews

Review: The Beautiful Brain

Beautiful Brain: The Drawings of Santiago Ramon y Cajal by Larry W. Swanson

My rating: 4 of 5 stars

Like the entomologist in pursuit of brightly colored butterflies, my attention hunted, in the flower garden of the gray matter, cells with delicate and elegant forms, the mysterious butterflies of the soul, the beating of whose wings may someday—who knows?—clarify the secret of mental life.

I love walking around cathedrals because they are sublime examples of vital art. I say “vital” because the art is not just seen, but lived through. Every inch of a cathedral has at least two levels of significance: aesthetic and theological. Beauty, in other words, walks hand in hand with a certain view of the world. Indeed, beauty is an essential part of this view of the world, and thus facts and feelings are blended together into one seamlessly intelligible whole: a philosophy made manifest in stone.

The situation that pertains today is quite different. It is not that our present view of the world is inherently less beautiful; but that the vital link between the visual arts and our view of the world has been severed. Apropos of this, I often think of one of Richard Feynman’s anecdotes. He once gave a tour of a factory to a group of artists, trying to explain modern technology to them. The artists, in turn, were supposed to incorporate what they learned into a piece for an exhibition. But, as Feynman notes, almost none of the pieces really had anything to do with the technology. Art and science had tried to make contact, and failed.

This is why I am so intrigued by the anatomical drawings of Santiago Ramón y Cajal. For here we see a successful unification, revealing the same duality of significance as in a cathedral: his drawings instruct and enchant at once.

Though relatively obscure in the anglophone world, Cajal is certainly one of the most important scientists of history. He is justly considered to be the father of neuroscience. Cajal’s research into the fine structures of the brain laid the foundation for the discipline. At a time when neurons were only a hypothesis, Cajal not only convinced the scientific world of their existence (as against the reticular theory), but documented several different types of neurons, describing their fine structure—nucleus, axon, and dendrites—and the flow of information within and between nerve cells.

As we can see in his Advice to a Young Investigator, Cajal in his adulthood became a passionate advocate for scientific research. But he did not always wish to be a scientist. As a child he was far more interested in painting; it was only the pressure of his father, a doctor, which turned him in the direction of research. And as this book shows, he never really gave up his artistic ambition; he only channelled it into another direction.

Research in Cajal’s day was far simpler. Instead of a team of scientists working with a high-powered MRI, we have the lonely investigator hunched over a microscope. The task was no easier for being simpler, however. Besides patience, ingenuity, and a logical mind—the traits of any good scientist—a microanatomist back then needed a prodigious visual acumen. The task was to see properly: to extract a sensible figure from the blurry and chaotic images under the microscope. To meet this challenge Cajal not only had to create new methods—staining the neurons to make them more visible—but also to train his eye. And in both he proved a master.

He would often spend hours at the microscope, looking and looking without taking any notes. It was not only his analytic mind that was at work during these periods, making guesses about cell functions and deductions about information flow, but also his visual imagination: he had to hold the cell’s form within his mind, to see the cells in context and in isolation, since the fine details of their structure were highly suggestive of their behavior and purpose. His drawings were the final expression of his visual process: “A graphic representation of the object observed guarantees the exactness of the observation itself.” For Cajal, as for Leonardo da Vinci, drawing was a form of thinking.

Though by now long outdated by subsequent research, Cajal’s drawings have maintained their appeal, both as diagrams and as works of art. With the aid of a short caption—ably provided by Eric Newman in this volume—the drawings spring to life as records of scientific research. They summarize complex processes, structures, and relations with brilliant clarity, making the essential point graspable in an instant.

Purely as drawings they are no less brilliant. The twisting and sprawling forms of neurons; the chaotic lattices of interconnected cells; the elegant architecture of our sensory organs—all this possesses an otherworldly beauty. The brain, such an intimate part of ourselves, is revealed to be intensely alien. One is naturally reminded of the surrealists by these dreamlike landscapes; and indeed Lorca and Dalí were both aware of Cajal’s work. Yet Cajal’s drawings are perhaps more fantastic than anything the surrealists ever produced, all the more bizarre for being true.

Even the names of these drawings wouldn’t be out of place in a modern gallery: “Cuneate nucleus of a kitten,” “Neurons in the midbrain of a sixteen-day-old trout,” “Axons in the Purkinje neurons in the cerebellum of a drowned man.” Science can be arrestingly poetic.

One of the functions of art is to help us to understand ourselves. The science of the brain, in a much different way, aims to do the same thing. It seems wholly right, then, that these two enterprises should unite in Cajal, the artistic investigator of our nervous system. And this volume is an ideal place to witness his accomplishment. The large, glossy images are beautiful. The commentary frames and explains, but does not distract. The essays on Cajal’s life and art are concise and incisive, and are supplemented by an essay on modern brain imaging that brings the book up to date. It is a cathedral of a book.

View all my reviews

Review: Advice to a Young Investigator

Reglas y consejos sobre investigación científica. Los tónicos de la voluntad. by Santiago Ramón y Cajal

My rating: 4 of 5 stars

Books, like people, we respect and admire for their good qualities, but we only love them for some of their defects.

Santiago Ramón y Cajal has a fair claim to being the greatest scientist to hail from Spain. I have heard him called the “Darwin of Neuroscience”: his research and discoveries are foundational to our knowledge of the brain. When he won the Nobel Prize in 1906 it was for his work using nerve stains to differentiate neurons. At the time, you see, the existence of nerve cells was still highly controversial; Camillo Golgi, with whom Ramón y Cajal shared the Nobel, was a supporter of the reticular theory, which held that the nervous system was one continuous object.

Aside from being an excellent scientist, Ramón y Cajal was also a man of letters and a passionate teacher. These three aptitudes combined to produce this charming book. Its prosaic title is normally translated into English—inaccurately but more appealingly—as Advice to a Young Investigator. The book originated as lectures delivered at the Real Academia de Ciencias Exactas, Físicas y Naturales in 1897 and published the next year by his colleague; it consists of warm and frank advice to students embarking on a scientific career.

Ramón y Cajal is wonderfully optimistic when it comes to the scientific enterprise. Like the philosopher Susan Haack, he thinks that science follows no special logic or method, but is only based on sharpened common sense. Thus one need not be a genius to make a valuable contribution. Indeed, for him, intelligence is much overrated. Focus, dedication, and perseverance are what really separate the successes from the failures. He goes on to diagnose several infirmities of the will that prevent young and promising students from accomplishing anything in the scientific field. Among these are megalófilos, a type exemplified in the character Casaubon in Middlemarch, who cannot finish taking notes and doing research in time to actually write his book.

While much of Ramón y Cajal’s advice is timeless, this book is also very much of a time and a place. He advises his young students to buy their own equipment and to work at home—something that would be impractical today, not least because laboratory equipment has grown so much in complexity and expense. He even advises his students on finding the right wife (over-cultured women are to be avoided). More seriously, these lectures are marked by the crisis of 1898, when Spain lost the Spanish-American War and the feeling of cultural degeneration was widespread. Ramón y Cajal is painfully aware that Spain lagged behind the other Western countries in scientific research, and much of these lectures is aimed at alleviating specifically Spanish shortcomings.

On every one of these pages, Ramón y Cajal’s fierce dedication to the scientific enterprise, his conviction that science is noble, useful, and necessary, and his desire to see the spirit of inquiry spread far and wide are expressed with a pungent wit that cannot fail to infect the reader with the same zeal to expand the bounds of human knowledge, and with admiration for such an exemplary scientist.

View all my reviews

Review: Opticks

Opticks by Isaac Newton

My rating: 4 of 5 stars

My Design in this Book is not to explain the Properties of Light by Hypotheses, but to propose and prove them by Reason and Experiment

I’ve long wanted to read Newton’s Principia, but its reputation intimidates me. Everyone seems to agree that it is intensely difficult, and I’m sorry to say I haven’t worked up enough nerve to face it yet. But I did still want to read Newton; so as soon as I learned about this book, Newton’s more popular and accessible volume, I snatched it up and happily dug in.

The majority of this text is given over to descriptions of experiments. To the modern reader—and I suspect to the historical reader as well—these sections are remarkably dry. In simple yet exact language, Newton painstakingly describes the setup and results of experiment after experiment, most of them conducted in his darkened chamber, with the window covered up except for a small opening to let in the sunlight. Yet even if this doesn’t make for a thrilling read, it is impossible not to be astounded at the depth of care, the keenness of observation, and the subtle brilliance Newton displays. Using the most basic equipment (his most advanced tool is the prism), Newton tweezes light apart, making an enormous contribution both to experimental science and to the field of optics.

At the time, the discovery that white light could be decomposed into a rainbow of colors, and that this rainbow could be recombined back into white light, must have seemed as momentous as the discovery of the Higgs boson. And indeed, even the modern reader might catch a glimpse of this excitement as she watches Newton carefully set up his prism in front of his beam of light, tweaking every variable, adjusting every parameter, measuring everything that could be measured, and describing in elegant prose everything that couldn’t.

Whence it follows, that the colorifick Dispositions of Rays are also connate with them, and immutable; and by consequence, that all the Productions and Appearances of Colours in the World are derived, not from any physical Change caused in Light by Refraction or Reflexion, but only from the various Mixtures or Separations of Rays, by virtue of their different Refrangibility or Reflexibility. And in this respect the Science of Colours becomes a Speculation as truly mathematical as any other part of Opticks.

Because I had recently read Feynman’s QED, one thing in particular caught my attention. Here’s the problem: When you have one surface of glass, even if most of the light passes through it, some of the light is reflected; and you can roughly gauge what portion of light does one or the other. Let’s say on a typical surface of glass, 4% of light is reflected. Now we add another surface of glass behind the first. According to common sense, 8% of the light should be reflected, right? Wrong. Now the amount of light which is reflected varies between 0% and 16%, depending on the distance between the two surfaces. This is truly bizarre; for it seems that the mere presence of a second surface of glass alters the reflectiveness of the first. But how does the light “know” there is a second surface of glass? It seems the light somehow is affected before it comes into contact with either surface.
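
For the curious, here is a minimal sketch of the amplitude-addition picture Feynman uses in QED to describe this effect (the 4% figure, the wavelength, and the gaps are toy numbers of my own; this is not Newton’s treatment): each surface contributes a small reflection amplitude, the two contributions interfere with a phase set by the gap, and the total swings between 0% and 16%.

```python
import numpy as np

# A minimal sketch of the amplitude-addition picture from Feynman's QED
# (toy numbers, not Newton's treatment). Each glass surface reflects with a
# small amplitude r; the two reflected amplitudes interfere with a phase that
# depends on the gap between the surfaces.
r = 0.2                          # single-surface amplitude: r**2 = 4% reflected
wavelength = 500e-9              # assumed wavelength of the light, in metres

for gap in np.linspace(0, 250e-9, 6):      # assumed gaps between the surfaces
    delta = 4 * np.pi * gap / wavelength   # round-trip phase difference
    # |r - r*exp(i*delta)|**2 = 4*r**2*sin(delta/2)**2: swings from 0% to 16%
    reflected = 4 * r**2 * np.sin(delta / 2) ** 2
    print(f"gap {gap * 1e9:6.1f} nm -> reflected fraction {reflected:.3f}")
```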

Well, Newton was aware of this awkward problem. He famously comes up with his theory of “fits of easy reflection or transmission” to explain this phenomenon. But this “theory” was merely to say that the glass, for some unknown reason, sometimes lets light through, and sometimes reflects it. In other words, it was hardly a theory at all.

Every Ray of Light in its passage through any refracting Surface is put into a certain transient Constitution or State, which in the progress of the Ray returns at equal Intervals, and disposes the Ray at every return to be easily transmitted through the next refracting Surface, and between the returns to be easily reflected by it.

Also fascinating to the modern reader is the strange dual conception of light as waves and as particles in this work, which can’t help but remind us of the quantum view. The wave theory makes it easy to account for the different refrangibility of the different colors of light (i.e. the different colors refract at different angles in a prism).

Do not several sorts of Rays make Vibrations of several bignesses, which according to their bignesses excite Sensations of several Colours, much after the manner that the Vibrations of the Air, according to their several bignesses excite Sensations of several sounds. And particularly do not the most refrangible Rays excite the shortest Vibrations for making a Sensation of deep violet, the least refrangible the largest for making a Sensation of deep red, and the several intermediate bignesses to make Sensations of the several intermediate Colours?

To this notion of vibrations, Newton adds the “corpuscular” theory of light, which held (in opposition to his contemporary, Christiaan Huygens) that light was composed of small particles. This theory must have been attractive to Newton because it fit into his previous work in physics. It explained why beams of light, like other solid bodies, travel in straight lines (cf. Newton’s first law), and reflect off surfaces at angles equal to their angles of incidence (cf. Newton’s third law).

Are not the Rays of Light very small Bodies emitted from shining Substances? For such Bodies will pass through uniform Mediums in right Lines without bending into the shadow, which is the Nature of the Rays of Light. They will also be capable of several Properties, and be able to conserve their Properties unchanged in passing through several Mediums, which is another condition of the Rays of Light.

As a side note, despite some problems with the corpuscular theory of light, it came to be accepted for a long while, until the phenomenon of interference gave seemingly decisive weight to the wave theory. (Light, like water waves, will interfere with itself, creating characteristic patterns; cf. the famous double-slit experiment.) The wave theory was reinforced with Maxwell’s equations, which treated light as just another electro-magnetic wave. It was, in fact, Einstein who brought back the viability of the corpuscular theory, when he suggested the idea that light might come in packets to explain the photoelectric effect. (Blue light, when shined on certain metals, will cause an electric current, while red light won’t. Why not?)

All this tinkering with light is good fun, especially if you’re a physicist (which I’m not). But the real treat, at least for the layreader, comes at the final section, where Newton speculates on many of the unsolved scientific problems of his day. His mind is roving and vast; and even if most of his speculations have turned out incorrect, it’s stunning to simply witness him at work. For example, Newton realizes that radiation can travel without a medium (like air), and can heat objects even in a vacuum. (And thank goodness for that, for how else would the earth be warmed by the sun?) But from this fact he incorrectly deduces that there must be some more subtle medium that remains (like the famous ether).

If in two large tall cylindrical Vessels of Glass inverted, two little Thermometers be suspended so as not to touch the Vessels, and the Air be drawn out of one of these Vessels, and these Vessels thus prepared be carried out of a cold place into a warm one; the Thermometer in vacuo will grow warm as much, and almost as soon as the Thermometer that is not in vacuo. And when the Vessels are carried back into the cold place, the Thermometer in vacuo will grow cold almost as soon as the other Thermometer. Is not the Heat of the warm Room convey’d through the Vacuum by the Vibrations of a much subtiler Medium than Air, which after the Air was drawn out remained in the Vacuum?

Yet for all Newton’s perspicacity, the most touching section was a list of questions Newton asks, as if to himself, that he cannot hope to answer. It seems that even the most brilliant among us are stunned into silence by the vast mystery of the cosmos:

What is there in places almost empty of Matter, and whence is it that the Sun and Planets gravitate towards one another, without dense Matter between them? Whence is it that Nature doth nothing in vain; and whence arises all that Order and Beauty which we see in the World? To what end are Comets, and whence is it that Planets move all one and the same way in Orbs concentrick, while Comets move all manner of ways in Orbs very excentrick; and what hinders the fix’d Stars from falling upon one another? How came the Bodies of animals to be contrived with so much Art, and for what ends were their several Parts? Was the Eye contrived without Skill in Opticks, and the Ear without Knowledge of Sounds? How do the Motions of the Body follow from the Will, and whence is the Instinct in Animals?

View all my reviews

Review: Aristotle’s Physics

Physics by Aristotle

My rating: 4 of 5 stars

Of all the ancient thinkers that medieval Christians could have embraced, it always struck me as pretty remarkable that Aristotle was chosen. Of course, ‘chosen’ isn’t the right word; rather, it was something of a historical coincidence, since Aristotle’s works were available in Latin translation, while those of Plato were not.

Nonetheless, Aristotle strikes me as a particularly difficult thinker to build a monotheistic worldview around. There’s simply nothing mystical about him. His feet are planted firmly on the ground, and his eyes are level with the horizon. Whereas mystics see the unity of everything, Aristotle divides up the world into neat parcels, providing lists of definitions and categories wherever he turns. Whereas mystics tend to scorn human knowledge, Aristotle was apparently very optimistic about the potential reach of the human mind—since he so manifestly did his best to know everything.

The only thing that I can find remotely mystical is Aristotle’s love of systems. Aristotle does not like loose ends; he wants his categories to be exhaustive, and his investigations complete. And, like a mystic, Aristotle is very confident about the reach of a priori knowledge, while his investigations of empirical reality—though admittedly impressive—are paltry in comparison with his penchant for logical deduction. At the very least, Aristotle is wont to draw many more conclusions from a limited set of observations than most moderns are comfortable with.

I admit, in the past I’ve had a hard time appreciating his writing. His style was dry; his arguments, perfunctory. I often wondered: What did so many people see in him? His tremendous influence seemed absurd after one read his works. How could he have seemed so convincing for so long?

I know from experience that when I find a respected author ludicrous, the fault is often my own. So, seeking a remedy, I decided that I would read more Aristotle; more specifically, I would read enough Aristotle until I learned to appreciate him. For overexposure can often engender a change of heart; in the words of Stephen Stills, “If you can’t be with the one you love, love the one you’re with.” So I decided I would stick with Aristotle until I loved him. I still don’t love Aristotle, but, after reading this book, I have a much deeper respect for the man. For this book really is remarkable.

As Bertrand Russell pointed out (though it didn’t need a mind as penetrating as Russell’s to do so), hardly a sentence in this book can be accepted as accurate. In fact, from our point of view, Aristotle’s project was doomed from the start. He is investigating physical reality, but is doing so without conducting experiments; in other words, his method is purely deductive, starting from a few assumptions, most of which are wrong. Much of what Aristotle says might even seem silly—such as his dictum that “we always assume the presence in nature of the better.” Another great portion of this work is taken up by thoroughly uninteresting and unconvincing investigations, such as the definitions of ‘together’, ‘apart’, ‘touch’, ‘continuous’, and all of the different types of motions—all of which seem products of a pedantic brain rather than qualities of nature.

But the good in this work far outweighs the bad. For Aristotle commences the first (at least, the first so far as I know) intellectually rigorous investigations of the basic properties of nature—space, time, cause, motion, and the origins of the universe. I find Aristotle’s inquiry into time particularly fascinating, for I am not aware of—at least, I can’t recall—any comparably meticulous investigations of time by the later philosophers I’ve read. Of course, Aristotle’s investigation of ‘time’ can be more properly called an investigation of the human experience of time, but we need not fault Aristotle for not thinking there’s a difference.

I was particularly impressed with Aristotle’s attempt to overcome Zeno’s paradoxes. He defines and re-defines time—struggling with how it can be divided, and with the exact nature of the present moment—and tries many different angles of attack. And what’s even more interesting is that Aristotle fails in his task, and even falls into Zeno’s intellectual trap by unwittingly accepting Zeno’s assumptions.

Aristotle’s attempts to tackle space were almost equally fascinating; for there, we once again see the magnificent mind of Aristotle struggling to define something of the highest degree of abstractness. In fact, I challenge anyone reading this to come up with a good definition of space. It’s hard, right? The paradox (at least, the apparent paradox) is that space has some qualities of matter—extension, volume, dimensions—without having any mass. It seems, at first sight at least, like empty space should be simply nothing, yet space itself has certain definite qualities—and anything that has qualities is, by definition, something. However, these qualities only emerge when one imagines a thing in space, for we never, in our day to day lives, encounter space itself, devoid of all content. But how could something with no mass have the quality of extension?

As is probably obvious by now, I am in no way a physicist—and, for that matter, neither was Aristotle; but his attempt is still interesting.

Aristotle also displays an admirable—though perhaps naïve—tendency to trust experience. His refutation of the thinkers who argue either that (a) everything is always in motion or that (b) everything is always at rest is merely to point out that day-to-day experience contradicts both claims. And Aristotle at least knows—since it is so remarkably obvious to those with eyes—that Zeno must have committed some error; so even if his attacks on the paradoxes don’t succeed, one can at least praise the effort.

To the student of modern physics, this book may present some interesting contrasts. We have learned, through painstaking experience, that the most productive questions to ask of nature begin with “how” rather than “why.” Of course, the two words are often interchangeable; but notice that “why” attributes a motive to something, whereas “how” is motiveless. Aristotle seeks to understand nature in the same way that one might understand a friend. In a word, he seeks teleological explanations. He assumes both that nature works with a purpose, and that the workings of nature are roughly accessible to common sense, with some logical rigor thrown in. A priori, this isn’t necessarily a bad assumption; in fact, it took us humans a long time to realize it was incorrect. In any case, it must be admitted that Aristotle at least seeks to understand far more than we moderns do; for Aristotle seeks, so to speak, to get inside the ‘mind’ of nature, understanding the purpose of everything, whereas modern scientific knowledge is primarily descriptive.

Perhaps now I can see what the medieval Christians found in Aristotle. The assumption that nature works with a purpose certainly meshes well with the belief in an omnipotent creator God. And the assumption that knowledge is accessible through common sense and simple logical deductions is reasonable if one believes that the world was created for us. To the modern reader, the Physics might be far less impressive than it was to the medievals. But it is always worthwhile to witness the inner workings of such a brilliant mind; and, of all the Aristotle I’ve read so far, nothing shows his mind at work so clearly as this.

View all my reviews

Review: Dialogue Concerning the Two Chief World Systems

Review: Dialogue Concerning the Two Chief World Systems

Dialogue Concerning the Two Chief World Systems by Galileo Galilei

My rating: 4 of 5 stars

I should think that anyone who considered it more reasonable for the whole universe to move in order to let the earth remain fixed would be more irrational than one who should climb to the top of your cupola just to get a view of the city and its environs, and then demand that the whole countryside should revolve around him so that he would not have to take the trouble to turn his head.

It often seems hard to justify reading old works of science. After all, science continually advances; pioneering works today will be obsolete tomorrow. As a friend of mine said when he saw me reading this, “That shit’s outdated.” And it’s true: this shit is outdated.

Well, for one thing, understanding the history of a theory’s development often aids in the understanding of the theory itself. Look at any given technical discipline today, and it’s overwhelming; you are presented with such an imposing edifice of knowledge that mastering it seems impossible. Yet even the largest oak was once an acorn, and even the most frightening equation was once an idle speculation. Case in point: achieving a modern understanding of planetary orbits would require mastery of Einstein’s theories—no mean feat. Flip back the pages of history, however, and you will end up here, at this delightful dialogue by a nettlesome Italian scientist, as accessible a book as ever you could hope for.

This book is rich and rewarding, but for some unexpected reasons. What will strike most modern readers, I suspect, is how plausible the Ptolemaic worldview appears in this dialogue. To us alive today, who have seen the earth in photographs, the notion that the earth is the center of the universe seems absurd. But back then, it was plain common sense, and for good reason. Galileo’s fictional Aristotelian philosopher, Simplicio, puts forward many arguments for the immobility of the earth, some merely silly, but many very sensible and convincing. Indeed, I often felt I had to take Simplicio’s side, as Galileo subjects the good Ptolemaic philosopher to much abuse.

I’d like to think that I would have sensed the force of the Copernican system had I been alive back then. But really, I doubt it. If the earth were moving, why wouldn’t things you throw into the air land to the west of you? Wouldn’t we feel ourselves in motion? Wouldn’t cannonballs travel much further in one direction than the other? Wouldn’t we be thrown off into space? Galileo’s answer to all of these questions is the principle of inertia: all inertial (non-accelerating) frames of reference are equivalent. That is, an experiment will look the same whether it’s performed on a ship moving at constant velocity or on dry land.
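To make the principle concrete, here is a minimal sketch of the shipboard thought experiment (my own illustration with made-up numbers, not anything from Galileo’s text): a ball tossed straight up shares the ship’s horizontal motion, so it lands back at the thrower’s feet whether the ship is moving uniformly or sitting still.

```python
# A toy illustration of Galilean relativity (my own sketch, not Galileo's):
# a ball tossed straight up on a uniformly moving ship lands at the thrower's
# feet, exactly as it would on dry land.

G = 9.81        # gravitational acceleration, m/s^2
V_TOSS = 3.0    # vertical speed of the toss, m/s

def landing_offset(ship_velocity: float) -> float:
    """Horizontal gap between thrower and ball at landing, in the thrower's frame."""
    flight_time = 2 * V_TOSS / G
    ball_drift = ship_velocity * flight_time      # the ball keeps the ship's motion
    thrower_drift = ship_velocity * flight_time   # and so does the thrower
    return ball_drift - thrower_drift

print(landing_offset(0.0))   # on dry land: 0.0
print(landing_offset(5.0))   # on a ship moving at 5 m/s: also 0.0
```

The offset is zero in both cases, which is just another way of saying that no experiment of this kind can tell you whether you are moving uniformly or standing still.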

(In reality, the surface of the earth is not quite an inertial frame, since it is constantly accelerating as the planet rotates. Indeed, the only reason we don’t fly off is gravity, not inertia as Galileo argues. But for practical purposes the earth’s surface can be treated as an inertial reference frame.)
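For a rough sense of why the approximation is harmless, here is a back-of-envelope check (the numbers are mine, not the book’s): the centripetal acceleration at the equator comes out to a few hundredths of a metre per second squared, a tiny fraction of gravity.

```python
import math

# Back-of-envelope check (my own numbers, not Galileo's): how does the
# centripetal acceleration at the equator compare with gravity?
R_EQUATOR = 6.378e6     # equatorial radius of the earth, metres
T_SIDEREAL = 86164.0    # one full rotation of the earth, seconds

omega = 2 * math.pi / T_SIDEREAL          # angular velocity, rad/s
a_centripetal = omega ** 2 * R_EQUATOR    # ~0.034 m/s^2

print(a_centripetal)          # about 0.03 m/s^2
print(a_centripetal / 9.81)   # roughly 0.3% of g
```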

Because this simple principle is the key to so many of Galileo’s arguments, the final section of this book is trebly strange. In the last few pages of the dialogue, Galileo triumphantly puts forward his erroneous theory of the tides as if it were the final nail in Ptolemy’s coffin. Galileo’s theory was that the tides were caused by the movement of the earth, like water sloshing around a bowl on a spinning Lazy Susan. But if this were what really caused the tides, then Galileo’s principle of inertia would fall apart; for if the earth’s movements could slosh the oceans about, couldn’t they also push us humans around? It’s amazing that Galileo didn’t mind this inconsistency. It’s as if Darwin had ended On the Origin of Species with an argument that ducks were the direct descendants of daffodils.

Yet for all the many quirks and flaws in this work, for all the many digressions—and there are quite a few—it still shines. Galileo is a strong writer and a superlative thinker; following along the train of his thoughts is an adventure in itself. But of course this work, like all works of science, is not ultimately about the mind of one man; it is about the natural world. And if you are like me, this book will make you think of the sun, the moon, the planets, and the stars in the sky; will remind you that your world is spinning like a top, and that the very ground we stand on is flying through the dark of space, shielded by a wisp of clouds; and that the firmament up above, something we often forget, is a window into the cosmos itself—you will think about all this, and decide that maybe this shit isn’t so outdated after all.

View all my reviews

Review: Voyage of the Beagle

Review: Voyage of the Beagle

Voyage of the Beagle by Charles Darwin

My rating: 4 of 5 stars

This book is really a rare treasure. Is there anything comparable? Here we have the very man whose ideas have completely revolutionized our understanding of life, writing with charm about the very voyage which sparked and shaped his thinking on the subject. And even if this book weren’t a window into the mind of one of history’s most influential thinkers, it would still be entertaining on its own merits. Indeed, the public at the time thought so, making Darwin into a bestselling author.

I can hardly imagine how fascinating it would have been for a nineteenth-century Englishman to read about the strange men and beasts in different parts of the world. Today the world is so flat that almost nothing can surprise. But what this book has lost in exotic charm, it makes up for in historical interest; for now it is a fascinating glimpse into the world 150 years ago. Through Darwin’s narrative, we both look out at the world as it was, and into the mind of a charming man. And Darwin was charming. How strange it is that one of today’s most vicious debates—creationism vs. evolution, religion vs. science—was ignited by somebody as mild-mannered and likable as Mr. Darwin.

His most outstanding characteristic is his curiosity; everything Darwin sees, he wants to learn about: “In England any person fond of natural history enjoys in his walks a great advantage, by always having something to attract his attention; but in these fertile climates, teeming with life, the attractions are so numerous, that he is scarcely able to walk at all.”

As a result, the range of topics touched upon in this volume is extraordinary: botany, entomology, geology, anthropology, paleontology—the list goes on. Darwin collects and dissects every creature he can get his hands on; he examines fish, birds, mammals, insects, spiders. (Admittedly, the descriptions of anatomy and geological strata were often so detailed as to be tedious; Darwin, though brilliant, could be very dry.) In the course of these descriptions, Darwin also indulges in quite a bit of speculation, offering an interesting glimpse into both his thought process and the state of science at the time. (I wonder if any edition includes follow-ups to these conjectures; it would have been interesting to see how they panned out.)

In retrospect, it is almost unsurprising that Darwin came up with his theory of evolution, for he encounters many things that are perplexing and inexplicable without it. Darwin finds fossils of extinct megafauna, and wonders how animals so large could have perished completely. He famously sees examples of one body-plan being adapted—like a theme and variations—in the finches of the Galapagos Islands. He also notes that the fauna and flora on those islands are related to, though quite different from, that in mainland South America. (If life there was created separately, why wouldn’t it be completely different? And if it was indeed descended from the animals on the mainland, what made it change?)

Darwin also sees abundant examples of convergent evolution—two distinct evolutionary lines producing similar results in similar circumstances—in Australia:

A little time before this I had been lying on a sunny bank, and was reflecting on the strange character of the animals in this country as compared with the rest of the world. An unbeliever in everything but his own reason might exclaim, ‘Two distinct Creators must have been at work; their object, however, has been the same & certainly the end in each case is complete.’

More surprisingly, Darwin finds that animals on isolated, uninhabited islands tend to have no fear of humans. And, strangely enough, an individual animal from these islands can’t even be taught to fear humans. Why, Darwin asks, does an individual bird in Europe fear humans, even though it’s never been harmed by one? And why can’t you train an individual bird from an isolated island to fear humans? My favorite anecdote is of Darwin repeatedly throwing a marine iguana into the water, and having it return to him again and again—because, as Darwin notes, its natural predators are ocean-bound, and it has adapted to see the land as a place of safety. Darwin also manages to walk right up to an unwary fox and kill it with his geological hammer.

You can see how all of these experiences, so odd without a theory of evolution, become clear as day when Darwin’s ideas are embraced. Indeed, many are still textbook examples of the implications of his theories.

This book would have been extraordinary just for the light it sheds on Darwin’s early experiences in biology, but it contains many entertaining anecdotes as well. It is almost a Bildungsroman: we see the young Darwin, a respectable Englishman, astounded and amazed by the wide world. He encounters odd creatures, meets strange men, and travels through bizarre landscapes. And, as in all good coming-of-age stories, our hero often makes a fool of himself:

The main difficulty in using either a lazo or bolas, is to ride so well, as to be able at full speed, and while suddenly turning about, to whirl them so steadily about the head, as to take aim: on foot any person would soon learn the art. One day, as I was amusing myself by galloping and whirling the balls round my head, by accident the free one struck a bush; and its revolving motion being thus destroyed, it immediately fell to the ground, and like magic caught one hind leg of my horse; the other ball was then jerked out of my hand, and the horse fairly secured. Luckily he was an old practiced animal, and knew what it meant; otherwise he would probably have kicked till he had thrown himself down. The Gauchos roared with laughter; they cried they had seen every sort of animal caught, but had never before seen a man caught by himself.

At this point, I’m tempted to get carried away and include all of the many quotes that I liked. Darwin writes movingly about the horrors of slavery, includes some vivid descriptions of “savages,” and even tells some funny stories. But I’ll leave these quotes to be discovered by the curious reader, who, in his passage through the pages of this book, will indulge in a voyage far more comfortable than, and perhaps half as fascinating as, Darwin’s own. At the very least, the fortunate reader need not fear exotic diseases (Darwin suffered from ill health for the rest of his days) or heed Darwin’s warning to the potential traveler at sea: “If a person suffer much from sea-sickness, let him weigh it heavily in the balance. I speak from experience: it is no trifling evil which may be cured in a week.”

View all my reviews

Review: The Ascent of Man

Review: The Ascent of Man

The Ascent of Man by Jacob Bronowski

My rating: 5 of 5 stars

Fifty years from now, if an understanding of man’s origins, his evolution, his history, his progress is not in the common place of the school books, we shall not exist.

I watched this series right after finishing Kenneth Clark’s Civilisation, as I’d heard The Ascent of Man described as a companion piece. So like my review of Clark’s work, this review is about the documentary and not the book (though since the book is just a transcription of the series, I’m sure it applies to both).

The Ascent of Man is a remarkable program. I had doubts that anyone could produce a series to match Civilisation, but Bronowski made something that might even be better. Bronowski was a polymath: he did work in mathematics, biology, physics, history, and even poetry. In this program, his topic is the history of science. Yet for Bronowski, the word “science” does not refer only to the modern scientific method; rather, it encompasses all of humanity’s efforts to understand and manipulate the natural world.

We thus begin with Homo erectus, learning how to chip away stone to make tools. As Bronowski notes, this simple ability, to chip away at a stone until a cutting edge is left, is a remarkable indication of human uniqueness. Since the behavior is learned and not instinctive, it requires a preconception of what the toolmaker wants to create; a certain amount of imagination is needed to picture the goal before it is realized. What’s more, creating a stone tool requires a sense of the structural properties of the rock. (I’ve actually tried making stone tools with various types of rock, and let me tell you that it’s not so easy. Even with an archaeologist giving me advice, I was only able to create stone tools of the sophistication of an Australopithecus—randomly beating the stone until a sharp edge was created.) Thus both our creative drive and our knowledge are involved in this quintessentially human activity. “Every animal leaves traces of what he was. Man alone leaves traces of what he created.”

This brings Bronowski to one of his main points, one of the themes of this series: that art and science are not fundamentally different; rather, they are two manifestations of the human spirit. What is this human spirit? It is a composite of many qualities, what Bronowski calls “a jigsaw of human faculties,” which include our wide behavioral flexibility, our capacity to play, our need to create, our curiosity about the natural world, our sense of adventure, our love of variety. Indeed, these can be pithily described by saying that humans retain many childlike characteristics throughout their lives. The name of the last episode is “The Long Childhood.”

One of my favorite sequences in this documentary is when Bronowski takes the viewer from the posts and lintels of the Greek temples, to the arches in the Roman aqueduct in Segovia, to the somewhat prettier arches in the Mezquita in Cordoba, to the cathedral at Reims with its magnificent flying buttresses. Each of these structures, he explains, is a more sophisticated solution to this problem: how do you create a covered space out of stone? The lintel and post system used by the Greeks leads to a forest of columns, and the Mezquita, although less crowded, is still filled with arches. The Medieval Christians achieved a magnificent solution by placing the buttresses on the outside, thus leading to the towering, open interior of Reims.

We’re used to thinking of this development as an architectural triumph, but as Bronowski points out, it was also an intellectual triumph. This progression represents better and better understandings of the structural properties of stone, of the force of gravity, and of the distribution of weight. And when you see it play out in front of your eyes, it’s hard to shake the impression that these marvelous works are also progressively more elegant solutions to a mathematical puzzle. This is just one example of Bronowski’s talent: to see the artistic in the scientific and the scientific in the artistic; and he does this by seeing the human spirit in all of it.

Here’s another example. Bronowski wants to talk about how humanity has come to understand space, and how this understanding of space underpins our knowledge of structure. How does he do it? He goes to the Alhambra, and analyzes the symmetry in the tiles of the Moorish Palace. Then, he bends down and spreads a bunch of crystals on the ground, and begins to talk about the molecular symmetry that gave rise to them. It’s such a stunning juxtaposition. How many people would think to compare Moorish architecture with modern chemistry? But it’s so appropriate and so revealing that I couldn’t help but be awed.

As the title suggests, this series is not simply about science (or art), but about science through history. Bronowski aims to show how humanity, once freed from the constraints of instinct, used a combination of logic and imagination to achieve ever-deeper conceptions of our place in the universe. This is the Ascent of Man: a quest for self-knowledge. It’s sometimes hard for us moderns to grasp this, but consider that we are living in one of the brief periods in history in which we can explain the formation of the earth, the origin of our species, and even the workings of our own brains. Imagine not knowing any of that. It’s hard to envy former ages when you consider that their sense of their place in the universe was based on myth supported by authority, or was simply a mystery. I’m sure (and I earnestly hope) that future generations will believe the same about us.

Bronowski’s final message is a plea to continue this ascent. This means spreading an understanding and an appreciation of science, as his program tries to do. This strikes me as terribly important. I’ve met so many people who say things like “Science is a form of faith” or “Science can’t solve every problem” or “Science is dehumanizing and arrogant.” It’s sad to hear intelligent people say things like this, for it simply isn’t true. It’s an abuse of language to call science a faith; if science counts as a faith, then what doesn’t? And yes, of course science can’t solve every problem and can’t answer every question; but can anything? Science can solve some problems, and can do so very well. And science, as Bronowski points out, is the very opposite of dehumanizing and arrogant. Science is a most human form of knowledge, born of humility about our intellectual powers, based on repeated mistakes and guesses, always pressing forward into the unknown, always revising its opinions in light of evidence. Atrocities are committed, not by people who are trained to question their own beliefs, but by ideologues who are convinced they are right.

This is Bronowski’s essential message. But as in any good story, the telling is half of it. As I’ve mentioned above, Bronowski and his team are brilliant at finding unexpected ways to illustrate abstract ideas. This series is full of wonderful and striking visual illustrations of Bronowski’s points. What’s more, the man is a natural storyteller, and he effectively brings to life many of this series’ heroes: Newton, Galileo, Alfred Russel Wallace, Mendel. He’s also a poet; one of his books is a study of William Blake’s poetry. This not only gives him a knack for similes, but helps him to explain how science is fundamentally creative. One of my favorite scenes is when Bronowski compares abstract portraits of a man to the ways that various scientific instruments—radar, infrared, cameras, X-rays—detect the man’s face. As he explains, both the portrait and these readings are interpretations of their subject.

The cinematography is also excellent. There are some sequences in this documentary that are still impressive, saturated as we are with CGI. There are even some quite psychedelic sections. One of my favorites was a sequence of microscopic shots of human cells with Pink Floyd (who contributed music) jamming chaotically in the background. Unlike Clark’s Civilisation, which uses exclusively ‘classical’ music and is devoid of special effects, this documentary’s style is surprisingly modern and even edgy. Another thing Bronowski does that Clark doesn’t is include some material on non-Western cultures, from Meso-America, Japan, China, and Easter Island.

Yes, there are some parts of this that are outdated. Most obviously, much of the scientific information is no longer accurate—particularly the information on human evolution in the first episode. This is unavoidable, and is in fact a tribute to the ideals Bronowski championed. More jarring are Bronowski’s somewhat negative assessments of the culture of Easter Island and the lifestyle of nomadic peoples. Less controversially, he also has some negative words to say about Hegel. (Did you know that Hegel, as a young man, published an absurd thesis arguing that the distances of the planets’ orbits had to conform to a number series?) Another mark of the program’s age is that Bronowski several times shows nudity and even a human birth. This would never fly on television today, at least not in the States.

But these flaws are minor in such a tremendous program. The Ascent of Man is a landmark in the history of science education and of documentary making, and a stirring vision of the progress of humanity by a brilliant and sympathetic man. I hope you get a chance to watch it.

View all my reviews