Review: Voyage of the Beagle

Voyage of the Beagle by Charles Darwin

My rating: 4 of 5 stars

This book is really a rare treasure. Is there anything comparable? Here we have the very man whose ideas have completely revolutionized our understanding of life, writing with charm about the very voyage which sparked and shaped his thinking on the subject. And even if this book were not a window into the mind of one of history’s most influential thinkers, it would still be entertaining on its own merits. Indeed, the public at the time thought so, making Darwin into a bestselling author.

I can hardly imagine how fascinating it would have been for a nineteenth-century Englishman to read about the strange men and beasts in different parts of the world. Today the world is so flat that almost nothing can surprise. But what this book has lost in exotic charm, it makes up for in historical interest; for now it is a fascinating glimpse into the world of 150 years ago. Through Darwin’s narrative, we both look out at the world as it was, and into the mind of a charming man. And Darwin was charming. How strange it is that one of today’s most vicious debates—creationism vs. evolution, religion vs. science—was ignited by somebody as mild-mannered and likable as Mr. Darwin.

His most outstanding characteristic is his curiosity; everything Darwin sees, he wants to learn about: “In England any person fond of natural history enjoys in his walks a great advantage, by always having something to attract his attention; but in these fertile climates, teeming with life, the attractions are so numerous, that he is scarcely able to walk at all.”

As a result, the range of topics touched upon in this volume is extraordinary: botany, entomology, geology, anthropology, paleontology—the list goes on. Darwin collects and dissects every creature he can get his hands on; he examines fish, birds, mammals, insects, spiders. (Admittedly, the descriptions of anatomy and geological strata were often so detailed as to be tedious; Darwin, though brilliant, could be very dry.) In the course of these descriptions, Darwin also indulged in quite a bit of speculation, offering an interesting glimpse into both his thought process and the state of science at that time. (I wonder if any edition includes follow-ups to these conjectures; it would have been interesting to see how they panned out.)

In retrospect, it is almost unsurprising that Darwin came up with his theory of evolution, since he encounters many things that are perplexing and inexplicable without it. Darwin finds fossils of extinct megafauna, and wonders how animals so large could have perished completely. He famously sees examples of one body-plan being adapted—like a theme and variations—in the finches of the Galapagos Islands. He also notes that the fauna and flora on those islands are related to, though quite different from, those of mainland South America. (If life there was created separately, why wouldn’t it be completely different? And if it was indeed descended from the animals on the mainland, what made it change?)

Darwin also sees abundant examples of convergent evolution—two distinct evolutionary lines producing similar results in similar circumstances—in Australia:

A little time before this I had been lying on a sunny bank, and was reflecting on the strange character of the animals in this country as compared with the rest of the world. An unbeliever in everything but his own reason might exclaim, ‘Two distinct Creators must have been at work; their object, however, has been the same & certainly the end in each case is complete.’

More surprisingly, Darwin finds that animals on isolated, uninhabited islands tend to have no fear of humans. And, strangely enough, an individual animal from these islands cannot even be taught to fear humans. Why, Darwin asks, does an individual bird in Europe fear humans, even though it has never been harmed by one? And why can’t you train an individual bird from an isolated island to fear humans? My favorite anecdote is of Darwin repeatedly throwing a marine iguana into the water, and having it return to him again and again—because, as Darwin notes, its natural predators are ocean-bound, and it has adapted to see the land as a place of safety. Darwin also manages to walk right up to an unwary fox and kill it with his geological hammer.

You can see how all of these experiences, so odd without a theory of evolution, become clear as day when Darwin’s ideas are embraced. Indeed, many are still textbook examples of the implications of his theories.

This book would have been extraordinary just for the light it sheds on Darwin’s early experiences in biology, but it contains many entertaining anecdotes as well. It is almost a Bildungsroman: we see the young Darwin, a respectable Englishman, astounded and amazed by the wide world. He encounters odd creatures, meets strange men, and travels through bizarre landscapes. And, as in all good coming-of-age stories, the hero often makes a fool of himself:

The main difficulty in using either a lazo or bolas, is to ride so well, as to be able at full speed, and while suddenly turning about, to whirl them so steadily about the head, as to take aim: on foot any person would soon learn the art. One day, as I was amusing myself by galloping and whirling the balls round my head, by accident the free one struck a bush; and its revolving motion being thus destroyed, it immediately fell to the ground, and like magic caught one hind leg of my horse; the other ball was then jerked out of my hand, and the horse fairly secured. Luckily he was an old practiced animal, and knew what it meant; otherwise he would probably have kicked till he had thrown himself down. The Gauchos roared with laughter; they cried they had seen every sort of animal caught, but had never before seen a man caught by himself.

At this point, I am tempted to get carried away and include all of the many quotes that I liked. Darwin writes movingly about the horrors of slavery, includes some vivid descriptions of “savages,” and even tells some funny stories. But I will leave these passages to be discovered by the curious reader, who, in his voyage through the pages of this book, will indulge in a voyage far more comfortable than, and perhaps half as fascinating as, Darwin’s own. At the very least, the fortunate reader need not fear exotic diseases (Darwin suffered from ill health the rest of his days) or heed Darwin’s warning to the potential traveler at sea: “If a person suffer much from sea-sickness, let him weigh it heavily in the balance. I speak from experience: it is no trifling evil which may be cured in a week.”


Review: The Decline of the West

The Decline of the West by Oswald Spengler

My rating: 4 of 5 stars

All genuine historical work is philosophy, unless it is mere ant-industry.

Gibbon’s Decline and Fall of the Roman Empire is one of my favorite books, not only because it is written so beautifully, but because of the spectacle of decline—of a great empire slowly and inevitably crumbling. The scene is irresistibly tragic. Like a Macbeth or an Oedipus, the Empire succumbs to itself, brought down by its own efforts at self-expansion. Or perhaps the scene can be better compared to the Fall of Man in Milton’s poem, a grand cosmic undoing, followed by the heroic struggle against the inevitable.

Besides its sublime tragedy, the decline of Rome fascinates because it gives us a foreboding of what might happen to us. Indeed, maybe it is already happening? This would explain all the banality we see on television every day, all the terrible music on the radio. More than decline—a loss of political and economic power—this is decadence: a decay of taste, morals, and artistic skill. Decadence seems observable in many historical instances—the Egyptians, the Babylonians, the Greeks, the Romans, the Byzantines: they all petered out, losing cultural vitality until they disappeared completely. Couldn’t the same thing be happening to us?

Oswald Spengler thought so, and he turned this thought into the basis for an entire philosophy of history. He was not a professional historian, nor an academic of any kind. He worked as a school teacher until his mother’s inheritance allowed him to quit his job and devote all of his time to scholarship. This scholarship was mustered to write an enormous book, whose publication was delayed by World War I. Probably this was very lucky for Spengler, since the pessimism and anguish caused by that war set the mood for his grand theory of cultural decline.

The Decline of the West puts forward a radically unconventional view of history. Spengler divides up world history, not into countries or epochs, but into “Cultures.” There have been only eight: the Egyptian, the Babylonian, the Meso-American, the Chinese, the Indian, the Classical (Greco-Roman), the Arabian (which includes the Byzantine), and the Western (European Culture, beginning around the year 1000). Each of these Cultures he conceives as a super-organism, with its own birth, middle age, and dotage. These Cultures all age at a similar rate, and go through analogous stages in the process (Napoleon is the Western equivalent of Alexander the Great, for example). Spengler believed that he had delineated these Cultures and traced their basic growth and aging process, thus providing a valid scheme for all future history as well, if any new Culture should arise.

Spengler is a cultural determinist and a cultural relativist. This means that he does not see these Cultures as dependent on the talent of individuals to grow; the individual is a product of the Culture and not the reverse. He also thinks that each of these Cultures creates its own self-contained world of significance, based on its own fundamental ideas. There is no such thing as inter-cultural influence, he thinks, at least not on any deep level. Each of these Cultures conceives the world so differently that they can hardly understand one another, let alone determine one another, even if one Culture can overpower another one in a contest of arms. Their art, their mathematics, their architecture, their experience of nature, their whole mental world is grounded in one specific cultural worldview.

Because Spengler is a determinist, he does not present us with a Gibbonian spectacle of a civilization succumbing to its own faults, struggling against its own decline. For Spengler, everything that happens in history is destiny. People don’t make history; history makes people. Thus, while he is often classed as a political conservative, it is hard to put any political label on Spengler, or to co-opt his views for any political purpose, since he didn’t think we directed our own history. To be a true Spenglerian is to believe that decline is inevitable: decadence wasn’t anyone’s “fault,” and it can’t be averted.

Much of this book consists of a contrast between what he calls the Apollonian (Greco-Roman) worldview and the Faustian (Western) worldview. The Apollonian world-picture is based on the idea of definite form and definable shape; its most characteristic art is the nude statue, the delineated human body; its mathematics is all based on geometry, concrete shapes and visible lines. The Faustian picture, by contrast, is possessed by the idea of infinity; we make fugues, roving explorations of musical space; our mathematics is based on the idea of a function, an operation that can create an endless series of numbers. Spengler dwells on this contrast in chapter after chapter, trying to prove his point that Western Culture, far from being a development of Classical Culture, is entirely incompatible with it.

His own Culture, the Western, he traces to around the year 1000, at the commencement of the Romanesque. How or why a new Culture begins, Spengler doesn’t venture to say; but once Cultures do begin, they follow the same definite steps. It was inevitable, he thinks, that the Romanesque transformed into the Gothic, and then eventually flourished into the Baroque, the high point of our Culture, wherein we expressed our deep longing for the infinite in Bach’s fugues and Descartes’s mathematics.

Sometime around the year 1800, the Western Culture entered its late, senescent phase, which Spengler terms ‘Civilization.’ This is the phase that follows cultural growth and flourishing; its onset begins when a Culture has exhausted its fundamental idea and explored its inherent forms. A Civilization is what remains of Culture when it has spent its creative forces: “The aim once attained—the idea, the entire content of inner possibilities, fulfilled and made externally actual—the Culture suddenly hardens, it mortifies, its blood congeals, its force breaks down, and it becomes Civilization.”

The ‘decline’ that forms the title of this book is just this transition from Culture to Civilization, wherein major creative work is at an end. Civilization is, rather, the age of Caesarism, the consolidation of political power. It is the age of world cities, major metropolises filled with cosmopolitan urban intellectuals. It is the age of academics rather than geniuses, of the Alexandrine Greeks rather than the Athenians of the Golden Age. It is, in other words, the period that corresponds with the onset of the Roman Empire, a period of no substantial innovation, but of magnificent stability. The Western Culture, Spengler thought, was entering just this period.

Whereas those who are actuated by a Culture during its creative period feel themselves driven by inevitable impulses, which allow even mediocre artists to create great works, people within a Civilization are creatures of the intellect, not the instinct; and instead of being given creative power and direction by their Culture, they are left to substitute their own subjective tastes and whims for cultural destiny. Instead of, for example, having one overriding epoch in our artistic productions—such as the Gothic, the Baroque, or what have you—we have artistic ‘movements’ or trends—Futurism, Dadaism, Cubism—which, far from being necessary phases in a Culture’s self-expression, are merely intellectual fads with no force behind them.

Spengler’s theory does have the considerable merit of being testable, because he made very specific predictions about what the immediate future held. We had gone through the period of ‘Warring States,’ he thought, in which country fought country and money ruled everything, and were about to enter a period of Caesarism, wherein people would lose faith in the power of self-interested capitalism and follow a charismatic leader. This would also be a period of ‘Second Religiousness,’ a period of faith rather than reason—a period of patriotism, zeal, and peaceful capitulation to the status quo.

Nowadays, one hundred years later, it seems clear that these predictions were false. For one, he did not foresee the Second World War, but thought the period of internecine warfare was coming to a close. What is more, economic power has grown even more important—far more important than political power, in many ways—and no Caesar has arisen, despite many contenders (including Hitler, during Spengler’s lifetime, of whom Spengler didn’t think highly).

Aside from its breadth, one thing that sets this book apart is its style. Spengler is a remarkable writer. He can be poetic, describing the “flowers at eventide as, one after the other, they close in the setting sun. Strange is the feeling that then presses in upon you—a feeling of enigmatic fear in the presence of this blind dreamlike earth-bound existence.” He can be bitter, biting, and caustic, castigating the blind scholars who couldn’t see the obvious, satirizing the pseudo-suave intellectuals who populated the cities of his time. He can be lyrical or epigrammatic, and can write ably about art, music, and mathematics.

His most characteristic mode, however, is the oracular: Spengler proclaims, predicts, pronounces. His voice, resonating through the written word, booms as if from a mountaintop. He sweeps the reader up in his swelling prose, an inundation of erudition, a flood that covers the world and brings us, like Noah in his ark, even higher than mountaintops. Perhaps a flood is the most apt metaphor, since Spengler is not only overwhelming in his rhetorical force, but all-encompassing in his world-view. He seems to have thought of everything, considered every subject, drawn his own conclusions about every fact; no detail escapes him, no conventionality remains to be overturned by his roving mind. The experience can be intoxicating as he draws you into his own perspective, with everything you thought you knew now blurry and swirling.

Spengler is so knowledgeable that, at times, he can sound like some higher power declaiming from above. But he was a man, after all, and his erudition was limited. He was most certainly an expert on music, mathematics, and the arts, and writes with keen insight on each of these subjects. But in politics, economics, religion, and especially science, he is less impressive. He completely fails to understand Darwin’s theory, for example, and he thought that physics was already complete and that there would be no more great geniuses (and this, in one of the greatest epochs of physics!). He doesn’t even mention Einstein. Spengler also thought that our scientific theories were culturally determined and culturally bound; the Western conception of nature, for example, would have no validity for the Chinese (which doesn’t seem to stop the Chinese from learning Newton’s theories).

His grand theory, though undeniably fascinating, is also impossible to accept. What is the nature of a Culture? Why do they arise, why are they self-contained, why do they follow the same life-course? Why would one single idea determine every single cultural production—from mathematics to music, from architecture to physics—in a Culture from birth to death? All these seem like fundamental questions, and yet they are not satisfactorily addressed—nor do I see how they could be.

By insisting on the Culture as the unit of history, Spengler seems to be at once too narrow and too broad. Too narrow, because he does not allow for the possibility that these Cultures can influence one another; while it seems obvious to me that, yes, there was influence from the Classical to the Western, as well as from the Classical to the so-called ‘Magian’ (his term for the Arabian Culture), and from the Magian to the Western, and so on. And too broad, because within any given Culture there are not only different ages but different areas. Is the cultural difference between Spain and England ultimately superficial, but between the Renaissance and Classical Greece unbridgeable? Really, the more you think about Spengler’s claims, the less credible they seem. After all, if Spengler were right, how could he, a Western intellectual living in the Civilization phase of Western Culture, delineate the fundamental ideas of other Cultures and produce what he regarded as a major intellectual achievement?

I am certainly not saying that this book is intellectually valueless. For comparison, consider what Walter Pater had to say about aesthetic theories: “Many attempts have been made by writers on art and poetry to define beauty in the abstract, and express it in the most general terms, to find a universal formula for it. The value of these attempts has most often been in the suggestive and penetrating things said by the way.”

This seems equally true with regard to Spengler’s universal formula for history. Although I think his theory is untenable, this book is nevertheless filled to the brim with suggestive and penetrating observations, especially about art, architecture, music, and mathematics. Spengler may be a failed prophet, but he was an excellent critic, capable of making the most astonishing comparisons between arts of different eras and epochs.

Even if we reject Spengler’s proposed theory, we may still savor the grand vision required to see all of human history as a whole, to scan one’s eye over the past and present of humankind, in all its forms and phases, and to form conjectures as to its destiny. And Spengler was undeniably original in his inclusion of the Babylonian, Egyptian, Indian, Chinese, and Meso-American Cultures as being of equal importance to Western history; indeed, it is at least in part to Spengler that we owe our notion of world-history. Rich in ideas, set forth in ringing prose, invigorating in its novelty, breathtaking in its scope—here we have a true classic, yet another example of a book whose enormous originality outweighs every conventional defect we can detect in it.


Review: Tools for Teaching

Tools for Teaching: Discipline, Instruction, Motivation. Primary Prevention of Classroom Discipline Problems by Fredric H. Jones
My rating: 5 of 5 stars

Have you ever looked at the work kids turn in these days and wondered, “What will happen to this country in the next 50 years?” When you watch Larry sharpen his pencil, you know that the future is in good hands. It’s inspirational.

Last year I switched from teaching adults to teaching teenagers. Though I’m still teaching English, the job could hardly be more different. With adults, I could focus entirely on content; my students were mature, intelligent, and motivated, so I could think exclusively about what to teach them, and how. With kids, I am dealing with a classroom full of energetic, distracted, unruly, loud, and sometimes obnoxious humans whose main motivation is not to fail the upcoming exam. They’re not there because they want to be, and they would inevitably rather be doing something else.

This probably makes me sound jaded and disenchanted (and I hasten to add that I actually have a lot more fun teaching kids, and my students are great, I swear!); but the fact is inescapable: when you’re teaching in a school setting, you need to worry about classroom management. Either you will control the kids, or they will control you.

It is the hope of every beginning teacher, myself included, to manage through instruction. We all begin with the same dream: to create lessons so dynamic, so enriching, so brilliant, and to teach with such charisma and compassion, that misbehavior isn’t a problem. But this doesn’t work, for two obvious reasons. For one, we don’t have unlimited control of the curriculum; to the contrary, our room to maneuver is often quite limited. And even with complete autonomy, having interesting lessons would be no guarantee of participation or attention, since it only takes one bored student to disrupt, and only one disruption to derail a lesson.

Even if you’re Socrates, disruptions will happen. When they do, in the absence of any plan, you will end up falling back on your instincts. The problem is that your instincts are probably bad. I know this well, both from experience and observation. Our impulsive reaction is usually to nag, to argue, to preach, to bargain, to threaten, to cajole—in other words, to flap our mouths in futility until we finally get angry, snap, yell, and then repeat the process.

But no amount of nagging creates a motivated classroom; and no amount of speeches—about the value of education, the importance of respect, or the relevance of the lesson to one’s future—will produce interested and engaged students. In short, our instinctual response is inefficient, ineffective, and stressful for both teacher and students. (Again, I know this both from experience and observation.)

Some strategies are therefore needed to keep the kids settled and on task. And since teachers are chronically overworked as it is—the endless grading and planning, not to mention the physical strain of standing in front of classes all day—these strategies must be neither too complex nor too expensive. To the contrary, they must be relatively straightforward to implement, and they must save time in the long run.

This is where Fred Jones comes in. Fred Jones is the Isaac Newton of classroom management. This book is nothing less than a fully worked out strategy for controlling a room full of young people. This system, according to him, is the result of many hundreds of hours of observing effective and ineffective teachers, trying to analyze what the “natural” teachers did right and the “unnatural” teachers wrong, and to put it all together into a system. And it really is systematic: every part fits into every part, interlocking like the gears of a bicycle.

This makes the book somewhat difficult to summarize, since it is not a bag of tricks to add to your repertoire. Indeed, its main limitation—especially for me, since I’m just an assistant who goes from class to class—is that his strategies cannot be implemented piecemeal. They work together, or they don’t work. As a pedagogical nomad who merely helps out, I am not really in a position to put this book into practice, so I cannot personally vouch for it.

Despite this, Jones manages to be utterly convincing. The book is so full of anecdotes, insights, and explanations that were immediately familiar that it seemed as if he was spying on my own classrooms. Unlike so many books on education, which offer ringing phrases and high-minded idealism, this book deals with the nitty-gritty reality of being a teacher: the challenges, frustrations, and the stress.

The main challenge of classroom management—the problem that dwarfs all others—is to eliminate talking to neighbors. Kids like to talk, and they will talk: when they’re supposed to be listening, when they should be working, whenever they think they can get away with it. This is only natural. And with the conventional classroom approach—standing in the front and lecturing, snarling whenever the kids in the back are too loud—talking to neighbors is inevitable, since the teacher is physically distant, and the kids have nothing else to do.

Jones begins by suggesting board work: an activity that each student must start at the beginning of class, something handed out or written on the board, to eliminate the usual chaos that attends the beginning of the lesson. He then goes into detail about how the classroom should be arranged: with large avenues so that the teacher can quickly move around. Movement is key, because the most important factor that determines goofing off is physical proximity to the teacher. (This is probably less true in Spain, where people are more comfortable with limited personal space, but I imagine it’s quite true in the United States.)

This leads to the lesson. Jones advocates a pedagogical approach that only requires the teacher to talk for five minutes or less at a time. Break down the lesson into chunks, using visual aids for easy understanding, and then immediately follow every concept with an activity. When the kids are working, the teacher is to move around the classroom, helping, checking, and managing behavior, while being sure not to spend too much time with the students he calls “helpless handraisers”—the students who inevitably raise their hands and say they don’t understand. (To be clear, he isn’t saying to ignore these students, but to resist the impulse to re-teach the whole lesson with your back turned to the rest of the class.)

This leads to one of the main limitations of Jones’s method: it works better for math and science than for the humanities. I don’t see how literature or history can be broken down into these five-minute chunks without destroying the content altogether. Jones suggests frequent writing exercises, which I certainly approve of, but it is also hard for me to imagine teaching a lesson about the Spanish Reconquest, for example, without a lengthy lecture. Maybe this is just due to lack of imagination on my part.

When it comes to disruptions, Jones’s advice is refreshingly physical. The first challenge is remaining calm. When you’re standing in front of a crowd, and some kids are chuckling in the back, or worse, talking back to you, your adrenaline immediately begins to flow. Your heart races, and you feel a tense anxiety grip your chest, intermediate between panic and rage. Before doing anything, you must calm down. Jones suggests learning how to relax yourself by breathing deeply. You need to be in control of your emotions to respond effectively.

Jones follows this with a long section on body language. The way we hold our bodies signals a lot about our intentions and our resolve. Confidence and timidity are things we all intuitively perceive just from looking at the way someone holds herself. How do you turn around and face the offending students with conviction? How do you signal that you are taking the disruption seriously? And how do you avoid seeming noncommittal or unserious?

One of the most brilliant sections in this book, I thought, was on dealing with backtalk. Backtalk can be anything, but as Jones points out, it usually takes a very limited number of forms. Denial is probably the most common; in Spanish, this translates to “Pero, ¡no he hecho nada!” Then there is blaming; the student points her finger at her neighbor, and says “But, she asked me a question!” And then there is misdirection, when the offending student says, “But, I don’t understand!” as if they were engaged in an earnest intellectual debate. I see all these on a daily basis. The classic mistake in these situations is to engage the student—to argue, to nag, to scold, or to take their claim that they “don’t understand” at face value. Be calm, stay quiet, and if they keep talking, move towards them. Talking back yourself only puts you on the same level.

The penultimate section of the book deals with what Jones calls Preferred Activity Time, or PAT. This is an academic activity that the students want to do, and will work for. It is not a reward to hold over their heads, or something to punish the students with by taking it away, but something the teacher gives to the class, with the opportunity for them to earn more through good behavior. This acts as an additional incentive system to stay on task and well behaved.

The book ends with a note on what Jones calls “the backup system,” which consists of the official punishments, like suspension and detention, that the school system inflicts on misbehaving kids. As Jones repeatedly says, this backup system has been in place for generations, and yet it has always been ineffective. The same small number of repeat offenders account for the vast majority of these reprimands; obviously it is not a successful deterrent. Sometimes the backup system is unavoidable, however, and he has some wise words on how to use it when needed.

Now, if you’ve been following along so far, you’ll have noticed that this book is behaviorist. Its ideas are based on control, on incentive systems, on input and output. As a model of human behavior, I think behaviorism is far too simplistic to be accurate, and so I’m somewhat uncomfortable thinking of classroom management in this way. Furthermore, there are moments, I admit, when the job of teaching in a public school feels more like working in a prison than the glorious pursuit of knowledge. Your job is to keep the kids in a room, keep them quiet and seated, and to keep them busy—at least, that’s how it feels at times. And Jones’s whole system can perhaps legitimately be accused of perpetuating this incarceration model of education.

But teachers have the choice of working within an imperfect system or not working. The question of the ideal educational model is entirely different from the question this book addresses: how to effectively teach in the current educational paradigm. Jones’s approach is clear-eyed, thorough, intelligent, insightful, and eminently practical, and for that reason I think he has done a great thing. Teaching, after all, is too difficult a job, and too important a job, to do with only idealism and instinct as tools.


Quotes & Commentary #39: Emerson

Each soul is a soul or an individual by virtue of its having or I may say being a power to translate the universe into some particular language of its own.

—Ralph Waldo Emerson

What does it mean for something to be subjective? It means that the thing depends upon a perspective in order to exist.

Pleasure and pain are subjective, for example, since they cannot exist independently of an observer; they must be felt to be real. Mt. Everest, on the other hand, exists objectively—or at least we think it does—since that hunk of rock and snow would persist even if there were no humans left to climb it and plant flags on its summit.

Humans, of course, can never get out of their own perspectives and know, for good and certain, that anything exists objectively. Thus “objective” facts are really inter-subjective; that is, they can be verified by other observers.

Contrary to common belief, facts cannot be verified purely through experience, since experience is always personal and therefore private. This is why we are justified in disbelieving mystic visions and reports of miracles.

Two things must happen for raw experience to be turned into objective knowledge.

First, the experience must be communicated to another observer through language. Language is a bridge between observers, allowing them to compare, in symbolic form, the reality they perceive. Language is a highly imperfect bridge, to be sure, and much information is lost by turning our raw experience into symbols; nevertheless it is the best we have.

Second, another observer must try to have an experience that matches the experience of the first one. This verification is, again, constrained by the vagueness of language.

Somebody points and says “Look, a helicopter!” Their friend looks up into the sky and says “I see it too!” This correspondence of experience, communicated through words, is the basis for our notion of the objective world.

(There is, of course, the thorny Cartesian question: How can we know for certain that both the helicopter and our friend aren’t hallucinations? We can’t.)

Subjective and objective knowledge share this quality: our knowledge of the external world—whether a fleeting sensation of a chilly breeze, or a scientific doctrine repeatedly checked—is always symbolic.

A symbol is an arbitrary representation. All words are symbols. The relationship between the word “tree” and actual trees is arbitrary; we could just as well say árbol or Baum and accomplish the same end. By saying that knowledge is symbolic, I mean that the relationship between the objective facts and our representation of those facts is arbitrary.

First, the relationship between the external stimulus and our private sensation is an arbitrary one.

Light in itself is electromagnetic radiation. In other words, light doesn’t look like anything; it only has an appearance when photosensitive eyes evolve to see it. Our visual cortex represents the photons that strike our eyes as colors. There is only a symbolic connection between the objective radiation and the internal sensation. The red I experience is only a symbol of a certain wavelength of light that my eyes pick up.

As Santayana said, the senses are poets and only portray the external world as a singer communicates his love: in metaphors. This is the basis for the common observation that there is no way of knowing whether the red I experience is the same as the red you experience. Since the connection between the objective stimulus and the subjective representation is arbitrary, and since it is only me who can observe the result, we can never know for certain how colors look to other individuals.

When we communicate our experiences to others, we translate our direct experience, which is already a symbolic representation of the world, into still more general symbols. As I said above, much information is lost during this second translation. We can, for example, say that we’re seeing the color red, but we cannot say exactly what it looks like.

Modern science, not content with the vagueness of daily speech, uses a stricter language: mathematics. And it also uses a stricter method of confirmation: controlled experiments through which rival hypotheses are tested. Nevertheless, while stricter, scientific knowledge isn’t any less symbolic. To the contrary, modern physics is distinguished for being abstract in the extreme.

To call knowledge symbolic is not to discredit it; it is merely to acknowledge the arbitrariness of our representations of the natural world. Nature can be understood, but first we must translate her into our language. The truth can be spoken, but always with an accent.

Quotes & Commentary #34: Dickens

‘Why do you doubt your senses?’ said Marley’s ghost.

‘Because,’ said Scrooge, ‘a little thing affects them. A slight disorder of the stomach makes them cheats. You may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of an underdone potato. There’s more of gravy than of grave about you, whatever you are!’

—Charles Dickens, A Christmas Carol

Recently I read a book about yoga by Swami Vivekananda. In it, the swami puts forward the common argument in favor of mystical truth: that direct experience of the super-sensuous realm—the spiritual plane—can attest to its existence. That is, the spiritual plane, while it cannot be detected with any technological device, deduced from any scientific theory, or proven on any philosophical grounds, can be known for certain to exist via direct experience.

I have no doubt that, through yoga, meditation, and prayer, people have had extraordinary experiences. These experiences may well have been far more intense than day-to-day life. On occasion these visions may have left such a lasting mark on a sage’s mind that he was forever transformed, perhaps much for the better.

Since experiences such as these are uncommon and unusual, these gurus will then be faced with the impossible task of capturing their private sensation in words and conveying it to somebody who has never felt anything similar. It would be like describing the color red to a blind man. This is why mystical writing is so often poetical. This also explains why it so often strays into metaphysics.

For my part, I am very fond of mystical poetry. But I have little patience for any epistemology that considers fleeting, incommunicable, and private experiences to be valid sources of insight into the nature of reality. Science works because its methodology does its very best to shun these sorts of visions. Science is effective because it treats knowledge as a social product, not a private hallucination, and because it treats experience as prone to error and in need of interpretation, not as a direct window into reality.

Ebenezer Scrooge was wrong about many things. But he was right to distrust his senses when he thought he saw a ghost. Luckily for him, the reality-status of the ghosts he saw does not make their moral message any less true. Likewise, even if mystical visions and meditative ecstasies may not be valid sources of knowledge about the universe, they can lead to valuable personal transformations.

Review: The Power of Myth

The Power of Myth by Joseph Campbell
My rating: 4 of 5 stars

I have bought this wonderful machine—a computer. Now I am rather an authority on gods, so I identified the machine—it seems to me to be an Old Testament god with a lot of rules and no mercy.

Joseph Campbell’s The Hero with a Thousand Faces is a book that, for better or worse, will forever change how you see the world. Once you read his analysis of the monomyth, the basic outline of mythological stories, you find it everywhere. It’s maddening sometimes. Now I can’t watch certain movies without analyzing them in terms of Campbell’s outline.

But that book had another lasting effect on me. Campbell showed that these old myths and stories, even if you don’t believe them literally—indeed, he encourages you not to—still hold value for us. In our sophisticated, secular society, we can still learn from these ancient tales of love, adventure, magic, monsters, heroes, death, rebirth, and transcendence.

This book is a transcription of conversations between Campbell and Bill Moyers, made for a popular TV series. It isn’t exactly identical with the series, but there’s a lot of overlap. Moyers is interested in Campbell for seemingly the same reason I am: to find a value for myths and religion without the need for dogmatism or provinciality.

The book is mainly focused on Campbell’s philosophy of life, but many subjects are touched upon in these conversations. Campbell was, in his own words, a generalist, so you will find passages in here that will annoy nearly anybody. (A good definition of a generalist is somebody who can irritate specialists in many different fields.) Personally, I find Campbell most irritating when he talks about how bad the world is nowadays since people don’t have enough myths to live by. It seems obvious to me that the contemporary world, more secular than ever before, is also better off than ever before (Trump notwithstanding).

Campbell sometimes shows himself to be a sloppy scholar, such as his quoting of a letter by Chief Seattle, now widely believed to be fake. And I certainly don’t agree with his adoption of Jung’s psychology, which is hardly scientific. Indeed, to reduce old myths to Jung’s psychological system is merely to translate one myth into another. Perhaps Jung’s myth is easier to identify with nowadays, but I reject any claim of scientific accuracy. In sum, there is much to criticize in Campbell’s scholarly and academic approach.

Yet his general message—that myths and religions can be made valuable even for contemporary nonbelievers—has a special relevance for me. I grew up in an entirely nonreligious household, and I’m thankful for that. Nevertheless, I sometimes wonder whether I have missed out on something precious. Religion is as near to a human universal as you are likely to find, and I have no experience with it. Often I find myself reading religious books, exploring spiritual practices, and hanging around cathedrals. Although many beliefs and practices repel me, some I find beautiful, and I am fitfully filled with envy at the tranquility and fortitude that some practitioners seem to derive from their faith.

Campbell has been most valuable to me in his ability to interpret religions metaphorically, and in his insistence that they still have value. Reading Campbell helped me to clarify many of the things I have been thinking and wondering about lately, so I can’t help mixing up my own reflections with Campbell’s. Indeed, there might be more of my opinions in this review than Campbell’s, but here it goes.

One of the main lessons that art, philosophy, and religion teach us is that society imposes upon us superficial values. Wealth, attractiveness, sex, coolness, success, respectability—these are the values of society. And it’s no wonder. The economy doesn’t function well unless we strive to accumulate wealth; competition for mates creates a need for standards of beauty; cultural, political, and economic power is distributed hierarchically, and there are rules of behavior to differentiate the haves from the have-nots. In short, in a complex society these values are necessary—or at any rate inevitable.

But of course, these are the values of the game: the competition for mates, success, power, and wealth. In other words, they are values that differentiate how well you’re doing from your neighbor. In this way they are superficial—measuring you extrinsically rather than intrinsically. One of the functions of art, philosophy, and religion, as I see it, is to remind us of this, and to direct our attention to intrinsic values. Love, friendship, compassion, beauty, goodness, wisdom—these are valuable in themselves, and give meaning and happiness to an individual life.

How many great stories pit one of these personal values against one of the social values? Love against respectability, friendship against coolness, wisdom against wealth, compassion against success. In comedy—stories with happy endings—the intrinsic value is harmonized with the social value. Consider Jane Austen’s novels. In the end, genuine love is shown to be compatible with social respectability. But this is often not true, as tragedy points out. In tragedy, the social value wins against the personal value. The petty feud between the Capulets and the Montagues prevents Romeo and Juliet from being together. Respectability wins over love. But the victory is hollow, since this respectability brings its adherents nothing but pain and conflict.

Art thus dramatizes this conflict to distinguish what is really valuable from what is only apparently so. Philosophy does this not through drama but through reason. (I’m not claiming this is all either art or philosophy does.) Religion does it through ritual. This, I think, is the advantage of religion: it is periodic, it is tied to your routine, and it involves the body and not just the mind. Every week and every day you go through a procedure to remind yourself of what is really worthwhile.

But these things can fail, and often do. Art and philosophy can become academic, stereotyped, or commercial. And religion can become just another social value, used to cloak earthly power in superficial sanctity. As Campbell points out during these interviews, religion must change as society changes, or it will lose its efficacy. To use Campbell’s terminology, the social function of myth can entirely replace its pedagogical function. In such cases, the myths and rituals only serve to strengthen the group identity, to better integrate individuals into the society. When this is taken too far—as Campbell believes it has been nowadays—then the social virtues are taught at the expense of the individual virtues, and the religion just becomes another worldly power.

Myths can become ineffective, not only due to society co-opting their power, but also because myths have a cosmological role that can quickly become outdated. This is where religion comes into conflict with science. As Campbell explains, one of the purposes of myths is to help us find our place in the universe and understand our relationship to the world around us. If the religion is based on an outdated picture of the world, it can’t do that effectively, since then it forces people to choose between connecting with contemporary thought or adhering to the faith.

For my part, I think the conflict between science and religion is ultimately sterile, since it is a conflict about beliefs, and beliefs are not fundamental to either.

When I enter a cathedral, for example, I don’t see an educational facility designed to teach people facts. Rather, I see a place carefully constructed to create a certain psychological experience: the shadowy interior, the shining golden altars, the benevolent faces of the saints, the colored light from the stained glass windows, the smell of incense, the howl of the organ, the echo of the priest’s voice in the cavernous interior, the sense of smallness engendered by the towering roof. There are beliefs about reality involved in the experience, but the experience is not reducible to those beliefs; rather, the beliefs form a kind of scaffolding or context to experience the divine presence.

Science, too, is not a system of beliefs, but a procedure for investigating the world. Theories are overturned all the time in science. The most respected scientists have been proven wrong. Scientific orthodoxy today might be outmoded tomorrow. Consequently, when scientists argue with religious people about their beliefs, I think they’re both missing the point.

So far we have covered Campbell’s social, pedagogical, and cosmological functions of myths. This leaves only his spiritual function: connecting us to the mystery of the world. This is strongly connected with mysticism. By mysticism, I mean the belief that there is a higher reality behind the visible world; that there is an invisible, timeless, eternal plane that supports the field of time and action; that all apparent differences are only superficial, and that fundamentally everything is one. Plotinus is one of the most famous mystics in Western history, and his system exemplifies this: the principle of existence, for him, is “The One,” which is only his name for the unknowable mystery that transcends all categories.

Now, from a rational perspective all this is hard to swallow. And yet, I think there is a very simple thought buried underneath all this verbiage. Mysticism is just the experience of the mystery of existence, the mystery that there is something instead of nothing. Science can explain how things work, but it does not explain why these things are here in the first place. Stephen Hawking expressed this most memorably when he said: “Even if there is only one possible unified theory, it is just a set of rules and equations. What is it that breathes fire into the equations and makes a universe for them to describe?”

It is arguably not a rational question—maybe not even a real question at all—to ask “Why is there something rather than nothing?” In any case, it is unanswerable. But I still often find myself filled with wonder that I exist, that I can see and hear things, that I have an identity, and that I am a part of this whole universe, so exquisite and vast. Certain things reliably connect me with this feeling: reading Hamlet, looking up at the starry sky, and standing in the Toledo Cathedral. Because it is not rational, I cannot adequately put it into words or analyze it; and yet I think the experience of mystery and awe is one of the most important things in life.

Since it is just a feeling, there is nothing inherently rational or anti-rational in it. I’ve heard scientists, mystics, and philosophers describe it. Yes, they describe it in different terms, using different concepts, and give it different meaning, but all that is incidental. The feeling of wonder is the thing, the perpetual surprise that we exist at all. Campbell helps me to connect with and understand that, and for that reason I am grateful to him.


On Egotism and Education

A while ago a friend asked me an interesting question.

As usual, I was engrossed in some rambling rant about a book I was reading—no doubt enlarging upon the author’s marvelous intellect (and, by association, my own). My poor friend, who is by now used to this sort of thing, suddenly asked me:

“Do you really think reading all these books has made you a better person?”

“Well, yeah…” I stuttered. “I think so…”

An awkward silence took over. I could truthfully say that reading had improved my mind, but that wasn’t the question. Was I better? Was I more wise, more moral, calmer, braver, kinder? Had reading made me a more sympathetic friend, a more caring partner? I didn’t want to admit it, but the answer seemed to be no.

This wasn’t an easy thing to face up to. My reading was a big part of my ego. I was immensely proud, indeed even arrogant, about all the big books I’d gotten through. Self-study had strengthened a sense of superiority.

But now I was confronted with the fact that, however much more knowledgeable and clever I had become, I had no claim to superiority. In fact—although I hated even to consider the possibility—reading could have made me worse in some ways, by giving me a justification for being arrogant.

This phenomenon is by no means confined to myself. Arrogance, condescension, and pretentiousness are ubiquitous qualities in intellectual circles. I know this both at first- and second-hand. While lip-service is often given to humility, the intellectual world is rife with egotism. And often I find that the more well-educated someone is, the more likely they are to assume a condescending tone.

This is the same condescending tone that I sometimes found myself using in conversations with friends. But condescension is of course more than a tone; it is an attitude towards oneself and the world. And this attitude can be fostered and reinforced by habits you pick up through intellectual activity.

One of these habits, for me the one most closely connected with reading philosophy, is argumentativeness. Philosophy is, among other things, the art of argument; and good philosophers are able to bring to their arguments a level of rigor, clarity, and precision that is truly impressive. The irony here is that there is far more disagreement in philosophy than in any other discipline. To be fair, this is largely due to the abstract, mysterious, and often paradoxical nature of the questions philosophers investigate—which resist even the most thorough analysis.

Nevertheless, given that their professional success depends upon putting forward the strongest argument to a given problem, philosophers devote a lot of time to picking apart the theories and ideas of their competitors. Indeed, the demolition of a rival point of view can assume supreme importance. A good example of this is Gilbert Ryle’s The Concept of Mind—a brilliant and valuable book, but one that is mainly devoted to debunking an old theory rather than putting forward a new one.

This sort of thing isn’t confined to philosophy, of course. I have met academics in many disciplines whose explicit goal is to quash another theory rather than to provide a new one. I can sympathize with this, since proving an opponent wrong can feel immensely powerful. To find a logical fallacy, an unwarranted assumption, an ambiguous term, an incorrect generalization in a competitor’s work, and then to focus all your firepower on this structural weakness until the entire argument comes tumbling down—it’s really satisfying. Intellectual arguments can have all the thrill of combat, with none of the safety hazards.

But to steal a phrase from the historian Richard Fletcher, disputes of this kind usually generate more heat than light. Disproving a rival claim is not the same thing as proving your own claim. And when priority is given to finding the weaknesses rather than the strengths of competing theories, the result is bickering rather than the pursuit of truth.

To speak from my own experience, in the past I got to the point where I considered it a sign of weakness to agree with somebody. Endorsing someone else’s conclusions without reservations or qualifications seemed just spineless. And to fail to find the flaws in another thinker’s argument—or, worse yet, to put forward your own flawed argument—was simply mortifying for me, a personal failing. Needless to say, this mentality is neither desirable nor productive, either personally or intellectually.

Besides argumentativeness, another condescending habit that intellectual work can reinforce is name-dropping.

In any intellectual field, certain thinkers reign supreme. Their theories, books, and even their names carry a certain amount of authority; and this authority can be commandeered by secondary figures through name-dropping. This is more than simply repeating a famous person’s name (although that’s common); it involves positioning oneself as an authority on that person’s work.

Two books I read recently—Mortimer Adler’s How to Read a Book, and Harold Bloom’s The Western Canon—are prime examples of this. Both authors wield the names of famous authors like weapons. Shakespeare, Plato, and Newton are bandied about, used to cudgel enemies and to cow readers into submission. References to famous thinkers and writers can even be used as substitutes for real argument. This is the infamous argument from authority, a fallacy easy to spot when explicit, but much harder when used in the hands of a skilled name-dropper.

I have certainly been guilty of this. Even while I was still an undergraduate, I realized that big names have big power. If I even mentioned the names of Dante or Milton, Galileo or Darwin, Hume or Kant, I instantly gained intellectual clout. And if I found a way to connect the topic under discussion to any famous thinker’s ideas—even if that connection was tenuous and forced—it gave my opinions weight and made me seem more “serious.” Of course I wasn’t doing this intentionally to be condescending or lazy. At the time, I thought that name-dropping was the mark of a dedicated student, and perhaps to a certain extent it is. But there is a difference between appropriately citing an authority’s work and using their work to intimidate people.

There is a third way that intellectual work can lead to condescending attitudes, and that is, for lack of a better term, political posturing. This particular attitude isn’t very tempting for me, since I am by nature not very political, but this habit of mind is extremely common nowadays.

By political posturing I mean several related things. Most broadly, I mean when someone feels that people (himself included) must hold certain beliefs in order to be acceptable. These can be political or social beliefs, but they can also be more abstract, theoretical beliefs. In any group—be it a university department, a political party, or just a bunch of friends—a certain amount of groupthink is always a risk. Certain attitudes and opinions become associated with the group, and they become a marker of identity. In intellectual life this is a special hazard because proclaiming fashionable and admirable opinions can replace the pursuit of truth as the criterion of acceptability.

At its most extreme, this kind of political posturing can lead to a kind of gang mentality, wherein disagreement is seen as evil and all dissent must be punished with ostracism and mob justice. This can be observed in the Twitter shame campaigns of recent years, but a similar thing happens in intellectual circles.

During my brief time in graduate school, I felt an intense and ceaseless pressure to espouse leftist opinions. This pressure seemed ubiquitous: students and professors sparred with one another, in person and in print, by trying to prove that their rivals were not genuinely right-thinking (or “left-thinking,” as the case may be). Certain thinkers could not be seriously discussed, much less endorsed, because their works had intolerable political ramifications. Contrariwise, questioning the conclusions of properly left-thinking people could leave you vulnerable to accusations about your fidelity to social justice or economic equality.

But political posturing has a milder form: know-betterism. Know-betterism is political posturing without the moral outrage, and its victims are smug rather than indignant.

A.J. Ayer’s Language, Truth and Logic comes to mind, wherein the young philosopher, still in his mid-twenties, simply dismisses the work of Plato, Aristotle, Spinoza, Kant, and others as hogwash, because it doesn’t fit into his logical positivist framework.

Indeed, logical positivism is an excellent example of the pernicious effects of know-betterism. In retrospect, it seems incredible that so many brilliant people endorsed it, because logical positivism has crippling and obvious flaws. Yet not only did people believe it; they thought it was “The Answer”—the solution to every philosophical problem—and considered anyone who thought otherwise a crank or a fool, somebody who couldn’t see the obvious. This is the danger of groupthink: when everyone “in the know” believes something, it can seem obviously right, regardless of the strength of the ideas.

The last condescending attitude I want to mention is rightness—the obsession with being right. Now of course there’s nothing wrong with being right. Getting nearer to the truth is the goal of all honest intellectual work. But to be overly preoccupied with being right is, I think, both an intellectual and a personal shortcoming.

As far as I know, the only area of knowledge in which real certainty is possible is mathematics. The rest of life is riddled with uncertainty. Every scientific theory might, and probably will, be overturned by a better theory. Every historical treatise is open to revision when new evidence, priorities, and perspectives arise. Philosophical positions are notoriously difficult to prove, and new refinements are always around the corner. And despite the best efforts of the social sciences, the human animal remains a perpetually surprising mystery.

To me, this uncertainty in our knowledge means that you must always be open to the possibility that you are wrong. The feeling of certainty is just that—a feeling. Our most unshakeable beliefs are always open to refutation. But when you have read widely on a topic, studied it deeply, thought it through thoroughly, it gets more and more difficult to believe that you are possibly in error. Because so much effort, thought, and time has gone into a conclusion, it can be personally devastating to think that you are mistaken.

This is human and understandable, but it can also clearly lead to egotism. For many thinkers, it becomes their goal in life to impose their conclusions upon the world. They struggle valiantly for the acceptance of their opinions, and grow resentful and bitter when people disagree with them or, worse, ignore them. Every exchange thus becomes a struggle to push your views down another person’s throat.

This is not only an intellectual shortcoming—since it is highly unlikely that your views represent the whole truth—but it is also a personal shortcoming, since it makes you deaf to other people’s perspectives. When you are sure you’re right, you can’t listen to others. But everyone has their own truth. I don’t mean that every opinion is equally valid (since there are such things as uninformed opinions), but that every opinion is an expression, not only of thoughts, but of emotions, and emotions can’t be false.

If you want to have a conversation with somebody instead of giving them a lecture, you need to believe that they have something valuable to contribute, even if they are disagreeing with you. In my experience it is always better, personally and intellectually, to try to find some truth in what someone is saying than to search for what is untrue.

Lastly, being overly concerned with being right can make you intellectually timid. Going out on a limb, disagreeing with the crowd, putting forward your own idea—all this puts you at risk of being publicly wrong, and thus will be avoided out of fear. This is a shame. The greatest adventure you can take in life and thought is to be extravagantly wrong. Name any famous thinker, and you will be naming one of the most gloriously incorrect thinkers in history. Newton, Darwin, Einstein—every one of them has been wrong about something.

For a long time I have fallen victim to all of these mentalities—argumentativeness, name-dropping, political posturing, know-betterism, and rightness—and to a certain extent I probably always will. What makes them so easy to fall into is that they are positive attitudes taken to excess. It is admirable and good to subject claims to logical scrutiny, to read and cite major authorities, to advocate for causes you think are right, to respect the opinions of your peers and colleagues, and to prioritize getting to the truth.

But taken to excess, these habits can lead to egotism. They certainly have with me. This is not a matter of simple vanity. Not only can egotism cut you off from real intimacy with other people, but it can lead to real unhappiness, too.

When you base your self-worth on beating other people in argument, being better read than your peers, being on the morally right side, being in the know, being right and proving others wrong, then you put yourself at risk of having your self-worth undermined. To be refuted will be mortifying, to be questioned will be infuriating, to be contradicted will be intolerable. Simply put, such an attitude will put you at war with others, making you defensive and quick-tempered.

An image that springs to mind is of a giant castle with towering walls, a moat, and a drawbridge. On the inside of this castle, in the deepest chambers of the inner citadel, is your ego. The fortifications around your ego are your intellectual defenses—your skill in rhetoric, logic, argument, and debate, and your impressive knowledge. All of these defenses are necessary because your sense of self-worth depends on certain conditions: being perceived, and perceiving yourself, as clever, correct, well-educated, and morally admirable.

Intimacy is difficult in these circumstances. You let down the drawbridge for people you trust, and let them inside the walls. But you test people for a long time before you get to this point—making sure they appreciate your mind and respect your opinions—and even then, you don’t let them come into the inner citadel. You don’t let yourself be totally vulnerable, because even a passing remark can lead to crippling self-doubt when you equate your worth with your intellect.

Thus the fundamental mindset that leads to all of the bad habits described above is the belief that being smart, right, or knowledgeable is the source of your worth as a human being. This is dangerous, because it means that you constantly have to reinforce the idea that you possess all of these qualities in abundance. Life then becomes a constant performance, an act for others and for yourself. And because a part of you knows that it’s an act—a voice you try to ignore—it also leads to considerable bad faith.

As for the solution, I can only speak from my own experience. The trick, I’ve found, is to let down my guard. Every time you defend yourself you make yourself more fragile, because you tell yourself that there is a part of you that needs to be defended. When you let go of your anxieties about being wrong, being ignorant, or being rejected, your intellectual life will be enriched. You will find it easier to learn from others, to consider issues from multiple points of view, and to propose original solutions.

Thus I can say that reading has made me a better person, not because I think intellectual people are worth more than non-intellectuals, but because I realized that they aren’t.

Review: The Ascent of Man

The Ascent of Man by Jacob Bronowski

My rating: 5 of 5 stars

Fifty years from now, if an understanding of man’s origins, his evolution, his history, his progress is not the commonplace of the schoolbooks, we shall not exist.

I watched this series right after finishing Kenneth Clark’s Civilisation, as I’d heard The Ascent of Man described as a companion piece. So like my review of Clark’s work, this review is about the documentary and not the book (though since the book is just a transcription of the series, I’m sure it applies to both).

The Ascent of Man is a remarkable program. I had doubts that anyone could produce a series to match Civilisation, but Bronowski made something that might even be better. Bronowski was a polymath: he did work in mathematics, biology, physics, history, and even poetry. In this program, his topic is the history of science. Yet for Bronowski, the word “science” refers not only to the modern scientific method; it encompasses all of humanity’s efforts to understand and manipulate the natural world.

We thus begin with Homo erectus, learning how to chip away stone to make tools. As Bronowski notes, this simple ability, to chip away at a stone until a cutting edge is left, is a remarkable indication of human uniqueness. Since the behavior is learned rather than instinctive, it requires a preconception of what the toolmaker wants to create: a certain amount of imagination is needed to picture the goal before it is realized. What’s more, creating a stone tool requires a sense of the structural properties of the rock. (I’ve actually tried making stone tools with various types of rock, and let me tell you that it’s not so easy. Even with an archaeologist giving me advice, I was only able to create stone tools of the sophistication of an Australopithecus—randomly beating the stone until a sharp edge was created.) Thus both our creative drive and our knowledge are involved in this quintessentially human activity. “Every animal leaves traces of what he was. Man alone leaves traces of what he created.”

This brings Bronowski to one of his main points, one of the themes of this series: that art and science are not fundamentally different; rather, they are two manifestations of the human spirit. What is this human spirit? It is a composite of many qualities, what Bronowski calls “a jigsaw of human faculties,” which include our wide behavioral flexibility, our capacity to play, our need to create, our curiosity about the natural world, our sense of adventure, our love of variety. Indeed, these can be pithily described by saying that humans retain many childlike characteristics throughout their lives. The name of the last episode is “The Long Childhood.”

One of my favorite sequences in this documentary is when Bronowski takes the viewer from the posts and lintels of the Greek temples, to the arches of the Roman aqueduct in Segovia, to the somewhat prettier arches of the Mezquita in Cordoba, to the cathedral at Reims with its magnificent flying buttresses. Each of these structures, he explains, is a more sophisticated solution to the same problem: how do you create a covered space out of stone? The post-and-lintel system used by the Greeks leads to a forest of columns, and the Mezquita, although less crowded, is still filled with arches. The medieval Christians achieved a magnificent solution by placing the buttresses on the outside, thus leading to the towering, open interior of Reims.

We’re used to thinking of this development as an architectural triumph, but as Bronowski points out, it was also an intellectual triumph. This progression represents better and better understandings of the structural properties of stone, of the force of gravity, and of the distribution of weight. And when you see it play out in front of your eyes, it’s hard to shake the impression that these marvelous works are also progressively more elegant solutions to a mathematical puzzle. This is just one example of Bronowski’s talent: to see the artistic in the scientific and the scientific in the artistic; and he does this by seeing the human spirit in all of it.

Here’s another example. Bronowski wants to talk about how humanity has come to understand space, and how this understanding of space underpins our knowledge of structure. How does he do it? He goes to the Alhambra, and analyzes the symmetry in the tiles of the Moorish Palace. Then, he bends down and spreads a bunch of crystals on the ground, and begins to talk about the molecular symmetry that gave rise to them. It’s such a stunning juxtaposition. How many people would think to compare Moorish architecture with modern chemistry? But it’s so appropriate and so revealing that I couldn’t help but be awed.

As the title suggests, this series is not simply about science (or art), but about science through history. Bronowski aims to show how humanity, once freed from the constraints of instinct, used a combination of logic and imagination to achieve ever-deeper conceptions of our place in the universe. This is the Ascent of Man: a quest for self-knowledge. It’s sometimes hard for us moderns to grasp this, but consider that we are living in one of the brief periods in history in which we can explain the formation of the earth, the origin of our species, and even the workings of our own brains. Imagine not knowing any of that. It’s hard to envy former ages when you consider that their sense of their place in the universe was based on myth supported by authority, or was simply a mystery. I’m sure (and I earnestly hope) that future generations will believe the same about us.

Bronowski’s final message is a plea to continue this ascent. This means spreading an understanding and an appreciation of science, as his program tries to do. This strikes me as terribly important. I’ve met so many people who say things like “Science is a form of faith” or “Science can’t solve every problem” or “Science is dehumanizing and arrogant.” It’s sad to hear intelligent people say things like this, for it simply isn’t true. It’s an abuse of language to call science a faith; if science is a faith, then what isn’t? And yes, of course science can’t solve every problem and can’t answer every question; but what can? Science can solve some problems, and can do so very well. And science, as Bronowski points out, is the very opposite of dehumanizing and arrogant. Science is a most human form of knowledge, born of humility about our intellectual powers, based on repeated mistakes and guesses, always pressing forward into the unknown, always revising its opinions based on evidence. Atrocities are committed, not by people who are trained to question their own beliefs, but by ideologues who are convinced they are right.

This is Bronowski’s essential message. But as in any good story, the telling is half of it. As I’ve mentioned above, Bronowski and his team are brilliant at finding unexpected ways to illustrate abstract ideas. This series is full of wonderful and striking visual illustrations of Bronowski’s points. What’s more, the man is a natural storyteller, and he effectively brings to life many of this series’ heroes: Newton, Galileo, Alfred Russel Wallace, Mendel. He’s also a poet; one of his books is a study of William Blake’s poetry. This not only gives him a knack for similes, but helps him to explain how science is fundamentally creative. One of my favorite scenes is when Bronowski compares abstract portraits of a man to the ways that various scientific instruments—radar, infrared, cameras, X-rays—detect the man’s face. As he explains, both the portrait and these readings are interpretations of their subject.

The cinematography is also excellent. There are some sequences in this documentary that are still impressive, saturated as we are with CGI. There are even some quite psychedelic sections. One of my favorites was a sequence of microscopic shots of human cells with Pink Floyd (who contributed music) jamming chaotically in the background. Unlike Clark’s Civilisation, which uses exclusively ‘classical’ music and is devoid of special effects, this documentary’s style is surprisingly modern and even edgy. Another thing Bronowski does that Clark doesn’t is include some information on non-Western cultures, from Mesoamerica, Japan, China, and Easter Island.

Yes, there are some parts of this that are outdated. Most obviously, much of the scientific information is no longer accurate—particularly the information on human evolution in the first episode. This is unavoidable, and is in fact a tribute to the ideals Bronowski championed. More jarring are Bronowski’s somewhat negative assessments of the culture of Easter Island and the lifestyle of nomadic peoples. Less controversially, he also has some negative words for Hegel. (Did you know that, as a young man, Hegel published an absurd thesis arguing that the distances of the planets’ orbits had to conform to a number series?) Another mark of this program’s age is that Bronowski several times shows nudity and even a human birth. This would never fly on television today, at least not in the States.

But these flaws are minor in such a tremendous program. The Ascent of Man is a landmark in the history of science education and of documentary making, and a stirring vision of the progress of humanity by a brilliant and sympathetic man. I hope you get a chance to watch it.

View all my reviews