Review: Stoic Pragmatism

Stoic Pragmatism by John Lachs
My rating: 3 of 5 stars

The questions of philosophy will continue to haunt us so long as we remain finite, baffled animals. The fact that philosophy offers no final answers is not an impediment but a lesson. That first great lesson of philosophy is that we must learn to live with uncertainty.

Since it’s that time of year, I’ve lately been seeing many of my friends—struggling artists, mostly—reposting graduation speeches by famous actors, musicians, entrepreneurs, and other celebrities. So many of these communal pep talks boil down to one message: persist. Every artist worth her salt has a story about how she struggled in the purgatory of unsuccessful oblivion for ten centuries—eating ramen and living in a closet—before finally ascending to the paradise of fame. Jonathan Goldsmith, for example—now famous as the Most Interesting Man in the World from the Dos Equis ads—was an obscure actor for over forty years before his “breakout” role.

But success stories and inspiring graduation speeches all have one obvious, debilitating shortcoming: survivor bias. Of course, every successful person was once unsuccessful and then became successful; so for them hard work paid off. But the vital question is not whether hard work ever pays off, but how often, and for whom? History has been the silent witness of generations of brilliant musicians and talented actors who remained obscure all their lives. The world is simply stuffed with artists of all kinds, many mediocre, but a fair number extremely talented—far more than will ever be able to support themselves in comfort with their craft. The plain fact is that, even if every budding artist in those ceremonies follows the advice to persist, not even half will achieve anything close to the level of success of the person on the podium.

And, indeed, even if there is an appealing wisdom in carrying on in the teeth of disappointment and failure, there is also a wisdom in throwing in the towel. Better to cut your losses and do something else, rather than struggle pointlessly for years on end. The real difficulty, though, is knowing which choice to make. What if you give up right when you’re on the cusp of a breakthrough? Or what if you persist for years and get nowhere? And this isn’t just a question for young artists; it is one of the basic questions of life. I recently encountered it in the philosophy of science: When should a hypothesis be abandoned or pursued? An overly tractable scientist may give up on a truly promising theory at the first hint of difficulty; and an overly stubborn scientist may spend a career working on a bankrupt idea, in the vain hope of making it work.

Seeking an analysis of this dilemma, I picked up John Lachs’s book, Stoic Pragmatism, which explicitly promises to address just this question. Lachs is attempting here to combine the pragmatist doctrine that we must improve the world with the stoic resignation to the inevitable. Unfortunately, he does not get any further than noting what I hope is obvious—that we should improve what we can and resign ourselves to what we can’t change. This is true; but of course we very often have no idea what we can or can’t change, what will or won’t work, whether we’ll be successful or not, which leaves us in the same baffled place we started. Insofar as truly answering this question would require knowing the future, it is unanswerable. Uncertainty about success and the need to commit to potentially doomed actions are inescapable elements of our existential situation. The best we can hope for, I think, are a few good rules of thumb; and these will likely depend on personal preference.

In any case, this book is far more than an analysis of this common dilemma; it is an attempt to give a complete picture of Lachs’s philosophical perspective. Lachs promises a new philosophical system, but delivers only a disorganized gallimaufry of opinions that do not cohere. For example, Lachs begins by denigrating the professionalization of philosophy, holding that philosophy is not a discipline that seeks the truth—he asserts that not a single proposition would command the assent of a majority of practitioners (though I disagree!)—rather, philosophy is better thought of as intellectual training that helps us to make sense of other activities. But the book includes lengthy analyses of ethics, ontology, and epistemology, so apparently Lachs does see the value in answering the traditional problems of philosophy. To make matters worse, Lachs continually excoriates philosophers who do not practice what they preach; and then he goes on to outline an ethical system wholly compatible with a middle-class, bourgeois lifestyle (our main obligations are to do our jobs and to leave other people alone, it seems).

I am being unfairly satirical. I actually agree with most of what Lachs says; and this of course means I must make fun of him. (According to the “Lotz Theory of Agreement,” no intellectual will permit herself to simply agree with another intellectual, but will search out any small point of difference, even a difference in attitude or emphasis, in order to seem superior.) Lachs is an inspiring example of an academic trying to address himself to broader problems using more accessible language. He is an attractive thinker and a skilled writer, a humane intellectual capable of fine prose.

Nevertheless, I must admit that this book makes me despair a little. Here we have a man explicitly and repeatedly repudiating his profession and trying to write for non-specialists; and yet Lachs is so palpably an academic that he simply cannot do it. The book begins with his opinions about the canonical philosophers, frequently breaks off to criticize fellow professors and intellectual movements, and includes academic controversies (such as how to interpret Santayana’s use of the word “matter” in his ontological work) of no interest to a general reader. Lachs tries to come up with an ethical system that he can follow himself as an example of a committed intellectual, and then ends up creating an ethical system with no obligations other than to do one’s job (which, in his case, consists of writing books and advising graduate students). Lachs’s primary example of committed moral action, to which he returns again and again, is signing a petition to remove the president of his university (and he notes that most of his colleagues refused to do even this!).

I am being unduly harsh on Lachs. Really, he is one of the very best examples of what academics can and should do to engage with the world around them. And yet his example demonstrates, to me, the enormous gap that separates academia from the rest of society. Lachs dwells again and again on the pointless abstractions of professional philosophers and the wisdom of everyday people, and then the next moment he launches into an analysis of the concept of the individual in the metaphysics of Josiah Royce—a figure who interests not even most professional philosophers, much less the general public—and all this in the context of a book that emphasizes self-consistency over and over again.

This makes me sad, because I think we really do need more intellectuals in the public sphere, intellectuals who are capable of communicating clearly and elegantly to non-specialists about problems of wide interest. And yet our age seems to be conspicuously bereft of anyone resembling a public intellectual. Yes, we have popularizers, but that’s a different thing entirely.

Seeking an answer to this absence, I usually return to the model of specialization in the university.

To get a doctorate, you need to write a dissertation on something, usually a topic of excessive, often ludicrous specificity—the upper-arm tattoos of Taiwanese sailors, the arrangement of furniture inside French colonial homes in North Africa in the 1890s, and so on. This model originated in German research universities, I believe; and indeed it makes perfect sense for many disciplines, particularly the natural sciences. But I do not think this model is at all suited to the humanities, where seeing human things in a wide context is so important. This is not to deny that specialized research can make valuable contributions in the humanities—indeed, I think it is necessary, especially in fields like history—but I do not think it should be the only, or even the dominant, pattern for academics in the humanities.

If I can put forward my own very modest proposal in this review, it would be the creation of another class of academic—let’s call them “scholars”—who would focus, not on specialized research, but on general coverage in several related fields (I’m thinking specifically of philosophy, literature, and history, but this is just one possibility). These scholars would be mainly responsible for teaching courses, not publishing research; and this would give them an incentive to communicate to undergraduates, and by extension the general public, rather than to disappear into arcane regions of the inky night.

These scholars could also be responsible for writing reviews and critiques of research. Their more general knowledge might make them more capable of seeing connections between fields; and by acting as gatekeepers to publication (in the role of reviewers), they could serve as a check on the groupthink, and also the lack of accountability, that can prevail within a discipline where sometimes research is so obscure that nobody outside the community can adequately judge it (thus providing a shield to shoddy work).

I’m sure my own proposal is impractical, has already been tried, is already widespread, or is just plain bad, and so on. (Even if you agree with it, the Lotz Theory of Agreement will apply.) But whatever the solution, I think it is a palpable and growing problem that there is so much intellectual work—especially in the humanities, where there is far less excuse for unintelligibility and sterile specialization—that is totally disconnected from the wider society, and is unreadable and uninteresting to most people, even well-educated people. We simply cannot have a functioning society where intellectuals only talk to each other in their own special language. Lachs, to his credit, is doing his best to break this pattern. But this book, to me, is evidence that the problem is far too serious for well-intentioned individuals to solve on their own.

View all my reviews

Review: Poet in New York

Poet in New York: A Bilingual Edition by Federico García Lorca
My rating: 4 of 5 stars

I want to cry because I feel like it
as the boys in the back row cry,
because I am not a man nor a poet nor a leaf
but a wounded pulse that probes the things of the other side.

Poetry is an odd thing. You notice this when you encounter poetry in a second language. This happened to me a few weeks ago, when I went to a poetry reading in Madrid. There were four or five poets there, some of them fairly well-known, with a crowd of hushed listeners hanging on their every word. Meanwhile, with my very imperfect Spanish, I was only able to catch bits of phrases and scattered words that added up to nothing.

“Look, I can be a poet,” I said to a friend after the show: “A cow is a moon, / a moon is a balloon.” That’s really how it sounded to me.

In a way, this isn’t surprising, of course; but it got me thinking how strange a thing poetry is. We string phrases together that, interpreted literally, are either false, absurd, meaningless, or banal; and yet somehow, when the poetry works, these phrases open up subtle emotional reactions in their listeners. Why is it that a certain phrase seems just right, inexhaustibly expressive and unutterably perfect, while a similar phrase may be dead on arrival, impotent, sterile, and maybe even unpleasant? Bad poetry, indeed, can be excruciating and embarrassing to witness, perhaps because it is in bad poetry that the essential strangeness of the act of poetry is most acutely manifest. We feel that this whole thing is silly—trying to make portentous-sounding phrases that signify close to nothing. And yet the genuine article, once witnessed, is undeniable.

I usually group poetry along with novels and short stories, as literature; but lately I think that poetry may be closer to another art form: dance. Dance is distinct from every other kind of movement—from walking to golf to sign language—in that it is not oriented towards any external goal. That is, the movement itself is the goal; the point is to move, and to move well. In poetry, too, our words—which normally point us towards the world, if only to an imaginary or a hypothetical world—are stripped as much as possible of their normal denoting function; the point becomes, rather, the pure manipulation of diction and grammar, in much the same way that, in dance, the point becomes the pure movement of limb and trunk.

This is a healthy thing, I think, since in life we can get so preoccupied with the attainment of a goal that we become blind to everything that does not advance our progress towards our object. A coach of a football team, for example, is only concerned with how well his players’ actions increase the likelihood of winning; and likewise, normally when we use language, we are using it to accomplish something specific, from ordering pizza to chiding children. Dance and poetry, by stripping away the intentionality of the act, reveal the subtle beauty in the activity itself, allowing us to slow down, to appreciate the rhythm of a word or the gentle flexion of an arm.

I must hasten to add that this description of poetry and dance does not apply equally to all examples. Alexander Pope’s poetry approaches very nearly to prose in its use of denotation; and T.S. Eliot’s “The Waste Land” is on the other side of the spectrum. A similar spectrum applies in the case of dance, I suppose.

Federico García Lorca’s poetry is much closer to Eliot’s in this regard, perhaps even further along in its tendency towards connotation. This makes his poetry doubly hard for a foreigner like me to appreciate, since the specific emotional flavors of his words are bland in my mouth. As a young man Lorca lived in the famous Residencia de Estudiantes, in Madrid, where he became close friends with Dalí. The two influenced each other, both moving towards the surrealism that was then becoming trendy in the art world.

Lorca wrote this book years later, during and after his 1929–30 visit to New York City, where he witnessed the Stock Market Crash. Economic depression or not, however, the inhuman vastness of the city, with its crowds and concrete, its money-obsessed workers, its poor and homeless, its racial discrimination and absence of nature, seems to have made a deep impression on the rural Andalusian poet. These poems are his anguished response to this experience.

Lorca’s poetry is surreal in the textbook sense that he uses a succession of vivid, concrete images that, taken together, add up to something nebulous and unreal. Much like Dalí, Lorca has a talent for creating bizarre images that nevertheless manage to be emotionally compelling. Opening the collection more or less at random I find:

All is broken in the night,
its legs spread wide over the terraces.
All is broken in the warm pipes
of a terrible, silent fountain.

Admittedly it does take some time to find the odd beauty in the apparently random, unconnected pictures. My first instinct was to read them like metaphors; but if Lorca did indeed have something specific in mind that he was trying to allegorize, the allegories are much too complicated and disjointed to be deciphered. Rather, I think these poems must be read simply for the beauty of the language, the striking collisions of words, the flashes of light and the rumblings of sound. The poems seem to capture nothing more nor less than an emotional mood—different shades of desolation—that presents itself to the conscious mind in a kind of personal mythology, as in a dream. Dalí was deeply influenced by Freud during his stay in the student residence, and I wouldn’t be surprised if Lorca was too.

Even if it is difficult to articulate the structure and meaning of Lorca’s image-world, it is certainly not random. Certain words and images come up again and again, as in a dream sequence, being shuffled and re-shuffled throughout the collection. Some of these words are oil, ant, worm, thigh, moon, void, footprint, hollow, glass, night, wounded, agony, sky, cracked, death, coffin, iron… The ultimate effect of these words, recombined again and again, is cumulative; they create echoes of themselves in the reader’s mind, calling up half-remembered associations from other poems, creating an emotional coherence in the literally incoherent text.

Look at concrete shapes seeking their void.
Mistaken dogs and bitten apples.
Look at the longing, the anguish of a sad fossil world
that cannot find the accent of its first sob.

The emotional resonance of the words themselves is also important, something that is unfortunately lost in translation. For example, the word for “oil,” aceite, has an interesting blend of comforting familiarity and a tint of the exotic. I think this is because the word originally comes from Arabic, and maintains a certain foreign flavor, even as it denotes something absolutely integral to the Spanish culture: olive oil, which is used in everything. The word also brings up the rolling olive fields, stumpy trees on sandy soil, that fill Lorca’s Andalucía; and this again calls to mind the age-old farming tradition, the intimate connection with the land, totally absent in New York City. There is also the double association of oil as integral to cooking and as something potentially toxic and polluting. A native Spaniard will likely disagree with this chain of associations, but I think the word is undeniably resonant.

Ultimately, though, I don’t think I can articulate exactly why the text of these poems is gripping, in the same way that I cannot articulate exactly why I find some dancers compelling and others not. You cannot learn anything about New York City from these poems, and arguably you can’t learn very much about Lorca, either. I’m not even sure that the cliché is correct, that these poems can “teach you about yourself.” Maybe they don’t teach anything except how to feel as Lorca felt. I don’t think that’s a problem, though, since the point of reading is not always to learn about something, just as the point of moving isn’t always to get somewhere. Sometimes we read simply for the pleasure of the text.

View all my reviews

Review: Defending Science—Within Reason

Defending Science—Within Reason: Between Scientism and Cynicism by Susan Haack

My rating: 4 of 5 stars

Is quark theory or kwark theory politically more progressive?—the question makes no sense.

Ever since I can remember I was fascinated by science and its discoveries. Like Carl Sagan and Stephen Jay Gould, I grew up in New York City going routinely to the Museum of Natural History. I wondered at the lions and elephants in the Hall of African Mammals; I gazed in awe at the massive dinosaur fossils, which dwarfed even my dad in height and terror; I spent hours in the Hall of Ocean Life gaping at the dolphins, the sea lions, and the whales. The diorama of a sperm whale fighting a giant squid—two massive, monstrous forms, shrouded in the darkness of the deep sea—held a particular power over my childhood imagination. I must have made half a thousand drawings of that scene, the resolute whale battling the hideous squid in the imponderable depths.

Growing up, I found that not everybody shared my admiration for the process of science and its discoveries. This came as a shock. Even now, no intellectual stance upsets me more than science denial. To me, denying science has always seemed tantamount to denying both the beauty of the world and the power of the human mind. And yet here we are, in a world fundamentally shaped by our scientific knowledge, full of people who, for one reason or another, deny the validity of the scientific enterprise.

The reasons for science denial are manifold. Most obviously there is religious fundamentalism; and not far behind is corporate greed in industries, such as the coal or the cigarette industry, that might be hurt by the discoveries of scientists. These forms of science denial often take the form of anti-intellectualism; but what troubles me more are the various forms of science denial in intellectual circles: sociologists who see scientific discoveries as political myth-making, literary theorists who see science as a rhetoric of power, philosophers who see knowledge as wholly relative. Add to this the more plebeian forms of science denial often encountered on the left—such as skepticism about GMOs and vaccines—and we have a disbelief that extends across the political spectrum, throughout every level of education and socio-economic status.

And all this is not to mention the science-worship that has grown up, partly as a response to this skepticism. So often we see headlines proclaiming “Science Discovers” or “Scientists Have Proved” and so on; and time and again I’ve heard people use “because, science says” as an argument. Scientists are treated as a priestly class, handing out truths from high up above, truths reached by inscrutable methods using arcane theories and occult techniques, which must be trusted on faith. Needless to say, this attitude is wholly alien to the spirit of the scientific enterprise, and ultimately plays into the hands of skeptics who wish to treat modern science as something on par with traditional religion. Also needless to say (I hope), both the supinely adoring and the snobbishly scorning attitudes fail to do justice to what science really is and does.

This is where Susan Haack comes in. In this book, Haack attempts to offer an epistemological account of why the sciences have been effective, as well as a critique of the various responses to the sciences—from skepticism, to cynicism, to paranoia, to worship, to deference—to show how these responses misunderstand or mischaracterize, overestimate or underestimate, what science is really all about. Along the way, Haack also offers her opinions on the relation between the natural and the social sciences, science and the law, science and religion, science and values, and the possible “end of science.”

She begins, as all worthy philosophers must, by criticizing her predecessors. The early philosophers of science made two related errors that prevented them from coming to grips with the enterprise. The first was assuming that there was such a thing as the “scientific method”—a special methodology that sets the sciences apart from other forms of inquiry. The second mistake was assuming that this methodology was a special form of logic—deduction, induction, probability, and so on—used by scientists to achieve their results. In other words, they assumed that they could demarcate science from other forms of inquiry; and that this demarcation was logical in nature.

Haack takes issue with both of these assumptions. She asserts that, contrary to popular belief, there is no such thing as a special “scientific method” used only by scientists and not by any other sort of inquirer. Rather, scientific inquiry is continuous with everyday inquiry, from detective work to historical research to trying to find where you misplaced your keys this morning: it relies on the collection of evidence, coming up with theories to explain a phenomenon, testing different theories against the available evidence and new discoveries, using other inquirers to help check your judgment, and so on.

Because of this, Haack objects to the use of the adjective “scientific” as an honorific, as a term of epistemological praise—such as in “scientifically tested toothpaste”—since “scientific” knowledge is the same sort of knowledge as every other sort of knowledge. The only differences between “scientific” knowledge and everyday knowledge are, most obviously, the subject matter (chemistry and not car insurance rates), and less obviously how scrupulously it has been tested, discussed, and examined. To use her phrase, scientific knowledge is like any other sort of knowledge, only “more so”—the fruit of more dedicated research, and subjected to more exacting standards.

What sets the natural sciences apart, therefore, is not a special form of logic or method, but various helps to inquiry: tools that extend the reach of human sensation; peer-reviewed journals that help both to check the quality of information and to pool research from different times and places; mathematical techniques and computers to help deal with quantitative data; linguistic innovations and metaphors that allow scientists to discuss their work more precisely and to extend the reach of the human imagination; and so on.

Haack’s most original contribution to the philosophy of science is her notion of ‘foundherentism’ (an ugly word), which she explains by the analogy of a crossword puzzle. Scientific theories have connections both with other scientific theories and with the observable world, in much the same way that entries in a crossword puzzle have connections with other entries and with their clues. Thus the strength of any theory will depend on how well it explains the phenomenon in question, whether it is compatible with other theories that explain ‘neighboring’ phenomena, and how well those neighboring theories explain their own phenomena. Scientific theories, in other words, connect with observed reality and with each other at many different points—far more like the intersecting entries of a crossword puzzle than the sequential steps of a mathematical proof—which is why any neat logic cannot do them justice.

It is possible that all this strikes you as either obvious or pointless. But this approach is useful because it allows us to acknowledge the ways that background beliefs affect and constrain our theorizing, without succumbing to pure coherentism, in which the only test of a scientific theory’s validity is how compatible it is with background beliefs. While there is no such thing as a “pure” fact or a “pure” observation untainted by theory, and while it is true that our theories of the world always influence how we perceive the world, all this doesn’t mean that our theories don’t tell us anything about the world. Observation, while never “pure,” still provides a real check and restraint on our theorizing. To give a concrete example, we may choose to interpret a black speck in a photograph as a weather balloon, a bird, a piece of dirt that got on the lens, or a UFO—but we can’t choose not to see the black speck.

Using this subtle picture of scientific knowledge, Haack is able to avoid both the pitfalls of an overly formalistic account of science, such as Popper’s deductivism, and an overly relativistic account of science, such as Kuhn’s theory of scientific revolutions. There may be revolutions when the fundamental assumptions of scientists radically change; but the test of a theory’s worth is not purely in respect to these assumptions but also to the stubborn, observed phenomenon—the black speck. Scientific revolutions might be compared to a team of crossword puzzle-solvers suddenly realizing that the clues make more sense in Spanish than in English. The new background assumption will affect how they read the clues, but not the clues themselves; and the ultimate test of those assumptions—whether the puzzle can be convincingly solved—remains the same.

One of the more frustrating things I’ve heard science skeptics assert is that science requires faith. Granted, to do science you do need to take some things for granted—that there is a real world that exists independently of whether you know it or not, that your senses provide a real, if imperfect, window into this world, that the world is predictable and operates by the same laws in the present as in the past and the future, and so on. But all this is also taken for granted when you ruffle through your bag to find the phone you dropped in there that morning, or when you assume your shoelaces will work the same way today as they did yesterday. Attempts to deny objective truth—very popular in the post-modern world—are always self-defeating, since the denial itself presupposes objective truth (or is it only subjectively true that objective truth doesn’t exist?).

We simply cannot operate in the world, or say anything about the world, without presupposing that, yes, the world exists, and that we can know something about it. Maybe this sounds obvious to you, gentle reader, but you would be astounded how much intellectual work in the social sciences and humanities is undermined by this inescapable proposition. Haack does a nice job of explaining this in her chapter on the sociology of science—pointing out all the sociologists, literary theorists, and ethnologists of science who, in treating all scientific knowledge as socially constructed, and therefore dubious, undermine their own conclusions (since those, too, are presumably socially constructed by the inquirers)—but I’m afraid Haack, in trying to push back against attempts like these, is pushing back against what I call the “Lotz Theory of Inquiry.”

(The Lotz Theory of Inquiry states that you cannot be a member of any intellectual discipline without presupposing that your discipline is the most important discipline in academe, and that all other disciplines are failed attempts to be your own discipline. Thus, for a sociologist, all physicists are failed sociologists, and so on.)

Because I am relatively unversed in the philosophy of science, I feel unqualified to say anything beyond the fact that I found Haack’s approach, on the whole, reasonable and convincing.

My main criticism is that she puts far too much weight on the idea of “everyday inquiry” or “common sense”—ideas which are far more culturally and historically variable than she seems to assume. This is exemplified in her criticism of religious inquiry as “discontinuous” with everyday forms of inquiry, since it relies on visions, trances, supernatural intervention, and the authority of sacred texts—normally not explanations or forms of evidence we use when explaining why we got food poisoning (the Mexican restaurant, or an act of God?).

While it is true that, nowadays, most people in the ‘developed’ world do not rely on these religious forms of evidence and explanation in their everyday life, it was not always true historically (think of Luther explaining the creaks in the walls as prowling demons), nor is it true across cultures. One has only to read Evans-Pritchard’s Witchcraft, Oracles, and Magic among the Azande to see a society in which even simple explanations and the most routine decisions rely on supernatural intervention. In cultures around the world, trances and visions, spirits and ghosts, are not seen as discontinuous with the everyday world, but as a normal part of sensing and explaining the world.

Thus Haack’s continuity test can’t do the trick of demarcating superstitious or theological inquiry from other (more dependable) forms of inquiry into the observable world. It seems that something like Popper’s falsificationism (if not exactly Popper’s formulation) is needed to show why explanations in terms of invisible spirits and the visions caused by snorting peyote don’t provide us with reliable explanations. In other words, I think Haack needs to say much more about why one theory ought to be preferred to another in order to provide a fully adequate defense of science.

This criticism notwithstanding, I think this is an excellent, refreshing, humane book—and a necessary one. It is not complete (she does not cover the relation between science and philosophy, or science and mathematics, for example), nor is it likely to appeal to a wide audience, since Haack, though she writes with personality and charm, is prone to fits of academic prolixity and gets into some syntactical tangles (such as when she begins a sentence “It would be less than candid not to admit that this list does not encourage…”). This, by the way, only supports what I call the “Lotz Theory of Academic Writing”—that the quality of prose varies inversely with the number of years spent in academe—but I digress. Yet for all its flaws and shortcomings, this book does an excellent job of capturing what is good in science and defending science from unfair attacks, without going to the opposite extreme of deifying science.

As the recent withdrawal from the Paris Climate Agreement shows, science denial is an all-too-real and all-too-potent force in today’s world. Too many people I know—many smart people—don’t understand what scientists do and misconstrue science as a body of beliefs, with scientists as priests, rather than a form of inquiry that rests on the same presuppositions they rely on every day. Either that, or they see science as just a “matter of opinion” or as a bit of arm-chair theorizing. Really, there must be something terribly wrong with our education system if these opinions have become so pervasive. But perhaps there are some reasons for modest optimism. The United States shamefully backed out of the Paris Climate Agreement, but nearly every other country in the world signed on.

So maybe we naive people who believe we can know something about the world need to take a hint from the sperm whale, with its enormous head, preparing to descend to the black depths of the ocean to battle the multi-tentacled squid: hold our breath, have patience, and buck up for a struggle. We may get a few tentacle scars, but we’ve pulled through before and we can pull through again.

[Cover image by Breakyunit. Taken from the Wikipedia article on the Museum of Natural History; used under the Creative Commons BY-SA 3.0 license.]

View all my reviews

48 Hours in London

48 Hours in London

I’ve fallen far behind in my travel posts, and now I find myself in the embarrassing position of writing about a trip I took over a year ago. It also seems that, no matter how hard I try to be brief, I end up writing more and more. Well, enough prefatory remarks; on to business.


Introduction

For an American, there is something religious about visiting London for the first time. We have been hearing about the place all our lives. Dry humor, pints of beer, red phone booths, black taxis, fish and chips, bad teeth, good tea, bad weather, good tikka masala, the British Invasion, the British Parliament, the British Empire, Queen Elizabeth, Queen Victoria, Shakespeare, Dickens, the Beatles, Monty Python, Dr. Who, Harry Potter—London is the focal point of all our stereotypes, good and bad, of England and the English.

This is important for us Americans, since England is the only other country whose media we regularly consume. English media is so important for us because of our shared language. Unlike in Spain—where English-language songs often play on the radio (and people sing these songs without understanding the lyrics), and where American shows, overdubbed in Spanish, are extremely popular—in the United States we don’t listen to music in a foreign language if we can help it, and we only watch television that was originally made in English (overdubbing looks silly). This provincial preference for English media limits our options of foreign media mainly to England and Australia, and England has been the clear favorite.

A consequence of this popularity of English media is that Americans have internalized a highly partial picture of the English character. We associate the English with sophistication, elegance, wit, good manners, royalty, and the historical past.

This is almost the polar opposite of the English reputation in Spain. You see, Spain is an excellent travel destination for English holidaymakers—cheap, close, and sunny—and as a result, lots of English tourists come to Spain looking for a good time. A “good time” entails drinking, of course, and thus there are lots of drunken English people stumbling around city centers on any given night. As a result, Spaniards think of the English, not as genteel aristocrats, but as tipplers.

(Parenthetically, the English also have very different alcohol consumption habits than the Spanish. On a Friday or Saturday night, people in Spain begin drinking in earnest after dinner—which means 11 pm at the earliest. They often don’t even leave their apartment to go to bars and clubs until 2 in the morning, and don’t return home until dawn the next day. In London, on the other hand, drinking begins as soon as people leave work, at 5 pm. This is due, in part, to an old law in London that required pubs to close down at 11. So the English stop drinking when the Spanish barely start.

(This difference in schedule is supplemented by a difference in speed and volume. Spaniards are rarely visibly drunk. I have seen very few Spanish people stumbling from alcohol; instead, they focus on maintaining a level of comfortable tipsiness for a long period of time. Compared with Brits, Spaniards sip their drinks, and eat a lot while they drink. English people, by contrast, get properly drunk, and fast, much like many Americans do. As a consequence, Brits can be very loud drinkers—in my experience, at least. This is an especially interesting contrast, I think, since in every other circumstance Brits tend to be much quieter than Spaniards.)

Of course, both the American and the Spanish stereotypes are over-generalizations, based on very partial exposures to the English character. Partial and false as they may be, however, these stereotypes did succeed in endowing England with a certain contradictory mystique—a place full of witty drunkards, elegant and boisterous, cultured and slovenly. I needed to go see London for myself, to catch a glimpse of the reality behind the reputation.

My problem was that, at the time, I was particularly low on funds. And however distorted all the other stereotypes may be about London, this one is true: London is expensive. Well, it’s expensive if you enjoy eating, sleeping indoors, using transportation, and doing any activity besides walking and sitting outside. This was a few months before the Brexit referendum, and the pound was still strong.

As a result, my short trip to London—barely 48 hours—became a frantic exercise in traveling cheaply. I didn’t buy an Oyster card, and I didn’t use the Tube or the buses. I ate “meal deals”—pre-packed sandwiches at Tesco supermarkets, not terribly delicious—instead of paying for dinner in a restaurant. And I focused on visiting museums, which are free in London, instead of other popular sites.


Arrival & First Impressions

As usual, I traveled with Ryanair. My plane arrived in Stansted, one of London’s outlying airports, where I had to fill out a form and wait in a long queue to enter the country. The English, it seems, are almost as paranoid about their borders as we are in the United States. From Stansted, I took the so-called Stansted Express to London’s central Liverpool Street Station. The ride took about an hour, and was not cheap. This is a typical Ryanair experience: the flight is inexpensive, but uncomfortable; and you land in an unpopular airport far outside the city. I am a loyal customer.

I sat in the train—dazed from lack of sleep, filled with nervous energy, physically miserable but mentally awake—and stared out the window in disbelief. Was I really here? Was this England, the land of dry humor and wet weather? I gazed out at fleeting patches of green countryside as the train sped by, and savored the delightful names of the train stations between Stansted and London. (Of course I can’t remember any of the names now; but as I look on Google maps, I find such gems as Matching Tye, Hartfield Heath, Hastingwood, Theydon Bois.)

English novels—from Austen, to Dickens, to Rowling—have powerfully shaped the American imagination of the past; and thus, by association, English place-names strike many Americans as irresistibly charming. Each name seems to be the title of another great novel, filled with irony and romance, and written with quaint wit. Likewise, the English countryside—a neatly trimmed park, whose rolling hills are covered in a grey mist—is featured in so many films that even the snatches of green I saw out the train window filled me with delight.

These feelings of romance and fantasy are, I suspect, nearly universal for Americans visiting England, and specifically London, for the first time. England is the only foreign country we regularly see on television and in movies. This gives the experience of visiting England the effect of stepping into a movie set—everything is familiar, and yet unreal. The same thing happens, I believe, to many who visit New York for the first time. Many people have independently told me that it felt like they were in a movie, since so many landmarks and features were familiar to them from films.

The train arrived, and I got out to go find my Airbnb. I was on edge. The combination of sleep deprivation (the flight was terribly early) with the usual stress of navigating a foreign city (my phone didn’t have service), plus the feeling of unreality that comes from actually being in a place which I’d been hearing about all my life—all this combined to make me edgy and oversensitive. The double-decker red buses, the black taxis, the cars driving on the wrong side of the road, the eccentric road signs (including the delightfully existential “Change Priorities Ahead”), pubs with absurd names (“Ye Olde Cheshire Cheese,” on Fleet Street), the red phone booths scattered seemingly at random (apparently, the city had once sold all these phone booths, only to regret the decision and then repurchase as many as it could)—my first impressions of London did contain many of ye quaint olde stereotypes that I expected.

Red Telephone Booths

But one thing that, as a New Yorker, always surprises me when I visit a new city is the lack of skyscrapers. Madrid has only four buildings which can reasonably be called skyscrapers, and they’re located in the north of the city, far outside the center. London has its own share of skyscrapers, to be sure. But walking around in London has nothing of that vertiginous feeling that New York produces, the feeling of being crushed by steel and glass, the feeling of constantly craning one’s neck. I had always thought of London as being a huge and imposing place, so this lack of skyscrapers did disconcert me somewhat.

In many other respects, however, London can be easily compared with New York: the bustling streets, the flashy billboards and ever-present advertisements, the endless shopping, the infinite variety of chain restaurants, the ethnic diversity, the smell and the grime. London even has the same phony Buddhist monks trying to scam tourists into giving them money. (You can find a great story about them here; and in case you’re wondering, if someone is aggressively asking you for money, you can safely assume that they’re not a Buddhist monk.)

As I discovered when I got to my Airbnb, one way that London is incompatibly different from both my country and Spain is the style of its outlets. I had to buy a power-adaptor there; and like everything in London, it wasn’t cheap. Be wise and buy one ahead of time.

These were my first impressions, hazy and distorted, as I walked from the station to my Airbnb. Already I was running short of time. It was midday Friday, and my flight home would leave early on Sunday. So I set out to the first place on my list, the National Gallery.


A Note on Cuisine and Language

I should preface my trip to the National Gallery with a mention of a small restaurant, the Breadline, which can be found nearby. I decided to eat there because it had fish and chips—I know it’s silly, but I couldn’t leave London without eating that iconic meal—and because its prices were eminently reasonable. The food was plain and basic, but nonetheless, for me, extremely satisfying. I even returned the next day to try an English breakfast, which I quite liked.

English food has a poor reputation, and I understand why; it is hardly a cuisine designed to have universal appeal. Nevertheless, if those two meals can be trusted to give a fair representation (an open question), I can say that I am a fan. There is something about greasy fried potatoes and fried fish, covered in white vinegar, that just feels right to me. And sausage and beans for breakfast is brilliant.

While I was eating, a young British man came in and said “A small white coffee to take away.” This is an excellent example of the differences between British and American English. This sentence, uttered in New York, would produce only bafflement. You would have to translate it to “A small coffee with milk to go,” if you wanted to be understood. I run into these differences constantly as I teach English. Before coming to Spain, I thought the differences between British and American English were minor and negligible, besides the accent; but I was wrong. Working with British textbooks and materials can be extremely frustrating, since often I don’t know what certain expressions or words mean—which is embarrassing when my students ask. Not only that, but there are a few subtle grammatical differences between the dialects, such as in the use of the perfect tense. But this is a digression of a digression; now to the museum.


The National Gallery

It is immensely satisfying to simply walk into a museum, without fees or lines, like it’s your own home. The experience is even better when the museum is one of the best in the world. The National Gallery is only behind the Louvre, the British Museum, and the Metropolitan in visitors per year; and this is especially impressive considering the museum’s collection is comparatively small, easily viewable in three hours or so. But for those with any sensitivity to art, these three hours will be among the most rewarding of your aesthetic life; for the National Gallery’s collection is remarkable both for its breadth and its excellence. The only museums I’ve visited that compare in the average quality of the paintings on display are the Prado in Madrid and the Musée d’Orsay in Paris. Every room in the gallery contains a masterpiece, often many.

Indeed, there are so many wonderful paintings—paintings I had seen and loved in art history books—that I cannot even hope to mention all of them in this post, much less describe the impression each one made on me. Nevertheless, I can’t resist the temptation to dwell on some of these exquisite works of the human imagination.

The first painting which attracted my attention was the portrait of Erasmus by Hans Holbein the Younger. This is an extraordinary demonstration of the portraitist’s art; instead of a photographic image, capturing the physical surface of the famous writer, we get a glimpse of the writer’s mind. As in any excellent portrait, the inner is made manifest in the outer without compromising the realism of the portrait. His sharply angular face bespeaks cleverness; his gaunt features reveal a life dedicated to the mind and not the body; his half-closed eyes and serene expression show calm intelligence and a wisdom that sees beyond earthly troubles. We also catch a hint of Erasmus’s self-complacent vanity: he looks a little too comfortable in his fine fur robe, and his hands rest a little too easily upon a volume of his own writings. Is there a more convincing portrait of the scholar?

Erasmus
Erasmus

Holbein has an even more famous work on display at the museum: The Ambassadors. This is a portrait of two aristocratic ambassadors (their identity was long debated), in a room which includes an exquisitely-rendered still-life of several objects—a lute, several globes, a psalm-book, and various instruments of navigation. But the most memorable, and bizarre, feature of this painting is the giant anamorphic skull in the center. Anamorphic means that the image is purposefully distorted, so that it appears in proper proportion only when viewed from a specific angle. When viewed from the front, the skull is just a strange grey diagonal shape; but when you walk to the painting’s left, the skull comes into focus. I can only imagine the technical virtuosity required of a painter to pull off this trick with such consummate perfection; when seen properly, the skull is finely detailed, beautifully shaded, and anatomically accurate. Holbein painted this tour de force in 1533.

The Ambassadors
The Ambassadors

The National Gallery also possesses what is probably the most famous papal portrait in history: Raphael’s portrait of Pope Julius II. Julius was the most important of the high renaissance popes; he is responsible for the beginning of the Vatican museum, Michelangelo’s commission to paint the Sistine Chapel, and Raphael’s commission to paint the Vatican Library. Not only that, Julius originated the idea of tearing down the original St. Peter’s and building a new one. Such a man must have had enormous energy and a deep sensitivity to art. And yet in Raphael’s portrait we see him weary, worn-out, and melancholic. He is gently gripping a handkerchief in one hand and his chair in the other; his eyes are hollow, and the wrinkled skin of his face droops loosely from his skull. He seems to be just feebly holding on to the last threads of life, staring at his own end with resignation. Such terrible realism was entirely new in papal portraiture.

Julius II
Julius II

Before going to the National Gallery, I didn’t look up any of the famous pictures that could be found there; so I was surprised and delighted when I found myself face to face with one of my favorite pictures, Jan van Eyck’s Arnolfini Portrait. I remember first seeing this portrait in Ernst Gombrich’s Story of Art, and being stunned. In the context of its time, 1434, the portrait is startling for its realism and its domestic subject: a marriage contract taking place in a bedroom. To a modern eye, perhaps the portrait no longer seems terribly realistic; the husband, with his pale expressionless face and his oversized clothes, always looks like he belongs in a Tim Burton film to me; but this only adds to its charm. The little toy-sized dog in the foreground—as adorable as ever—and the mirror in the background—showing us the whole scene from reverse in a distorted perspective—add to the painting’s undeniable power.

Arnolfini Portrait
Arnolfini Portrait

There are dozens more paintings—of equal importance and beauty—that I could devote an unworthy paragraph to describing; but this would only swell this post to unartistic dimensions. Yet I cannot move on without mentioning the National Gallery’s collection of Italian Renaissance art. This includes Piero del Pollaiolo’s masterpiece, The Martyrdom of Saint Sebastian, a landmark in the realistic use of perspective, with the saint encircled by crossbowmen.

Priest with Arrows
The Martyrdom of Saint Sebastian

Even more important is one of the two versions of Leonardo da Vinci’s Virgin of the Rocks. The other one is in the Louvre, and is usually considered the original; but I think the Gallery’s version, with its deeper shades and more dramatic chiaroscuro, is lovelier. Apart from its beauty, this painting is notable for its setting. Leonardo, as is typical of him, creates a carefully naturalistic background for this traditional Biblical scene. In previous eras, the background of paintings was almost entirely neglected; monochrome gold foil set off the human figures. But in Leonardo’s masterpiece, the background—a cave, which was an unprecedented choice—swallows up its subject. Such careful attention to rendering nature was something new in history.

Virgin of the Rocks

I also cannot move on without mention of Rembrandt. The National Gallery has several of Rembrandt’s most highly regarded works, including two of his self-portraits. Looking into the eyes of a famous artist, as he stares back at you from a self-portrait, is an unnerving experience; suddenly the gap in space and time that separates your lives vanishes; the artist has transcended death, and even transcended life; his focused gaze, dry pigment on a canvas, will outlast even your own living flesh. On a less dramatic note, the Gallery also has one of Velazquez’s most famous works: The Rokeby Venus, famous for being one of the few female nudes in Spanish art (one other being Goya’s La Maja Desnuda).

I will muster my self-control and mention only two more works.

By common consent, the greatest painter in English history is Joseph Mallord William Turner; and several of his finest works can be seen at the Gallery. Of these, my favorite is Rain, Steam, and Speed—The Great Western Railway. A locomotive emerges from a tempest, a black tube bursting through grey fog. Every line and color is blurred as if seen from an out-of-focus camera. All we can see in the background are hints of blue sky, a bridge, and a lake where some people are rowing in a little boat.

Turner Steam and Rail

In this painting, Turner seems to have both anticipated and surpassed the impressionists in rendering momentary flashes of life. The swirl of indistinct color is absolutely hypnotic; yet the painting is not merely pretty, as are many impressionistic paintings, but a convincing symbol of the relationship between human technology and natural power. The train punches through the mist, in a confident gesture of industrial might; and yet the stormy clouds that swirl all around menace the lonely black locomotive. Both the train and its surroundings are impressive, even sublime, but also inhumanly vast and cold; and the two slight figures in the rowboat below reveal our true vulnerability in the face of these forces.

The last painting I’ll mention before forcing myself away—even remembering the Gallery is a pleasure—is Bathers at Asnières, by Georges Seurat. This painting was completed in 1884; but it was not until many years after Seurat’s death that it was recognized as a masterpiece. It depicts several middle-class Parisians relaxing by the Seine on a hot summer day. The technique Seurat used is almost pointillistic in its precise use of strokes and colors, relying mainly on bright horizontal daubs. The combination of statuesque modeling and poses—the bathers’ heavy bodies and horizontal orientation remind me of an Egyptian frieze—with Seurat’s delicate treatment of brushstrokes, makes the painting look crystal-clear from afar and blurred from up close. The treatment really captures the feeling of heat: how everything can seem perfectly clear in the summer sun, and yet distant objects are blurred.

Bathers

Complementing this tension between form and vagueness is an emotional tension between fun and desolation. At first glance the bathers are having a wonderful day. They are at leisure, enjoying the sunshine, the smooth grass, and the cool water. But then you notice how isolated each of the figures is. They are all in their own world; many seem lost in thought. Their expressions are emotionless; their hunching posture bespeaks weariness. The factory spewing smoke in the background adds another hint of gloom.

To me, the painting is a devastating portrait of the isolation and meaninglessness of contemporary life. We imagine the figures working 9 to 5 jobs in offices during the week, performing mechanical tasks that mean nothing to them. Then they go to their usual restaurant for a bite to eat and then to their apartment to sleep. When with their friends, they drink and talk of trivialities. On a holiday, they come here, and stare into space, unable to articulate to themselves or anyone else the strange sense of emptiness that engulfs them whenever they have a free moment. It is a comfortable world that conceives of nothing beyond wealth and luxury; and its members, when released from their usual routine, can think of nothing to do. Convention dictates that they come here to ‘relax’. The painting is the perfect complement and illustration of Albert Camus’s The Stranger: it is a painting of a world of strangers, to one another and to themselves.


The next day, I headed to one of the other great museums in London: the British Museum. Originally I planned to include my account of that great institution in this post; but I ended up writing so much that I decided that the British Museum deserved its own separate essay, which you can find here.


Brief Snatches of London Life

When I wasn’t visiting museums, I had a few spare hours to wander around the city. This allowed me to glimpse, all too briefly, most of the major sights in London—the places that must be given a mention and a respectful nod in any post about that old city.

The first landmark I insisted on seeing was Big Ben (strictly speaking, the name of the great bell, though it has long since attached itself to the whole tower). A trip to London without seeing that venerable clocktower would be like a trip to Pisa without its leaning campanile. I was so ignorant when I visited London (and remain, despite strenuous efforts) that I didn’t even know that Big Ben was attached to the British parliament building, the Palace of Westminster. It was a delightful surprise to find these two landmarks joined together.

Westminster Palace

Although it looks Gothic, the palace is of fairly recent construction. The old Westminster palace burned down in 1834 (Turner witnessed the fire, and painted several pictures of it). The new building was designed by Charles Barry, who used a Gothic revival style in his plan. I doubt there is any parliament building in the world so elegant, so imposing, and so charming. Few experiences in London, if any, can do a better job of creating that Hollywood sensation of being in a movie than standing on the Westminster bridge, seeing that palace and the clocktower, and hearing the ringing bells of Big Ben chime out the hour.

From there I walked away from the bridge, pausing to examine the statue of Winston Churchill (covered in pigeon droppings) in the nearby plaza, and went to Westminster Abbey. In my very limited experience, this is easily the most beautiful church building in London. I can’t say much about it, because I didn’t go inside—it was closed by the time I arrived, and in any case I didn’t want to pay the steep entry fee—but I can say that its façade is exquisite, especially the north entrance. Funnily enough, Westminster Abbey is not an abbey—at least, not anymore. Originally it was an abbey of the Benedictine monks, but after the Protestant Reformation, and after a brief stint as a cathedral, the abbey was designated a church. For the last 1,000 years it has been the site of coronations and royal weddings.

Westminster Abbey

The walk from Westminster Abbey to Buckingham Palace is about 15 minutes—slightly longer if, like me, you walk through St. James’s Park. I highly recommend this, since the park is absolutely lovely.

Architecturally, Buckingham Palace isn’t much to look at; it presents itself as a cheerless, square, grey block. The building was not originally designed as a royal residence; it only became the seat of the monarchy in 1837, during the reign of Queen Victoria. The palace takes its name from the Duke of Buckingham, who originally had it built. It sits at the end of the Mall—a major road often used for processions—in a roundabout in which stands the golden Victoria Memorial, which commemorates that famous queen.

Buckingham Palace

Even so, neither the monument nor the palace would attract a great deal of attention, I suspect, were it not for the Queen’s Guard. Equipping guards with antique weapons and dressing them in bright red outfits with fluffy tall hats seems to be one of those conspicuously impractical things that wealthy and powerful people do to showcase their wealth and power. Your average rich entrepreneur or politician could not afford to keep a corps of totally inefficient guards performing ceremonial movements all day (which are, naturally, supplemented by other guards using modern weapons, keeping careful watch, and wearing less conspicuous clothes). Here is an incident that demonstrates the guards’ mainly ceremonial role: in 1982 a man managed to evade the palace guard and make his way to the Queen’s bedroom, where he was apprehended by the city police.

I spent some time watching the guards march back and forth, their limbs as stiff as a wooden nutcracker. Purely as athletic performers, the soldiers are undeniably impressive: the timing, the coordination, the posture, the endurance—it must require excellent physical condition and serious training to keep up the routine, especially considering that they wear those clothes even in hot weather. The guards now mainly function as a tourist attraction and an amusing symbol of British culture; but to be fair, the Queen’s Guard aren’t the only soldiers to wear funny clothes (think of the Swiss Guard in the Vatican) or to engage in elaborate ceremony purely for show (think of the tomb of the unknown soldier in Washington D.C.).


By the time I left the British Museum the next day, I only had about 6 hours left before I’d have to go to sleep and say goodbye to London. The best way to get the most out of this time, I figured, was a free walking tour. The guide was excellent, and the tour just what I wanted. Unfortunately I don’t remember the name of the company or of our guide; he introduced himself as the only American tour guide in London—so he shouldn’t be too hard to find. (But apparently this isn’t true; a Google search reveals an American woman named Amber who also gives tours.)

The tour focused on the City of London. You may not know—I certainly didn’t—that the “City of London” refers to the original part of the metropolis, founded by the Romans way back when. This original City of London is now only a tiny fraction of the greater metropolitan area; indeed, it is quite a small place, having an area of only one square mile. This city is far older than England; it has enjoyed special privileges (or, to use the phrase of the Magna Carta, “ancient liberties”) since the Norman Conquest; and even now it retains the privilege to create many of its own regulations, independent of the greater metropolitan area or of England herself. The city has laxer building codes, which explains why so many of London’s skyscrapers are found there, and also looser financial regulations, which explains why it remains the center of London’s economic life. The City of London is home to the Bank of England, the London Stock Exchange, and Lloyd’s of London (the insurance market).

The tour began at Temple Station. Our guide took us along the river and then down Fleet Street, giving us bits of details about London’s past and present. We walked by Ye Olde Cheshire Cheese, one of the oldest and best known pubs in London, famous both for its silly name and its dark, windowless interior; and this prompted our guide to embark on a long, impassioned explanation of London pub culture. Though an American, he was clearly a convert to the pub way of life; he had strong opinions about what made a pub good or bad; and he had pub recommendations for nearly any area of the city. (I was so inspired that, after the tour, I went into a pub to get a drink; but the beer was so expensive and so mediocre that my disappointment was even more bitter than the beer.)

Soon we reached St. Paul’s Cathedral. The tour didn’t pause for us to go inside; and, in any case, the entrance fee is formidable enough to discourage penurious travelers like me. Among other things, St. Paul’s is famous for having one of the tallest domes in the world. But the present St. Paul’s replaced an older, even taller cathedral (well, it was taller before its spire was destroyed by lightning), which was badly damaged in the Great Fire of London in 1666. The present building was designed by Sir Christopher Wren, and completed in his lifetime. Wren was, if not the greatest, at least the most prolific architect in England’s history. He designed and oversaw the construction of no less than 52 churches after the Great Fire. The architect himself is buried in the crypt of the cathedral, in a modest grave that says “Reader, if you seek his monument—look around you.”

St. Paul's Buildings

From there we moved on to the Monument to the Great Fire, also designed by, you guessed it, Sir Christopher Wren. As our guide pointed out, the monument—a tall Doric column that originally rose far above its surroundings—is now hemmed in by neighboring buildings and dwarfed by modern architecture. The guide used this as an example of the tendency of Londoners to be more interested in the future than the past.

To emphasize this point, he directed our attention to the skyscraper at 20 Fenchurch Street, a bizarre, top-heavy construction, completed in 2014, whose shape quickly earned it the nickname ‘The Walkie Talkie’. This building won—and earned—an award for ugliness. (It was also discovered that the building’s concave shape focused the sun’s rays strongly enough to damage cars, ignite doormats, and fry eggs; a screen has since been installed to prevent this from happening.) But the Walkie Talkie is only one of the many skyscrapers that have sprung up in the City of London in recent memory, despite concerns that these tall monstrosities will dwarf and obstruct historic buildings.

Walkie Talkie
Walkie Talkie

From the cathedral, we went down towards the river and ended up under London Bridge. Many people, including me, assume from the nursery rhyme that London Bridge is a tourist attraction; indeed, the justly famous Tower Bridge, which spans the Thames nearby (see below), is often mistakenly called the London Bridge. Sad to say, the current London Bridge is a brutalist slab of concrete and steel stretching across the Thames, without charm, beauty, or really any distinguishing quality.

The nursery rhyme dates from a time when a different London Bridge spanned the Thames. The ‘Old’ London Bridge, built in 1209 and demolished in 1831, rested on stone arches and was covered in wooden buildings (which proved to be a fire hazard). It was famous for being the site where the severed heads of those executed for treason, dipped in tar and impaled on pikes, were displayed as a warning to passersby. William Wallace’s head was the first to play this role.

In 1831, the ‘New’ London Bridge was built to replace the crumbling medieval construction; this bridge also rested on arches, but it was taller and so allowed bigger ships to pass underneath. In the 1960s it was discovered that London Bridge was falling down (sinking into the riverbed) and had to be replaced. In true English entrepreneurial spirit, the bridge was sold; an American oil tycoon, Robert McCulloch, bought the bridge, disassembled it, shipped it to the United States, and then reassembled it in Lake Havasu City, Arizona—a little piece of English history in the American Southwest. The current behemoth was finished in 1972.

The tour came to an end in front of the Tower of London. Once again, I didn’t go inside that old castle—I am really exposing myself as a pathetic traveler, I know—but contented myself with walking around the perimeter. From the outside, the Tower of London doesn’t seem to merit the name “tower”; the White Tower, the citadel at the center of the castle complex, is less than 100 feet tall—almost invisible in the context of London. The castle is quite venerable; it was first constructed by the Normans in the 11th century, and was expanded over the following two centuries. At present the Tower of London is a large complex with two concentric layers of stone walls surrounding the central keep, and some additional buildings such as a chapel and a barracks. The outer wall is surrounded by a moat, now left dry. Besides the castle itself, visitors can see several historical objects on display, such as Henry VIII’s armor and—most notably—the Crown Jewels of England.

The Tower of London has played an important and often a nefarious role in English history. For a long time it served as the British version of the Bastille, as a prison for traitors and other political pests. Anne Boleyn, unfortunate wife of Henry VIII, is the most famous prisoner ever to be held and executed in the tower; legend has it that her ghost still travels through the old castle, her severed head under her arm. But as I stood there looking at that stone pile, I thought only of Thomas More, the British intellectual who dreamed of a utopia with freedom of religion, and who was imprisoned in the tower and then executed for being true to his Catholic faith (also by Henry VIII). More’s head was eventually covered in tar and displayed on a pike on the old London Bridge.

The tour guide ended with a short speech, which I will try to reproduce here:

“In this tour, we’ve seen many different types of power. We have the political and military power of the Tower of London, the religious power of St. Paul’s cathedral and the Church of England, and the economic power of the London Stock Exchange. And this, ultimately, is what the City of London has always been about: the use of power to control its own destiny. It’s a place oriented towards the future, constantly striving to master whatever is the next form of social power in order to maintain its dominance in the world’s affairs.”

And this strikes me as perfectly true.

From the Tower of London I made a quick walk to the nearby Tower Bridge. This is the iconic bridge often mistakenly called the London Bridge. It’s a pretty sight, with two neo-Gothic towers supporting two platforms, one higher and one lower. Built in the 1890s to a design by the architect Sir Horace Jones and the engineer John Wolfe Barry, it was innovative: a combined suspension bridge and drawbridge. The idea (according to the tour guide) was to allow pedestrians to keep using the bridge even when the drawbridge was drawn up to allow ships to pass.

Pedestrians soon learned, however, that walking up the stairs in one of the towers, crossing the upper platform, and then walking down the stairs in the other tower took even more time than just waiting for the drawbridge to close again. Accordingly, pedestrians hardly ever used the upper platform, which came to be frequented mainly by criminals and prostitutes; it was closed in 1910. Nowadays, you need to pay an entrance fee to go up to the upper walkway. This is just another example of a brilliant idea that doesn’t take into account basic human realities: an innovative plan for a bridge that ignores the time and effort needed to climb several flights of stairs. It is certainly pretty, though.

Tower Bridge

As my last stop I made my way to Shoreditch, a neighborhood which had been recommended to me by a Londoner in my Spanish class. Shoreditch is London’s Williamsburg: a previously working class neighborhood that has been gentrified, and is now home to trendy restaurants and technology companies. The area even looks like Williamsburg, with narrower streets and older, shorter buildings, full of colorful shops and cafes. The population, too, is almost indistinguishable from its New York counterpart: men with large mustaches, plaid shirts, and suspenders; women with half their heads shaven, nose rings, and small, tasteful tattoos—in a word, hipsters. I felt right at home. The gentrification is so extreme as to be beyond parody; there is, for example, a cafe, the Cereal Killer Cafe, that serves only breakfast cereal.

To illustrate my own complicity in the world of hipsterdom, I went to a cafe famous for its rainbow-colored bagels, the Brick Lane Beigel Bake. This little cafe is open 24 hours a day, it is cheap, and it is excellent. I didn’t order a rainbow bagel, but instead a ‘hot salt beef’ on a roll. The beef comes with pickles and strong, superb mustard. I had two (for a very reasonable price) and I was stuffed. Another positive mark for English cuisine.

My time was up. My flight was leaving at seven the following morning, which meant I had to wake up at four to give myself enough time to walk to the train station and take the train to the airport.

All told, I spent less than 48 hours in London. I was constantly tired, hungry, and physically exhausted. I ate little, I slept less, and I walked almost constantly—more than 10 hours each day. I spent as little money as I could, and still the trip was expensive. I learned as much as I could, but left the vast majority of the city unseen and unknown. The trip was a physical ordeal and a financial hardship. But in return for all this trouble, I encountered, however briefly, one of the great cities of the world.

Lessons from the British Museum

The British Museum is a project of the Enlightenment. It is one of the oldest museums in the world—older than both the Louvre and the Prado—and one of the biggest. Its collection began when Sir Hans Sloane, a doctor and naturalist, bequeathed his private collection of “curiosities” to the state. The collection grew from there, with the goal of encompassing all of human history under one roof. And because the British Empire soon came to dominate half the globe, this ambition was not so ludicrous as it may at first appear. Ironically, you can probably find finer artifacts in the British Museum than in the countries that the exhibits represent.

Museum Facade

The museum’s massive collection is housed in an equally massive neoclassical building designed by Robert Smirke. Its collection is divided by era and area: Prehistory, the Ancient Near East, Ancient Egypt, Ancient Greece, South Asia, East Asia, the Americas, Africa, and Oceania. Wandering around the museum is like getting lost in a World History textbook brought to life. The collection is so vast and detailed that the visitor is simply overwhelmed. There is far too much information to take in and process in one visit—even in a dozen visits. Each artifact on display deserves deep study; and when each room is full of hundreds of these artifacts, there is not much you can do except dumbly gape. Likewise, there is not much a writer can do except emulate Sir Hans Sloane and collect curiosities.

Central Room

I began in the Ancient Near East: Mesopotamia, the cradle of civilization. There is something sacred about the simple fact of age. Seeing ancient artifacts is the closest we get to time travel. The passing years corrode all material things, just as the gentle flowing of a stream eventually cuts through rock. The physical bodies of these ancients have long decayed; everything they knew and loved is gone. And yet, 5,000 years later, the messages they carved still preserve an echo of their voice.

Cuneiform tablet

Every time I look at a cuneiform tablet—its crisscrossing wedges and lines unintelligible to me, but visibly a language—I find myself profoundly moved. For all I know, the message is a record of a banal commercial exchange—so many goats for so many bushels of hay—but the simple fact of writing something down, of imprinting words indelibly, signals the beginning of that noble and doomed war against time—the war we call ‘civilization’.

Seeing these first scratches in stone is like catching a glimpse of the universe a few seconds after the big bang. It marks the commencement of something entirely new in history: the ability to transfer knowledge across generations; to develop literature, philosophy, mathematics, and science; to create unchanging codes of law to fairly govern societies; to make the shadows of thought external and permanent. Less fortunately, the beginning of writing also marks the origin of bureaucracy and accounting—indeed, this seems to have been its original purpose, as communities grew too big to be governed by word of mouth.

Perhaps the most impressive object in this section is the Standard of Ur. (This is one of the objects chosen in Neil MacGregor’s series, A History of the World in 100 Objects. You can listen to the segment here. I wish I had read the accompanying book, which looks excellent, before my visit to the museum; it’s on my list.)

Standard of Ur
Detail from the Peace side

It is called a ‘standard’, but nobody really knows what it was used for: a soundbox for a musical instrument or a box to store money for sacred projects—who can say? All we can really determine is that it almost certainly was not a standard, since the drawings are too detailed to be seen from far away. The object dates from 2,600 BCE and consists of a box whose sides depict scenes of war and peace, in three lines of images that look like a comic book. On the war side, we see an army marching off to battle, with armored footsoldiers and men in chariots; below, these charioteers trample enemies underfoot. On the reverse side, we see men seated at a banquet, drinking, while a harpist and a singer provide background music. Below, men are herding animals and carrying sacks of goods on their backs, presumably to offer them in tribute to the king.

This standard was found in the site known as the Royal Cemetery of Ur, along with objects seen on both the War and the Peace sides. Judging from the numerous skeletons in the tomb, it seems that the Sumerians had a practice similar to the Egyptians: upon the death of kings and queens, the royal attendants were put to death to serve their master in the afterlife. I always shudder when I hear about these practices. Drinking poison to follow your king in death seems to be the height of unjust absurdity. I feel angry on behalf of the attendants who lived in oppression and who did not even find freedom in their master’s death. And yet, despite my anger, I can’t help feeling a sort of awe at the level of devotion displayed by this practice. To identify so strongly with a leader that you follow them in death seems hardly human; just as an ant or a bee colony dies with its queen, so these human groups voluntarily put themselves to death.

Violence and oppression thus form the subject-matter of this artifact and surround its discovery. On one side we see the king marching off to war and killing enemies; on the other side the king enjoys the tribute of his hard-working subjects. Nowadays it is impossible to see the society depicted on the Standard of Ur as anything but monstrous: a predatory upper class stealing from the poor, and then sending the lower class off to war to defend their bounty and to capture slaves.

But it is worth asking whether the beginning of civilization could have been any different. Humans had just begun farming and forming cities. For the first time in the history of our species, we were living in large, permanent settlements alongside strangers. For the first time, we had enough resources to allow some people in the community to specialize in tasks other than gathering food: priests, soldiers, musicians, administrators, rulers, and artisans. The accumulation of resources always invites raids from without and crimes from within; and fending off these attacks requires organization, leadership, and violence. A community simply couldn’t afford to be anything but authoritarian and militaristic if it hoped to survive. It is an unfortunate fact of human history that justice and security are often at odds—a fact we still confront in the question of surveillance and terrorism.

As a parting thought, I just want to note how remarkable it is that we can look at something like the Standard of Ur—a luxury product made 5,000 years ago, by people who spoke a different language, most of whom couldn’t write, who had a different religion, who lived in a different climate, a people whose experience of the world had so little in common with our own, a people who lived just at the beginning of history—we can look at this object and find it not only intelligible, but beautiful. We experience this same miracle when we read the Epic of Gilgamesh—a story still moving, 4,000 years after it was written down.

In my first anthropology class we learned that humans are cultural creatures, fundamentally shaped by their social environment. But if this were true—if our inborn nature were something negligible and our culture omnipotent—wouldn’t we expect a civilization which flourished in such different circumstances to give rise to art that we couldn’t even hope to understand? And yet, so universal is the human experience that, 5,000 years later, we can still recognize ourselves in the Standard of Ur.

This constancy of our nature is not only manifested in great works of art. For me, the most touching illustrations of this are the little baubles and trinkets, the sundry domestic items that give us a taste of daily life in that faraway age. We see the universal human urge to beautify our bodies demonstrated by the jewels of Ancient Greece, Persia, and Egypt, the rings, earrings, pendants, necklaces, armlets, and bracelets which still glitter and charm today—indeed, designs inspired by ancient examples can be bought in the museum store. We see this also in one of the oldest board games ever discovered, the Royal Game of Ur, whose game-board and game-pieces are instantly recognizable to the modern visitor. A cuneiform tablet has also been found which explains the rules, allowing scholars to play the game 4,500 years after its creation (though I can’t find out whether they enjoyed it).

Yet if the continuities are striking, so are the divergences. I feel the gap that separates the present from the ancient past most poignantly whenever I look at a papyrus scroll covered in Egyptian hieroglyphics. Few human artifacts look more alien to me than these bits of ancient writing. Lines of simple images—eyes, storks, sparrows, hawks, snakes, scarabs, and many I can’t recognize—run up and down the papyrus, in a parade of symbolic forms. On the top and in the corner are larger drawings, depictions of mythological scenes, illustrations of dead gods and long-forgotten myths. What is most striking is how the writing is a kind of picture, and the pictures a sort of writing; the visual and the verbal are combined into a web of meaning, absolutely saturated with significance.

Hieroglyphics

The thing that is so fascinating about the culture of ancient Egypt is that, for thousands of years, through the rise and fall of dynasties and the passing away of dozens of generations, it maintained a unified, complete, and instantly recognizable aesthetic. It is immediately obvious to any visitor that they have entered the Egyptian section, whether the objects date from the first dynasty or the twentieth.

There is undoubtedly something terrifying about this continuity—terrifying that a society based on gross injustice persisted, with its culture nearly unchanged, for a span of time that dwarfs that of our own Western culture. But it is also easy for me to imagine the deep satisfaction enabled by such a complete mythology—a symbolic worldview that decorates every surface, imbues every hour of the day with importance, structures the year and explains the cosmos, penetrates into the depths of reality and even looks beyond the veil that separates life from death. I feel similar stirrings when I look at an illuminated manuscript from our own Middle Ages, an artifact not so different from the Egyptian scrolls.

Sarcophagus

In any exhibition on ancient Egypt the mummies are always the stars—those shrunken, dried corpses carefully wrapped and sealed in stone sarcophagi to be sent down the eons. When I was there, a crowd was gathered around a mummy of a woman named Cleopatra, perhaps in the mistaken belief that she was Mark Antony’s famous paramour. Yet the most moving object in the Egypt section, for me, is the colossal bust of Ramesses II. (This was also featured on A History of the World in 100 Objects; you can listen to it here.)

Ramesses II

Ramesses II was one of the most effective leaders in all of Egypt’s history. He was born about 1,300 years before the common era and lived some 90 years; his reign of more than six decades was not only the most iconic of ancient Egypt, but among the longest. An energetic general, statesman, and administrator, he was most of all a builder. He presided over the construction of dozens of colossal statues, temples, monuments, and palaces. It was this Ramesses who inaugurated the Abu Simbel complex, whose great temple includes four colossal statues (20 meters, or 66 feet high) of Ramesses himself, carved directly from the hillside. Ramesses was also responsible for the so-called Ramesseum, not a tomb, but a temple complex built for the worship of him, the deified Ramesses, during his reign and after his death.

The bust of Ramesses in the British Museum was taken from this Ramesseum. It is only a fragment: the base of the statue, in which the pharaoh is seated, is still in the Ramesseum. Napoleon’s troops first tried and failed to move the statue; then the British hired an Italian adventurer to do it, who used a combination of pulleys, hydraulics, and old-fashioned manpower. As Neil MacGregor notes, it is a testament to the power and ingenuity of the Egyptians that, 3,000 years later, their statues still require technical tours de force to move. Imagine the discipline, organization, and sheer amount of sweat and backbreaking effort needed to move the original stone.

Cracked and battered as he is, the statue still has the effect that its creator intended: the impression of calm omnipotence. The pharaoh looks down serenely from a great height—imperturbable, immovable, eternal. Such a work is clearly the product of a culture in its prime, when artistic execution and social organization were raised to the pitch of perfection. As a mere display of technique, the statue is remarkable: the ability to transport such a massive block of stone, and then to chip away and polish the surface until all that remains is a perfect image of power. And you can imagine how effective these images were as propaganda, in a time before television or telescreens.

In life, Ramesses was as close as any human can get to complete power. In death, he was worshipped as a god. His name and his face have come down to us from over 3,000 years ago. This statue has outlasted whole kingdoms and countries; and there is a good chance it will endure even when (God forbid) the British Museum is no more. So you might say that, as propaganda, the statue has been an unmitigated success. And yet, Ramesses himself, his empire, and his entire culture—all of them have passed into memory, leaving only their stones and their bones. Impressive as the bust undeniably is, it is also undeniable that it now stands as a sample of Egyptian statuary, to be gawked at by visitors, impressed but certainly not worshipful.

All wood rots, all iron rusts, and everything human turns to dust. Shelley, upon hearing reports of this very bust of Ramesses II, put this sentiment into famous lines:

And on the pedestal these words appear:
“My name is Ozymandias, king of kings:
Look on my works, ye mighty, and despair!”
Nothing beside remains: round the decay
Of that colossal wreck, boundless and bare,
The lone and level sands stretch far away.

The final irony is that those immortal lines, like Ramesses’s bust, have outlasted their makers and will likely last as long as there are humans who worry about the finitude of life.

If there is any hope of immortality, it is through the communication of our ideas—something demonstrated most poignantly by the Rosetta Stone. That ancient document—an administrative decree about taxes and tithes—now stands in the British Museum as a testament to the ability of different cultures in different places and times to understand one another. In the modern world it has become trendy to agonize about the impossibility of translation and the gulfs that separate different cultural worldviews. But humans have been translating since the beginning of history; and the very fact that we can decipher a long-dead language, written in an archaic script, with the help of a parallel translation in another ancient language and script, shows that communication can transcend wide differences of perspective.

Rosetta Stone
Photo includes a reflection of the writer in the glass

I have already spent far too much space describing the treasures of the British Museum. But I cannot leave off without a mention of the Elgin Marbles from the Parthenon.

The Parthenon, as everyone knows, is the most important and iconic ruin from Ancient Greece. Built during Athens’ golden age as a temple to their patron goddess, Athena, it has been both a church and a mosque in its long life. The Ottomans even decided to use the temple to store ammunition—guessing that their enemies, the Venetians, would never dare to fire at such a hallowed edifice. This guess was incorrect; in 1687 a Venetian bomb detonated the ammunition inside, causing a massive explosion that left only the building’s husk intact. Then in 1800 an art-loving British aristocrat, the Earl of Elgin, in highly dubious circumstances, excavated sculptures and friezes from the ruined Parthenon to decorate his home. But a costly divorce forced him to sell his home and his collection to the British government. As a result, the parts of the Parthenon, in the next chapter of their long and battered history, found their way into the British Museum.

Unsurprisingly, this acquisition is controversial. Imagine if a museum in England had a part of Mount Rushmore. Americans wouldn’t be happy, and neither are the Greeks. The Greek government has been trying to repossess the collection since 1983. There are many arguments advanced for sending the marbles back to Greece. The most compelling is the simplest: that the Parthenon is one of the most important cultural monuments in European history, and should be as complete as possible. In any case, the legality of the original transfer has always been questioned: it’s possible that Elgin didn’t have official permission from the Ottoman Empire. In England, public opinion was divided at the time—Lord Byron famously thought it was inexcusable vandalism—and seems to be in favor of returning the collection nowadays. The British Museum is (also unsurprisingly) in favor of keeping the marbles.

For my part, it seems unquestionably just to return the Elgin marbles to Athens. I do admit, however, that I was grateful for the opportunity to see the Parthenon friezes in the British Museum. The display is excellent, allowing the visitor to clearly see the friezes and the statues. If the marbles were inserted back into their original places in the Parthenon (if this is even possible), then they wouldn’t be as clearly visible. And if the marbles were merely displayed in a museum in Athens, then I’m not sure there would be any improvement of presentation. Nevertheless, it does seem that strict justice demands that the marbles be returned.

As for the Elgin marbles themselves—the friezes, metopes, and pediments that line the walls of one enormous exhibit in the British Museum—what is there to be said? The sculptures are likely the most studied and analyzed works of art in Western history; and not only that, they are perhaps the most influential. Almost from the start these works have defined and illustrated classical taste. Indeed, these sculptures have served as such a ubiquitous model for later artists that it is nearly impossible to respond to them as genuine works of art. They are immediately familiar; you feel that you have seen them all before, even if this is your first time in the British Museum.

To the modern eye, the Parthenon sculptures can appear cold, austere, and timeless—perfect human forms carved from perfect white marble. It is scandalous to imagine that these frigid sculptures were once painted with gaudy colors; and inconceivable that, once upon a time, these paragons of artistic orthodoxy were once innovative and daring works that broke every convention.

A visitor to the British Museum can catch a glimpse of the originality of these works if they visit the Babylonian and Egyptian sections first. Moving on from those precursors to the Greeks, you can see obvious continuities—heroes and gods, mythological beings and legends, religious processions and rituals—but the changes are even more striking. In the Parthenon, we see a new thing in history: a confident belief in the powers of human intelligence and creativity. In place of the static and rigid bodies of Egyptian pharaohs, sitting straight up and looking straight ahead, we see bodies twisting, turning, leaping, extending, straining—in other words, we see the human body in motion, propelled by its own force. This is not a society that believes in stable order, but in ceaseless striving.

Parthenon Metope

The new perspective is illustrated most clearly by the metopes depicting the centauromachy: the battle between the human lapiths and the half-human half-animal centaurs. In Egyptian mythology, many of the gods were half-animal; and Sumerian palaces were often guarded by the sphinx-like lamassus. In both of these cultures, the natural world, the world of animal life, was seen as a source of power and cosmic order. Yet in the Parthenon the half-animal creatures, the centaurs, are agents of chaos and destruction—creatures who must be conquered and vanquished. For better or for worse, this urge to conquer our own animal nature has been with us ever since.

There are so many more—thousands and thousands more—works that deserve deep contemplation in the museum’s collection, but I will stop here. Yet as I take leave of the British Museum, I want to leave you with one parting thought.

No institution I have seen better illustrates both the enormous strengths and the limitations of the Enlightenment than the British Museum. And because the Enlightenment is very much still with us, it is vital that we understand these strengths and limitations.

Its strengths are undeniable, especially in the context of history. As compared with what came before it, the conception of humanity and history embodied in the museum is undoubtedly an advance. Europeans began to be interested in non-European cultures. Their sense of ancient history began to extend far beyond Ancient Greece and the tribes of Israel. Instead of focusing on their own country or their own religion, Europeans could conceive of humanity as a whole, with a single origin and a common destiny. The museum also demonstrates the democratic spirit of the Enlightenment. The knowledge is put on display for all to see and learn, not sequestered in schools or guarded by jealous academics. Just as the friezes of the Parthenon illustrate the confidence in human intelligence, so does the British Museum exemplify the new, boundless confidence in human reason—the belief that the world is intelligible, that we can communicate our knowledge to anyone, and that our knowledge is not bounded by creed, language, or nation.

But the museum also demonstrates the limitation of this universalist aim. For the idea of a museum that encompasses all of human history relies on the idea that we can create a neutral context in which to understand that history. This underlying notion is clear at a glance: each room—plain, white, full of right angles—is filled with objects wrenched from their original context. Some of this context is restored, but only as information on panels. My question is: can a modern visitor, looking at a bracelet from ancient Egypt, reading about that bracelet on its accompanying caption, really grasp what this bracelet was to the jewel-maker who created it or the aristocrat who wore it? For comparison, imagine walking into a museum filled with objects from your room, except each object is carefully labeled and sits on its own display. Could any visitor understand what life was like for you?

My point is that there is something inescapably artificial and sterile about the museum. In attempting to create a universal history, a neutral context for information, the museum transforms its objects and imposes a new context. The original meaning of each artifact, how it was used and understood by its creators, is abolished; and instead, each artifact becomes a piece of evidence in a specifically Enlightenment story about the growth of humankind.

To put this another way, the Enlightenment attitude fails to come to grips with how our attempts to understand the world transform what we’re trying to understand. When knowledge is seen as impersonal, existing in a neutral context, simply a matter of seeing and describing, then knowledge becomes blind to its own power. And the British Museum is, among many other things, a demonstration of British power: the financial, political, and military means to scour the world and collect its most valuable objects into one location. It is also a demonstration of British intellectual power: the power to understand all of human history, to see truly and to interpret correctly, to escape provincialism into neutral universality.

I need to pause here. I sound as if I am being harshly critical of the British Museum, and indeed I am. But the truth is that my brief visit was staggering. I saw and learned so much in such a short time that I cannot possibly deny that I think the museum is valuable. The reason I level these criticisms at the British Museum is not because I think the intellectual project it represents is bankrupt or futile, but because, with all its flaws and limitations, with all its political and economic underpinnings, it seems to be the best we have yet achieved in humanity’s understanding of itself. I see these challenges not as reasons to despair—any intellectual project will have its limitations—but as spurs to creative solutions.

Review: The Stranger

Review: The Stranger

The StrangerThe Stranger by Albert Camus
My rating: 5 of 5 stars


In Search of Lost Time

The Stranger is a perplexing book: on the surface, the story and writing are simple and straightforward; yet what exactly lies underneath this surface is difficult to decipher. We can all agree that it is a philosophical novel; yet many readers, I suspect, are left unsure what the philosophical lesson was. This isn’t one of Aesop’s fables. Yes, Camus hits you over the head with something; but the hard impact makes it difficult to remember quite what.

After a long and embarrassingly difficult reread (I’d decided to struggle through the original French this time), my original guess as to the deeper meaning of this book was confirmed: this is a book about time. It is, I think, an allegorical exploration of how our experience of time shapes who we are, what we think, and how we live.

Time is highlighted in the very first sentence: Meursault isn’t quite sure what day his mother passed. Then, he makes another blunder in requesting two days off for the funeral, instead of one—for he forgot that the weekend was coming. How old was his mother when she died? Meursault isn’t sure. Clearly, time is a problem for this fellow. What sort of a man is this, who doesn’t keep track of the days of the week or his mother’s age? What does he think about, then?

For the first half of the book, Meursault is entirely absorbed in the present moment: sensations, desires, fleeting thoughts. He thinks neither of the past nor of the future, but only of what’s right in front of him. This is the root of his apathy. When you are absolutely absorbed in the present, the only things that can occupy your attention are bodily desires and passing fancies. Genuine care or concern, real interest of any kind, is dependent on a past and a future: in our past, we undergo experiences, develop affections, and emotionally invest; and these investments, in turn, shape our actions—we tailor our behavior to bring us closer to the things we care about. Without ever thinking of the past or the future, therefore, our life is a passing dream, a causeless chaos that dances in front of our eyes.

This is reflected in the language Camus uses. As Sartre noted, “The sentences in The Stranger are islands. We tumble from sentence to sentence, from nothingness to nothingness.” By this, Sartre merely wishes to highlight one aspect of Meursault’s thought-process, as mirrored in Camus’s prose: it avoids all causal connection. One thing happens, another thing happens, and then a third thing. This is why Camus so often sounds like Hemingway in this book: the clipped sentences reflect the discontinuous instants of time that pass like disjointed photographs before the eyes of Meursault. There is no making sense of your environment when you are residing in the immediate, for making sense of anything requires abstraction, and abstraction requires memory (how can you abstract a quality from two separate instances if you cannot hold the two instances in your mind at once?).

Now, the really disturbing thing, for me, is how easily Meursault gets along in this condition. He makes friends, he has a job, he even gets a girlfriend; and for quite a long time, at least, he stays out of trouble. Yet the reader is aware that Meursault is, if not a sociopath, at least quite close to being one. So how is he getting along so well? This, I think, is the social critique hidden in this book.

Meursault lives a perfectly conventional life; for a Frenchman living in Algeria during this time, his life could hardly be more ordinary. This is no coincidence; because he’s not interested in or capable of making decisions, Meursault has simply fallen into the path provided for him by his society. In fact, Meursault’s society had pre-fabricated everything a person might need, pre-determining his options to such an extent that he could go through life without ever making a decision. Meursault got along so well without having to make decisions because he was never asked to make one. Every decision was made by convention, every option circumscribed by custom. If Meursault had not been locked up, chances are he would have simply married Marie. Why? Because that’s what one does.

So Camus lays out a problem: custom prevents us from thinking by circumscribing our decisions. But Camus does not only offer a diagnosis; he prescribes a solution. For this, we must return to the subject of time. When Meursault gets imprisoned, he is at first unhappy because he is no longer able to satisfy his immediate desires. He has been removed from society and from its resources. This produces a fascinating change in him: instead of being totally absorbed in the present moment, Meursault begins to cultivate a sense of the past. He explores his memories. For the first time, he is able, by pure force of will, to redirect his attention from what is right in front of him to something that is distant and gone. He now has a present and a past; and his psychology develops a concomitant depth. The language gets less jerky towards the end, and more like a proper narrative.

The real breakthrough, however, doesn’t happen until Meursault is forced to contemplate the future; and this, of course, happens when he is sentenced to death. His thoughts are suddenly flung towards some future event—the vanishing of his existence. Thus, the circle opened at the beginning is closed at the end, with a perfect loop: the novel ends with a hope for what will come, just as it began with ignorance and apathy for what has passed. Meursault’s final breakthrough is a complete sense of time—past, present, and future—giving him a fascinating depth and profundity wholly lacking at the beginning of the book.

In order to regain this sense of time, Meursault had to do two things: first, remove himself from the tyranny of custom; second, contemplate his own death. And these two are, you see, related: for custom discourages us from thinking about our mortality. Here we have another opened and closed circle. In the beginning of the book, Meursault goes through the rituals associated with the death of a family member. These rituals are pre-determined and conventional; death is covered with a patina of familiarity—it is made into a routine matter, to be dealt with like paying taxes or organizing a trip to the beach. Meursault has to do nothing except show up. The ceremony he witnesses is more or less the same ceremony given to everyone. (Also note that the ceremony is so scripted that he is later chastised for not properly playing the part.)

At the end of the book, society attempts once again to cover up death—this time, in the form of the chaplain. The chaplain is doing just what the funeral ceremony did: conceal death, this time with a belief about God and repentance and the afterlife. You see, even on death row, society has its conventions for death; death is intentionally obscured with rituals and ceremonies and beliefs.

Meursault’s repentance comes by penetrating this illusion, by throwing off the veil of convention and staring directly at his own end. In this one act, he transcends the tyranny of custom and, for the first time in his life, becomes free. This is the closest I can come to an Aesopian moral: Without directly facing our own mortality, we have no impetus to break out of the hamster-wheel of conventional choices. Our lives are pre-arranged and organized, even before we are born; but when death is understood for what it is—a complete and irreversible end—then it spurs us to reject the idle-talk and comforting beliefs presented to us, and to live freely.

This is what Camus would have all of us do: project our thoughts towards our own inescapable end, free of all illusions, so as to regain our ability to make real choices, rather than to choose from a pre-determined menu. Only this way will we cease to be strangers to ourselves.

(At least, that is the Heideggerian fable I think he was going for.)

View all my reviews

Review: A Study of History

Review: A Study of History

A Study of History, Abridgement of Vols 1-6A Study of History, Abridgement of Vols 1-6 by Arnold Joseph Toynbee

My rating: 3 of 5 stars

One of the perennial infirmities of human beings is to ascribe their own failure to forces that are entirely beyond their control.

One day, a couple years ago, as I was walking to Grand Central Station from my office in Manhattan—hurrying, as usual, to get to the 6:25 train in time to get a good seat by the window, which meant arriving by 6:18 at the latest—while crossing an intersection, I looked down and found a Toynbee tile lying in the middle of the street.

Toynbee tiles are mysterious plaques, pressed into the asphalt in city streets, that have appeared in several cities in the United States. Small (about the size of a tablet) and flat (they’re made of linoleum), nearly all of them bear the same puzzling message: “TOYNBEE IDEA MOVIE ‘2001’ RESURRECT DEAD ON PLANET JUPITER.” Sometimes other little panicky or threatening messages about the Feds, Gays, Jews, and instructions to “lay tile alone” are scribbled in the margins. Nobody knows the identity of the tile-maker; but they are clearly the work of a dedicated conspiracy theorist with a tenuous grasp on conventional reality; and considering that they’ve been appearing since the 1980s, all around the US and even in South America, you’ve got to give the tile-maker credit for perseverance.

I was stunned. I had heard of the tiles before, but I never thought I’d see one. I walked across that intersection twice daily; clearly the tile had been recently installed, perhaps just hours ago. I wanted to bend down and examine it closely, but the traffic light soon changed and I had to get out of the way. Reluctantly I moved on towards Grand Central; but I felt gripped by curiosity. Who is this mysterious tile maker? What is he hoping to accomplish? Suddenly I felt an overpowering desire to unlock his message. So instead of jumping on my usual train—I wasn’t going to get a window seat, anyway—I stopped by a bookstore and picked up Toynbee’s Study of History.

Toynbee, for his part, was apparently no lover of mystery, since he tried to explain nothing less than all of human history. The original study is massive: 12 volumes, each one around 500 pages. This abridgement squeezes 3,000 pages into 550; and that only covers the first five books. (Curiously, although the cover of this volume says that it is an abridgement of volumes one through six, it is clear from the table of contents that it only includes one through five. Similarly, though the next volume of the abridgement says it begins with book seven and ends with book ten, it actually begins with book six and ends with book twelve. This seems like a silly mistake.)

The abridgement was done by an English school teacher, D.C. Somervell, apparently just for fun. He did an excellent job, since it was this abridged version that became enormously popular and which is still in print. All this only proves what Toynbee says in the preface, that “the author himself is unlikely to be the best judge of what is and is not an indispensable part of his work.”

As a scholar, Toynbee achieved a level of fame and influence nearly incomprehensible today. His name was dominant in both academe and foreign affairs. In 1947, just after this abridgement of his major work became a best-seller, he was even featured on the cover of Time magazine. This, I might add, is a perverse index of how much our culture has changed since then. It is nearly impossible to imagine this book—a book with no narrative, written in a dry style about an abstract thesis—becoming a best-seller nowadays, and equally impossible to imagine any bookish intellectual on the cover of Time.

But enough about tiles and Toynbee; what about the book?

In A Study of History, Toynbee set out to do what Oswald Spengler attempted in his influential theory of history, The Decline of the West—that is, to explain the rise and fall of human communities. In method and content, the two books are remarkably similar; but this similarity is obscured by a powerful contrast in style. Where Spengler is oracular and prophetic, biting and polemical, literary and witty, Toynbee is mild, modest, careful, and deliberate. Spengler can hardly go a sentence without flying off into metaphor; Toynbee is literal-minded and sober. Toynbee’s main criticism of his German counterpart seems to have been that Spengler was too excitable and fanciful. The English historian seeks to tread the same ground, but with rigor and caution.

Nevertheless, the picture that Toynbee paints, if less colorful, is quite similar in outline to Spengler’s. The two of them seek to divide up humans into self-contained communities (‘cultures’ for Spengler, ‘societies’ for Toynbee); these communities are born, grow, break down, collapse, and pass away according to a certain pattern. Both thinkers see these groups as having a fertile early period and a sterile late period; and they both equate cultural vigor with artistic and intellectual innovation rather than political, economic, or military might.

Naturally, there are significant divergences, too. For one, Toynbee attempts to describe the origin and geographic distribution of societies, something that Spengler glosses over. Toynbee’s famous thesis is that civilizations arise in response to geographic challenge. Some terrains are too comfortable and invite indolence; other terrains are too difficult and exhaust the creative powers of their colonizers. Between these two extremes there is an ideal challenge, one that spurs communities to creative vigor and masterful dominance.

While I applaud Toynbee for the attempt, I must admit that I find this explanation absurd, both logically and empirically. The theory itself is vague because Toynbee does not analyze what he means by a ‘challenging’ environment. How can an environment be rated for ‘difficulty’ in the abstract, irrespective of any given community? A challenge is only challenging for somebody; and what may be difficult for some is easy for others. Further, thinking only about the ‘difficulty’ collapses many different sorts of things—average rainfall and temperature, available flora and fauna, presence of rival communities, and a host of other factors—into one hazy metric.

This metric is then applied retrospectively, in supremely unconvincing fashion. Toynbee explains the dominance of the English colony in North America, for example, as due to the ‘challenging’ climate of New England. He even speculates that the ‘easier’ climate south of the Mason-Dixon line is why the North won the American Civil War. Judgments like these rest on such vague principles that they can hardly be confirmed or refuted; you can never be sure how much Toynbee is ignoring or conflating. In any case, as an explanation it is clearly inadequate, since it ignores several obvious advantages possessed by the English colonists—that England was ascendant while Spain was on the wane, for example.

Now that we know more about the origins of agriculture, we have come to the exact opposite conclusion from Toynbee. The communities that developed agriculture did not arise in the most ‘challenging’ environments, but in the areas which had the most advantages—namely, plants and animals that could be easily domesticated. But Toynbee cannot be faulted for the state of archaeology in his day.

The next step in Toynbee’s theory is also vague. The growing society must transfer its field of action from outside to inside itself; that is, the society must begin to challenge itself rather than be challenged by its environments. This internal challenge gives rise to a ‘creative minority’—a group of gifted individuals who innovate in art, science, and religion. These creative individuals always operate by a process of ‘withdraw-and-return’: they leave society for a time, just as Plato’s philosopher left the cave, and then return with their new ideas. The large majority of any given society is an uncreative, inert mass and merely imitates the innovations of the creative minority. The difference between a growing society and either a ‘primitive’ or a degenerating society is that the mass imitate contemporary innovators rather than hallowed ancestors.

Incredibly, Toynbee sees no relation between either technological progress or military prowess and a civilization’s vigor. Like Spengler, he measures a culture’s strength by its creative vitality—its art, music, literature, philosophy, and religion. This allows him to see the establishment of the Roman Empire, as Spengler did, not as a demonstration of vitality but as a last desperate attempt to hold on to Hellenic civilization. Toynbee actually places the ‘breakdown’ of Hellenic society (when it lost its cultural vitality) at the onset of the Peloponnesian War, in 431 BCE, and considers all the subsequent history of Hellas and Rome as degeneration.

But why does the creative minority cease to be genuinely creative and petrify into a merely ‘dominant’ minority? This is because, after one creative triumph, they grow too attached to their triumph and cannot adapt to new circumstances; in other words, they rest on their laurels. What’s more, even the genuine innovations of the creative minority may not have their proper effect, since they must operate through old, inadequate, and at times incompatible institutions. Their ideas thus become either perverted in practice or simply not practiced at all, impeding the proper ‘mimesis’ (imitation) by the masses. After the breakdown, the society takes refuge in a universal state (such as the Roman Empire), and then in a universal church (such as the Catholic church). (As with Spengler, Toynbee seems to have the decline and fall of the Roman Empire as his theory’s ur-type.)

To me—and I suspect to many readers—Toynbee’s theories seem to be straightforward consequences of his background. Born into a family of scholars and intellectuals, Toynbee is, like Spengler, naturally inclined to equate civilization with ‘high’ culture, which leads naturally to elitism. Having lived through and been involved in two horrific World Wars, Toynbee was deeply antipathetic to technology and warfare. Nearly everyone hates war, and rightly; but in Toynbee’s theory, war is inevitably a cause or an effect of societal decay—something which is true by definition in his moral worldview, but which doesn’t hold up if we define decay in more neutral terms. The combination of his family background and his hatred of violence turned Toynbee into a kind of atheistic Christian, who believed that love and non-violence conquered all. I cannot fault him ethically; but this is a moral principle and not an accurate depiction of history.

Although the association is not flattering, I cannot help comparing both Toynbee and Spengler to the maker of the Toynbee tiles. Like that lonely crank, wherever he is, these two scholars saw connections where nobody else had before, and propounded their original worldviews in captivating fashion. Unfortunately, coming up with a theory that could explain the rise and fall of every civilization in every epoch seems to be just about as possible as resurrecting the dead on planet Jupiter. But sometimes great things are accomplished when we try to do the impossible; and thanks to this unconquerable challenge, we have two monuments of human intelligence and ambition, works which will last far, far longer than linoleum on asphalt.

View all my reviews

Review: The Trial

Review: The Trial

The TrialThe Trial by Franz Kafka
My rating: 5 of 5 stars

Back in university, I had a part-time job at a research center. It was nothing glamorous: I conducted surveys over the phone. Some studies were nation-wide, others were only in Long Island. A few were directed towards small businesses. There I would sit in my little half-cubicle, with a headset on, navigating through the survey on a multiple-choice click screen.

During the small business studies, a definite pattern would emerge. I would call, spend a few minutes navigating the badly recorded voice menu, and then reach a secretary. Then the survey instructed me to ask for the president, vice-president, or manager. “Oh, sure,” the receptionist would say, “regarding?” I would explain that I was conducting a study. “Oh…” their voice would trail off, “let me check if he’s here.” Then would follow three to five minutes of being on hold, with the usual soul-sucking on-hold music. Finally, she would pick up: “Sorry, he’s out of the office.” “When will he be back?” would be my next question. “I’m not sure…” “Okay, I’ll call back tomorrow,” I would say, and the call would end.

Now imagine this process repeating again and again. As the study went on, I would be returning calls to dozens of small businesses where the owners were always mysteriously away. I had no choice what to say—it was all in the survey—and no choice who to call—the computer did that. By the end, I felt like I was getting to know some of these secretaries. They would recognize my voice, and their announcement of the boss’s absence would be given with a strain of annoyance, or exhaustion, or pity. I grew adept at navigating particular voice menus, and remembered the particular sounds of being on hold at certain businesses. It was straight out of this novel.

When I picked up The Trial, I was expecting it to be great. I had read Kafka’s short stories—many times, actually—and he has long been one of my favorite writers. But by no means did I expect to be so disturbed. Maybe it was because I was groggy, because I hadn’t eaten yet, or because I was on a train surrounded by strangers. But by the time I reached my destination, I was completely unnerved. For a few moments, I even managed to convince myself that this actually was a nightmare. No book could do this.

What follows in this already-too-long review is some interpretation and analysis. But it should be remarked that, whatever conclusions you or I may draw, interpretation is a second-level activity. In Kafka’s own words: “You shouldn’t pay too much attention to people’s opinions. The text cannot be altered, and the various opinions are often no more than an expression of despair over it.” Attempts to understand Kafka should not entail a rationalizing away of his power. This is a constant danger in literary criticism, where the words sit mutely on the page, and passages can be pasted together at the analyst’s behest. This is mere illusion. If someone were to tell you that Picasso’s Guernica is about the Spanish Civil War, you may appreciate the information; but by no means should this information come between you and the visceral experience of standing in front of the painting. Just so with literature.

To repeat something that I once remarked of Dostoyevsky, Kafka is a great writer, but a bad novelist. His books do not have even remotely believable characters, character development, or a plot in any traditional sense. Placing The Trial alongside Jane Eyre or Lolita will make this abundantly clear. Rather, Kafka’s stories are somewhere in-between dream and allegory. Symbolism is heavy, and Kafka seems to be more intent on establishing a particular feeling than in telling a story. The characters are tools, not people.

So the question naturally arises: what does the story represent? Like any good work of art, any strict, one-sided reading is insufficient. Great art is multivalent—it means different things to different people. The Trial may have meant only one thing to Kafka (I doubt it), but once a book (or symphony, or painting) is out in the world, all bets are off.

The broadest possible interpretation of The Trial is as an allegory of life. And isn’t this exactly what happens? You wake up one day, someone announces that you’re alive. But no one seems to be able to tell you why or how or what for. You don’t know when it will end or what you should do about it. You try to ignore the question, but the more you evade it, the more it comes back to haunt you. You ask your friends for advice. They tell you that they don’t really know, but you’d better hire a lawyer. Then you die like a dog.

Another interpretation is based on Freud. Extraordinary feelings of guilt are characteristic of Kafka’s work, and several of his short stories (“The Judgment,” “The Metamorphosis”) portray Kafka’s own unhealthy relationship with his father. Moreover, the nightmarish, nonsensical quality of his books, and his fascination with symbols and allegories, cannot help but remind one of Freud’s work on dreams. If I were a proper Freudian, I would say that The Trial is an expression of Kafka’s extraordinary guilt at his patricidal fantasies.

A different take would group this book along with Joseph Heller’s Catch-22 as a satire of bureaucracy. And, in the right light, parts of this book are hilarious. Kafka’s humor is right on. He perfectly captures the inefficiency of organizations in helping you, but their horrifying efficiency when screwing you over. And as my experience in phone surveys goes to show, this is more relevant than ever.

If we dip into Kafka’s biography, we can read this book as a depiction of the anguish caused by his relationship with Felice Bauer. (For those who don’t know, Kafka was engaged to her twice, and twice broke it off. Imagine dating Kafka. Poor woman.) This would explain the odd current of sexuality that undergirds this novel.

Here is one idea that I’ve been playing with. I can’t help but see The Trial as a response to Dostoyevsky’s Crime and Punishment. As their names suggest, they deal with similar themes: guilt, depression, alienation, the legal system, etc. But they couldn’t end more differently. Mulling this over, I was considering whether this had anything to do with the respective faiths of their authors. Dostoyevsky found Jesus during his imprisonment, and never turned back. His novels, however dark, always offer a glimmer of the hope of salvation. Kafka’s universe, on the other hand, is proverbially devoid of hope. Kafka was from a Jewish family, and was interested in Judaism throughout his life. Is this book Crime and Punishment without a Messiah?

I can go on and on, but I’ll leave it at that. There can be no one answer, and the book will mean something different to all who read it. And what does that say about Kafka?

View all my reviews

Review: The Righteous Mind

Review: The Righteous Mind

The Righteous Mind: Why Good People are Divided by Politics and ReligionThe Righteous Mind: Why Good People are Divided by Politics and Religion by Jonathan Haidt

My rating: 4 of 5 stars

I expected this book to be good, but I did not expect it to be so rich in ideas and dense with information. Haidt covers far more territory than the subtitle of the book implies. Not only is he attempting to explain why people are morally tribal, but also the way morality works in the human brain, the evolutionary origins of moral feelings, the role of moral psychology in the history of civilization, the origin and function of religion, and how we can apply all this information to the modern political situation—among much else along the way.

Haidt begins with the roles of intuition and reasoning in making moral judgments. He contends that our moral reasoning—the reasons we give for our moral judgments—consists of mere post hoc rationalizations for our moral intuitions. We intuitively condemn or praise an action, and then search for reasons to justify our intuitive reaction.

He bases his argument on the results of experiments in which the subjects were told a story—usually involving a taboo violation of some kind, such as incest—and then asked whether the story involved any moral breach or not. These stories were carefully crafted so as not to involve harm to anyone (such as a brother and sister having sex in a lonely cabin and never telling anyone, and using contraception to prevent the risk of pregnancy).

Almost invariably he found the same result: people would condemn the action, but then struggle to find coherent reasons to do so. To use Haidt’s metaphor, our intuition is like a client in a court case, and our reasoning is the lawyer: its job is to win the case for intuition, not to find the truth.

This is hardly a new idea. Haidt’s position was summed up several hundred years before he was born, by Benjamin Franklin: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.” An intuitionist view of morality was also put forward by David Hume and Adam Smith. But Haidt’s account is novel for the evolutionary logic behind his argument and the empirical research used to back his claims. This is exemplified in his work on moral axes.

Our moral intuition is not one unified axis from right to wrong. There are, rather, six independent axes: harm, proportionality, equality, loyalty, authority, and purity. In other words, actions can be condemned for a variety of reasons: for harming others, for cheating others, for oppressing others, for betraying one’s group, for disrespecting authority, and for desecrating sacred objects, beings, or places.

These axes of morality arose because of evolutionary pressure. Humans who cared for their offspring and their families survived better, as did humans who had a greater sensitivity to being cheated by freeloaders (proportionality) and who resisted abusive alpha males trying to exploit them (equality). Similarly, humans who were loyal to their group and who respected a power hierarchy outperformed less loyal and less compliant humans, because they created more coherent groups (this explanation relies on group selection theory; see below). And lastly, our sense of purity and desecration—usually linked to religious and superstitious notions—arose out of our drive to avoid physical contamination (for example, pork was morally prohibited because it was unsafe to eat).

Most people in the world use all six of these axes in their moral systems. It is only in the West—particularly in the leftist West—where we focus mainly on the first three: harm, proportionality, and equality. Indeed, one of Haidt’s most interesting points is that the right tends to be more successful in elections because it appeals to a broader moral palate: it appeals to more “moral receptors” in the brain than left-wing morality (which primarily appeals to the axis of help and harm), and is thus more persuasive.

This brings us to Part III of the book, by far the most speculative.

Haidt begins with a defense of group selection: the theory that evolution can operate on the level of groups competing against one another, rather than on individuals. This may sound innocuous, but it is actually a highly controversial topic in biology, as Haidt himself acknowledges. Haidt thinks that group selection is needed to explain the “groupishness” displayed by humans—our ability to put aside personal interest in favor of our groups—and makes a case for the possibility of group selection occurring during the last 10,000 or so years of our history. He makes the theory seem plausible (to a layperson like me), but I think the topic is too complex to be covered in one short chapter.

True or not, Haidt uses the theory of group selection to account for what he calls the “hiveish” behavior that humans sometimes display. Why are soldiers willing to sacrifice themselves for their brethren? Why do people like to take ecstasy and rave? Why do we spend so much money and energy going to football games and cheering for our teams? All these behaviors are bizarre if you see humans as fundamentally self-seeking; they only make sense, Haidt argues, if humans possess the ability to transcend their usual self-seeking perspective and identify themselves fully with a group. Activating this self-transcendence requires special circumstances, and it cannot be sustained indefinitely; but it produces powerful effects that can permanently alter a person’s perspective.

Haidt then uses group selection and this idea of a “hive-switch” to explain religion. Religions are not ultimately about beliefs, he says, even though religions necessarily involve supernatural beliefs of some kind. Rather, the primary social function of religion is to bind groups together. This conclusion is straight out of Durkheim. Haidt’s innovation (well, the credit should probably go to David Sloan Wilson, who wrote Darwin’s Cathedral) is to combine Durkheim’s social explanation of religion with group-selection theory and a plausible evolutionary story (too long to relate here).

As for empirical support, Haidt cites a historical study of communes, which found that religious communes survived much longer than their secular counterparts, thus suggesting that religions substantially contribute to social cohesion and stability. He also cites several studies showing that religious people tend to be more altruistic and generous than their atheistic peers; and this is apparently unaffected by creed or dogma, depending only on attendance rates of religious services. Indeed, for someone who describes himself as an atheist, Haidt is remarkably positive on the subject of religion; he sees religions as valuable institutions that promote the moral level and stability of a society.

The book ends with a proposed explanation of the political spectrum—people genetically predisposed to derive pleasure from novelty and to be less sensitive to threats become left-wing, and vice versa (the existence of libertarians isn’t explained, and perhaps can’t be)—and finally with an application of the book’s theses to the political arena.

Since we are predisposed to be “groupish” (to display strong loyalty towards our own group) and to be terrible at questioning our own beliefs (since our intuitions direct our reasoning), we should expect to be blind to the arguments of our political adversaries and to regard them as evil. But the reality, Haidt argues, is that each side possesses a valuable perspective, and we need to have civil debate in order to reach reasonable compromises. Pretty thrilling stuff.

Well, there is my summary of the book. As you can see, for such a short book, written for a popular audience, The Righteous Mind is impressively vast in scope. Haidt must come to grips with philosophy, politics, sociology, anthropology, psychology, biology, history—from Hume, to Darwin, to Durkheim—incorporating mountains of empirical evidence and several distinct intellectual traditions into one coherent, readable whole. I was constantly impressed by the performance. But for all that, I had the constant, nagging feeling that Haidt was intentionally playing the devil’s advocate.

Haidt argues that our moral intuitions guide our moral reasoning—in a book that rationally explores our moral judgments and aims to convince its readers through reason. The very existence of his book undermines his unidirectional model running from intuition to reasoning. Being reasonable is not easy; but we can take steps to approach arguments more rationally. One of these steps is to summarize another person’s argument before critiquing it, which is what I’ve done in this review.

He argues that religions are not primarily about beliefs but about group fitness; but his evolutionary explanation of religion would be rejected by those who deny evolution on religious grounds; and even if specific beliefs don’t influence altruistic behavior, they certainly do influence which groups (homosexuals, biologists) are shunned. Haidt also argues that religions are valuable because of their ability to promote group cohesion; but if religions necessarily involve irrational beliefs, as Haidt admits, is it really wise to base a moral order on religious notions? If religions contribute to the social order by encouraging people to sacrifice their best interest for illogical reasons—such as in the commune example—should they really be praised?

The internal tension continues. Haidt argues that conservatives have an advantage in elections because they appeal to a broader moral palate, not just care and harm; and he argues that conservatives are valuable because their broad morality makes them more sensitive to disturbances of the social order. Religious conservative groups which enforce loyalty and obedience are more cohesive and durable than secular groups that value tolerance. But Haidt himself endorses utilitarianism (based solely on the harm axis) and ends the book with a plea for moral tolerance. Again, the existence of Haidt’s book presupposes secular tolerance, which makes his stance confusing.

Haidt’s arguments with regard to broad morality come dangerously close to the so-called ‘naturalistic fallacy’: equating what is natural with what is good. He compares moral axes to taste receptors: a morality that appeals to only one axis will be unsuccessful, just as a cuisine that appeals to only one taste receptor will fail to satisfy. But this analogy leads directly to a counterpoint: we know that we evolved to love sugar and salt, but this preference is no longer adaptive; indeed, it is unhealthy. It is equally possible that our moral environment has changed so much that our moral senses are no longer adaptive.

In any case, I think that Haidt’s conclusions about leftist morality are incorrect. Haidt asserts that progressive morality rests primarily on the axis of care and harm, and that loyalty, authority, and purity are actively rejected by liberals (“liberals” in the American sense of leftists). But this is implausible. Liberals can be extremely preoccupied with loyalty—just ask any Bernie Sanders supporter. The difference is not that liberals don’t care about loyalty, but that they tend to be loyal to different types of groups—parties and ideologies rather than countries. And the psychology of purity and desecration is undoubtedly involved in the left’s concern with racism, sexism, homophobia, and privilege (accusing someone of speaking from privilege creates a moral taint as severe as advocating sodomy does in other circles).

I think Haidt’s conclusion is rather an artifact of the types of questions that he asks in his surveys to measure loyalty and purity. Saying the pledge of allegiance and going to church are not the only manifestations of these impulses.

For my part, I think the main difference between left-wing and right-wing morality is the attitude towards authority: leftists are skeptical of authority, while conservatives are skeptical of equality. This is hardly a new conclusion; but it does contradict Haidt’s argument that conservatives think of morality more broadly. And considering that a more secular and tolerant morality has steadily increased in popularity over the last 300 years, it seems prima facie implausible to argue that this way of thinking is intrinsically unappealing to the human brain. If we want to explain why Republicans win so many elections, I think we cannot do it using psychology alone.

The internal tensions of this book can make it frustrating to read, even if it is consistently fascinating. It seems that Haidt had a definite political purpose in writing the book, aiming to make liberals more open to conservative arguments; but in de-emphasizing so completely the value of reason and truth—in moral judgments, in politics, and in religion—he gets twisted into contradictions and risks undermining his entire project.

Be that as it may, I think his research is extremely valuable. Like him, I think it is vital that we understand how morality works socially and psychologically. What is natural is not necessarily what is right; but in order to achieve what is right, it helps to know what we’re working with.

View all my reviews