48 Hours in London

I’ve fallen far behind in my travel posts, and now I find myself in the embarrassing position of writing about a trip I took over a year ago. It also seems that, no matter how hard I try to be brief, I end up writing more and more. Well, enough prefatory remarks; on to business.


Introduction

For an American, there is something religious about visiting London for the first time. We have been hearing about the place all our lives. Dry humor, pints of beer, red phone booths, black taxis, fish and chips, bad teeth, good tea, bad weather, good tikka masala, the British Invasion, the British Parliament, the British Empire, Queen Elizabeth, Queen Victoria, Shakespeare, Dickens, the Beatles, Monty Python, Doctor Who, Harry Potter—London is the focal point of all our stereotypes, good and bad, of England and the English.

This is important for us Americans, since England is the only other country whose media we regularly consume. English media matters so much to us because of our shared language. Unlike in Spain—where English-language songs often play on the radio (and people sing along without understanding the lyrics), and where American shows, dubbed into Spanish, are extremely popular—in the United States we don’t listen to music in a foreign language if we can help it, and we only watch television that was originally made in English (dubbing looks silly). This provincial preference limits our foreign media mainly to England and Australia, and England has been the clear favorite.

A consequence of this popularity of English media is that Americans have internalized a highly partial picture of the English character. We associate the English with sophistication, elegance, wit, good manners, royalty, and the historical past.

This is almost the polar opposite of the English reputation in Spain. You see, Spain is an excellent travel destination for English holidaymakers—cheap, close, and sunny—and as a result, lots of English tourists come to Spain looking for a good time. A “good time” entails drinking, of course, and thus there are lots of drunken English people stumbling around city centers on any given night. As a result, Spaniards think of the English, not as genteel aristocrats, but as tipplers.

(Parenthetically, the English also have very different alcohol consumption habits than the Spanish. On a Friday or Saturday night, people in Spain begin drinking in earnest after dinner—which means 11 pm at the earliest. They often don’t even leave their apartment to go to bars and clubs until 2 in the morning, and don’t return home until dawn the next day. In London, on the other hand, drinking begins as soon as people leave work, at 5 pm. This is due, in part, to an old law that required pubs to close at 11. So the English stop drinking when the Spanish barely start.

(This difference in schedule is supplemented by a difference in speed and volume. Spaniards are rarely visibly drunk. I have seen very few Spanish people stumbling from alcohol; instead, they focus on maintaining a level of comfortable tipsiness for a long period of time. Compared with Brits, Spaniards sip their drinks, and eat a lot while they drink. English people, by contrast, get properly drunk, and fast, much like many Americans do. As a consequence, Brits can be very loud drinkers—in my experience, at least. This is an especially interesting contrast, I think, since in every other circumstance Brits tend to be much quieter than Spaniards.)

Of course, both the American and the Spanish stereotypes are over-generalizations, based on very partial exposures to the English character. Partial and false as they may be, however, these stereotypes did succeed in endowing England with a certain contradictory mystique—a place full of witty drunkards, elegant and boisterous, cultured and slovenly. I needed to go see London for myself, to catch a glimpse of the reality behind the reputation.

My problem was that, at the time, I was particularly low on funds. And however distorted all the other stereotypes may be about London, this one is true: London is expensive. Well, it’s expensive if you enjoy eating, sleeping indoors, using transportation, and doing any activity besides walking and sitting outside. This was a few months before the Brexit referendum, and the pound was still strong.

As a result, my short trip to London—barely 48 hours—became a frantic exercise in traveling cheaply. I didn’t buy an Oyster card, and I didn’t use the Tube or the buses. I ate “meal deals”—pre-packed sandwiches from Tesco supermarkets, not terribly delicious—instead of paying for dinner in a restaurant. And I focused on visiting museums, which are free in London, instead of other popular sites.


Arrival & First Impressions

As usual, I traveled with Ryanair. My plane arrived at Stansted, one of the most distant of London’s airports, where I had to fill out a form and wait in a long queue to enter the country. The English, it seems, are almost as paranoid about their borders as we are in the United States. From Stansted, I took the so-called Stansted Express to Liverpool Street Station in central London. The ride took about an hour, and was not cheap. This is a typical Ryanair experience: the flight is inexpensive, but uncomfortable; and you land in an unpopular airport far outside the city. I am a loyal customer.

I sat in the train—dazed from lack of sleep, filled with nervous energy, physically miserable but mentally awake—and stared out the window in disbelief. Was I really here? Was this England, the land of dry humor and wet weather? I gazed out at fleeting patches of green countryside as the train sped by, and savored the delightful names of the train stations between Stansted and London. (Of course I can’t remember any of the names now; but as I look on Google Maps, I find such gems as Matching Tye, Hatfield Heath, Hastingwood, Theydon Bois.)

English novels—from Austen to Dickens to Rowling—have powerfully shaped the American imagination of the past; and thus, by association, English place-names strike many Americans as irresistibly charming. Each name seems to be the title of another great novel, filled with irony and romance, and written with quaint wit. Likewise, the English countryside—a neatly trimmed park whose rolling hills are covered in a grey mist—is featured in so many films that even the snatches of green I saw out the train window filled me with delight.

These feelings of romance and fantasy are, I suspect, nearly universal for Americans visiting England, and specifically London, for the first time. England is the only foreign country we regularly see on television and in the movies. This gives the experience of visiting England the effect of stepping onto a movie set—everything is familiar, and yet unreal. The same thing happens, I believe, to many who visit New York for the first time. Many people have independently told me that it felt like they were in a movie, since so many landmarks and features were familiar to them from films.

The train arrived, and I got out to go find my Airbnb. I was on edge. The combination of sleep deprivation (the flight was terribly early) with the usual stress of navigating a foreign city (my phone didn’t have service), plus the feeling of unreality that comes from actually being in a place I’d been hearing about all my life—all this combined to make me edgy and oversensitive. The double-decker red buses, the black taxis, the cars driving on the wrong side of the road, the eccentric road signs (including the delightfully existential “Changed Priorities Ahead”), pubs with absurd names (“Ye Olde Cheshire Cheese,” on Fleet Street), the red phone booths scattered seemingly at random (apparently, the city had once sold off these phone booths, only to regret the decision and then repurchase as many as it could)—my first impressions of London did contain many of ye quaint olde stereotypes that I expected.

Red Telephone Booths

But one thing that, as a New Yorker, always surprises me when I visit a new city is the lack of skyscrapers. Madrid has only four buildings which can reasonably be called skyscrapers, and they’re located in the north of the city, far outside the center. London has its own share of skyscrapers, to be sure. But walking around in London has nothing of that vertiginous feeling that New York produces, the feeling of being crushed by steel and glass, the feeling of constantly craning one’s neck. I had always thought of London as being a huge and imposing place, so this lack of skyscrapers did disconcert me somewhat.

In many other respects, however, London can be easily compared with New York: the bustling streets, the flashy billboards and ever-present advertisements, the endless shopping, the infinite variety of chain restaurants, the ethnic diversity, the smell and the grime. London even has the same phony Buddhist monks trying to scam tourists into giving them money. (You can find a great story about them here; and in case you’re wondering, if someone is aggressively asking you for money, you can safely assume that they’re not a Buddhist monk.)

As I discovered when I got to my Airbnb, one way London is incompatibly different from both my country and Spain is the style of its electrical outlets. I had to buy a power adapter there; and like everything in London, it wasn’t cheap. Be wise and buy one ahead of time.

These were my first impressions, hazy and distorted, as I walked from the station to my Airbnb. Already I was running short of time. It was midday Friday, and my flight home would leave early on Sunday. So I set out to the first place on my list, the National Gallery.


A Note on Cuisine and Language

I should preface my trip to the National Gallery with a mention of a small restaurant, the Breadline, which can be found nearby. I decided to eat there because it had fish and chips—I know it’s silly, but I couldn’t leave London without eating that iconic meal—and because its prices were eminently reasonable. The food was plain and basic, but nonetheless, for me, extremely satisfying. I even returned the next day to try an English breakfast, which I quite liked.

English food has a poor reputation, and I understand why; it is hardly a cuisine designed to have universal appeal. Nevertheless, if those two meals can be trusted to give a fair representation (an open question), I can say that I am a fan. There is something about greasy fried potatoes and fried fish, covered in white vinegar, that just feels right to me. And sausage and beans for breakfast is brilliant.

While I was eating, a young British man came in and said “A small white coffee to take away.” This is an excellent example of the differences between British and American English. This sentence, uttered in New York, would produce only bafflement. You would have to translate it to “A small coffee with milk to go,” if you wanted to be understood. I run into these differences constantly as I teach English. Before coming to Spain, I thought the differences between British and American English were minor and negligible, besides the accent; but I was wrong. Working with British textbooks and materials can be extremely frustrating, since often I don’t know what certain expressions or words mean—which is embarrassing when my students ask. Not only that, but there are a few subtle grammatical differences between the dialects, such as in the use of the perfect tense. But this is a digression of a digression; now to the museum.


The National Gallery

It is immensely satisfying to simply walk into a museum, without fees or lines, like it’s your own home. The experience is even better when the museum is one of the best in the world. The National Gallery is behind only the Louvre, the British Museum, and the Metropolitan in visitors per year; and this is especially impressive considering the museum’s collection is comparatively small, easily viewable in three hours or so. But for anyone with any sensitivity to art, those three hours will be among the most rewarding of their aesthetic life; for the National Gallery’s collection is remarkable both for its breadth and its excellence. The only museums I’ve visited that compare in the average quality of the paintings on display are the Prado in Madrid and the Musée d’Orsay in Paris. Every room in the gallery contains a masterpiece, often many.

Indeed, there are so many wonderful paintings—paintings I had seen and loved in art history books—that I cannot even hope to mention all of them in this post, much less describe the impression each one made on me. Nevertheless, I can’t resist the temptation to dwell on some of these exquisite works of the human imagination.

The first painting which attracted my attention was the portrait of Erasmus by Hans Holbein the Younger. This is an extraordinary demonstration of the portraitist’s art; instead of a photographic image, capturing the physical surface of the famous writer, we get a glimpse of the writer’s mind. As in any excellent portrait, the inner is made manifest in the outer without compromising the realism of the image. His sharply angular face bespeaks cleverness; his gaunt features reveal a life dedicated to the mind and not the body; his half-closed eyes and serene expression show calm intelligence and a wisdom that sees beyond earthly troubles. We also catch a hint of Erasmus’s complacent vanity: he looks a little too comfortable in his fine fur robe, and his hands rest a little too easily upon a volume of his own writings. Is there a more convincing portrait of the scholar?

Erasmus

Holbein has an even more famous work on display at the museum: The Ambassadors. This is a portrait of two aristocratic ambassadors (their identity was long debated), in a room which includes an exquisitely rendered still life of several objects—a lute, several globes, a psalm-book, and various instruments of navigation. But the most memorable, and bizarre, feature of this painting is the giant anamorphic skull in the center. Anamorphic means that the image is purposefully distorted, so that it must be viewed from a specific angle to appear in proper proportion. When viewed from the front, the skull is just a strange grey diagonal shape; but when you walk to the painting’s left, the skull comes into focus. I can only imagine the technical virtuosity required of a painter to pull off this trick with such consummate perfection; when seen properly, the skull is finely detailed, beautifully shaded, and anatomically accurate. Holbein painted this tour de force in 1533.

The Ambassadors

The National Gallery also possesses what is probably the most famous papal portrait in history: Raphael’s portrait of Pope Julius II. Julius was the most important of the High Renaissance popes; he is responsible for the beginning of the Vatican Museums, Michelangelo’s commission to paint the Sistine Chapel, and Raphael’s commission to paint the Vatican Library. Not only that, Julius originated the idea of tearing down the original St. Peter’s and building a new one. Such a man must have had enormous energy and a deep sensitivity to art. And yet in Raphael’s portrait we see him weary, worn-out, and melancholic. He is gently gripping a handkerchief in one hand and his chair in the other; his eyes are hollow, and the wrinkled skin of his face droops loosely from his skull. He seems to be feebly holding on to the last threads of life, staring at his own end with resignation. Such terrible realism was entirely new in papal portraiture.

Julius II

Before going to the National Gallery, I didn’t look up any of the famous pictures that could be found there; so I was surprised and delighted when I found myself face to face with one of my favorite pictures, Jan van Eyck’s Arnolfini Portrait. I remember first seeing this portrait in Ernst Gombrich’s Story of Art, and being stunned. In the context of its time, 1434, the portrait is startling for its realism and its domestic subject: a marriage contract taking place in a bedroom. To a modern eye, perhaps the portrait no longer seems terribly realistic; the husband, with his pale expressionless face and his oversized clothes, always looks to me like he belongs in a Tim Burton film; but this only adds to its charm. The little toy-sized dog in the foreground—as adorable as ever—and the mirror in the background—showing us the whole scene in reverse, in a distorted perspective—add to the painting’s undeniable power.

Arnolfini Portrait

There are dozens more paintings—of equal importance and beauty—to which I could devote an unworthy paragraph; but this would only swell this post to unartistic dimensions. Yet I cannot move on without mentioning the National Gallery’s collection of Italian Renaissance art. This includes the Pollaiuolo brothers’ masterpiece, The Martyrdom of Saint Sebastian, a landmark in the realistic use of perspective, with the saint encircled by crossbowmen.

The Martyrdom of Saint Sebastian

Even more important is one of the two versions of Leonardo da Vinci’s Virgin of the Rocks. The other one is in the Louvre, and is usually considered the original; but I think the Gallery’s version, with its deeper shades and more dramatic chiaroscuro, is lovelier. Apart from its beauty, this painting is notable for its setting. Leonardo, as is typical of him, creates a carefully naturalistic background for this traditional Biblical scene. In previous eras, the background of paintings was almost entirely neglected; monochrome gold foil set off the human figures. But in Leonardo’s masterpiece, the background—a cave, which was an unprecedented choice—swallows up its subject. Such careful attention to rendering nature was something new in history.

Virgin of the Rocks

I also cannot move on without mention of Rembrandt. The National Gallery has several of Rembrandt’s most highly regarded works, including two of his self-portraits. Looking into the eyes of a famous artist, as he stares back at you from a self-portrait, is an unnerving experience; suddenly the gap in space and time that separates your lives vanishes; the artist has transcended death, and even transcended life; his focused gaze, dry pigment on a canvas, will outlast even your own living flesh. On a less dramatic note, the Gallery also has one of Velázquez’s most famous works: The Rokeby Venus, famous for being one of the few female nudes in Spanish art (another being Goya’s La Maja Desnuda).

I will muster my self-control and mention only two more works.

By common consent, the greatest painter in English history is Joseph Mallord William Turner; and several of his finest works can be seen at the Gallery. Of these, my favorite is Rain, Steam, and Speed—The Great Western Railway. A locomotive emerges from a tempest, a black tube bursting through grey fog. Every line and color is blurred, as if seen through an out-of-focus camera. All we can see in the background are hints of blue sky, a bridge, and the river below, where some people are rowing in a little boat.

Rain, Steam, and Speed

In this painting, Turner seems to have both anticipated and surpassed the impressionists in rendering momentary flashes of life. The swirl of indistinct color is absolutely hypnotic; yet the painting is not merely pretty, as are many impressionist paintings, but a convincing symbol of the relationship between human technology and natural power. The train punches through the mist, in a confident gesture of industrial might; and yet the stormy clouds that swirl all around menace the lonely black locomotive. Both the train and its surroundings are impressive, even sublime, but also inhumanly vast and cold; and the two slight figures in the rowboat below reveal our true vulnerability in the face of these forces.

The last painting I’ll mention before forcing myself away—even remembering the Gallery is a pleasure—is Bathers at Asnières, by Georges Seurat. This painting was completed in 1884; but it was not until many years after Seurat’s death that it was recognized as a masterpiece. It depicts several middle-class Parisians relaxing by the Seine on a hot summer day. The technique Seurat used is almost pointillistic in its precise use of strokes and colors, relying mainly on bright horizontal daubs. The combination of statuesque modeling and poses—the bathers’ heavy bodies and horizontal orientation remind me of an Egyptian frieze—with Seurat’s delicate treatment of brushstrokes, makes the painting look crystal-clear from afar and blurred from up close. The treatment really captures the feeling of heat: how everything can seem perfectly clear in the summer sun, and yet distant objects are blurred.

Bathers

Complementing this tension between form and vagueness is an emotional tension between fun and desolation. At first glance the bathers are having a wonderful day. They are at leisure, enjoying the sunshine, the smooth grass, and the cool water. But then you notice how isolated each of the figures is. They are all in their own world; many seem lost in thought. Their expressions are emotionless; their hunching posture bespeaks weariness. The factory spewing smoke in the background adds another hint of gloom.

To me, the painting is a devastating portrait of the isolation and meaninglessness of contemporary life. We imagine the figures working 9 to 5 jobs in offices during the week, performing mechanical tasks that mean nothing to them. Then they go to their usual restaurant for a bite to eat and then to their apartment to sleep. When with their friends, they drink and talk of trivialities. On a holiday, they come here, and stare into space, unable to articulate to themselves or anyone else the strange sense of emptiness that engulfs them whenever they have a free moment. It is a comfortable world that conceives of nothing beyond wealth and luxury; and its members, when released from their usual routine, can think of nothing to do. Convention dictates that they come here to ‘relax’. The painting is the perfect complement and illustration of Albert Camus’s The Stranger: it is a painting of a world of strangers, to one another and to themselves.


The next day, I headed to one of the other great museums in London: the British Museum. Originally I planned to include my account of that great institution in this post; but I ended up writing so much that I decided that the British Museum deserved its own separate essay, which you can find here.


Brief Snatches of London Life

When I wasn’t visiting museums, I had a few spare hours to wander around the city. This allowed me to glimpse, all too briefly, most of the major sights in London—the places that must be given a mention and a respectful nod in any post about that old city.

The first landmark I insisted on seeing was Big Ben. A trip to London without seeing that venerable clocktower would be like a trip to Pisa without its leaning campanile. I was so ignorant when I visited London (and remain so, despite strenuous efforts) that I didn’t even know that Big Ben was attached to the British parliament building, the Palace of Westminster. It was a delightful surprise to find these two landmarks joined together.

Westminster Palace

Although it looks Gothic, the palace is of fairly recent construction. The old Westminster palace burned down in 1834 (Turner witnessed the fire, and painted several pictures of it). The new building was designed by Charles Barry, who used a Gothic revival style in his plan. I doubt there is any parliament building in the world so elegant, so imposing, and so charming. Few experiences in London, if any, can do a better job of creating that Hollywood sensation of being in a movie than standing on Westminster Bridge, seeing that palace and the clocktower, and hearing the bells of Big Ben chime out the hour.

From there I walked away from the bridge, pausing to examine the statue of Winston Churchill (covered in pigeon droppings) in the nearby plaza, and went to Westminster Abbey. In my very limited experience, this is easily the most beautiful church building in London. I can’t say much about it, because I didn’t go inside—it was closed by the time I arrived, and in any case I didn’t want to pay the steep entry fee—but I can say that its façade is exquisite, especially the north entrance. Funnily enough, Westminster Abbey is not an abbey—at least, not anymore. Originally it was a Benedictine abbey, but after the Protestant Reformation, and after a brief stint as a cathedral, it was designated a church. For nearly 1,000 years it has been the site of coronations and royal weddings.

Westminster Abbey

The walk from Westminster Abbey to Buckingham Palace is about 15 minutes—slightly longer if, like me, you walk through St. James’s Park. I highly recommend this, since the park is absolutely lovely.

Architecturally, Buckingham Palace isn’t much to look at; it presents itself as a cheerless, square, grey block. The building was not originally designed as a royal residence; it only became the seat of the monarchy in 1837, during the reign of Queen Victoria. The palace takes its name from the Duke of Buckingham, who originally had it built. It sits at the end of the Mall—a major road often used for processions—in a roundabout in which stands the golden Victoria Memorial, which commemorates that famous queen.

Buckingham Palace

Even so, neither the monument nor the palace would attract a great deal of attention, I suspect, were it not for the Queen’s Guard. Equipping guards with antique weapons and dressing them in bright red outfits with fluffy tall hats seems to be one of those conspicuously impractical things that wealthy and powerful people do to showcase their wealth and power. Your average rich entrepreneur or politician could not afford to keep a corps of totally inefficient guards performing ceremonial movements all day (which are, naturally, supplemented by other guards using modern weapons, keeping careful watch, and wearing less conspicuous clothes). Here is an incident that demonstrates the guards’ mainly ceremonial role: in 1982 a man managed to evade the palace guard and make his way to the Queen’s bedroom, where he was apprehended by the city police.

I spent some time watching the guards march back and forth, their limbs as stiff as a wooden nutcracker. Purely as athletic performers, the soldiers are undeniably impressive: the timing, the coordination, the posture, the endurance—it must require excellent physical condition and serious training to keep up the routine, especially considering that they wear those clothes even in hot weather. The guards now mainly function as a tourist attraction and an amusing symbol of British culture; but to be fair, the Queen’s Guard aren’t the only soldiers to wear funny clothes (think of the Swiss Guard in the Vatican) or to engage in elaborate ceremony purely for show (think of the tomb of the unknown soldier in Washington D.C.).


By the time I left the British Museum the next day, I only had about 6 hours left before I’d have to go to sleep and say goodbye to London. The best way to get the most out of this time, I figured, was a free walking tour. The guide was excellent, and the tour just what I wanted. Unfortunately I don’t remember the name of the company or of our guide; he introduced himself as the only American tour guide in London—so he shouldn’t be too hard to find. (But apparently this isn’t true; a Google search reveals an American woman named Amber who also gives tours.)

The tour focused on the City of London. You may not know—I certainly didn’t—that the “City of London” refers to the original part of the metropolis, founded by the Romans way back when. This original City of London is now only a tiny fraction of the greater metropolitan area; indeed, it is quite a small place, with an area of only one square mile. This city is far older than England; it has enjoyed special privileges (or, to use the phrase of the Magna Carta, “ancient liberties”) since the Norman Conquest; and even now it retains the privilege to create many of its own regulations, independent of the greater metropolitan area or of England herself. The City has laxer building codes, which explains why so many of London’s skyscrapers are found there, and also looser financial regulations, which explains why it remains the center of London’s economic life. The City of London is home to the Bank of England, the London Stock Exchange, and Lloyd’s of London (the insurance market).

The tour began at Temple Station. Our guide took us along the river and then down Fleet Street, giving us bits of detail about London’s past and present. We walked by Ye Olde Cheshire Cheese, one of the oldest and best-known pubs in London, famous both for its silly name and its dark, windowless interior; and this prompted our guide to embark on a long, impassioned explanation of London pub culture. Though an American, he was clearly a convert to the pub way of life; he had strong opinions about what made a pub good or bad; and he had pub recommendations for nearly every area of the city. (I was so inspired that, after the tour, I went into a pub to get a drink; but the beer was so expensive and so mediocre that my disappointment was even more bitter than the beer.)

Soon we reached St. Paul’s Cathedral. The tour didn’t pause for us to go inside; and, in any case, the entrance fee is formidable enough to discourage penurious travelers like me. Among other things, St. Paul’s is famous for having one of the tallest domes in the world. But the present St. Paul’s replaced an older, even taller cathedral (well, it was taller before its spire was destroyed by lightning), which was badly damaged in the Great Fire of London in 1666. The present building was designed by Sir Christopher Wren, and completed in his lifetime. Wren was, if not the greatest, at least the most prolific architect in England’s history: he designed and oversaw the construction of no fewer than 52 churches after the Great Fire. The architect himself is buried in the crypt of the cathedral, in a modest grave that says: “Reader, if you seek his monument—look around you.”

St. Paul's Buildings

From there we moved on to the Monument to the Great Fire, also designed by, you guessed it, Sir Christopher Wren. As our guide pointed out, the monument—a tall Doric column that originally rose far above its surroundings—is now hemmed in by neighboring buildings and dwarfed by modern architecture. The guide used this as an example of the tendency of Londoners to be more interested in the future than the past.

To emphasize this point, he directed our attention to the skyscraper at 20 Fenchurch Street, a bizarre, top-heavy construction, completed in 2014, whose shape quickly earned it the nickname ‘The Walkie Talkie’. This building won—and earned—an award for ugliness. (It was also discovered that the building’s concave shape focused the sun’s rays strongly enough to damage cars, ignite doormats, and fry eggs; a screen has since been installed to prevent this from happening.) But the Walkie Talkie is only one of the many skyscrapers that have sprung up in the City of London in recent memory, despite concerns that these tall monstrosities will dwarf and obstruct historic buildings.

Walkie Talkie

From there, we went down towards the river and ended up under London Bridge. Many people, including me, assume from the nursery rhyme that London Bridge is a tourist attraction; indeed, the justly famous Tower Bridge, which spans the Thames nearby (see below), is often mistakenly called London Bridge. Sad to say, the current London Bridge is a brutalist span of concrete and steel, a minimalist slab stretching across the Thames, without charm, beauty, or really any distinguishing quality.

The nursery rhyme dates from a time when a different London Bridge spanned the Thames. The ‘Old’ London Bridge, built in 1209 and demolished in 1831, rested on stone arches and was covered in wooden buildings (which proved to be a fire hazard). It was famous for being the site where the severed heads of those executed for treason, dipped in tar and impaled on pikes, were displayed for passersby to take heed. William Wallace’s head was the first to play this role.

In 1831, the ‘New’ London Bridge was built to replace the crumbling medieval construction; this bridge also rested on arches, but it was taller and so allowed bigger ships to pass underneath. In the 1960s it was discovered that London Bridge was falling down (sinking into the riverbed) and had to be replaced. In true English entrepreneurial spirit, the bridge was sold; an American oil tycoon, Robert McCulloch, bought the bridge, disassembled it, shipped it to the United States, and then reassembled it in Lake Havasu City, Arizona—a little piece of English history in the American Southwest. The current behemoth was finished in 1972.

The tour came to an end in front of the Tower of London. Once again, I didn’t go inside that old castle—I am really exposing myself as a pathetic traveler, I know—but contented myself with walking around the perimeter. From the outside, the Tower of London doesn’t seem to merit the name “tower”; the White Tower, the citadel at the center of the castle complex, is less than 100 feet tall—almost invisible in the context of London. The castle is quite venerable; it was first constructed by the Normans in the 11th century, and was expanded over the succeeding two centuries. At present the Tower of London is a large complex with two concentric layers of stone walls surrounding the central keep, and some additional buildings such as a chapel and a barracks. The outer wall is surrounded by a moat, now dry. Besides the castle itself, visitors can see several historical objects on display, such as Henry VIII’s armor and—most notably—the Crown Jewels of England.

The Tower of London has played an important and often nefarious role in English history. For a long time it served as the English version of the Bastille, a prison for traitors and other political pests. Anne Boleyn, unfortunate wife of Henry VIII, is the most famous prisoner ever to be held and executed in the tower; legend has it that her ghost still travels through the old castle, her severed head under her arm. But as I stood there looking at that stone pile, I thought only of Thomas More, the English intellectual who dreamed of a utopia with freedom of religion, and who was imprisoned in the tower and then executed (also by Henry VIII) for being true to his Catholic faith. More’s head was eventually covered in tar and displayed on a pike on the old London Bridge.

The tour guide ended with a short speech, which I will try to reproduce here:

“In this tour, we’ve seen many different types of power. We have the political and military power of the Tower of London, the religious power of St. Paul’s cathedral and the Church of England, and the economic power of the London Stock Exchange. And this, ultimately, is what the City of London has always been about: the use of power to control its own destiny. It’s a place oriented towards the future, constantly striving to master whatever is the next form of social power in order to maintain its dominance in the world’s affairs.”

And this strikes me as perfectly true.

From the Tower of London I made a quick walk to the nearby Tower Bridge. This is the iconic bridge often mistakenly called London Bridge. It’s a pretty sight, with two neo-Gothic towers supporting two platforms, one higher and one lower. Built in the 1890s, its design, by the architect Sir Horace Jones and the engineer John Wolfe Barry, was innovative: a combined suspension bridge and drawbridge. The idea (according to the tour guide) was to allow pedestrians to keep using the bridge even when the drawbridge was drawn up to allow ships to pass.

Pedestrians soon learned, however, that walking up the stairs in one tower, crossing the upper platform, and then walking down the stairs in the other tower took even more time than just waiting for the drawbridge to close again. Accordingly, pedestrians hardly ever used the upper platform, which came to be frequented mainly by criminals and prostitutes; it was closed in 1910. Nowadays, you need to pay an entrance fee to go up to the upper walkway. This is just another example of a brilliant idea that doesn’t take into account basic human realities: an innovative plan for a bridge that ignores the time and effort needed to climb several flights of stairs. It is certainly pretty, though.

Tower Bridge

As my last stop I made my way to Shoreditch, a neighborhood which had been recommended to me by a Londoner in my Spanish class. Shoreditch is London’s Williamsburg: a previously working class neighborhood that has been gentrified, and is now home to trendy restaurants and technology companies. The area even looks like Williamsburg, with narrower streets and older, shorter buildings, full of colorful shops and cafes. The population, too, is almost indistinguishable from its New York counterpart: men with large mustaches, plaid shirts, and suspenders; women with half their heads shaven, nose rings, and small, tasteful tattoos—in a word, hipsters. I felt right at home. The gentrification is so extreme as to be beyond parody; there is, for example, a cafe, the Cereal Killer Cafe, that serves only breakfast cereal.

To illustrate my own complicity in the world of hipsterdom, I went to a cafe famous for its rainbow-colored bagels, the Brick Lane Beigel Bake. This little cafe is open 24 hours a day, it is cheap, and it is excellent. I didn’t order a rainbow bagel, but instead a ‘hot salt beef’ on a roll. The beef comes with pickles and strong, superb mustard. I had two (for a very reasonable price) and I was stuffed. Another positive mark for English cuisine.

My time was up. My flight was leaving at seven the following morning, which meant I had to wake up at four to give myself enough time to walk to the train station and take the train to the airport.

All told, I spent less than 48 hours in London. I was constantly tired, hungry, and physically exhausted. I ate little, I slept less, and I walked almost constantly—more than 10 hours each day. I spent as little money as I could, and still the trip was expensive. I learned as much as I could, but left the vast majority of the city unseen and unknown. The trip was a physical ordeal and a financial hardship. But in return for all this trouble, I encountered, however briefly, one of the great cities of the world.

Lessons from the British Museum

The British Museum is a project of the Enlightenment. It is one of the oldest museums in the world—older than both the Louvre and the Prado—and one of the biggest. Its collection began when Sir Hans Sloane, a doctor and naturalist, bequeathed his private collection of “curiosities” to the state. The collection grew from there, with the goal of encompassing all of human history under one roof. And because the British Empire soon came to dominate half the globe, this ambition was not so ludicrous as it may at first appear. Ironically, you can probably find finer artifacts in the British Museum than in the countries that the exhibits represent.

Museum Facade

The museum’s massive collection is housed in an equally massive neoclassical building designed by Robert Smirke. Its collection is divided by era and area: Prehistory, the Ancient Near East, Ancient Egypt, Ancient Greece, South Asia, East Asia, the Americas, Africa, and Oceania. Wandering around the museum is like getting lost in a copy of a World History textbook brought to life. The collection is so vast and detailed that the visitor is simply overwhelmed. There is far too much information to take in and process in one visit—even in a dozen visits. Each artifact on display deserves deep study; and when each room is full of hundreds of these artifacts, there is not much you can do except dumbly gape. Likewise, there is not much a writer can do except emulate Sir Hans Sloane and collect curiosities.

Central Room

I began in the Ancient Near East: Mesopotamia, the cradle of civilization. There is something sacred about the simple fact of age. Seeing ancient artifacts is the closest we get to time travel. The passing years corrode all material things, just as the gentle flowing of a stream eventually cuts through rock. The physical bodies of these ancients have long decayed; everything they knew and loved is gone. And yet, 5,000 years later, the messages they carved still preserve an echo of their voice.

Cuneiform tablet

Every time I look at a cuneiform tablet—its crisscrossing wedges and lines unintelligible to me, but visibly a language—I find myself profoundly moved. For all I know, the message is a record of a banal commercial exchange—so many goats for so many bushels of hay—but the simple fact of writing something down, of imprinting words indelibly, signals the beginning of that noble and doomed war against time—the war we call ‘civilization’.

Seeing these first scratches in stone is like catching a glimpse of the universe a few seconds after the big bang. It marks the commencement of something entirely new in history: the ability to transfer knowledge across generations; to develop literature, philosophy, mathematics, and science; to create unchanging codes of law to fairly govern societies; to make the shadows of thought external and permanent. Less fortunately, the beginning of writing also marks the origin of bureaucracy and accounting—indeed, this seems to have been its original purpose, as communities grew too big to be governed by word of mouth.

Perhaps the most impressive object in this section is the Standard of Ur. (This is one of the objects chosen in Neil MacGregor’s series, A History of the World in 100 Objects. You can listen to the segment here. I wish I had read the accompanying book, which looks excellent, before my visit to the museum; it’s on my list.)

Standard of Ur
Detail from the Peace side

It is called a ‘standard’, but nobody really knows what it was used for: a soundbox for a musical instrument or a box to store money for sacred projects—who can say? All we can really determine is that it almost certainly was not a standard, since the images are too detailed to be seen from far away. The object dates from around 2600 BCE and consists of a box whose sides depict scenes of war and peace, in three rows of images that look like a comic book. On the war side, we see an army marching off to battle, with armored foot soldiers and men in chariots; below, these charioteers trample enemies underfoot. On the reverse side, we see men seated at a banquet, drinking, while a harpist and a singer provide background music. Below, men herd animals and carry sacks of goods on their backs, presumably to offer them in tribute to the king.

This standard was found at the site known as the Royal Cemetery of Ur, along with objects like those depicted on both the War and the Peace sides. Judging from the numerous skeletons in the tomb, it seems that the Sumerians had a practice similar to the Egyptians’: upon the death of kings and queens, the royal attendants were put to death to serve their masters in the afterlife. I always shudder when I hear about these practices. Drinking poison to follow your king in death seems the height of unjust absurdity. I feel angry on behalf of the attendants, who lived in oppression and did not even find freedom in their master’s death. And yet, despite my anger, I can’t help feeling a sort of awe at the level of devotion displayed by this practice. To identify so strongly with a leader that you follow them in death seems hardly human; just as an ant or bee colony dies with its queen, so these human groups voluntarily put themselves to death.

Violence and oppression thus form the subject matter of this artifact and surround its discovery. On one side we see the king marching off to war and killing enemies; on the other, the king enjoys the tribute of his hard-working subjects. Nowadays it is impossible to see the society depicted on the Standard of Ur as anything but monstrous: a predatory upper class stealing from the poor, and then sending the lower class off to war to defend their bounty and to capture slaves.

But it is worth asking whether the beginning of civilization could have been any different. Humans had just begun farming and forming cities. For the first time in the history of our species, we were living in large, permanent settlements alongside strangers. For the first time, we had enough resources to allow some people in the community to specialize in tasks other than gathering food: priests, soldiers, musicians, administrators, rulers, and artisans. The accumulation of resources always invites raids from without and crimes from within; and fending off these attacks requires organization, leadership, and violence. A community simply couldn’t afford to be anything but authoritarian and militaristic if it hoped to survive. It is an unfortunate fact of human history that justice and security are often at odds—a fact we still confront in the question of surveillance and terrorism.

As a parting thought, I just want to note how remarkable it is that we can look at something like the Standard of Ur—a luxury product made 5,000 years ago, by people who spoke a different language, most of whom couldn’t write, who had a different religion, who lived in a different climate, a people whose experience of the world had so little in common with our own, a people who lived just at the beginning of history—we can look at this object and find it not only intelligible, but beautiful. We experience this same miracle when we read the Epic of Gilgamesh—a story still moving, 4,000 years after it was written down.

In my first anthropology class we learned that humans are cultural creatures, fundamentally shaped by their social environment. But if this were true—if our inborn nature were something negligible and our culture omnipotent—wouldn’t we expect a civilization which flourished in such different circumstances to give rise to art that we couldn’t even hope to understand? And yet, so universal is the human experience that, 5,000 years later, we can still recognize ourselves in the Standard of Ur.

This constancy of our nature is not only manifested in great works of art. For me, the most touching illustrations of this are the little baubles and trinkets, the sundry domestic items that give us a taste of daily life in that faraway age. We see the universal human urge to beautify our bodies demonstrated in the jewelry of Ancient Greece, Persia, and Egypt—the rings, earrings, pendants, necklaces, armlets, and bracelets which still glitter and charm today; indeed, designs inspired by ancient examples can be bought in the museum store. We see it also in one of the oldest board games ever discovered, the Royal Game of Ur, whose board and pieces are instantly recognizable to the modern visitor. A cuneiform tablet has also been found which explains the rules, allowing scholars to play the game 4,500 years after its creation (though I can’t find out whether they enjoyed it).

Yet if the continuities are striking, so are the divergences. I feel the gap that separates the present from the ancient past most poignantly whenever I look at a papyrus scroll covered in Egyptian hieroglyphics. Few human artifacts look more alien to me than these bits of ancient writing. Lines of simple images—eyes, storks, sparrows, hawks, snakes, scarabs, and many I can’t recognize—run up and down the papyrus, in a parade of symbolic forms. On the top and in the corner are larger drawings, depictions of mythological scenes, illustrations of dead gods and long-forgotten myths. What is most striking is how the writing is a kind of picture, and the pictures a sort of writing; the visual and the verbal are combined into a web of meaning, absolutely saturated with significance.

Hieroglyphics

The thing that is so fascinating about the culture of ancient Egypt is that, for thousands of years, through the rise and fall of dynasties and the passing away of dozens of generations, it maintained a unified, complete, and instantly recognizable aesthetic. It is immediately obvious to any visitor that they have entered the Egyptian section, whether the objects date from the first dynasty or the twentieth.

There is undoubtedly something terrifying about this continuity—terrifying that a society based on gross injustice persisted, with its culture nearly unchanged, for a span of time that dwarfs that of our own Western culture. But it is also easy for me to imagine the deep satisfaction enabled by such a complete mythology—a symbolic worldview that decorates every surface, imbues every hour of the day with importance, structures the year and explains the cosmos, penetrates into the depths of reality and even looks beyond the veil that separates life from death. I feel similar stirrings when I look at an illuminated manuscript from our own Middle Ages, an artifact not so different from the Egyptian scrolls.

Sarcophagus

In any exhibition on ancient Egypt the mummies are always the stars—those shrunken, dried corpses carefully wrapped and sealed in stone sarcophagi to be sent down the eons. When I was there, a crowd was gathered around the mummy of a woman named Cleopatra, perhaps in the mistaken belief that she was Mark Antony’s famous paramour. Yet the most moving object in the Egypt section, for me, is the colossal bust of Ramesses II. (This was also featured on A History of the World in 100 Objects; you can listen to it here.)

Ramesses II

Ramesses II was one of the most effective leaders in all of Egypt’s history. He was born about 1,300 years before the common era, and lived some 90 years, making his reign not only among the most iconic, but one of the longest, of ancient Egypt. An energetic general, statesman, and administrator, he was most of all a builder. He presided over the construction of dozens of colossal statues, temples, monuments, and palaces. It was this Ramesses who inaugurated the Abu Simbel complex, whose great temple includes four colossal statues (20 meters, or 66 feet, high) of Ramesses himself, carved directly from the hillside. Ramesses was also responsible for the so-called Ramesseum, not a tomb, but a temple complex built for the worship of the deified Ramesses, during his reign and after his death.

The bust of Ramesses in the British Museum was taken from this Ramesseum. It is only a fragment: the base of the statue, in which the pharaoh is seated, is still in the Ramesseum. Napoleon’s troops first tried and failed to move the statue; then the British hired an Italian adventurer, Giovanni Belzoni, to do it, who used a combination of pulleys, hydraulics, and old-fashioned manpower. As Neil MacGregor notes, it is a testament to the power and ingenuity of the Egyptians that, 3,000 years later, their statues still require technical tours de force to move. Imagine the discipline, organization, and sheer backbreaking sweat it took to move the original stone.

Cracked and battered as he is, the statue still has the effect that its creator intended: the impression of calm omnipotence. The pharaoh looks down serenely from a great height—imperturbable, immovable, eternal. Such a work is clearly the product of a culture in its prime, when artistic execution and social organization were raised to the pitch of perfection. As a mere display of technique, the statue is remarkable: the ability to transport such a massive block of stone, and then to chip away and polish the surface until all that remains is a perfect image of power. And you can imagine how effective these images were as propaganda, in a time before television or telescreens.

In life, Ramesses was as close as any human can get to complete power. In death, he was worshipped as a god. His name and his face have come down to us from over 3,000 years ago. This statue has outlasted whole kingdoms and countries; and there is a good chance it will persist even when (God forbid) the British Museum is no more. So you might say that, as propaganda, the statue has been an unmitigated success. And yet Ramesses himself, his empire, and his entire culture have all passed into memory, leaving only their stones and their bones. Impressive as the bust undeniably is, it now stands as a sample of Egyptian statuary, to be gawked at by visitors who are impressed but certainly not worshipful.

All wood rots, all iron rusts, and everything human turns to dust. Shelley, upon hearing reports of this very bust of Ramesses II, put this sentiment into famous lines:

And on the pedestal these words appear:
“My name is Ozymandias, king of kings:
Look on my works, ye mighty, and despair!”
Nothing beside remains: round the decay
Of that colossal wreck, boundless and bare,
The lone and level sands stretch far away.

The final irony is that those immortal lines, like Ramesses’s bust, have outlasted their makers and will likely last as long as there are humans who worry about the finitude of life.

If there is any hope of immortality, it is through the communication of our ideas—something demonstrated most poignantly by the Rosetta Stone. That ancient document—an administrative decree about taxes and tithes—now stands in the British Museum as a testament to the ability of different cultures in different places and times to understand one another. In the modern world it has become trendy to agonize about the impossibility of translation and the gulfs that separate different cultural worldviews. But humans have been translating since the beginning of history; and the very fact that we can decipher a long-dead language, written in an archaic script, using another translation of an ancient language written in another archaic script, shows that communication can transcend wide differences of perspective.

Rosetta Stone
Photo includes a reflection of the writer in the glass

I have already spent far too much space describing the treasures of the British Museum. But I cannot leave off without a mention of the Elgin Marbles from the Parthenon.

The Parthenon, as everyone knows, is the most important and iconic ruin of Ancient Greece. Built during Athens’ golden age as a temple to the city’s patron goddess, Athena, it has been both a church and a mosque in its long life. The Ottomans even decided to use the temple to store ammunition—guessing that their enemies, the Venetians, would never dare to fire at such a hallowed edifice. This guess was incorrect; in 1687 a Venetian bomb detonated the ammunition inside, causing a massive explosion that left only the building’s husk intact. Then, around 1800, an art-loving British aristocrat, the Earl of Elgin, in highly dubious circumstances, excavated sculptures and friezes from the ruined Parthenon to decorate his home. But a costly divorce forced him to sell his home and his collection to the British government. As a result, these parts of the Parthenon, in the next chapter of their long and battered history, found their way into the British Museum.

Unsurprisingly, this acquisition is controversial. Imagine if a museum in England had a part of Mount Rushmore. Americans wouldn’t be happy, and neither are the Greeks. The Greek government has been trying to recover the collection since 1983. There are many arguments advanced for sending the marbles back to Greece. The most compelling is the simplest: that the Parthenon is one of the most important cultural monuments in European history, and should be as complete as possible. In any case, the legality of the original transfer has always been questioned: it’s possible that Elgin didn’t have official permission from the Ottoman Empire. In England, public opinion was divided at the time—Lord Byron famously thought it was inexcusable vandalism—and seems to be in favor of returning the collection nowadays. The British Museum is (also unsurprisingly) in favor of keeping the marbles.

For my part, it seems unquestionably just to return the Elgin marbles to Athens. I do admit, however, that I was grateful for the opportunity to see the Parthenon friezes in the British Museum. The display is excellent, allowing the visitor to clearly see the friezes and the statues. If the marbles were inserted back into their original places in the Parthenon (if this is even possible), then they wouldn’t be as clearly visible. And if the marbles were merely displayed in a museum in Athens, then I’m not sure there would be any improvement of presentation. Nevertheless, it does seem that strict justice demands that the marbles be returned.

As for the Elgin Marbles themselves—the friezes, metopes, and pediments that line the walls of one enormous exhibit in the British Museum—what is there to be said? The sculptures are likely the most studied and analyzed works of art in Western history; and not only that, they are perhaps the most influential. Almost from the start these works have defined and illustrated classical taste. Indeed, they have served as such a ubiquitous model for later artists that it is nearly impossible to respond to them as genuine works of art. They are immediately familiar; you feel that you’ve seen it all before, even if this is your first time in the British Museum.

To the modern eye, the Parthenon sculptures can appear cold, austere, and timeless—perfect human forms carved from perfect white marble. It is scandalous to imagine that these frigid sculptures were once painted with gaudy colors; and inconceivable that, once upon a time, these paragons of artistic orthodoxy were innovative and daring works that broke every convention.

A visitor to the British Museum can catch a glimpse of the originality of these works by visiting the Babylonian and Egyptian sections first. Moving on from those precursors to the Greeks, you can see obvious continuities—heroes and gods, mythological beings and legends, religious processions and rituals—but the changes are even more striking. In the Parthenon, we see a new thing in history: a confident belief in the powers of human intelligence and creativity. Unlike the static and rigid bodies of Egyptian pharaohs, sitting straight up and looking straight ahead, here we see bodies twisting, turning, leaping, extending, straining—in other words, we see the human body in motion, propelled by its own force. This is not a society that believes in stable order, but in ceaseless striving.

Parthenon Metope

The new perspective is illustrated most clearly by the metopes depicting the centauromachy: the battle between the human Lapiths and the half-human, half-animal centaurs. In Egyptian mythology, many of the gods were half-animal; and Assyrian palaces were often guarded by the sphinx-like lamassu. In both of these cultures, the natural world, the world of animal life, was seen as a source of power and cosmic order. Yet in the Parthenon the half-animal creatures, the centaurs, are agents of chaos and destruction—creatures who must be conquered and vanquished. For better or for worse, this urge to conquer our own animal nature has been with us ever since.

There are so many more works—thousands and thousands more—that deserve deep contemplation in the museum’s collection, but I will stop here. Yet as I take leave of the British Museum, I want to offer one parting thought.

No institution I have seen better illustrates both the enormous strengths and the limitations of the Enlightenment than the British Museum. And because the Enlightenment is very much still with us, it is vital that we understand these strengths and limitations.

Its strengths are undeniable, especially in the context of history. As compared with what came before it, the conception of humanity and history embodied in the museum is undoubtedly an advance. Europeans began to be interested in non-European cultures. Their sense of ancient history began to extend far beyond Ancient Greece and the tribes of Israel. Instead of focusing on their own country or their own religion, Europeans could conceive of humanity as a whole, with a single origin and a common destiny. The museum also demonstrates the democratic spirit of the Enlightenment. Its knowledge is put on display for all to see and learn from, not sequestered in schools or guarded by jealous academics. Just as the friezes of the Parthenon illustrate a confidence in human intelligence, so does the British Museum exemplify the new, boundless confidence in human reason—the belief that the world is intelligible, that we can communicate our knowledge to anyone, and that our knowledge is not bounded by creed, language, or nation.

But the museum also demonstrates the limitations of this universalist aim. For the idea of a museum that encompasses all of human history relies on the notion that we can create a neutral context in which to understand that history. This underlying notion is clear at a glance: each room—plain, white, full of right angles—is filled with objects wrenched from their original context. Some of this context is restored, but only as information on panels. My question is: can a modern visitor, looking at a bracelet from ancient Egypt, reading about that bracelet in its accompanying caption, really grasp what this bracelet was to the jeweler who created it or the aristocrat who wore it? For comparison, imagine walking into a museum filled with objects from your room, except each object is carefully labeled and sits on its own display. Could any visitor understand what life was like for you?

My point is that there is something inescapably artificial and sterile about the museum. In attempting to create a universal history, a neutral context for information, the museum transforms its objects and imposes a new context. The original meaning of each artifact—how it was used and understood by its creators—is abolished; instead, each artifact becomes a piece of evidence in a specifically Enlightenment story about the growth of humankind.

To put this another way, the Enlightenment attitude fails to come to grips with how our attempts to understand the world transform what we’re trying to understand. When knowledge is seen as impersonal, existing in a neutral context, simply a matter of seeing and describing, then knowledge becomes blind to its own power. And the British Museum is, among many other things, a demonstration of British power: the financial, political, and military means to scour the world and collect its most valuable objects into one location. It is also a demonstration of British intellectual power: the power to understand all of human history, to see truly and to interpret correctly, to escape provincialism into neutral universality.

I need to pause here. I sound as if I am being harshly critical of the British Museum, and indeed I am. But the truth is that my brief visit was staggering. I saw and learned so much in such a short time that I cannot possibly deny that I think the museum is valuable. The reason I level these criticisms at the British Museum is not because I think the intellectual project it represents is bankrupt or futile, but because, with all its flaws and limitations, with all its political and economic underpinnings, it seems to be the best we have yet achieved in humanity’s understanding of itself. I see these challenges not as reasons to despair—any intellectual project will have its limitations—but as spurs to creative solutions.

Review: The Stranger

The Stranger by Albert Camus
My rating: 5 of 5 stars


In Search of Lost Time

The Stranger is a perplexing book: on the surface, the story and writing are simple and straightforward; yet what exactly lies underneath this surface is difficult to decipher. We can all agree that it is a philosophical novel; yet many readers, I suspect, are left unsure what the philosophical lesson is. This isn’t one of Aesop’s fables. Yes, Camus hits you over the head with something; but the hard impact makes it difficult to remember quite what.

After a long and embarrassingly difficult reread (I’d decided to struggle through the original French this time), my initial guess as to the deeper meaning of this book was confirmed: this is a book about time. It is, I think, an allegorical exploration of how our experience of time shapes who we are, what we think, and how we live.

Time is highlighted in the very first sentence: Meursault isn’t quite sure what day his mother passed. Then, he makes another blunder in requesting two days off for the funeral, instead of one—for he forgot that the weekend was coming. How old was his mother when she died? Meursault isn’t sure. Clearly, time is a problem for this fellow. What sort of a man is this, who doesn’t keep track of the days of the week or his mother’s age? What does he think about, then?

For the first half of the book, Meursault is entirely absorbed in the present moment: sensations, desires, fleeting thoughts. He thinks neither of the past nor of the future, but only of what’s right in front of him. This is the root of his apathy. When you are absolutely absorbed in the present, the only things that can occupy your attention are bodily desires and passing fancies. Genuine care or concern, real interest of any kind, is dependent on a past and a future: in our past, we undergo experiences, develop affections, and emotionally invest; and these investments, in turn, shape our actions—we tailor our behavior to bring us closer to the things we care about. Without ever thinking of the past or the future, therefore, our life is a passing dream, a causeless chaos that dances in front of our eyes.

This is reflected in the language Camus uses. As Sartre noted, “The sentences in The Stranger are islands. We tumble from sentence to sentence, from nothingness to nothingness.” By this, Sartre merely wishes to highlight one aspect of Meursault’s thought-process, as mirrored in Camus’s prose: it avoids all causal connection. One thing happens, another thing happens, and then a third thing. This is why Camus so often sounds like Hemingway in this book: the clipped sentences reflect the discontinuous instants of time that pass like disjointed photographs before the eyes of Meursault. There is no making sense of your environment when you are residing in the immediate, for making sense of anything requires abstraction, and abstraction requires memory (how can you abstract a quality from two separate instances if you cannot hold the two instances in your mind at once?).

Now, the really disturbing thing, for me, is how easily Meursault gets along in this condition. He makes friends, he has a job, he even gets a girlfriend; and for quite a long time, at least, he stays out of trouble. Yet the reader is aware that Meursault is, if not a sociopath, at least quite close to being one. So how is he getting along so well? This, I think, is the social critique hidden in this book.

Meursault lives a perfectly conventional life; for a Frenchman living in Algeria during this time, his life could hardly be more ordinary. This is no coincidence; because he’s not interested in or capable of making decisions, Meursault has simply fallen into the path provided for him by his society. In fact, Meursault’s society had pre-fabricated everything a person might need, pre-determining his options to such an extent that he could go through life without ever making a decision. Meursault got along so well without having to make decisions because he was never asked to make one. Every decision was made by convention, every option circumscribed by custom. If Meursault had not been locked up, chances are he would have simply married Marie. Why? Because that’s what one does.

So Camus lays out a problem: custom prevents us from thinking by circumscribing our decisions. But Camus does not only offer a diagnosis; he prescribes a solution. For this, we must return to the subject of time. When Meursault is imprisoned, he is at first unhappy because he is no longer able to satisfy his immediate desires. He has been removed from society and from its resources. This produces a fascinating change in him: instead of being totally absorbed in the present moment, Meursault begins to cultivate a sense of the past. He explores his memories. For the first time, he is able, by pure force of will, to redirect his attention from what is right in front of him to something that is distant and gone. He now has a present and a past; and his psychology develops a concomitant depth. The language gets less jerky towards the end, and more like a proper narrative.

This real breakthrough, however, doesn’t happen until Meursault is forced to contemplate the future; and this, of course, happens when he is sentenced to death. His thoughts are suddenly flung towards some future event—the vanishing of his existence. Thus, the circle opened at the beginning is closed at the end, with a perfect loop: the novel ends with a hope for what will come, just as it began with ignorance and apathy for what has passed. Meursault’s final breakthrough is a complete sense of time—past, present, and future—giving him a fascinating depth and profundity wholly lacking at the beginning of the book.

In order to regain this sense of time, Meursault had to do two things: first, remove himself from the tyranny of custom; second, contemplate his own death. And these two are, you see, related: for custom discourages us from thinking about our mortality. Here we have another opened and closed circle. In the beginning of the book, Meursault goes through the rituals associated with the death of a family member. These rituals are pre-determined and conventional; death is covered with a patina of familiarity—it is made into a routine matter, to be dealt with like paying taxes or organizing a trip to the beach. Meursault has to do nothing except show up. The ceremony he witnesses is more or less the same ceremony given to everyone. (Also note that the ceremony is so scripted that he is later chastised for not properly playing the part.)

At the end of the book, society attempts once again to cover up death—this time, in the form of the chaplain. The chaplain is doing just what the funeral ceremony did: conceal death, this time with a belief about God and repentance and the afterlife. You see, even on death row, society has its conventions for death; death is intentionally obscured with rituals and ceremonies and beliefs.

Meursault’s redemption comes by penetrating this illusion, by throwing off the veil of convention and staring directly at his own end. In this one act, he transcends the tyranny of custom and, for the first time in his life, becomes free. This is the closest I can come to an Aesopian moral: without directly facing our own mortality, we have no impetus to break out of the hamster-wheel of conventional choices. Our lives are pre-arranged and organized, even before we are born; but when death is understood for what it is—a complete and irreversible end—then it spurs us to reject the idle talk and comforting beliefs presented to us, and to live freely.

This is what Camus would have all of us do: project our thoughts towards our own inescapable end, free of all illusions, so as to regain our ability to make real choices, rather than to choose from a pre-determined menu. Only this way will we cease to be strangers to ourselves.

(At least, that is the Heideggerian fable I think he was going for.)

Review: A Study of History

A Study of History, Abridgement of Vols 1-6 by Arnold Joseph Toynbee

My rating: 3 of 5 stars

One of the perennial infirmities of human beings is to ascribe their own failure to forces that are entirely beyond their control.

One day, a couple of years ago, I was walking to Grand Central Station from my office in Manhattan—hurrying, as usual, to catch the 6:25 train in time to get a good seat by the window, which meant arriving by 6:18 at the latest—when, crossing an intersection, I looked down and found a Toynbee tile lying in the middle of the street.

Toynbee tiles are mysterious plaques, pressed into the asphalt in city streets, that have appeared in several cities in the United States. Small (about the size of a tablet) and flat (they’re made of linoleum), nearly all of them bear the same puzzling message: “TOYNBEE IDEA MOVIE ‘2001’ RESURRECT DEAD ON PLANET JUPITER.” Sometimes other little panicky or threatening messages about the Feds, Gays, and Jews, and instructions to “lay tile alone,” are scribbled in the margins. Nobody knows the identity of the tile-maker; but the tiles are clearly the work of a dedicated conspiracy theorist with a tenuous grasp on conventional reality; and considering that they’ve been appearing since the 1980s, all around the US and even in South America, you’ve got to give their maker credit for perseverance.

I was stunned. I had heard of the tiles before, but I never thought I’d see one. I walked across that intersection twice daily; clearly the tile had been recently installed, perhaps just hours before. I wanted to bend down and examine it closely, but the traffic light soon changed and I had to get out of the way. Reluctantly I moved on towards Grand Central; but I felt gripped by curiosity. Who is this mysterious tile-maker? What is he hoping to accomplish? Suddenly I felt an overpowering desire to unlock his message. So instead of jumping on my usual train—I wasn’t going to get a window seat, anyway—I stopped by a bookstore and picked up Toynbee’s Study of History.

Toynbee, for his part, was apparently no lover of mystery, since he tried to explain nothing less than all of human history. The original study is massive: 12 volumes, each one around 500 pages. This abridgement squeezes 3,000 pages into 550; and that only covers the first five books. (Curiously, although the cover of this volume says that it is an abridgement of volumes one through six, it is clear from the table of contents that it only includes one through five. Similarly, though the next volume of the abridgement says it begins with book seven and ends with book ten, it actually begins with book six and ends with book twelve. This seems like a silly mistake.)

The abridgement was done by an English school teacher, D.C. Somervell, apparently just for fun. He did an excellent job, since it was this abridged version that became enormously popular and which is still in print. All this only proves what Toynbee says in the preface, that “the author himself is unlikely to be the best judge of what is and is not an indispensable part of his work.”

As a scholar, Toynbee achieved a level of fame and influence nearly incomprehensible today. His name was dominant in both academe and foreign affairs. In 1947, just after this abridgement of his major work became a best-seller, he was even featured on the cover of Time magazine. This, I might add, is a perverse index of how much our culture has changed since then. It is nearly impossible to imagine this book—a book with no narrative, written in a dry style about an abstract thesis—becoming a best-seller nowadays, and equally impossible to imagine any bookish intellectual on the cover of Time.

But enough about tiles and Toynbee; what about the book?

In A Study of History, Toynbee set out to do what Oswald Spengler attempted in his influential theory of history, The Decline of the West—that is, to explain the rise and fall of human communities. In method and content, the two books are remarkably similar; but this similarity is obscured by a powerful contrast in style. Where Spengler is oracular and prophetic, biting and polemical, literary and witty, Toynbee is mild, modest, careful, and deliberate. Spengler can hardly go a sentence without flying off into metaphor; Toynbee is literal-minded and sober. Toynbee’s main criticism of his German counterpart seems to have been that Spengler was too excitable and fanciful. The English historian seeks to tread the same ground, but with rigor and caution.

Nevertheless, the picture that Toynbee paints, if less colorful, is quite similar in outline to Spengler’s. The two of them seek to divide up humans into self-contained communities (‘cultures’ for Spengler, ‘societies’ for Toynbee); these communities are born, grow, break down, collapse, and pass away according to a certain pattern. Both thinkers see these groups as having a fertile early period and a sterile late period; and they both equate cultural vigor with artistic and intellectual innovation rather than political, economic, or military might.

Naturally, there are significant divergences, too. For one, Toynbee attempts to describe the origin and geographic distribution of societies, something that Spengler glosses over. Toynbee’s famous thesis is that civilizations arise in response to geographic challenge. Some terrains are too comfortable and invite indolence; other terrains are too difficult and exhaust the creative powers of their colonizers. Between these two extremes there is an ideal challenge, one that spurs communities to creative vigor and masterful dominance.

While I applaud Toynbee for the attempt, I must admit that I find this explanation absurd, both logically and empirically. The theory itself is vague because Toynbee does not analyze what he means by a ‘challenging’ environment. How can an environment be rated for ‘difficulty’ in the abstract, irrespective of any given community? A challenge is only challenging for somebody; and what may be difficult for some is easy for others. Further, thinking only about the ‘difficulty’ collapses many different sorts of things—average rainfall and temperature, available flora and fauna, presence of rival communities, and a host of other factors—into one hazy metric.

This metric is then applied retrospectively, in supremely unconvincing fashion. Toynbee explains the dominance of the English colony in North America, for example, as due to the ‘challenging’ climate of New England. He even speculates that the ‘easier’ climate south of the Mason-Dixon line is why the North won the American Civil War. Judgments like these rest on such vague principles that they can hardly be confirmed or refuted; you can never be sure how much Toynbee is ignoring or conflating. In any case, as an explanation it is clearly inadequate, since it ignores several obvious advantages possessed by the English colonists—that England was ascendant while Spain was on the wane, for example.

Now that we know more about the origins of agriculture, we have come to the exact opposite conclusion from Toynbee’s. The communities that developed agriculture did not arise in the most ‘challenging’ environments, but in the areas which had the most advantages—namely, plants and animals that could be easily domesticated. But Toynbee cannot be faulted for the state of archaeology in his day.

The next step in Toynbee’s theory is also vague. The growing society must transfer its field of action from outside to inside itself; that is, the society must begin to challenge itself rather than be challenged by its environment. This internal challenge gives rise to a ‘creative minority’—a group of gifted individuals who innovate in art, science, and religion. These creative individuals always operate by a process of ‘withdraw-and-return’: they leave society for a time, just as Plato’s philosopher left the cave, and then return with their new ideas. The large majority of any given society is an uncreative, inert mass that merely imitates the innovations of the creative minority. The difference between a growing society and either a ‘primitive’ or a degenerating society is that the mass imitates contemporary innovators rather than hallowed ancestors.

Incredibly, Toynbee sees no relation between technological progress or military prowess and a civilization’s vigor. Like Spengler, he measures a culture’s strength by its creative vitality—its art, music, literature, philosophy, and religion. This allows him to see the establishment of the Roman Empire, as Spengler did, not as a demonstration of vitality but as a last desperate attempt to hold on to Hellenic civilization. Toynbee actually places the ‘breakdown’ of Hellenic society (when it lost its cultural vitality) at the onset of the Peloponnesian War, in 431 BCE, and considers all the subsequent history of Hellas and Rome as degeneration.

But why does the creative minority cease to be genuinely creative and petrify into a merely ‘dominant’ minority? This is because, after one creative triumph, they grow too attached to their triumph and cannot adapt to new circumstances; in other words, they rest on their laurels. What’s more, even the genuine innovations of the creative minority may not have their proper effect, since they must operate through old, inadequate, and at times incompatible institutions. Their ideas thus become either perverted in practice or simply not practiced at all, impeding the proper ‘mimesis’ (imitation) by the masses. After the breakdown, the society takes refuge in a universal state (such as the Roman Empire), and then in a universal church (such as the Catholic church). (As with Spengler, Toynbee seems to have the decline and fall of the Roman Empire as his theory’s ur-type.)

To me—and I suspect to many readers—Toynbee’s theories seem to be straightforward consequences of his background. Born into a family of scholars and intellectuals, Toynbee is, like Spengler, naturally inclined to equate civilization with ‘high’ culture, which leads naturally to elitism. Having lived through and been involved in two horrific World Wars, Toynbee was deeply antipathetic to technology and warfare. Nearly everyone hates war, and rightly; but in Toynbee’s theory, war is inevitably a cause or an effect of societal decay—something which is true by definition in his moral worldview, but which doesn’t hold up if we define decay in more neutral terms. The combination of his family background and his hatred of violence turned Toynbee into a kind of atheistic Christian, who believed that love and non-violence conquered all. I cannot fault him ethically; but this is a moral principle and not an accurate depiction of history.

Although the association is not flattering, I cannot help comparing both Toynbee and Spengler to the maker of the Toynbee tiles. Like that lonely crank, wherever he is, these two scholars saw connections where nobody else had before, and propounded their original worldviews in captivating fashion. Unfortunately, coming up with a theory that could explain the rise and fall of every civilization in every epoch seems to be just about as possible as resurrecting the dead on planet Jupiter. But sometimes great things are accomplished when we try to do the impossible; and thanks to this unconquerable challenge, we have two monuments of human intelligence and ambition, works which will last far, far longer than linoleum on asphalt.

Review: The Trial

The Trial by Franz Kafka
My rating: 5 of 5 stars

Back in university, I had a part-time job at a research center. It was nothing glamorous: I conducted surveys over the phone. Some studies were nation-wide, others were only in Long Island. A few were directed towards small businesses. There I would sit in my little half-cubicle, with a headset on, navigating through the survey on a multiple-choice click screen.

During the small business studies, a definite pattern would emerge. I would call, spend a few minutes navigating the badly recorded voice menu, and then reach a secretary. Then the survey instructed me to ask for the president, vice-president, or manager. “Oh, sure,” the receptionist would say, “regarding?” I would explain that I was conducting a study. “Oh…” their voice would trail off, “let me check if he’s here.” Then would follow three to five minutes of being on hold, with the usual soul-sucking on-hold music. Finally, she would pick up: “Sorry, he’s out of the office.” “When will he be back?” would be my next question. “I’m not sure…” “Okay, I’ll call back tomorrow,” I would say, and the call would end.

Now imagine this process repeating again and again. As the study went on, I would be returning calls to dozens of small businesses where the owners were always mysteriously away. I had no choice what to say—it was all in the survey—and no choice whom to call—the computer did that. By the end, I felt like I was getting to know some of these secretaries. They would recognize my voice, and their announcement of the boss’s absence would be given with a strain of annoyance, or exhaustion, or pity. I grew adept at navigating particular voice menus, and came to remember the particular sounds of being on hold at certain businesses. It was straight out of this novel.

When I picked up The Trial, I was expecting it to be great. I had read Kafka’s short stories—many times, actually—and he has long been one of my favorite writers. But by no means did I expect to be so disturbed. Maybe it was because I was groggy, because I hadn’t eaten yet, or because I was on a train surrounded by strangers. But by the time I reached my destination, I was completely unnerved. For a few moments, I even managed to convince myself that this actually was a nightmare. No book could do this.

What follows in this already-too-long review is some interpretation and analysis. But it should be remarked that, whatever conclusions you or I may draw, interpretation is a second-level activity. In Kafka’s own words: “You shouldn’t pay too much attention to people’s opinions. The text cannot be altered, and the various opinions are often no more than an expression of despair over it.” Attempts to understand Kafka should not entail a rationalizing away of his power. This is a constant danger in literary criticism, where the words sit mutely on the page, and passages can be pasted together at the analyst’s behest. But this sense of mastery is mere illusion. If someone were to tell you that Picasso’s Guernica is about the Spanish Civil War, you might appreciate the information; but by no means should this information come between you and the visceral experience of standing in front of the painting. Just so with literature.

To repeat something that I once remarked of Dostoyevsky, Kafka is a great writer, but a bad novelist. His books do not have even remotely believable characters, character development, or a plot in any traditional sense. Placing The Trial alongside Jane Eyre or Lolita will make this abundantly clear. Rather, Kafka’s stories are somewhere in-between dream and allegory. The symbolism is heavy, and Kafka seems more intent on establishing a particular feeling than on telling a story. The characters are tools, not people.

So the question naturally arises: what does the story represent? Like any good work of art, any strict, one-sided reading is insufficient. Great art is multivalent—it means different things to different people. The Trial may have meant only one thing to Kafka (I doubt it), but once a book (or symphony, or painting) is out in the world, all bets are off.

The broadest possible interpretation of The Trial is as an allegory of life. And isn’t this exactly what happens? You wake up one day, someone announces that you’re alive. But no one seems to be able to tell you why or how or what for. You don’t know when it will end or what you should do about it. You try to ignore the question, but the more you evade it, the more it comes back to haunt you. You ask your friends for advice. They tell you that they don’t really know, but you’d better hire a lawyer. Then you die like a dog.

Another interpretation is based on Freud. Extraordinary feelings of guilt are characteristic of Kafka’s work, and several of his short stories (“The Judgment,” “The Metamorphosis”) portray Kafka’s own unhealthy relationship with his father. Moreover, the nightmarish, nonsensical quality of his books, and his fascination with symbols and allegories, cannot help but remind one of Freud’s work on dreams. If I were a proper Freudian, I would say that The Trial is an expression of Kafka’s extraordinary guilt at his patricidal fantasies.

A different take would group this book along with Joseph Heller’s Catch-22 as a satire of bureaucracy. And, in the right light, parts of this book are hilarious. Kafka’s humor is right on. He perfectly captures the inefficiency of organizations in helping you, but their horrifying efficiency when screwing you over. And as my experience in phone surveys goes to show, this is more relevant than ever.

If we dip into Kafka’s biography, we can read this book as a depiction of the anguish caused by his relationship with Felice Bauer. (For those who don’t know, Kafka was engaged to her twice, and twice broke it off. Imagine dating Kafka. Poor woman.) This would explain the odd current of sexuality that undergirds this novel.

Here is one idea that I’ve been playing with. I can’t help but see The Trial as a response to Dostoyevsky’s Crime and Punishment. As their names suggest, they deal with similar themes: guilt, depression, alienation, the legal system, etc. But they couldn’t end more differently. Mulling this over, I was considering whether this had anything to do with the respective faiths of their authors. Dostoyevsky found Jesus during his imprisonment, and never turned back. His novels, however dark, always offer a glimmer of the hope of salvation. Kafka’s universe, on the other hand, is proverbially devoid of hope. Kafka was from a Jewish family, and was interested in Judaism throughout his life. Is this book Crime and Punishment without a Messiah?

I can go on and on, but I’ll leave it at that. There can be no one answer, and the book will mean something different to all who read it. And what does that say about Kafka?

Review: The Righteous Mind

The Righteous Mind: Why Good People are Divided by Politics and Religion by Jonathan Haidt

My rating: 4 of 5 stars

I expected this book to be good, but I did not expect it to be so rich in ideas and dense with information. Haidt covers far more territory than the subtitle of the book implies. Not only is he attempting to explain why people are morally tribal, but also the way morality works in the human brain, the evolutionary origins of moral feelings, the role of moral psychology in the history of civilization, the origin and function of religion, and how we can apply all this information to the modern political situation—among much else along the way.

Haidt begins with the roles of intuition and reasoning in making moral judgments. He contends that our moral reasoning—the reasons we give for our moral judgments—consists of mere post hoc rationalizations for our moral intuitions. We intuitively condemn or praise an action, and then search for reasons to justify our intuitive reaction.

He bases his argument on the results of experiments in which the subjects were told a story—usually involving a taboo violation of some kind, such as incest—and then asked whether the story involved any moral breach or not. These stories were carefully crafted so as not to involve harm to anyone (such as a brother and sister having sex in a lonely cabin and never telling anyone, and using contraception to prevent the risk of pregnancy).

Almost inevitably he found the same result: people would condemn the action, but then struggle to find coherent reasons to do so. To use Haidt’s metaphor, our intuition is like a client in a court case, and our reasoning is the lawyer: its job is to win the case for intuition, not to find the truth.

This is hardly a new idea. Haidt’s position was summed up several hundred years before he was born, by Benjamin Franklin: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.” An intuitionist view of morality was also put forward by David Hume and Adam Smith. But Haidt’s account is novel for the evolutionary logic behind his argument and the empirical research used to back his claims. This is exemplified in his work on moral axes.

Our moral intuition is not one unified axis from right to wrong. There are, rather, six independent axes: harm, proportionality, equality, loyalty, authority, and purity. In other words, actions can be condemned for a variety of reasons: for harming others, for cheating others, for oppressing others, for betraying one’s group, for disrespecting authority, and for desecrating sacred objects, beings, or places.

These axes of morality arose because of evolutionary pressure. Humans who cared for their offspring and their families survived better, as did humans who had a greater sensitivity to being cheated by freeloaders (proportionality) and who resisted abusive alpha males trying to exploit them (equality). Similarly, humans who were loyal to their group and who respected a power hierarchy outperformed less loyal and less compliant humans, because they created more coherent groups (this explanation relies on group selection theory; see below). And lastly, our sense of purity and desecration—usually linked to religious and superstitious notions—arose out of our drive to avoid physical contamination (for example, pork was morally prohibited because it was unsafe to eat).

Most people in the world use all six of these axes in their moral systems. It is only in the West—particularly in the leftist West—where we focus mainly on the first three: harm, proportionality, and equality. Indeed, one of Haidt’s most interesting points is that the right tends to be more successful in elections because it appeals to a broader moral palate: it appeals to more “moral receptors” in the brain than left-wing morality (which primarily appeals to the axis of care and harm), and is thus more persuasive.

This brings us to Part III of the book, by far the most speculative.

Haidt begins with a defense of group selection: the theory that evolution can operate on the level of groups competing against one another, rather than on individuals. This may sound innocuous, but it is actually a highly controversial topic in biology, as Haidt himself acknowledges. Haidt thinks that group selection is needed to explain the “groupishness” displayed by humans—our ability to put aside personal interest in favor of our groups—and makes a case for the possibility of group selection occurring during the last 10,000 or so years of our history. He makes the theory seem plausible (to a layperson like me), but I think the topic is too complex to be covered in one short chapter.

True or not, Haidt uses the theory of group selection to account for what he calls the “hiveish” behavior that humans sometimes display. Why are soldiers willing to sacrifice themselves for their brethren? Why do people like to take ecstasy and rave? Why do we waste so much money and energy going to football games and cheering for our teams? All these behaviors are bizarre when you see humans as fundamentally self-seeking; they only make sense, Haidt argues, if humans possess the ability to transcend their usual self-seeking perspective and identify themselves fully with a group. Activating this self-transcendence requires special circumstances, and it cannot be activated indefinitely; but it produces powerful effects that can permanently alter a person’s perspective.

Haidt then uses group selection and this idea of a “hive-switch” to explain religion. Religions are not ultimately about beliefs, he says, even though religions necessarily involve supernatural beliefs of some kind. Rather, the social functions of religions are primarily to bind groups together. This conclusion is straight out of Durkheim. Haidt’s innovation (well, the credit should probably go to David Sloan Wilson, who wrote Darwin’s Cathedral) is to combine Durkheim’s social explanation of religion with a group-selection theory and a plausible evolutionary story (too long to relate here).

As for empirical support, Haidt cites a historical study of communes, which found that religious communes survived much longer than their secular counterparts, thus suggesting that religions substantially contribute to social cohesion and stability. He also cites several studies showing that religious people tend to be more altruistic and generous than their atheistic peers; and this is apparently unaffected by creed or dogma, depending only on attendance rates of religious services. Indeed, for someone who describes himself as an atheist, Haidt is remarkably positive on the subject of religion; he sees religions as valuable institutions that promote the moral level and stability of a society.

The book ends with a proposed explanation of the political spectrum—people genetically predisposed to derive pleasure from novelty and to be less sensitive to threats become left-wing, and vice versa (the existence of libertarians isn’t explained, and perhaps can’t be)—and finally with an application of the book’s theses to the political arena.

Since we are predisposed to be “groupish” (to display strong loyalty towards our own group) and to be terrible at questioning our own beliefs (since our intuitions direct our reasoning), we should expect to be blind to the arguments of our political adversaries and to regard them as evil. But the reality, Haidt argues, is that each side possesses a valuable perspective, and we need to have civil debate in order to reach reasonable compromises. Pretty thrilling stuff.

Well, there is my summary of the book. As you can see, for such a short book, written for a popular audience, The Righteous Mind is impressively vast in scope. Haidt must come to grips with philosophy, politics, sociology, anthropology, psychology, biology, history—from Hume, to Darwin, to Durkheim—incorporating mountains of empirical evidence and several distinct intellectual traditions into one coherent, readable whole. I was constantly impressed by the performance. But for all that, I had the constant, nagging feeling that Haidt was intentionally playing the devil’s advocate.

Haidt argues that our moral intuition guides our moral reasoning, in a book that rationally explores our moral judgments and aims to convince its readers through reason. The very existence of his book undermines his uni-directional model of intuitions to reasoning. Being reasonable is not easy; but we can take steps to approach arguments more rationally. One of these steps is to summarize another person’s argument before critiquing it, which is what I’ve done in this review.

He argues that religions are not primarily about beliefs but about group fitness; but his evolutionary explanation of religion would be rejected by those who deny evolution on religious grounds; and even if specific beliefs don’t influence altruistic behavior, they certainly do influence which groups (homosexuals, biologists) are shunned. Haidt also argues that religions are valuable because of their ability to promote group cohesion; but if religions necessarily involve irrational beliefs, as Haidt admits, is it really wise to base a moral order on religious notions? If religions contribute to the social order by encouraging people to sacrifice their best interest for illogical reasons—such as in the commune example—should they really be praised?

The internal tension continues. Haidt argues that conservatives have an advantage in elections because they appeal to a broader moral palate, not just care and harm; and he argues that conservatives are valuable because their broad morality makes them more sensitive to disturbances of the social order. Religious conservative groups which enforce loyalty and obedience are more cohesive and durable than secular groups that value tolerance. But Haidt himself endorses utilitarianism (based solely on the harm axis) and ends the book with a plea for moral tolerance. Again, the existence of Haidt’s book presupposes secular tolerance, which makes his stance confusing.

Haidt’s arguments with regard to broad morality come dangerously close to the so-called ‘naturalistic fallacy’: equating what is natural with what is good. He compares moral axes to taste receptors; a morality that appeals to only one axis will be unsuccessful, just like a cuisine that appeals to only one taste receptor will fail to satisfy. But this analogy leads directly to a counter-point: we know that we have evolved to love sugar and salt, but this preference is no longer adaptive, indeed it is unhealthy; and it is equally possible that our moral environment has changed so much that our moral senses are no longer adaptive.

In any case, I think that Haidt’s conclusions about leftist morality are incorrect. Haidt asserts that progressive morality rests primarily on the axis of care and harm, and that loyalty, authority, and purity are actively rejected by liberals (“liberals” in the American sense, as leftist). But this is implausible. Liberals can be extremely preoccupied with loyalty—just ask any Bernie Sanders supporter. The difference is not that liberals don’t care about loyalty, but that they tend to be loyal to different types of groups—parties and ideologies rather than countries. And the psychology of purity and desecration is undoubtedly involved in the left’s concern with racism, sexism, homophobia, or privilege (accusing someone of speaking from privilege creates a moral taint as severe as advocating sodomy does in other circles).

I think Haidt’s conclusion is rather an artifact of the types of questions that he asks in his surveys to measure loyalty and purity. Saying the pledge of allegiance and going to church are not the only manifestations of these impulses.

For my part, I think the main difference between left-wing and right-wing morality is the attitude towards authority: leftists are skeptical of authority, while conservatives are skeptical of equality. This is hardly a new conclusion; but it does contradict Haidt’s argument that conservatives think of morality more broadly. And considering that a more secular and tolerant morality has steadily increased in popularity over the last 300 years, it seems prima facie implausible to argue that this way of thinking is intrinsically unappealing to the human brain. If we want to explain why Republicans win so many elections, I think we cannot do it using psychology alone.

The internal tensions of this book can make it frustrating to read, even if it is consistently fascinating. It seems that Haidt had a definite political purpose in writing the book, aiming to make liberals more open to conservative arguments; but in de-emphasizing so completely the value of reason and truth—in moral judgments, in politics, and in religion—he gets twisted into contradictions and risks undermining his entire project.

Be that as it may, I think his research is extremely valuable. Like him, I think it is vital that we understand how morality works socially and psychologically. What is natural is not necessarily what is right; but in order to achieve what is right, it helps to know what we’re working with.

Review: Middlemarch

Middlemarch by George Eliot

My rating: 5 of 5 stars

Some gentlemen have made an amazing figure in literature by general discontent with the universe as a trap of dullness into which their great souls have fallen by mistake; but the sense of a stupendous self and an insignificant world may have its consolations.

I did not think a book like this was possible. A work of fiction with a thesis statement, a narrator who analyzes more often than describes, a morality play and an existential drama, and all this in the context of a realistic, historical novel—such a combination seems unwieldy and pretentious, to say the least. Yet Middlemarch never struck me as over-reaching or overly ambitious. Eliot not only manages to make this piece of universal art seem plausible, but her mastery is so perfect that the result is as natural and inevitable as a lullaby.

Eliot begins her story with a question: What would happen if a woman with the spiritual ardor of St. Theresa were born in 19th century rural England? This woman is Dorothea; and this book, although it includes dozens of characters, is her story. But Dorothea—like the rest of the people who populate her Middlemarch—is not only a character; she is a test-subject in a massive thought experiment, an examination intended to answer several questions:

To what extent is an individual responsible for her success or failure? How exactly does the social environment act upon the individual—in daily words and deeds—to aid or impede her potential? And how, in turn, does the potent individual act to alter her environment? What does it mean to be a failure, and what does it mean to be successful? And in the absence of a coherent social faith, as Christianity receded, what does it mean to be good?

As in any social experiment, we must have an experimental group, in the form of Dorothea, as well as a control group, in the form of Lydgate. The two are alike in their ambition. Lydgate’s ambition is for knowledge. He is a country doctor, but he longs to do important medical research, to pioneer new methods of treatment, and to solve the mysteries of sickness, death, and the human frame. Dorothea’s ambitions are more vague and spiritual. She is full of passionate longing, a hunger for something which would give coherence and meaning to her life, an object to which she could dedicate herself body and soul.

Lydgate begins with many advantages. For one, his mission is not a vague hope, but a concrete goal, the path to which he can chart and see clearly. Even more important, he is a man from a respectable family. Yes, there is some prejudice against him in Middlemarch, for being an outsider, educated abroad and with strange notions; but this barrier can hardly be compared with those which faced even the most privileged woman in Middlemarch. For her part, Dorothea is born into a respectable family with adequate means. But her sex closes so many paths to action that the only important decision she can make is whom she will marry.

Dorothea’s choice of a husband sets the tone for the rest of her story. Faced with two options—the young, handsome, and rich Sir James Chettam, and the dry, old scholar, Mr. Casaubon—she surprises and disappoints nearly everyone by choosing the latter. Dorothea does this because she knows herself and she trusts herself; she is not afraid of being judged, and she does not care about status or wealth.

The first important decision Lydgate makes is who to recommend as chaplain for the new hospital, and this, too, sets the tone for the rest of his story. His choice is between Mr. Tyke, a disagreeable, doctrinaire puritan, and Mr. Farebrother, his friend and an honest, humane, and intelligent man. Lydgate’s inclination is towards the latter, but under pressure from Bulstrode, the rich financier of the new hospital, Lydgate chooses Mr. Tyke. In other words, he distinctly does not trust himself, and he allows his intuition of right and wrong to be swayed by public opinion and self-interest.

Dorothea’s choice soon turns out to be disastrous, while Lydgate’s works in his favor, as Bulstrode puts him in charge of the new hospital. Yet Eliot shows us that Dorothea’s choice was ultimately right and Lydgate’s ultimately wrong. For we cannot know beforehand how our choices will turn out; the future is hidden, and we must dedicate ourselves to both people and projects in ignorance. The determining factor is not whether a choice turns out well for you, but whether it was motivated by brave resolve or cowardly capitulation. You might say that this is the existentialist theme of Eliot’s novel: the necessity to act boldly in the absence of knowledge.

Dorothea’s act was bold and courageous; and even though Mr. Casaubon is soon revealed to be a wearisome, passionless, and selfish academic, her choice was nonetheless right, because she did her best to act authentically, fully in accordance with her moral intuition. Lydgate’s choice, even though it benefited him, established a pattern that ends in his bitter disappointment. He allowed himself to yield to circumstances; he allowed his self-interest to overrule his moral intuition: and this dooms him.

(Eliot, I should mention, seems to prefer what philosophers call an intuitionist view of moral action: that is, we must obey our conscience. Time and again Eliot shows how immoral acts are made to appear justified through conscious reasoning, and how hypocrites use religious or social ideologies to quiet their uneasy inner voice: “when gratitude becomes a matter of reasoning there are many ways of escaping from its bonds.”)

Eliot’s view of success or failure stems from this exploration of choice: success means being true to one’s moral intuition, and failure means betraying it. Dorothea continues to trust herself and to choose boldly, without regard for her worldly well-being or for conventional opinion. Lydgate, meanwhile, keeps buckling under pressure. He marries almost by accident, breaking a strong resolution he made beforehand, and then goes on to betray, one after the other, every other strong resolution of his, until his life’s plan has been lost entirely, chipped away by a thousand small circumstances.

Dorothea ends up on a lower social level than she started, married to an eccentric man of questionable blood, gossiped about in town and widely seen as a social failure. Lydgate, meanwhile, becomes “successful”; his beautiful wife is universally admired, and his practice is profitable and popular. But this conventional judgment means nothing; for Dorothea can live in good conscience, while Lydgate cannot.

But is success, for Eliot, so entirely dependent on intention, and so entirely divorced from results? Not exactly. For the person who is true to her moral intuition—even if she fails in her plans, even if she falls far short of her potential, and even if she is disgraced in the eyes of society—still exerts a beneficent effect on her surroundings.

Anyone who selflessly and boldly follows her moral intuition encourages everyone she meets, however subtly, to follow this example: as Eliot says of Dorothea, “the effect of her being on those around her was incalculably diffusive.” Eliot shows this most touchingly in the meeting between Dorothea and Rosamond. Although Rosamond is vain, selfish, and superficial, the presence of Dorothea prompts her to one of the only unselfish acts of her life.

From reading this review, you might get the idea that this book is merely a philosophical exercise. But Eliot’s most miraculous accomplishment is to combine this analysis with an immaculate novel. The portrait she gives of Middlemarch is so fully realized, without any hint of strain or artifice, that the reader feels that he has bought a cottage there himself.

Normally at this point in a review, I add some criticisms; but I cannot think of a single bad thing to say about this book. Eliot’s command of dialogue and characterization, of pacing and plot-development, cannot be faulted. She moves effortlessly from scene to scene, from storyline to storyline, showing how the private is interwoven with the public, the social with the psychological, the economical with the amorous—how our vices are implicated in our virtues, how our good intentions are shot through with ulterior motives, how our hopes and fears are mixed up with our routine reality—never simplifying the ambiguities of perspective or collapsing the many layers of meaning—and yet she is always in perfect command of her mountains of material.

A host of minor characters marches through these pages, each one individualized, many of them charming, some hilarious, a few irritating, and all of them vividly real. I could see parts of myself in every one of them, from the petulant Fred Vincey, to the blunt Mary Garth, to the frigid Mr. Casaubon, to the muddle-headed Mr. Brooke—almost Dickensian in his comic exaggeration—to every gossip, loony, miser, dissolute, profligate, and tender heart—the list cannot be finished.

Perhaps Eliot’s most astounding feat is to combine the aesthetic with the ethical and the analytic, in such a way that you can no longer view them separately. Eliot’s masterpiece charms as it preaches; it is both beautiful and wise; it pulls on the heart while engaging the head; and it is, in the words of Virginia Woolf, “one of the few English novels written for grown-up people.”

Review: Romeo and Juliet

Romeo and Juliet by William Shakespeare

My rating: 4 of 5 stars

ROMEO: Peace, peace, Mercutio, peace!
Thou talk’st of nothing.

MERCUTIO: True, I talk of dreams,
Which are the children of an idle brain,
Begot of nothing but vain fantasy,
Which is as thin of substance as the air
And more inconstant than the wind.

My memories from my high school literature classes are largely a blank. Books held no interest for me. I spent one year skipping my classes completely. And when I did drag myself to class, I almost never did the reading. A quick look through the CliffsNotes the night before was usually enough to pass the exam—which inevitably consisted of a bunch of multiple-choice questions about plot details, and short-answer questions of ‘analysis’ that could easily be fudged by some clever-sounding nonsense.

We occasionally ‘acted out’ plays in class. This was normally a cue to space out while my classmates labored through Shakespeare’s language, and hope the teacher didn’t call on me. I liked to daydream about videogames and action movies. Shakespeare, I thought, was stuffy, boring nonsense, hopelessly cliché and old-fashioned. But despite my apathy, one moment of Romeo and Juliet did manage to worm its way into my memory. This was Mercutio’s enormous, phantasmagoric monologue about Queen Mab:

She is the fairies’ midwife, and she comes / In shape no bigger than an agate stone / On the forefinger of an alderman, / Drawn with a team of little atomi / Over men’s noses as they lie asleep. / Her chariot is an empty hazelnut / Made by the joiner squirrel or old grub, / Time out o’ mind the fairies’ coachmakers; / Her wagon-spokes made of long spinners’ legs; / The cover of the wings of grasshoppers, / Her traces of the smallest spider web, / Her collars of the moonshine’s watery beams, / Her whip of cricket’s bone, her lash of film, / Her waggoner, a small grey-coated gnat, / Not half so big as a round little worm / Prick’d from the lazy finger of a maid; / And in this state she gallops night by night / Through lovers’ brains, and then they dream of love…

The speech goes on much further, describing how the Queen “gallops o’er a courtier’s nose” and “driveth o’er a soldier’s neck,” filling their dreams with vain fantasies. I am sure that I didn’t understand even half of it; what is an agate stone, an alderman, or an atomi? But the speech is so exuberant, and interrupted the play’s action so pointlessly (or so it seemed), that I couldn’t help being interested. Yes, I was actually interested in Shakespeare for a moment, and found myself wondering what this fairy queen, so decorously bedecked, had to do with this ridiculous story of love.

I admit that, even now, I find it hard to love this play. It has become such a ubiquitous cultural reference-point that reading it is rather like seeing the Mona Lisa in person—seeing an icon that is already so relentlessly seen that it is almost impossible to unsee and see afresh. But this is hardly the play’s fault, or Shakespeare’s. Indeed, it is a mark of supreme merit that we can hardly speak of the passions of romantic love without these two lovers coming to mind; and, though we laugh at these outbursts of adolescent passion in our more cynical moments, there is hardly anything more simple and sublime in love poetry than Juliet’s declaration:

My bounty is as boundless as the sea,
My love as deep; the more I give to thee,
The more I have, for both are infinite.

A few months ago, I was given a bilingual copy of this book, in English and Italian, by a thoughtful friend who traveled to Verona. I myself was lucky enough to have gone to Verona back when I was in high school, the very same year that I was skipping all my English classes.

I remember getting off the bus, still jetlagged and dazed, but feeling elated and happy on a sunny winter’s day. I looked at the stony ruin of the Verona Arena and thought of gladiators wielding tridents and swords. Back then I even knew some Italian—long since forgotten, from lack of both interest and practice—which I was learning in school. So it seems a fitting testament to my misspent youth to quote from this most romantic of plays in that most romantic of languages:

Oh, Romeo, Romeo, perché sei tu Romeo?
Rinnega tuo padre e rifiuta il tuo nome,
o, se non vuoi, giurami solo amore,
e non sarò più una Capuleti.

View all my reviews

Review: Institutes of the Christian Religion

The Institutes of the Christian Religion by John Calvin

My rating: 3 of 5 stars

Many wicked lies get mixed up with the tiny particles of truth in the writings of these philosophers.

I am here writing a review of John Calvin’s most famous book, but I can’t say I’ve actually read it. I have read an abridged version, one that preserves about 15% of the original. This is still a fair amount, considering that the unabridged version runs to well over 1,000 pages; but there is so much I missed that I feel a bit self-conscious writing a review.

John Calvin is arguably the most important Protestant theologian in history. Karl Barth once called Hegel the ‘Protestant Aquinas’, but the title seems more apt for Calvin, whose Institutes put into systematic form the new theology and dogma of the budding faith. Calvin begins with knowledge of God, then moves on to knowledge of Christ, the Christian life, justification by faith, prayer, election and predestination, the church and the sacraments, and much more along the way.

Calvin was a lawyer by training, and it shows in his style. Unlike Aquinas, who made careful arguments and addressed objections using logic, Calvin argues primarily by quoting and interpreting scripture, much as a lawyer might argue from legal precedent. You can also see this legal background in Calvin’s combative tone; he attacks and defends with all the cunning of a professional debater, and will use any rhetorical device available to win his case.

The two biggest theological influences on Calvin, it seems, were St. Paul and St. Augustine, both of whom were rather preoccupied with evil and sin. The gentleness of the Gospels seems totally absent from Calvin’s worldview. Perhaps this is due as much to his personality as to his influences; he struck me as rather saturnine and bitter, a man disappointed with the world. His mode of argument and cutting tone—treating all his interpretations of the Bible as self-evident and obvious, and his enemies as wicked deceivers—also made him, for me, a rather authoritarian guide through theology. And his zeal was manifested in deed as well as word. It was Calvin, after all, who oversaw the burning of Michael Servetus, a Unitarian who believed in adult baptism.

Besides being a powerful thinker, Calvin was a powerful stylist. This book, in the French translation, remains one of the most influential works of French prose. He can swell into ecstasies as he describes the goodness of God, the mercy of Christ, and the bliss that awaits the elect in the next life. At other times, he can rain down fire and brimstone on sinners and reprobates, accusing humanity of universal sin and condemning human nature itself. One of his most characteristic moods is a powerful disgust with sin, which he sees as an inescapable part of earthly life, a pervasive filth that clings to everything. This quote gives a taste of his style:

Even babies bring their condemnation with them from their mother’s womb; they suffer for their own imperfection and no one else’s. Although they have not yet produced the fruits of sin, they have the seed within. Their whole nature is like a seedbed of sin and so must be hateful and repugnant to God.

The most controversial part of this book is the section on predestination. Before even the creation of the world, God knew who would be saved and who would be damned. Our salvation is not ultimately due to any choices we make; we are entirely helpless, completely dependent on God’s grace. By ourselves, we earn and deserve nothing. Human nature can take no credit for goodness. The reason why some are saved and others damned is entirely mysterious. You can never know for sure if you are among God’s elect, but there are certain signs that give hope.

One thing I do admire about Calvin’s argument for predestination is that he achieves a brutal kind of consistency. God is all-powerful, and thus responsible for everything that happens; he is all-knowing, and so was aware of what would happen when he created the world; and he is infinitely good, so all goodness resides in him and not in us.

The only problem with this doctrine is that, when combined with the doctrine of heaven and hell, it makes God seem monstrously unjust. A God who creates a world in which the majority of its inhabitants will be inescapably condemned to everlasting torment is even worse than a man who breeds puppies just to throw them in the fire. Calvin makes much of the “inscrutability of God’s judgment”; but this is as much to say that you should believe him even though what he’s saying doesn’t make sense.

Calvin perhaps can’t be faulted too much for reaching this bleak conclusion. Theologians have been trying for a long time to square the attributes of God—omnipotence, omniscience, omnibenevolence—with the qualities we observe in the world and the doctrine of heaven and hell. Everlasting punishment can only appear fair if the sinner brought it upon himself with his own free action (and even then, it seems like a stretch to call everlasting torment “justice”).

But how can free will exist in a universe created by an all-powerful and all-knowing God? If God knew exactly what would happen when he created the universe, then he is ultimately responsible for everything that happens, including the existence of sinners; and thus we get the same absurdity of people being punished for things they were destined to do.

For my part, I find this question of predestination and punishment extremely interesting, since I think we will have to face similar paradoxes as we learn to explain human behavior. As our belief in free will is eroded by increasing knowledge of psychology and sociology, how will our justice system change?

In any case, I’m glad I read the abridged version, since I can’t imagine pushing myself through more than 1,000 pages of this book.

View all my reviews

Review: Phenomenology of Spirit

The Phenomenology of Mind by Georg Wilhelm Friedrich Hegel

My rating: 4 of 5 stars

Georg Wilhelm Friedrich Hegel is easily the most controversial of the canonical philosophers. Alternately revered and reviled, worshiped or scorned, he is a thinker whose conclusions are almost universally rejected and yet whose influence is impossible to escape. Like Herodotus, he is either considered to be the Father of History or the Father of Lies. Depending on who you ask, Hegel is the capstone of the grand Western attempt to explain the world through reason, or the commencement of a misguided stream of metaphysical nonsense which has only grown since.

A great deal of this controversy is caused by Hegel’s obscurity, which is proverbial. His writing is a great inky cloud of abstractions, a bewildering mixture of the pedantic and the mystic, a mass of vague mysteries uttered in technical jargon. This obscurity has made Hegel an academic field unto himself. There is hardly anything you can say about Hegel’s ideas that cannot be contested, which leads to the odd situation we see demonstrated in most reviews of his works, wherein people opine positively and negatively without venturing to summarize what Hegel is actually saying. Some people seem to read Hegel with the attitude of a pious Christian hearing a sermon in another language, believing and revering without understanding; others conclude that Hegel’s language plays the part of a screen in a magician’s act, concealing cheap tricks under a mysterious veil.

For my part, I find it unsatisfactory either to dismiss or to admire Hegel without making a serious attempt to understand him. The proper attitude toward any canonical thinker is respect tinged with skepticism: respect for influence and originality, skepticism towards conclusions. That being said, most people, when confronted with Hegel’s style, will incline either towards the deifying or the despising stance. My inclination is certainly towards the latter. He is immensely frustrating to read, not to mention aggravating to review, since I can hardly venture to say anything about Hegel without risking the accusation of having fundamentally misunderstood him. Well, so be it.

The Phenomenology of Spirit was Hegel’s first published book, and it is widely considered his masterpiece. It is a history of consciousness. Hegel attempts to trace all of the steps that consciousness must go through—Consciousness, Self-Consciousness, Reason, Spirit, and Religion—before it can arrive at the point of fully adequate knowledge (Absolute Knowledge). Nobody had ever attempted anything similar, and even today this project seems ludicrously ambitious. Not only is the subject original, but Hegel also puts forward a new method of philosophy, the dialectical method. In other words, he is trying to do something no one had ever thought of doing before, using a way of thinking no one had thought of using before.

The Phenomenology begins with its justly famous Preface, which was written after the rest of the book was completed. This Preface alone is an important work, and is sometimes printed separately. Since it is easily the most lucid and eloquent section of the book, I would recommend it to those with even a passing interest in philosophy. This is where Hegel outlines his dialectical method.

The dialectical method is a new type of logic, meant to replace deductive reasoning. Ever since Aristotle, philosophers have mainly relied on deductive arguments. The most famous example is the syllogism (All men are mortal, Socrates is a man, etc.). Deduction received renewed emphasis with Descartes, who thought that mathematics (which is deductive) is the most certain form of knowledge, and that philosophy should emulate this certainty.

The problem with syllogisms and proofs, Hegel thought, is that they divorce content from form. Deductive frameworks are formulaic; different propositions (all pigs are animals, all apples are fruit) can be slotted into the framework indifferently, and still produce an internally consistent argument. Even empirically false propositions can be used (all apples are pineapples), and the argument may still be logically correct, even if it fails to align with reality. In other words, the organization of argument is something independent of the order of the world. In the generation before Hegel, Kant took this even further, arguing that our perception and our logic fundamentally shape the world as it appears to us, meaning that pure reason can never tell us anything about reality in itself.

Hegel found this unsatisfactory. In the words of Frederick Copleston, he was a firm believer in the equivalence of content and form. Every notion takes a form in experience; and every formula for knowledge—whether syllogistic, mathematical, or Kantian—alters the content by imposing upon it a foreign form. All attempts to separate content from form, or vice versa, therefore do an injustice to the material; the two are inseparable.

Traditional logic has one further weakness. It conceives of the truth as a static proposition, an unchanging conclusion derived from unchanging premises. But this fails to do justice to the nature of knowledge. Our search to know the truth evolves through a historical process, adopting and discarding different modes of thought in its restless search to grasp reality. Unlike in a deductive process, where incorrect premises will lead to incorrect conclusions, we often begin with an incorrect idea and then, through trial and error, eventually adopt the correct one.

Deductive reasoning not only mischaracterizes the historical growth of knowledge; it is also unable to deal with the changing nature of reality itself. The world we know is constantly evolving, shifting, coming into being and passing away. No static formula or analysis—Newton’s equations or Kant’s metaphysics, for example—could possibly describe reality adequately. To put this another way, traditional logic is mechanistic; it conceives of reality as a giant machine with moving, interlocking parts, and of knowledge as a sort of blueprint or diagram of the machine. Hegel prefers the organic metaphor.

To use Hegel’s own example, imagine that we are trying to describe an oak tree. Traditional logic might take the mature tree, divide it into anatomical sections that correspond with those of other trees, and end with a description in general terms of a static tree. Hegel’s method, by contrast, would begin with the acorn, and observe the different stages it passes through in its growth to maturity; and the terms of the description, instead of being taken from general anatomic descriptions of trees, would emerge of necessity from the observation of the growing tree itself. The final description would include every stage of the tree, and would be written in terms specific to the tree.

This is only an example. Hegel does not intend for his method to be used by biologists. What the philosopher observes is, rather, Mind or Spirit. Here we run into a famous ambiguity, because the German word Geist cannot be comfortably translated as either “mind” or “spirit.” The edition I used translates the title as the Phenomenology of Mind, whereas later translations have called it The Phenomenology of Spirit. This ambiguity is not trivial. The nature of mind—how it comes to know itself and the world, how it is related to the material world—is a traditional inquiry in philosophy, whereas spirit is something quasi-religious or mystical in flavor. For my part, I agree with Peter Singer in thinking that we ought to try to use “mind,” since it leaves Hegel’s meaning more open, while using “spirit” pre-judges Hegel’s intent.

Hegel is an absolute idealist. All reality is mental (or spiritual), and the history of mind consists in its gradual realization of this momentous fact: that mind is reality. As the famous formula goes, the rational is the real and the real is the rational. Hegel’s project in the Phenomenology is to trace the process, using his dialectical method, by which mind passes from ignorance of its true nature to the realization that it comprises the fabric of everything it knows.

How does this history unfold? Many have described the dialectical process as consisting of thesis, antithesis, and synthesis. The problem with this characterization is that Hegel never used those terms; and, as we’ve seen, he disliked logical formulas. Nevertheless, the description does manage to give a taste of Hegel’s procedure. Mind, he thought, evolves through stages, which he calls “moments.” At each of these moments, mind takes a specific form, in which it attempts to grapple with its reality. However, when mind has an erroneous conception of itself or its reality (which is just mind itself in another guise), it reaches an impasse, where it seems to encounter a contradiction. This contradiction is overcome via a synthesis, where the old conception and its contradiction are accommodated in a wider conception, which will in turn reach its own impasse, and so on until the final stage is reached.

This sounds momentous and mysterious (and it is), but let me try to illustrate it with a metaphor.

Imagine a cell awoke one day in the human body. At first, the cell is only aware of itself as a living thing, and therefore considers itself to be the extent of the world. But then the cell notices that it is limited by its environment. It is surrounded by other cells, which restrict its movement and even compete for resources. The cell then learns to define itself negatively, as against its environment. Not only that, but the cell engages in a conflict with its neighbors, fighting for resources and trying to assert its independence and superiority. But this fight is futile. Every time the cell attempts to restrict resources to its neighbors, it simultaneously impedes the flow of blood to itself. Eventually, after much pointless struggle, the cell realizes that it is a part of a larger structure—say, a nerve—and that it is one particular example of a universal type. In other words, the cell recognizes its neighbors as itself and itself as its neighbors. This process then repeats, from nerves to muscles to organs, until the human body is finally understood as one complete whole, an organism which lives and grows, but which nevertheless consists of distinct, co-dependent elements. Once again, Hegel’s model is organic rather than mechanical.

Just so, the mind awakes in the world and slowly learns to recognize the world as itself, and itself as one cell in the world. The complete unity, the world’s “body,” so to speak, is the Absolute Mind.

Hegel begins his odyssey of knowledge at the traditional Cartesian starting point: sense-certainty. We are first aware of sensations—hot, light, rough, sour—and these are immediately present to us, seemingly truth in its naked form. However, when mind tries to articulate this truth, something curious happens. Mind finds that it can only speak in universals, which fail to capture the particularity and the immediacy of its sensations. Mind tries to overcome this by using terms like “This!” or “Here!” or “Now!” But even these will not do, since what is “here” one moment is “there” the next, and what is “this” one moment is “that” the next. In other words, the truth of sense-certainty continually slips away when you try to articulate it.

The mind then begins to analyze its sensations into perceptions—instead of raw data, we get definite objects in time and space. However, we reach other curious philosophical puzzles here. Why do all the qualities of salt—its size, weight, flavor, color—cohere in one location, persist through time, and reappear regularly? What unites these same qualities in this consistent way? Is it some metaphysical substance that the qualities inhere in? Or is the unity of these qualities just a product of the perceiving mind?

At this point, it is perhaps understandable why Hegel thought that mind comprises all reality. From a Cartesian perspective—as an ego analyzing its own subjective experience—this is true: everything analyzed is mental. And, as Kant argued, the world’s organization in experience may well be due to the mind’s action upon the world as perceived. Thus true knowledge would indeed require an understanding of how our mind shapes our experience.

But Hegel’s premise—that the real is rational and the rational is real—becomes much more difficult to accept once we move into the world of intersubjective reality, where individual minds acknowledge other minds as real and existing in the same universe. For my part, I find it convenient to put the question of the natural world to one side. Hegel had no notion of change in nature; his picture of the world had no Big Bang and no biological evolution. In any case, he did not like Newtonian physics (he thought, rather dumbly, that the Law of Attraction is the general form of all laws, and that it explains nothing about nature), and he was not terribly interested in natural science. Hegel was far more preoccupied with the social world; and it is in this sphere that his ideas seem more sensible.

In human society, the real is the rational and the rational is the real, in the sense that our beliefs shape our actions, and our actions shape our environments, and our environments in turn shape our beliefs, in a constantly evolving dialogue—the dialectic. The structure of society is thus intimately related to the structure of belief at any given time and place. Let me explain that more fully.

Hegel makes quite an interesting observation about beliefs. (Well, he doesn’t actually say this, but it’s implied in his approach.) Certain mentalities, even if they can be internally consistent for an individual, reveal contradictions when the individual tries to act out these beliefs. In other words, mentalities reveal their contradictions in action and not in argument. The world created by a mentality may not correspond with the world it “wants” to create; and this in turn leads to a change in mentality, which in turn creates a different social structure, which again might not correspond with the world it is aiming for, and so on until full correspondence is achieved. Some examples will clarify this.

The classic Hegelian example is the master and the slave. The master tries to reduce the slave to the level of an object, to negate the slave’s perspective entirely. And yet, the master’s identity as master is tied to the slave having a perspective to negate; thus the slave must not be entirely objectified, but must retain some semblance of perspective in order for the situation to exist at all. Meanwhile, the slave is supposed to be a nullity with no perspective, a being entirely directed by the master. But the slave transforms the world with his work, and by this transformation asserts his own perspective. (This notion of the slave having his work “alienated” from him was highly influential, especially on Marx.)

Hegel then analyzes Stoicism. The Stoic believes that the good resides entirely in his own mental world, while the exterior world is entirely devoid of value. And yet the Stoic recognizes that he has duties in this exterior world, and thus this world has some moral claim on him. Mind reacts to this contradiction by moving to total Skepticism, believing that the world is unreal and entirely devoid of value, recognizing no duties at all. And yet this is a purely negative attitude, a constant denial of something that is persistently there, and this mode of denial collapses when the Skeptic goes about acting within this supposedly unreal world. Mind then decides that the world is unreal and devoid of value—including itself, as a part of the world—but that value exists in a transcendent sphere. This leads us to medieval Christianity and the self-alienated soul, and so on.

I hope you see by now what I mean by a conception not being able to be acted out without a contradiction. Hegel thought that mind progressed from one stage to another until finally the world was adequate to the concept and vice versa; indeed, at this point the world and the concept would be one, and the real would be rational and the rational real. Thought, action, and world would be woven into one harmonious whole, a seamless fabric of reason.

I am here analyzing Hegel in a distinctly sociological light, which is easy to do in many sections of the text. However, I think this interpretation would be difficult to justify in other sections, where Hegel seems to be making the metaphysical claim that all reality (not just the social world) is mental and structured by reason. Perhaps one could argue, on Kantian grounds, that our mental apparatus, as it evolves through time, shapes the world we experience in progressively different ways. But this would seem to require a lot more traditional epistemology than I see here in the text.

In a nutshell, this is what I understand Hegel to be saying. And I have been taking pains to present his ideas (as far as I understand them) in as positive and coherent a light as I can. So what are we to make of all this?

A swarm of criticisms begins to buzz. The text itself is disorganized and uneven. Hegel spends a great deal of time on seemingly minor subjects, and rushes through major developments. He famously includes a long, tedious section on phrenology (the idea that the shape of the skull reveals a person’s personality), while devoting only a few, very obscure pages to the final section, Absolute Knowledge, which is the entire goal of the development. This latter fact is partially explained by the book’s history: Hegel made a bad deal with his publisher, and had to rush the final sections.

As for the prose, the style of this book is so opaque that it could not have been an accident. Hegel leaves many important terms hazily defined, and never justifies his assumptions nor clarifies his conclusions. Obscurity benefits thinkers in that they can deflect criticism by accusing critics of misunderstanding; and the ambiguity of the text means that it can be variously interpreted depending on the needs of the occasion. I think Hegel did something selfish and intellectually irresponsible by writing this way, and even now we still hear the booming thunder of his unintelligible voice echoed in many modern intellectuals.

Insofar as I understand Hegel’s argument, I cannot accept it. Although Hegel presents the dialectic as a method of reasoning, I was never convinced of the necessary progression from one moment to the next. Far from a series of progressive developments, the pattern of the text seemed, rather, to be due entirely to Hegel’s whim.

Where Hegel is most valuable, I think, is in his emphasis on history, especially on intellectual history. This is something entirely lacking in his predecessors. He is also valuable for his way of seeing mind, action, and society as interconnected; and for his observation that beliefs and mentalities are embodied in social relations.

In sum, I am left with the somewhat lame conclusion that Hegel’s canonical status is well-deserved, but so is his controversial reputation. He is infuriating and exasperating, and his legacy is dubious. But his originality is undeniable, his influence is pervasive, and that legacy, good or bad, will always be with us.

View all my reviews