If there is a common thread to this pandemic, it is loss. Many have lost jobs, businesses, or homes. Others have lost members of their family, and still others have lost their lives. Even the luckiest among us have lost something, if only time. But this essay seeks to focus on another kind of loss: the loss of patience. Specifically, I want to put into words for myself this strange and unsettling feeling that, of late, comes over me at least once a day, the feeling we call pandemic fatigue.
The first time the coronavirus entered my consciousness as anything more than a blip was around Chinese New Year, in late January. I was going to see the celebratory parade in Usera, Madrid’s Chinese barrio, and I asked a friend of mine if he wanted to come. “Doesn’t seem like a good idea,” he said. “Lots of people coming from Wuhan.” Wuhan? I did not understand. “You know, that new coronavirus.”
I was stunned that someone in my life—and someone I considered sensible—was willing to change their behavior because of this virus on the news. Long before that, I had written off the periodic media frenzies about foreign diseases. Every other year there seemed to be some new virus ready to destroy the world—avian flu, swine flu, zika, SARS, Ebola—and every year it amounted to very little, at least in my life. Besides, I figured the media had such a strong financial incentive to frighten people that they would play up any potential danger, however remote.
So I went to the Chinese New Year Parade, and I didn’t get sick (though my camera was stolen), and I pushed coronavirus back to the peripheries of my awareness. It did not stay there for long. The news coming out of China seemed increasingly dire. The city of Wuhan was shut down completely. A whistleblower doctor died. Travel from China was banned. And still, stories of coronavirus infections started popping up all over the place.
I went on vacation in late February with my brother—to Poland—and, for the most part, life was still completely normal. But our flight back to Madrid took us through Milan, just for a short layover. By that time Italy was in bad shape, and parts of the country were already on lockdown. Milan was one of the worst hit areas. Even so, we did not even consider changing our flight. I was still quite sure that this virus business would blow over. All this was just our instinctual fear of the unknown.
By the first of March, most people were still in denial. By that I mean that we were thinking of this virus like some other kind of natural disaster, a flood or a fire—one that is localized in space and time. Maybe Italy was bad, and maybe China was bad, but we didn’t live in Italy or China. The virus would go away and we would move on. Yet two weeks after I got back to Madrid, the schools were closed. Two days later, restaurants had to shut down; and the next day we were shut up in our houses. It was the lockdown.
I really believed that it would just be for two weeks. A month, tops. I encouraged my mom to buy tickets for a trip to Ireland in June. No way this would still be going on in June, I thought. No chance. But now that I had so much extra time, I decided to read a little about pandemics. I read books by experts in public health and infectious disease, by historians and novelists, and by investigative journalists. And slowly, the truth dawned on me—the hard truth that this emergency was going to last a long time.
This was the first time that I was living through a world-historical crisis as an adult. The closest thing I could remember was the attacks of September 11th, but I was just a kid then, and I did not really understand what was going on. This time, I was painfully aware, and yet equally powerless to do anything about it.
I had heard stories of the solidarity that arises during times of crisis, but this was the first time I experienced it. Admittedly, it was difficult to show solidarity in any normal way, since we could not be physically close to one another. This was one of the most depressing aspects of the situation. But people figured out ways to lift each other’s spirits. There were the balcony concerts, the children’s drawings taped to windows, and the nightly rounds of applause for the healthcare workers.
The other aspect that helped us to get through this lockdown was fear. During these months we were still coming to grips with this new infection. How deadly was it, exactly? How did it spread? Could it stay in the air? Who was more vulnerable? What were all the symptoms? The uncertainty made the virus all the more frightening. Even so, it was clear that the virus was dangerous: overwhelmed emergency rooms, bodies stored in hockey rinks, and improvised field hospitals. With such a predator lurking in the streets, it was less tempting to go outside.
The twin supports of fear and solidarity made the lockdown bearable. That, and a certain amount of creativity.
In Spain we were only allowed out to go shopping for food. We could not take walks or exercise outside. This really limited the options when it came to maintaining mental health—especially in my case, since I love a long walk or a good run.
But I adapted. I created a workout routine I could do in my tiny room, and made sure to do it every day. To get some sun, I snuck out onto my roommate’s balcony. Missing the local parks, I bought a bunch of plants. I made YouTube videos for my students learning English at home. Since we could not go to restaurants, my brother and I started cooking ever-more elaborate dishes—braised oxtail stew, Brazilian feijoada, French cassoulet, and even homemade kebab.
Still, the monotony could be numbing, the social isolation irritating. I can hardly imagine what it would have been like for someone living alone.
Eventually, after what seemed to be half an eternity, we were let out to exercise. In mid-May, I took my first run in over two months. I emerged onto the street almost shivering with excitement.
And yet the run was somehow less enjoyable than I thought it would be. Partly this was due to circumstances. For whatever reason, the Spanish government decided to let us out only at certain prescribed times; so when I set out the streets were absolutely packed. But I was more disappointed at my own physical shape. Though I had been regularly exercising in my little room, running even a fairly short distance felt difficult, heavy, painful. Breathing was so uncomfortable that I even wondered if I had gotten the virus. And, of course, I was much slower than before.
By the beginning of summer, some flicker of light began to appear at the end of the tunnel. We were coming down from the virus’s curve, and hopefully hitting a flat bottom. The state of alarm lifted on June 21 and we were free to do whatever we wanted. Except for the masks, life began to look pretty normal again.
But even at this relatively calm time, the virus could not be forgotten. This was brought home to me when I tried to get my papers in order to visit New York for the summer. I do this every year, and I was even more eager than usual to go home, since it is always nice to take refuge in times of trouble. Even after getting the requisite documents together, however, I was faced with uncertainty.
Here was my predicament: though I could legally travel there and back with my documents, there was no guarantee that the airlines would know that. Visa regulations are enforced very imperfectly by airlines, who tend to err on the side of caution since they face penalties if they transport someone who cannot legally enter a country. Aside from that, flights could simply get cancelled from lack of demand, or the rules could change while I was in the United States, leaving me unable to return to my job in Spain. I hoped that someone in authority could give me some clarity. But the Spanish consulate could only tell me that the situation was evolving, and advised me not to risk it. So, in the end, I had to forego a visit to my homeland.
I focus on this situation because it captures an essential part of pandemic fatigue: the sense of total uncertainty about the future. It is the feeling of being in limbo, of your life being totally up in the air, of being unable to plan even in the short-term. The most one could do was to wait, while the normal pleasures of life passed silently by.
During the summer, I slowly tried to regain the running facility I had lost. It was far more difficult than I anticipated. My body was slow and sluggish, and even rather delicate. On one run I pulled a muscle in my core and had to spend several days recuperating. Nearly every run was in some way a disappointment. But I did discover a new place to run: a park near my apartment, affectionately called siete tetas (seven boobs), a name the park owes to its seven prominent hills that stand above the surrounding city. Running there obviously meant a lot of running uphill, and I figured that this challenge might be enough to get me back into shape.
Practicing this way, I quickly discovered the key to uphill running: look down. It is simply too painful to focus on how much of the hill remains. When you look forward, you become hyper-aware of your labored breathing, and the urge to give up becomes irresistible. But if you look down, focus on your feet, you notice that each individual step is not that much harder than running on level ground, and so you can continue. And it quickly struck me that the pandemic requires just this same mentality: look down, focus on each step, and forget about how much of the hill is left to climb.
Perhaps a Buddhist would describe this state of mind as enlightened, since it is just this absorption in the present moment that meditation tries to cultivate. And, indeed, it is a powerful strategy when times are tough. But few runners, I suspect, would enjoy running the whole time with their head down. Part of the pleasure of a good run is the scenery—at least for me. Likewise, a big part of the motivation of running comes from setting goals and trying to accomplish them: an attitude inherently oriented towards the future. The pandemic, just like this hill, made all this impossible, and it was all we could do to just keep our heads down and keep pushing forward.
Time became a problem during the pandemic—empty time.
At first, I admit, it was exciting to have so much time to fill. Indeed, mixed in with all the alarm and frustration of the early days of the lockdown, there was a distinct note of relief—the opportunity to slow down, to maybe work on some hobbies, or simply to relax and introspect.
But very soon people began to hit a wall, or at least I did. Humans are simply not meant to spend so much time inactive, cut off, and without a fixed schedule. We need a bit of structure and variety, or else time turns into a mushy purée, thin and bland. With no reason to get up early or late, to do something in the morning or the evening, today or tomorrow, this week or next, it somehow became all the more difficult to focus on anything productive. Focus, after all, is as much an act of exclusion—expelling extraneous distractions—as it is of inclusion; and there was nothing to exclude (or, perhaps, there was everything at once?).
One consequence of this lack of any fixed temporal landmarks was an increase in my consumption of alcohol. Simply put, there was not much else to do, and none of the usual reasons not to drink. Not that I was deliberately drowning my sorrows, you see (at least not most of the time); rather, my background consumption of alcohol grew steadily, until I was drinking almost every day. This only exacerbated the physical toll of prolonged inactivity, contributing to the general sense of malaise and torpor that became my natural element. I would wake up groggy and late, and hang around the house most of the day, even when we were finally allowed outside.
The cumulative effect of all this has been pandemic fatigue: a listlessness mixed with an undercurrent of anxiety. Without a routine, unable to see my family, I passed the time the best I could—taking a few trips, teaching a few online classes, and trying to carry on with my usual hobbies. It was not an altogether unpleasant way to live, I suppose.
Yet the feeling was rather like sunbathing on an active volcano. The whole world had a delicate, fragile quality, as if the situation might suddenly and drastically change once again. This made it difficult to fully relax or to fully commit to future plans. Even the approach of the new school year seemed distant and unreal. Would the schools really re-open? And if so, how long would they remain so?
The reason I have become so aware of pandemic fatigue is that, for the moment, it is partially lifting. School has started for in-person classes, and I am once again in front of a classroom, writing on a white board, trying to memorize students’ names (much more difficult with the masks!). In short, I not only have a routine once more, but also a social purpose. It feels surprisingly good. Aristotle was correct when he noted that we are social animals.
Now, after all this time, I have to be presentable in front of other people. This means no more gym shorts and sweatpants. The pandemic beard—quite impressively long, if I may say so—was shaven off, and my long hair trimmed. I even decided to do a dry month, Sober October, in order to reduce my drinking to pre-pandemic levels.
Best of all, my running ability has started to reach pre-lockdown levels once again. All that running uphill paid off, and I can finally run without my body dragging behind my intentions. Better still, I can run while looking forward instead of with my head down, staring at my feet.
But this pandemic is not over yet, and neither is the fatigue. We are in the midst of the long-predicted second wave of infections. The Spanish government is scrambling, amid bitter partisan bickering, to put together a coherent response for this new challenge, and without much success. The main consequence has been a slew of new rules, changing unpredictably from week to week, the majority more annoying than effective. Even as I write this, I am not sure what I will be allowed to do by next week.
The worst part of the current situation is that we will have to endure the next round of restrictions and rules without the psychological supports from the early days. The buoyant solidarity has vanished into the usual humdrum concerns and routine bickerings of life. Lately, most of us (especially the politicians) are more concerned with finger-pointing than with lending a helping hand.
Also, the fear of the virus has lessened considerably. This is perhaps partly justified, since we are more familiar with its symptoms and have better treatments; but mostly it is a result of simple habituation. Coronavirus is beginning to shift into the background threats of our environment, like car crashes or lung cancer—one of the many dangers that we mostly ignore.
After the solidarity and the fear have mostly gone, the only thing left is the feeling of fatigue. In the end, this fatigue is a failure to live with coronavirus, to really face up to it. Most of us badly want to forget about this emergency and move on, and yet we are constantly reminded of its nagging presence. Without the support of the community or even the fear of a new threat, the virus becomes merely a burden, an extra chore, an added whisper of anxiety. Somehow, a problem affecting nearly everyone on the globe has become a dull ache that we all must deal with privately and alone.
I am afraid that there is still a lot of uphill running in our future. The only thing to do is to put our heads down, and push on.
Matters of religion should never be matters of controversy. We neither argue with a lover about his taste, nor condemn him, if we are just, for knowing so human a passion.
This quote sums up the apparent futility of argument—not only about religion, but about so many things that arouse strong feeling. I have never seen, or even heard of, a discussion about religion or politics that ended with one of the participants being convinced. If anything, conversations about these topics seem only to entrench the opposing parties in their positions.
This experience seems to be common, even universal; and yet its implications strike at one of the pillars of western thought—that rational arguments can be used to reach the truth and to convince others—as well as of liberal democracy, which rests on the ideal that, to paraphrase John Milton, truth emerges victorious from open encounters with untruth. If debate is really futile in matters of religion (which involve our ultimate views of life and the universe) and politics (which involve our stance on society), then are we doomed to endless tribal bickering based on nothing more than group mentality?
I strongly wish that this weren’t the case; but I admit that, judging by my actions in daily life, I have little faith in the power of reason in these matters. I tend to avoid topics like religion and politics, even among friends. Powerful emotions underpin these aspects of life; values and identity are implicated; and individual psychology—background, traumas, inadequacies—may render the discussion far removed from cold calculation.
To a large extent, admittedly, rationality has only a subsidiary role in decision-making. Hume was quite right, I believe, to call reason a “slave of the passions.” We are never motivated by reason alone; indeed I don’t even know what that would look like. We are motivated, instead, by desires, which are organic facts. In themselves, desires are neither rational nor irrational. Rationality only applies, first, when we are figuring out how to satisfy these desires; and, second, when multiple, conflicting desires are at play.
The desires to be skinny and to eat three pints of ice cream a day, for example, conflict with one another, and reasoning is needed to achieve a harmony between the two. A reasoner may realize that, however delicious ice cream may be, the desire to be skinny is consonant with the strong desire to be healthy and live long, and so the ice cream is reduced. Both internally, within our psyches, and externally, within society, reason is how we achieve the most satisfying balance of competing desires.
Since reason rests on a fundamentally non-rational basis—namely, desires—it may be the case that rational argument has no purchase. In politics, for example, somebody may crave equality, and another person freedom; and no argument could move or undermine these desires, since neither is rational in the first place. Different political orientations are rooted in different value systems; and values are nothing but orientations of desires.
But I think it is often the case that competing value systems have many points in common. Grave inequality can, for instance, curtail freedom; and enforced equality can do the same. For either party, then, a satisfactory society cannot have absolute inequality, absolute equality, absolute freedom, or absolute slavery. These different values are therefore not totally at odds, but are merely different emphases of the same basic desires, different ways to harmonize competing pulls. And in cases like these, rational argument can help to achieve a compromise.
What about religion? Here the case seems somewhat different from politics, since religion is not just a question of values but involves a view of reality.
Admittedly, political ideologies also involve a certain view of reality. Each ideology comes with its own historical narrative. Sometimes these narratives are nothing but a tissue of lies, as with the Nazis; and even the most respectable political narrative may make some dubious assumptions. Nevertheless, the validity of political opinions is not purely a matter of the truth of their historical narrative. Somebody may genuinely desire communism even if everything they assert about the Soviet Union is wrong; and if debunking their history makes us doubtful of the possibility of satisfying their desire, it does not invalidate the desire itself.
With religion, to repeat, the case is somewhat different, since religions assert some set of facts about the universe; and without this set of facts, the religion falls to pieces. All of Marx’s theories of history may be wrong, but you can still rationally want a communist society. But a Christianity without a belief in a divine Jesus has lost its core. It is no longer a religion. In this way religion is decidedly not like falling in love, contrary to Santayana, since love, being pure desire, makes no assertion about the world.
This seems to put religions on a different footing, since they rest not only on desires, but beliefs. And if these beliefs prove incorrect or irrational, then the religion ceases to make sense. From my readings in history, science, philosophy, and theology, it seems quite clear to me that this is the case: that insofar as religious notions can be disproved, they have been; and insofar as they are unprovable, they are irrational to believe.
Indeed, I think with enough time I could explain this quite clearly to a believer. But I have never tried, since I am almost positive it wouldn’t work—that their religious beliefs would be impervious to argument. I also admit that the thought of doing so, of trying to talk someone out of a religion, makes me feel uneasy. It seems impolite and invasive to try to exert so much pressure on somebody’s fundamental beliefs. And even if I were successful, I believe I would feel somewhat guilty, like I had just told a child that Santa wasn’t real.
But is this uneasiness justified? If religions are truly irrational, based on a mistaken picture of the world, then they can give rise to unjustifiable actions. The religiously inspired fights against gay marriage and abortion, or the religious denial of climate change, are excellent examples of this. Furthermore, if people habitually accept an irrational picture of the world, basing beliefs on religious authority rather than reasoned arguments, then perhaps they will be more easily manipulated by unscrupulous leaders.
On the other hand, living in a liberal society requires tolerance of others’ beliefs, rational or otherwise. And living in a polite society requires that we show respect even when we do not agree. So it seems that a balance must be struck between arguing against an irrational belief and keeping considerate silence.
To abolish aristocracy, in the sense of social privilege and sanctified authority, would be to cut off the source from which all culture has hitherto flowed.
Though I do not share Santayana’s sanguine attitude towards the aristocracy, I think this quote does bring up a vital point: the relationship between art and its patrons.
Nowadays we take it for granted that artists make their money the way that anyone else does, by trading their services on the open market. The buying public—concert-goers, music purchasers, companies that need songs for commercials, and so on—is the ultimate art patron. But this has not historically been the case. Wealthy institutions and affluent individuals have more commonly played this role. So what does this shift from artistic feudalism to capitalism signify?
This question is far more than merely financial. For the artist, however proud and independent, cannot help but be influenced by their audience and supporters.
It is easy to deprecate the vulgarity of popular art in the age of capitalism, but I am not sure aristocracy was much better. Goya’s most profound works are not his portraits of his aristocratic confreres, however excellent these may be; and the same goes—with some extremely notable exceptions—for Velazquez’s many portraits of the royal family. There is no logical reason why an aristocracy of power and wealth should also be an aristocracy of taste.
True, hereditary aristocrats, freed from laborious duty, do have more free time to devote to artistic appreciation. Without the necessity to make their way in the world, they may decide to compete in aesthetic refinement or in sponsoring living artists. This is possible, to be sure, and has happened many times in history. But this method of patronage—private, wealthy individuals—can easily lead to self-aggrandizement; the art it fosters, by being too allied to worldly wealth and earthly power, becomes yet another form of conspicuous consumption.
Perhaps the greatest art patron in western history has been, not royals or nobles, but the church. In music and the visual arts, at least, religious patronage has led to some of the greatest accomplishments in our history: the works of Palestrina, Bach, El Greco, Michelangelo. The advantages of church patronage are clear. An institution of enormous wealth, the church can recruit the best artists and supply all the resources they need. More than that, though the church is also prone to self-aggrandizement, the spiritual aims of religious art free it from the worldliness of the hereditary aristocracy.
Even more important, perhaps, is the continuity of tradition fostered by religions: establishing subjects, tropes, styles, and techniques, that are refined and passed down through the ages. How could Bach have written his Mass in B Minor, or Michelangelo conceived the Sistine Chapel, if they had not been the beneficiaries of hundreds of years of religious tradition?
Granting the church its honorable place in the history of art, we may, however, still admit that religious patronage can lead to a sterile conformity. There are only so many ways, and only so many emotions, that can be portrayed in a Madonna and Child; and, in any case, the church will not prove congenial to nonreligious artists—of which history is full. The Dantes of this world may find in the church all they need; but a Rabelais can never be so satisfied. Inevitably a religious organization will overlook or squelch some aspects of the human experience.
This became clear when a new patron emerged in history, far removed from the grandeur of nobility or the magnificence of the church: namely, the mercantile middle-class. The prime example of this is the painting of the Dutch Golden Age. All patrons, to an extent, like to see themselves represented in what they patronize; thus the newly powerful Dutch capitalists gravitated towards private portraits and intimate scenes of daily life.
The artistic advantages of this shift in patronage are obvious, opening up new vistas for artists to explore. Neither an aristocrat nor a clergyman, for example, would think of buying a work like Vermeer’s The Milkmaid—a work that has nothing to do with aristocratic virtues or spiritual consolations. Of course there is a downside to this, since the qualities that help merchants succeed have nothing to do with artistic appreciation; and, even if guided by exquisite taste, bourgeois art pays for its wider scope with limited depth. It is an art of prose, not poetry, with a weaker tradition to guide it and quotidian values to embody.
The ideal situation may be a mixture of patronage, such as was the case with Shakespeare. Having every rung of society as his audience, from paupers to princes, he had to strive for universality—and obviously succeeded. But the bard was clearly an exceptional case. Striving to please everyone can easily turn into pleasing nobody in particular, creating something bland and unobjectionable. While the particular taste of patrons may be constraining for some artists, it may help many others to focus.
In recent years the university has become a major source of patronage, especially for musical composition. The advantage of this is clear in an age when the general public has little to no interest in art music. But the nature of the university, as an institution, can also have negative artistic repercussions. Unlike a church, guided by spiritual values, a university is above all a place of exploration; and thus academic music tends to be experimental. Experimentation is usually an artistic virtue; but when cut off from any common set of aesthetic ideals, art degenerates into intellectual exercise.
The economy of the visual arts has diverged quite radically from either literature or music. In the age of mechanical and digital reproduction, the physical uniqueness of a painting has made it an ideal collector’s item. Thus paintings are once more a form of conspicuous consumption, with wealthy patrons spending millions on single works. And unlike in former times, when aristocrats were the only ones able to afford art, the easy access to reams of high-quality reproductions puts pressure on collectors to distinguish themselves through extreme taste as well as extreme expenditure. Given that the prize can be so huge, this is an irresistible incentive to inaccessibility—the competition to appreciate the unappreciable.
The book and music industries are, by contrast, dominated by a relatively small number of giant publishing houses and record companies. Being companies, their fundamental motivation is profit. This makes them naturally risk-averse, since it is always safer to reproduce success than to bet on something different; and this encourages a conformity to commercially successful styles and topics. Acting as gatekeepers to fame, these companies can therefore exert a standardizing influence. And for obvious reasons these companies favor simple, popular styles in order to maximize their clientele.
But does an artist even need a patron? What about the Dickinsons, the van Goghs, and the Kafkas of the world, toiling away, unknown and unsuccessful, in some remote corner? It is true that many great artists never managed to make a living off their art during their lifetimes, relying on extraneous work or their families for support. And this arrangement does have the key advantage of allowing the artist to pursue her individual vision, without having to adapt her work to any foreign tastes, preserving her originality whole and entire.
Yet even this blessing is not unmixed. For patronage, if it subjects artists to sometimes undesirable pressure, can also give artists the direction and external challenge they need. Not every artist is self-sufficient enough to work in silent obscurity, following the bent of their own genius. The structure imposed by patronage can turn a vague or self-involved aesthetic impulse into a focused piece.
It may seem sordid to think of art in these monetary terms; but, as I hope I have shown, this is not a purely aesthetic question. Part of what gives any age its characteristic art is the way that artists make their living. The internet is now opening new possibilities for artistic entrepreneurs. The ultimate aesthetic effects of this new medium are only just beginning to appear.
On the left bank of the Seine, in an old Beaux-Arts train station, is one of Europe’s great museums: the Musée d’Orsay. Its collection mainly focuses on French art from the mid-nineteenth to the early-twentieth century. This was a fertile time for Paris, as the museum amply demonstrates. Rarely can you find so many masterpieces collected in one place.
The museum is arranged with exquisite taste. In the middle runs a corridor, filled with statues—of human forms, mostly. They dash, reach, dance, strain, twist, lounge, smile, laugh, gasp, grimace.
On either side of this central corridor are the painting galleries, arranged by style and period. There were naturalistic paintings—with a vanishing perspective, careful shadowing, precise brushstrokes, scientifically accurate anatomy, symmetrical compositions. There were the impressionists—a blur of color and light, creamy clouds of paint, glances of everyday life. There was Cézanne, whose precise simplifications of shape and shade lend his painting of Mont Sainte-Victoire a calm, detached beauty. Then there were the pointillists, Seurat and Signac, who attempted to break the world into pieces and then to build it back up using only dabs of color, arranged with a mixture of science and art.
Greatest of all was van Gogh, whose violent, wavy lines, bright, simple colors, and oil paint smeared in thick daubs onto the canvas make his paintings slither and dance. It is simply amazing to me that something as static as a painting can be made so energetic. Van Gogh’s paintings don’t stand still under your gaze, but move, vibrate, even breathe. It is uncanny.
His self-portrait is the most emotionally affecting painting I have ever seen. Wearing a blue suit, he sits in a neutral blue space. His presence warps the atmosphere: the air seems to be curling around him, as if in a torrent. The only colors that break the blur of blue are his flaming red beard and his piercing green eyes. He looks directly at the viewer, with an expression impossible to define. At first glance he appears anxious, perhaps shy; but the more you look, the more he appears calm and confident. You get absolutely lost in his eyes, falling into them, as you are absorbed into ever more complicated subtleties of emotion concealed therein. Suddenly you realize that the curling waves of air around him are not mere background, but represent his inner turmoil. Yet is it a turmoil? Perhaps it is a serenity too complicated for us to understand?
I looked and looked, and soon the experience became overwhelming. I felt as if he were looking right through me, while I pathetically tried to understand the depths of his mind. But the more I probed, the more lost I felt, the more I felt myself being subsumed into his world. The experience was so overpowering that my knees began to shake.
Consider this reaction of mine. Now imagine if a curious extraterrestrial, studying human behavior, visited an art museum. What would he make of it?
On its face, the practice of visiting art museums is absurd. We pay good money to gain entrance to a big building, so we can spend time crowding around brightly colored squares that are not obviously more interesting than any other object in the room. Indeed, I suspect an alien would find almost anything on earth—our plant and animal life, our minerals, our technology—more interesting than a painting.
In this essay I want to try to answer this question: Why do humans make and appreciate art? For this is the question that so irresistibly posed itself to me after I stared into van Gogh’s portrait. The rest of my time walking around the Musée d’Orsay, feeling lost among so many masterpieces, I pondered how a colorful canvas could so radically alter my mental state. By the end of my visit, the beginnings of an answer had occurred to me—an answer hardly original, being deeply indebted to Walter Pater, Marcel Proust, and Robert Hughes, among others—and it is this answer that I attempt to develop here.
My answer, in short, is that the alien would be confused because human art caters to a human need—specifically, an adult human need. This is the need to cure ennui.
Boredom hangs over human life like a specter, so pernicious because it cannot be grasped or seen.
The French anthropologist Claude Lévi-Strauss knew this very well. As a young man he enjoyed mountain scenes, because “instead of submitting passively to my gaze” the mountains “invited me into a conversation, as it were, in which we both had to give our best.” But as he got older, his pleasure in mountain scenery left him:
And yet I have to admit that, although I do not feel that I myself have changed, my love for the mountains is draining away from me like a wave running backward down the sand. My thoughts are unchanged, but the mountains have taken leave of me. Their unchanging joys mean less and less to me, so long and so intently have I sought them out. Surprise itself has become familiar to me as I follow my oft-trodden routes. When I climb, it is not among bracken and rock-face, but among the phantoms of my memories.
Dostoyevsky put the phenomenon more succinctly: “Man grows used to everything, the scoundrel!”
These two literary snippets have stuck with me because they encapsulate the same thing: the ceaseless struggle against the deadening weight of routine. Nothing is new twice. Walk through a park you found charming at first, and the second time around it will be simply nice, the third time just normal.
The problem is human adaptability. Unlike most animals, we humans are generalists, able to adapt our behavior to many different environments. Instead of being guided by rigid instincts, we form habits.
By “habits” I do not only refer to things like biting your nails or eating pancakes for breakfast. Rather, I mean all of the routine actions performed by every person in a society. Culture itself can, at least in part, be thought of as a collection of shared habits. These routines and customs are what allow us to live in harmony with our environments and one another. Our habits form a second nature, a learned instinct, that allows us to focus our attention on more pressing matters. If, for whatever reason, we were incapable of forming habits, we would be in a sorry state indeed, as William James pointed out in his book on psychology:
There is no more miserable human being than one in whom nothing is habitual but indecision, and for whom the lighting of every cigar, the drinking of every cup, the time of rising and going to bed every day, and the beginning of every bit of work, are subjects of express volitional deliberation. Full half the time of such a man goes to the deciding, or regretting, of matters which ought to be so ingrained in him as practically not to exist for his consciousness at all.
Habits are, thus, necessary to human life. And up to a certain point, they are desirable and good. But there is also a danger in habitual response.
Making the same commute, passing the same streets and alleys, spending time with the same friends, watching the same shows, doing the same work, living in the same house, day after day after day, can ingrain a routine in us so deeply that we become dehumanized.
A habit is supposed to free our mind for more interesting matters. But we can also form habits of seeing, feeling, tasting, even of thinking, that are stultifying rather than freeing. The creeping power of routine, pervading our lives, can be difficult to detect, precisely because its essence is familiarity.
One of the most pernicious effects of routine is to dissociate us from our senses. Let me give a concrete example. A walk through New York City will inevitably present you with a chaos of sensory data. You can overhear conversations, many of them fantastically strange; you can see an entire zoo of people, from every corner of the globe, dressed in every fashion; you can look at the ways that the sunlight moves across the skyscrapers, the play of light and shadow; you can hear dog barks, car horns, construction, alarms, sirens, kids crying, adults arguing; you can smell bread baking, chicken frying, hot garbage, stale urine, and other scents too that are more safely left uninvestigated.
And yet, after working in NYC for a few months, making the same commute every day, I was able to block it out completely. I walked through the city without noticing or savoring anything. My lunch went unappreciated; my coffee was drunk unenjoyed; the changing seasons went unremarked; the fashion choices of my fellow commuters went unnoticed.
It isn’t that I stopped seeing, feeling, hearing, tasting, but that my attitude to this information had changed. I was paying attention to my senses only insofar as they provided me with useful information: the location of a pedestrian, an oncoming car, an unsanitary area. In other words, my attitude to my sensations had become purely instrumental: attending to their qualities only insofar as they were relevant to my immediate goals.
This exemplifies what I mean by ennui. It is not boredom of the temporary sort, such as when waiting on a long line. It is boredom as a spiritual malady. When beset by ennui we are not bored by a particular situation, but by any situation. And this condition is caused, I think, by a certain attitude toward our senses. When afflicted by ennui, we stop treating our sensations as things in themselves, worthy of attention and appreciation, and begin treating them merely as signs and symbols of other things.
To a certain extent, we all do this, often for good reason. When you are reading this, for example, you are probably not paying attention to the details of the font, but are simply glancing at the words to understand their meaning. Theoretically, I could use any font or formatting, and it wouldn’t really affect my message, since you are treating the words as signs and not as things in themselves.
This is our normal, day-to-day attitude towards language, and it is necessary for us to read efficiently. But this can also blind us to what is right in front of us. For example, an English teacher I knew once expressed surprise when I pointed out that ‘deodorant’ consists of the word ‘odor’ with the prefix ‘de-’. She had never paused long enough to consider it, even though she had used the word thousands of times.
I think this attitude of ennui can extend even to our senses. We see the subtle shades of green and red on an apple’s surface, and only think “I’m seeing an apple.” We feel the waxy skin, and only think “I’m touching an apple.” We take a bite, munching on the crunchy fruit, tasting the tart juices, and only think “I’m tasting an apple.” In short, the whole quality of the experience is ignored or at least underappreciated. The apple has become part of our routine and has thus been moved to the background of our consciousness.
Now, imagine treating everything this way. Imagine if all the sights, sounds, tastes, textures, and smells were treated as routine. This is an adequate description of my mentality when I was working in New York, and perhaps of many people all over the world. The final effect is a feeling of emptiness and dissatisfaction. Nothing fulfills or satisfies because nothing is really being experienced.
This is where art comes in. Good art has the power to, quite literally, bring us back to our senses. Art encourages us not only to glance, but to see; not only to hear, but to listen. It reconnects us with what is right in front of us, but is so often ignored. To quote the art critic Robert Hughes, the purpose of art is “to make the world whole and comprehensible, to restore it to us in all its glory and occasional nastiness, not through argument but through feeling, and then to close the gap between you and everything that is not you.”
Last summer, while I was still working at my job in NYC, I experienced the power of art during a visit to the Metropolitan. By then, I had already visited the Met dozens of times in my life. My dad used to take me there as a kid, to see the medieval arms and armor; and ever since I have visited at least once a year. The samurai swords, the Egyptian sarcophagi, the Greek statues—it has tantalized my imagination for decades.
In my most recent visits, however, the museum had lost much of its power. It had become routine for me. I had seen everything so many times that, like Lévi-Strauss, I was visiting my memories rather than the museum itself.
But this changed during my last visit. It was the summer right before I came to Spain. I had just completed my visa application and was about to leave my job. This would be my last visit to the Met for at least a year, possibly longer. I was saying goodbye to something intimately familiar in order to embrace the unknown. My visit became no longer routine, but unique and fleeting, and this made me experience the museum in an entirely new way.
Somehow, the patina of familiarity had been peeled away, leaving every artwork fresh and exciting. Whereas on previous visits I had viewed the Greco-Roman and Egyptian statues as mere artifacts, revealing information about former civilizations, this time I became acutely sensitive to previously invisible subtleties: fine textures, subtle hues, elegant forms. In short, I had stopped treating the artworks as icons—as mere symbols of a lost age—and begun to see them as genuine works of art.
This experience was so intense that for several days I felt rejuvenated. I stopped feeling so deeply dissociated from my workaday world and began to take pleasure again in little things.
While waiting for the elevator, for example, I looked at a nearby wall; and I realized, to my astonishment, that it wasn’t merely a flat plain surface, as I had thought, but was covered in little bumps and shapes. It was stucco. I grew entranced by the shifting patterns of forms on the surface. I leaned closer, and began to see tiny cracks and little places where the paint had chipped off. The slight variations on the surface, a stain here, a splotch there, the way the shapes seemed to melt into one another, made it seem as though I were looking at a painting by Jackson Pollock or the surface of the moon.
I had glanced at this wall a hundred times before, but it took a visit to an art museum to let me really see it. Routine had severed me from the world, and art had brought me back to it.
Reality is always experienced through a medium—the medium of senses, concepts, language, and thought. Sensory information is detected, broken down, analyzed, and then reconfigured in the brain.
We are not passive sensors. While a microphone might simply detect tones, rhythms, and volume, we hear cars, birds, and speech; and while a camera might detect shapes, colors, and movement, we see houses and street signs. The data we collect is, thus, not experienced directly, but is analyzed into intelligible objects. And this is for the obvious reason that, unlike cameras and microphones, we need to use this information to survive.
In order to deal efficiently with the large amount of information we encounter every day, we develop habits of perceiving and thinking. These habits are partly expectations of the kinds of things we will meet (people, cars, language), as well as the ways we have learned to analyze and respond to these things. These habits thus lie at the crossroads between the external world of our senses and the internal world of our experience, forming another medium through which we experience (or don’t experience) reality.
Good art forces us to break these habits, at least temporarily. It does so by breaking down reality and then reconstructing it with a different principle—or perhaps I should say a different taste—than the one we habitually use.
The material of art—what artists deconstruct and re-imagine—can be taken from either the natural or the cultural world. By ‘natural world’ I mean the world as we experience it through our senses; and by ‘cultural world’ I mean the world of ideas, customs, values, religion, language, tradition. No art is wholly emancipated from tradition, just as no tradition is wholly unmoored from the reality of our senses. But very often one is greatly emphasized at the expense of the other.
A good example of an artform concerned with the natural world is landscape painting. A landscape artist breaks down what she sees into shapes and colors, and puts it together on her canvas, making whatever tasteful alteration she sees fit.
Her view of the landscape, and how she chooses to reconstruct it on her canvas, is of course not merely a matter between her and nature. Inevitably our painter is familiar with a tradition of landscape paintings; and thus while engaged with the natural landscape she is simultaneously engaged in a dialogue with contemporary and former artists. She is, therefore, simultaneously breaking down the landscape and her tradition of landscape painting, deciding what to change, discard, or keep. The final product emerges as an artifact of an exchange between the artist, the landscape, and the tradition.
The fact remains, however, that the final product can be effectively judged by how it transforms its subject—the landscape itself. Thus I would say that landscape paintings are primarily oriented towards the natural world.
By contrast, many religious paintings are much more oriented towards a tradition. It is clear, even from a glance, that the artists of the Middle Ages were not concerned with the accurate portrayal of individual humans, but with the evoking of religious figures through idealizations. The paintings thus cannot be evaluated by their fidelity to the sensory reality, but by their fidelity to a religious aesthetic.
It is worth noting that artworks oriented towards the natural world tend to be individualistic, while artworks oriented towards the cultural world tend to be communal. The reason is clear: art oriented towards the natural world reconnects us with our senses, and our senses are necessarily personal. By contrast, culture is necessarily impersonal and shared. The rise of perspective, realistic anatomy, individualized portraits, and landscape painting at the time of the Italian Renaissance can, I think, persuasively be interpreted as a break from the communalism of the medieval period and an embrace of individualism.
Music is an excellent demonstration of this tendency. To begin with, the medium of sound is naturally more social than that of sight or language, since sound pervades its environment. What is more, music is a wholly abstract art, and thus totally disconnected from the natural world.
This is because sound is just too difficult to record. With only a pencil and some paper, most people could make a rough sketch of an everyday object. But without some kind of notational system—and even then, maybe not—most people could not transcribe an everyday sound, like a bird’s chirping.
Thus, musicians (at least western musicians) take their material from culture rather than nature, from the world of tradition rather than the world of our senses.
(In an oral tradition, where music does not need to be transcribed, it is possible that music can strive to reproduce natural sounds; but this has not historically been the case in the west.)
To deal with the problem of transcribing sound, western musicians developed rigorous and formal ways of classifying sounds. An organizational system emerged, with its own laws and rules; and it is these laws and rules that the composer or songwriter manipulates.
And just as your knowledge of the natural world helps to make sense of visual art, so our cultural training helps us to make sense of music. Just as you’ve seen many trees and human faces, and thus can appreciate how painters re-imagine their appearances, so have you heard hours and hours of music in your life, most of it following the same or similar conventions.
Thus you can tell (most often unconsciously) when a tune does something unusual. Relatively few people, for example, can define a plagal cadence (an unusual final cadence from the IV to the I chord), but almost everyone responds to it in Paul McCartney’s “Yesterday.”
As a result of its cultural grounding, music is an inherently communal art form. This is true, not only aesthetically, but anthropologically. Music is an integral part of many social rituals—political, religious, or otherwise. Whether we are graduating from high school, winning an Oscar, or getting married, music will certainly be heard. As much as alcohol, music can lower inhibitions by creating a sense of shared community, which is why we play it at every party. Music thus plays a different social role than visual art, connecting us to our social environment rather than to the often neglected sights and sounds of everyday life.
The above descriptions are offered only as illustrations of my more general point: Art occupies the same space as our habits, the gap between the external and the internal world. Painters, composers, and writers begin by breaking down something familiar from our daily reality. This material can be shapes, colors, ceramic vases, window panes, the play of shadow across a crumpled robe in the case of painting. It can be melodies, harmonies, timbre, volume, chord progressions, stylistic tropes in the case of music. And it can be adjectives, verbs, nouns, situations, gestures, personality traits in the case of literature.
Whatever the starting material, it is the artist’s job to recombine it into something different, something that thwarts our habits. Van Gogh’s thick daubs of paint thwart our expectation of neat brushstrokes; McCartney’s plagal cadence thwarts our expectation of a perfect cadence; and Proust’s long, gnarly sentences and philosophic ideas thwart our expectations of how a novelist will write. And once we stop seeing, listening, feeling, sensing, thinking, expecting, reacting, behaving out of habit, and once more turn our full attention to the world, naked of any preconceptions, we are in the right mood to appreciate art.
Yet it is not enough for art to be simply challenging. If this were true, art would be anything that was simply strange, confusing, or difficult. Good art can, of course, be all of those things; but it need not be.
Many artists nowadays, however, seem to disagree on this point. I have listened to works by contemporary composers which simply made no sense to my ears, and have seen many works of modern art which had no visual interest. We are living in the age of “challenging” art; and beauty is too often reduced to confusion.
But good art must not only challenge our everyday ways of seeing, listening, and being. It must reconstitute those habits along new lines. Art interrogates the space between the world and our habits of seeing the world. It breaks down the familiar—sights, harmonies, language—and then builds it back up again into the unfamiliar, using new principles and new taste. Yet for the product to be a work of art, and not mere strangeness, the unfamiliar must be rendered beautiful. That is the task of art.
Thus, Picasso does not only break down the perspectives and shapes of daily life, but builds them back up into new forms—fantastically strange, but sublime nonetheless. Debussy disintegrates the normal harmonic conventions—keys, cadences, chords—and then puts them all back together into a new form, uniquely his, and also unquestionably lovely. Great art not only shows you a different way of seeing and understanding the world, but makes this new vista attractive.
Pretentious art, art that merely wants to challenge, confuse, or frustrate you, is quite a different story. It can be most accurately compared to the relationship between an arrogant schoolmaster and a pupil. The artist is talking down to you from a position of heightened knowledge. The implication is that your perspective, your assumptions, your way of looking at the world are flawed and wrong, and the artist must help you to get out of your lowly state. Multiple perspectives are discouraged; only the artist’s is valid.
And then we come to simple entertainment.
Entertainment is something that superficially resembles art, but its function is entirely different. For entertainment does not reconnect us with the world, but lures us into a fantasy.
Perhaps the most emblematic form of pure entertainment is advertising. However well made an advertisement is, it can never be art; for its goal is not to reconnect us with the world, but to seduce us. Advertisements tell us we are incomplete. Instead of showing us how we can be happy now, they tell us what we still need.
When you see an ad in a magazine, for example, you are not meant to scan it carefully, paying attention to the purely visual qualities. Rather, you are forced to view it as an image. By ‘image’ I mean a picture that serves to represent something else. Images are not meant to be looked at, but glanced at; images are not meant to be analyzed, but instantly understood. Ads use images because they are not trying to bring you back to your senses, but lure you into a fantasy.
Don’t misunderstand me: There is nothing inherently wrong with fantasy. Indeed, I think fantasy is almost indispensable to a healthy life. The fantasies of advertisements are, however, somewhat nefarious, since ads are never pure escapism. Rather, the ad forces you to negatively compare your actual life with the fantasy, conclude that you are lacking something, and then of course seek to remedy the situation by buying their product.
Most entertainment is, however, quite innocent, or at least it seems to me. For example, I treat almost all blockbusters as pure entertainment. I will gladly go see the new Marvel movie, not in order to have an artistic experience, but because it’s fun. The movie provides two hours of relief from the normal laws of physics, of probability, from the dreary regularities of reality as I know it. Superhero movies are escapism at its most innocent. The movies make no pretenses of being realistic, and thus you can hardly feel the envy caused by advertisements. You are free to participate vicariously and then to come back to reality, refreshed from the diversion, but otherwise unchanged.
The prime indication of entertainment is that it is meant to be effortless. The viewer is not there to be challenged, but to be diverted. Thus most bestselling novels are written with short words, simple sentences, stereotypical plotlines stuffed full of clichés—because this is easy to understand. Likewise, popular music uses common chord progressions and trite lyrics to make hits—music to dance to, to play in the background, to sing along to, but not to think about. This is entertainment: it does not reconnect us with our senses, our language, our ideas, but draws us into fantasy worlds, worlds with spies, pirates, vampires, worlds where everyone is attractive and cool, where you can be anything you want, for at least a few hours.
Some thinkers, most notably Theodor Adorno, have considered this quality of popular culture to be destructive. They abhor the way that people lull their intellects to sleep, tranquilized with popular garbage that deactivates their minds rather than challenges them. And this point cannot be wholly dismissed. But I tend to see escapism in a more positive light; people are tired, people are stressed, people are bored—they need some release. As long as fantasy does not get out of hand, becoming a goal in itself instead of only a diversion, I see no problem with it.
This, in my opinion, is the essential difference between art and entertainment. There is also an essential difference, I think, between art and craft.
Craft is a dedication to the techniques of art, rather than its goals. Of course, there is hardly such a thing as a pure craft or a pure art; no artist completely lacks a technique, and no craftsman totally lacks aesthetic originality. But there are certainly cases of artists whose technique stands at a bare minimum, as well as craftsmen who are almost exclusively concerned with the perfection of technique.
Here I must clarify that, by technique, I do not mean only manual skills like brush strokes or breath control. I also mean, more generally, the mastery of a convention.
Artistic conventions consist of fossilized aesthetics. All living aesthetics represent the individual visions of artists—original, fresh, and personal. All artistic conventions are the visions of successful artists, usually dead, which have ceased to be refreshing and have become charmingly familiar. Put another way, conventional aesthetics are the exceptions that have been made the rule. Not only that, but conventions often fossilize only the most obvious and graspable elements of the brilliant artists of the past, leaving behind much of their living fibre.
This can be exemplified if we go and examine the paintings of William-Adolphe Bouguereau in the Musée d’Orsay. Even from a glance, we can tell that he was a masterful painter. Every detail is perfect. The arrangement of the figures, the depiction of light and shadow, the musculature, the perspective—everything has been performed with exquisite mastery. My favorite painting of his is Dante and Virgil in Hell, a dramatic rendering of a scene from Dante’s Inferno. Dante and his guide stand to one side, looking on in horror as one naked man attacks another one, biting him in his throat. In the distance, a flying demon smiles, while a mound of tormented bodies writhes behind. The sky is a fiery red and the landscape is bleak.
I think it is a wonderful painting. Even so, Dante and Virgil seems to exist more as a demonstration than as art. For the main thing that makes a painting art, and the main thing this painting lacks, is an original vision. The content has been adopted straightforwardly from Dante. The technique, although perfectly executed, shows no innovations of Bouguereau’s own. All the tools he used had been used before; he merely learned them. Thus the painting, however impressive, ultimately seems like a technical exercise. And this is the essence of craft.
I fear I have said more about what art isn’t than what it is. That’s because it is admittedly much easier to define art negatively than positively. Just as mystics convey the incomprehensibility of God by listing all the things He is not, maybe we can do the same with art?
Here is my list so far. Art is not entertainment, meant to distract with fantasy. Art is not craft, meant to display technique and obey rules. Art is not simply an intellectual challenge, meant to shock and frustrate your habitual ways of being. I should say art is not necessarily any of these things, though it can be, and often is, all of them. Indeed, I would contend that the greatest art entertains, challenges, and displays technical mastery, and yet cannot be reduced to any or all of these things.
Here I wish to take an idea from the literary critic Harold Bloom, and divide artworks into period pieces and great works. Period pieces are works that are highly effective in their day, but quickly become dated. These works are too specifically targeted at one particular cultural atmosphere to last. In other words, they may be totally preoccupied with the habits prevalent at one place and time, and become irrelevant when time passes.
To pick just one example, Sinclair Lewis’s Babbitt, which I sincerely loved, may be too engrossed in the foibles of 20th century American culture to be still relevant in 500 years. Its power comes from its total evisceration of American ways; and, luckily for Lewis, those ways have changed surprisingly little in their essentials since his day. The book’s continuing appeal therefore depends largely on how much the culture does or does not change. (That being said, the novel has a strong existentialist theme that may allow it to persist.)
Thus period pieces largely concern themselves with getting us to question particular habits or assumptions—in Lewis’s case, the vanities and superficialities of American life.
The greatest works of art, by contrast, are great precisely because they reconnect us with the mystery of the world. They don’t just get us to question certain assumptions, but all assumptions. They bring us face to face with the incomprehensibility of life, the great and frightening chasm that we try to bridge over with habit and convention.
No matter how many times we watch Hamlet, we can never totally understand Hamlet’s motives, the mysterious inner workings of his mind. No matter how long we stare into van Gogh’s eyes, we can never penetrate the machinations of that elusive mind. No matter how many times we listen to Bach’s Art of Fugue, we can never entirely wrap our minds around the dancing, weaving melodies, the baffling mixture of mathematical elegance and artistic sensitivity.
Why are these works so continually fresh? Why do they never seem to grow old? I cannot say. It is as if they are infinitely subtle, allowing you to discover new shades of meaning every time they are experienced anew. You can fall into them, just as I felt myself falling into van Gogh’s eyes as he stared at me across space and time.
When I listen to the greatest works of art, I feel like I do when I stare into the starry sky: absolutely small in the presence of something immense and immensely beautiful. Listening to Bach is like listening to the universe itself, and reading Shakespeare is like reading the script of the human soul. These works do not merely reconnect me to my senses, helping me to rid myself of boredom. They do not merely remind me that the world is an interesting place. Rather, these works remind me that I myself am a small part of an enormous whole, and should be thankful for every second of life, for it is a privilege to be alive somewhere so lovely and mysterious.
The British Museum is a project of the Enlightenment. It is one of the oldest—older than both the Louvre and the Prado—and the biggest museums in the world. Its collection began when Sir Hans Sloane, a doctor and naturalist, bequeathed his private collection of “curiosities” to the state. The collection grew from there, with the goal of encompassing all of human history under one roof. And because the British Empire soon came to dominate half the globe, this ambition was not so ludicrous as it may at first appear. Ironically, you can probably find finer artifacts in the British Museum than in the countries that the exhibits represent.
The museum’s massive collection is housed in an equally massive neoclassical building designed by Robert Smirke. Its collection is divided by era and area: Prehistory, the Ancient Near East, Ancient Egypt, Ancient Greece, South Asia, East Asia, the Americas, Africa, and Oceania. Wandering around the museum is like getting lost in a copy of a World History textbook brought to life. The collection is so vast and detailed that the visitor is simply overwhelmed. There is far too much information to take in and process in one visit—even in a dozen visits. Each artifact on display deserves deep study; and when each room is full of hundreds of these artifacts, there is not much you can do except dumbly gape. Likewise, there is not much a writer can do except emulate Sir Hans Sloane and collect curiosities.
I began in the Ancient Near East: Mesopotamia, the cradle of civilization. There is something sacred about the simple fact of age. Seeing ancient artifacts is the closest we get to time travel. The passing years corrode all material things, just as the gentle flowing of a stream eventually cuts through rock. The physical bodies of these ancients have long decayed; everything they knew and loved is gone. And yet, 5,000 years later, the messages they carved still preserve an echo of their voices.
Every time I look at a cuneiform tablet—its crisscrossing wedges and lines unintelligible to me, but visibly a language—I find myself profoundly moved. For all I know, the message is a record of a banal commercial exchange—so many goats for so many bushels of hay—but the simple fact of writing something down, of imprinting words indelibly, signals the beginning of that noble and doomed war against time—the war we call ‘civilization’.
Seeing these first scratches in stone is like catching a glimpse of the universe a few seconds after the big bang. It marks the commencement of something entirely new in history: the ability to transfer knowledge across generations; to develop literature, philosophy, mathematics, and science; to create unchanging codes of law to fairly govern societies; to make the shadows of thought external and permanent. Less fortunately, the beginning of writing also marks the origin of bureaucracy and accounting—indeed, this seems to have been its original purpose, as communities grew too big to be governed by word of mouth.
Perhaps the most impressive object in this section is the Standard of Ur. (This is one of the objects chosen in Neil MacGregor’s series, A History of the World in 100 Objects. You can listen to the segment here. I wish I had read the accompanying book, which looks excellent, before my visit to the museum; it’s on my list.)
It is called a ‘standard’, but nobody really knows what it was used for: a soundbox for a musical instrument or a box to store money for sacred projects—who can say? All we can really determine is that it almost certainly was not a standard, since the drawings are too detailed to be seen from far away. The object dates from around 2600 BCE and consists of a box whose sides depict scenes of war and peace, in three registers of images that read like a comic strip. On the war side, we see an army marching off to battle, with armored footsoldiers and men in chariots; below, these charioteers trample enemies underfoot. On the reverse side, we see men seated at a banquet, drinking, while a harpist and a singer provide background music. Below, men are herding animals and carrying sacks of goods on their backs, presumably to offer them in tribute to the king.
This standard was found at the site known as the Royal Cemetery of Ur, along with objects like those seen on both the War and the Peace sides. Judging from the numerous skeletons in the tomb, it seems that the Sumerians had a practice similar to the Egyptians: upon the death of kings and queens, the royal attendants were put to death to serve their master in the afterlife. I always shudder when I hear about these practices. Drinking poison to follow your king in death seems the height of unjust absurdity. I feel angry on behalf of the attendants who lived in oppression and who did not even find freedom in their master’s death. And yet, despite my anger, I can’t help feeling a sort of awe at the level of devotion displayed by this practice. To identify so strongly with a leader that you follow them in death seems hardly human; just as an ant or a bee colony dies with its queen, so these human groups voluntarily put themselves to death.
Violence and oppression thus form the subject-matter of this artifact and surround its discovery. On one side we see the king marching off to war and killing enemies; on the other side the king enjoys the tribute of his hard-working subjects. Nowadays it is impossible to see the society depicted on the Standard of Ur as anything but monstrous: a predatory upper class stealing from the poor, and then sending the lower class off to war to defend their bounty and to capture slaves.
But it is worth asking whether the beginning of civilization could have been any different. Humans had just begun farming and forming cities. For the first time in the history of our species, we were living in large, permanent settlements alongside strangers. For the first time, we had enough resources to allow some people in the community to specialize in tasks other than gathering food: priests, soldiers, musicians, administrators, rulers, and artisans. The accumulation of resources always invites raids from without and crimes from within; and fending off these attacks requires organization, leadership, and violence. A community simply couldn’t afford to be anything but authoritarian and militaristic if it hoped to survive. It is an unfortunate fact of human history that justice and security are often at odds—a fact we still confront in the question of surveillance and terrorism.
As a parting thought, I just want to note how remarkable it is that we can look at something like the Standard of Ur—a luxury product made 5,000 years ago, by people who spoke a different language, most of whom couldn’t write, who had a different religion, who lived in a different climate, a people whose experience of the world had so little in common with our own, a people who lived just at the beginning of history—we can look at this object and find it not only intelligible, but beautiful. We experience this same miracle when we read the Epic of Gilgamesh—a story still moving, 4,000 years after it was written down.
In my first anthropology class we learned that humans are cultural creatures, fundamentally shaped by their social environment. But if this were true—if our inborn nature were something negligible and our culture omnipotent—wouldn’t we expect a civilization which flourished in such different circumstances to give rise to art that we couldn’t even hope to understand? And yet, so universal is the human experience that, 5,000 years later, we can still recognize ourselves in the Standard of Ur.
This constancy of our nature is not only manifested in great works of art. For me, the most touching illustrations of this are the little baubles and trinkets, the sundry domestic items that give us a taste of daily life in that faraway age. We see the universal human urge to beautify our bodies demonstrated by the jewels of Ancient Greece, Persia, and Egypt, the rings, earrings, pendants, necklaces, armlets, and bracelets which still glitter and charm today—indeed, designs inspired by ancient examples can be bought in the museum store. We see this also in one of the oldest board games ever discovered, the Royal Game of Ur, whose game-board and game-pieces are instantly recognizable to the modern visitor. A cuneiform tablet has also been found which explains the rules, allowing scholars to play the game 4,500 years after its creation (though I can’t find out whether they enjoyed it).
Yet if the continuities are striking, so are the divergences. I feel the gap that separates the present from the ancient past most poignantly whenever I look at a papyrus scroll covered in Egyptian hieroglyphics. Few human artifacts look more alien to me than these bits of ancient writing. Lines of simple images—eyes, storks, sparrows, hawks, snakes, scarabs, and many I can’t recognize—run up and down the papyrus, in a parade of symbolic forms. On the top and in the corner are larger drawings, depictions of mythological scenes, illustrations of dead gods and long-forgotten myths. What is most striking is how the writing is a kind of picture, and the pictures a sort of writing; the visual and the verbal are combined into a web of meaning, absolutely saturated with significance.
The thing that is so fascinating about the culture of ancient Egypt is that, for thousands of years, through the rise and fall of dynasties and the passing away of dozens of generations, there is a unified, complete, and instantly recognizable aesthetic. It is immediately obvious to any visitor that they have entered the Egyptian section, whether the objects date from the first dynasty or the twentieth.
There is undoubtedly something terrifying about this continuity—terrifying that a society based on gross injustice persisted, with its culture nearly unchanged, for a span of time that dwarfs that of our own Western culture. But it is also easy for me to imagine the deep satisfaction enabled by such a complete mythology—a symbolic worldview that decorates every surface, imbues every hour of the day with importance, structures the year and explains the cosmos, penetrates into the depths of reality and even looks beyond the veil that separates life from death. I feel similar stirrings when I look at an illuminated manuscript from our own Middle Ages, an artifact not so different from the Egyptian scrolls.
In any exhibition on ancient Egypt the mummies are always the stars—those shrunken, dried corpses carefully wrapped and sealed in stone sarcophagi to be sent down the eons. When I was there, a crowd was gathered around a mummy of a woman named Cleopatra, perhaps in the mistaken belief that she was Mark Antony’s famous paramour. Yet the most moving object in the Egypt section, for me, is the colossal bust of Ramesses II. (This was also featured on The History of the World in 100 Objects; you can listen to it here.)
Ramesses II was one of the most effective leaders in all of Egypt’s history. He was born around 1300 BCE and lived some 90 years, making his reign not only the most iconic, but one of the longest in ancient Egyptian history. An energetic general, statesman, and administrator, he was most of all a builder. He presided over the construction of dozens of colossal statues, temples, monuments, and palaces. It was this Ramesses who inaugurated the Abu Simbel complex, whose great temple includes four colossal statues (20 meters, or 66 feet high) of Ramesses himself, carved directly from the hillside. Ramesses was also responsible for the so-called Ramesseum, not a tomb, but a temple complex built for the worship of him, the deified Ramesses, during his reign and after his death.
The bust of Ramesses in the British Museum was taken from this Ramesseum. It is only a fragment: the base of the statue, in which the pharaoh is seated, remains in the Ramesseum. Napoleon’s troops first tried and failed to move the statue; then the British hired an Italian adventurer to do it, who used a combination of pulleys, hydraulics, and old-fashioned manpower. As Neil MacGregor notes, it is a testament to the power and ingenuity of the Egyptians that, 3,000 years later, their statues still require technical tours de force to move. Imagine the discipline, organization, and sheer amount of sweat and backbreaking effort it must have taken to move the original stone.
Cracked and battered as he is, the statue still has the effect that its creator intended: the impression of calm omnipotence. The pharaoh looks down serenely from a great height—imperturbable, immovable, eternal. Such a work is clearly the product of a culture in its prime, when artistic execution and social organization were raised to the pitch of perfection. As a mere display of technique, the statue is remarkable: the ability to transport such a massive block of stone, and then to chip away and polish the surface until all that remains is a perfect image of power. And you can imagine how effective these images were as propaganda, in a time before television or telescreens.
In life, Ramesses was as close as any human can get to complete power. In death, he was worshipped as a god. His name and his face have come down to us from over 3,000 years ago. This statue has outlasted whole kingdoms and countries; and there is a good chance it will persist even when (God forbid) the British Museum is no more. So you might say that, as propaganda, the statue has been an unmitigated success. And yet, Ramesses himself, his empire, and his entire culture—all of them have passed into memory, leaving only their stones and their bones. Impressive as the bust undeniably is, it is also undeniable that it now stands as a sample of Egyptian statuary, to be gawked at by visitors, impressed but certainly not worshipful.
All wood rots, all iron rusts, and everything human turns to dust. Shelley, upon hearing reports of this very bust of Ramesses II, put this sentiment into famous lines:
And on the pedestal these words appear:
“My name is Ozymandias, king of kings:
Look on my works, ye mighty, and despair!”
Nothing beside remains: round the decay
Of that colossal wreck, boundless and bare,
The lone and level sands stretch far away.
The final irony is that those immortal lines, like Ramesses’s bust, have outlasted their makers and will likely last as long as there are humans who worry about the finitude of life.
If there is any hope of immortality, it is through the communication of our ideas—something demonstrated most poignantly by the Rosetta Stone. That ancient document—an administrative decree about taxes and tithes—now stands in the British Museum as a testament to the ability of different cultures in different places and times to understand one another. In the modern world it has become trendy to agonize about the impossibility of translation and the gulfs that separate different cultural worldviews. But humans have been translating since the beginning of history; and the very fact that we can decipher a long-dead language, written in an archaic script, using another translation of an ancient language written in another archaic script, shows that communication can transcend wide differences of perspective.
I have already spent far too much space describing the treasures of the British Museum. But I cannot leave off without a mention of the Elgin Marbles from the Parthenon.
The Parthenon, as everyone knows, is the most important and iconic ruin of Ancient Greece. Built during Athens’ golden age as a temple to the city’s patron goddess, Athena, it has been both a church and a mosque in its long life. The Ottomans even decided to use the temple to store ammunition—guessing that their enemies, the Venetians, would never dare to fire at such a hallowed edifice. This guess was incorrect; in 1687 a Venetian bomb detonated the ammunition inside, causing a massive explosion that left only the building’s husk intact. Then in 1800 an art-loving British aristocrat, the Earl of Elgin, in highly dubious circumstances, excavated sculptures and friezes from the ruined Parthenon to decorate his home. But a costly divorce forced him to sell his home and his collection to the British government. As a result, the parts of the Parthenon, in the next chapter of their long and battered history, found their way into the British Museum.
Unsurprisingly, this acquisition is controversial. Imagine if a museum in England had a part of Mount Rushmore. Americans wouldn’t be happy, and neither are the Greeks. The Greek government has been trying to recover the collection since 1983. There are many arguments advanced for sending the marbles back to Greece. The most compelling is the simplest: that the Parthenon is one of the most important cultural monuments in European history, and should be as complete as possible. In any case, the legality of the original transfer has always been questioned: it’s possible that Elgin didn’t have official permission from the Ottoman Empire. In England, public opinion was divided at the time—Lord Byron famously thought it was inexcusable vandalism—and seems to be in favor of returning the collection nowadays. The British Museum is (also unsurprisingly) in favor of keeping the marbles.
For my part, it seems unquestionably just to return the Elgin marbles to Athens. I do admit, however, that I was grateful for the opportunity to see the Parthenon friezes in the British Museum. The display is excellent, allowing the visitor to clearly see the friezes and the statues. If the marbles were inserted back into their original places in the Parthenon (if this is even possible), then they wouldn’t be as clearly visible. And if the marbles were merely displayed in a museum in Athens, then I’m not sure there would be any improvement of presentation. Nevertheless, it does seem that strict justice demands that the marbles be returned.
As for the Elgin marbles themselves—the friezes, metopes, and pediments that line the wall of one enormous exhibit in the British Museum—what is there to be said? The sculptures are likely the most studied and analyzed works of art in Western history; and not only that, they are perhaps the most influential. Almost from the start these works have defined and illustrated classical taste. Indeed, the Parthenon sculptures have served as such a ubiquitous model for later artists that it is nearly impossible to respond to them as fresh works of art. They are immediately familiar; you feel that you’ve seen it all before, even if this is your first time in the British Museum.
To the modern eye, the Parthenon sculptures can appear cold, austere, and timeless—perfect human forms carved from perfect white marble. It is scandalous to imagine that these frigid sculptures were once painted with gaudy colors; and inconceivable that, once upon a time, these paragons of artistic orthodoxy were once innovative and daring works that broke every convention.
A visitor to the British Museum can catch a glimpse of the originality of these works if they visit the Babylonian and Egyptian sections first. Moving on from those precursors to the Greeks, you can see obvious continuities—heroes and gods, mythological beings and legends, religious processions and rituals—but the changes are even more striking. In the Parthenon, we see a new thing in history: a confident belief in the powers of human intelligence and creativity. Unlike the static and rigid bodies of Egyptian pharaohs, sitting straight up and looking straight ahead, we see bodies twisting, turning, leaping, extending, straining—in other words, we see the human body in motion, propelled by its own force. This is not a society that believes in stable order, but in ceaseless striving.
The new perspective is illustrated most clearly by the metopes depicting the centauromachy: the battle between the human lapiths and the half-human half-animal centaurs. In Egyptian mythology, many of the gods were half-animal; and Sumerian palaces were often guarded by the sphinx-like lamassus. In both of these cultures, the natural world, the world of animal life, was seen as a source of power and cosmic order. Yet in the Parthenon the half-animal creatures, the centaurs, are agents of chaos and destruction—creatures who must be conquered and vanquished. For better or for worse, this urge to conquer our own animal nature has been with us ever since.
There are so many more—thousands and thousands more—works that deserve deep contemplation in the museum’s collection, but I will stop here. Yet as I take leave of the British Museum, I want to leave you with one parting thought.
No institution I have seen better illustrates both the enormous strengths and the limitations of the Enlightenment than the British Museum. And because the Enlightenment is very much still with us, it is vital that we understand these strengths and limitations.
Its strengths are undeniable, especially in the context of history. As compared with what came before it, the conception of humanity and history embodied in the museum is undoubtedly an advance. Europeans began to be interested in non-European cultures. Their sense of ancient history began to extend far beyond Ancient Greece and the tribes of Israel. Instead of focusing on their own country or their own religion, Europeans could conceive of humanity as a whole, with a single origin and a common destiny. The museum also demonstrates the democratic spirit of the Enlightenment. The knowledge is put on display for all to see and learn, not sequestered in schools or guarded by jealous academics. Just as the friezes of the Parthenon illustrate the confidence in human intelligence, so does the British Museum exemplify the new, boundless confidence in human reason—the belief that the world is intelligible, that we can communicate our knowledge to anyone, and that our knowledge is not bounded by creed, language, or nation.
But the museum also demonstrates the limitation of this universalist aim. For the idea of a museum that encompasses all of human history relies on the idea that we can create a neutral context in which to understand that history. This underlying notion is clear at a glance: each room—plain, white, full of right-angles—is filled with objects wrenched from their original context. Some of this context is restored, but only as information on panels. My question is: can a modern visitor, looking at a bracelet from ancient Egypt, reading about that bracelet on its accompanying caption, really grasp what this bracelet was to the jewel-maker who created it or the aristocrat who wore it? For comparison, imagine walking into a museum filled with objects from your room, except each object is carefully labeled and sits on its own display. Could any visitor understand what life was like for you?
My point is that there is something inescapably artificial and sterile about the museum. In attempting to create a universal history, a neutral context for information, the museum transforms its objects and imposes a new context. The original meaning of each artifact, how they were used and understood by their creators, is abolished; and instead, each artifact becomes a piece of evidence in a specifically Enlightenment story about the growth of humankind.
To put this another way, the Enlightenment attitude fails to come to grips with how our attempts to understand the world transform what we’re trying to understand. When knowledge is seen as impersonal, existing in a neutral context, simply a matter of seeing and describing, then knowledge becomes blind to its own power. And the British Museum is, among many other things, a demonstration of British power: the financial, political, and military means to scour the world and collect its most valuable objects into one location. It is also a demonstration of British intellectual power: the power to understand all of human history, to see truly and to interpret correctly, to escape provincialism into neutral universality.
I need to pause here. I sound as if I am being harshly critical of the British Museum, and indeed I am. But the truth is that my brief visit was staggering. I saw and learned so much in such a short time that I cannot possibly deny that I think the museum is valuable. The reason I level these criticisms at the British Museum is not because I think this intellectual project it represents is bankrupt or futile, but because, with all its flaws and limitations, with all its political and economic underpinnings, it seems to be the best we have yet achieved in humanity’s understanding of itself. I see these challenges not as reasons to despair—any intellectual project will have its limitations—but as spurs to creative solutions.
Who does not see that I have taken a road along which I shall go, without stopping and without effort, as long as there is ink and paper in the world?
—Michel de Montaigne
One thing above all attracts me to Montaigne: we both have an addiction to writing.
It is a rather ugly addiction. I personally find those who love the sound of their own voice nearly intolerable—and unfortunately I fall into this category, too—but to be addicted to writing is far, far worse: not only do I love airing my opinions in conversation, but I think my views are so valuable that they should be shared with the world and preserved for future generations.
Why do I write so much? Why do I so enjoy running my fingers over a keyboard and seeing letters materialize on the screen? What mad impulse keeps me going at it, day after day, without goal and without end? And why do I think it’s a day wasted if I don’t have time to do my scribbling?
In his essay “Why I Write,” George Orwell famously answered these questions for himself. His first reason was “sheer egoism,” and this certainly applies to me, although I would define it a little differently. Orwell characterizes the egoism of writers as the desire “to seem clever, to be talked about, to be remembered after death,” and in general to live one’s own life rather than to live in the service of others.
I would call this motivation “vanity” rather than “egoism,” and vanity is undeniably one of my motivations to write—especially the desire to seem clever, one of my uglier qualities. But this vanity is rather superficial; there is a deeper egoism at work.
Ever since I can remember, I have had the awareness, at times keen and painful, that the world of my senses, the world that I share with everyone else, is separate and distinct from the world in my head—my feelings, imagination, thoughts, my dreams and fantasies. The two worlds are intimately related, and communicate constantly, but there is still an insuperable barrier cutting off one from the other.
The problem with this was that my internal world was often far more interesting and beautiful to me than the world outside. Everyone around me seemed totally absorbed in things that were, to me, boring and insipid; and I was expected to show interest in these things too, which was frustrating. If only I could express the world in my head, I thought, and bring my internal world into the external world, then people would realize that the things they busy themselves with are silly and would occupy their time with the same things that fascinated me.
But how to externalize my internal world? This is a constant problem. Some of my sweetest childhood memories are of playing all by myself, with sticks, rocks, or action figures, in my room or my backyard, in a landscape of my own imagination. While alone, I could endow my senses with the power of my inner thoughts, and externalize my inner world for myself.
Yet to communicate my inner world to others, I needed to express it somehow. This led to my first artistic habit: drawing. I used to draw with the same avidity as I write now, filling up pages and pages with my sketches. I advanced from drawings of whales and dinosaurs, to medieval arms and armor, to modern weaponry. Eventually this gave way to another passion: video games.
Now, obviously, video games are not a means of self-expression; but I found them addicting nonetheless, and played them with a seriousness and dedication that alarms me in retrospect—so many hours gone!—because they were an escape. When you play a video game you enter another world, in many ways a more exciting and interesting world, a world of someone’s imagination. And you are allowed to contribute to this dream world—in part, at least—and adopt a new identity, in a world that abides by different rules.
Clearly, escapism and self-expression, even if they spring from the same motive, are incompatible; in the first you abandon your identity, and in the second you share it. For this reason, I couldn’t be satisfied for long with gaming. In high school I began to learn guitar, to sing, and eventually to write my own songs. This satisfied me for a while; and to a certain extent it still does.
But music, for me, is primarily a conduit of emotion; and as I am not a primarily emotional person, I’ve always felt, even after writing my own songs, that the part of myself I wanted to express, the internal world I still wanted to externalize, was still getting mostly left behind. It was this that led me to my present addiction: writing.
I should pause here and note that I’m aware how egotistical and cliché this narrative seems. My internal world is almost entirely a reflection of the world around me—far, far less interesting than the world itself—and my brain, I’m sorry to say, is mostly full of inanities. I am in every way a product—a specifically male, middle-class, suburban product—of my time and place; and even my narrative about trying to express myself is itself a product of my environment. My feeling of being original is unoriginal. My life story is a stereotype.
I know all of this very well, and yet I cannot shake this persistent feeling that I have something I need to share with the world. More than share, I want to shape the world, to mold it, to make it more in accordance with myself. And my writing is how I do that. This is egoism in its purest form: the desire to remake the world in my image.
A blank page is a parallel world, and I am its God. I control what happens and when, how it looks, what its values are, how it ends, and everything else. This feeling of absolute control and complete self-expression is what is so intoxicating about writing, at least for me. Once you get a taste of it, you can’t stop. Montaigne couldn’t, at least: he kept on editing, polishing, revising, and expanding his essays until his death. And I suspect I’ll do the same, in my pitiful way, pottering about with nouns and verbs, eventually running out of new things to write about and so endlessly rehashing old ones, until I finally succumb to the tooth of time.
After mentioning the egoism of writers, Orwell goes on to mention three other motivations: aesthetic enthusiasm, historical impulse, and political purpose. But I think he leaves two things out: self-discovery and thinking.
Our thoughts are fugitive and vague, like shadows flickering on the wall, forever in motion, impossible to get hold of. And even when we do seem to come upon a complete, whole, well-formed thought, as often as not it pops like a soap bubble as soon as we stretch out our fingers to touch it. Whenever I try to think something over silently, without recording my thoughts, I almost inevitably find myself grasping at clouds. Instead of reaching a conclusion, I get swept off track, blown into strange waters, unable to remember even where I started.
Writing is how I take the fleeting vapors of my thoughts and solidify them into definite form. Unless I write down what I’m thinking, I can’t even be sure what I think. This is why I write these quotes and commentary; so far it has been a journey of self-investigation, probing myself to find out my opinions.
When I commit to write, it keeps me on a certain track. Unless you are like Montaigne and write wherever your thoughts take you, writing inevitably means sticking to a handful of subjects and proceeding in an orderly way from one to the other. Since I am recording my progress, and since I am committed to reaching the conclusion, this counteracts my tendency to get distracted or to go off topic, as I do when I think silently.
This essay is a case in point. Although these are things I have often talked and thought about, I had never fully articulated to myself the reasons why I write, or strung all my obsessions into a narrative with a unified motivation, as I did above, until I decided to write about them. No wonder I’m addicted.