Running the (Full) Madrid Marathon

Four years ago, I ran my first race, the Madrid Half-Marathon. Since then, running has become a regular part of my life. I signed up for the same half-marathon in 2020 (postponed to 2021 because of the pandemic), and then again in 2022, finally getting my half-marathon time down to 1:55. I also completed several 5ks and 10ks, and even a grueling trail run in El Escorial, which covered 20 kilometers with 1,000 meters of elevation gain thrown in (we had to go up and down a mountain). As you can see, I got a little obsessed with running and racing.

But like many runners, I felt incomplete. I had not done the most iconic of all races: the Marathon. Now, of course, this was an illogical thought. There is nothing magical about 42.195 kilometers. And there are many excellent reasons to avoid such a long distance and to stick to other, more reasonable races. Indeed, as you will see, I am not even convinced that training to run such a long distance is even particularly healthy. But logical or not, reasonable or not, I wanted to overcome the challenge, and so I committed myself to the 2023 Madrid Marathon as a kind of Christmas “present” to myself.

There are tons of resources and training plans available to anyone preparing for a marathon. I followed one from a book by Matt Fitzgerald, but I am sure that most of them share the same basic scheme: a period of increasing distance, two weeks of “peak” training, and a tapering-off period of two to three weeks before the race. This is essentially what I did. On weekdays I would run two or three times, either an hour-long easy run or some sort of speed training. Then, every Saturday, I completed a long run of ever-increasing distance—starting at the half-marathon distance and capping out at 19 miles (30 kilometers). Most of the training resources I consulted advised against going farther than this, or training for more than three and a half hours, since anything longer is too stressful for the body to recover from.

As an aside, I also noted with curiosity that there are several different schools of thought when it comes to training. A serious runner I know, for example, advised me to focus my effort on interval training (running relatively short distances at high speed, then walking to recover, and repeating). The sports writer Matt Fitzgerald, on the other hand, recommends mainly long runs at a very low intensity, with just a bit of speed training thrown in. Other runners swear by strength training and footwork exercises. Many others have no method at all, and just go as far and as fast as their feet will take them on every run. I suppose these differences don’t matter much in the end, so long as you are putting your body under the appropriate stress (and thus provoking an adaptation).

For what it’s worth, I basically followed the Fitzgerald advice, and stuck to lots of slow runs with just a bit of speed work thrown in.

Though my training schedule was on the lighter side of marathon preparation, it was still enough to consume my free time. I gradually stopped practicing guitar and substantially reduced my writing. Normally, I like to take little trips to the mountains on the weekends, but these were also dropped. All of my other hobbies had to exist on the periphery of running. Marathon training, it seems, is hard to incorporate as just one hobby among many. It takes over your life.

My Saturday long runs were especially draining. I would go to sleep early on Friday, not wanting to run tired or hungover the next day. Then I would wake up dreading the run, and I’d do everything I could to postpone it. Normally it took me until 2 p.m. to muster the willpower to drag myself out of the house, which meant eating an extremely late lunch at around 5 or 6.

Yet when I finally did manage to put on my running gear, tie my shoes, fill up my water bottle, and step onto the sidewalk, habit took over. I had the same basic route for all of these long runs, which I followed without fail. This was the path that runs alongside the Manzanares River, in the south of Madrid. Technically, this consists of two separate parks, the Parque Lineal and the Madrid Río, but in reality it forms a continuous greenspace that extends for as long as any runner could desire. It is also very flat and quite attractive, so I felt very fortunate to have such an ideal training ground nearby.

Keeping myself entertained during these long runs—which often lasted three hours and beyond—became something of a problem. Normally I listen to audiobooks on my runs (my choice for marathon training: Les Misérables), but I found that I could not focus on a book for such a long stretch. Thus, after about an hour I would switch to Bob Dylan’s old radio program, Theme Time Radio Hour, which was like a second wind after Victor Hugo’s prolixity. But I made sure to spend at least a part of every run without anything in my ears, just focusing on the experience. I thought it deserved at least a slice of my undivided attention.

The long Saturday runs not only made Fridays unavailable for socializing, but usually left me so tired that I had little desire to do anything on Saturday night or to go anywhere on Sunday. Long-distance running, I must conclude, is very bad for one’s social life.

I am fortunate in that I normally don’t get sick. But I spent most of January and February—my first two months of training—with a persistent cough, which seemed to be related to the long Saturday runs. During the work week, I would gradually feel better. By Friday, I’d think I had gotten over whatever virus was bothering me. But then, after my Saturday run, the cough would return with a vengeance. Though I cannot prove it, I got the strong impression that the long runs were notably weakening my immune system. Heavily breathing the cold winter air for hours at a time couldn’t have helped, either. I must also conclude, then, that marathon training isn’t necessarily even good for your health—at least in the short term.

Carlos (left) about to run the El Peluca trail race with me.

Marathon runners are often advised to do a kind of warm-up race a month or so before the event. For me, this was a trail race in Aranjuez called “El Peluca”—the nickname of a local barber who has managed to become a long-distance runner despite his diabetes. The race was 18 kilometers long but with about 500 meters of elevation gain, making it roughly as difficult as a flat half-marathon. My coworker, Carlos—a triathlete—very generously helped me train for this race. He even ran it along with me.

The race left from a park at the edge of town, and we were quickly among the hills that ring the city. After about two kilometers on a flat road we reached the trailhead, and from there we went up and down, over and over, on narrow, dusty paths. Carlos gamely stayed by my side, encouraging me to keep up the pace—which was very nice of him, but also slightly depressing, since he made it seem effortless.

I began the race thinking my legs had gotten pretty strong—I had been training for almost three months by then, at much longer distances, often running up hills—but the steep slopes sapped my energy. By the time we ran down the final hill and back onto the road, I could hardly accelerate for a final sprint. I finished with a time of 1:55, in the bottom half. Another of my coworkers, Víctor, actually won the entire race with a blazing 1:19—though admittedly he may not have come in first had another strong runner not gotten lost and gone the wrong way. Trail running has its perils.

(Carlos, meanwhile, was having a great time, as he had found a purple wig that had been hidden in some bushes along the race course. He crossed the finish line with a cartwheel and won a basket full of hair products as the prize for having found the wig.)

During the week after the race, I had pain going all the way up the outside of my right leg, from calf to hip. I was afraid that I had given myself tendinitis or some other injury. I took it easy for a few days and stretched. This only helped a little. Then, I tried using a rolling pin to massage everywhere that hurt. Miraculously, that made the pain disappear almost immediately. I have no idea why that worked.

My next long run was what is called a “marathon-simulation.” This is a run of 26 kilometers (not miles) at your planned marathon pace—which, for me, was an unhurried 10 minutes per mile. This time, I actually managed to get myself outside in the early morning, and ran without anything in my ears. It was much easier than I expected. The miles flew by and I finished well under my projected time. Maybe I can do this after all, I thought.
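
(For the curious, here is the arithmetic behind that projection, using the standard conversion of 1 mile ≈ 1.61 kilometers: 26 kilometers is about 16.2 miles, and 16.2 miles × 10 minutes per mile ≈ 162 minutes, or roughly 2 hours and 42 minutes of running.)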

This year, the Easter holidays fell three weeks before the race. I wanted to do something with my free time, but I also wanted to maintain my fitness. So I hit upon the idea of hiking on the Camino de Santiago. I covered about 115 kilometers in five days, from April 2 to April 6, and once again enjoyed the lush Galician countryside. But in retrospect, was this good “training”? I am not sure. I think hiking does benefit your running ability, but I also think it may have been too much distance, too close to the actual race day. The leg pain mentioned above temporarily returned. 

With my camino finished, the heavy phase of marathon training was over. The only thing left was the so-called “taper”—basically, relaxing a little. I did not do any runs longer than 10 miles in the two weeks before the race, focusing instead on shorter, faster sessions.

I also took the opportunity to try an experiment. One week before race day, I went cold turkey on coffee, tea, and anything else with caffeine in it. The first two days were unpleasant, especially the second. I felt groggy, confused, and almost sick. The closest thing I can compare it to is a hangover. But my sleep markedly improved. This was partly the goal: to increase the quality and quantity of my rest leading up to race day. The other goal of the experiment was to reset my tolerance so that, when I finally had caffeine on race day, I would be extra sensitive to it. Caffeine is, after all, perhaps the most accessible legal performance-enhancing drug.

Of course, in the lead-up to the race, alcohol was entirely cut out. I had to live the pure life.

Three days before race day, I began the famous carbo-loading. This took the form of large spaghetti dinners. The idea is that, by gorging on carbohydrates, you build up a larger reserve of energy in your muscles that you can use on race day. To be honest I really don’t know if it actually works, but I wanted to give myself every possible advantage just in case. Another common piece of advice is to hydrate profusely in the few days before the race. I did this, too—dutifully sipping from my water bottle throughout the day—though I was similarly unsure if it would actually work.

As you can see, I had done my best to optimize my body for the race. I don’t think I had ever tried to be so precise and careful with what I eat and drink, with how much I sleep, with when and how much I exercise. Every variable available to me, I attempted to manipulate. All that remained to be seen was whether it would pay off.


April 23, Race Day. This was also International Book Day, which I took as a good omen.

The night before, I was afraid that I would be too nervous to sleep, so I did everything I could to induce a calm, drowsy state—taking a magnesium supplement, swallowing a pill of melatonin, drinking herbal tea, stretching, meditating. This worked fairly well and, for once in my life, I was able to sleep peacefully before a major event. I woke up at 7 feeling well-rested. Breakfast was a piece of toast and some oatmeal—and, of course, the long-awaited coffee. The chemical worked its magic, and optimism surged up within my brain.

I arrived at the starting line at 8:30 and began to warm up. The area was absolutely packed. More than 30,000 runners had signed up, filling every available spot. To get my body ready, I slowly jogged around the area—bouncing my legs, raising my knees, kicking the backs of my thighs—just trying to get my heart beating and my blood flowing. My ankles, hip, and back popped, which felt good but perhaps was not a good sign. One final trip to the bathroom, and I was ready.

The race was divided into 10 “waves,” and I was number 9, scheduled to start running at 9:30. I entered the gate early, hoping to be near the front of the wave—since, if you’re not, you can get stuck in a huge crowd and be unable to go at your own pace. Directly in front of me was one of the official pacers. These are runners who carry large balloons which have a finishing time printed in bold letters. I suppose they must be pretty experienced runners, since they can reliably finish the race in that time. This pacer had the balloon for 4:15, coincidentally just the time I was hoping to achieve. I positioned myself right behind him at the starting line.

An example of a pacer balloon, this one for a fast half-marathon time. In the background you can see the sign for the first wave (“cajón”).

The pistol shot rang out, and we were off. I was in a group of about five runners—two French women, two Spaniards, and me—who were trying to stay close to the pacer. I knew that a 4:15 pace amounted to slightly faster than 10 minutes per mile, which for me is just above my relaxed pace. Even so, the speed felt surprisingly difficult as we wove through the crowd on the Paseo de la Castellana. Finally, after about 10 minutes, my breathing calmed down and I was able to follow without conscious strain.
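
(To spell out the math: the marathon distance is 26.2 miles, and 4 hours 15 minutes is 255 minutes, so the pacer’s target worked out to 255 ÷ 26.2 ≈ 9.7 minutes per mile, or about 9:44, comfortably under the 10-minute mark.)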

The course took us north, past the famous Santiago Bernabéu stadium, alongside the government buildings at Nuevos Ministerios, through the so-called Gates of Europe—twin, slanting office buildings—and then to the four tallest skyscrapers in the city, the “four towers.”

(You can see a video of the entire course here.)

The road is just slightly uphill during this section, but since I had fresh legs I hardly noticed. Finally, we made a U-turn and started the criss-crossing journey back south. This is where we entered the massive residential center of Madrid. We were running up and down streets very much like my own—mid-sized apartment buildings, the streets lined with restaurants and shops. So much of Madrid looks so similar that I became disoriented quite quickly, and gave up trying to place myself on the map. But now we were running downhill, and it felt almost effortless.

Though traffic in the city had been severely restricted, pedestrian life went on. The terraces were full of families having breakfast. Parents were shopping with their children. Senior citizens were chatting on benches. It all looked much more pleasant than what I was doing.

But we did have an audience. All along the route, there were people cheering us on. I really have to give anyone credit who spends their Sunday morning clapping for runners. After all, there really is no sport less interesting to watch than long-distance running. Nothing dramatic happens, and you never see the same face twice—just runner after runner, going by at moderate speed. And consider how long the event lasts: from the first wave to the last person over the finish line, around six hours elapse. Even watching long-distance running is a feat of endurance. (Reading about it isn’t much better—sorry!)

At various points along the route there were rock groups playing—though, by the end of my race, most of them had understandably run out of material.

I must say, though, that while I considered everyone cheering us on to be slightly crazy, I did very much appreciate the applause and encouragement. It really did help lift my spirits or, at least, distract me in moments of pain. Little kids stuck out their hands for high-fives. Adults held up signs with Super Mario mushrooms on them, to “power up.” Two men were even offering free pizza, but it didn’t seem like such a good idea to have a slice. Meanwhile, the professional pacer did his best to rile up the crowd, shouting and waving his arms. Having so much social support is partly why, I think, people run faster during races than they do on their own. You feel as if everyone wants you to succeed.

Finally, we took a turn down Calle de San Bernardo, and I realized where we were. The next turn led us directly onto Gran Vía, Madrid’s most famous avenue, and then towards the center of the city, the Puerta del Sol. The size of the crowd surged, and we were surrounded by cheering onlookers. This was the first year that the race went through the center like this, and it felt great to be running through such an iconic part of the city.

As we approached Sol, a series of large black-and-white signs began to warn us that the marathon and half-marathon courses were about to part ways—the half-marathon to the left, with only 2 kilometers to the finish line, and the marathon to the right, with 23 kilometers to go.

In the background you can see the signs signalling the split between the half-marathon and marathon routes. I am looking rather grim on the left.

Just as we entered Sol, I noticed a group of paramedics and a police officer gathered over somebody lying on the ground, wrapped in a metallic blanket. He must have been a half-marathoner who had pushed too hard during the final stretch of the race. As I passed, I saw his feet moving, so I knew that he was basically alright. But it is true that long-distance running can be dangerously stressful for the body. On March 12 of this year, a young man died after completing a half-marathon in Elche. And four people were hospitalized during the March 26 half-marathon in Madrid. Again, long-distance running may not be the healthiest sport, at least in the short term.

I tried to put this specter of the perils of running out of my mind as I entered Sol. There were large crowds on either side of us, and a band playing on a stage right in the center. I waved at the half-marathoners, and then turned with some trepidation to the rest of the race. The Madrid half-marathon is actually quite a nice course—reasonably flat and mostly downhill. But the full marathon is considerably more difficult, with long uphill stretches and sections with little shade. Besides, Madrid in April is normally quite warm, and by now (around 11:30) we could start to feel the heat. I was sure that this second half would be far worse than the first.

By this point I had spilled some Powerade on my glasses and couldn’t use them. The Madrid Cathedral is behind me.

At least it began with some nice scenery. We ran down the Calle Mayor, passing by the Plaza Mayor, the Mercado de San Miguel, and the Casa de la Villa (a seventeenth-century building that long served as City Hall). Then we emerged onto the Calle de Bailén and ran right by the city’s cathedral and the Royal Palace. Finally we went up the Calle Princesa, through the beautiful Parque del Oeste, and across the Manzanares River.

All this time, I was still on the tail of my pacer, though several times I fell behind and had to struggle to catch up again. I could feel myself getting tired, and I tried to combat it. My coworker Víctor (the serious runner who won the Aranjuez race) had advised me to drink water at every opportunity. The opportunities were more frequent than I expected, as there were water stations every 2 or 3 kilometers along the course. I did my best to keep cool and stay hydrated: drinking a glass of Powerade (for the sugar), taking several gulps of water, and pouring the rest over my head. I also came equipped with energy gels, which are essentially just tubes full of glucose and caffeine that are supposed to keep your energy levels up during a long race.

Even with all of this chemical help, however, I felt myself flagging. After stopping at one water station, I looked up and saw that my pacer was about 50 meters in front of me. Try as I might, I couldn’t catch up. Soon the balloon was barely visible down the track and I had to resign myself to running a slower marathon. Admittedly, this was a kind of relief, since it allowed me to go at a more comfortable speed. But as I neared kilometer 26, I found that I had to exert effort even to keep going at this more modest pace.

Meanwhile, another one of my coworkers, Pedro, passed me. (I guess I have a lot of athletic coworkers.) He said hello and disappeared down the track, on his way to complete an impressive 3:55 marathon.

I look like I’m suffering, but I still felt okay… That’s the Royal Palace behind me.

Any hope of a strong finish fell apart once I reached the Casa de Campo. Here the path traveled uphill for kilometer after kilometer, often on dirt roads with little shade. I could not stop thinking that I had entered the Valley of Death. Many runners around me were experiencing the same difficulty, slowing down considerably or just walking—a phenomenon known as “hitting the wall.” This is when you use up all of your body’s glycogen (the form in which your body stores carbohydrates for later use), which typically happens at around mile 20 (or kilometer 30).
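
(The mile-20 figure comes from a rough, commonly cited back-of-the-envelope calculation: a trained runner can store something like 2,000 calories’ worth of glycogen, and running burns on the order of 100 calories per mile, so the tank runs dry after about 2,000 ÷ 100 = 20 miles. The exact numbers vary from runner to runner, which is why the wall strikes some people earlier and others not at all.)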

According to Fitzgerald, this happens to about 70% of runners who attempt a marathon, so I saw it coming. But I did not expect it to feel the way it did. I thought that “hitting the wall” would mean feeling dizzy, lightheaded, and totally out of energy. Instead, I felt reasonably alert and clear-headed, but my legs cramped up and began to feel extremely heavy. It felt as if my muscles had done as much as they could and were totally worn out. Then it got worse: as I ran up a cruelly steep hill, painful spasms shot up both my legs, which was concerning to say the least. I began to worry—perhaps irrationally—that I was nearing my legs’ breaking point, and that I had to back off to avoid an injury.

Under these circumstances, I slowed down more and more, every kilometer more sluggish than the last. I was not alone in my predicament: by around mile 21 (kilometer 35) almost everyone around me was walking, at least part of the time. I considered walking for a bit to regain some strength, but I was afraid that if I stopped running I wouldn’t be able to start again. So instead, I just ran at a snail’s pace, so slowly it was generous to even call it running. I was barely going faster than those walking around me. This, naturally, had the effect of stretching out the final miles to an agonizing extent. Rather than passing a kilometer marker every six minutes, it was taking me eight. 

Though my stomach felt absolutely full, I dutifully stopped for my last drink of water, pouring half of it over my head. My eyes cleared in time for me to see the pacer for 4:30 passing by. They were not going very fast, but there was no way I could hope to catch them in my state. Not only had I missed my goal of 4:15, then, but I had failed to meet even my backup goal of 4:30. Yet I was much too exhausted to be upset. Indeed, I was surprised at how cheerful I felt. From what I’ve heard, many runners feel quite depressed at this point in the race. But if my body was breaking down, I still had enough mental strength to keep myself chugging along.

Near the end. I think you can tell I was struggling. (By then I had cleaned off my glasses with water.)

I was nearing the end now. The course took us through the Delicias neighborhood, down the Calle de Méndez Álvaro, and finally past Atocha station and up the Paseo del Prado. There were more and more spectators at every turn, swelling into a real crowd as we neared the end. Once again, I was very grateful to everyone cheering, though I admit I felt pretty ridiculous as I inched along, dragging my legs behind me, while people acted as if I were some kind of athlete. I certainly did not feel like an athlete at that moment.

The run up the Paseo del Prado felt endless, the upward slope slowing me to a barely perceptible forward movement. But eventually, inevitably, the end came into sight. I passed the Prado Museum, I passed the Fountain of Neptune, and finally the enormous Palace of Cibeles came into view. Just beyond, I could see the inflatable gate that marked kilometer 42, the finish line now directly ahead. I felt a surge of emotion when I finally saw it, and surprised myself by almost crying. But, realistically, I think I was too dehydrated for tears. In any case, the emotion passed almost instantly and I felt mainly relief as I attempted—unsuccessfully—to speed up for a final sprint.

Attempting jubilation.

I stumbled over the finish line. I was finished. Volunteers handed me a medal and a bag of food, and ushered me out of the runners’ zone. Though I should have been enjoying my “triumph” (or at least savoring being able to stop running), I was quite worried at this point about whether I would be able to walk to the train and make it home without incident. I really didn’t know if my legs could take much more. But this fear was unfounded: I made it back to my apartment without trouble, peeled off my sweaty clothes, and assessed the damage.

Everything in my pockets—my keys, my headphones, my phone—was sticky from the energy gel packs, which had spilled after I opened them. One of my toenails was bruised and discolored (even though I made sure to cut them beforehand!). On my other foot, I had a large and painful blister. And I had small cuts from chafing all over my body. Getting into the shower stung like crazy.

The rest of the day, I just sat on the couch, eating pizza and ice cream. I did not feel triumphant or euphoric, but I was done. My final time: 4:41. Of the 9,101 runners who competed, I was number 6,139—just inside the bottom third. But I was done!


I have already written far too much about this race. If you are still with me, you deserve a medal yourself. Now, I only want to address a few more points.

First, could I have avoided the dreaded wall? There were a few things I could have done differently on race day. I could, for instance, have gone at a slower pace for the first half, and thus conserved more energy for the second (though it is far from certain I would have run the entire thing faster that way). It’s also possible that I didn’t take in enough fuel. Many runners gulp down an energy gel every 30 minutes, with the express aim of not depleting their glycogen reserves. However, I am a bit skeptical that this actually works. Doesn’t it take some time for the body to process and absorb calories? Perhaps the most obvious answer is that I could have trained more. But that will almost always be true.

Realistically, even with all the tweaking in the world, I don’t think I could have done much better than I did. I am not built for speed.

Another question I am sometimes asked is: What do you think about during such a long run? The honest answer is: not much. For such a simple activity, running takes a lot of attention. Indeed, I find that it is quite meditative, and I don’t often catch my mind wandering far afield. Besides, while running, I hardly have the energy for anything more complex than a passing observation. This, I think, is actually one of the primary benefits of running. It clears the mind.

Running a marathon seems to have a reputation in our culture as something admirable and noteworthy. It is the mark of somebody determined and goal-oriented. Now that I’ve run one, I can ask: is that reputation justified? Certainly, training for a marathon requires consistency and discipline. But the same can be said of many other activities—learning an instrument, writing a novel, painting a portrait, or simply having a job and raising a family. And considering the huge time expenditure and the questionable health benefits (training for shorter races is probably just as good for you), it is difficult to argue that it is an especially good use of one’s time. Thus, I am not convinced it deserves its reputation as an admirable challenge.

Last, I must ask myself: Will I ever do this again? I cannot say the experience was overwhelmingly positive. It was time-consuming, often painful, and I achieved unremarkable results. And, again, I am not even sure that I am now any healthier than I was before I started. Just a little skinnier.

Even so, I may be betraying the mentality of an addict when I say that I will almost certainly run another marathon. Though running is the simplest sport imaginable, as you can hopefully see, doing it to the absolute best of your potential requires a great deal of thought, effort, and focus. It is a kind of massive experiment you are conducting with your own body. I guess all this is to say that I am hooked, and eager to see if I can improve. But in the back of my mind, I know that running is, ultimately, just a form of exercise—a component of the life I want, but not its main focus. After all, I have a blog to write.

(Photo credits: All of the photos used in this post, except that of me and Carlos, were taken by professionals at the event and purchased at Sportograf.com.)

On Pandemic Fatigue

If there is a common thread to this pandemic, it is loss. Many have lost jobs, businesses, or homes. Others have lost members of their family, and still others have lost their lives. Even the luckiest among us have lost something, if only time. But this essay seeks to focus on another kind of loss: the loss of patience. Specifically, I want to put into words for myself this strange and unsettling feeling that, of late, comes over me at least once a day, the feeling we call pandemic fatigue. 


The first time the coronavirus entered my consciousness as anything more than a blip was around Chinese New Year, in late January. I was going to see the celebratory parade in Usera, Madrid’s Chinese barrio, and I asked a friend of mine if he wanted to come. “Doesn’t seem like a good idea,” he said. “Lots of people coming from Wuhan.” Wuhan? I did not understand. “You know, that new coronavirus.”

I was stunned that someone in my life—someone I considered sensible—was willing to change his behavior because of this virus on the news. Until then, I had written off the periodic media frenzies about foreign diseases. Every other year there seemed to be some new virus ready to destroy the world—avian flu, swine flu, Zika, SARS, Ebola—and every time it amounted to very little, at least in my life. Besides, I figured the media had such a strong financial incentive to frighten people that they would play up any potential danger, however remote.

So I went to the Chinese New Year Parade, and I didn’t get sick (though my camera was stolen), and I pushed coronavirus back to the peripheries of my awareness. It did not stay there for long. The news coming out of China seemed increasingly dire. The city of Wuhan was shut down completely. A whistleblower doctor died. Travel from China was banned. And still, stories of coronavirus infections started popping up all over the place.

I went on vacation in late February with my brother—to Poland—and, for the most part, life was still completely normal. But our flight back to Madrid took us through Milan, just for a short layover. By that time Italy was in bad shape, and parts of the country were already on lockdown. Milan was one of the worst hit areas. Even so, we did not even consider changing our flight. I was still quite sure that this virus business would blow over. All this was just our instinctual fear of the unknown. 

By the first of March, most people were still in denial. By that I mean that we were thinking of this virus like some other kind of natural disaster, a flood or a fire—one that is localized in space and time. Maybe Italy was bad, and maybe China was bad, but we didn’t live in Italy or China. The virus would go away and we would move on. Yet two weeks after I got back to Madrid, the schools were closed. Two days later, restaurants had to shut down; and the next day we were shut up in our houses. It was the lockdown.

I really believed that it would just be for two weeks. A month, tops. I encouraged my mom to buy tickets for a trip to Ireland in June. No way this would still be going on in June, I thought. No chance. But now that I had so much extra time, I decided to read a little about pandemics. I read books by experts in public health and infectious disease, by historians and novelists, and by investigative journalists. And slowly, the truth dawned on me—the hard truth that this emergency was going to last a long time.


This was the first time that I was living through a world-historical crisis as an adult. The closest thing I could remember were the attacks of September the 11th, but I was just a kid then, and I did not really understand what was going on. This time, I was painfully aware, and yet equally powerless to do anything about it.

I had heard stories of the solidarity that arises during times of crisis, but this was the first time I experienced it. Admittedly, it was difficult to show solidarity in any normal way, since we could not be physically close to one another. This was one of the most depressing aspects of the situation. But people figured out ways to lift each other’s spirits. There were the balcony concerts, the children’s drawings taped to windows, and the nightly rounds of applause for the healthcare workers.

The other aspect that helped us get through the lockdown was fear. During those months we were still coming to grips with this new infection. How deadly was it, exactly? How did it spread? Could it linger in the air? Who was most vulnerable? What were all the symptoms? The uncertainty made the virus all the more frightening. Even so, it was clear that the virus was dangerous: the overwhelmed emergency rooms, the bodies stored in hockey rinks, the improvised field hospitals. With such a predator lurking in the streets, it was less tempting to go outside.

The twin supports of fear and solidarity made the lockdown bearable. That, and a certain amount of creativity.


In Spain we were only allowed out to go shopping for food. We could not take walks or exercise outside. This really limited the options when it came to maintaining mental health—especially in my case, since I love a long walk or a good run.

But I adapted. I created a workout routine I could do in my tiny room, and made sure to do it every day. To get some sun, I snuck out onto my roommate’s balcony. Missing the local parks, I bought a bunch of plants. I made YouTube videos for my students learning English at home. Since we could not go to restaurants, my brother and I started cooking ever-more elaborate dishes—braised oxtail stew, Brazilian feijoada, French cassoulet, and even homemade kebab.

Still, the monotony could be numbing, the social isolation irritating. I can hardly imagine what it would have been like for someone living alone.

Eventually, after what seemed to be half an eternity, we were let out to exercise. In mid-May, I took my first run in over two months. I emerged onto the street almost shivering with excitement.

And yet the run was somehow less enjoyable than I thought it would be. Partly this was due to circumstances. For whatever reason, the Spanish government decided to let us out only at certain prescribed times; so when I set out the streets were absolutely packed. But I was more disappointed at my own physical shape. Though I had been regularly exercising in my little room, running even a fairly short distance felt difficult, heavy, painful. Breathing was so uncomfortable that I even wondered if I had gotten the virus. And, of course, I was much slower than before.


By the beginning of summer, some flicker of light began to appear at the end of the tunnel. We were coming down from the virus’s curve, and hopefully hitting a flat bottom. The state of alarm lifted on June 21 and we were free to do whatever we wanted. Except for the masks, life began to look pretty normal again.

But even at this relatively calm time, the virus could not be forgotten. This was brought home to me when I tried to get my papers in order to visit New York for the summer. I do this every year, and I was even more eager than usual to go home, since it is always nice to take refuge at home in times of trouble. Even after getting the requisite documents together, however, I was faced with uncertainty.

Here was my predicament: though I could legally travel there and back with my documents, there was no guarantee that the airlines would know that. Visa regulations are enforced very imperfectly by airlines, who tend to err on the side of caution since they face penalties if they transport someone who cannot legally enter a country. Aside from that, flights could simply get cancelled from lack of demand, or the rules could change while I was in the United States, leaving me unable to return to my job in Spain. I hoped that someone in authority could give me some clarity. But the Spanish consulate could only tell me that the situation was evolving, and advised me not to risk it. So, in the end, I had to forego a visit to my homeland.

I focus on this situation because it captures an essential part of pandemic fatigue: the sense of total uncertainty about the future. It is the feeling of being in limbo, of your life being totally up in the air, of being unable to plan even in the short-term. The most one could do was to wait, while the normal pleasures of life passed silently by.


During the summer, I slowly tried to regain the running facility I had lost. It was far more difficult than I anticipated. My body was slow and sluggish, and even rather delicate. On one run I pulled a muscle in my core and had to spend several days recuperating. Nearly every run was in some way a disappointment. But I did discover a new place to run: a park near my apartment, affectionately called siete tetas (seven boobs), a name the park owes to its seven prominent hills that stand above the surrounding city. Running there obviously meant a lot of running uphill, and I figured that this challenge might be enough to get me back into shape.

Practicing this way, I quickly discovered the key to uphill running: look down. It is simply too painful to focus on how much of the hill remains. When you look forward, you become hyper-aware of your labored breathing, and the urge to give up becomes irresistible. But if you look down, focus on your feet, you notice that each individual step is not that much harder than running on level ground, and so you can continue. And it quickly struck me that the pandemic requires just this same mentality: look down, focus on each step, and forget about how much of the hill is left to climb.

Perhaps a Buddhist would describe this state of mind as enlightened, since it is just this absorption in the present moment that meditation tries to cultivate. And, indeed, it is a powerful strategy when times are tough. But few runners, I suspect, would enjoy running the whole time with their head down. Part of the pleasure of a good run is the scenery—at least for me. Likewise, a big part of the motivation of running comes from setting goals and trying to accomplish them: an attitude inherently oriented towards the future. The pandemic, just like this hill, made all this impossible, and it was all we could do to just keep our heads down and keep pushing forward.


Time became a problem during the pandemic—empty time.

At first, I admit, it was exciting to have so much time to fill. Indeed, mixed in with all the alarm and frustration of the early days of the lockdown, there was a distinct note of relief—the opportunity to slow down, to maybe work on some hobbies, or simply to relax and introspect.

But very soon people began to hit a wall, or at least I did. Humans are simply not meant to spend so much time inactive, cut off, and without a fixed schedule. We need a bit of structure and variety, or else time turns into a mushy purée, thin and bland. With no reason to get up early or late, to do something in the morning or the evening, today or tomorrow, this week or next, it somehow became all the more difficult to focus on anything productive. Focus, after all, is as much an act of exclusion—expelling extraneous distractions—as it is of inclusion; and there was nothing to exclude (or, perhaps, there was everything at once?).

One consequence of this lack of any fixed temporal landmarks was an increase in my consumption of alcohol. Simply put, there was not much else to do, and none of the usual reasons not to drink. Not that I was deliberately drowning my sorrows, you see (at least not most of the time); rather, my background consumption of alcohol grew steadily, until I was drinking almost every day. This only exacerbated the physical toll of prolonged inactivity, contributing to the general sense of malaise and torpor that became my natural element. I would wake up groggy and late, and hang around the house most of the day, even when we were finally allowed outside.

The cumulative effect of all this has been pandemic fatigue: a listlessness mixed with an undercurrent of anxiety. Without a routine, unable to see my family, I passed the time the best I could—taking a few trips, teaching a few online classes, and trying to carry on with my usual hobbies. It was not an altogether unpleasant way to live, I suppose.

Yet the feeling was rather like sunbathing on an active volcano. The whole world had a delicate, fragile quality, as if the situation might suddenly and drastically change once again. This made it difficult to fully relax or to fully commit to future plans. Even the approach of the new school year seemed distant and unreal. Would the schools really re-open? And if so, how long would they remain so?


The reason I have become so aware of pandemic fatigue is that, for the moment, it is partially lifting. School has resumed with in-person classes, and I am once again in front of a classroom, writing on a whiteboard, trying to memorize students’ names (much more difficult with the masks!). In short, I not only have a routine once more, but also a social purpose. It feels surprisingly good. Aristotle was correct when he noted that we are social animals.

Now, after all this time, I have to be presentable in front of other people. This means no more gym shorts and sweatpants. The pandemic beard—quite impressively long, if I may say so—was shaved off, and my long hair trimmed. I even decided to do a dry month, Sober October, in order to reduce my drinking to pre-pandemic levels.

Best of all, my running ability has started to reach pre-lockdown levels once again. All that running uphill paid off, and I can finally run without my body dragging behind my intentions. Better still, I can run while looking forward instead of with my head down, staring at my feet.

But this pandemic is not over yet, and neither is the fatigue. We are in the midst of the long-predicted second wave of infections. The Spanish government is scrambling, amid bitter partisan bickering, to put together a coherent response to this new challenge, without much success. The main consequence has been a slew of new rules, changing unpredictably from week to week, most of them more annoying than effective. Even as I write this, I am not sure what I will be allowed to do by next week.

The worst part of the current situation is that we will have to endure the next round of restrictions and rules without the psychological supports of the early days. The buoyant solidarity has vanished into the usual humdrum concerns and routine squabbles of life. Lately, most of us (especially the politicians) are more concerned with finger-pointing than with lending a helping hand.

Also, the fear of the virus has lessened considerably. While this is perhaps partly justified, since we are more familiar with its symptoms and have better treatments, it is mostly a result of familiarity. Coronavirus is joining the background threats of our environment, like car crashes or lung cancer—one of the many dangers that we mostly ignore.

After the solidarity and the fear have mostly gone, the only thing left is the feeling of fatigue. In the end, this fatigue is a failure to live with coronavirus, to really face up to it. Most of us badly want to forget about this emergency and move on, and yet we are constantly reminded of its nagging presence. Without the support of the community or even the fear of a new threat, the virus becomes merely a burden, an extra chore, an added whisper of anxiety. Somehow, a problem affecting nearly everyone on the globe has become a dull ache that we all must deal with privately and alone.

I am afraid that there is still a lot of uphill running in our future. The only thing to do is to put our heads down, and push on. 

The Madrid Half-Marathon

I have been a bad athlete for as long as I can remember. Apart from a brief and embarrassing stint on a soccer team in elementary school (all I can recall is spending an entire game crying my eyes out), I have avoided team sports all my life. And they have avoided me. In gym class I was always one of the last to be picked for a team. For all of middle school and high school I was tall and overweight, and consequently had all the gawkiness and sluggishness of both conditions. True, I did spend a few years taking taekwondo classes in high school, and I was not so bad at it. But my unpromising career as a martial artist came to an abrupt end when all the stretching and kicking made it necessary to go to physical therapy for my aching, cracking knees.

Of all of the sports that I have failed at, the most conspicuous is running. Every year I dreaded the day in gym class when we would be made to run a mile. I always began with the hope that, this time, I would be able to run the whole thing without stopping. After all, nearly everyone else could. But inevitably, less than halfway through, I would run out of breath and have to walk; and I spent the rest of the time alternating between a wheezing run and a panting walk. Not once did I manage to run a mile in less than ten minutes. Just as bad was the PACER test, when we had to run from one end of the gym to the other within progressively shorter intervals, signalled by an ominous beep. The real studs were able to get to level nine, while I gave up far before that—defeated by the high-pitched tone.

This long and undistinguished experience taught me that I would never be a runner. My knee problems only added to this belief. So, after high school, I never tried. I was pragmatically and philosophically committed to a life of inactivity, with the sole exception of walking (a true intellectual’s sport). But then something happened to break my conviction that I could not run.

Last year, I got into the habit of leaving my apartment at the exact minute needed to catch the bus. Sometimes I left a little late, however, and this put me in a dilemma: walk and miss the bus (which would mean arriving late to work), or run and catch it. My fear of being fired overcame my combined fears of looking foolish, getting my clothes sweaty, and dying of suffocation. So I ran. It started with only half a block, just a short sprint to catch the light. Then it became a whole block, and eventually two blocks—sprinting for the light, stopping, sprinting for the next light, stopping again—until I was running almost the whole way to the bus. And the strangest part is that I did not hate it.

Still, nothing changed. I did not participate in my school’s “Race Against Hunger,” a charity race that we do every year. Instead, I sat by the sidelines feeling bored and useless. I did not even own a pair of sneakers. Nevertheless, circumstances were quietly conspiring to make running a reality. Aside from my bus sprints, living in Europe had left a mark. On all my travels I had tried to walk as much as possible, mostly to avoid paying for taxis and buses and trains; and this had made me a resolute trekker, capable of walking miles under the hottest suns.

All of this unintended athletic experience culminated in a growing curiosity: Could I, finally, after a decade of not running, run a mile without stopping? Sure, I was no athlete; but I was skinnier and in better shape than I was in high school. That adolescent experience had left within me the iron conviction that a mile was an impossibly long distance for me, and that my body was simply unable to do it. Yet in the spirit of science I wanted to test this conviction.

So, one cold February day, I went to a sportswear store with my brother. I could not have felt any more out of place as I looked at sweatpants, recovery gels, and headbands than if I had wandered into an Aztec ritual sacrifice. This was not my world. But I managed to buy myself tights, sneakers, and an armband for my phone, feeling absolutely ridiculous all the while.

That same day I carried my purchases home and prepared for my trial. The tights were, well, tight; the armband was awkward to use. When I walked out into the street, I felt acutely embarrassed, as if everyone was staring at me. I had not worn athletic wear since… actually, I don’t know. What was I doing? Long before I began to run, my body became flushed with adrenaline. I was certain that I was about to make a fool of myself.

The walk to the park, where I would begin my run, seemed endless. But finally I arrived. This was the fateful moment. I opened the app, Runkeeper, and started the tracking function. Then I fumbled getting the phone into the armband holder, and fumbled again putting the armband on my arm. Now the run began—slowly. The first steps felt strange. Retiro Park seemed to bounce up and down. I remember finding it odd that I could enjoy the beauty of the trees while running; I had assumed that I would not be able to think about or appreciate anything.

Sure enough, the tightness in my lungs soon came, that horrible feeling of suffocating. But it was never powerful enough to make me want to stop. I kept going until I got to the artificial lake, and then I turned left and then left again, to complete the circuit. The ground was mostly flat but there was a slight hill near the end, and I thought my chest would explode as I crawled to the top of it. Finally, and unbelievably, I made it back to where I had started. And I had run the entire time. I checked the app—1.12 miles, at a pace of 9:39 per mile. For the first time in my life, at the age of 27, I had run a whole mile.


The months that followed were full of constant surprise. The biggest was that I actually enjoyed running. I did not necessarily enjoy the physical sensation of running; the mythical runner’s high eluded me, and I felt mostly pain and exhaustion. But I did enjoy improving; and I improved with every run—running longer distances at faster paces. Unlike writing or playing music, running can be measured objectively, in simple, cold figures. There can be no dispute over which runner is better or worse. This makes progress very easy to see and, consequently, very satisfying.

I chatted about it incessantly, even getting mildly obsessed with the subject. It felt genuinely surreal to be spending so much time thinking about an athletic activity: this was not me. More important, it felt liberating to see myself as someone who could actually do something physical. My carefully constructed self-image as a delicate intellectual had cracked and crumbled. I felt as if a new continent of experience was now available for exploration.

Eventually, my coworker, Holden, suggested that I do the half-marathon. He had signed up for the marathon and had been preparing for months. At first I dismissed the idea as absurd. The longest I had run at that point was six miles, at a very sluggish pace, and it had nearly killed me. Yet the idea had been planted in my head. I thought of the feeling of triumph, of surpassing even my most ambitious running goals. And, of course, I imagined how much weight I would lose in the process of training (it wasn’t much). So, I paid my 40€ (somewhat indignantly) and signed myself up. Now the serious training would begin.

This consisted of one long run a week, in which I tried to increase my maximum distance by one mile, and several shorter runs wherein I worked on my speed. This regime got me to 13 miles two weeks before the day of the half-marathon, April 27 (it had been moved up a day because of the elections on April 28). On my long runs, I would end up going so slowly that I struggled to pass old ladies with canes. But at least I knew that I could go the distance.

Finally there was only one week until race day. I was nervous. Somehow, I was certain that I was going to do badly and disappoint myself. It did not matter what time I got, of course, but I had decided that I was to run the race in less than two hours—not an easy thing for a beginning runner. I followed all the typical advice, taking a break in the days before the race and stuffing myself with platefuls of pasta. By the time Saturday came around I was well-rested, well-fed, and as prepared as I could have been. Would it be enough?

Two days before the race I picked up my bib (the little paper with your number on it, plus a chip so they can track your movement). Annoyingly, the pick-up location was all the way out in Feria de Madrid, a large complex of expo centers on the outskirts of the city. It took some time just to get there; and then it took more time to walk through the mammoth buildings to the proper hall. There, a series of volunteers in booths gave me a bib, a t-shirt, and a drawstring bag. The rest of the space was full of other booths offering running-related products and services—energy gels, massages, protein powders. Probably many had free samples; but it was late and I wanted to go home.

The next night, I attached the bib to my sleeveless running shirt with safety pins. I was officially ready.


Race day.

I woke up, ate toast and peanuts, drank water and coffee, and headed out the door. I had been told that it’s best to warm up a bit before the race, so I jogged about ten minutes to the train station. When I stepped off the train, I was surrounded by thousands of men and women in colorful sports clothes. I had not realized it was such a massive undertaking. Stalls were set up for clothing drop-off; hundreds of port-a-potties lined the streets (all without toilet paper); rock music blared from enormous speakers. The closer I got to the running corrals, the more I was awed at the sheer size of the event. Some 35,000 people were running that day—the 10k, the half-marathon, and the full marathon. William the Conqueror had conquered England with fewer.

I waited, warmed up again, and waited some more. Finally it was time to get into my corral. It was like being in a nightclub—a packed mass of bodies. How could I run through this? Rock music blared. The announcer counted down. Athletic-looking people were dancing (motivationally?) on elevated platforms in the middle of the track. They had spent a lot of money on this thing.

Finally the signal was given. I tensed for the exertion; but it was a bit anticlimactic, since the whole mass of people had to walk to the starting line before actually beginning to run. There were people holding big blue balloons with times on them; these were the professional pacers, who would run the race in exactly the time indicated on their balloons. I struggled to find the 2-hour balloon: it was several hundred meters ahead, and had started before me. Finally I crossed the starting line and found myself jogging in a loose formation.

“Hey man,” I heard a voice say. I turned to see David, a friend I had made in my master’s program. He had helped me work on my speed in preparation for the race, as I struggled to keep up with him on our weekly runs around Retiro Park. (This is something I discovered during training: running with better runners makes it easier to push your limits.) Soon it was apparent that he was still faster than me, as he pulled away through the crowd of runners. Besides David, I knew four other people running that day, but I did not see a single familiar face during the whole race, even though our finishing times were mere minutes apart.

Peter Sagal said that anyone could run the first mile of a marathon, since it gives you the sensation of running with a mob. Unfortunately I did not feel the same way. Most people were fairly quiet, just focused on the long trail ahead; nobody burst forward in a mad dash. Our route took us straight north from the starting line, up towards the four skyscrapers near Chamartín station. The organizers had planned the route well, since these first 5 kilometers were the only stretch that was consistently uphill. After we turned the corner to go back south, it was smooth sailing.

The route

Without the reference of the balloon, I did not know if I was going fast enough. I tried to keep a constant pace, not pushing too hard but not going easy. The presence of so many other people was surprisingly motivating. I felt as if I were being urged ahead by a social force, and all I had to do was to follow the wave. For the most part there were not many onlookers—just a few scattered people cheering us on. I appreciated it. There are few sports more boring to watch than long-distance running.

The pacers in action. Photo by Rebeca López.

Fifty minutes in we passed our first water station, and I felt like a real professional as I drank my bottle on the move. I also took this opportunity to have some of the energy gel that Holden had given me. This is a cocktail of vitamins, sugar, and caffeine that tastes horrible but has a satisfying effect. Suddenly I felt optimistic—even chipper. The exhaustion lifted and I felt my stride grow longer. Was this the elusive runner’s high? Probably it was just a caffeine rush, but it felt great nonetheless. As I reached a downhill stretch in the neighborhood of Salamanca, I began passing some runners ahead of me—which is unusual for me. Stranger still, I began talking to myself in almost ecstatically encouraging tones: praising myself and egging myself on. Caffeine is an amazing drug.

As is often the case in Madrid, it was a perfect day to run: a clear blue sky, no wind, no humidity, and not too hot. I am not sure that I ever saw so much of Madrid in a single day, and the city looked beautiful in the sunlight. This is one of the great benefits of running: it makes you feel a part of the community. I had already experienced this during my practice runs in Retiro Park and Madrid Río. Because you are outside, covering plenty of ground, surrounded by others, you feel that you are really getting to know a place and to belong in it. That day, I felt like I belonged in Madrid.

Just as we reached the end of the hill, we passed through a small tunnel. There were people cheering on the road above. But the real noise came from the runners, who shouted and whooped as soon as they passed underground, making the space reverberate with a kind of barbaric din—a war cry for amateur athletes. I added my own feeble contribution to the chorus of adrenaline, and felt, for a moment, that I was part of something bigger than myself, just one pulsating cell of an enormous beast. This feeling, I thought, is why people run these ridiculous races.

This sensation soon passed, as did the euphoric effect of the caffeine, and the usual pain and strain came back. Luckily, I soon reached another water station, and then swallowed the rest of my energy gel, which gave me another boost. But I could tell that my reserves were running low.

This particular marathon was a “rock ‘n’ roll” race, which meant that there were stages set up periodically along the course where local rock bands were playing. I must admit that I did not find the music particularly animating, partially because I was able to hear so little of it as I ran by. The cheering of the crowd was somewhat more uplifting, especially when I noticed my friend Monica calling my name. But by far the most motivating factor was the other runners, sweeping me up into a constant forward motion.

Partially because the race was a “rock ‘n’ roll” marathon, I decided to run it without headphones. This was the first time I had ever done a long run without my trusty audiobooks keeping me company, and I was afraid that I would get bored. But it turned out to be a good choice. Free from the distraction, I was able to focus my energy on keeping myself going at a steady pace. Indeed, the extended focus on my breath and my moving limbs made the experience at times rather meditative; I was completely absorbed in the experience of the race. Another advantage to not using headphones is that I did not have my running app telling me how much distance I had covered. This was a very strategic sort of ignorance, since it allowed me to keep pushing without fear of burning out too early.

Photo by Rebeca López

I started to enter more familiar neighborhoods, and I knew that I was in the final stretch. The more I ran, the more impressed I became at the scale of the marathon: they had to shut down half the city for us. Now I knew why I had paid 40€ to run. Still, city life tried to go on—in particular the life of the elderly, who refused to stop for any sweaty army. More times than could possibly be a coincidence I had to stop or swerve to avoid an octogenarian slowly crossing the race course, cane or walker in hand. They were either very brave or quite blind.

Soon I passed several men and women shouting directions at us: those running the full marathon had to turn left, while we half-marathoners continued straight. I knew from the map that this meant that we were in the final stretch. I did my best to push myself to go faster, but my whole body was achy and unresponsive. So I compromised by trying not to slow down. A small woman with a very loud voice started yelling what were meant to be encouraging slogans at us, most of which were about the beer waiting for us at the end. This failed to motivate me, I am afraid, since the thought of drinking beer after getting so dehydrated filled me with disgust.

It was around this time that the thought finally crossed my mind that I would very much like to stop. I had been running for almost two hours by then, and I was tired and even bored, and the finish line was failing to materialize. Luckily the course started taking us downhill, past Retiro Park on the way towards Atocha. At this point I spotted Rebe, to whom I had delegated the task of taking photos of the race for this blog. She was busy at work—so busy, in fact, that she did not notice me until I was just about to kiss her.

Photo by Rebeca López. I am on the far left.

Now it was truly the final stretch. We reached the bottom of the hill, came into the Plaza del Emperador Carlos V, and then started up the Paseo del Prado. The finish line finally came into view. I was afraid to look at it, since I thought it would be too discouraging to see how slowly it came nearer; so I looked at the ground. The loud-voiced woman started shouting even more loudly and insistently. The crowd around us started to roar. I could hear music.

Before the race, I had imagined that the sight of the finish line would fill me with a final burst of energy, and I would be able to sprint the last few hundred meters. But when I tried to speed up, my body rebelled; it hurt too much; so I contented myself with, once again, keeping an even pace.

When I was within 100 meters I looked up and beheld the goal. Again, I tried sprinting, but it was impossible. So I jogged under the gateway and across the finish line, weakly raising my arms in tired triumph. I was done. Again, I had assumed that I would immediately feel transports of joy and accomplishment, but I was too exhausted to feel or to think anything—except, of course, how exhausted I felt.

After the finish line, volunteers were distributing medals, water bottles, and little bags full of food: a banana, an apple, a chocolate croissant, and a bottle of Powerade—for which I was extremely grateful. I started gulping down the water as I limped out of the race area and into the Plaza de Cibeles. Somehow, Rebe immediately found me, and we sat down nearby while I slowly recovered the ability to speak. My face was marked with salty-white streaks of dried sweat, my clothes were completely soaked, and I walked with an awkward limp. But I felt fantastic, and only felt better as the day progressed. Indeed, the sense of accomplishment, blended with complete bodily relaxation, created one of the most pleasant days I can remember.

My final time was 2:05, which is five minutes above my goal time, but still easily the best I had ever run. I felt completely at peace—with myself and with the world. And I finally discovered the most valuable benefit of running: not losing weight, nor being healthy, nor even the sense of accomplishment, but just feeling good. And I felt good.

Global Classrooms: Part 1

My school year thus far has been dominated by the Global Classrooms program. This is an educational initiative that resulted from a collaboration between the Comunidad de Madrid, the British Council, the United States Embassy, and the Fulbright program, in which students in their third year of secondary school (American 9th grade) participate in a model United Nations conference. The program has grown every year since its inception; this year well over 100 high schools took part.

Each year it is one language assistant’s job to implement the program in their school—and this year it was my job. This meant preparing my students for the first conference, which took place the third week of January. This gave me about twelve weeks of class to work with the students.

I was lucky. For one, the program has been running for many years in my school, so the teachers are very supportive. I had also seen the program in action during the past two years, so I knew what to expect. What is more, instead of the required two hours per week with students, I had three hours to work with them. But I did have one slight disadvantage: I had more students than average. I had four class groups, each with nearly 30 students, so almost 120 in total.

My Global Classrooms team

Despite the extra time, I still felt rushed. The students needed to master quite a number of skills before the conference. First I had to explain what Global Classrooms is, which meant explaining what the United Nations is. Then there is public speaking. Most people—let alone teenagers—do not feel comfortable speaking in front of a large group; and when you add a second language into the mix, you can see why this would be quite a challenge. Another difficulty is teamwork, since the students must work in two-person “delegations,” doing their best to share the responsibilities equally.

Voting in the conference

After the class is divided into delegations and assigned countries, they must write papers and speeches from their country’s perspective. This means doing research. For most of these students, this is the first time that they are asked to diligently search for reliable sources and cite these sources in the proper format. Believe me, it can be a struggle trying to get students to not use Wikipedia (especially since I use it so much). Added to this, the temptation to plagiarize is especially strong in a foreign language, since paraphrasing can be quite difficult for a non-native speaker.

The dais

The two major pieces of writing the students must produce are the Opening Speech and the Position Paper (a short research paper). The former is essentially a shortened version of the latter, since the students need to be able to read their speech in 90 seconds. In both, they must examine a global problem from a domestic and an international perspective, explain what their country has already done to address the problem, and then propose solutions for the future.

This year the problem assigned to my school was Gender Violence—a very timely issue. Thus, the students had to wrap their minds around the forms and causes of gender violence—both in the world and in their own countries—in order to come up with persuasive solutions.

The chair checking the time remaining until the unmoderated caucus

The last piece of the puzzle is parliamentary procedure. The students need to learn the different sections of a conference (formal debate, moderated caucus, unmoderated caucus), how to make a point or a motion or to yield their time (and all the different ways of doing so), and how to write a proper resolution (a proposal to solve the problem).

Teaching these things may sound dry (and it can be); but the task was made somewhat more engaging by a “Zombie Conference,” in which the students had to find a solution to the impending zombie apocalypse.

Two delegates giving their speech, with the evaluator listening closely

The effort to impart all of these skills is rewarded when you see the students in action. By December, my students (well, most of them) were able to operate within a formal environment, speak in public, write a properly-researched paper, debate a global issue, and in general impress all of the adults in the room with their poise and maturity. It is quite a payoff.

Because so many schools are participating in the program, each school may only send 10 students to the first conference in January. I admit I was a little disappointed by this, since it meant choosing between several deserving candidates. But in my school (as in many others) we compensated by having our own conference in December, in which every student could participate. This may have been the most rewarding part of the whole experience for me. It is the middle-range students, the ones who do not normally excel, whom an educator most hopes to reach—since the ones who do normally excel hardly need your help—so it was gratifying to see so many of these students improve.

After this conference in December we picked our team of 10 and began preparing them for the January conference. This meant re-writing position papers and opening speeches, brushing up on parliamentary procedure, and discussing the problem of gender violence in greater depth. It can be a nervous time for the students, since they know that only about a third of the schools advance to the next conference (which takes place in late February). It is important, then, for the students to see the conference, not as a cut-throat competition, but as a learning experience—which it is.

The list of speakers

This year so many schools participated that the conference had to be spread out over six days. It is a big operation, and so each of the Global Classrooms language assistants is required to help run the conference. My job was to be a photographer. It was my first stint as an events photographer, though hopefully not my last. At least now I can reap the benefits of my work, since I have a stockpile of photos that I can use to illustrate the event.

Self-portrait

The conference took place at CRIF Las Acacias, a labyrinthine old building (now a center of “innovation and training”) which had been a state orphanage for nearly 100 years (1888–1987). I had the privilege of being taken on a small tour of the old church. It sits empty nowadays, cavernous and deconsecrated, beside the main building. The old altar still stands, the Virgin presiding over empty benches. Nearby is a theater—now shrouded in darkness—that the orphans would use to stage plays. Few places I know have such a delightful feeling of being abandoned and even haunted by past lives.

The orphanage’s deconsecrated church

But on the days of the conference, present lives were what attracted the most attention. Students in suits and ties, dresses and high heels, gave impassioned speeches in excellent English about complex and controversial issues. The day would typically start off somewhat tense, with the students nervously eyeing those from other schools, and doing their best to outshine their opponents. But the atmosphere noticeably mellowed as the conference went on (it runs from 8:00 am to 3:00 pm), until by the end of the day each student had accumulated dozens of new Instagram followers. Now, that is what I call success.


DBZ & SSBM: An Adolescence in Two Acronyms

After I got home from a long and boring day of school, I would sit on the couch, turn on the television, and lazily do my homework during the commercial breaks.

This procedure—which I followed for years—guaranteed that homework would be torture. Even simple tasks could take ages, what with all the starting, stopping, forgetting, and starting again. And since I did not devote even half my attention to the work, I did it badly without learning anything. Yet by the time I got home from school I was so burnt out that I had to distract myself from the work as much as possible, just to stay sane. It did not help that this homework was inevitably the most pointless drudgery—“busywork,” as my mom called it—requiring time but no thought, some attention but no creativity. Television at least took the edge off.

In the late afternoon, when I got home, there usually wasn’t anything very good on. As the day waned the quality would improve, until, finally, it was time for Toonami. Toonami was a programming block on Cartoon Network, specializing in Japanese anime dubbed into English. The programs were presented by TOM, an animated robot man—surprisingly pudgy for an android—who was a kind of space-pirate broadcaster, transmitting the shows from his spaceship all across the galaxy. You can imagine that the teenage me was entranced.

The first anime to win me over, and the one that was to remain my favorite, was a show called Dragon Ball Z. On the surface it was like any superhero cartoon; the characters had powers and fought bad guys; and since I had long been a fan of Superman and Batman, this drew me in. Indeed, the protagonist, Goku, had a backstory almost identical to Superman’s. One of the last survivors of his destroyed planet, Goku arrived on Earth as an infant and was brought up as a human. Yet his alien physique soon proved much stronger than a normal human’s, and so on.

All this was standard stuff. But there were some odd discrepancies between DBZ and American superhero cartoons. DBZ had a surprising amount of ethical ambiguity—at least, surprising for a young teenager. Bad guys sometimes became good guys, or at least semi-good guys; and the good guys were often foolish, cowardly, or just silly. This did not happen with Superman and Batman, who were always good, brave, and wise, and whose enemies were always arrogant, cowardly, and bad. Another fundamental difference was the concept of training. The characters in DBZ did not simply have powers, but had to continually train to develop their abilities, which grew as the series progressed.

But the most striking difference was the fights. Whereas Batman threw batarangs and gave karate chops, and Superman mainly stuck to a few good jabs and hooks, the characters of DBZ would disappear into a blur of punches and kicks, shoot energy rapid-fire until whole landscapes were engulfed in flame, make the entire earth shake as they charged their attacks. The fight choreography was light years beyond the most daring American cartoons. And the fights lasted longer—much longer. Two characters could be embroiled in a fight for whole episodes, sometimes even multiple episodes: hours and hours of anime action. After DBZ, the Justice League seemed tame.

The show was unashamedly centered on fights in a way that I found irresistible. The plot became ever-more perfunctory, merely serving to set up meetings between powerful characters so they could proceed to beat each other to pulp or blow each other to bits. If you think that the plot of a usual superhero movie is thin, try watching DBZ. Everything—the characters, the pacing, the story—is dictated by the demands of epic battle. Characters have epiphanies just so they can reach another power level; characters fall in love just so they can have kids, who will have their own battles; characters make irrational decisions just so that battles will be prolonged.

DBZ is most infamous for its long power-ups, wherein a character will scream his head off while his body emits light and heat in a fantastic buildup of energy. I almost admire how shamelessly this device is used by the writers to fill episodes and build tension. This is the only explanation for the power-ups, since they make no sense within the story: the fighter is perfectly vulnerable during the ordeal, just standing there and screaming like a wild monkey. And yet time after time their opponents let it happen, despite the possibility that a successful power-up spells defeat. Even wicked world-destroying villains are above interrupting this sacred process, it seems. While this yodelling lightshow takes place, all the other characters retreat to gape and repeatedly exclaim how amazed they are. Certain phrases become obligatory: “I can’t believe how powerful he’s become!” “What? Impossible!” “This energy! Can it really be from one person?” Even by the end of the series, when they have all seen a hundred power-ups, the spectacle never fails to fill them with awe and dread.

Sometimes these power-ups led to transformations, which is another hallmark of DBZ. The young Goku found that, like a sort of King Kong werewolf, he transformed into a giant dog-monkey during a full moon—until cutting off his tail solved that problem. His rival Vegeta, another Saiyan, used this transformation against Goku until, being similarly dismembered, he was deprived of this power. And this is not the end of the Saiyans’ ability to transform. The most iconic of these is the Super Saiyan, in which the hair turns golden and stands straight up. But this turned out to be just the tip of the iceberg; Super Saiyan 2 and 3 followed, and in other media even the ape-like Super Saiyan 4.

And the Saiyans aren’t the only ones who transform. The show’s most famous villains—Frieza, Cell, and Majin Buu—are all distinguished by their many metamorphoses; and these are not just changes in hairstyle, but involve a complete modification of their bodies. I suppose we associate these bodily mutations with insects, which is why it seems like a villainous thing to do. Indeed, Cell has beetle wings, and Majin Buu emerges from a cocoon like a butterfly. Even more nefarious, these two villains unlock their new forms by absorbing other people, like giant mosquitoes. And yet it is interesting to note that, for all three of these villains, their most powerful form is their most humanoid. The combination of human and animal traits is, after all, the essence of monstrosity.

§

A few years after I started watching DBZ, I began to delve into Super Smash Brothers Melee. In case you are not familiar with this game, SSBM is a fighting game originally released on the Nintendo GameCube in 2001. It is the sequel to the original Super Smash Brothers game on Nintendo 64, which I had been playing with my friends since elementary school. Some of my happiest memories from childhood are of sitting in my best friend’s basement playing Smash. And Melee was just as good, if not better. Both Smash and its sequel Melee are ideal party games—there is no backstory, the objective is clear, they require little skill to enjoy, and up to four people can play at once. Just choose a character and try to knock the other guy off.

My brother and I bought SSBM almost as soon as it came out, and for a while we played it the way it was designed to be played: as a lighthearted, meaningless diversion among friends, much like Mario Kart. But then, in high school, we began to take games more seriously. This began when we started playing online computer games, both greatly widening the pool of our competition and introducing us to gaming culture—a culture of competition adequately summarized and parodied in the online series Pure Pwnage (which we also watched). The goal was not just to have fun, but to be the best, to crush and humiliate your opponents: in short, to pwn noobs.

It was during this period that our neighbor visited us one day, and said he wanted to play SSBM. This was somewhat odd, since we believed that SSBM was just for button-mashing fun, not for serious high-level play. “But look,” our neighbor said. “I found out about advanced techniques.” And he searched for a video of the wavedash.

The wavedash is the most iconic advanced technique of SSBM. It is hard to explain what it is without giving some idea of the game. Normally, you can run, jump, or roll to move around. These are all standard controller inputs, a straightforward combination of a button and the joystick. But a wavedash is executed by pressing the jump button, and then immediately air-dodging (by pressing another button while angling the joystick) diagonally towards the ground, thus interrupting the jump: two inputs, one after the other. The result is that the character slides across the stage, sometimes very quickly. This method of locomotion was likely not intended by the game’s architects. But it works wonderfully.
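If it helps to see the idea schematically, here is a tiny sketch in Python. Everything in it is hypothetical: the action names and the four-frame window are invented for illustration, not taken from Melee’s actual engine. It is just a toy recognizer for the sequence “jump, then an immediate downward-angled air-dodge”:

```python
# Toy recognizer for the wavedash input sequence described above.
# The action names and the frame window are invented for illustration;
# they are not Melee's real engine values.

WAVEDASH_WINDOW = 4  # hypothetical number of frames allowed between inputs

def is_wavedash(inputs: list[tuple[int, str]]) -> bool:
    """Scan an ordered list of (frame, action) pairs for a jump
    followed almost immediately by a downward-angled air-dodge."""
    for (f1, a1), (f2, a2) in zip(inputs, inputs[1:]):
        if a1 == "jump" and a2 == "airdodge_down":
            if 0 < f2 - f1 <= WAVEDASH_WINDOW:
                return True  # the jump is interrupted: the character slides
    return False

print(is_wavedash([(10, "jump"), (12, "airdodge_down")]))  # True
print(is_wavedash([(10, "jump"), (30, "airdodge_down")]))  # False: too slow
```

The essential point is the timing: the same two inputs, spaced a fraction of a second too far apart, produce an ordinary jump and air-dodge rather than a slide.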

The wavedash alone significantly added to gameplay, giving players speed and maneuverability that weren’t available before. But this was only the beginning. There were lots of these so-called advanced techniques: short-hop, dashdance, L-cancel, crouch cancelling, directional influence, wall-teching, and on and on. We had played the game for years and had never even suspected the existence of higher-level play. Out of the package, the game seemed as simple and obvious as Parcheesi; that was its appeal. But these techniques opened up an entirely new level of gameplay, turning a lighthearted diversion into a lightning-fast contest of reflexes.

Seeing these techniques in action was incredible. Top players made combo-videos, showing how they could string together attacks in inescapable sequences, juggling their opponents across the fighting stage and then sending them flying. Even more impressive were the videos of professional players. This was around 2007, right before SSBM, after a three-year run, was dropped from the Major League Gaming circuit (a company that organizes gaming tournaments with big prizes and high publicity). This meant that YouTube was already full of videos of high-level players competing in formal competitions. PC Chris, KoreanDJ, Azen, Chudat, Isai, and Ken—we watched their matches and marvelled at their prowess. Soon enough my neighbor and I were practicing these advanced techniques and sharpening our skills against one another.

Here I should pause to explain a bit about how SSBM works. Unlike in other fighting games, where you have a certain amount of health or stamina that is depleted by your opponent’s attacks, in SSBM you have percentage. This determines how far you are sent if an attack hits you. A player with 0% will hardly move from an attack, while a player with 150% will take off like a cannonball. If you fly too far off stage, in any direction, you lose a “stock,” or life. Another big difference is that there are no predetermined combos in SSBM. (As in, no series of controller inputs automatically results in a combo.) Combos have to be discovered or invented by the player, and rely on a mixture of luck and timing to pull off. The result is a far freer fighting game, where death may come at any time (or be postponed indefinitely), and where each sequence of moves is improvised in the moment.
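For the curious, here is a toy model of that percentage mechanic in Python. To be clear, the formula and every constant below are my own inventions for illustration; the game’s real knockback calculation also involves character weight and move-specific values. The sketch only shows the essential feedback loop: hits raise your percent, and knockback grows with it.

```python
# A toy model of SSBM-style percentage: taking hits raises your percent,
# and knockback scales with that percent. The linear formula and all the
# constants here are invented for illustration, not the game's real math.

BLAST_ZONE = 200.0  # hypothetical distance at which a character is KO'd

def knockback(percent: float, power: float) -> float:
    """Distance an attack sends you: it grows with accumulated percent."""
    return power * (1.0 + percent / 50.0)

position, percent = 0.0, 0.0
for hit in range(1, 11):
    percent += 12.0                            # each hit deals 12% damage
    position += knockback(percent, power=12.0)
    print(f"hit {hit}: {percent:5.1f}% -> position {position:6.1f}")
    if position > BLAST_ZONE:
        print("KO! Sent flying past the blast zone: one stock lost.")
        break
```

Under this toy formula, an attack at 100% sends you three times as far as the same attack at 0%, which is why a stock can end in an instant or drag on indefinitely.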

Another attraction of the game is its wealth of characters. There are twenty-six to choose from, each with a different set of moves, a different height and weight, a different walking and falling speed, and consequently requiring different techniques and styles of play. And though some characters are generally far stronger than others (competitive players arrange them on a tier-list from best to worst), the game’s architects did an excellent job in giving each one unique strengths and weaknesses, making every two-way matchup distinct. I mostly played Captain Falcon, a mid- to high-tier character with strong moves and fast movement, but who suffers from predictable recovery and being easily comboed. My neighbor mostly played Marth, one of the best characters in the game, who nevertheless suffers from a difficulty in finishing off opponents.

After a few months of practice, my neighbor and I were good enough that we could beat any normal player without much trouble. And yet even though I improved greatly, I was constantly frustrated at my inability to best my neighbor. No matter how good I became, he was always at least slightly better—sometimes more than slightly—and no amount of practice could bridge the gap. This made me furious. Even for my adolescent age, my maturity level was not high. I had a low tolerance for frustration and had difficulty controlling my anger. So sometimes, when being badly beaten, or when victory was snatched away from me at the last moment—as it always seemed to be—I would explode and slam my controller on the ground, or throw it across the room, sometimes damaging or even breaking it. Fully indoctrinated into the gaming ethos, I wanted only to win, to be the best, to crush my opponents; so when I was myself beaten, I felt worthless, empty, powerless.

This experience playing video games, incidentally, is one reason why I generally avoid competitive situations. While competition seems to bring out the best in some people, I think it brings out the worst in me. I become petty and spiteful: arrogant towards those I beat and resentful towards those who beat me. So focused on winning, I cannot relax and enjoy what I’m doing, which ironically makes me less likely to win. The pressure I put on myself makes me nervous; I think about how good it would feel to win, how awful to lose, and my palms begin to sweat and my mind races; I panic, my playing suffers, and I lose—and then the rage comes, and I mentally chastise myself until I feel like a little worm squirming in the mud. This is more or less what would happen to me as I became ever more engrossed in competitive gaming, which is why I have developed a reluctance to compete in adulthood. Since so much of life in a capitalistic world is based on competition, at times this puts me out of harmony with my surroundings—but that’s another story.

The closest I ever came to the professional player scene was my one trip to a local tournament. I went with my neighbor. My mom drove us. The tournament was held in a video game shop next to an old toy store I used to go to. Strange to say, my memory of this tournament is very vague. I remember being in a cramped room full of chairs and TV screens, and feeling intimidated by all the older people around me (at around 15, we were probably the youngest there); I didn’t say a word to anybody except my neighbor. I remember sitting down to play my first match with sweaty palms, and I remember being beaten, but putting up a respectable fight. And that was it for me.

So my very promising career as a professional gamer was quickly snuffed out. Discouraged by the huge skill-gap that remained between myself and even moderately ranked players, I lost heart. Not that it mattered much, since the following year my interests abruptly switched from video games to playing guitar—but, again, that’s another story.  

§

The reason that I am writing about these two adolescent obsessions of mine is that, strange to say, they never entirely left me. After many years of scarcely thinking about Goku or Captain Falcon, I now find myself regularly watching clips from DBZ and SSBM matches, and really loving them. And this, in a man who normally looks down his nose at all lowbrow pleasures. Why the resurgence in interest?

Partly my renewed interest has been sparked by an actual resurgence in these media. After a long hiatus, the Dragon Ball Z saga was continued in the new series, Dragon Ball Super. And after a period of decline following the release of SSBM’s sequel, Super Smash Brothers Brawl for the Nintendo Wii (a game far less amenable to quick, competitive play), the Melee community has rebounded and grown, with regular tournaments all over the world, and even a full-length documentary devoted to the game’s early years.

I began watching Dragon Ball Super out of boredom and a sense of nostalgia, but I was quickly hooked on the series. In every way it is an improvement on DBZ. The story has far less filler—notably, the power-ups only take a few minutes. The already perfunctory plot-lines about monsters trying to blow up the world have been scrapped for simple tournaments, giving the characters a chance to pummel each other without further ado. The villains are, for the most part, no longer shapeshifting monsters but other martial artists. And the animation is much sharper and more impressive. Yet the basic elements remain the same. The humble Goku trains to unlock new transformations (Super Saiyan God, Super Saiyan Blue, Ultra Instinct) in order to beat the enemy, who is, as usual, arrogant and overconfident.

I started to watch the Smash Brothers documentary out of a sense of curious irony, amused that somebody would make a documentary about such a silly subject. But I soon found myself genuinely impressed. Indeed, for a fan-made documentary uploaded directly to YouTube, it is almost absurdly well-made—informative, entertaining, and attractive. Directed by Travis ‘Samox’ Beauchamp, the documentary contains nine episodes, each of which is dedicated to a notable player from SSBM’s “Golden Age” (the years following its release): Ken, Isai, Azen, PC Chris, KoreanDJ, Mew2King, Mango, with many other players making an appearance. Having followed these players in high school, I was fascinated to hear their own story in their own words. And the commentary, far from the usual callow gamer smacktalk, was consistently thoughtful and humane—especially that of the player Wife. In short, the documentary really captures the magic of the game and the community which has formed around it.

But even if DBZ and SSBM are still going strong, that does not explain my continued interest. Again, I have a tendency to be extremely pretentious when it comes to the media I consume. I seldom resist the opportunity to denigrate popular music, films, and books as simplistic, formulaic, childish, etc. (Here you see my nasty competitive side expressed in a different way.) And yet here I am, still watching a cartoon about men flying and fighting, still watching people manipulate characters on a screen, still enjoying the adolescent obsessions that I thought I had left behind long ago. Clearly, these two media have a consistent appeal to me. But why?

They are similar in several conspicuous ways. Both SSBM and DBZ focus on fast-paced fights, with characters dashing, jumping, and flying through the air—shooting projectiles, exchanging blows, sending each other flying. In both, the fight itself is more compelling than the outcome. Though DBZ has good guys and bad guys, we do not watch to see who wins (it’s always Goku), but to see the fight itself—the sheer spectacle of it. And even the story-mode of SSBM does not have anything resembling a plot. The whole substance of SSBM and DBZ is made of rapid punches, flying kicks, and energy beams. And since the fight is the main focus, both media treat training as a major theme. Goku is not simply strong, like Superman is; his strength is the product of work. Top SSBM players, too, must put in endless hours of practice to compete at that level.

Another striking similarity is that both SSBM and DBZ are male-oriented. Though Dragon Ball Super finally incorporated some female fighters, DBZ’s fighters were exclusively male; and though I do not have the statistics, I believe the show’s audience was similarly male-dominated. One look at an SSBM tournament will reveal how completely boy-centered the game is. Every top player I know of is male; the commentators, too, are all men; and the audience is inevitably a chorus of husky voices. Perhaps this should be expected. True to the cultural stereotype, both DBZ and SSBM are bereft of romance and sentiment, and instead focus on violence—a traditionally male vice.

It should be noted, however, that both the show and the game are pretty tame. Indeed, I would argue that both DBZ and SSBM are distinguished by a kind of vanilla violence, where characters are punched but do not break their bones, where they lose a game or are sent to the afterlife but never really die (the important characters in DBZ are inevitably revived with the titular dragon balls)—where the stakes are, in short, never very high. (The resemblance only increased in Dragon Ball Super, where the characters are eliminated from the tournament by being knocked off the stage, just as in SSBM.) It is a violence without bloodshed and without consequence, for the pure sport and spectacle of it. And this, perhaps, explains why the two attract similar demographics, namely “dorky” men: they are male but not manly, competitive but not cutthroat, violent but not vicious. It is purely imaginative fighting.

§

DBZ and SSBM are similar, then. But again I must ask: Why do they hold such a consistent appeal to me? The most obvious answer is nostalgia. I am a boy who grew up right when they were coming out, and they remind me of my childhood. This, however, leads to another question: Why did they appeal to me in the first place?

This is, perhaps, also no mystery. I fit their demographics pretty well. I was a dorky boy who had never been popular or good at sports. Like other video games, SSBM gave me a chance to excel at something competitive. I could not beat anyone in any physical activity, but I could run circles around my opponents on the screen. And Goku was the perfect hero for a boy in my situation: whose strength was the product of determination, and whose persistent efforts could defeat his more naturally talented foes—muscly monsters whose overconfidence always led them to neglect their own training. In short, the imaginative identification with the heroes of DBZ and the characters of SSBM could transform a slow, weak, pudgy kid into a lightning-fast, super-strong fighter.

SSBM and DBZ were a form of escape in more ways than one. Not only did they provide me with an escape from my nerdy, unathletic self, they also provided a much-needed relief from the omnipresent boredom of school. My memories of middle and high school are, with some notable exceptions, of sitting in a claustrophobic room, feeling tired and bored out of my mind, seldom paying attention to what was being said or read. Despite this, I was actually a good student—at least as far as grades were concerned. But the endless amounts of busywork, the dry lectures, and the repetitive routine had me constantly on the verge of burning out completely.

When I got home my first priority was to unplug, to forget everything from the day and to put school as far as possible from my mind. Shows like DBZ and games like SSBM were perfect. They require no thought to understand and enjoy. Indeed, then and now their primary function for me is to switch off my intellect, leaving only a kind of dim, dog-like awareness of movement. When I indulge in these media I am in a trance, as incapable of critical thought as is a goldfish.

Many times in later life I have found myself feeling the same way I felt in high school: bored to tears by my daily life—an endless parade of meaningless obligations and unrewarding tasks—and looking for some way to forget it all. Intellectual pleasures are arguably not the best way to do this, since they sharpen rather than blunt the attention. But SSBM and DBZ are perfect: cartoon fights without meaning, appealing to my primitive brain and leaving the frontal lobe blissfully empty. Indeed, I have found that when I am particularly keen to watch SSBM fights on YouTube, it is usually a sign that I need to liven up my routine.

In saying these things I hope I have not insulted or offended anyone connected with these media. I have only the warmest feelings towards DBZ and SSBM; and if I wore a hat I would take it off to the makers of the first and the players of the second, who have provided me with so many happy hours. For the world—at least how it is now—necessarily involves drudgery. As long as we have routines we will have boredom. And some light escapism is, I think, a healthy and natural way of coping with the limitations of our own identities and the plodding monotony of the day-to-day.

[Cover photos taken from Dragon Ball Wiki; their use falls under Fair Use.]

Connie Converse & Feelings of Beauty

In 1974 a woman called Connie Converse got into her car, drove away, and was never heard from again. This was not noticed by the press. She was fifty and a failed musician, an eccentric and hermit-like woman who never caught her big break. But she left behind recordings of her songs—songs which have belatedly earned her a modicum of the recognition she deserved.

This is another of those stories of the misunderstood genius. Completely obscure during her career, she is nowadays recognized as a musician ahead of her time, one of the first practitioners of the so-called singer-songwriter genre. Her recordings remained unavailable to the public until 2004, when she was featured on WNYC’s “Spinning on Air.” Five years later, in 2009, an album of her homemade recordings was released—How Sad, How Lovely—which secured her place in the hearts of hipsters across Brooklyn.

In this essay I am, however, not primarily interested in analyzing her music. This is not because there isn’t much to discuss. Converse blends American popular styles—blues, folk, jazz, country—into a personal style all her own. Her songs are masterful on many levels: piquant lyrics, sophisticated harmonies, and creative guitar arrangements. Deep originality, keen intelligence, and a fine literary sensibility make her posthumous album an excellent listen.

All this is beside the point. I want to use Converse’s music as the jumping-off point to discuss an essential question of aesthetics: Are there such things as aesthetic emotions?

In a previous essay on aesthetics, I tried to analyze the way that art alters our relation with the natural and cultural world. But I left unexamined the way that art changes our relationship with ourselves. For art does not only affect our stance towards our assumptions, habits, sensations, perceptions, and social conventions. Art also changes our relationship with our emotions.

The very title of Converse’s album invites us to consider aesthetic emotions: How Sad, How Lovely. She herself did not choose this title for the album; but it is well-chosen, since this feeling—the feeling of delightful melancholy—always comes over me when I listen to her music.

But how can sadness be beautiful? When tragedy actually befalls us—a career failure, a breakup, a death—we are little disposed to find it beautiful. And long-term, grinding depression is perhaps even less lovely. Yet we so often find ourselves watching sad movies, reading depressing books, enjoying tragedies on stage, and listening to tearful ballads. Converse’s songs are full of heartbreak, yet many people enjoy them. We do our best to avoid sadness in life, but often seek it out in art. Why?

This leads me to think that the melancholic emotion we experience in sad art is not the same as “real” sadness. That is, when a beloved character dies in a novel we feel an entirely different emotion than when a friend passes away. Can that be true?

Perhaps, instead, it is just a question of degree: sadness in art affects us less than sadness in life. But this explanation will not do. For we do not enjoy even small amounts of sadness in real life; so why would we in art? And besides, anyone artistically sensitive knows that one can have intense reactions to art—quite as intense as any experience in life—so it is clearly not a question of degree.

The explanation must be, then, that aesthetic sadness is categorically different from actual sadness. Yet we recognize an immediate and obvious affinity between the two feelings: otherwise we would not call them both “sad.” So it seems as if they are alike in one sense and yet different in another.

To pinpoint the difference, we must see if we can isolate the feeling of beauty. In novels, paintings, and songs, aesthetic reactions are hopelessly mixed up with a riot of other considerations: artistic intentions, subject-matter, moral judgments, and so forth. But when contemplating nature—in my experience, at least—the feeling of beauty stands out cold and pure.

In the forests of New Brunswick my family owns a property on a lake, where we go every summer to relax. Out there, far away from the light pollution of cities, the Milky Way appears in all its remarkable brilliance. Except for the ghostly calls of loons echoing across the lake, the woods are deathly still. Gazing up at the stars in the silence of the night I come closest to experiencing pure beauty.

Looking at the starry night I feel neither sad, nor happy, nor frightened, nor angry. The word that comes closest to the feeling is “awe”: gaping wonder at the splendor of existence. I become entirely absorbed in my senses; everything except the beautiful object drops away. I am solely aware of the sensory details of this object, and perpetually amazed that such a thing could exist.

An important facet of this experience, I think, is disinterestedness. I want nothing from the beautiful object. I have no stake in its fate. Indeed, my absorption in it is so great that the feeling of distinction—of subject and object—dissolves, and I achieve a feeling of oneness with what I contemplate. Thus this feeling of disinterest extends to myself: I no longer have a stake in my own life, and I can accept with equanimity whatever may come. Beauty is, for this reason, associated in my mind with a feeling of profound calmness.

Using this observation, I believe we can see why sad art can be pleasant. My hypothesis is that the feeling of beauty—which brings with it a calming sense of disinterest—denatures normally painful emotions, rendering them relatively inert.

Real sadness involves the painful feeling of loss. To feel loss, you must feel interested, in the sense that you have a stake in what is happening. But experiencing sadness in the context of a work of art, while our aesthetic sense is activated, numbs us to this pain. We view the sadness, rather, as a kind of floating, neutral observer of the scene; and this allows us to savor the poignant and touching experience of the sentiment without the drawbacks of emotional trauma.

And herein lies the therapeutic potential of art. For art allows us to see the beauty in normally painful emotions. By putting us into a disinterested state of mind, great art allows us to savor the sublime melancholy of life. We see that sadness is not just painful, but lovely. In this, too, consists art’s ability to help us achieve wisdom: to look upon life, not as a person wrapped up in his own troubles, but as a cloud watching from above.

Here I think it is useful to examine the difference between art and entertainment. In my first essay, I held that the difference is that art reconnects us to the world while entertainment lures us into fantasy. But the emotional distinction between art and entertainment cannot be described thus.

Rather, I think herein lies the difference: that art allows us to contemplate emotions disinterestedly, while entertainment provokes us to react empathetically. That is to say that, with entertainment, the difference between aesthetic emotions and everyday emotions is blurred. We react to an entertaining tale the same way we react to, say, our friend telling us a story.

Some observations lead me to this conclusion. One is that Shakespeare’s tragedies have never once brought me anywhere close to tears, while rather mediocre movies have had me bawling. Another is that drinking alcohol, being stressed, feeling sentimental, or being otherwise emotionally raw tend to make me more sensitive to blockbuster movies and pop music, and less sensitive to far greater works. Clearly, provoking strong emotions requires neither great sophistication in the work nor great appreciation in the audience; indeed, it requires a childlike innocence from both.

I believe the explanation for this is very simple. Crude art—or “entertainment” in my parlance—does not strongly activate our aesthetic sense, and thus our emotions are unfiltered. We feel none of the disinterest that the contemplation of beauty engenders, and so react with an unmediated naturalness. This makes entertainment the opposite of calming—rather, it can be very animating and distressing.

This may be the root of Plato’s and Aristotle’s ancient dispute about the role of poetry in society. Plato famously banished poets from his ideal republic, fearing poetry’s ability to disrupt social order and discompose men’s minds. Aristotle, on the other hand, thought that poetry could be cathartic, curing us of strong emotions and thus conducive to calmness and stability. In my scheme, entertainment is destabilizing and true art therapeutic. In other words, Plato should only have banished the entertainers.

This also leads to an observation about singer-songwriters. Unlike in other genres of music—musicals, operas, jazz—there is a pretense among singer-songwriters of communicating directly with their audience: that their songs are honest and related to their daily lives. It is this intentional blurring of art and life that leads many fans to get absorbed in tabloid stories of celebrity personal lives.

I think this is almost inevitably an illusion, and the “honest” self on display to the public is a sort of persona. But in any case this pretense serves the purposes of entertainment: If we think the musicians are giving us honest sentiments, we will react empathetically and not disinterestedly.

Connie Converse, on the other hand, creates no illusion of direct honesty. Her lyrics are literary, picturesque, and impersonal. This is not to say that her songs have nothing to do with her personality. Her preferred themes—unrequited love, most notably—obviously have some bearing on her life. And the sadness in her music must be related to the sorrow in her life: the feeling of being isolated and unrecognized. But in her songs, the stuff of her life is sublimated into art—turned into an impersonal product that can be contemplated and appreciated without knowing anything about its maker.

I believe all true art is, in this sense, impersonal: its value does not depend on knowing or thinking anything about its maker. Art is not an extension of the artist’s personality, but has its own life. This is why I am against “confessional” art: art that pretends to be, or actually is, an unfiltered look into somebody’s life and feelings. Much of John Lennon’s work after the Beatles broke up falls into this category. By my definition, “confessional” art is always inevitably entertainment.

This is not to say that art must always scorn its maker’s life. The essays of Montaigne, for example, are deeply introspective, while being among the glories of western literature. But those essays are not simply the pouring forth of feelings or the airing of grievances. They transform Montaigne’s own experiences into an exploration of the human condition, and thus become genuine works of art.

In my previous essay I described how art can help us break out of the deadening effects of routine, thus revivifying the world. But art can also pull us from the opposite direction: from frenetic emotionality to a detached calmness. While contemplating beauty, we see the world, and even ourselves, as calm and sensitive observers, with fascination and delight. We rediscover the childlike richness of experience, while shunning the childlike tyranny of emotion. We can achieve equanimity, at least temporarily, by being reminded that beauty and sadness are not opposed, but are intimately intertwined.

The Musée d’Orsay & A Theory of Aesthetics

On the left bank of the Seine, in an old Beaux-Arts train station, is one of Europe’s great museums: the Musée d’Orsay. Its collection mainly focuses on French art from the mid-nineteenth to the early-twentieth century. This was a fertile time for Paris, as the museum amply demonstrates. Rarely can you find so many masterpieces collected in one place.

The museum is arranged with exquisite taste. In the middle runs a corridor, filled with statues—of human forms, mostly. They dash, reach, dance, strain, twist, lounge, smile, laugh, gasp, grimace.

On either side of this central corridor are the painting galleries, arranged by style and period. There were naturalistic paintings—with a vanishing perspective, careful shadowing, precise brushstrokes, scientifically accurate anatomy, symmetrical compositions. There were the impressionists—a blur of color and light, creamy clouds of paint, glances of everyday life. There was Cézanne, whose precise simplifications of shape and shade lend his painting of Mont Sainte-Victoire a calm, detached beauty. Then there were the pointillists, Seurat and Signac, who attempted to break the world into pieces and then to build it back up using only dabs of color, arranged with a mixture of science and art.

Greatest of all was van Gogh, whose violent, wavy lines, bright, simple colors, and oil paint smeared in thick daubs onto the canvas make his paintings slither and dance. It is simply amazing to me that something as static as a painting can be made so energetic. Van Gogh’s paintings don’t stand still under your gaze, but move, vibrate, even breathe. It is uncanny.

His self-portrait is the most emotionally affecting painting I have ever seen. Wearing a blue suit, he sits in a neutral blue space. His presence warps the atmosphere: the air seems to be curling around him, as if in a torrent. The only colors that break the blur of blue are his flaming red beard and his piercing green eyes. He looks directly at the viewer, with an expression impossible to define. At first glance he appears anxious, perhaps shy; but the more you look, the more he appears calm and confident. You get absolutely lost in his eyes, falling into them, as you are absorbed into ever more complicated subtleties of emotion concealed therein. Suddenly you realize that the curling waves of air around him are not mere background, but represent his inner turmoil. Yet is it turmoil? Perhaps it is a serenity too complicated for us to understand?

Vincent van Gogh, Self-Portrait

I looked and looked, and soon the experience became overwhelming. I felt as if he were looking right through me, while I pathetically tried to understand the depths of his mind. But the more I probed, the more lost I felt, the more I felt myself being subsumed into his world. The experience was so overpowering that my knees began to shake.

Consider this reaction of mine. Now imagine if a curious extraterrestrial, studying human behavior, visited an art museum. What would he make of it?

On its face, the practice of visiting art museums is absurd. We pay good money to gain entrance to a big building, so we can spend time crowding around brightly colored squares that are not obviously more interesting than any other object in the room. Indeed, I suspect an alien would find almost anything on earth—our plant and animal life, our minerals, our technology—more interesting than a painting.

In this essay I want to try to answer this question: Why do humans make and appreciate art? For this is the question that so irresistibly posed itself to me after I stared into van Gogh’s portrait. The rest of my time walking around the Musée d’Orsay, feeling lost among so many masterpieces, I pondered how a colorful canvas could so radically alter my mental state. By the end of my visit, the beginnings of an answer had occurred to me—an answer hardly original, being deeply indebted to Walter Pater, Marcel Proust, and Robert Hughes, among others—and it is this answer that I attempt to develop here.

My answer, in short, is that the alien would be confused because human art caters to a human need—specifically, an adult human need. This is the need to cure ennui.

§

Boredom hangs over human life like a specter, so pernicious because it cannot be grasped or seen.

The French anthropologist Claude Lévi-Strauss knew this very well. As a young man he enjoyed mountain scenes, because “instead of submitting passively to my gaze” the mountains “invited me into a conversation, as it were, in which we both had to give our best.” But as he got older, his pleasure in mountain scenery left him:

And yet I have to admit that, although I do not feel that I myself have changed, my love for the mountains is draining away from me like a wave running backward down the sand. My thoughts are unchanged, but the mountains have taken leave of me. Their unchanging joys mean less and less to me, so long and so intently have I sought them out. Surprise itself has become familiar to me as I follow my oft-trodden routes. When I climb, it is not among bracken and rock-face, but among the phantoms of my memories.

Dostoyevsky put the phenomenon more succinctly: “Man grows used to everything, the scoundrel!”

These two literary snippets have stuck with me because they encapsulate the same thing: the ceaseless struggle against the deadening weight of routine. Nothing is new twice. Walk through a park you found charming at first, and the second time around it will be simply nice, the third time just normal.

The problem is human adaptability. Unlike most animals, we humans are generalists, able to adapt our behavior to many different environments. Instead of being guided by rigid instincts, we form habits.

By “habits” I do not only refer to things like biting your nails or eating pancakes for breakfast. Rather, I mean all of the routine actions performed by every person in a society. Culture itself can, at least in part, be thought of as a collection of shared habits. These routines and customs are what allow us to live in harmony with our environments and one another. Our habits form a second nature, a learned instinct, that allows us to focus our attention on more pressing matters. If, for whatever reason, we were incapable of forming habits, we would be in a sorry state indeed, as William James pointed out in his book on psychology:

There is no more miserable human being than one in whom nothing is habitual but indecision, and for whom the lighting of every cigar, the drinking of every cup, the time of rising and going to bed every day, and the beginning of every bit of work, are subjects of express volitional deliberation. Full half the time of such a man goes to the deciding, or regretting, of matters which ought to be so ingrained in him as practically not to exist for his consciousness at all.

Habits are, thus, necessary to human life. And up to a certain point, they are desirable and good. But there is also a danger in habitual response.

Making the same commute, passing the same streets and alleys, spending time with the same friends, watching the same shows, doing the same work, living in the same house, day after day after day, can ingrain a routine in us so deeply that we become dehumanized.

A habit is supposed to free our mind for more interesting matters. But we can also form habits of seeing, feeling, tasting, even of thinking, that are stultifying rather than freeing. The creeping power of routine, pervading our lives, can be difficult to detect, precisely because its essence is familiarity.

One of the most pernicious effects of routine is to dissociate us from our senses. Let me give a concrete example. A walk through New York City will inevitably present you with a chaos of sensory data. You can overhear conversations, many of them fantastically strange; you can see an entire zoo of people, from every corner of the globe, dressed in every fashion; you can look at the ways that the sunlight moves across the skyscrapers, the play of light and shadow; you can hear dog barks, car horns, construction, alarms, sirens, kids crying, adults arguing; you can smell bread baking, chicken frying, hot garbage, stale urine, and other scents too that are more safely left uninvestigated.

And yet, after working in NYC for a few months, making the same commute every day, I was able to block it out completely. I walked through the city without noticing or savoring anything. My lunch went unappreciated; my coffee was drunk unenjoyed; the changing seasons went unremarked; the fashion choices of my fellow commuters went unnoticed.

It isn’t that I stopped seeing, feeling, hearing, tasting, but that my attitude to this information had changed. I was paying attention to my senses only insofar as they provided me with useful information: the location of a pedestrian, an oncoming car, an unsanitary area. In other words, my attitude to my sensations had become purely instrumental: attending to their qualities only insofar as they were relevant to my immediate goals.

This exemplifies what I mean by ennui. It is not boredom of the temporary sort, such as when waiting on a long line. It is boredom as a spiritual malady. When beset by ennui we are not bored by a particular situation, but by any situation. And this condition is caused, I think, by a certain attitude toward our senses. When afflicted by ennui, we stop treating our sensations as things in themselves, worthy of attention and appreciation, and treat them merely as signs and symbols of other things.

To a certain extent, we all do this, often for good reason. When you are reading this, for example, you are probably not paying attention to the details of the font, but are simply glancing at the words to understand their meaning. Theoretically, I could use any font or formatting, and it wouldn’t really affect my message, since you are treating the words as signs and not as things in themselves.

This is our normal, day-to-day attitude towards language, and it is necessary for us to read efficiently. But this can also blind us to what is right in front of us. For example, an English teacher I knew once expressed surprise when I pointed out that ‘deodorant’ consists of the word ‘odor’ with the prefix ‘de-’. She had never paused long enough to consider it, even though she had used the word thousands of times.

I think this attitude of ennui can extend even to our senses. We see the subtle shades of green and red on an apple’s surface, and only think “I’m seeing an apple.” We feel the waxy skin, and only think “I’m touching an apple.” We take a bite, munching on the crunchy fruit, tasting the tart juices, and only think “I’m tasting an apple.” In short, the whole quality of the experience is ignored or at least underappreciated. The apple has become part of our routine and has thus been moved to the background of our consciousness.

Now, imagine treating everything this way. Imagine if all the sights, sounds, tastes, textures, and smells were treated as routine. This is an adequate description of my mentality when I was working in New York, and perhaps of many people all over the world. The final effect is a feeling of emptiness and dissatisfaction. Nothing fulfills or satisfies because nothing is really being experienced.

This is where art comes in. Good art has the power to, quite literally, bring us back to our senses. Art encourages us not only to glance, but to see; not only to hear, but to listen. It reconnects us with what is right in front of us, but is so often ignored. To quote the art critic Robert Hughes, the purpose of art is “to make the world whole and comprehensible, to restore it to us in all its glory and occasional nastiness, not through argument but through feeling, and then to close the gap between you and everything that is not you.”

Last summer, while I was still working at my job in NYC, I experienced the power of art during a visit to the Metropolitan. By then, I had already visited the Met dozens of times in my life. My dad used to take me there as a kid, to see the medieval arms and armor; and ever since I have visited at least once a year. The samurai swords, the Egyptian sarcophagi, the Greek statues—it has tantalized my imagination for decades.

In my most recent visits, however, the museum had lost much of its power. It had become routine for me. I had seen everything so many times that, like Lévi-Strauss, I was visiting my memories rather than the museum itself.

But this changed during my last visit. It was the summer right before I came to Spain. I had just completed my visa application and was about to leave my job. This would be my last visit to the Met for at least a year, possibly longer. I was saying goodbye to something intimately familiar in order to embrace the unknown. My visit became no longer routine, but unique and fleeting, and this made me experience the museum in an entirely new way.

Somehow, the patina of familiarity had been peeled away, leaving every artwork fresh and exciting. Whereas on previous visits I had viewed the Greco-Roman and Egyptian statues as mere artifacts, revealing information about former civilizations, this time I became acutely sensitive to previously invisible subtleties: fine textures, subtle hues, elegant forms. In short, I no longer treated the artworks as icons—as mere symbols of a lost age—but as genuine works of art.

This experience was so intense that for several days I felt rejuvenated. I stopped feeling so deeply dissociated from my workaday world and began to take pleasure again in little things.

While waiting for the elevator, for example, I looked at a nearby wall; and I realized, to my astonishment, that it wasn’t merely a flat, plain surface, as I had thought, but was covered in little bumps and shapes. It was stucco. I grew entranced by the shifting patterns of forms on the surface. I leaned closer, and began to see tiny cracks and little places where the paint had chipped off. The slight variations on the surface, a stain here, a splotch there, the way the shapes seemed to melt into one another, made it seem as though I were looking at a painting by Jackson Pollock or the surface of the moon.

I had glanced at this wall a hundred times before, but it took a visit to an art museum to let me really see it. Routine had severed me from the world, and art had brought me back to it.

§

Reality is always experienced through a medium—the medium of senses, concepts, language, and thought. Sensory information is detected, broken down, analyzed, and then reconfigured in the brain.

We are not passive sensors. While a microphone might simply detect tones, rhythms, and volume, we hear cars, birds, and speech; and while a camera might detect shapes, colors, and movement, we see houses and street signs. The data we collect is, thus, not experienced directly, but is analyzed into intelligible objects. And this is for the obvious reason that, unlike cameras and microphones, we need to use this information to survive.

In order to deal efficiently with the large amount of information we encounter every day, we develop habits of perceiving and thinking. These habits are partly expectations of the kinds of things we will meet (people, cars, language), as well as the ways we have learned to analyze and respond to these things. These habits thus lie at the crossroads between the external world of our senses and the internal world of our experience, forming another medium through which we experience (or don’t experience) reality.

Good art forces us to break these habits, at least temporarily. It does so by breaking down reality and then reconstructing it with a different principle—or perhaps I should say a different taste—than the one we habitually use.

The material of art—what artists deconstruct and re-imagine—can be taken from either the natural or the cultural world. By ‘natural world’ I mean the world as we experience it through our senses; and by ‘cultural world’ I mean the world of ideas, customs, values, religion, language, tradition. No art is wholly emancipated from tradition, just as no tradition is wholly unmoored from the reality of our senses. But very often one is greatly emphasized at the expense of the other.

A good example of an artform concerned with the natural world is landscape painting. A landscape artist breaks down what she sees into shapes and colors, and puts it together on her canvas, making whatever tasteful alterations she sees fit.

Her view of the landscape, and how she chooses to reconstruct it on her canvas, is of course not merely a matter between her and nature. Inevitably our painter is familiar with a tradition of landscape paintings; and thus while engaged with the natural landscape she is simultaneously engaged in a dialogue with contemporary and former artists. She is, therefore, simultaneously breaking down the landscape and her tradition of landscape painting, deciding what to change, discard, or keep. The final product emerges as an artifact of an exchange between the artist, the landscape, and the tradition.

[Image: Vincent van Gogh, Wheat Field, June 1888, oil on canvas]
Landscape by van Gogh

The fact remains, however, that the final product can be effectively judged by how it transforms its subject—the landscape itself. Thus I would say that landscape paintings are primarily oriented towards the natural world.

By contrast, many religious paintings are much more oriented towards a tradition. It is clear, even from a glance, that the artists of the Middle Ages were not concerned with the accurate portrayal of individual humans, but with evoking religious figures through idealization. The paintings thus cannot be evaluated by their fidelity to sensory reality, but by their fidelity to a religious aesthetic.

[Image: Worship before the Throne of God]
From the Bamberg Apocalypse

It is worth noting that artworks oriented towards the natural world tend to be individualistic, while artworks oriented towards the cultural world tend to be communal. The reason is clear: art oriented towards the natural world reconnects us with our senses, and our senses are necessarily personal. By contrast, culture is necessarily impersonal and shared. The rise of perspective, realistic anatomy, individualized portraits, and landscape painting at the time of the Italian Renaissance can, I think, persuasively be interpreted as a break from the communalism of the medieval period and an embrace of individualism.

Music is an excellent demonstration of this tendency. To begin with, the medium of sound is naturally more social than that of sight or language, since sound pervades its environment. What is more, music is a wholly abstract art, and thus totally disconnected from the natural world.

This is because sound is just too difficult to record. With only a pencil and some paper, most people could make a rough sketch of an everyday object. But without some kind of notational system—and even then, maybe not—most people could not transcribe an everyday sound, like a bird’s chirping.

Thus, musicians (at least western musicians) take their material from culture rather than nature, from the world of tradition rather than the world of our senses.

(In an oral tradition, where music does not need to be transcribed, it is possible that music can strive to reproduce natural sounds; but this has not historically been the case in the west.)

To deal with the problem of transcribing sound, rigorous and formal ways of classifying sounds were developed. An organizational system developed, with its own laws and rules; and it is these laws and rules that the composer or songwriter manipulates.

And just as your knowledge of the natural world helps you to make sense of visual art, so your cultural training helps you to make sense of music. Just as you’ve seen many trees and human faces, and thus can appreciate how painters re-imagine their appearances, so you have heard hours and hours of music in your life, most of it following the same or similar conventions.

Thus you can tell (most often unconsciously) when a tune does something unusual. Relatively few people, for example, can define a plagal cadence (an unusual final cadence from the IV to the I chord), but almost everyone responds to it in Paul McCartney’s “Yesterday.”

As a result of its cultural grounding, music is an inherently communal art form. This is true, not only aesthetically, but anthropologically. Music is an integral part of many social rituals—political, religious, or otherwise. Whether we are graduating from high school, winning an Oscar, or getting married, music will certainly be heard. As much as alcohol, music can lower inhibitions by creating a sense of shared community, which is why we play it at every party. Music thus plays a different social role than visual art, connecting us to our social environment rather than to the often neglected sights and sounds of everyday life.

The above descriptions are offered only as illustrations of my more general point: Art occupies the same space as our habits, the gap between the external and the internal world. Painters, composers, and writers begin by breaking down something familiar from our daily reality. This material can be shapes, colors, ceramic vases, window panes, the play of shadow across a crumpled robe in the case of painting. It can be melodies, harmonies, timbre, volume, chord progressions, stylistic tropes in the case of music. And it can be adjectives, verbs, nouns, situations, gestures, personality traits in the case of literature.

Whatever the starting material, it is the artist’s job to recombine it into something different, something that thwarts our habits. Van Gogh’s thick daubs of paint thwart our expectation of neat brushstrokes; McCartney’s plagal cadence thwarts our expectation of a perfect cadence; and Proust’s long, gnarly sentences and philosophic ideas thwart our expectations of how a novelist will write. And once we stop seeing, listening, feeling, sensing, thinking, expecting, reacting, behaving out of habit, and once more turn our full attention to the world, naked of any preconceptions, we are in the right mood to appreciate art.

§

Yet it is not enough for art to be simply challenging. If this were true, art would be anything that was simply strange, confusing, or difficult. Good art can, of course, be all of those things; but it need not be.

Many artists nowadays, however, seem to disagree on this point. I have listened to works by contemporary composers which simply made no sense to my ears, and have seen many works of modern art which had no visual interest. We are living in the age of “challenging” art; and beauty is too often reduced to confusion.

But good art must not only challenge our everyday ways of seeing, listening, and being. It must reconstitute those habits along new lines. Art interrogates the space between the world and our habits of seeing the world. It breaks down the familiar—sights, harmonies, language—and then builds it back up again into the unfamiliar, using new principles and new taste. Yet for the product to be a work of art, and not mere strangeness, the unfamiliar must be rendered beautiful. That is the task of art.

Thus, Picasso does not only break down the perspectives and shapes of daily life, but builds them back up into new forms—fantastically strange, but sublime nonetheless. Debussy disintegrates the normal harmonic conventions—keys, cadences, chords—and then puts them all back together into a new form, uniquely his, and also unquestionably lovely. Great art not only shows you a different way of seeing and understanding the world, but makes this new vista attractive.

Pretentious art, art that merely wants to challenge, confuse, or frustrate you, is quite a different story. It can be most accurately compared to the relationship between an arrogant schoolmaster and a pupil. The artist is talking down to you from a position of heightened knowledge. The implication is that your perspective, your assumptions, your way of looking at the world are flawed and wrong, and the artist must help you to get out of your lowly state. Multiple perspectives are discouraged; only the artist’s is valid.

And then we come to simple entertainment.

Entertainment is something that superficially resembles art, but its function is entirely different. For entertainment does not reconnect us with the world, but lures us into a fantasy.

Perhaps the most emblematic form of pure entertainment is advertising. However well made an advertisement is, it can never be art; for its goal is not to reconnect us with the world, but to seduce us. Advertisements tell us we are incomplete. Instead of showing us how we can be happy now, they tell us what we still need.

When you see an ad in a magazine, for example, you are not meant to scan it carefully, paying attention to the purely visual qualities. Rather, you are forced to view it as an image. By ‘image’ I mean a picture that serves to represent something else. Images are not meant to be looked at, but glanced at; images are not meant to be analyzed, but instantly understood. Ads use images because they are not trying to bring you back to your senses, but lure you into a fantasy.

Don’t misunderstand me: There is nothing inherently wrong with fantasy. Indeed, I think fantasy is almost indispensable to a healthy life. The fantasies of advertisements are, however, somewhat nefarious, since ads are never pure escapism. Rather, the ad forces you to negatively compare your actual life with the fantasy, conclude that you are lacking something, and then of course seek to remedy the situation by buying their product.

Most entertainment is, however, quite innocent, or so it seems to me. For example, I treat almost all blockbusters as pure entertainment. I will gladly go see the new Marvel movie, not in order to have an artistic experience, but because it’s fun. The movie provides two hours of relief from the normal laws of physics and probability, from the dreary regularities of reality as I know it. Superhero movies are escapism at its most innocent. The movies make no pretenses of being realistic, and thus you can hardly feel the envy caused by advertisements. You are free to participate vicariously and then to come back to reality, refreshed from the diversion, but otherwise unchanged.

The prime indication of entertainment is that it is meant to be effortless. The viewer is not there to be challenged, but to be diverted. Thus most bestselling novels are written with short words, simple sentences, and stereotypical plotlines stuffed full of clichés—because this is easy to understand. Likewise, popular music uses common chord progressions and trite lyrics to make hits—music to dance to, to play in the background, to sing along to, but not to think about. This is entertainment: it does not reconnect us with our senses, our language, our ideas, but draws us into fantasy worlds, worlds with spies, pirates, vampires, worlds where everyone is attractive and cool, where you can be anything you want, for at least a few hours.

Some thinkers, most notably Theodor Adorno, have considered this quality of popular culture to be destructive. They abhor the way that people lull their intellects to sleep, tranquilized with popular garbage that deactivates their minds rather than challenges them. And this point cannot be wholly dismissed. But I tend to see escapism in a more positive light: people are tired, people are stressed, people are bored—they need some release. As long as fantasy does not get out of hand, becoming a goal in itself instead of only a diversion, I see no problem with it.

This, in my opinion, is the essential difference between art and entertainment. There is also an essential difference, I think, between art and craft.

Craft is a dedication to the techniques of art, rather than its goals. Of course, there is hardly such a thing as a pure craft or a pure art; no artist completely lacks a technique, and no craftsman totally lacks aesthetic originality. But there are certainly cases of artists whose technique stands at a bare minimum, as well as craftsmen who are almost exclusively concerned with the perfection of technique.

Here I must clarify that, by technique, I do not mean simply manual skills like brush strokes or breath control. I mean, more generally, the mastery of a convention.

Artistic conventions consist of fossilized aesthetics. All living aesthetics represent the individual visions of artists—original, fresh, and personal. All artistic conventions are the visions of successful artists, usually dead, which have ceased to be refreshing and have become charmingly familiar. Put another way, conventional aesthetics are the exceptions that have been made the rule. Not only that, but conventions often fossilize only the most obvious and graspable elements of the brilliant artists of the past, leaving behind much of the living fibre.

This can be exemplified if we go and examine the paintings of William-Adolphe Bouguereau in the Musée d’Orsay. Even from a glance, we can tell that he was a masterful painter. Every detail is perfect. The arrangement of the figures, the depiction of light and shadow, the musculature, the perspective—everything has been performed with exquisite mastery. My favorite painting of his is Dante and Virgil in Hell, a dramatic rendering of a scene from Dante’s Inferno. Dante and his guide stand to one side, looking on in horror as one naked man attacks another, biting him in the throat. In the distance, a flying demon smiles, while a mound of tormented bodies writhes behind. The sky is a fiery red and the landscape is bleak.

[Image: Dante and Virgil in Hell by William Bouguereau]

I think it is a wonderful painting. Even so, Dante and Virgil seems to exist more as a demonstration than as art. For the main thing that makes a painting art, and the main thing this painting lacks, is an original vision. The content has been adopted straightforwardly from Dante. The technique, although perfectly executed, shows no innovations of Bouguereau’s own. All the tools he used had been used before; he merely learned them. Thus the painting, however impressive, ultimately seems like a technical exercise. And this is the essence of craft.

§

I fear I have said more about what art isn’t than what it is. That’s because it is admittedly much easier to define art negatively than positively. Just as mystics convey the incomprehensibility of God by listing all the things He is not, maybe we can do the same with art?

Here is my list so far. Art is not entertainment, meant to distract with fantasy. Art is not craft, meant to display technique and obey rules. Art is not simply an intellectual challenge, meant to shock and frustrate your habitual ways of being. I should say art is not necessarily any of these things, though it can be, and often is, all of them. Indeed, I would contend that the greatest art entertains, challenges, and displays technical mastery, and yet cannot be reduced to any or all of these things.

Here I wish to take an idea from the literary critic Harold Bloom, and divide up artworks into period pieces and great works. Period pieces are works that are highly effective in their day, but quickly become dated. These works are too specifically targeted at one specific cultural atmosphere to last. In other words, they may be totally preoccupied with the habits prevalent at one place and time, and become irrelevant when time passes.

To pick just one example, Sinclair Lewis’s Babbitt, which I sincerely loved, may be too engrossed in the foibles of 20th-century American culture to be still relevant in 500 years. Its power comes from its total evisceration of American ways; and, luckily for Lewis, those ways have changed surprisingly little in their essentials since his day. The book’s continuing appeal therefore depends largely on how much the culture does or does not change. (That being said, the novel has a strong existentialist theme that may allow it to persist.)

Thus period pieces largely concern themselves with getting us to question particular habits or assumptions—in Lewis’s case, the vanities and superficialities of American life.

The greatest works of art, by contrast, are great precisely because they reconnect us with the mystery of the world. They don’t just get us to question certain assumptions, but all assumptions. They bring us face to face with the incomprehensibility of life, the great and frightening chasm that we try to bridge over with habit and convention.

No matter how many times we watch Hamlet, we can never totally understand Hamlet’s motives, the mysterious inner workings of his mind. No matter how long we stare into van Gogh’s eyes, we can never penetrate the machinations of that elusive mind. No matter how many times we listen to Bach’s Art of Fugue, we can never entirely wrap our minds around the dancing, weaving melodies, the baffling mixture of mathematical elegance and artistic sensitivity.

Why are these works so continually fresh? Why do they never seem to grow old? I cannot say. It is as if they are infinitely subtle, allowing you to discover new shades of meaning every time they are experienced anew. You can fall into them, just as I felt myself falling into van Gogh’s eyes as he stared at me across space and time.

When I listen to the greatest works of art, I feel like I do when I stare into the starry sky: absolutely small in the presence of something immense and immensely beautiful. Listening to Bach is like listening to the universe itself, and reading Shakespeare is like reading the script of the human soul. These works do not merely reconnect me to my senses, helping me to rid myself of boredom. They do not merely remind me that the world is an interesting place. Rather, these works remind me that I myself am a small part of an enormous whole, and should be thankful for every second of life, for it is a privilege to be alive somewhere so lovely and mysterious.

The Illogic of Discrimination

Discrimination is a problem. It is a blight on society and a blemish on personal conduct. During the last one hundred or so years, the fight against discrimination has played an increasingly important role in political discourse, particularly on the left: against racism, sexism, homophobia, transphobia, and white privilege. Nowadays this discourse has its own name: identity politics. We both recognize and repudiate more kinds of discrimination than ever before.

This is as it should be. Undeniably many forms of discrimination exist; and discrimination—depriving people of rights and privileges without legitimate reason—is the enemy of equality and justice. If we are to create a more fair and open society, we must fight to reduce prejudice and privilege as much as we can. Many people are already doing this, of course; and identity politics is rightly here to stay.

And yet, admirable as the goals of identity politics are, I am often dissatisfied with its discourse. Specifically, I think we are often not clear about why certain statements or ideas are discriminatory. Often we treat a statement as prejudiced simply because it offends people. I have frequently heard arguments of this form: “As a member of group X, I am offended by Y; therefore Y is discriminatory to group X.”

This argument—the Argument from Offended Feelings, as I’ll call it—is unsatisfactory. First, it is fallacious because it generalizes improperly. It is the same error someone commits when they conclude, from eating bad sushi once, that all sushi is bad: the argument takes one case and applies it to a whole class of things.

Even if many people, all belonging to the same group, find a certain remark offensive, it still is invalid to conclude that the remark is intrinsically discriminatory: this only shows that many people think it is. Even the majority may be wrong—such as the many people who believe that the word “niggardly” comes from the racial slur and is thus racist, while in reality the word has no etymological or historical connection with the racial slur (it comes from Middle English).

Subjective emotional responses should not be given an authoritative place in the question of prejudice. Emotions are not windows into the truth. They are of no epistemological value. Even if everybody in the world felt afraid of me, it would not make me dangerous. Likewise, emotional reactions are not enough to show that a remark is discriminatory. To do that, it must be shown how the remark incorrectly assumes, asserts, or implies something about a certain group.

In other words, we must keep constantly in mind the difference between a statement that is discriminatory and one that is merely offensive. Discrimination is wrong because it leads to unjust actions; offending people, on the other hand, is not intrinsically wrong. Brave activists, fighting for a good cause, often offend many.

Thus it is desirable to have logical tests, rather than just emotional responses, for identifying discriminatory remarks. I hope to provide a few tools in this direction. But before that, here are some practical reasons for preferring logical to emotional criteria.

Placing emotions, especially shared emotions, at the center of any moral judgment makes a community prone to fits of mob justice. If the shared feelings of outrage, horror, or disgust of a group are sufficient to condemn somebody, then we have the judicial equivalent of a witch-hunt: the evidence for the accusation is not properly examined, and the criteria that separate good evidence from bad are ignored.

Another practical disadvantage of giving emotional reactions a privileged place in judgments of discrimination is that it can easily backfire. If enough people say that they are not offended, or if emotional reactions vary from outrage to humor to ambivalence, then the community cannot come to a consensus about whether any remark or action is discriminatory. Insofar as collective action requires consensus, this is an obvious limitation.

What is more, accusations of discrimination are extremely easy to deny if emotional reactions are the ultimate test. The offended parties can simply be dismissed as “over-sensitive” (a “snowflake,” more recently), which is a common rhetorical strategy among the right (and is sometimes used on the left, too). The wisest response to this rhetorical strategy, I believe, is not to re-affirm the validity of emotions in making judgments of discrimination—this leads you into the same trap—but to choose more objective criteria. Some set of non-emotional, objective criteria for determining whether an action is discriminatory is highly desirable, I think, since there is no possibility of a lasting consensus without it.

So if these emotional tests can backfire, what less slippery test can we use?

To me, discriminatory ideas—and the actions predicated on these ideas—are discriminatory precisely because they are based on a false picture of reality: they presuppose differences that do not exist, and mischaracterize or misunderstand the differences that do exist. This is important, because morally effective action of any kind requires a basic knowledge of the facts. A politician cannot provide for her constituents’ needs if she does not know what they are. A lifeguard cannot save a drowning boy if he was not paying attention to the water. Likewise, social policies and individual actions, if they are based on a false picture of human difference, will be discriminatory, even with the best intentions in the world.

I am not arguing that discrimination is wrong purely because of this factual deficiency. Indeed, if I falsely think that all Hungarians love bowties, although this idea is incorrect and therefore discriminatory, this will likely not make me do anything immoral. Thus it is possible, in theory at least, to hold discriminatory views and yet be a perfectly ethical person. It is therefore necessary to distinguish between a statement that is offensive (it upsets people), one that is discriminatory (it is factually wrong about a group of people), and one that is immoral (it harms people and causes injustice). The three categories do not necessarily overlap, in theory or in practice.

It is obvious that, in our society, discrimination is usually far more nefarious than believing that Hungarians love bowties. Discrimination harms people, sometimes kills people; and discrimination causes systematic injustice. My argument is that to prove any policy or idea is intrinsically discriminatory requires proving that it asserts something empirically false.

Examples are depressingly numerous. Legal segregation in the United States was based on the premise that there existed a fundamental difference between blacks and whites, a difference that justified different treatment and physical separation. Similarly, Aristotle argued that slavery was legitimate because some people were born slaves: they were intrinsically slavish. Now, both of these ideas are empirically false. They assert things about reality that are either meaningless, untestable, or contrary to the evidence; and so any actions predicated on these ideas will be discriminatory—and, as it turned out, horrific.

These are not special cases. European antisemitism has always incorporated myths and lies about the Jewish people: tales of Jewish murders of Christian children, of widespread Jewish conspiracies, and so on. Laws barring women from voting and rules preventing women from attending universities were based on absurd notions about women’s intelligence and emotional stability. Name any group which has faced discrimination, and you can find a corresponding myth that attempts to justify the prejudice. Name any group which has dominated, and you can find an untruth to justify their “superiority.”

In our quest to determine whether a remark is discriminatory, it is worth taking a look, first of all, at the social categories themselves. Even superficial investigation will reveal that many of our social categories are close to useless, scientifically speaking. Our understanding of race in the United States, for example, gives an entirely warped picture of human difference. Specifically, the terms “white” and “black” have shifted in meaning and extent over time, and in any case were never based on empirical investigation.

Historically speaking, our notion of what it means to be “white” used to be far more exclusive than it is now, previously excluding Jews and Eastern Europeans. Likewise, as biological anthropologists never tire of telling us, there is more genetic variation in the continent of Africa than in the rest of the world combined. Our notions of “white” and “black” simply fail to do justice to the extent of genetic variation and intermixture that exists in the United States. We categorize people into a useless binary using crude notions of skin color. Any policy based on supposed innate, universal differences between “black” and “white” will therefore be based on a myth. Similar criticisms can be made of our common notions of gender and sexual orientation.

Putting aside the sloppy categories, discrimination may be based on bad statistics and bad logic. Here are the three errors I think are most common in discriminatory remarks.

The first is to generalize improperly: to erroneously attribute a characteristic to a group. This type of error is exemplified by Randy Newman’s song “Short People,” when he says short people “go around tellin’ great big lies.” I strongly suspect that it is untrue that short people tell, on average, more lies than taller people, which makes this an improper generalization.

This is a silly example, of course. And it is worth pointing out that some generalizations about group differences are perfectly legitimate. It is true, for example, that Spanish people eat more paella than Japanese people. When done properly, generalizations about people are useful and often necessary. The problem is that we are often poor generalizers. We jump to conclusions—using the small sample of our experience to justify sweeping pronouncements—and we are apt to give disproportionate weight to conspicuous examples, thus skewing our judgments.

Our poor generalizations are, all too often, mixed up with more nefarious prejudices. Trump exemplified this when he tweeted a table of crime statistics back in November of 2015. The statistics were ludicrously wrong in every respect. Notably, they claimed that more whites are killed by blacks than by other whites, when in reality more whites are killed by other whites. (This shouldn’t be a surprise, since most murders take place within the same community; and since people of the same race tend to live in the same community, most murders are intra-racial.)
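For readers who like to see the arithmetic, here is a minimal sketch of why that parenthetical holds. Every number below is invented for illustration (the population shares are rough figures, and the 85% intra-community rate is an assumption, not a measured statistic); the only point is that if most murders stay within a community, the intra-racial share dominates automatically:

```python
# A toy model of why most murders are intra-racial.
# All numbers are invented for illustration; these are not real crime statistics.

within_community = 0.85  # assumed share of murders occurring within the victim's own community

# Rough, hypothetical population shares; cross-community murders are
# spread across the other groups in proportion to population.
population_shares = {"white": 0.60, "black": 0.13, "other": 0.27}

victim_group = "white"
others = {g: s for g, s in population_shares.items() if g != victim_group}
total_other_share = sum(others.values())

offender_mix = {victim_group: within_community}
for group, share in others.items():
    offender_mix[group] = (1 - within_community) * share / total_other_share

for group, share in offender_mix.items():
    print(f"Share of {victim_group} victims killed by {group}: {share:.0%}")
# Share of white victims killed by white: 85%
# Share of white victims killed by black: 5%
# Share of white victims killed by other: 10%
```

Under these assumptions, the “killed by other whites” share dwarfs every cross-racial share, whatever the groups’ relative sizes. That is exactly why the tweeted table should have raised eyebrows.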

The second type of error involved in prejudice is to draw conclusions about an individual based on their group. This is a mistake even when the generalizations about the group are accurate. Even if it were statistically true, for example, that short people lied more often than tall people, it would still be invalid to assume that any particular short person is a liar.

The logical mistake is obvious: even if a group has certain characteristics on average, that does not mean that every individual will have these characteristics. On average, Spaniards are shorter than me; but that does not mean that I can safely assume any Spaniard will be shorter than I am. On average, most drivers are looking out for pedestrians; but that doesn’t mean I can safely run into the road.

Of course, almost nobody, given a half-second to reflect, would believe that every single member of a group has a certain quality. More often, people are just wildly mistaken about how likely a certain person is to have any given quality—most often, we greatly overestimate.

It is statistically true, for example, that Asian Americans tend to do well on standardized math and science exams. But this generalization, which is valid, does not mean you can safely ask any Asian American friend for help on your science homework. Even though Asian Americans do well in these subjects as a group, you should still expect to see many individuals who are average or below average. This is basic statistics—and yet this error accounts for a huge amount of racist and sexist remarks.
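Since this is, at bottom, a statistical claim, a quick simulation may make it concrete. The numbers here are invented purely for illustration (a modest five-point gap in group averages, a common spread); the point is that even the “higher-scoring” group contains a large share of individuals below the other group’s average:

```python
# A minimal sketch of the second error: group averages say little about individuals.
# Means, spreads, and sample sizes are invented for illustration.
import random

random.seed(42)

group_a = [random.gauss(105, 15) for _ in range(100_000)]  # higher average score
group_b = [random.gauss(100, 15) for _ in range(100_000)]  # lower average score

mean_b = sum(group_b) / len(group_b)

# What share of the "higher-scoring" group falls below the
# "lower-scoring" group's average?
below = sum(score < mean_b for score in group_a) / len(group_a)
print(f"{below:.0%} of group A scores below group B's average")  # roughly 37%
```

With even a modest difference in averages, more than a third of the “better” group sits below the “worse” group’s mean. Assuming otherwise about any individual is precisely the second error.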

Aside from falsely assuming that every member of a group will be characterized by a generalization, the second error also results from forgetting intersectionality: the fact that any individual is inevitably a member of many, intersecting demographic groups. Race, gender, income bracket, sexual orientation, education, religion, and a host of other categories will apply to any single individual. Predicting how the generalizations associated with these categories—which may often make contradictory predictions—will play out in any individual case is close to impossible.

This is not even to mention all of the manifold influences on behavior that are not included in these demographic categories. Indeed, it is these irreducibly unique experiences, and our unique genetic makeup, that make us individuals in the first place. Humans are not just members of a group, nor even members of many different, overlapping groups: each person is sui generis.

In sum, humans are complicated—the most complicated things in the universe, so far as we know—and making predictions about individual people using statistical generalizations of broad, sometimes hazily defined categories is hazardous at best, and often foolish. Moving from the specific to the general is fairly unproblematic; we can collect statistics and use averages and medians to analyze sets of data. But moving from the general to the specific is far more troublesome.

The third error is to assert a causal relationship where we only have evidence for correlation. Even if a generalization is valid, and even if an individual fits into this generalization, it is still not valid to conclude that an individual has a certain quality because they belong to a certain group.

Let me be more concrete. As we have seen, it is a valid generalization to say that Asian Americans do well on math and science exams. Now imagine that your friend John is Asian American, and also an excellent student in these subjects. Even in this case, to say that John is good at math “because he’s Asian” would still be illogical (and therefore racist). Correlation does not show causation.

First of all, it may not be known why Asian Americans tend to do better. And even if a general explanation is found—for example, that academic achievement is culturally prized and thus families put pressure on children to succeed—this explanation may not apply in your friend John’s case. Maybe John’s family does not pressure him to study and he just has a knack for science.

Further, even if this general explanation did apply in your friend John’s case (his family pressures him to study for cultural reasons), the correct explanation for him being a good student still wouldn’t be “because he’s Asian,” but would be something more like “because academic achievement is culturally prized in many Asian communities.” In other words, the cause would be ultimately cultural, and not racial. (I mean that this causation would apply equally to somebody of European heritage being raised in an Asian culture, a person who would be considered “white” in the United States. The distinction between cultural and biological explanations is extremely important, since one posits only temporary, environmental differences while the other posits permanent, innate differences.)

In practice, these three errors are often run together. An excellent example of this is from Donald Trump’s notorious campaign announcement: “When Mexico sends its people, they’re not sending their best. … They’re sending people that have lots of problems, and they’re bringing those problems with us [sic.]. They’re bringing drugs. They’re bringing crime. They’re rapists.”

Putting aside the silly notion of Mexico “sending” its people (they come of their own accord), the statement is discriminatory because it generalizes falsely. Trump’s words give the impression that a huge portion, maybe even the majority, of Mexican immigrants are criminals of some kind—and this isn’t true. (In reality, crime statistics for undocumented immigrants can put native citizens to shame.)

Trump then falls into the third error by treating people as inherently criminal—the immigrants simply “are” criminals, as if they were born that way. Even if it were proven that Mexican immigrants had significantly higher crime rates, it would still be an open question why this was so. The explanation might have nothing to do with their cultural background or any previous history of criminality. It might be found, for example, that poverty and police harassment significantly increased criminality; and in this case the government would share some of the responsibility.

Donald Trump committed the second error in his infamous comments about Judge Gonzalo Curiel, who was overseeing a fraud lawsuit against Trump University. Trump attributed Curiel’s (perceived) hostility to his Mexican heritage. Trump committed a simple error of fact when he called Curiel “Mexican” (Curiel was born in Indiana), and then committed a logical fallacy when he concluded that the judge’s actions and attitudes were due to his being of Mexican heritage. Even if it were true (as I suspect it is) that Mexican-Americans, on the whole, don’t like Trump, it still doesn’t follow that any given individual Mexican-American doesn’t like him (2nd error); and even if Curiel did dislike Trump, it wouldn’t follow that it was because of his heritage (3rd error).

These errors and mistakes are just my attempt at an outline of how discrimination can be criticized on logical, empirical grounds. Certainly there is much more to be said in this direction. What I hoped to show in this piece was that this strategy is viable, and ultimately more desirable than using emotional reactions as a test for prejudice.

Discourse, agreement, and cooperation are impossible when people are guided by emotional reactions. We tend to react emotionally along the lines of factions—indeed, our emotional reactions are conditioned by our social circumstances—so privileging emotional reactions will only exacerbate disagreements, not help to bridge them. In any case, besides the practical disadvantages—which are debatable—I think emotional reactions are not reliable windows into the truth. Basing reactions, judgments, and criticisms on sound reasoning and dependable information is always a better long-term strategy.

For one, this view of discrimination provides an additional explanation for why prejudice is so widespread and difficult to eradicate. We humans have inherited brains that are constantly trying to understand our world in order to navigate it more efficiently. Sometimes our brains make mistakes because we generalize too eagerly from limited information (1st error), or because we hope to fit everything into the same familiar pattern (2nd error), or because we are searching for causes of the way things work (3rd error).

So the universality of prejudice can be partially explained, I think, by the need to explain the social world. And once certain ideas become ingrained in somebody’s worldview, it can be difficult to change their mind without undermining their sense of reality or even their sense of identity. This is one reason why prejudices can be so durable (not to mention that certain prejudices justify convenient, if morally questionable, behaviors, as well as signal a person’s allegiance to a certain group).

I should say that I do not think that discrimination is simply the result of observational or logical error. We absorb prejudices from our cultural environment; and these prejudices are often associated with divisive hatreds and social tension. But even these prejudices absorbed from the environment—that group x is lazy, that group y is violent, that group z is unreliable—inevitably incorporate some misconception of the social world. Discrimination is not just a behavior. Mistaken beliefs are involved—sometimes obliquely, to be sure—with any prejudice.

This view of prejudice—as caused, at least in part, by an incorrect picture of the world, rather than pure moral depravity—may also allow us to combat it more effectively. It is easy to imagine a person with an essentially sound sense of morality who nevertheless perpetrates harmful discrimination because of prejudices absorbed from her community. Treating such a person as a monster will likely produce no change of perspective; people are not liable to listen when they’re being condemned. Focusing on somebody’s misconceptions may allow for a less adversarial, and perhaps more effective, way of combating prejudice. And this is not to mention the obvious fact that somebody cannot be morally condemned for something they cannot help; and we cannot help being born into a community that instructs its members in discrimination.

Even if this view does not adequately explain discrimination, and even if it does not provide a more effective tool in eliminating it, this view does at least orient our gaze towards the substance rather than the symptoms of discrimination.

Because of their visibility, we tend to focus on the trappings of prejudice—racial slurs, the whitewashed casts of movies, the use of pronouns, and so on—instead of the real meat of it: the systematic discrimination—economic, political, judicial, and social—that is founded on an incorrect picture of the world. Signs and symptoms of prejudice are undeniably important; but eliminating them will not fix the essential problem: that we see differences that aren’t really there, we assume differences without having evidence to justify these assumptions, and we misunderstand the nature and extent of the differences that really do exist.

On the Quarter-Life Crisis

From College to Chaos

In the modern world, there is a certain existential dread that comes with being in your twenties. Certainly this is true in my case.

This dread creeps up on you in the years of struggle, confusion, and setbacks that many encounter after graduating university. There are many reasons for this.

One is that college simply does not prepare you for the so-called “real world.” In college, you know what you have to do, more or less. Every class has a syllabus. Every major has a list of required courses. You know your GPA and how many credits you need to graduate.

College lacks some of that uncertainty and ambiguity that life—particularly life as a young adult—so abundantly possesses. There is a clear direction forward and it’s already been charted out for you. You know where you’re going and what you have to do to get there.

Another big difference is that college life is fairly egalitarian. Somebody might have a cuter boyfriend, a higher GPA, a richer dad, or whatever, but in the end you’re all just students. As a consequence, envy doesn’t have very much scope. Not that college students don’t get envious, but there are far fewer things, and less serious things, to get envious about. You don’t scroll through your newsfeed and see friends bragging about promotions, proposals, babies, and paid vacations.

There’s one more big difference: nothing you do in college is a truly big commitment. The biggest commitment you have to make is what to major in; and even that is only a commitment for four years or less. Your classes only last a few months, so you don’t have to care much about professors. You are constantly surrounded by people your age, so friendships and relationships are easy to come by.

Then you graduate, and you’re thrown into something else entirely. Big words like Career and Marriage and Adulthood start looming large. You start asking yourself questions. When you take a job, you ask yourself “Can I imagine doing this for the rest of my life?” When you date somebody, you say to yourself “Can I imagine living with this person the rest of my life?” If you move to another city, you wonder “Could I make a home here?”

You don’t see adults as strange, foreign creatures anymore, but as samples of what you might become. You are expected, explicitly and implicitly, to become an adult yourself. But how? And what type of adult? You ask yourself, “What do I really want?” Yet the more you think about what you want, the less certain it becomes. It’s easy to like something for a day, a week, a month. But for the rest of your life? How are you supposed to commit yourself for such an indefinitely long amount of time?

Suddenly your life is not just potential anymore. Very soon, it will become actual. Instead of having a future identity, you will have a present identity. This is really frightening. When your identity is only potential, it can take on many different forms in your imagination. But when your identity is present and actual, you lose the deliciousness of endless possibility. You are narrowed down to one thing. Now you have to choose what that thing will be. But it’s such a hard choice, and the clock keeps ticking. You feel like you’re running out of time. What will you become?

The American Dream

A few weeks ago I was taking a long walk, and my route took me through a wealthy suburban neighborhood. Big, stately houses with spacious driveways, filled with expensive cars, surrounded me on all sides. The gardens were immaculate; the houses had big lawns with plenty of trees, giving them privacy from their neighbors. And they had a wonderful view, too, since the neighborhood was right on the Hudson River.

I was walking along, and I suddenly realized that this is what I’m supposed to want. This is the American Dream, right? A suburban house, a big lawn, a few cars and a few kids.

For years I’d been torturing myself with the idea that I would never achieve success. Now that I was looking at success, what did it make me feel? Not much. In fact, I didn’t envy the people in those houses. It’s not that I pitied them or despised them. I just couldn’t imagine that their houses and cars and their view of the river, wonderful as it all was, made them appreciably happier than people without those things.

So I asked myself, “Do I really want all these things? A house? A wife? Kids?” In that moment, the answer seemed to be “No, I don’t want any of that stuff. I want my freedom.”

Yet nearly everybody wants this stuff—eventually. And I have a natural inclination to give people some credit. I don’t think folks are mindless cultural automatons who simply aspire to things because that’s how they’ve been taught. I don’t think everybody who wants conventional success is a phony or a sell-out.

Overwhelmingly, people genuinely want these things when they reach a certain point in their lives. I’m pretty certain I will want them, too, and maybe soon. The thing that feels uncomfortable is that, in the meantime, since I expect to want these things, I feel an obligation to work towards them, even though they don’t interest me now. Isn’t that funny?

Equations of Happiness

One of the reasons that these questions can fill us with dread is that we absorb messages from society about the definition of happiness.

One of these messages is about our career. Ever since I was young, I’d been told “Follow your passion!” or “Follow your dreams!” The general idea is that, if you make your passion into your career, you will be supremely happy, since you’ll get paid for what you like doing. Indeed, the phrase “Get paid for what you like doing” sometimes seems like a pretty decent definition of happiness.

Careers aren’t the only thing we learn to identify with happiness. How many stories, novels, and movies end with the boy getting the girl, and the couple living happily ever after? In our culture, we have a veritable mythology of love. Finding “the one,” finding your “perfect match,” and in the process finding the solution to life—this is a story told over and over again, until we subconsciously believe that romantic love is the essential ingredient of life.

Work and Love are two of the biggest, but there are so many other things that we learn to identify with happiness. Having a perfect body, being beautiful and fit. Beating others in competitions, winning contests, achieving things. Being cool and popular, getting accepted into a group. Avoiding conflict, pleasing others. Having the right opinions, knowing the truth. This list only scratches the surface.

In so many big and little ways, in person and in our media, we equate these things with happiness and self-worth. And when we even suspect that we don’t have them—that we might not be successful, popular, right, loved, or whatever—then we feel a sickening sense of groundlessness, and we struggle to put that old familiar ground beneath our feet.

Think of all the ways that you measure yourself against certain, self-imposed standards. Think of all the times you chastise yourself for falling short, judge yourself harshly for failing to fit the self-image you’ve built up, or fall into a dark hole when something doesn’t go right. Think about all the things you equate with happiness.

Now, think about how you judge your good friends. Do you look down on them if they aren’t successful? Do you think they’re worthless if they didn’t find “the one”? Do you spend much time judging them for their attractiveness, popularity, or coolness? Do you like them less if they lose or fail? If someone else rejects them, do you feel more prone to reject them too?

I’d wager the answer to all these questions is “No.” So why do we treat ourselves this way?

Is it the Money?

There’s no question that the quarter-life crisis is partly a product of privilege. It takes a certain amount of affluence to agonize over what your “calling” will be or who will be “the one.” Lots of people have to pay the rent, and their work and romantic options are shaped by that necessity. When you’re struggling to keep your head above water, your anxiety is more practical than existential. This thought makes me feel guilty for complaining.

But affluence is only part of it. The other part is expectation. Many of us graduated full of hope and optimism, and found ourselves in a limping economy, dragging behind us a big weight of college debt. Just when we were supposed to be hitting the ground running, we were struggling to find jobs and worrying about how to pay for the degrees we had just earned. And since many of us had been encouraged—follow your dreams!—to study interesting but financially impractical things, our expensive degrees seemed to hurt us more than help us.

This led to a lot of bitterness. My generation had been told that we could be anything we wanted. Just do the thing you’re passionate about, and everything will follow. That was the advice. But when we graduated, it seemed that we’d been conned into paying thousands of dollars for a worthless piece of paper. All this bred anger and disenchantment among twenty-somethings, which is why, I think, so many of us gravitated towards Bernie Sanders. Our parents had a car and a house and raised a family, while we were living at home, working at Starbucks, and using our paychecks to pay off our anthropology degrees.

For a long while I used my sense of injustice to justify my angst. I had the persistent feeling that it wasn’t fair, and I went back and forth between being angry at myself and being angry at the world.

Nevertheless, I think that, for most middle class people, financial factors don’t really explain the widespread phenomenon of the quarter-life crisis.

I realized this when I started my first decent-paying job. I wasn’t making a lot of money, you understand, but I was making more than enough for everything I wanted. The result? I felt even worse. When I took care of the money problem, the full weight of the existential crisis hit me. I kept asking myself, “Can I really imagine doing this forever?” I thought about my job, and felt empty. And this feeling of emptiness really distressed me, because I thought my job was supposed to be exciting and fulfilling.

This was a valuable lesson for me. I expected the money to calm me and make me happy, and yet I only felt worse and worse. Clearly, the problem was with my mindset and not my circumstances. How to fix it?

From Crisis to Contentment

Well, I’m not out of it yet. But I have made some progress.

First, I think it’s important to take it easy on ourselves. We are so prone to hold ourselves up to certain self-imposed standards, or some fixed idea of who we are. We also like to compare ourselves with others, feeling superior when we’re doing “better,” and worthless when we’re doing “worse.” Take it easy with all that. All of these standards are unreal. You tell yourself you’re “supposed” to be doing such and such, making this much money, and engaged by whatever age. All this is baloney. You aren’t “supposed” to be or to do anything.

Bertrand Russell said: “At twenty men think that life will be over at thirty. I, at the age of fifty-eight, can no longer take that view.” He’s right: There is nothing magical about the age of thirty. There is no age you pass when you don’t have to worry about money, about your boss, about your partner, about your health. There will always be something to worry about. There will always be unexpected curveballs that upset your plans. Don’t struggle to escape the post-college chaos; try to accept it as normal.

Don’t equate your happiness or your self-worth with something external. You are not your job, your hobby, your paycheck, your body, your friend group, or your relationship. You aren’t a collection of accomplishments or a Facebook profile. You’re a person, and you have worth just because you’re a person, pure and simple. Everything else is incidental.

If you want to be rich, famous, loved, successful—that’s fine, but that won’t make you any better than other people. It might not even make you happier. Don’t worry so much about putting ground under your feet. Don’t fret about establishing your identity. You will always be changing. Life will always be throwing problems at you, and sometimes things will go wrong. Try to get comfortable with the impermanence of things.

Don’t look for the “meaning” of life. Don’t look for “the answer.” Look for meaningful experiences of being alive. Appreciate those moments when you feel totally connected with life, and try to seek those moments out. Realize that life is just a collection of moments, and not a novel with a beginning, middle, and end.

These moments are what bring you happiness, not the story you tell about yourself. So you don’t have to feel existential dread about these big Adult Questions of Love and Work. It’s important to find a good partner and a good job. These things are very nice, but they’re not what give your life value or define you or make life worth living. Treat them as practical problems, not existential ones. Like any practical problem, they might not have a perfect solution, and you might fail—which is frustrating. But failure won’t make you worthless, just like success won’t legitimize your life.

One last thing. Stop caring about what other people think. Who cares? What do they know? Be a friend to yourself, be loyal to yourself. Every time you judge yourself, you betray yourself. In a thousand little ways throughout the day, we reject our experiences and our world. Don’t reject. Accept. Stand steadfastly by yourself as you ride down the steady stream of thoughts, feelings, flavors, colors, sounds, mistakes, accidents, failures, successes, and petty frustrations that make up life as we know it.

On Egotism and Education

A while ago a friend asked me an interesting question.

As usual, I was engrossed in some rambling rant about a book I was reading—no doubt enlarging upon the author’s marvelous intellect (and, by association, my own). My poor friend, who is by now used to this sort of thing, suddenly asked me:

“Do you really think reading all these books has made you a better person?”

“Well, yeah…” I stuttered. “I think so…”

An awkward silence took over. I could truthfully say that reading had improved my mind, but that wasn’t the question. Was I better? Was I wiser, more moral, calmer, braver, kinder? Had reading made me a more sympathetic friend, a more caring partner? I didn’t want to admit it, but the answer seemed to be no.

This wasn’t an easy thing to face up to. My reading was a big part of my ego. I was immensely proud, indeed even arrogant, about all the big books I’d gotten through. Self-study had strengthened my sense of superiority.

But now I was confronted with the fact that, however much more knowledgeable and clever I had become, I had no claim to superiority. In fact—although I hated even to consider the possibility—reading could have made me worse in some ways, by giving me a justification for being arrogant.

This phenomenon is by no means confined to myself. Arrogance, condescension, and pretentiousness are ubiquitous qualities in intellectual circles. I know this both at first- and second-hand. While lip-service is often given to humility, the intellectual world is rife with egotism. And often I find that the more well-educated someone is, the more likely they are to assume a condescending tone.

This is the same condescending tone that I sometimes found myself using in conversations with friends. But condescension is of course more than a tone; it is an attitude towards oneself and the world. And this attitude can be fostered and reinforced by habits you pick up through intellectual activity.

One of these habits, for me most closely connected with reading philosophy, is argumentativeness. Philosophy is, among other things, the art of argument; and good philosophers are able to bring to their arguments a level of rigor, clarity, and precision that is truly impressive. The irony here is that there is far more disagreement in philosophy than in any other discipline. To be fair, this is largely due to the abstract, mysterious, and often paradoxical nature of the questions philosophers investigate—questions that resist even the most thorough analysis.

Nevertheless, given that their professional success depends upon putting forward the strongest argument on a given problem, philosophers devote a lot of time to picking apart the theories and ideas of their competitors. Indeed, the demolition of a rival point of view can assume supreme importance. A good example of this is Gilbert Ryle’s The Concept of Mind—a brilliant and valuable book, but one that is mainly devoted to debunking an old theory rather than putting forward a new one.

This sort of thing isn’t confined to philosophy, of course. I have met academics in many disciplines whose explicit goal is to quash another theory rather than to provide a new one. I can sympathize with this, since proving an opponent wrong can feel immensely powerful. To find a logical fallacy, an unwarranted assumption, an ambiguous term, an incorrect generalization in a competitor’s work, and then to focus all your firepower on this structural weakness until the entire argument comes tumbling down—it’s really satisfying. Intellectual arguments can have all the thrill of combat, with none of the safety hazards.

But to steal a phrase from the historian Richard Fletcher, disputes of this kind usually generate more heat than light. Disproving a rival claim is not the same thing as proving your own claim. And when priority is given to finding the weaknesses rather than the strengths of competing theories, the result is bickering rather than the pursuit of truth.

To speak from my own experience, in the past I got to the point where I considered it a sign of weakness to agree with somebody. Endorsing someone else’s conclusions without reservations or qualifications seemed spineless. And to fail to find the flaws in another thinker’s argument—or, worse yet, to put forward my own flawed argument—was simply mortifying, a personal failing. Needless to say, this mentality is neither desirable nor productive, personally or intellectually.

Besides argumentativeness, another condescending habit that intellectual work can reinforce is name-dropping.

In any intellectual field, certain thinkers reign supreme. Their theories, books, and even their names carry a certain amount of authority; and this authority can be commandeered by secondary figures through name-dropping. This is more than simply repeating a famous person’s name (although that’s common); it involves positioning oneself as an authority on that person’s work.

Two books I read recently—Mortimer Adler’s How to Read a Book and Harold Bloom’s The Western Canon—are prime examples of this. Both authors wield the names of famous authors like weapons. Shakespeare, Plato, and Newton are bandied about, used to cudgel enemies and to cow readers into submission. References to famous thinkers and writers can even be used as substitutes for real argument. This is the infamous argument from authority, a fallacy easy to spot when explicit, but much harder in the hands of a skilled name-dropper.

I have certainly been guilty of this. Even while I was still an undergraduate, I realized that big names have big power. If I even mentioned the names of Dante or Milton, Galileo or Darwin, Hume or Kant, I instantly gained intellectual clout. And if I found a way to connect the topic under discussion to any famous thinker’s ideas—even if that connection was tenuous and forced—it gave my opinions weight and made me seem more “serious.” Of course I wasn’t doing this intentionally to be condescending or lazy. At the time, I thought that name-dropping was the mark of a dedicated student, and perhaps to a certain extent it is. But there is a difference between appropriately citing an authority’s work and using their work to intimidate people.

There is a third way that intellectual work can lead to condescending attitudes, and that is, for lack of a better term, political posturing. This particular attitude isn’t very tempting for me, since I am by nature not very political, but this habit of mind is extremely common nowadays.

By political posturing I mean several related things. Most broadly, I mean when someone feels that people (himself included) must hold certain beliefs in order to be acceptable. These can be political or social beliefs, but they can also be more abstract, theoretical beliefs. In any group—be it a university department, a political party, or just a bunch of friends—a certain amount of groupthink is always a risk. Certain attitudes and opinions become associated with the group, and they become a marker of identity. In intellectual life this is a special hazard because proclaiming fashionable and admirable opinions can replace the pursuit of truth as the criterion of acceptability.

At its most extreme, this kind of political posturing can lead to a kind of gang mentality, wherein disagreement is seen as evil and all dissent must be punished with ostracism and mob justice. This can be observed in the Twitter shame campaigns of recent years, but a similar thing happens in intellectual circles.

During my brief time in graduate school, I felt an intense and ceaseless pressure to espouse leftist opinions. This pressure seemed to be ubiquitous: students and professors sparred with one another, in person and in print, by trying to prove that their rivals were not genuinely right-thinking (or “left-thinking,” as the case may be). Certain thinkers could not be seriously discussed, much less endorsed, because their works had intolerable political ramifications. Contrariwise, questioning the conclusions of properly left-thinking people could leave you vulnerable to accusations about your fidelity to social justice or economic equality.

But political posturing has a milder form: know-betterism. Know-betterism is political posturing without the moral outrage, and its victims are smug rather than indignant.

The book Language, Truth and Logic by A.J. Ayer comes to mind, wherein the young philosopher, still in his mid-twenties, simply dismisses the work of Plato, Aristotle, Spinoza, Kant, and others as hogwash, because it doesn’t fit into his logical positivist framework.

Indeed, logical positivism is an excellent example of the pernicious effects of know-betterism. In retrospect, it seems incredible that so many brilliant people endorsed it, because logical positivism has crippling and obvious flaws. But not only did people believe it; they thought it was “The Answer”—the solution to every philosophical problem—and considered anyone who thought otherwise a crank or a fool, somebody who couldn’t see the obvious. This is the danger of groupthink: when everyone “in the know” believes something, it can seem obviously right, regardless of the strength of the ideas.

The last condescending attitude I want to mention is rightness—the obsession with being right. Now of course there’s nothing wrong with being right. Getting nearer to the truth is the goal of all honest intellectual work. But to be overly preoccupied with being right is, I think, both an intellectual and a personal shortcoming.

As far as I know, the only area of knowledge in which real certainty is possible is mathematics. The rest of life is riddled with uncertainty. Every scientific theory might, and probably will, be overturned by a better theory. Every historical treatise is open to revision when new evidence, priorities, and perspectives arise. Philosophical positions are notoriously difficult to prove, and new refinements are always around the corner. And despite the best efforts of the social sciences, the human animal remains a perpetually surprising mystery.

To me, this uncertainty in our knowledge means that you must always be open to the possibility that you are wrong. The feeling of certainty is just that—a feeling. Our most unshakeable beliefs are always open to refutation. But when you have read widely on a topic, studied it deeply, thought it through thoroughly, it gets more and more difficult to believe that you are possibly in error. Because so much effort, thought, and time has gone into a conclusion, it can be personally devastating to think that you are mistaken.

This is human, and understandable, but it can also clearly lead to egotism. Many thinkers make it their goal in life to impose their conclusions upon the world. They struggle valiantly for the acceptance of their opinions, and grow resentful and bitter when people disagree with or, worse, ignore them. Every exchange thus becomes a struggle to push your views down the other person’s throat.

This is not only an intellectual shortcoming—since it is highly unlikely that your views represent the whole truth—but it is also a personal shortcoming, since it makes you deaf to other people’s perspectives. When you are sure you’re right, you can’t listen to others. But everyone has their own truth. I don’t mean that every opinion is equally valid (since there are such things as uninformed opinions), but that every opinion is an expression, not only of thoughts, but of emotions, and emotions can’t be false.

If you want to have a conversation with somebody instead of giving them a lecture, you need to believe that they have something valuable to contribute, even if they are disagreeing with you. In my experience it is always better, personally and intellectually, to try to find some truth in what someone is saying than to search for what is untrue.

Lastly, being overly concerned with being right can make you intellectually timid. Going out on a limb, disagreeing with the crowd, putting forward your own idea—all this puts you at risk of being publicly wrong, and thus will be avoided out of fear. This is a shame. The greatest adventure you can take in life and thought is to be extravagantly wrong. Name any famous thinker, and you will be naming one of the most gloriously incorrect thinkers in history. Newton, Darwin, Einstein—every one of them has been wrong about something.

For a long time I have been prone to all of these mentalities—argumentativeness, name-dropping, political posturing, know-betterism, and rightness—and to a certain extent I probably always will be. What makes them so easy to fall into is that they are positive attitudes taken to excess. It is admirable and good to subject claims to logical scrutiny, to read and cite major authorities, to advocate for causes you think are right, to respect the opinions of your peers and colleagues, and to prioritize getting to the truth.

But taken to excess, these habits can lead to egotism. They certainly have with me. This is not a matter of simple vanity. Not only can egotism cut you off from real intimacy with other people, but it can lead to real unhappiness, too.

When you base your self-worth on beating other people in argument, being better read than your peers, being on the morally right side, being in the know, being right and proving others wrong, then you put yourself at risk of having your self-worth undermined. To be refuted will be mortifying, to be questioned will be infuriating, to be contradicted will be intolerable. Simply put, such an attitude sets you at war with others, making you defensive and quick-tempered.

An image that springs to mind is of a giant castle with towering walls, a moat, and a drawbridge. Inside this castle, in the deepest chambers of the inner citadel, is your ego. The fortifications around your ego are your intellectual defenses—your skill in rhetoric, logic, argument, and debate, and your impressive knowledge. All of these defenses are necessary because your sense of self-worth depends on certain conditions: being perceived, and perceiving yourself, as clever, correct, well-educated, and morally admirable.

Intimacy is difficult in these circumstances. You let down the drawbridge for people you trust, and let them inside the walls. But you test people for a long time before you get to this point—making sure they appreciate your mind and respect your opinions—and even then, you don’t let them come into the inner citadel. You don’t let yourself be totally vulnerable, because even a passing remark can lead to crippling self-doubt when you equate your worth with your intellect.

Thus the fundamental mindset behind all of the bad habits described above is the belief that being smart, right, or knowledgeable is the source of your worth as a human being. This is dangerous, because it means that you constantly have to reinforce the idea that you possess all of these qualities in abundance. Life then becomes a constant performance, an act for others and for yourself. And because a part of you knows that it’s an act—a voice you try to ignore—it also leads to considerable bad faith.

As for the solution, I can only speak from my own experience. The trick, I’ve found, is to let down my guard. Every time you defend yourself you make yourself more fragile, because you tell yourself that there is a part of you that needs to be defended. When you let go of your anxieties about being wrong, being ignorant, or being rejected, your intellectual life will be enriched. You will find it easier to learn from others, to consider issues from multiple points of view, and to propose original solutions.

Thus I can say that reading has made me a better person, not because I think intellectual people are worth more than non-intellectuals, but because I realized that they aren’t.