DBZ & SSBM:

An Adolescence in Two Acronyms

After I got home from a long and boring day of school, I would sit on the couch, turn on the television, and lazily do my homework during the commercial breaks.

This procedure—which I followed for years—guaranteed that homework would be torture. Even simple tasks could take ages, what with all the starting, stopping, forgetting, and starting again. And since I did not devote even half my attention to the work, I did it badly without learning anything. Yet by the time I got home from school I was so burnt out that I had to distract myself from the work as much as possible, just to stay sane. It did not help that this homework was inevitably the most pointless drudgery—“busywork,” as my mom called it—requiring time but no thought, some attention but no creativity. Television at least took the edge off.

In the late afternoon, when I got home, there usually wasn’t anything very good on. As the day waned the quality would improve, until, finally, it was time for Toonami. Toonami was a programming block on Cartoon Network, specializing in Japanese anime dubbed into English. The programs were presented by TOM, an animated robot man—surprisingly pudgy for an android—who was a kind of space-pirate broadcaster, transmitting the shows from his spaceship all across the galaxy. You can imagine that the teenage me was entranced.

The first anime to win me over, and the one that was to remain my favorite, was a show called Dragon Ball Z. On the surface it was like any superhero cartoon: the characters had powers and fought bad guys; and since I had long been a fan of Superman and Batman, this drew me in. Indeed, the protagonist, Goku, had a backstory almost identical to Superman’s. One of the last survivors of his destroyed planet, Goku arrived on Earth as an infant and was brought up as a human. Yet his alien physique soon proved much stronger than a normal human’s, and so on.

All this was standard stuff. But there were some odd discrepancies between DBZ and American superhero cartoons. DBZ had a surprising amount of ethical ambiguity—at least, surprising for a young teenager. Bad guys sometimes became good guys, or at least semi-good guys; and the good guys were often foolish, cowardly, or just silly. This did not happen with Superman and Batman, who were always good, brave, and wise, and whose enemies were always arrogant, cowardly, and bad. Another fundamental difference was the concept of training. The characters in DBZ did not simply have powers, but had to continually train to develop their abilities, which grew as the series progressed.

But the most striking difference was the fights. Whereas Batman threw batarangs and gave karate chops, and Superman mainly stuck to a few good jabs and hooks, the characters of DBZ would disappear into a blur of punches and kicks, shoot energy rapid-fire until whole landscapes were engulfed in flame, and make the entire earth shake as they charged their attacks. The fight choreography was light years beyond the most daring American cartoons. And the fights lasted longer—much longer. Two characters could be embroiled in a fight for whole episodes, sometimes even multiple episodes: hours and hours of anime action. After DBZ, the Justice League seemed tame.

The show was unashamedly centered on fights in a way that I found irresistible. The plot became ever more perfunctory, merely serving to set up meetings between powerful characters so they could proceed to beat each other to a pulp or blow each other to bits. If you think the plot of the typical superhero movie is thin, try watching DBZ. Everything—the characters, the pacing, the story—is dictated by the demands of epic battle. Characters have epiphanies just so they can reach another power level; characters fall in love just so they can have kids, who will have their own battles; characters make irrational decisions just so that battles will be prolonged.

DBZ is most infamous for its long power-ups, wherein a character will scream his head off while his body emits light and heat in a fantastic buildup of energy. I almost admire how shamelessly this device is used by the writers to fill episodes and build tension. This is the only explanation for the power-ups, since they make no sense within the story: the fighter is perfectly vulnerable during the ordeal, just standing there and screaming like a wild monkey. And yet time after time their opponents let it happen, despite the possibility that a successful power-up spells defeat. Even wicked world-destroying villains are above interrupting this sacred process, it seems. While this yodelling lightshow takes place, all the other characters retreat to gape and repeatedly exclaim how amazed they are. Certain phrases become obligatory: “I can’t believe how powerful he’s become!” “What? Impossible!” “This energy! Can it really be from one person?” Even by the end of the series, when they have all seen a hundred power-ups, the spectacle never fails to fill them with awe and dread.

Sometimes these power-ups led to transformations, another hallmark of DBZ. The young Goku found that, like a sort of King Kong werewolf, he transformed into a giant dog-monkey during a full moon—until cutting off his tail solved that problem. His rival Vegeta, another Saiyan, used this transformation against Goku until, being similarly dismembered, he was deprived of this power. And this is not the end of the Saiyans’ ability to transform. The most iconic of their transformations is the Super Saiyan, in which the hair turns golden and stands straight up. But this turned out to be just the tip of the iceberg; Super Saiyan 2 and 3 followed, and in other media even the ape-like Super Saiyan 4.

And the Saiyans aren’t the only ones who transform. The show’s most famous villains—Frieza, Cell, and Majin Buu—are all distinguished by their many metamorphoses; and these are not just changes in hairstyle, but involve a complete modification of their bodies. I suppose we associate these bodily mutations with insects, which is why it seems like a villainous thing to do. Indeed, Cell has beetle wings, and Majin Buu emerges from a cocoon like a butterfly. Even more nefarious, these two villains unlock their new forms by absorbing other people, like giant mosquitoes. And yet it is interesting to note that, for all three of these villains, their most powerful form is their most humanoid. The combination of human and animal traits is, after all, the essence of monstrosity.

§

A few years after I started watching DBZ, I began to delve into Super Smash Brothers Melee. In case you are not familiar with this game, SSBM is a fighting game originally released on the Nintendo GameCube in 2001. It is the sequel to the original Super Smash Brothers game on Nintendo 64, which I had been playing with my friends since elementary school. Some of my happiest memories from childhood are of sitting in my best friend’s basement playing Smash. And Melee was just as good, if not better. Both Smash and its sequel Melee are ideal party games—there is no backstory, the objective is clear, they require little skill to enjoy, and up to four people can play at once. Just choose a character and try to knock the other guy off.

My brother and I bought SSBM almost as soon as it came out, and for a while we played it the way it was designed to be played: as a lighthearted, meaningless diversion among friends, much like Mario Kart. But then, in high school, we began to take games more seriously. This began when we started playing online computer games, both greatly widening the pool of our competition and introducing us to gaming culture—a culture of competition adequately summarized and parodied in the online series Pure Pwnage (which we also watched). The goal was not just to have fun, but to be the best, to crush and humiliate your opponents: in short, to pwn noobs.

It was during this period that our neighbor visited us one day, and said he wanted to play SSBM. This was somewhat odd, since we believed that SSBM was just for button-mashing fun, not for serious high-level play. “But look,” our neighbor said. “I found out about advanced techniques.” And he pulled up a video of the wavedash.

The wavedash is the most iconic advanced technique of SSBM. It is hard to explain what it is without giving some idea of the game. Normally, you can run, jump, or roll to move around. These are all standard controller inputs, a straightforward combination of a button and the joystick. But a wavedash is executed by pressing the jump button, and then immediately air-dodging (by pressing another button while angling the joystick) diagonally towards the ground, thus interrupting the jump: two inputs, one after the other. The result is that the character slides across the stage, sometimes very quickly. This method of locomotion was likely not intended by the game’s architects. But it works wonderfully.

The wavedash alone significantly added to gameplay, giving players speed and maneuverability that weren’t available before. But this was only the beginning. There were lots of these so-called advanced techniques: short-hop, dashdance, L-cancel, crouch cancelling, directional influence, wall-teching, and on and on. We had played the game for years and had never even suspected the existence of higher-level play. Out of the package, the game seemed as simple and obvious as Parcheesi; that was its appeal. But these techniques opened up an entirely new level of gameplay, turning a lighthearted diversion into a lightning-fast contest of reflexes.

Seeing these techniques in action was incredible. Top players made combo videos, showing how they could string together attacks in inescapable sequences, juggling their opponents across the fighting stage and then sending them flying. Even more impressive were the videos of professional players. This was around 2007, right before SSBM was dropped from the Major League Gaming circuit (a company that organizes gaming tournaments with big prizes and high publicity) after a three-year run. This meant that YouTube was already full of videos of high-level players competing in formal tournaments. PC Chris, Korean DJ, Azen, Chudat, Isai, and Ken—we watched their matches and marvelled at their prowess. Soon enough my neighbor and I were practicing these advanced techniques and sharpening our skills against one another.

Here I should pause to explain a bit about how SSBM works. Unlike in other fighting games, where you have a certain amount of health or stamina that is depleted by your opponent’s attacks, in SSBM you have percentage. This determines how far you are sent when an attack hits you. A player with 0% will hardly move from an attack, while a player with 150% will take off like a cannonball. If you fly too far off stage, in any direction, you lose a “stock,” or life. Another big difference is that there are no predetermined combos in SSBM. (As in, no series of controller inputs automatically results in a combo.) Combos have to be discovered or invented by the player, and rely on a mixture of luck and timing to pull off. The result is a far freer fighting game, where death may come at any time (or be postponed indefinitely), and where each sequence of moves is improvised in the moment.

Another attraction of the game is its wealth of characters. There are twenty-six to choose from, each with a different set of moves, a different height and weight, a different walking and falling speed, and consequently each requiring different techniques and styles of play. And though some characters are generally far stronger than others (competitive players arrange them on a tier list from best to worst), the game’s architects did an excellent job in giving each one unique strengths and weaknesses, making each two-way matchup distinct. I mostly played Captain Falcon, a mid- to high-tier character with strong moves and fast movement, but who suffers from predictable recovery and being easily comboed. My neighbor mostly played Marth, one of the best characters in the game, who nevertheless suffers from a difficulty in finishing off opponents.

After a few months of practice, my neighbor and I were good enough that we could beat any normal player without much trouble. And yet even though I improved greatly, I was constantly frustrated at my inability to best my neighbor. No matter how good I became, he was always at least slightly better—sometimes more than slightly—and no amount of practice could bridge the gap. This made me furious. Even for an adolescent, my maturity level was not high. I had a low tolerance for frustration and had difficulty controlling my anger. So sometimes, when I was being badly beaten, or when victory was snatched away from me at the last moment—as it always seemed to be—I would explode and slam my controller on the ground, or throw it across the room, sometimes damaging or even breaking it. Fully indoctrinated into the gaming ethos, I wanted only to win, to be the best, to crush my opponents; so when I was myself beaten, I felt worthless, empty, powerless.

This experience playing video games, incidentally, is one reason why I generally avoid competitive situations. While competition seems to bring out the best in some people, I think it brings out the worst in me. I become petty and spiteful: arrogant towards those I beat and resentful towards those who beat me. So focused on winning, I cannot relax and enjoy what I’m doing, which ironically makes me less likely to win. The pressure I put on myself makes me nervous; I think about how good it would feel to win, how awful to lose, and my palms begin to sweat and my mind races; I panic, my playing suffers, and I lose—and then the rage comes, and I mentally chastise myself until I feel like a little worm squirming in the mud. This is more or less what would happen to me as I became ever more engrossed in competitive gaming, which is why I have developed a reluctance to compete in adulthood. Since so much of life in a capitalistic world is based on competition, at times this puts me out of harmony with my surroundings—but that’s another story.

The closest I ever came to the professional player scene was my one trip to a local tournament. I went with my neighbor. My mom drove us. The tournament was held in a video game shop next to an old toy store I used to go to. Strange to say, my memory of this tournament is very vague. I remember being in a cramped room full of chairs and TV screens, and feeling intimidated by all the older people around me (at around 15, we were probably the youngest there); I didn’t say a word to anybody except my neighbor. I remember sitting down to play my first match with sweaty palms, and I remember being beaten, but putting up a respectable fight. And that was it for me.

So my very promising career as a professional gamer was quickly snuffed out. Discouraged by the huge skill-gap that remained between myself and even moderately ranked players, I lost heart. Not that it mattered much, since the following year my interests abruptly switched from video games to playing guitar—but, again, that’s another story.  

§

The reason I am writing about these two adolescent obsessions of mine is that, strange to say, they never entirely left me. After many years of scarcely thinking about Goku or Captain Falcon, I now find myself regularly watching clips from DBZ and SSBM matches, and really loving them. And this, in a man who normally looks down his nose at all lowbrow pleasures. Why the resurgence in interest?

Partly my renewed interest has been sparked by an actual resurgence in these media. After a long hiatus, the Dragon Ball Z saga was continued in the new series, Dragon Ball Super. And after a period of decline following the release of SSBM’s sequel, Super Smash Brothers Brawl for the Nintendo Wii (a game far less amenable to quick, competitive play), the Melee community has rebounded and grown, with regular tournaments all over the world, and even a full-length documentary devoted to the game’s early years.

I began watching Dragon Ball Super out of boredom and a sense of nostalgia, but I was quickly hooked on the series. In every way it is an improvement on DBZ. The story has far less filler—notably, the power-ups only take a few minutes. The already perfunctory plot-lines about monsters trying to blow up the world have been scrapped for simple tournaments, giving the characters a chance to pummel each other without further ado. The villains are, for the most part, no longer shapeshifting monsters but other martial artists. And the animation is much sharper and more impressive. Yet the basic elements remain the same. The humble Goku trains to unlock new transformations (Super Saiyan God, Super Saiyan Blue, Ultra Instinct) in order to beat the enemy, who is, as usual, arrogant and overconfident.

I started to watch the Smash Brothers documentary out of a sense of curious irony, amused that somebody would make a documentary about such a silly subject. But I soon found myself genuinely impressed. Indeed, for a fan-made documentary uploaded directly to YouTube, it is almost absurdly well-made—informative, entertaining, and attractive. Directed by Travis ‘Samox’ Beauchamp, the documentary contains nine episodes, each dedicated to a notable player from SSBM’s “Golden Age” (the years following its release): Ken, Isai, Azen, PC Chris, Korean DJ, Mew2King, Mango, with many other players making an appearance. Having followed these players in high school, I found it fascinating to hear their own stories in their own words. And the commentary, far from the usual callow gamer smack talk, was consistently thoughtful and humane—especially that of the player Wife. In short, the documentary really captures the magic of the game and the community which has formed around it.

But even if DBZ and SSBM are still going strong, that does not explain my continued interest. Again, I have a tendency to be extremely pretentious when it comes to the media I consume. I seldom resist the opportunity to denigrate popular music, films, and books as simplistic, formulaic, childish, etc. (Here you see my nasty competitive side expressed in a different way.) And yet here I am, still watching a cartoon about men flying and fighting, still watching people manipulate characters on a screen, still enjoying the adolescent obsessions that I thought I had left behind long ago. Clearly, these two media have a consistent appeal to me. But why?

They are similar in several conspicuous ways. Both SSBM and DBZ focus on fast-paced fights, with characters dashing, jumping, and flying through the air—shooting projectiles, exchanging blows, sending each other flying. In both, the fight itself is more compelling than the outcome. Though DBZ has good guys and bad guys, we do not watch to see who wins (it’s always Goku), but to see the fight itself—the sheer spectacle of it. And even the story-mode of SSBM does not have anything resembling a plot. The whole substance of SSBM and DBZ is made of rapid punches, flying kicks, and energy beams. And since the fight is the main focus, both media make training a major theme. Goku is not simply strong, like Superman is; his strength is the product of work. Top SSBM players, too, must put in endless hours of practice to compete at that level.

Another striking similarity is that both SSBM and DBZ are male-oriented. Though Dragon Ball Super finally incorporated some female fighters, DBZ’s fighters were exclusively male; and though I do not have the statistics, I believe the show’s audience was similarly male-dominated. One look at an SSBM tournament will reveal how completely boy-centered the game is. Every top player I know of is male; the commentators, too, are all men; and the audience is inevitably a chorus of husky voices. Perhaps this should be expected. True to the cultural stereotype, both DBZ and SSBM are bereft of romance and sentiment, and instead focus on violence—a traditionally male vice.

It should be noted, however, that both the show and the game are pretty tame. Indeed, I would argue that both DBZ and SSBM are distinguished by a kind of vanilla violence, where characters are punched but do not break their bones, where they lose a game or are sent to the afterlife but never really die (the important characters in DBZ are inevitably revived with the titular dragon balls)—where the stakes are, in short, never very high. (The resemblance only increased in Dragon Ball Super, where the characters are eliminated from the tournament by being knocked off the stage, just as in SSBM.) It is a violence without bloodshed and without consequence, for the pure sport and spectacle of it. And this, perhaps, explains why the two attract similar demographics, namely “dorky” men: they are male but not manly, competitive but not cutthroat, violent but not vicious. It is purely imaginative fighting.

§

DBZ and SSBM are similar, then. But again I must ask: Why do they hold such a consistent appeal to me? The most obvious answer is nostalgia. I grew up right when they were coming out, and they remind me of my childhood. This, however, leads to another question: Why did they appeal to me in the first place?

This is, perhaps, also no mystery. I fit their demographics pretty well. I was a dorky boy who was never popular or good at sports. Like other video games, SSBM gave me a chance to excel at something competitive. I could not beat anyone in any physical activity, but I could run circles around my opponents on the screen. And Goku was the perfect hero for a boy in my situation: a hero whose strength was the product of determination, and whose persistent efforts could defeat his more naturally talented foes—muscly monsters whose overconfidence always led them to neglect their own training. In short, the imaginative identification with the heroes of DBZ and the characters of SSBM could transform a slow, weak, pudgy kid into a lightning-fast, super-strong fighter.

SSBM and DBZ were a form of escape in more ways than one. Not only did they provide me with an escape from my nerdy, unathletic self; they also provided a much-needed relief from the omnipresent boredom of school. My memories of middle and high school are, with some notable exceptions, of sitting in a claustrophobic room, feeling tired and bored out of my mind, seldom paying attention to what was being said or read. Despite this, I was actually a good student—at least as far as grades were concerned. But the endless amounts of busywork, the dry lectures, and the repetitive routine had me constantly on the verge of burning out completely.

When I got home my first priority was to unplug, to forget everything from the day and to put school as far as possible from my mind. Shows like DBZ and games like SSBM were perfect. They require no thought to understand and enjoy. Indeed, then and now their primary function for me is to switch off my intellect, leaving only a kind of dim, dog-like awareness of movement. When I indulge in these media I am in a trance, as incapable of critical thought as is a goldfish.

Many times in later life I have found myself feeling the same way I felt in high school: bored to tears by my daily life—an endless parade of meaningless obligations and unrewarding tasks—and looking for some way to forget it all. Intellectual pleasures are arguably not the best way to do this, since they sharpen rather than blunt the attention. But SSBM and DBZ are perfect: cartoon fights without meaning, appealing to my primitive brain and leaving the frontal lobe blissfully empty. Indeed, I have found that when I am particularly keen to watch SSBM fights on YouTube, it is usually a sign that I need to liven up my routine.

In saying these things I hope I have not insulted or offended anyone connected with these media. I have only the warmest feelings towards DBZ and SSBM; and if I wore a hat I would take it off to the makers of the first and the players of the second, who have provided me with so many happy hours. For the world—at least as it is now—necessarily involves drudgery. As long as we have routines we will have boredom. And some light escapism is, I think, a healthy and natural way of coping with the limitations of our own identities and the plodding monotony of the day-to-day.

[Cover photos taken from SmashWiki and Dragon Ball Wiki; their use falls under Fair Use.]

Connie Converse & Feelings of Beauty

In 1974 a woman called Connie Converse got into her car, drove away, and was never heard from again. This was not noticed by the press. She was fifty and a failed musician, an eccentric and hermit-like woman who never caught her big break. But she left behind recordings of her songs—songs which, belatedly, have finally led to her getting a modicum of the recognition she deserved.

This is another of those stories of the misunderstood genius. Completely obscure during her career, she is nowadays recognized as a musician ahead of her time, one of the first practitioners of the so-called singer-songwriter genre. Her recordings remained unavailable to the public until 2004, when she was featured on WNYC’s “Spinning on Air.” Five years later, in 2009, an album of her homemade recordings was released—How Sad, How Lovely—which secured her place in the hearts of hipsters across Brooklyn.

In this essay I am, however, not primarily interested in analyzing her music. This is not because there isn’t much to discuss. Converse blends American popular styles—blues, folk, jazz, country—into a sophisticated personal style. Her songs are masterful on many levels, with piquant lyrics, rich harmonies, and creative guitar arrangements. Deep originality, keen intelligence, and a fine literary sensibility make her posthumous album an excellent listen.

All this is beside the point. I want to use Converse’s music as the jumping-off point to discuss an essential question of aesthetics: Are there such things as aesthetic emotions?

In a previous essay on aesthetics, I tried to analyze the way that art alters our relation with the natural and cultural world. But I left unexamined the way that art changes our relationship with ourselves. For art does not only affect our stance towards our assumptions, habits, sensations, perceptions, and social conventions. Art also changes our relationship with our emotions.

The very title of Converse’s album invites us to consider aesthetic emotions: How Sad, How Lovely. She herself did not choose this title for the album; but it is well-chosen, since this feeling—the feeling of delightful melancholy—always comes over me when I listen to her music.

But how can sadness be beautiful? When tragedy actually befalls us—a career failure, a breakup, a death—we are little disposed to find it beautiful. And long-term, grinding depression is perhaps even less lovely. Yet we so often find ourselves watching sad movies, reading depressing books, enjoying tragedies on stage, and listening to tearful ballads. Converse’s songs are full of heartbreak, yet many people enjoy them. We do our best to avoid sadness in life, but often seek it out in art. Why?

This leads me to think that the melancholic emotion we experience in sad art is not the same as “real” sadness. That is, when a beloved character dies in a novel we feel an entirely different emotion than when a friend passes away. Can that be true?

Perhaps, instead, it is just a question of degree: sadness in art affects us less than sadness in life. But this explanation does not do. For we do not enjoy even small amounts of sadness in real life; so why would we in art? And besides, anyone artistically sensitive knows that one can have intense reactions to art—quite as intense as any experience in life—so it is clearly not a question of degree.

The explanation must be, then, that aesthetic sadness is categorically different from actual sadness. Yet we recognize an immediate and obvious affinity between the two feelings: otherwise we would not call them both “sad.” So it seems as if they are alike in one sense and yet different in another.

To pinpoint the difference, we must see if we can isolate the feeling of beauty. In novels, paintings, and songs, aesthetic reactions are hopelessly mixed up with a riot of other considerations: artistic intentions, subject-matter, moral judgments, and so forth. But when contemplating nature—in my experience, at least—the feeling of beauty stands out cold and pure.

In the forests of New Brunswick my family owns a property on a lake, where we go every summer to relax. Out there, far away from the light pollution of cities, the Milky Way appears in all its remarkable brilliance. Except for the ghostly calls of loons echoing across the lake, the woods are deathly still. Gazing up at the stars in the silence of the night, I come closest to experiencing pure beauty.

Looking at the starry night I feel neither sad, nor happy, nor frightened, nor angry. The word that comes closest to the feeling is “awe”: gaping wonder at the splendor of existence. I become entirely absorbed in my senses; everything except the beautiful object drops away. I am solely aware of the sensory details of this object, and perpetually amazed that such a thing could exist.

An important facet of this experience, I think, is disinterestedness. I want nothing from the beautiful object. I have no stake in its fate. Indeed, my absorption in it is so great that the feeling of distinction—of subject and object—dissolves, and I achieve a feeling of oneness with what I contemplate. This feeling of disinterest thus extends to myself: I no longer have a stake in my own life, and I can accept whatever may come with equanimity. Beauty is, for this reason, associated in my mind with a feeling of profound calmness.

Using this observation, I believe we can see why sad art can be pleasant. My hypothesis is that the feeling of beauty—which brings with it a calming sense of disinterest—denatures normally painful emotions, rendering them relatively inert.

Real sadness involves the painful feeling of loss. To feel loss, you must feel interested, in the sense that you have a stake in what is happening. But experiencing sadness in the context of a work of art, while our aesthetic sense is activated, numbs us to this pain. We witness the sadness, rather, as a kind of floating, neutral observer of the scene; and this allows us to savor the poignant and touching experience of the sentiment without the drawbacks of emotional trauma.

And herein lies the therapeutic potential of art. For art allows us to see the beauty in normally painful emotions. By putting us into a disinterested state of mind, great art allows us to savor the sublime melancholy of life. We see that sadness is not just painful, but lovely. In this, too, consists art’s ability to help us achieve wisdom: to look upon life, not as a person wrapped up in his own troubles, but as a cloud watching from above.

Here I think it is useful to examine the difference between art and entertainment. In my first essay, I held that the difference is that art reconnects us to the world while entertainment lures us into fantasy. But the emotional distinction between art and entertainment cannot be described thus.

Rather, I think herein lies the difference: that art allows us to contemplate emotions disinterestedly, while entertainment provokes us to react empathetically. That is to say that, with entertainment, the difference between aesthetic emotions and everyday emotions is blurred. We react to an entertaining tale the same way we react to, say, our friend telling us a story.

Some observations lead me to this conclusion. One is that Shakespeare’s tragedies have never once brought me anywhere close to tears, while rather mediocre movies have had me bawling. Another is that drinking alcohol, being stressed, feeling sentimental, or being otherwise emotionally raw tend to make me more sensitive to blockbuster movies and pop music, and less sensitive to far greater works. Clearly, provoking strong emotions requires neither great sophistication in the work nor great appreciation in the audience; indeed, it requires a childlike innocence from both.

I believe the explanation for this is very simple. Crude art—or “entertainment” in my parlance—does not strongly activate our aesthetic sense, and thus our emotions are unfiltered. We feel none of the disinterest that the contemplation of beauty engenders, and so react with an unmediated naturalness. This makes entertainment the opposite of calming—rather, it can be very animating and distressing.

This may be the root of Plato’s and Aristotle’s ancient dispute about the role of poetry in society. Plato famously banished poets from his ideal republic, fearing poetry’s ability to disrupt social order and discompose men’s minds. Aristotle, on the other hand, thought that poetry could be cathartic, curing us of strong emotions and thus conducive to calmness and stability. In my scheme, entertainment is destabilizing and true art therapeutic. In other words, Plato should only have banished the entertainers.

This also leads to an observation about singer-songwriters. Unlike in other genres of music—musicals, operas, jazz—there is a pretense among singer-songwriters of communicating directly with their audience, a pretense that their songs are honest and drawn from their daily lives. It is this intentional blurring of art and life that leads many fans to get absorbed in tabloid stories of celebrity personal lives.

I think this is almost inevitably a sort of illusion, and the “honest” self on display to the public is a sort of persona. But in any case this pretense serves the purposes of entertainment: If we think the musicians are giving us honest sentiments, we will react empathetically and not disinterestedly.

Connie Converse, on the other hand, creates no illusion of direct honesty. Her lyrics are literary, picturesque, and impersonal. This is not to say that her songs have nothing to do with her personality. Her preferred themes—unrequited love, most notably—obviously have some bearing on her life. And the sadness in her music must be related to the sorrow in her life, the feeling of being isolated and unrecognized. But in her songs, the stuff of her life is sublimated into art—turned into an impersonal product that can be contemplated and appreciated without knowing anything about its maker.

I believe all true art is, in this sense, impersonal: its value does not depend on knowing or thinking anything about its maker. Art is not an extension of the artist’s personality, but has its own life. This is why I am against “confessional” art: art that pretends to be, or actually is, an unfiltered look into somebody’s life and feelings. Much of John Lennon’s work after the Beatles broke up falls into this category. By my definition, “confessional” art is always inevitably entertainment.

This is not to say that art must always scorn its maker’s life. The essays of Montaigne, for example, are deeply introspective, while being among the glories of western literature. But those essays are not simply the pouring forth of feelings or the airing of grievances. They transform Montaigne’s own experiences into an exploration of the human condition, and thus become genuine works of art.

In my previous essay I described how art can help us break out of the deadening effects of routine, thus revivifying the world. But art can also pull us from the opposite direction: from frenetic emotionality to a detached calmness. While contemplating beauty, we see the world, and even ourselves, as calm and sensitive observers, with fascination and delight. We rediscover the childlike richness of experience, while shunning the childlike tyranny of emotion. We can achieve equanimity, at least temporarily, by being reminded that beauty and sadness are not opposed, but are intimately intertwined.

The Musée d’Orsay & A Theory of Aesthetics

On the left bank of the Seine, in an old Beaux-Arts train station, is one of Europe’s great museums: the Musée d’Orsay. Its collection mainly focuses on French art from the mid-nineteenth to the early-twentieth century. This was a fertile time for Paris, as the museum amply demonstrates. Rarely can you find so many masterpieces collected in one place.

The museum is arranged with exquisite taste. In the middle runs a corridor, filled with statues—of human forms, mostly. They dash, reach, dance, strain, twist, lounge, smile, laugh, gasp, grimace.

On either side of this central corridor are the painting galleries, arranged by style and period. There were naturalistic paintings—with vanishing-point perspective, careful shadowing, precise brushstrokes, scientifically accurate anatomy, symmetrical compositions. There were the impressionists—a blur of color and light, creamy clouds of paint, glances of everyday life. There was Cézanne, whose precise simplifications of shape and shade lend his painting of Mont Sainte-Victoire a calm, detached beauty. Then there were the pointillists, Seurat and Signac, who attempted to break the world into pieces and then to build it back up using only dabs of color, arranged with a mixture of science and art.

Greatest of all was van Gogh, whose violent, wavy lines, bright, simple colors, and oil paint smeared in thick daubs onto the canvas make his paintings slither and dance. It is simply amazing to me that something as static as a painting can be made so energetic. Van Gogh’s paintings don’t stand still under your gaze, but move, vibrate, even breathe. It is uncanny.

His self-portrait is the most emotionally affecting painting I have ever seen. Wearing a blue suit, he sits in a neutral blue space. His presence warps the atmosphere: the air seems to be curling around him, as if in a torrent. The only colors that break the blur of blue are his flaming red beard and his piercing green eyes. He looks directly at the viewer, with an expression impossible to define. At first glance he appears anxious, perhaps shy; but the more you look, the more he appears calm and confident. You get absolutely lost in his eyes, falling into them, as you are absorbed into ever more complicated subtleties of emotion concealed therein. Suddenly you realize that the curling waves of air around him are not mere background, but represent his inner turmoil. Yet is it a turmoil? Perhaps it is a serenity too complicated for us to understand?

Self-Portrait by van Gogh

I looked and looked, and soon the experience became overwhelming. I felt as if he were looking right through me, while I pathetically tried to understand the depths of his mind. But the more I probed, the more lost I felt, the more I felt myself being subsumed into his world. The experience was so overpowering that my knees began to shake.

Consider this reaction of mine. Now imagine if a curious extraterrestrial, studying human behavior, visited an art museum. What would he make of it?

On its face, the practice of visiting art museums is absurd. We pay good money to gain entrance to a big building, so we can spend time crowding around brightly colored squares that are not obviously more interesting than any other object in the room. Indeed, I suspect an alien would find almost anything on earth—our plant and animal life, our minerals, our technology—more interesting than a painting.

In this essay I want to try to answer this question: Why do humans make and appreciate art? For this is the question that so irresistibly posed itself to me after I stared into van Gogh’s portrait. The rest of my time walking around the Musée d’Orsay, feeling lost among so many masterpieces, I pondered how a colorful canvas could so radically alter my mental state. By the end of my visit, the beginnings of an answer had occurred to me—an answer hardly original, being deeply indebted to Walter Pater, Marcel Proust, and Robert Hughes, among others—and it is this answer that I attempt to develop here.

My answer, in short, is that the alien would be confused because human art caters to a human need—specifically, an adult human need. This is the need to cure ennui.

§

Boredom hangs over human life like a specter, so pernicious because it cannot be grasped or seen.

The French anthropologist Claude Lévi-Strauss knew this very well. As a young man he enjoyed mountain scenes, because “instead of submitting passively to my gaze” the mountains “invited me into a conversation, as it were, in which we both had to give our best.” But as he got older, his pleasure in mountain scenery left him:

And yet I have to admit that, although I do not feel that I myself have changed, my love for the mountains is draining away from me like a wave running backward down the sand. My thoughts are unchanged, but the mountains have taken leave of me. Their unchanging joys mean less and less to me, so long and so intently have I sought them out. Surprise itself has become familiar to me as I follow my oft-trodden routes. When I climb, it is not among bracken and rock-face, but among the phantoms of my memories.

Dostoyevsky put the phenomenon more succinctly: “Man grows used to everything, the scoundrel!”

These two literary snippets have stuck with me because they encapsulate the same thing: the ceaseless struggle against the deadening weight of routine. Nothing is new twice. Walk through a park you found charming at first, and the second time around it will be simply nice, the third time just normal.

The problem is human adaptability. Unlike most animals, we humans are generalists, able to adapt our behavior to many different environments. Instead of being guided by rigid instincts, we form habits.

By “habits” I do not only refer to things like biting your nails or eating pancakes for breakfast. Rather, I mean all of the routine actions performed by every person in a society. Culture itself can, at least in part, be thought of as a collection of shared habits. These routines and customs are what allow us to live in harmony with our environments and one another. Our habits form a second nature, a learned instinct, that allows us to focus our attention on more pressing matters. If, for whatever reason, we were incapable of forming habits, we would be in a sorry state indeed, as William James pointed out in his book on psychology:

There is no more miserable human being than one in whom nothing is habitual but indecision, and for whom the lighting of every cigar, the drinking of every cup, the time of rising and going to bed every day, and the beginning of every bit of work, are subjects of express volitional deliberation. Full half the time of such a man goes to the deciding, or regretting, of matters which ought to be so ingrained in him as practically not to exist for his consciousness at all.

Habits are, thus, necessary to human life. And up to a certain point, they are desirable and good. But there is also a danger in habitual response.

Making the same commute, passing the same streets and alleys, spending time with the same friends, watching the same shows, doing the same work, living in the same house, day after day after day, can ingrain a routine in us so deeply that we become dehumanized.

A habit is supposed to free our mind for more interesting matters. But we can also form habits of seeing, feeling, tasting, even of thinking, that are stultifying rather than freeing. The creeping power of routine, pervading our lives, can be difficult to detect, precisely because its essence is familiarity.

One of the most pernicious effects of routine is to dissociate us from our senses. Let me give a concrete example. A walk through New York City will inevitably present you with a chaos of sensory data. You can overhear conversations, many of them fantastically strange; you can see an entire zoo of people, from every corner of the globe, dressed in every fashion; you can look at the ways that the sunlight moves across the skyscrapers, the play of light and shadow; you can hear dog barks, car horns, construction, alarms, sirens, kids crying, adults arguing; you can smell bread baking, chicken frying, hot garbage, stale urine, and other scents too that are more safely left uninvestigated.

And yet, after working in NYC for a few months, making the same commute every day, I was able to block it out completely. I walked through the city without noticing or savoring anything. My lunch went unappreciated; my coffee was drunk unenjoyed; the changing seasons went unremarked; the fashion choices of my fellow commuters went unnoticed.

It isn’t that I stopped seeing, feeling, hearing, tasting, but that my attitude to this information had changed. I was paying attention to my senses only insofar as they provided me with useful information: the location of a pedestrian, an oncoming car, an unsanitary area. In other words, my attitude to my sensations had become purely instrumental: attending to their qualities only insofar as they were relevant to my immediate goals.

This exemplifies what I mean by ennui. It is not boredom of the temporary sort, such as when waiting on a long line. It is boredom as a spiritual malady. When beset by ennui we are not bored by a particular situation, but by any situation. And this condition is caused, I think, by a certain attitude toward our senses. When afflicted by ennui, we no longer treat our sensations as things in themselves, worthy of attention and appreciation, but merely as signs and symbols of other things.

To a certain extent, we all do this, often for good reason. When you are reading this, for example, you are probably not paying attention to the details of the font, but are simply glancing at the words to understand their meaning. Theoretically, I could use any font or formatting, and it wouldn’t really affect my message, since you are treating the words as signs and not as things in themselves.

This is our normal, day-to-day attitude towards language, and it is necessary for us to read efficiently. But this can also blind us to what is right in front of us. For example, an English teacher I knew once expressed surprise when I pointed out that ‘deodorant’ consists of the word ‘odor’ with the prefix ‘de-’. She had never paused long enough to consider it, even though she had used the word thousands of times.

I think this attitude of ennui can extend even to our senses. We see the subtle shades of green and red on an apple’s surface, and only think “I’m seeing an apple.” We feel the waxy skin, and only think “I’m touching an apple.” We take a bite, munching on the crunchy fruit, tasting the tart juices, and only think “I’m tasting an apple.” In short, the whole quality of the experience is ignored or at least underappreciated. The apple has become part of our routine and has thus been moved to the background of our consciousness.

Now, imagine treating everything this way. Imagine if all the sights, sounds, tastes, textures, and smells were treated as routine. This is an adequate description of my mentality when I was working in New York, and perhaps of many people all over the world. The final effect is a feeling of emptiness and dissatisfaction. Nothing fulfills or satisfies because nothing is really being experienced.

This is where art comes in. Good art has the power to, quite literally, bring us back to our senses. Art encourages us not only to glance, but to see; not only to hear, but to listen. It reconnects us with what is right in front of us, but is so often ignored. To quote the art critic Robert Hughes, the purpose of art is “to make the world whole and comprehensible, to restore it to us in all its glory and occasional nastiness, not through argument but through feeling, and then to close the gap between you and everything that is not you.”

Last summer, while I was still working at my job in NYC, I experienced the power of art during a visit to the Metropolitan. By then, I had already visited the Met dozens of times in my life. My dad used to take me there as a kid, to see the medieval arms and armor; and ever since I have visited at least once a year. The samurai swords, the Egyptian sarcophagi, the Greek statues—it has tantalized my imagination for decades.

In my most recent visits, however, the museum had lost much of its power. It had become routine for me. I had seen everything so many times that, like Lévi-Strauss, I was visiting my memories rather than the museum itself.

But this changed during my last visit. It was the summer right before I came to Spain. I had just completed my visa application and was about to leave my job. This would be my last visit to the Met for at least a year, possibly longer. I was saying goodbye to something intimately familiar in order to embrace the unknown. My visit became no longer routine, but unique and fleeting, and this made me experience the museum in an entirely new way.

Somehow, the patina of familiarity had been peeled away, leaving every artwork fresh and exciting. Whereas on previous visits I had viewed the Greco-Roman and Egyptian statues as mere artifacts, revealing information about former civilizations, this time I became acutely sensitive to previously invisible subtleties: fine textures, subtle hues, elegant forms. In short, I had stopped treating the artworks as icons—as mere symbols of a lost age—and begun to see them as genuine works of art.

This experience was so intense that for several days I felt rejuvenated. I stopped feeling so deeply dissociated from my workaday world and began to take pleasure again in little things.

While waiting for the elevator, for example, I looked at a nearby wall; and I realized, to my astonishment, that it wasn’t merely a flat, plain surface, as I had thought, but was covered in little bumps and shapes. It was stucco. I grew entranced by the shifting patterns of forms on the surface. I leaned closer, and began to see tiny cracks and little places where the paint had chipped off. The slight variations on the surface, a stain here, a splotch there, the way the shapes seemed to melt into one another, made it seem as though I were looking at a painting by Jackson Pollock or the surface of the moon.

I had glanced at this wall a hundred times before, but it took a visit to an art museum to let me really see it. Routine had severed me from the world, and art had brought me back to it.

§

Reality is always experienced through a medium—the medium of senses, concepts, language, and thought. Sensory information is detected, broken down, analyzed, and then reconfigured in the brain.

We are not passive sensors. While a microphone might simply detect tones, rhythms, and volume, we hear cars, birds, and speech; and while a camera might detect shapes, colors, and movement, we see houses and street signs. The data we collect is, thus, not experienced directly, but is analyzed into intelligible objects. And this is for the obvious reason that, unlike cameras and microphones, we need to use this information to survive.

In order to deal efficiently with the large amount of information we encounter every day, we develop habits of perceiving and thinking. These habits are partly expectations of the kinds of things we will meet (people, cars, language), as well as the ways we have learned to analyze and respond to these things. These habits thus lie at the crossroads between the external world of our senses and the internal world of our experience, forming another medium through which we experience (or don’t experience) reality.

Good art forces us to break these habits, at least temporarily. It does so by breaking down reality and then reconstructing it with a different principle—or perhaps I should say a different taste—than the one we habitually use.

The material of art—what artists deconstruct and re-imagine—can be taken from either the natural or the cultural world. By ‘natural world’ I mean the world as we experience it through our senses; and by ‘cultural world’ I mean the world of ideas, customs, values, religion, language, tradition. No art is wholly emancipated from tradition, just as no tradition is wholly unmoored from the reality of our senses. But very often one is greatly emphasized at the expense of the other.

A good example of an artform concerned with the natural world is landscape painting. A landscape artist breaks down what she sees into shapes and colors, and puts it together on her canvas, making whatever tasteful alteration she sees fit.

Her view of the landscape, and how she chooses to reconstruct it on her canvas, is of course not merely a matter between her and nature. Inevitably our painter is familiar with a tradition of landscape paintings; and thus while engaged with the natural landscape she is simultaneously engaged in a dialogue with contemporary and former artists. She is, therefore, simultaneously breaking down the landscape and her tradition of landscape painting, deciding what to change, discard, or keep. The final product emerges as an artifact of an exchange between the artist, the landscape, and the tradition.

Landscape by van Gogh

The fact remains, however, that the final product can be effectively judged by how it transforms its subject—the landscape itself. Thus I would say that landscape paintings are primarily oriented towards the natural world.

By contrast, many religious paintings are much more oriented towards a tradition. It is clear, even from a glance, that the artists of the Middle Ages were not concerned with the accurate portrayal of individual humans, but with evoking religious figures through idealized forms. The paintings thus cannot be evaluated by their fidelity to sensory reality, but by their fidelity to a religious aesthetic.

From the Bamberg Apocalypse

It is worth noting that artworks oriented towards the natural world tend to be individualistic, while artworks oriented towards the cultural world tend to be communal. The reason is clear: art oriented towards the natural world reconnects us with our senses, and our senses are necessarily personal. By contrast, culture is necessarily impersonal and shared. The rise of perspective, realistic anatomy, individualized portraits, and landscape painting at the time of the Italian Renaissance can, I think, persuasively be interpreted as a break from the communalism of the medieval period and an embrace of individualism.

Music is an excellent demonstration of this tendency. To begin with, the medium of sound is naturally more social than that of sight or language, since sound pervades its environment. What is more, music is a wholly abstract art, and thus totally disconnected from the natural world.

This is because sound is just too difficult to record. With only a pencil and some paper, most people could make a rough sketch of an everyday object. But without some kind of notational system—and even then, maybe not—most people could not transcribe an everyday sound, like a bird’s chirping.

Thus, musicians (at least western musicians) take their material from culture rather than nature, from the world of tradition rather than the world of our senses.

(In an oral tradition, where music does not need to be transcribed, it is possible that music can strive to reproduce natural sounds; but this has not historically been the case in the west.)

To deal with the problem of transcribing sound, rigorous and formal ways of classifying sounds were developed. An organizational system developed, with its own laws and rules; and it is these laws and rules that the composer or songwriter manipulates.

And just as your knowledge of the natural world helps you to make sense of visual art, so your cultural training helps you to make sense of music. Just as you’ve seen many trees and human faces, and thus can appreciate how painters re-imagine their appearances, so have you heard hours and hours of music in your life, most of it following the same or similar conventions.

Thus you can tell (most often unconsciously) when a tune does something unusual. Relatively few people, for example, can define a plagal cadence (an unusual final cadence from the IV to the I chord), but almost everyone responds to it in Paul McCartney’s “Yesterday.”

As a result of its cultural grounding, music is an inherently communal art form. This is true, not only aesthetically, but anthropologically. Music is an integral part of many social rituals—political, religious, or otherwise. Whether we are graduating from high school, winning an Oscar, or getting married, music will certainly be heard. As much as alcohol, music can lower inhibitions by creating a sense of shared community, which is why we play it at every party. Music thus plays a different social role than visual art, connecting us to our social environment rather than to the often neglected sights and sounds of everyday life.

The above descriptions are offered only as illustrations of my more general point: Art occupies the same space as our habits, the gap between the external and the internal world. Painters, composers, and writers begin by breaking down something familiar from our daily reality. In the case of painting, this material can be shapes, colors, ceramic vases, window panes, the play of shadow across a crumpled robe. In the case of music, it can be melodies, harmonies, timbre, volume, chord progressions, stylistic tropes. And in the case of literature, it can be adjectives, verbs, nouns, situations, gestures, personality traits.

Whatever the starting material, it is the artist’s job to recombine it into something different, something that thwarts our habits. Van Gogh’s thick daubs of paint thwart our expectation of neat brushstrokes; McCartney’s plagal cadence thwarts our expectation of a perfect cadence; and Proust’s long, gnarly sentences and philosophic ideas thwart our expectations of how a novelist will write. And once we stop seeing, listening, feeling, sensing, thinking, expecting, reacting, behaving out of habit, and once more turn our full attention to the world, naked of any preconceptions, we are in the right mood to appreciate art.

§

Yet it is not enough for art to be simply challenging. If this were true, art would be anything that was simply strange, confusing, or difficult. Good art can, of course, be all of those things; but it need not be.

Many artists nowadays, however, seem to disagree on this point. I have listened to works by contemporary composers which simply made no sense to my ears, and have seen many works of modern art which held no visual interest. We are living in the age of “challenging” art; and beauty is too often reduced to confusion.

But good art must not only challenge our everyday ways of seeing, listening, and being. It must reconstitute those habits along new lines. Art interrogates the space between the world and our habits of seeing the world. It breaks down the familiar—sights, harmonies, language—and then builds it back up again into the unfamiliar, using new principles and new taste. Yet for the product to be a work of art, and not mere strangeness, the unfamiliar must be rendered beautiful. That is the task of art.

Thus, Picasso does not only break down the perspectives and shapes of daily life, but builds them back up into new forms—fantastically strange, but sublime nonetheless. Debussy disintegrates the normal harmonic conventions—keys, cadences, chords—and then puts them all back together into a new form, uniquely his, and also unquestionably lovely. Great art not only shows you a different way of seeing and understanding the world, but makes this new vista attractive.

Pretentious art, art that merely wants to challenge, confuse, or frustrate you, is quite a different story. It can be most accurately compared to the relationship between an arrogant schoolmaster and a pupil. The artist is talking down to you from a position of heightened knowledge. The implication is that your perspective, your assumptions, your way of looking at the world are flawed and wrong, and the artist must help you to get out of your lowly state. Multiple perspectives are discouraged; only the artist’s is valid.

And then we come to simple entertainment.

Entertainment is something that superficially resembles art, but its function is entirely different. For entertainment does not reconnect us with the world, but lures us into a fantasy.

Perhaps the most emblematic form of pure entertainment is advertising. However well made an advertisement is, it can never be art; for its goal is not to reconnect us with the world, but to seduce us. Advertisements tell us we are incomplete. Instead of showing us how we can be happy now, they tell us what we still need.

When you see an ad in a magazine, for example, you are not meant to scan it carefully, paying attention to its purely visual qualities. Rather, you are meant to view it as an image. By ‘image’ I mean a picture that serves to represent something else. Images are not meant to be looked at, but glanced at; images are not meant to be analyzed, but instantly understood. Ads use images because they are not trying to bring you back to your senses, but to lure you into a fantasy.

Don’t misunderstand me: There is nothing inherently wrong with fantasy. Indeed, I think fantasy is almost indispensable to a healthy life. The fantasies of advertisements are, however, somewhat nefarious, since ads are never pure escapism. Rather, the ad forces you to negatively compare your actual life with the fantasy, conclude that you are lacking something, and then of course seek to remedy the situation by buying their product.

Most entertainment is, however, quite innocent, or at least so it seems to me. For example, I treat almost all blockbusters as pure entertainment. I will gladly go see the new Marvel movie, not in order to have an artistic experience, but because it’s fun. The movie provides two hours of relief from the normal laws of physics and probability, from the dreary regularities of reality as I know it. Superhero movies are escapism at its most innocent. The movies make no pretense of being realistic, and thus you can hardly feel the envy caused by advertisements. You are free to participate vicariously and then to come back to reality, refreshed from the diversion, but otherwise unchanged.

The prime indication of entertainment is that it is meant to be effortless. The viewer is not there to be challenged, but to be diverted. Thus most bestselling novels are written with short words, simple sentences, and stereotypical plotlines stuffed full of clichés—because these are easy to understand. Likewise, popular music uses common chord progressions and trite lyrics to make hits—music to dance to, to play in the background, to sing along to, but not to think about. This is entertainment: it does not reconnect us with our senses, our language, our ideas, but draws us into fantasy worlds, worlds with spies, pirates, vampires, worlds where everyone is attractive and cool, where you can be anything you want, for at least a few hours.

Some thinkers, most notably Theodor Adorno, have considered this quality of popular culture to be destructive. They abhor the way that people lull their intellects to sleep, tranquilized with popular garbage that deactivates their minds rather than challenges them. And this point cannot be wholly dismissed. But I tend to see escapism in a more positive light; people are tired, people are stressed, people are bored—they need some release. As long as fantasy does not get out of hand, becoming a goal in itself instead of only a diversion, I see no problem with it.

This, in my opinion, is the essential difference between art and entertainment. There is also an essential difference, I think, between art and craft.

Craft is a dedication to the techniques of art, rather than its goals. Of course, there is hardly such a thing as a pure craft or a pure art; no artist completely lacks a technique, and no craftsman totally lacks aesthetic originality. But there are certainly cases of artists whose technique stands at a bare minimum, as well as craftsmen who are almost exclusively concerned with the perfection of technique.

Here I must clarify that, by technique, I do not mean simply manual skills like brush strokes or breath control. I mean, more generally, the mastery of a convention.

Artistic conventions consist of fossilized aesthetics. All living aesthetics represent the individual visions of artists—original, fresh, and personal. All artistic conventions are the visions of successful artists, usually dead, which have ceased to be refreshing and have become charmingly familiar. Put another way, conventional aesthetics are the exceptions that have been made the rule. Not only that, but conventions often fossilize only the most obvious and graspable elements of the brilliant artists of the past, leaving behind much of their living fibre.

This can be exemplified if we go and examine the paintings of William-Adolphe Bouguereau in the Musée d’Orsay. Even from a glance, we can tell that he was a masterful painter. Every detail is perfect. The arrangement of the figures, the depiction of light and shadow, the musculature, the perspective—everything has been performed with exquisite mastery. My favorite painting of his is Dante and Virgil in Hell, a dramatic rendering of a scene from Dante’s Inferno. Dante and his guide stand to one side, looking on in horror as one naked man attacks another, biting him in the throat. In the distance, a flying demon smiles, while a mound of tormented bodies writhes behind. The sky is a fiery red and the landscape is bleak.


I think it is a wonderful painting. Even so, Dante and Virgil seems to exist more as a demonstration than as art. For the main thing that makes a painting art, and the main thing this painting lacks, is an original vision. The content has been adopted straightforwardly from Dante. The technique, although perfectly executed, shows no innovations of Bouguereau’s own. All the tools he used had been used before; he merely learned them. Thus the painting, however impressive, ultimately seems like a technical exercise. And this is the essence of craft.

§

I fear I have said more about what art isn’t than what it is. That’s because it is admittedly much easier to define art negatively than positively. Just as mystics convey the incomprehensibility of God by listing all the things He is not, maybe we can do the same with art?

Here is my list so far. Art is not entertainment, meant to distract with fantasy. Art is not craft, meant to display technique and obey rules. Art is not simply an intellectual challenge, meant to shock and frustrate your habitual ways of being. I should say art is not necessarily any of these things, though it can be, and often is, all of them. Indeed, I would contend that the greatest art entertains, challenges, and displays technical mastery, and yet cannot be reduced to any or all of these things.

Here I wish to take an idea from the literary critic Harold Bloom, and divide artworks into period pieces and great works. Period pieces are works that are highly effective in their day, but quickly become dated. These works are too narrowly targeted at one specific cultural atmosphere to last. In other words, they are so preoccupied with the habits prevalent at one place and time that they become irrelevant as time passes.

To pick just one example, Sinclair Lewis’s Babbitt, which I sincerely loved, may be too engrossed in the foibles of 20th-century American culture to be still relevant in 500 years. Its power comes from its total evisceration of American ways; and, luckily for Lewis, those ways have changed surprisingly little in their essentials since his day. The book’s continuing appeal therefore depends largely on how much the culture does or does not change. (That said, the novel has a strong existentialist theme that may allow it to persist.)

Thus period pieces largely concern themselves with getting us to question particular habits or assumptions—in Lewis’s case, the vanities and superficialities of American life.

The greatest works of art, by contrast, are great precisely because they reconnect us with the mystery of the world. They don’t just get us to question certain assumptions, but all assumptions. They bring us face to face with the incomprehensibility of life, the great and frightening chasm that we try to bridge over with habit and convention.

No matter how many times we watch Hamlet, we can never totally understand Hamlet’s motives, the mysterious inner workings of his mind. No matter how long we stare into van Gogh’s eyes, we can never penetrate the machinations of that elusive mind. No matter how many times we listen to Bach’s Art of Fugue, we can never entirely wrap our minds around the dancing, weaving melodies, the baffling mixture of mathematical elegance and artistic sensitivity.

Why are these works so continually fresh? Why do they never seem to grow old? I cannot say. It is as if they are infinitely subtle, allowing you to discover new shades of meaning every time they are experienced anew. You can fall into them, just as I felt myself falling into van Gogh’s eyes as he stared at me across space and time.

When I listen to the greatest works of art, I feel like I do when I stare into the starry sky: absolutely small in the presence of something immense and immensely beautiful. Listening to Bach is like listening to the universe itself, and reading Shakespeare is like reading the script of the human soul. These works do not merely reconnect me to my senses, helping me to rid myself of boredom. They do not merely remind me that the world is an interesting place. Rather, these works remind me that I myself am a small part of an enormous whole, and should be thankful for every second of life, for it is a privilege to be alive somewhere so lovely and mysterious.

The Illogic of Discrimination

Discrimination is a problem. It is a blight on society and a blemish on personal conduct. During the last one hundred or so years, the fight against discrimination has played an increasingly important role in political discourse, particularly on the left: against racism, sexism, homophobia, transphobia, and white privilege. Nowadays this discourse has its own name: identity politics. We both recognize and repudiate more kinds of discrimination than ever before.

This is as it should be. Undeniably many forms of discrimination exist; and discrimination—depriving people of rights and privileges without legitimate reason—is the enemy of equality and justice. If we are to create a more fair and open society, we must fight to reduce prejudice and privilege as much as we can. Many people are already doing this, of course; and identity politics is rightly here to stay.

And yet, admirable as the goals of identity politics are, I am often dissatisfied with its discourse. Specifically, I think we are often not clear about why certain statements or ideas are discriminatory. Often we treat certain statements as prejudiced because they offend people. I have frequently heard arguments of this form: “As a member of group X, I am offended by Y; therefore Y is discriminatory to group X.”

This argument—the Argument from Offended Feelings, as I’ll call it—is unsatisfactory. First, it is fallacious because it generalizes improperly. It is the same error someone commits when they conclude, from eating bad sushi once, that all sushi is bad: the argument takes one case and applies it to a whole class of things.

Even if many people, all belonging to the same group, find a certain remark offensive, it still is invalid to conclude that the remark is intrinsically discriminatory: this only shows that many people think it is. Even the majority may be wrong—such as the many people who believe that the word “niggardly” comes from the racial slur and is thus racist, while in reality the word has no etymological or historical connection with the racial slur (it comes from Middle English).

Subjective emotional responses should not be given an authoritative place in the question of prejudice. Emotions are not windows into the truth. They are of no epistemological value. Even if everybody in the world felt afraid of me, it would not make me dangerous. Likewise, emotional reactions are not enough to show that a remark is discriminatory. To do that, it must be shown how the remark incorrectly assumes, asserts, or implies something about a certain group.

In other words, we must constantly keep in mind the difference between a statement being discriminatory and its being merely offensive. Discrimination is wrong because it leads to unjust actions; offending people, on the other hand, is not intrinsically wrong. Brave activists, fighting for a good cause, often offend many.

Thus it is desirable to have logical tests, rather than just emotional responses, for identifying discriminatory remarks. I hope to provide a few tools in this direction. But before that, here are some practical reasons for preferring logical to emotional criteria.

Placing emotions, especially shared emotions, at the center of any moral judgment makes a community prone to fits of mob justice. If the shared feelings of outrage, horror, or disgust of a group are sufficient to condemn somebody, then we have the judicial equivalent of a witch-hunt: the evidence for the accusation is not properly examined, and the criteria that separate good evidence from bad are ignored.

Another practical disadvantage of giving emotional reactions a privileged place in judgments of discrimination is that it can easily backfire. If enough people say that they are not offended, or if emotional reactions vary from outrage to humor to ambivalence, then the community cannot come to a consensus about whether any remark or action is discriminatory. Insofar as collective action requires consensus, this is an obvious limitation.

What is more, accusations of discrimination are extremely easy to deny if emotional reactions are the ultimate test. The offended parties can simply be dismissed as “over-sensitive” (a “snowflake,” more recently), which is a common rhetorical strategy among the right (and is sometimes used on the left, too). The wisest response to this rhetorical strategy, I believe, is not to re-affirm the validity of emotions in making judgments of discrimination—this leads you into the same trap—but to choose more objective criteria. Some set of non-emotional, objective criteria for determining whether an action is discriminatory is highly desirable, I think, since there is no possibility of a lasting consensus without it.

So if these emotional tests can backfire, what less slippery test can we use?

To me, discriminatory ideas—and the actions predicated on these ideas—are discriminatory precisely because they are based on a false picture of reality: they presuppose differences that do not exist, and mischaracterize or misunderstand the differences that do exist. This is important, because morally effective action of any kind requires a basic knowledge of the facts. A politician cannot provide for her constituents’ needs if she does not know what they are. A lifeguard cannot save a drowning boy if he is not paying attention to the water. Likewise, social policies and individual actions, if they are based on a false picture of human difference, will be discriminatory, even with the best intentions in the world.

I am not arguing that discrimination is wrong purely because of this factual deficiency. Indeed, if I falsely think that all Hungarians love bowties, although this idea is incorrect and therefore discriminatory, it will likely not make me do anything immoral. Thus it is possible, in theory at least, to hold discriminatory views and yet be a perfectly ethical person. It is therefore necessary to distinguish between whether a statement is offensive (it upsets people), discriminatory (it is factually wrong about a group of people), and immoral (it harms people and causes injustice). The three categories do not necessarily overlap, in theory or in practice.

It is obvious that, in our society, discrimination is usually far more nefarious than believing that Hungarians love bowties. Discrimination harms people, sometimes kills people; and discrimination causes systematic injustice. My argument is that to prove any policy or idea is intrinsically discriminatory requires proving that it asserts something empirically false.

Examples are depressingly numerous. Legal segregation in the United States was based on the premise that there existed a fundamental difference between blacks and whites, a difference that justified different treatment and physical separation. Similarly, Aristotle argued that slavery was legitimate because some people were born slaves: they were intrinsically slavish. Now, both of these ideas are empirically false. They assert things about reality that are either meaningless, untestable, or contrary to the evidence; and so any actions predicated on these ideas will be discriminatory—and, as it turned out, horrific.

These are not special cases. European antisemitism has always incorporated myths and lies about the Jewish people: tales of Jewish murders of Christian children, of widespread Jewish conspiracies, and so on. Laws barring women from voting and rules preventing women from attending universities were based on absurd notions about women’s intelligence and emotional stability. Name any group which has faced discrimination, and you can find a corresponding myth that attempts to justify the prejudice. Name any group which has dominated, and you can find an untruth to justify their “superiority.”

In our quest to determine whether a remark is discriminatory, it is worth taking a look, first of all, at the social categories themselves. Even superficial investigation will reveal that many of our social categories are close to useless, scientifically speaking. Our understanding of race in the United States, for example, gives an entirely warped picture of human difference. Specifically, the terms “white” and “black” have shifted in meaning and extent over time, and in any case were never based on empirical investigation.

Historically speaking, our notion of what it means to be “white” used to be far more exclusive than it is now, previously excluding Jews and Eastern Europeans. Likewise, as biological anthropologists never tire of telling us, there is more genetic variation within the continent of Africa than in the rest of the world combined. Our notions of “white” and “black” simply fail to do justice to the extent of genetic variation and intermixture that exists in the United States. We categorize people into a useless binary using crude notions of skin color. Any policy based on supposed innate, universal differences between “black” and “white” will therefore be based on a myth. Similar criticisms can be made of our common notions of gender and sexual orientation.

Putting aside the sloppy categories, discrimination may be based on bad statistics and bad logic. Here are the three errors I think are most common in discriminatory remarks.

The first is to generalize improperly: to erroneously attribute a characteristic to a group. This type of error is exemplified by Randy Newman’s song “Short People,” when he says short people “go around tellin’ great big lies.” I strongly suspect that it is untrue that short people tell, on average, more lies than taller people, which makes this an improper generalization.

This is a silly example, of course. And it is worth pointing out that some generalizations about group differences are perfectly legitimate. It is true, for example, that Spanish people eat more paella than Japanese people. When done properly, generalizations about people are useful and often necessary. The problem is that we are often poor generalizers. We jump to conclusions—using the small sample of our experience to justify sweeping pronouncements—and we are apt to give disproportionate weight to conspicuous examples, thus skewing our judgments.

Our poor generalizations are, all too often, mixed up with more nefarious prejudices. Trump exemplified this when he tweeted a table of crime statistics back in November of 2015. The statistics were ludicrously wrong in every respect. Notably, they claimed that more whites are killed by blacks than by other whites, when in reality more whites are killed by other whites. (This shouldn’t be a surprise, since most murders take place within the same community; and since people of the same race tend to live in the same community, most murders are intra-racial.)

The second type of error involved in prejudice is to make conclusions about an individual based on their group. This is a mistake even when the generalizations about the group are accurate. Even if it were statistically true, for example, that short people lied more often than tall people, it would still be invalid to assume that any particular short person is a liar.

The logical mistake is obvious: even if a group has certain characteristics on average, that does not mean that every individual will have those characteristics. On average, Spaniards are shorter than I am; but that does not mean I can safely assume any particular Spaniard will be shorter than I am. On average, most drivers are looking out for pedestrians; but that doesn’t mean I can safely run into the road.
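To make the statistical point concrete, here is a minimal sketch in Python, with entirely invented numbers: even when one group is shorter on average, a large share of its members will still be taller than a randomly chosen member of the “taller” group.

```python
# Illustrative only: the means and spreads below are invented.
import random

random.seed(0)

# Heights in cm, drawn from two hypothetical populations.
group_a = [random.gauss(172, 7) for _ in range(100_000)]  # shorter on average
group_b = [random.gauss(177, 7) for _ in range(100_000)]

avg_a = sum(group_a) / len(group_a)
avg_b = sum(group_b) / len(group_b)

# How often is a random member of the "shorter" group actually taller
# than a random member of the "taller" group?
share_taller = sum(a > b for a, b in zip(group_a, group_b)) / len(group_a)

print(f"average A: {avg_a:.1f} cm, average B: {avg_b:.1f} cm")
print(f"pairs where A's member is the taller one: {share_taller:.0%}")
```

With these made-up numbers, the member of the “shorter” group turns out to be the taller one in roughly three out of ten pairings—a reminder that a difference in averages licenses almost no conclusion about any particular individual.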

Of course, almost nobody, given a half-second to reflect, would make the mistake of believing that every single member of a given group has a certain quality. More often, people are simply mistaken about how likely a particular person is to have that quality—and usually we greatly overestimate.

It is statistically true, for example, that Asian Americans tend to do well on standardized math and science exams. But this generalization, which is valid, does not mean you can safely ask any Asian American friend for help on your science homework. Even though Asian Americans do well in these subjects as a group, you should still expect to see many individuals who are average or below average. This is basic statistics—and yet this error accounts for a huge amount of racist and sexist remarks.

Aside from falsely assuming that every member of a group will be characterized by a generalization, the second error also results from forgetting intersectionality: the fact that any individual is inevitably a member of many, intersecting demographic groups. Race, gender, income bracket, sexual orientation, education, religion, and a host of other categories will apply to any single individual. Predicting how the generalizations associated with these categories—which may often make contradictory predictions—will play out in any individual case, is close to impossible.

This is not even to mention all of the manifold influences on behavior that are not included in these demographic categories. Indeed, it is these irreducibly unique experiences, and our unique genetic makeup, that make us individuals in the first place. Humans are not just members of a group, nor even members of many different, overlapping groups: each person is sui generis.

In sum, humans are complicated—the most complicated things in the universe, so far as we know—and making predictions about individual people using statistical generalizations of broad, sometimes hazily defined categories, is hazardous at best, and often foolish. Moving from the specific to the general is fairly unproblematic; we can collect statistics and use averages and medians to analyze sets of data. But moving from the general to the specific is far more troublesome.

The third error is to assert a causal relationship where we only have evidence for correlation. Even if a generalization is valid, and even if an individual fits into this generalization, it is still not valid to conclude that an individual has a certain quality because they belong to a certain group.

Let me be more concrete. As we have seen, it is a valid generalization to say that Asian Americans do well on math and science exams. Now imagine that your friend John is Asian American, and also an excellent student in these subjects. Even in this case, to say that John is good at math “because he’s Asian” would still be illogical (and therefore racist). Correlation does not show causation.

First of all, it may not be known why Asian Americans tend to do better. And even if a general explanation is found—for example, that academic achievement is culturally prized and thus families put pressure on children to succeed—this explanation may not apply in your friend John’s case. Maybe John’s family does not pressure him to study and he just has a knack for science.

Further, even if this general explanation did apply in your friend John’s case (his family pressures him to study for cultural reasons), the correct explanation for him being a good student still wouldn’t be “because he’s Asian,” but would be something more like “because academic achievement is culturally prized in many Asian communities.” In other words, the cause would be ultimately cultural, and not racial. (I mean that this causation would apply equally to somebody of European heritage being raised in an Asian culture, a person who would be considered “white” in the United States. The distinction between cultural and biological explanations is extremely important, since one posits only temporary, environmental differences while the other posits permanent, innate differences.)
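The point about causes can also be put in quantitative terms. Below is a minimal, purely hypothetical sketch in Python: exam scores are driven entirely by whether studying is encouraged at home, while group membership merely correlates with that encouragement. The raw averages differ by group, yet “because he belongs to group X” would still be the wrong explanation for any individual’s score.

```python
# Illustrative only: all numbers and group labels are invented.
import random

random.seed(1)

def simulate_person(group):
    # Scores depend only on whether studying was encouraged at home,
    # plus random noise; group membership has no direct effect.
    encouraged = random.random() < (0.7 if group == "X" else 0.4)
    score = 70 + (10 if encouraged else 0) + random.gauss(0, 8)
    return encouraged, score

people = [(g, *simulate_person(g)) for g in ("X", "Y") for _ in range(50_000)]

def mean(values):
    return sum(values) / len(values)

# Raw comparison: group X scores higher on average (a real correlation).
for g in ("X", "Y"):
    print(g, "overall:", round(mean([s for grp, e, s in people if grp == g]), 1))

# Holding encouragement fixed, the group difference disappears.
for g in ("X", "Y"):
    print(g, "encouraged only:", round(mean([s for grp, e, s in people if grp == g and e]), 1))
```

Once the real driver (encouragement) is held fixed, the apparent group effect vanishes—which is exactly why a correlation between group and outcome cannot, by itself, justify a “because he’s X” explanation.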

In practice, these three errors are often run together. An excellent example of this is from Donald Trump’s notorious campaign announcement: “When Mexico sends its people, they’re not sending their best. … They’re sending people that have lots of problems, and they’re bringing those problems with us [sic.]. They’re bringing drugs. They’re bringing crime. They’re rapists.”

Putting aside the silly notion of Mexico “sending” its people (they come of their own accord), the statement is discriminatory because it generalizes falsely. Trump’s words give the impression that a huge portion, maybe even the majority, of Mexican immigrants are criminals of some kind—and this isn’t true. (In reality, the crime statistics for undocumented immigrants can put native-born citizens to shame.)

Trump then falls into the third error by treating people as inherently criminal—the immigrants simply “are” criminals, as if they were born that way. Even if it were proven that Mexican immigrants had significantly higher crime rates, it would still be an open question why this was so. The explanation might have nothing to do with their cultural background or any previous history of criminality. It might be found, for example, that poverty and police harassment significantly increased criminality; and in this case the government would share some of the responsibility.

Donald Trump committed the second error in his infamous comments about Judge Gonzalo Curiel, who was overseeing a fraud lawsuit against Trump University. Trump attributed Curiel’s (perceived) hostility to his Mexican heritage. Trump committed a simple error of fact when he called Curiel “Mexican” (Curiel was born in Indiana), and then committed a logical fallacy when he concluded that the judge’s actions and attitudes were due to his being of Mexican heritage. Even if it were true (as I suspect it is) that Mexican-Americans, on the whole, don’t like Trump, it still doesn’t follow that any given individual Mexican-American doesn’t like him (the second error); and even if Curiel did dislike Trump, it wouldn’t follow that it was because of his heritage (the third error).

These errors and mistakes are just my attempt at an outline of how discrimination can be criticized on logical, empirical grounds. Certainly there is much more to be said in this direction. What I hoped to show in this piece was that this strategy is viable, and ultimately more desirable than using emotional reactions as a test for prejudice.

Discourse, agreement, and cooperation are impossible when people are guided by emotional reactions. We tend to react emotionally along the lines of factions—indeed, our emotional reactions are conditioned by our social circumstances—so privileging emotional reactions will only exacerbate disagreements, not help to bridge them. In any case, besides the practical disadvantages—which are debatable—I think emotional reactions are not reliable windows into the truth. Basing reactions, judgments, and criticisms on sound reasoning and dependable information is always a better long-term strategy.

For one, this view of discrimination provides an additional explanation for why prejudice is so widespread and difficult to eradicate. We humans have inherited brains that are constantly trying to understand our world in order to navigate it more efficiently. Sometimes our brains make mistakes because we generalize too eagerly from limited information (1st error), or because we hope to fit everything into the same familiar pattern (2nd error), or because we are searching for causes of the way things work (3rd error).

So the universality of prejudice can be partially explained, I think, by the need to explain the social world. And once certain ideas become ingrained in somebody’s worldview, it can be difficult to change their mind without undermining their sense of reality or even their sense of identity. This is one reason why prejudices can be so durable (not to mention that certain prejudices justify convenient, if morally questionable, behaviors, as well as signal a person’s allegiance to a certain group).

I should say that I do not think that discrimination is simply the result of observational or logical error. We absorb prejudices from our cultural environment; and these prejudices are often associated with divisive hatreds and social tension. But even these prejudices absorbed from the environment—that group x is lazy, that group y is violent, that group z is unreliable—inevitably incorporate some misconception of the social world. Discrimination is not just a behavior. Mistaken beliefs are involved—sometimes obliquely, to be sure—in any prejudice.

This view of prejudice—as caused, at least in part, by an incorrect picture of the world rather than by pure moral depravity—may also allow us to combat it more effectively. It is easy to imagine a person with an essentially sound sense of morality who nevertheless perpetrates harmful discrimination because of prejudices absorbed from her community. Treating such a person as a monster will likely produce no change of perspective; people are not liable to listen when they’re being condemned. Focusing on somebody’s misconceptions may allow for a less adversarial, and perhaps more effective, way of combating prejudice. And this is not to mention the obvious fact that somebody cannot be morally condemned for something they cannot help; and we cannot help it if we’re born into a community that instructs its members in discrimination.

Even if this view does not adequately explain discrimination, and even if it does not provide a more effective tool in eliminating it, this view does at least orient our gaze towards the substance rather than the symptoms of discrimination.

Because of their visibility, we tend to focus on the trappings of prejudice—racial slurs, the whitewashed casts of movies, the use of pronouns, and so on—instead of the real meat of it: the systematic discrimination—economic, political, judicial, and social—that is founded on an incorrect picture of the world. Signs and symptoms of prejudice are undeniably important; but eliminating them will not fix the essential problem: that we see differences that aren’t really there, we assume differences without having evidence to justify these assumptions, and we misunderstand the nature and extent of the differences that really do exist.

On the Quarter-Life Crisis

From College to Chaos

In the modern world, there is a certain existential dread that comes with being in your twenties. Certainly this is true in my case.

This dread creeps up on you in the years of struggle, confusion, and setbacks that many encounter after graduating university. There are many reasons for this.

One is that college simply does not prepare you for the so-called “real world.” In college, you know what you have to do, more or less. Every class has a syllabus. Every major has a list of required courses. You know your GPA and how many credits you need to graduate.

College lacks some of that uncertainty and ambiguity that life—particularly life as a young adult—so abundantly possesses. There is a clear direction forward and it’s already been charted out for you. You know where you’re going and what you have to do to get there.

Another big difference is that college life is fairly egalitarian. Somebody might have a cuter boyfriend, a higher GPA, a richer dad, or whatever, but in the end you’re all just students. As a consequence, envy doesn’t have very much scope. Not that college students don’t get envious, but there are far fewer things, and less serious things, to get envious about. You don’t scroll through your newsfeed and see friends bragging about promotions, proposals, babies, and paid vacations.

There’s one more big difference: nothing you do in college is potentially a big commitment. The biggest commitment you have to make is what to major in; and even that is only a commitment for four years or less. Your classes only last a few months, so you don’t have to care much about professors. You are constantly surrounded by people your age, so friendships and relationships are easy to come by.

Then you graduate, and you’re thrown into something else entirely. Big words like Career and Marriage and Adulthood start looming large. You start asking yourself questions. When you take a job, you ask yourself “Can I imagine doing this for the rest of my life?” When you date somebody, you say to yourself “Can I imagine living with this person the rest of my life?” If you move to another city, you wonder “Could I make a home here?”

You don’t see adults as strange, foreign creatures anymore, but as samples of what you might become. You are expected, explicitly and implicitly, to become an adult yourself. But how? And what type of adult? You ask yourself, “What do I really want?” Yet the more you think about what you want, the less certain it becomes. It’s easy to like something for a day, a week, a month. But for the rest of your life? How are you supposed to commit yourself for such an indefinitely long amount of time?

Suddenly your life is not just potential anymore. Very soon, it will become actual. Instead of having a future identity, you will have a present identity. This is really frightening. When your identity is only potential, it can take on many different forms in your imagination. But when your identity is present and actual, you lose the deliciousness of endless possibility. You are narrowed down to one thing. Now you have to choose what that thing will be. But it’s such a hard choice, and the clock keeps ticking. You feel like you’re running out of time. What will you become?

The American Dream

A few weeks ago I was taking a long walk, and my route took me through a wealthy suburban neighborhood. Big, stately houses with spacious driveways, filled with expensive cars, surrounded me on all sides. The gardens were immaculate; the houses had big lawns with plenty of trees, giving them privacy from their neighbors. And they had a wonderful view, too, since the neighborhood was right on the Hudson River.

I was walking along, and I suddenly realized that this is what I’m supposed to want. This is the American Dream, right? A suburban house, a big lawn, a few cars and a few kids.

For years I’d been torturing myself with the idea that I would never achieve success. Now that I was looking at success, what did it make me feel? Not much. In fact, I didn’t envy the people in those houses. It’s not that I pitied them or despised them. I just couldn’t imagine that their houses and cars and their view of the river, wonderful as it all was, made them appreciably happier than people without those things.

So I asked myself, “Do I really want all these things? A house? A wife? Kids?” In that moment, the answer seemed to be “No, I don’t want any of that stuff. I want my freedom.”

Yet nearly everybody wants this stuff—eventually. And I have a natural inclination to give people some credit. I don’t think folks are mindless cultural automatons who simply aspire to things because that’s how they’ve been taught. I don’t think everybody who wants conventional success is a phony or a sell-out.

Overwhelmingly, people genuinely want these things when they reach a certain point in their lives. I’m pretty certain I will want them, too, and maybe soon. The thing that feels uncomfortable is that, in the meantime, since I expect to want these things, I feel an obligation to work towards them, even though they don’t interest me now. Isn’t that funny?

Equations of Happiness

One of the reasons that these questions can fill us with dread is that we absorb messages from society about the definition of happiness.

One of these messages is about our career. Ever since I was young, I’d been told “Follow your passion!” or “Follow your dreams!” The general idea is that, if you make your passion into your career, you will be supremely happy, since you’ll get paid for what you like doing. Indeed, the phrase “Get paid for what you like doing” sometimes seems like a pretty decent definition of happiness.

Careers aren’t the only thing we learn to identify with happiness. How many stories, novels, and movies end with the boy getting the girl, and the couple living happily ever after? In our culture, we have a veritable mythology of love. Finding “the one,” finding your “perfect match,” and in the process finding the solution to life—this is a story told over and over again, until we subconsciously believe that romantic love is the essential ingredient of life.

Work and Love are two of the biggest, but there are so many other things that we learn to identify with happiness. Having a perfect body, being beautiful and fit. Beating others in competitions, winning contests, achieving things. Being cool and popular, getting accepted into a group. Avoiding conflict, pleasing others. Having the right opinions, knowing the truth. This list only scratches the surface.

In so many big and little ways, in person and in our media, we equate these things with happiness and self-worth. And when we even suspect that we don’t have them—that we might not be successful, popular, right, loved, or whatever—then we feel a sickening sense of groundlessness, and we struggle to put that old familiar ground beneath our feet.

Think of all the ways that you measure yourself against certain self-imposed standards. Think of all the times you chastise yourself for falling short, judge yourself harshly for failing to fit the self-image you’ve built up, or fall into a dark hole when something didn’t go right. Think about all the things you equate with happiness.

Now, think about how you judge your good friends. Do you look down on them if they aren’t successful? Do you think they’re worthless if they didn’t find “the one”? Do you spend much time judging them for their attractiveness, popularity, or coolness? Do you like them less if they lose or fail? If someone else rejects them, do you feel more prone to reject them too?

I’d wager the answer to all these questions is “No.” So why do we treat ourselves this way?

Is it the Money?

There’s no question that the quarter-life crisis is partly a product of privilege. It takes a certain amount of affluence to agonize over what will be my “calling” or who will be “the one.” Lots of people have to pay the rent; and their work and romantic options are shaped by that necessity. When you’re struggling to keep your head above water, your anxiety is more practical than existential. This thought makes me feel guilty for complaining.

But affluence is only part of it. The other part is expectation. Many of us graduated full of hope and optimism, and found ourselves in a limping economy, dragging behind us a big weight of college debt. Just when we were supposed to be hitting the ground running, we were struggling to find jobs and worrying how to pay for the degrees we had just earned. And since many of us had been encouraged—follow your dreams!—to study interesting but financially impractical things, our expensive degrees seemed to hurt us more than help us.

This led to a lot of bitterness. My generation had been told that we could be anything we wanted. Just do the thing you’re passionate about, and everything will follow. That was the advice. But when we graduated, it seemed that we’d been conned into paying thousands of dollars for a worthless piece of paper. This bred a lot of anger and disenchantment among twenty-somethings, which is why, I think, so many of us gravitated towards Bernie Sanders. Our parents had a car and a house and raised a family, while we were living at home, working at Starbucks, and using our paychecks to pay off our anthropology degrees.

For a long while I used my sense of injustice to justify my angst. I had the persistent feeling that it wasn’t fair, and I went back and forth between being angry at myself and being angry at the world.

Nevertheless, I think that, for most middle class people, financial factors don’t really explain the widespread phenomenon of the quarter-life crisis.

I realized this when I started my first decent-paying job. I wasn’t making a lot of money, you understand, but I was making more than enough for everything I wanted. The result? I felt even worse. When I took care of the money problem, the full weight of the existential crisis hit me. I kept asking myself, “Can I really imagine doing this forever?” I thought about my job, and felt empty. And this feeling of emptiness really distressed me, because I thought my job was supposed to be exciting and fulfilling.

This was a valuable lesson for me. I expected the money to calm me and make me happy, and yet I only felt worse and worse. Clearly, the problem was with my mindset and not my circumstances. How to fix it?

From Crisis to Contentment

Well, I’m not out of it yet. But I have made some progress.

First, I think it’s important to take it easy on ourselves. We are so prone to hold ourselves up to certain self-imposed standards, or to some fixed idea of who we are. We also like to compare ourselves with others, feeling superior when we’re doing “better” and worthless when we’re doing “worse.” Take it easy with all that. All of these standards are unreal. You tell yourself you’re “supposed” to be doing such and such, making this much money, and getting engaged by whatever age. All this is baloney. You aren’t “supposed” to be or to do anything.

Bertrand Russell said: “At twenty men think that life will be over at thirty. I, at the age of fifty-eight, can no longer take that view.” He’s right: There is nothing magical about the age of thirty. There is no age you pass when you don’t have to worry about money, about your boss, about your partner, about your health. There will always be something to worry about. There will always be unexpected curveballs that upset your plans. Don’t struggle to escape the post-college chaos; try to accept it as normal.

Don’t equate your happiness or your self-worth with something external. You are not your job, your hobby, your paycheck, your body, your friend group, or your relationship. You aren’t a collection of accomplishments or a Facebook profile. You’re a person, and you have worth just because you’re a person, pure and simple. Everything else is incidental.

If you want to be rich, famous, loved, successful—that’s fine, but that won’t make you any better than other people. It might not even make you happier. Don’t worry so much about putting ground under your feet. Don’t fret about establishing your identity. You will always be changing. Life will always be throwing problems at you, and sometimes things will go wrong. Try to get comfortable with the impermanence of things.

Don’t look for the “meaning” of life. Don’t look for “the answer.” Look for meaningful experiences of being alive. Appreciate those moments when you feel totally connected with life, and try to seek those moments out. Realize that life is just a collection of moments, and not a novel with a beginning, middle, and end.

These moments are what bring you happiness, not the story you tell about yourself. So you don’t have to feel existential dread about these big Adult Questions of Love and Work. It’s important to find a good partner and a good job. These things are very nice, but they’re not what give your life value or define you or make life worth living. Treat them as practical problems, not existential ones. Like any practical problem, they might not have a perfect solution, and you might fail—which is frustrating. But failure won’t make you worthless, just like success won’t legitimize your life.

One last thing. Stop caring about what other people think. Who cares? What do they know? Be a friend to yourself, be loyal to yourself. Every time you judge yourself, you betray yourself. In a thousand little ways throughout the day, we reject our experiences and our world. Don’t reject. Accept. Stand steadfastly by yourself as you ride down the steady stream of thoughts, feelings, flavors, colors, sounds, mistakes, accidents, failures, successes, and petty frustrations that make up life as we know it.

On Egotism and Education

A while ago a friend asked me an interesting question.

As usual, I was engrossed in some rambling rant about a book I was reading—no doubt enlarging upon the author’s marvelous intellect (and, by association, my own). My poor friend, who is by now used to this sort of thing, suddenly asked me:

“Do you really think reading all these books has made you a better person?”

“Well, yeah…” I stuttered. “I think so…”

An awkward silence took over. I could truthfully say that reading had improved my mind, but that wasn’t the question. Was I better? Was I wiser, more moral, calmer, braver, kinder? Had reading made me a more sympathetic friend, a more caring partner? I didn’t want to admit it, but the answer seemed to be no.

This wasn’t an easy thing to face up to. My reading was a big part of my ego. I was immensely proud, indeed even arrogant, about all the big books I’d gotten through. Self-study had strengthened a sense of superiority.

But now I was confronted with the fact that, however much more knowledgeable and clever I had become, I had no claim to superiority. In fact—although I hated even to consider the possibility—reading could have made me worse in some ways, by giving me a justification for being arrogant.

This phenomenon is by no means confined to me. Arrogance, condescension, and pretentiousness are ubiquitous qualities in intellectual circles. I know this both first- and second-hand. While lip-service is often paid to humility, the intellectual world is rife with egotism. And often I find that the more well-educated someone is, the more likely they are to assume a condescending tone.

This is the same condescending tone that I sometimes found myself using in conversations with friends. But condescension is of course more than a tone; it is an attitude towards oneself and the world. And this attitude can be fostered and reinforced by habits you pick up through intellectual activity.

One of these habits is argumentativeness, which for me is most closely connected with reading philosophy. Philosophy is, among other things, the art of argument; and good philosophers are able to bring to their arguments a level of rigor, clarity, and precision that is truly impressive. The irony here is that there is far more disagreement in philosophy than in any other discipline. To be fair, this is largely due to the abstract, mysterious, and often paradoxical nature of the questions philosophers investigate—questions which resist even the most thorough analysis.

Nevertheless, given that their professional success depends upon putting forward the strongest argument on a given problem, philosophers devote a lot of time to picking apart the theories and ideas of their competitors. Indeed, the demolition of a rival point of view can assume supreme importance. A good example of this is Gilbert Ryle’s The Concept of Mind—a brilliant and valuable book, but one that is mainly devoted to debunking an old theory rather than putting forward a new one.

This sort of thing isn’t confined to philosophy, of course. I have met academics in many disciplines whose explicit goal is to quash another theory rather than to provide a new one. I can sympathize with this, since proving an opponent wrong can feel immensely powerful. To find a logical fallacy, an unwarranted assumption, an ambiguous term, an incorrect generalization in a competitor’s work, and then to focus all your firepower on this structural weakness until the entire argument comes tumbling down—it’s really satisfying. Intellectual arguments can have all the thrill of combat, with none of the safety hazards.

But to steal a phrase from the historian Richard Fletcher, disputes of this kind usually generate more heat than light. Disproving a rival claim is not the same thing as proving your own claim. And when priority is given to finding the weaknesses rather than the strengths of competing theories, the result is bickering rather than the pursuit of truth.

To speak from my own experience, in the past I’ve gotten to the point where I considered it a sign of weakness to agree with somebody. Endorsing someone else’s conclusions without reservations or qualifications was just spineless. And to fail to find the flaws in another thinker’s argument—or, worse yet, to put forward your own flawed argument—was simply mortifying for me, a personal failing. Needless to say this mentality is not desirable or productive, either personally or intellectually.

Besides being argumentative, another condescending attitude that intellectual work can reinforce is name-dropping.

In any intellectual field, certain thinkers reign supreme. Their theories, books, and even their names carry a certain amount of authority; and this authority can be commandeered by secondary figures through name-dropping. This is more than simply repeating a famous person’s name (although that’s common); it involves positioning oneself as an authority on that person’s work.

Two books I read recently—Mortimer Adler’s How to Read a Book and Harold Bloom’s The Western Canon—are prime examples of this. Both authors wield the names of famous authors like weapons. Shakespeare, Plato, and Newton are bandied about, used to cudgel enemies and to cow readers into submission. References to famous thinkers and writers can even be used as substitutes for real argument. This is the infamous argument from authority, a fallacy easy to spot when explicit, but much harder to catch in the hands of a skilled name-dropper.

I have certainly been guilty of this. Even while I was still an undergraduate, I realized that big names have big power. If I even mentioned the names of Dante or Milton, Galileo or Darwin, Hume or Kant, I instantly gained intellectual clout. And if I found a way to connect the topic under discussion to any famous thinker’s ideas—even if that connection was tenuous and forced—it gave my opinions weight and made me seem more “serious.” Of course I wasn’t doing this intentionally to be condescending or lazy. At the time, I thought that name-dropping was the mark of a dedicated student, and perhaps to a certain extent it is. But there is a difference between appropriately citing an authority’s work and using their work to intimidate people.

There is a third way that intellectual work can lead to condescending attitudes, and that is, for lack of a better term, political posturing. This particular attitude isn’t very tempting for me, since I am by nature not very political, but this habit of mind is extremely common nowadays.

By political posturing I mean several related things. Most broadly, I mean when someone feels that people (himself included) must hold certain beliefs in order to be acceptable. These can be political or social beliefs, but they can also be more abstract, theoretical beliefs. In any group—be it a university department, a political party, or just a bunch of friends—a certain amount of groupthink is always a risk. Certain attitudes and opinions become associated with the group, and they become a marker of identity. In intellectual life this is a special hazard because proclaiming fashionable and admirable opinions can replace the pursuit of truth as the criterion of acceptability.

At its most extreme, this kind of political posturing can lead to a kind of gang mentality, wherein disagreement is seen as evil and all dissent must be punished with ostracism and mob justice. This can be observed in the Twitter shame campaigns of recent years, but a similar thing happens in intellectual circles.

During my brief time in graduate school, I felt an intense and ceaseless pressure to espouse leftist opinions. This pressure seemed to be ubiquitous: students and professors sparred with one another, in person and in print, by trying to prove that their rivals were not genuinely right-thinking (or “left-thinking,” as the case may be). Certain thinkers could not be seriously discussed, much less endorsed, because their works had intolerable political ramifications. Contrariwise, questioning the conclusions of properly left-thinking people could leave you vulnerable to accusations about your fidelity to social justice or economic equality.

But political posturing has a milder form: know-betterism. Know-betterism is political posturing without the moral outrage, and its victims are smug rather than indignant.

The book Language, Truth, and Logic by A.J. Ayer comes to mind, wherein the young philosopher, still in his mid-twenties, simply dismisses the work of Plato, Aristotle, Spinoza, Kant and others as hogwash, because it doesn’t fit into his logical positivist framework.

Indeed, logical positivism is an excellent example of the pernicious effects of know-betterism. In retrospect, it seems incredible that so many brilliant people endorsed it, because logical positivism has crippling and obvious flaws. But not only did people believe it; they thought it was "The Answer"—the solution to every philosophical problem—and considered anyone who thought otherwise a crank or a fool, somebody who couldn't see the obvious. This is the danger of groupthink: when everyone "in the know" believes something, it can seem obviously right, regardless of the strength of the ideas.

The last condescending attitude I want to mention is rightness—the obsession with being right. Now of course there’s nothing wrong with being right. Getting nearer to the truth is the goal of all honest intellectual work. But to be overly preoccupied with being right is, I think, both an intellectual and a personal shortcoming.

As far as I know, the only area of knowledge in which real certainty is possible is mathematics. The rest of life is riddled with uncertainty. Every scientific theory might, and probably will, be overturned by a better theory. Every historical treatise is open to revision when new evidence, priorities, and perspectives arise. Philosophical positions are notoriously difficult to prove, and new refinements are always around the corner. And despite the best efforts of the social sciences, the human animal remains a perpetually surprising mystery.

To me, this uncertainty in our knowledge means that you must always be open to the possibility that you are wrong. The feeling of certainty is just that—a feeling. Our most unshakeable beliefs are always open to refutation. But when you have read widely on a topic, studied it deeply, thought it through thoroughly, it gets more and more difficult to believe that you are possibly in error. Because so much effort, thought, and time has gone into a conclusion, it can be personally devastating to think that you are mistaken.

This is human, and understandable, but can also clearly lead to egotism. For many thinkers, it becomes their goal in life to impose their conclusions upon the world. They struggle valiantly for the acceptance of their opinions, and grow resentful and bitter when people disagree with or, worse, ignore them. Every exchange thus becomes a struggle to push your views down another person's throat.

This is not only an intellectual shortcoming—since it is highly unlikely that your views represent the whole truth—but it is also a personal shortcoming, since it makes you deaf to other people’s perspectives. When you are sure you’re right, you can’t listen to others. But everyone has their own truth. I don’t mean that every opinion is equally valid (since there are such things as uninformed opinions), but that every opinion is an expression, not only of thoughts, but of emotions, and emotions can’t be false.

If you want to have a conversation with somebody instead of giving them a lecture, you need to believe that they have something valuable to contribute, even if they are disagreeing with you. In my experience it is always better, personally and intellectually, to try to find some truth in what someone is saying than to search for what is untrue.

Lastly, being overly concerned with being right can make you intellectually timid. Going out on a limb, disagreeing with the crowd, putting forward your own idea—all this puts you at risk of being publicly wrong, and thus will be avoided out of fear. This is a shame. The greatest adventure you can take in life and thought is to be extravagantly wrong. Name any famous thinker, and you will be naming one of the most gloriously incorrect thinkers in history. Newton, Darwin, Einstein—every one of them has been wrong about something.

For a long time I have been the victim of all of these mentalities—argumentativeness, name-dropping, political posturing, know-betterism, and rightness—and to a certain extent I probably always will be. What makes them so easy to fall into is that they are positive attitudes taken to excess. It is admirable and good to subject claims to logical scrutiny, to read and cite major authorities, to advocate for causes you think are right, to respect the opinions of your peers and colleagues, and to prioritize getting to the truth.

But taken to excess, these habits can lead to egotism. They certainly have with me. This is not a matter of simple vanity. Not only can egotism cut you off from real intimacy with other people, but it can lead to real unhappiness, too.

When you base your self-worth on beating other people in argument, being more well read than your peers, being on the morally right side, being in the know, being right and proving others wrong, then you put yourself at risk of having your self-worth undermined. To be refuted will be mortifying, to be questioned will be infuriating, to be contradicted will be intolerable. Simply put, such an attitude will put you at war with others, making you defensive and quick-tempered.

An image that springs to mind is of a giant castle with towering walls, a moat, and a drawbridge. On the inside of this castle, in the deepest chambers of the inner citadel, is your ego. The fortifications around your ego are your intellectual defenses—your skill in rhetoric, logic, argument, debate, and your impressive knowledge. All of these defenses are necessary because your sense of self-worth depends on certain conditions: being perceived, and perceiving oneself, as clever, correct, well-educated, and morally admirable.

Intimacy is difficult in these circumstances. You let down the drawbridge for people you trust, and let them inside the walls. But you test people for a long time before you get to this point—making sure they appreciate your mind and respect your opinions—and even then, you don’t let them come into the inner citadel. You don’t let yourself be totally vulnerable, because even a passing remark can lead to crippling self-doubt when you equate your worth with your intellect.

Thus the fundamental mindset that leads to all of the bad habits described above is that being smart, right, or knowledgeable is the source of your worth as a human being. This is dangerous, because it means that you constantly have to reinforce the idea that you have all of these qualities in abundance. Life then becomes a constant performance, an act for others and for yourself. And because a part of you knows that it's an act—a voice you try to ignore—it also leads to considerable bad faith.

As for the solution, I can only speak from my own experience. The trick, I’ve found, is to let down my guard. Every time you defend yourself you make yourself more fragile, because you tell yourself that there is a part of you that needs to be defended. When you let go of your anxieties about being wrong, being ignorant, or being rejected, your intellectual life will be enriched. You will find it easier to learn from others, to consider issues from multiple points of view, and to propose original solutions.

Thus I can say that reading has made me a better person, not because I think intellectual people are worth more than non-intellectuals, but because I realized that they aren’t.

On Justice

A question that I often ask myself, especially now during the election season, is this: What makes a society just? Specifically, what are the criteria that determine whether a law is legitimate or a government is principled? I do not intend this question to be a legal question; whether something is constitutional is for me a secondary matter. The primary matter is this: What are the values on which a constitution is based that make it a worthy document?

Justice consists of the standards by which we determine whether a society is fair.

Justice is always a meta-standard—a standard applied to other standards that allows us to determine whether these other standards are worthy. For example, our standards of justice can be applied to our standards of ethics, to determine whether they need changing. But justice deals with other things besides crime and punishment. Economic justice deals with the fairness of the distribution of economic means; and social justice deals with the fairness of the treatment of different demographic groups in a society. All of these, however, deal with the fairness of certain standards, whether they are the standards for determining whether someone should go to jail, make a lot of money, or be treated differently.

The crux of the matter, of course, is what we mean by fair. This is what philosophers, politicians, and virtually everyone else disagree about. The problem is that fairness often seems like a self-evident concept, when in reality it is far from that.

To start, what seems fair or unfair can depend very much on your situation. Let us say a lion has managed to grab a gazelle and is about to eat it. From the gazelle's point of view, this situation is monstrously unfair. The gazelle didn't do anything to the lion, nor anything to anyone else, so why should it be the one being eaten? From the lion's point of view, on the other hand, the situation is absolutely fair. The lion was born with certain dietary needs; it has to hunt and kill to survive; it picked the gazelle it found easiest to catch. What's unfair about that? Should the lion starve itself?

Of course, lions and gazelles have no concept of fairness, so they lack this particular problem. But we have to deal with it. Finding a standard that can satisfy everyone in a given community is, I think, impossible. Every standard of fairness is bound to disappoint and embitter some. This is the basic tragedy of life. We can mourn this, but also learn from it. Since disappointment is unavoidable, and since perspective colors our notions of fair and unfair, it is clear that emotion alone cannot be the basis of a consistent standard of justice. We need something more objective, a clear set of principles that can be applied to any situation.

Let us start, as so many philosophers have before, with people in a so-called State of Nature. By this, I only mean people living without community of any sort—without rules, laws, or government, each person looking out for themself.

In this hypothetical (and wholly imaginary) situation, every person is maximally free. The only restrictions on people's actions exist through the necessities of life. If they want to survive, the natural people must devote time and energy to finding food and building shelter for themselves. If they choose not to kill another person, it might be because they want that person's help or because they are afraid of vengeance; but not because of moral scruples or fear of legal prosecution. If a natural person finds a loner in the forest and decides to kill him and take his stuff, there might not be consequences. It is up to each individual what to do. Their every action is thus a calculated risk.

There are clearly some advantages to this hypothetical state of affairs. Most conspicuously, each person is a master of themself and does not have to listen to anybody. They can live where they like, how they like; they can eat, sleep, and play whenever they wish. But the disadvantages are also considerable. The main problem is lack of security. Without laws or police, you would always need to fear your neighbor; without a social safety net, you would always live at the mercy of the elements. It would be a life of maximal freedom and constant danger.

To repeat, I am not saying that this ‘Natural State’ ever existed; to the contrary, I do not think humans ever existed without communities, and I am only calling it ‘natural’ in keeping with the philosophical tradition. I am merely using this scenario to illustrate what a situation of maximal freedom would look like—wherein the only checks on a person’s actions are due to natural, and not social, constraints; wherein bare necessity, and not rules, custom, or law, are what guide life.

Now let us imagine what will happen if the people decide to get together and form a little community. This will clearly entail some changes. Most relevantly for my purposes, the people will have to start developing ways of organizing their actions. This is because, as they will soon discover, their unbridled desires will inevitably come into conflict.

If, for example, there are 10 apples and 10 people, it might be the case that each person wants all ten for themself. But when each of them tries to take all the apples, they will of course start arguing. If they are going to continue living together, they need to develop a solution.

Perhaps three of them fashion spears and shields, and use their weapons to impose their will on the other seven. Thus an oligarchy emerges, in which the three masters make the seven slaves gather apples for them, leaving the slaves only the cores for meager sustenance. The masters punish disobedience, hunt down deserters, and grow fat while the others wither away.

This is the classic Might Makes Right solution to the problem of human society. Thinkers since Plato have been grappling with it, and as long as humans live together it will be a constant temptation. Nietzsche would say that a society wherein the strong dominate the weak is the fairest society of all—fairness itself, he might say, since people are being divided due to the natural law of strength and not the artificial law of custom. The devotees of Realpolitik—Thucydides and Machiavelli, to name just two—find this dominance of the strong over the weak inevitable; and the Social Darwinians go further and find it desirable.

Admittedly, the use of force does solve the problem of conflict, albeit brutally. A powerful few, by violent means, can indeed reduce infighting enough to produce a stable society. But I think most people instinctively recoil from the solution as unjust. After all, being born strong, violent, and domineering does not make you any more deserving of power than being born weak, meek, and kindhearted.

But let us take a closer look. In my society—namely, the modern West—we have attempted (in theory) to create a meritocracy, wherein the most intelligent and innovative people are able to become wealthy. But is a meritocracy of mind any more fair than a meritocracy of muscle? Is it any better to reward the clever than the cruel? Perhaps both systems are unfair, since they reward people based on an attribute that is not within their control. After all, you can't choose whether you're born a genius any more than whether you're born a warrior. Yes, rewarding the bright involves less bruising and bloodshed than rewarding the belligerent; but is it, in the strict sense, any more fair?

I think so, for the following reason. In a meritocracy of intelligence (in theory, at least) everybody possesses the same rights; whereas in a society governed by Might Makes Right, the rulers have different rights than the ruled. A simple example will suffice. If you agree to play chess with your friend, probably you won’t complain of injustice if your friend easily defeats you. Both of you are playing by the same rules, and your friend, either through practice or natural talent, simply operated within these rules more effectively than you did. But if your friend took out a knife, held it to your throat, and declared himself the winner, this would be clearly unfair, because your friend gave himself an extra dimension of power that you lacked.

Admittedly, a true advocate of Might Makes Right can, with total consistency, insist that the situation is still fair, since you could have thought of using a knife, too. Your friend had an idea you didn’t; what’s unfair about that? Using this logic, any rule-breaking can be regarded as fair, since anybody could have thought of any breach of the rules. To repeat, ‘fairness’ is a slippery concept; and some purists would insist that the only real fairness exists in the law of survival of the fittest. After all, aren’t all the rules of society just artificial contrivances used by the weak to entrap the strong? Many have thought so.

All I can say is that the advocates of Nietzsche's Will to Power and Social Darwinism do indeed have a self-consistent worldview that cannot be refuted without begging the question. Personally, I find a world governed by Might Makes Right immoral. All moral systems, in my view, must exist between equals and benefit each individual who takes part in them. Thus a society based on violent coercion cannot be moral—at least for me—since the members abide by the rules out of fear and not self-interest. Granted, a Social Darwinian or a Nietzschean would have a very different concept of morality, so again my criticism begs the question. All I can plead, therefore, is that I find Might Makes Right distasteful; so while acknowledging its logical appeal I will focus on other solutions to the problems of human society.

Now let us return to the problem of the ten people and their ten apples. We have considered and rejected the possibility of violent coercion, though my rejection was personal rather than philosophical. (The thorny problem with ‘justice’ is that it deals in fairness; and how do you decide if your standard of fairness is fair? Obviously you cannot without using circular logic, and thus your personal preferences come into play. As you will see shortly, we’re about to encounter this same problem again.)

We shall consider another solution: The community comes together and decides that the apple supply must always be divided equally between its members. Thus with ten apples each person gets one apple; with five apples each person gets a half, and so on. This is communism, of course, and represents another classic response to the problem of human society. Instead of the brutal law of strength, we get the perfect law of equality.

There is a certain elegance and undeniable appeal to communism. After all, what could be more fair than everyone getting the same thing? But upon closer inspection, it is easy to see how a communist system can also be considered unjust.

An obvious consideration is that every person does not have the same needs. If nine people were healthy but one person had a medical problem, would it be fair for every person to get the same amount of medical attention? Obviously that would be absurd; and even a hardline communist would admit that perfect equality should be abandoned with regard to medical care, since different people clearly have different needs.

But if individuals differ in their needs for medical care, how else might their needs differ? Perhaps one person only feels good after nine hours of sleep, while the others feel fine after seven. Is it then fair to ask all of them to sleep eight? Perhaps not. We can give the needy sleeper a special dispensation to sleep nine hours. But then won't this person be doing less work than the rest? Isn't that unfair too?

A trickier problem is distinguishing a need from a desire. We distinguish between the two quite strictly in our language, but in reality the difference is not so clear. To pick a silly example, if nine of the community prefer apples but one abhors apples and loves pears, this pear-lover will be doomed to constant gustatory dissatisfaction if all decisions with regard to the food supply are taken collectively. This sounds quite trivial, but the point is that different things make different people happy; thus giving every person the same thing, while fair with regard to supply, is possibly quite unfair in terms of satisfaction.

An additional possibility of unfairness is differential contribution. In a communist community, some people may work harder, innovate more, and keep scrupulously to the rules; others may not carry their weight, or may otherwise take advantage of the system. In sum, different people will contribute different amounts to the community. Some of this difference will be due to ability, and some to personality. In any case, it is arguably quite unfair that, whatever you put into the collective, you take out the same amount.

The above criticisms are not meant to discredit communism; rather, they are only meant to show that, even in ostensibly the most perfectly fair system, unfairness still exists. (Unfortunately, unfairness of some sort always exists.) As an individualist, I am not attracted by communism because I think people have different needs, desires, and abilities, and that society should reflect these differences; but this preference of mine is obviously of emotional and not philosophical character. In any case, I do not know of any successful large-scale, long-term societies that had a truly communist character (most ‘communist’ countries being so in name only); so I feel justified in moving on from communism as a possibility.

Let us return, therefore, to our ten people with their ten apples. They tried a military oligarchy, and there was a rebellion; then they tried communism, but they grew resentful and dissatisfied. Then somebody has a bright idea: Whoever picks the apple owns it. The picker can choose to eat it, store it, or give it away; but under no circumstances can another person take it without permission; and if anyone is caught stealing, the thief will have to pay a three-apple penalty. Our society has just invented the right to private property. Thus we see the birth of rights as a tool for organizing society.

There is nothing natural or God-given about a right. Rights are privileges agreed upon by the community, and exist by consent of the community. Rights are ways of organizing what people can and cannot do, to ensure that each person has a clearly delineated sphere of free action that does not impinge upon those of others. In other words, rights restrict people’s freedom at the point at which their freedom interferes with the freedom of their neighbors. A right to kill would thus be logically absurd, since if you killed me you would have deprived me of my right to kill. In other words, exercising your right extinguished my ability to exercise mine. This clearly will not do. This is why murder, larceny, and rape cannot be made into rights: They cannot be made universal, since they are actions that by definition involve the violation of other people’s autonomy.

Limitations on people's actions are only justified insofar as these limitations protect the freedom of others. Anything beyond this is unnecessary and therefore unjust. Thus a law against homicide is valid, but a law forbidding the eating of sesame seeds cannot be justified, since that action does not deprive anybody else of their liberty. The aim is to secure for each individual the biggest allowable range of mutually consistent actions. To accomplish this, it is more suitable to define rights negatively rather than positively. Rights, in other words, ought to be defined as freedoms from rather than freedoms to, in order to secure the maximum amount of available action. This is consistent with the principle that freedom should only be limited at the point at which it interferes with the freedom of others, since rights are defined as freedom from this interference.

We return, now, to our apple community. Things are going along quite well in this new system. Then something happens: a man breaks his leg, and thus cannot pick apples any more. He begins to starve, while his neighbors continue happily along. So one night he proposes a new rule: When a member of the community is hurt, the healthy members must donate a certain fraction of their food to support the injured person during their convalescence. Since anybody can get injured, the man argues, this rule could potentially benefit any one of them. The healthy members disagree with the proposal, arguing that contributing their own food to another person is an infringement of their rights.

Which party is correct? More broadly, I want to ask how disputes like these should be resolved, when members of the community differ in their preferences of rights.

To answer this, I will introduce a Hierarchy of Rights.

Rights can, I believe, be ordered into a hierarchy from more to less fundamental. The measure of a right's importance is the degree of autonomy that the right entails. Thus the most fundamental right is to life, since without life no other rights can be enjoyed. The right to be free from taxation is, by comparison, less important, since the loss of autonomy suffered through starvation is greater than that suffered through taxation.

In the above case, therefore, I think the just thing to do is to impose a tax to keep the injured man alive. Conversely, if somebody wanted to tax the population to build a gigantic statue of himself, this should be rejected, since the freedom to use one's own money is more fundamental a right than the freedom to build giant statues. Having money appreciably increases your autonomy, while having a giant statue does not, and autonomy is the measure of a right's importance.

Let us apply this line of thinking to a contemporary problem: Gun Control. Constitutional problems aside, I think it is clear that gun regulation is justifiable within this system. If the freedom to buy an assault rifle is interfering with another person's freedom from violent death, obviously the first must be curtailed in some way, since it is the less fundamental right. Regulating firearms is thus justifiable in the same way as instituting taxes for welfare programs.

This same line of thinking applies to many other areas of life. We regulate car speed because the right to drive as quickly as you like is superficial in comparison with the right to life; and we regulate the finance industry because the right to speculate on the markets is less important than the right to our own money. In short, some rights are more important than others, since they entail a greater degree of autonomy; and to protect these fundamental rights it is justifiable to limit other rights of less importance.

Failing to distinguish between the importance of different rights is a mistake that I have often encountered. Once, for example, I spoke with a libertarian who argued that everybody should be able to own nuclear weapons. He argued this because, being a libertarian, he thought everybody should have as much freedom as possible. But this fails to take into account that, without limits, your autonomy will at some point interfere with mine. Maximal freedom is simply impossible in a society. The idea of allowing citizens to buy nuclear weapons is an obvious example: If one person used a nuclear weapon, in a flash they would deprive millions of people of their lives, and thus all of their rights. Thus for the sake of protecting personal liberty—not to mention human life—it is necessary to prevent individuals from possessing weapons of this kind. In other words, libertarians should be in favor of limiting access to weapons, since weapons deprive people of their liberty.

Similarly absurd was the argument that gay people should not marry because it offended people's religious sensibilities. The right to marry is quite fundamental, being of great social, personal, and financial importance, while the right not to be offended is not a right at all. (Anybody can potentially get offended at anything, since being offended is an emotional reaction; thus it would lead to absurdities to try to ban everything that offends.) While I am at it, polyamorous marriages should also be legal, I think, so long as all the parties consent. In general, I do not see why the love lives of consenting adults should be regulated at all.

The only justification for regulating or banning something is that it could potentially deprive somebody of their autonomy. The highly addictive and dangerous nature of drugs like cocaine and heroin gives a compelling case for regulation, since it is possible that the substances compromise people's ability to choose freely. And if you influence me to try cocaine, and I get addicted, you will have compromised my autonomy just the same as if you'd stolen from me. (This is philosophically interesting territory: Should you have a right to choose to do something that might compromise your ability to choose? It's a tricky problem, but I think there are good grounds for banning certain substances, both because they cause people to act in ways they regret and because, through their repercussions for people's health, they strain the public health system.)

Likewise, I think it is the right choice to regulate, but not to ban, cigarettes and alcohol, since the addictive nature of the first and the intoxicating effects of the second can compromise a person’s autonomy. In the case of marijuana, on the other hand, I think that it is absolutely unjust that it has been made illegal and that people have been jailed for its possession. It is not a powerful drug, and does not limit people’s autonomy to the degree that an absolute ban is justified. More generally, I think many of the laws surrounding drug possession in the United States are good examples of unjust laws—some of them banning substances with insufficient justification, others imposing unduly harsh penalties for crimes of a non-violent nature. (As I wrote elsewhere, punishments are only justifiable insofar as they act as effective deterrents.) But let me return to the main subject.

This conception of a hierarchy of rights is obviously quite abstract, and it will not be put into practice without deliberate care. In the above case of the ten people, I doubt that the one injured party would be able to prevail upon the nine healthy ones to give up a fraction of their food. The poor fellow might starve.

This is a constant danger in any community: the tyranny of the powerful. The powerful might be a majority, a race, a sex, or a class. This is one reason why I think government is a necessary institution in any large community. I do not see, in other words, how justice could be enforced in an anarchic system; and for this reason I am generally hostile to anarchism. Without a government, what would prevent the strong from preying on the weak? An anarchist will easily retort that the government is far from an ethically perfect entity, and indeed the state has often become the very thing we need protection against. This is true, and careful measures must be taken to prevent it.

The strategy used in the United States is a model example: divide up the government’s powers between different branches, with checks and balances between them. A division of powers between different levels of government and regions of the country—in other words, Federalism—is also an excellent practical measure against state tyranny. All the powers of each branch of government must be made explicit in a constitution, thus making any breaches easy to detect. Periodic elections also help to hold the government accountable to the people, as well as to prevent any one individual from accumulating too much power. Sad to say, no government and no constitution will ever be immune to totalitarian impulses, which is why a free press and an active, vigilant citizenry are necessary for a healthy state. But this is an essay on justice, not a plan of government.

The most just societies are those that keep the hierarchy of rights most clearly in view. When a just government balances the right of one person to buy thirteen private jets against the rights of a beggar to have food and shelter, it always sides with the latter. In general, the more resources, power, and privilege you have, the more justifiable it becomes to curtail your rights to your own property with the aim of redistribution. This is the justification behind welfare, food stamps, and Medicare; this is the reason why we have a graduated income tax. If you have one billion dollars, it does not appreciably affect your autonomy to be deprived of a large percentage of your income. On the other hand, government welfare programs allow worse-off people to stay alive and to find work, which are fundamental to their autonomy.

The above sketch is my preferred solution to the problem of creating a standard of justice. A system of rights, ordered into a hierarchy, allows each citizen a definite sphere of autonomy. This is important, because I think every person should be an authority over themself. Nobody knows your needs and desires better than you do; thus you are the person who best knows how to secure your own livelihood and attain your own happiness. Allowing people to order their own lives is not only good for each person individually, but is also good for the society as a whole. When people can think for themselves and reap the benefits of their own innovations, it provides both the means and motivations for a thriving society.

On Morality

What does it mean to do the right thing? What does it mean to be good or evil?

These questions have perplexed people since people began to be perplexed about things. They are the central questions of one of the longest lines of intellectual inquiry in history: ethics. Great thinkers have tackled it; whole religions have been based around it. But confusion still remains.

Well, perhaps I should be humble before attempting to solve such a momentous question, seeing who has come before me. And indeed, I don't claim any originality or finality in these answers. I'm sure they have been thought of before, and articulated more clearly and convincingly by others (though I don't know by whom). Nevertheless, if only for my own sake, I think it's worthwhile to set down how I tend to think about morality—what it is, what it's for, and how it works.

I am much less concerned in this essay with asserting how I think morality should work than with describing how it does work—although I think understanding the second is essential to understanding the first. That is to say, I am not interested in fantasy worlds of selfless people performing altruistic acts, but in real people behaving decently in their day-to-day life. But to begin, I want to examine some of the assumptions that have characterized earlier concepts of ethics, particularly with regard to freedom.

Most thinkers begin with a free individual contemplating multiple options. Kantians think that the individual should abide by the categorical imperative and act with consistency; Utilitarians think that the individual should attempt to promote happiness with her actions. What these systems disagree about is the appropriate criterion. But they do both assume that morality is concerned with free individuals and the choices they make. They disagree about the nature of Goodness, but agree that Goodness is a property of people’s actions, making the individual in question worthy of blame or praise, reward or punishment.

The Kantian and Utilitarian perspectives both have a lot to recommend them. But they do tend to produce an interesting tension: the first focuses exclusively on intentions while the second focuses exclusively on consequences. Yet surely both intentions and consequences matter. Most people, I suspect, wouldn’t call somebody moral if they were always intending to do the right thing and yet always failing. Neither would we call somebody moral if they always did the right thing accidentally. Individually, neither of these systems captures our intuitive feeling that both intentions and consequences are important; and yet I don’t see how they can be combined, because the systems have incompatible intellectual justifications.

But there’s another feature of both Kantian and Utilitarian ethics that I do not like, and it is this: Free will. The systems presuppose individuals with free will, who are culpable for their actions because they are responsible for them. Thus it is morally justifiable to punish criminals because they have willingly chosen something wrong. They “deserve” the punishment, since they are free and therefore responsible for their actions.

I’d like to focus on this issue of deserving punishment, because for me it is the key to understanding morality. By this I mean the notion that doing ill to a criminal helps to restore moral order to the universe, so to speak. But before I discuss punishment I must take a detour into free will, since free will, as traditionally conceived, provides the intellectual foundation for this worldview.

What is free will? In previous ages, humans were conceived of as a composite of body and soul. The soul sent directions to the body through the “will.” The body was material and earthly, while the soul was spiritual and holy. Impulses from the body—for example, anger, lust, gluttony—were bad, in part because they destroyed your freedom. To give into lust, for example, was to yield to your animal nature; and since animals aren’t free, neither is the lustful individual. By contrast, impulses from the soul (or mind) were free because they were unconstrained by the animal instincts that compromise your ability to choose.

Thus free will, as it was originally conceived, was the ability to make choices unconstrained by one’s animal nature and by the material world. The soul was something apart and distinct from one’s body; the mind was its own place, and could make decisions independently of one’s impulses or one’s surroundings. It was even debated whether God Himself could predict the behavior of free individuals. Some people held that even God couldn’t, while others maintained that God did know what people would or wouldn’t do, but God’s knowledge wasn’t the cause of their doing it. (And of course, some people believed in predestination.)

It is important to note that, in this view, free will is an uncaused cause. That is, when somebody makes a decision, this decision is not caused by anything in the material world as we know it. The choice comes straight from the soul, bursting into our world of matter and electricity. The decision would therefore be impossible to predict by any scientific means. No amount of brain imaging or neurological study could explain why a person made a certain decision. Nor could the decision be explained by cultural or social factors, since individuals, not groups, were responsible for them. All decisions were therefore caused by individuals, and that’s the essence of freedom.

It strikes me that this is still how we tend to think about free will, more or less. And yet, this view is based on an outdated understanding of human behavior. We now know that human behavior can be explained by a combination of biological and cultural influences. Our major academic debate—nature vs. nurture—presupposes that people don’t have free will. Behavior is the result of the way your genes are influenced by your environment. There is no evidence for the existence of the soul, and there is no evidence that the mind cannot be explained through understanding the brain.

Furthermore, even without the advancements of the biological and social sciences, the old way of viewing things was not philosophically viable, since it left unexplained how the soul affects the body and vice versa. If the soul and the body were metaphysically distinct, how could the immaterial soul cause the material body to move? And how could a pinch in your leg cause a pain in your mind? What’s more, if there really was an immaterial soul that was causing your body to move, and if these bodily movements truly didn’t have any physical cause, then it’s obvious that your mind would be breaking the laws of physics. How else could the mind produce changes in matter that didn’t have any physical cause?

I think this old way of viewing the body and the soul must be abandoned. Humans do not have free will as originally conceived. Humans do not perform actions that cannot be scientifically predicted or explained. Human behavior, just like cat behavior, is not above scientific explanation. The human mind cannot generate uncaused causes, and does not break the laws of physics. We are intelligent apes, not entrapped gods.

Now you must ask me: But if human behavior can be explained in the same way that squirrel behavior can, how do we have ethics at all? We don't think squirrels are capable of ethical or unethical behavior because they don't have minds. We can't hold a squirrel to any ethical standard, and we therefore can't justifiably praise or censure a squirrel's actions. If humans aren't categorically different from squirrels, then don't we have to give up on ethics altogether?

This conclusion is not justified. Even though I think it is wrong to say that certain people "deserve" punishment (in the Biblical sense), I do think that certain types of consequences can be justified as deterrents. The difference between humans and squirrels is not that humans are free, but that humans are capable of thinking about the long-term consequences of an action before committing it. Individuals should be held accountable, not because they have free will, but because humans have a great deal of behavioral flexibility, thus allowing their behavior to be influenced by the threat of prison.

This is why it is justifiable to lock away murderers. If it is widely known among the populace that murderers get caught and thrown into prison, this reduces the number of murders. Imprisoning squirrels for stealing peaches, on the other hand, wouldn’t do anything at all, since the squirrel community wouldn’t understand what was going on. With humans, the threat of punishment acts as a deterrent. Prison becomes part of the social environment, and therefore will influence decision-making. But in order for this threat to act as an effective deterrent, it cannot be simply a threat; real murderers must actually face consequences or the threat won’t be taken seriously and thus won’t influence behavior.

To understand how our conception of free will affects the way we organize our society, consider the case of drug addiction. In the past, addicts were seen as morally depraved. This was a direct consequence of the way people thought about free will. If people's decisions were made independently of their environment or biology, then there were no excuses or mitigating circumstances for drug addicts. Addicts were simply weak, depraved people who mysteriously kept choosing self-destructive behavior. What resulted from this was the disastrous war on drugs, a complete fiasco. Now we know that it is simply absurd to throw people into jail for being addicted, because addicts are not capable of acting otherwise. This is the very definition of addiction, that one's decision-making abilities have been impaired.

As we've grown more enlightened about drug addiction, we've realized that throwing people in jail doesn't solve anything. Punishment does not act as an effective deterrent when normal decision-making is compromised. By transitioning to a system where addiction is given treatment and support, we have effectively transitioned from the old view of free will to the new view that human behavior is the result of biology, environment, and culture. We don't hold addicts "responsible" because we know it would be like holding a squirrel responsible for burying nuts. This is a step forward, and it has been taken by abandoning the old views of free will.

I think we should apply this new view of human behavior to other areas of criminal activity. We need to get rid of the old notions of free will and punishment. We must abandon the idea of punishing people because they "deserve" it. Murderers should be punished, not because they deserve to suffer, but for two reasons: first, because they have shown themselves to be dangerous and should be isolated; and second, because their punishment helps to act as a deterrent to future murderers. Punishment is just only insofar as these two criteria are met. Once a murderer is made to suffer more than is necessary to deter future crimes, and is isolated more than is necessary to protect others, then I think it is unjustifiable and wrong to punish him further.

In short, we have to give up on the idea that inflicting pain and discomfort on a murderer helps to restore moral balance to the universe. Vengeance in all its forms should be removed from our justice system. It is not our job, or anyone else's, to seek retribution for wrongs committed. Punishments are only justifiable because they help to protect the community. The aim of punishing murderers is neither to hurt nor to help them, but to prevent other people from becoming murderers. And this, I think, is why the barbarous methods of torture and execution are wrong: I very much doubt that their brutality adds anything to their efficacy as deterrents. However, I'm sure there is interesting research somewhere on this.

Seen in this way, morality can be understood in the same way we understand language—as a social adaptation that benefits the community as a whole as well as individual members of the community. Morality is a code of conduct imposed by the community on its members, and deviations from this code of conduct are justifiably punished for the safety of the other members of the community. When this code is broken, a person forfeits the protection of the code, and is dealt with in such a way that future deviations from the moral code are discouraged.

Just as Wittgenstein said that a private language is impossible, so I’d argue that a private morality is impossible. A single, isolated individual can be neither moral nor immoral. People are born with a multitude of desires; and every desire is morally neutral. A moral code comes into play when two individuals begin to cooperate. This is because the individuals will almost inevitably have some desires that conflict. A system of behavior is therefore necessary if the two are to live together harmoniously. This system of behavior is their moral code. In just the same way that language results when two people both use the same sounds to communicate the same messages, morality results when two people’s desires and actions are in harmony. Immorality arises when the harmonious arrangement breaks down, and one member of the community satisfies their desire at the expense of the others. Deviations of this kind must have consequences if the system is to maintain itself, and this is the justification for punishment.

One thing to note about this account of moral systems is that they arise for the well-being of their participants. When people are working together, when their habits and opinions are more or less in harmony, when they can walk around in their neighborhood without fearing every person they meet, both the individual and the group benefit. This point is worth stressing, since we now know that the human brain is the product of evolution, and therefore we must surmise that universal features of human behavior, such as morality, are adaptive. The fundamental basis for morality is self-interest. What distinguishes moral from immoral behavior is not that the first is unselfish while the other is selfish, but that the first is more intelligently selfish than the second.

It isn't hard to see how morality is adaptive. One need only consider the basic tenets of game theory. In the short term, to cooperate with others may not be as advantageous as simply exploiting others. Robbery is a quicker way to make money than farming. And indeed, the potentially huge advantages of purely selfish behavior explain why unethical behavior occurs: Sometimes it benefits individuals more to exploit rather than to help one another. Either that, or certain individuals—whether from ignorance or desperation—are willing to risk long-term security for short-term gains. Nevertheless, in general moral behaviors tend to be more advantageous, if only because selfish behavior is more risky. All unethical behavior, even if carried on in secret, carries a risk of making enemies; and in the long run, enemies are less useful than friends. The funny thing about altruism is that it's often more gainful than selfishness.
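
For readers who like to see the arithmetic, here is a minimal sketch of the textbook illustration behind this claim: the iterated prisoner's dilemma. The payoff values and the two strategies (tit-for-tat and always-defect) are the standard ones from the game-theory literature; the function names, the round count, and the Python rendering are my own illustrative choices, not anything drawn from the authors cited at the end of this essay.

```python
# Illustrative sketch only: an iterated prisoner's dilemma with the standard
# payoffs (mutual cooperation 3, mutual defection 1, exploiting a cooperator 5,
# being exploited 0). All names here are hypothetical.

def payoff(a, b):
    """Return (score_a, score_b) for one round; 'C' = cooperate, 'D' = defect."""
    table = {
        ('C', 'C'): (3, 3),
        ('C', 'D'): (0, 5),
        ('D', 'C'): (5, 0),
        ('D', 'D'): (1, 1),
    }
    return table[(a, b)]

def play(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other and return their total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        gain_a, gain_b = payoff(move_a, move_b)
        score_a += gain_a
        score_b += gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_defect(opponent_history):
    """Pure short-term selfishness: defect every round."""
    return 'D'

def tit_for_tat(opponent_history):
    """Intelligent selfishness: cooperate first, then mirror the opponent."""
    return 'C' if not opponent_history else opponent_history[-1]

if __name__ == '__main__':
    print(play(tit_for_tat, tit_for_tat))       # (600, 600): cooperation compounds
    print(play(always_defect, always_defect))   # (200, 200): mutual exploitation stagnates
    print(play(tit_for_tat, always_defect))     # (199, 204): the defector wins once, barely
```

Over two hundred rounds, two reciprocators earn triple what two pure defectors earn, and even against a relentless defector the reciprocator loses by only a handful of points: exploitation pays once, while cooperation keeps paying.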

Thus this account of morality can be harmonized with an evolutionary account of human behavior. But what I find most satisfying about this view of morality is that it allows us to see why we care both about intentions and consequences. Intentions are important in deciding how to punish misconduct because they help determine how an individual is likely to behave in the future. A person who stole something intentionally has demonstrated a willingness to break the code, while a person who took something by accident has only demonstrated absent-mindedness. The first person is therefore more of a risk to the community. Nevertheless, it is seldom possible to prove what somebody intended beyond a shadow of a doubt, which is why it is also necessary to consider the consequences of an action. What is more, carelessness as regards the moral code must be forcibly discouraged, otherwise the code will not function properly. This is why, in certain cases, breaches of conduct must be punished even if they were demonstrably unintentional—to discourage other people in the future from being careless.

Let me pause here to sketch out some more philosophical objections to the Utilitarian and Kantian systems, besides the fact that they don’t adequately explain how we tend to think about morality. Utilitarianism does capture something important when it proclaims that actions should be judged insofar as they further the “greatest possible happiness.” Yet taken by itself this doctrine has some problems. The first is that you never know how something is going to turn out, and even the most concerted efforts to help people sometimes backfire. Should these efforts, made in good faith, be condemned as evil if they don’t succeed? What’s more, Utilitarian ethics can lead to disturbing moral questions. For example, is it morally right to kill somebody if you can use his organs to save five other people? Besides this, if the moral injunction is to work constantly towards the “greatest possible happiness,” then we might even have to condemn simple things like a game of tennis, since two people playing tennis certainly could be doing something more humanitarian with their time and energy.

The Kantian system has the opposite problem in that it stresses good intentions and consistency to an absurd degree. If the essence of immorality is to make an exception of oneself—which covers lying, stealing, and murder—then telling a fib is morally equivalent to murdering somebody in cold blood, since both of those actions equally make exceptions of the perpetrator. This is what results if you overemphasize consistency and utterly disregard consequences. What’s more, intentions are, as I said above, basically impossible to prove—and not only to other people, but also to yourself. Can you prove, beyond a shadow of a doubt, that your intentions were pure yesterday when you accidentally said something rude? How do you know your memory and your introspection can be trusted? However, let me leave off with these objections because I think entirely too much time in philosophy is given over to tweezing apart your enemies’ ideas and not enough to building your own.

Thus, to repeat myself, both consequences and intentions, both happiness and consistency must be a part of any moral theory if it is to capture how we do and must think about ethics. Morality is an adaptation. The capacity for morality has evolved because moral systems benefit both groups and individuals. Morality is rooted in self-interest, but it is an intelligent form of self-interest that recognizes that other people are more useful as allies than as enemies. Morality is neither consistency nor pleasure. Morality is consistency for the sake of pleasure. This is why moral strictures that demand that people devote their every waking hour to helping others or never make exceptions of themselves are self-defeating: when a moral system is onerous it isn't performing its proper function.

But now I must deal with that fateful question: Is morality absolute or relative? At first glance it would seem that my account would put me squarely in the relativist camp, seeing that I point to a community code of conduct. Nevertheless, when it comes to violence I am decidedly a moral absolutist. This is because I think that physical violence can only ever be justified by citing defense. First, to use violence to defend yourself from violent attack is neither moral nor immoral, because at this point the moral code has already broken down. The metaphorical contract has been broken, and you are now in a situation where you must either fight, run, or be killed. The operative rule is now survival, not morality. For the same reason a whole community may justifiably protect itself from invasion by an enemy force (although capitulating is equally defensible). And lastly, violence (in the form of imprisonment) is justified in the case of criminals, for the reasons I discussed above.

What if there are two communities, community A and community B, living next to one another? Both of these communities have their own moral codes which the people abide by. What if a person from community A encounters a person from community B? Is it justifiable for either of them to use violence against the other? After all, each of them is outside the purview of the other's moral code, since moral codes develop within communities. Well, in practice, situations like this commonly result in violence. Whenever Europeans encountered a new community—whether in the Americas or in Africa—the result was typically disastrous for that community. This isn't simply due to the wickedness of Europeans; it has been a constant throughout history: When different human communities interact, violence is very often the result. And this, by the way, is one of the benefits of globalization. The more people come to think of humanity as one community, the less violence we will experience.

Nevertheless, I think that violence between people from different communities is ultimately immoral, and this is why. To feel it is permissible to kill somebody just because they are not in your group is to consider that person subhuman—as fundamentally different. This is what we now call "Othering," and it is what underpins racism, sexism, religious bigotry, homophobia, and xenophobia. But of course we now know that it is untrue that other communities, other religions, other races, women, men, homosexuals, or anyone else are "fundamentally" different or in any way subhuman. It is simply incorrect. And I think the recognition that we all belong to one species—with only fairly superficial differences in opinions, customs, rituals, and so on—is the key to moral progress. Moral systems can be said to be comparatively advanced or backward to the extent that they recognize that all humans belong to the same species. In other words, moral systems can be evaluated by looking at how many types of people they include.

This is the reason why it is my firm belief that the world as it exists today—full as it still is with all sorts of violence and prejudice—is morally superior to any that came before. Most of us have realized that racism was wrong because it was based on a lie; and the same goes for sexism, homophobia, religious bigotry, and xenophobia. These forms of bias were based on misconceptions; they were not only morally wrong, but factually wrong.

Thus we ought to be tolerant of immorality in the past, for the same reason that we excuse people in the past for being wrong about physics or chemistry. Morality cannot be isolated from knowledge. For a long time, the nature of racial and sexual differences was unknown. Europeans had no experience and thus no understanding of non-Western cultures. All sorts of superstitions and religious injunctions were believed in, to an extent most of us can't even appreciate now. Before widespread education and the scientific revolution, people based their opinions on tradition rather than evidence. And in just the same way that it is impossible to justly put someone in prison without evidence of their guilt, it is impossible to be morally developed if your beliefs are based on misinformation. Africans and women used to be believed to be mentally inferior; homosexuals used to be believed to be possessed by evil spirits. Now we know that there is no evidence for these views, and in fact evidence to the contrary, so we can cast them aside; but earlier generations were not so lucky.

To the extent, therefore, that backward moral systems are based on a lack of knowledge, they must be tolerated. This is why we ought to be tolerant of other cultures and of the past. But to the extent that facts are willfully disregarded in a moral system, that system can be said to be corrupt. Thus the real missionaries are not the ones who spread religion, but those who spread knowledge, for an increased understanding of the world allows us to develop our morals.

These are my ideas in their essentials. But for the sake of honesty I have to add that the ideas I put forward above have been influenced by my studies in cultural anthropology, as well as by my reading of Locke, Hobbes, Hume, Spinoza, Santayana, Ryle, and Wittgenstein, and of course of Mill and Kant. I was also influenced by Richard Dawkins’s discussion of game theory in his book The Selfish Gene. Like most third-rate intellectual work, this essay is, for the most part, a muddled hodgepodge of other people’s ideas.

On the Meaning of Life

What is the meaning of it all? What is the purpose of life, the universe, and everything?

Most thinking people, I suspect, ask themselves this at least once in their life. Some get rather obsessed with it, becoming existentialists or religious enthusiasts. But most of us deal with this question in a more foolproof way: by ignoring it. Indeed, when you’re enjoying yourself, this question—“What is the meaning of life?”—seems rather silly. It is usually when we feel depressed, anxious, frightened, nervous, or vulnerable that it arises in our minds, often with tremendous force.

I do not wish to delve too deeply into dubious psychoanalyzing of the motivation for asking this question. But it is worth noting why we so persistently ask it—or at least, the reasons why I have asked it. Most obviously, it is a response to the awareness of our own mortality. We are all going to die someday; our whole existence will come to an end; and this is terrifying. We can attempt to comfort ourselves with the thought that we will be remembered or that our children (if we have any) will perpetuate our line. Yet this is an empty form of immortality, not only because we aren’t around to appreciate it, but also because, however long our memory or our descendants last, they too will come to an end. All of humanity will end one day; that’s certain.

The famous “Death of God” (the decline of religion) in Western history caused a similar crisis. If there was no God directing the universe and ordaining what is right and what is wrong; if there was no afterlife but only a black emptiness waiting for us—what was the point? Nihilism seemed to many to be inescapable. Existentialism grew up in this environment; it inherited many of the assumptions of Christianity while (for the most part) rejecting God Himself, which led to more than a few tortured, tangled systems of thought that attempted to reconcile atheism with some of our more traditional assumptions about right and wrong and what it means to live a meaningful life.

I had fallen into this same trap by asking myself the question: “If everything will end someday and humans are only a small part of the universe, what is the point?” This question is very revealing, for it exposes some assumptions that, upon further reflection, don’t hold water. First, why is something more worthwhile if it lasts longer? Why do we need to imagine an eternal God and an eternal afterlife to feel secure in our meaningfulness? Do people who live to eighty have more meaningful lives than those who only make it to thirty? Put this way, it seems a rather dubious assumption. For my part, I can’t figure out what permanence has to do with meaning. And by the way, I also don’t think that the opposite idea—that life is meaningful because it is temporary—is any more useful, even though it is a poetic sentiment.

I think all this talk about permanence and impermanence does not get to the essence of the word “meaning.” What is more, it is my opinion that, once we properly analyze this word “meaning,” we will see that this fateful question—“What is the meaning of life?”—will vanish before our eyes. And this is not because life has no meaning, but because the question is based on a false premise.

To begin, let us figure out what the word “meaningful” actually means. To do this, take something that we can all agree has meaning: language. Language is in fact the paragon of meaningfulness; it is a symbolic system by which we communicate. If words and sentences had no meaning, you would have no idea what I’m saying right now. But where does the meaning of a sentence lie? This is the question.

To answer this question, let me ask another: If every human perished in a cataclysmic event, would any of the writing that we left behind have meaning? Would the libraries and book stores, the shop signs and magazines, the instruction manuals and wine labels—would they have meaning? I think they would not.

We don’t even have to engage in a hypothetical here. Consider the Indus script, a form of writing developed in the ancient Indus Valley that has yet to be deciphered. Researchers are still trying to figure out how to read the inscribed tablets. How should they go about doing so? They can weigh each of the tablets to figure out their mass; they can measure the average height and thickness of the lines; they can perform a chemical analysis. Would that help? Of course not. And this is for the obvious reason that the meaning of a tablet is not a physical property of the object. Rather, the meaning of the script lies wholly in our ability to respond appropriately to it. The meaning of the words exists in our experience of the tablets and our behavior related to the tablets; it is not a property of the tablets themselves.

I must pause here to address a philosophical pickle. It is an interesting debate whether the meaning of language exists in the minds of language-users (i.e., meaning is psychological) or in the behavior of language-users (i.e., meaning is social). This dichotomy might also be expressed by asking whether meaning is private or public. For my part, I think that there is a continuum of meaningfulness from private thoughts to public behavior, and in any case the question is immaterial to the argument of this essay. What matters is that meaning is a property of human experience. Meaning is not a property of objects, but a property of how humans experience, think about, and behave toward objects. That’s the important point.

The reason the Indus Script is meaningless to us is therefore because it doesn’t elicit from us any consistent pattern of thoughts or actions. (Okay, well that’s not entirely true, since we do consistently think about and treat the tablets as if they were ancient artifacts bearing a mysterious script, but you get my point.)

By contrast, many things besides language do elicit from us a consistent pattern of thoughts and actions. Most people, for example, tend to respond to and think about chairs in a characteristic way. This is why we say that we know what chairs are for. The social purpose of chairs is what defines them—not their height, weight, design, material, or any other property of the chairs themselves—and this social purpose exists in us, in our behaviors and thoughts. If everyone on earth were brainwashed into thinking of these same objects as weapons rather than as things to sit on, then chairs would have a different meaning for us.

Ultimately, I think that meaning is just an interpretation of our senses. A camera pointed at a chair will record the same light waves reflected off the chair that I do; but only I will interpret this data to mean chair. You might even say that meaning is what a camera or any other recording device fails to record, since such devices can only record physical properties. Thus meaning, in the sense that I’m using the word, depends on an interpreting mind. Meaning exists for us.

I hope I’m not belaboring this point, but it seems to be worth a little belaboring since it is precisely this point that people forget when they ask “What is the meaning of life?” Assuming that most people mean “human life” when they ask this question, then we are led to the conclusion that this question is unanswerable. Human life itself—as a biological fact—has no meaning, since no fact in itself has meaning. In itself, “human life” has no point in the same way that the moon or sawdust has no point. But our experience of human life certainly does. In fact, by definition the human experience comprises every conceivable meaning. All experience is one endless tapestry of significance.

I see this keyboard below my fingers and understand what it is for; I see a chair to my right and I understand its purpose. I see a candle flickering in front of me and I find it pretty and I like its smell. Every single one of these little experiences is brimming with meaning. In fact, I would go further. I think it is simply impossible for an intelligent creature to have a single experience that doesn’t have meaning. Every time you look at something and you understand what it is, the experience is shot through with meaning. Every time you find something interesting, pretty, repulsive, curious, frightening, attractive, these judgments are the very stuff of meaning. Every time you hear a sentence or a musical phrase, every time you enjoy a sunset or find something tasty—the whole fabric of your life, every second you experience, is inevitably meaningful.

This brings me to an important moral point. Humans are the locus of meaning. Our conscious experience is where meaning resides. Consciousness is not simply a reflection of the world, but an interpretation of the world; and interpretations are not the sorts of things that can be right or wrong. Interpretations can only be popular or unpopular.

For example, if you “misunderstand” a sentence, this only means that most people would tend to disagree with you about it. In the case of language, which is a necessarily strict system, we tend to say that you are “wrong” if your interpretation is unpopular, because unless people respond to words and sentences very consistently, language can’t perform its proper function. “Proper” meaning is therefore enforced by language users; but the meaning is not inherent in the words and sentences themselves. In the case of a very abstract painting, by contrast, we tend not to care so much whether people interpret the painting in the same way, since the painting is meant to elicit aesthetic sensations rather than transmit specific information. (In practice, this is all we mean by the terms “objective” and “subjective”—namely, that the former is used for things most people agree on while the latter is used for things that many people disagree on. Phrased another way, objective meanings are those to which people respond consistently, while subjective meanings are those to which people respond inconsistently.)

This is why meaning is inescapably personal, since experience is personal. Nobody can interpret your experience but yourself. It’s simply impossible. Thus conscious individuals cannot be given a purpose from the “outside” in the same way that, for example, a chair can. The purpose of chairs is simply how we behave toward and think about chairs; it is a meaning imposed by us onto a certain class of objects. But this process does not work if we try to impose a meaning onto a conscious being, since that being experiences their own meaning. If, for example, everybody in the world treated a man as if his purpose were to be a comedian, and he thought his purpose was to be a painter, he wouldn’t be wrong. His interpretation of his own life might be unpopular, but it can never be incorrect.

Human life, either individually or in general, cannot be given a value. You cannot measure the worth of a life in money, friends, fame, goodness, or anything else. Valuations are only valid in a community of individuals who treat them as such. Money, for example, is only an effective currency because that’s how we behave towards it. Money has value, not in itself, but for us. But a person has value not only in the eyes of their community, but also in their own eyes, and this value cannot be overridden or delegitimized. And since your experience is, by definition, the only thing you experience, if you experience yourself as valuable, nobody else’s opinion can contradict that. A person despised by all the world is not worthless if she still respects herself.

In principle (though not in practice) meaning is not democratic. If everybody in the world but one thought that the point of life was to be good, and a single person thought that the point of life was to be happy, there would be no way to prove that this person was wrong. It is true that, in practice, people whose interpretations of the world differ from those of their community are usually brought into line by an exercise of power. An Inquisition might, for example, prosecute and torture everybody who disagrees with it. Either this, or a particular interpretation imposes itself because, if an individual chooses to think differently, they are unable to function in the community. Thus if I behaved towards money as if it were tissue paper, my resultant poverty would make me question this interpretation pretty quickly. But it’s important to remember that a king’s opinion of coleslaw isn’t worth any more than a cook’s, and even though everyone thinks dollar bills are valuable, it doesn’t change the fact that they’re made of cotton. Power and practicality do not equal truth.

Thus we find that human life doesn’t have meaning, but human experience does; and this meaning changes from individual to individual, from moment to moment. This meaning has nothing to do with whether life is permanent or impermanent. It exists now. It has nothing to do with whether humans are the center of the universe or only a small part of it. The meaning exists for us. We don’t need to be the center of a divine plan to have meaningful lives. Nor is nihilism justified, since the fact that we are small and temporary creatures does not undermine our experience. Consider: every chair will eventually be destroyed. Yet we don’t agonize about the point of making chairs, since it isn’t important whether the chairs are part of a divine plan or will remain forever; the chairs are part of our plan and are useful now. Replace “chairs” with “our lives” and you’ve hit the truth.

You might say now that I’m missing the point entirely. I am interpreting the word “meaning” too generally, in the sense that I am including any kind of conscious interpretation or significance, explicit or implicit, public or private. When most people ask about the meaning of life, they mean something “higher,” something more profound, nobler, and deeper. Fair enough.

Of course I can’t hope to solve this problem for you. But I will say that, since meaning resides in experience, and since all experience is personal, you cannot hope to discover the meaning of all human life. The best you can hope for is to find meaning—“higher” meaning—in your own experience. In fact, it is simply presumptuous and absurd to say “This is the meaning of human life,” since you can’t very well crawl into another person’s head and interpret their experiences for them, much less crawl into the heads of all of humanity. And in fact you should be happy for this, I think, because it means that your value can never be adequately measured by another person, and that any exterior criterion that someone attempts to apply to you cannot delegitimize your own experience. But also remember that the same applies to your attempts to measure others.

I will also add, just as my personal advice, that when you realize that meaning only exists in the present moment, since meaning only exists in your experience, much of the existential angst will disappear. Find the significance and beauty of what’s in front of your eyes. Life is only a succession of moments, and the more moments you appreciate, the more you’ll get out of life. Don’t worry about how you measure up against any external standard, whether it be wealth, fame, respectability, love, or anything else; the meaning of your own life resides in you. And the meaning of your life is not one thing, but the ever-changing flux of experience that comprises your reality.