Review: Rousseau’s Confessions

Confessions by Jean-Jacques Rousseau

My rating: 4 of 5 stars

There are times when I am so unlike myself that I could be taken for someone else of an entirely opposite character.

This book begins with a falsehood and only escalates from there. Rousseau, prone to hyperbole, boldly asserts that his autobiography is without precedent. Never mind St. Augustine’s famous autobiography, which shares the same name; and ignore the works of St. Teresa, Benvenuto Cellini, and Montaigne. I suppose this sort of boastful exaggeration shouldn’t count for much; after all, Milton began Paradise Lost by saying he was attempting “Things unattempted yet in prose or rhyme.” Nevertheless, the second part of Rousseau’s assertion, that his enterprise would “find no imitator,” is even more indisputably false than the first one. This book has found nothing if not imitators.

Rousseau’s Confessions is really two distinct works, the first covering his childhood to his early adulthood, the second up to age fifty-three. For my part, the first is far better, and far more original. Like any modern self-psychoanalyzer, Rousseau traces his personality to formative events in his childhood—quite unusual at the time, I believe. Even more surprising is how frankly sexual is Rousseau’s story. He begins by describing the erotic pleasure he derived from being spanked by his nanny, relates a few homosexual encounters (undesired on his part), and frequently mentions masturbation. Much of the first book is simply prolonged descriptions of all the women he’s had anything to do with.

The second part is less striking, sometimes dull, but still full of interesting episodes. Rousseau has much to say about his career as a composer, something of which I had no idea before reading this book. He begins his career as a musician as a bungler and a phony, but eventually succeeds in closing the gap between his pretensions and abilities. It isn’t long before Rousseau finds himself stitching together some musical and lyrical fragments from Jean-Philippe Rameau and Voltaire into Les fêtes de Ramire, a one-act opera; and he soon becomes Rameau’s enemy, because (Rousseau is convinced) Rameau is jealous of Rousseau’s musical powers.

Rousseau also relates the famous tale of his children. After taking a seamstress, Thérèse, as his mistress, and having several children by her, he persuades her (and himself) to give them up to the foundling hospital. This is probably the most infamous episode of Rousseau’s life, and has provided plentiful fuel for those who wish to discredit his ideas on education and child-rearing. As Rousseau grows old and becomes a man of letters, he accumulates ever more enemies, including Diderot and Grimm, who (Rousseau asserts) plotted relentlessly against him, partially because Rousseau scorned city life and modern luxuries.

I can’t help comparing this book with another great autobiography I recently read, that of Benvenuto Cellini. The two men are in many ways opposites. Cellini is a man of the world; his eye is turned exclusively outward; he is all action; he is confident in high society; he rarely blushes and never admits a fault. Rousseau is a man of sentiment and feeling, absorbed in his private world, often timid, awkward, and unsure of himself, and who often makes self-deprecating remarks.

And yet, the more I read, the more I saw strong similarities between these two self-chronicles. They are both massive egotists. If I were to write my autobiography, I’d hope that it would include some nice portraits of people in my life; but in these books there is no compelling portrait of anyone except their authors.

Like many narcissists, both men have easily wounded vanity. They are obsessed with slights, and consider anyone who doesn’t show the proper respect to be not only inconsiderate but downright villainous. They both make enemies quickly, wherever they go. And yet the fact that so many people they meet turn against them does not prompt them to pause and reflect; rather, they attribute all antipathy to envy, jealousy, or pure malevolence. Both have persecution complexes; both are paranoid; and both entertain extremely high opinions of their own virtues and abilities. In Rousseau’s own words, he is among “the best of men.”

It occurs to me that the urge to write an autobiography, in an age when autobiography was anything but common, requires a certain amount of narcissism. What surprises me is that these two men, Cellini and Rousseau, are also quite oblivious of themselves and utterly unable to question their own opinions. This is in strong contrast to Montaigne, somebody whom Rousseau explicitly scorns:

I have always laughed at the false ingenuousness of Montaigne, who, feigning to confess his faults, takes great care not to give himself any, except such as are amiable; whilst I, who have ever thought, and still think myself, considering everything, the best of men, felt there is no human being, however pure he may be, who does not internally conceal some odious vice.

There may be a grain of truth in accusing Montaigne of attributing only amiable faults to himself (though reports by his contemporaries coincide remarkably well with Montaigne’s self-report). Even so, Montaigne had a quality that Rousseau eminently lacked: the ability to jump out of his own perspective. When playing with his cat, Montaigne paused to reflect “who knows whether she is amusing herself with me more than I with her?” And in that simple question—pushing himself out of his own skull, seeing himself from the eyes of his cat—he transcends all of the searching self-analysis of Rousseau. Rousseau’s total inability to, even for one moment, question his righteousness and his enemies’ wickedness is what makes him, by the end of the book, nearly intolerable—at least for me.

So much for Rousseau’s personality. As a portrait of a man, this book is interesting enough; but as the confessions of one of the most influential thinkers in the 18th century, it is far more so. Rousseau, whatever his faults, was undeniably remarkable. To paraphrase Will Durant, Rousseau, with almost no formal education, abandoned early by his father, wandering incessantly from place to place, setting himself as an enemy of the dominant currents of thought and art of the time, the avowed antagonist both of Rameau, the foremost composer, and Voltaire and Diderot, the foremost writers—this Rousseau nevertheless managed to become the decisive influence on the next century.

Cases like Rousseau’s make me stop and reflect on the nature of intellectual work. He was neither a strong reasoner nor an adept researcher—any competent professor could poke gaping holes in his arguments and cite reams of factual inaccuracies—and yet it is Rousseau, not they, who is still studied on college campuses all over the world, and who will be for the foreseeable future. Indisputably he was an excellent stylist, though this hardly accounts for his canonical status.

What sets Rousseau apart, intellectually at least, is his enormous originality. Rousseau himself realizes this:

I know my heart, and have studied mankind; I am not made like anyone I have been acquainted with, perhaps like no one in existence; if not better, I at least claim originality, and whether Nature did wisely in breaking the mold with which she formed me, can only be determined after having read this work.

Rousseau wrote in a way no one had before. His ideas were fresh, his attitude unique. Although he had influences, there is nothing derivative about him. The more I read and the longer I live, the more am I drawn to the conclusion that the ability to form new ideas—genuinely new, not just re-interpretations of old ones—is one of the rarest human faculties. Rousseau had this faculty in abundance. It is impossible to read him within the context of his time and not be utterly astounded at his creativity.

It is just this sort of creativity, the thing we most celebrate and praise, that seems impossible to teach—impossible by definition, since you cannot teach somebody to think totally outside the bounds of your own paradigm. You cannot, in other words, teach someone to transcend everything you teach them. You can teach somebody to solve problems creatively; but how can you teach somebody to examine problems previously unimagined? This is just one of the paradoxes of education, I suppose.

In any case, Rousseau is just another example of those canonical thinkers who could never get tenure nowadays. It’s a funny world.

Review: Arrival (2016)

Rating: A-

Language is the foundation of civilization. It is the glue that holds a people together. It is the first weapon drawn in a conflict.

(Cover image taken from the official trailer.)

I have never written a movie review before, so have some patience while I get my bearings. Also, I clearly can’t say much without spoilers, so be warned.

The premise of Arrival intrigued me as soon as I heard it: a science-fiction alien story centered, not on warfare, but on language. Instead of a soldier, the protagonist is a linguist; and instead of defeating aliens she needs to understand them.

After a touching yet cryptic opening sequence—whose relation to the story isn’t revealed until much later—the movie begins with another day in the life of Louise Banks (played by Amy Adams), a professor of linguistics. She walks into a lecture hall, one of those stale and lifeless theaters of knowledge, in order to give a class on the Romance languages—specifically, on why Portuguese sounds so different from the other languages. (She never explains this, which is frustrating, since I genuinely want to know!)

Something is clearly wrong, however, as few students are in class, and their phones keep beeping. The aliens have just arrived, and everybody all the world over is in a panic. The confusion and alarm that would accompany the appearance of genuine UFOs is portrayed with subtlety and realism. People are rushing home (but why would home be any safer?), the military is scrambling jets (in a show of force?), and the newscasters are droning on incessantly in their faux-knowledgeable voices, filling up airtime with their lack of information.

We see snatches of Banks’s life here, which give us a taste of her personality. She is a loner, somewhat cold, very quiet. We see her lakeside house—angular, empty, tranquil, and almost sterile. It needn’t even be said that she is single and lives alone. Snatches of a phone call with her mom further characterize her—she is calm, detached, and impatient of folly.

Then, as in any hero’s journey, comes the call to adventure, this time in the form of Army Colonel Weber (played by Forest Whitaker, who gives his colonel a strong Boston accent). The Colonel dramatically puts a device on the table, and plays a chilling recording; it is an unintelligible series of clicks, whooshes, and moans, obviously not human. Can she translate it?

The call to adventure is at first refused (she can’t translate from a recording), and then accepted (as it must be for the movie), and soon enough Banks is snatched away to begin her quest. Next we are shown our first vision of the UFO: it is an oblong black egg that hovers ominously over the landscape, as pitifully small fighter jets fly by. The soundtrack, written by Jóhann Jóhannsson, really shines in this sequence. Unearthly wailing sounds, reminiscent of alien speech, swell in and out over a droning bass as the helicopters approach the monolithic object.

We also meet the other protagonist, Ian Donnelly (played by Jeremy Renner), a theoretical physicist who will work with Banks. The two of them soon begin their task.

The alien spacecraft opens up a hatch every 18 hours, giving the humans a two-hour window to go inside and make contact. (The reason for this pattern is never explained.) I particularly liked the portrayal of the huge number of precautions that the military takes when going inside the UFO. Even though no form of radiation, bacteria, or anything else potentially hazardous is detected, they must receive numerous booster shots, wear hazmat suits with heavy air purifiers, and be decontaminated each time they return.

Finally they go inside. Watching Donnelly’s childlike joy at touching the spacecraft is moving; for all he knows, he’s in a highly dangerous situation, and yet he is like a seven-year-old at a zoo. I think his character is at least partially inspired by Carl Sagan, the alien-obsessed physicist. Like Sagan, Donnelly wants to communicate with the aliens through math, supposedly the universal language; and yet he soon must play second fiddle to the linguist.

The inside of the ship is a large empty black chamber, composed of perfectly right angles. On the far end of the chamber is a transparent screen flooded with white light, through which the aliens appear. At first it is difficult to see them, because their side of the chamber is full of black smoke (part of the atmosphere they breathe?), and their form is only revealed gradually. I can’t say I was totally impressed by the design of the aliens. They are called “heptapods,” due to their having seven appendages and seven digits on each appendage; but they basically look like big, black, lumpy squids.

Thus begins the quest to communicate with the heptapods, which is the main drama of the movie. The government needs to ask them why they came to Earth; and this requires quite a bit of linguistic prep work, since not only do our heroes need to make the question intelligible, but enough vocabulary is needed to make the answer meaningful. As far as I know, putting translation at the center of an alien movie is unique. In Independence Day (1996), for example—which I watched obsessively as a kid—the attempt to communicate with the giant UFOs lasts about three seconds. (They fly a helicopter near the alien craft to flash lights as a way of making contact; a laser blast promptly destroys the helicopter.)

Banks quickly realizes that verbal communication is a non-starter, since human vocal cords can’t reproduce heptapod speech. So she opts for written communication, and soon discovers that the heptapods have their own written language. This language is quite different from our own. It does not correspond with what the heptapods “say”; it is not, in other words, a transcription of speech. This means that the meaning is not sequenced in time.

Like a sentence in any other language, an English sentence has a front end and a back end, and must be read in the correct order to make proper sense. When we speak, we obviously must start at some time and end later; and so do our written sentences. Not so the heptapod system, wherein meaning is encoded, as it were, directly, with reference purely to ideas. A heptapod sentence has the same meaning forwards and backwards; and its meaning can be understood at a glance, like a picture.

It’s easy to see how simple nouns and verbs—lions, helicopters, walking, giving—could be represented this way; but it is difficult for me to imagine how complex logical relationships or temporal sequences could be transcribed so that the message is the same forwards and backwards. The movie does not get into the mechanics of the language, however, which is just as well.

While I’m at it, I also wonder if linguistic communication would be possible at all with creatures from another planet. Wittgenstein famously said “If a lion could speak, we could not understand him”—meaning, I think, that our language is so tied up in our human experience of the world that it could never serve as a bridge across different species. Put another way, Wittgenstein thought that our language does not and cannot refer to pure ideas—notions that would be the same as understood by any creature.

Our experience of the world is so filtered through our senses, our biology, our specifically human brains, that it seems to me that an alien—from a planet with a vastly different ecosystem, breathing a different atmosphere, with senses adapted to different conditions and a nervous system built on entirely different principles—might conceptualize the world in such different terms that any real communication would be nearly impossible. All this is a massive digression, of course. But a movie that can prompt such ponderings is certainly worth watching.

Soon enough, Banks is coming to grips with the heptapod written language. The visual design of this language is excellent: it is written in inky smoke, and takes the form of a circular swirl with complex bulges and branches. Meanwhile, Banks is beginning to have strange visions, all featuring an unidentified little girl—the same girl from the opening sequence. It is clear that Banks is her mother; and these can’t be memories, since Banks has never had children. Is Banks cracking from sleep deprivation?

While Banks is working on the translation, the world situation is growing ever more tense. There are twelve of these “shells” (as they’re called), and each country is taking a different approach to communicating with the heptapods. People everywhere are panicking. An image of one of the creatures is leaked and goes viral. China in particular is full of military bluster, and seems constantly on the verge of attacking its shell; and the longer the situation persists, the more people seem to think that the wise thing to do is take military action.

This brings us to one of the movie’s major themes: confronting the unknown. The only thing threatening about the shells is that they are mysterious. Who are the aliens? Where did they come from? What do they want? They don’t attack; they don’t cause any damage; they just hover above the landscape. And yet, the mere presence of unknown visitors causes riots, protests, looting, cult suicides—total panic. It almost seems as if people would prefer that the aliens demonstrated some malicious intent; at least then they’d know what to do. In this situation of total ambiguity, people’s fears fill up the vacuum of knowledge. Never mind that the aliens likely have technology far in advance of humans. We have the urge to attack, not because it’s wise, but to end this terrifying doubt.

What should you do when you confront the unknown? Understand it, or destroy it? This is the movie’s essential question. Banks represents the first solution. The main drama of the movie takes place in the shell’s chamber. There, the confrontation is given stark visual form: Banks stands and stares straight into the blinding light at the other end. The aliens are literally unreachable, separated by a partition. They communicate by imposing form onto nebulous clouds. Language is the tool through which Banks and the heptapods bridge the gap that separates them from one another.

Captain Marks, who works with Banks and Donnelly, represents the other solution. We see him listening to conservative talk radio—an obvious parody of Rush Limbaugh—whose host castigates the Army for not having enough guns, and recommends a “shot across the bow” as a demonstration of human military might. This is probably the movie’s wryest cultural comment: the tendency of the right to use blustering, macho rhetoric even in highly delicate and complex situations. Captain Marks, spooked by this and also by his wife’s fears, decides to go rogue and attack the ships. His attack fails to accomplish anything, however; it only results in his own death (or imprisonment?) and makes Banks’s job that much more difficult.

Another major theme of the movie is our inability to work together, even in the direst of circumstances. Although it is obviously within each country’s best interest to share their data and collaborate—a “non-zero sum game,” to quote the movie—communication ultimately breaks down between nations as suspicion and paranoia take hold.

As Banks repeatedly shows, communication requires trust, which is exactly why she is so skilled at it. Instead of being scared of contamination and frightened of approaching the heptapods, she removes her protective suit and puts her hand on the glass. In other words, she chooses to trust the heptapods. Communication breaks down between the nations of the world precisely because of this lack of trust: they are afraid that the aliens are trying to get them to attack one another.

Full crisis mode ensues when Banks finally asks what the aliens are doing on Earth, and gets the response “Offer Weapon.” Thus begins the dramatic final sequence, during which Banks has to rush to interpret this message before other nations of the world begin bombing their shells. After a final visit to the shell, the heptapods explain to Banks that the “weapon” is their own language, which, because it is the same forwards and backwards, allows you to see the future when you learn it. They are offering it to humanity because they will need humanity’s help in 3,000 years (which they know because they can see the future).

By the way, the idea that learning a non-temporal language could so fundamentally alter your perception of time, allowing you to see into the future, is based on the Sapir-Whorf hypothesis, otherwise known as linguistic relativity. This is a real theory, put forward in the 1950s, which argued that your language fundamentally shapes your perception of the world. The most famous (and most infamously incorrect) example of this is the supposedly huge number of words for “snow” among the Inuit, reportedly allowing them to see fine differences between types of snow. Strong versions of this hypothesis—in which one’s language totally shapes one’s cognitive processes—have been ruled out; but it is true, I believe, that our language influences our thought in manifold subtle ways.

Banks, now aware of her new ability, looks into the future in order to see how she can prevent the impending catastrophe, and stop the Chinese from attacking their shell. Like all time travel, this presents some interesting paradoxes of causality. Can knowledge of the future, already determined by the present, influence the present? If the only reason that Banks could obtain the information she needed was because she had already used it, what causes what? This paradox is sort of glossed over, and that’s fine by me.

The crisis resolved, the heptopods mysteriously vanish—having accomplished their goal of uniting the peoples of the world and teaching humanity their language—and Banks is left to live her life. This leads, predictably, to a romantic entanglement with physicist Ian Donnelly. He is the man with whom Banks has her daughter, an adorable little girl who is fated to die from a “really rare disease” sometime in her adolescence.

This brings us to the movie’s second major theme: confronting the known. Because she can see the future, Banks is forced to live her life with full awareness of how everything will turn out. Her marriage to Donnelly will end in divorce, and her daughter will die young. Indeed, Donnelly wants a divorce precisely because he thinks they shouldn’t have had a daughter if Banks knew she would die.

The odd fact is that total knowledge is, in a way, far more terrifying than total mystery. It is one thing to try something when you’re not sure you’ll succeed, but it requires even more courage to try something even when you know you will fail. And yet, Banks embraces her fate, and lives her life anyway. This is the most literal illustration of Nietzsche’s amor fati, love of fate, that I’ve ever seen: instead of trying to change anything, Banks tries to appreciate each moment for what it is.

As far as acting goes, the standout performance is Amy Adams’s. Her portrayal of Banks is subtle and sensitive. Banks is quiet without being timid, highly observant but fiercely independent, and incredibly strong without being overpowering. She speaks in a soft voice, nearly a whisper, and her face is usually deadpan calm. And yet this makes the emotional moments of the film that much more touching.

I am glad that such a thoughtful, tasteful movie is finding both commercial and critical success nowadays. While arguably somewhat derivative of Kubrick’s work—the visuals and sound-effects were polished and excellent, but hardly groundbreaking—Arrival manages to ask many deep questions within a gripping and accessible plot. All in all, a truly excellent film.

 

Directed by Denis Villeneuve

Written by Eric Heisserer

Starring Amy Adams, Jeremy Renner, and Forest Whitaker

 

Quotes & Commentary #52: Burns

We deny our own role in the conflict because self-examination is so shocking and painful, and because we’re secretly rewarded by the problem we’re complaining about. We want to do our dirty work in the dark so we can maintain a façade of innocence.

—David D. Burns, Feeling Good Together

Lately I’ve been churning over this moral dilemma: to what extent can circumstances excuse immoral actions? Are we just products of our environment, and therefore not personally responsible? Or do we have a personal responsibility that cannot be effaced by outside pressure?

The way you answer this question will largely depend on where you fall on the political spectrum. Those on the right tend to hold individuals responsible; those on the left, circumstances.

Both sides seem to have a point. Obviously, somebody must be held responsible somewhere if we are to punish wrongdoers and improve society. Indeed, to treat people as helpless in the face of circumstances is tantamount to treating them as non-persons, possessing no moral agency.

On the other hand, holding people to be absolutely responsible can amount to blaming the victim. If somebody grows up in a poor neighborhood, with failing schools, few legitimate opportunities, and an oppressive police force, and then ends up committing a crime, it seems (to me at least) that harshly punishing this individual, without paying any attention to the circumstances, is the opposite of justice.

As is so often the case, both extremes prove dissatisfying. But where should we draw the line?

A few days ago I mentioned that, when thinking about failing school systems, we usually don’t hold the teachers responsible; but when thinking of Nazi death camps, we do blame the soldiers. This generalization about people’s opinions is, on second thought, not as clear-cut as I thought. In fact, teachers are often blamed for the faults in the educational system (which seems to me just a way to avoid fixing the problem). And the culpability of soldiers who commit heinous acts under orders is also debated. There is the famous Milgram experiment, which showed how easily an authority figure can get normal people to do terrible things.

Upon further reflection, I realized that this moral dilemma—circumstantial vs. personal responsibility—was similar to something I read in a self-help book about relationships.

When a couple is having problems, it is typical for each of them to blame the other: “Well, maybe I’m doing y, but I wouldn’t if he wasn’t doing x!”

As Burns says, it is very difficult to get past this. Getting somebody to stop blaming their partner and to change themselves is difficult. Admitting your own faults is neither fun nor easy. Many people, when asked to change a negative behavior, point out that their behavior is just a reaction to their partner’s negative behavior. Why should they have to change? Shouldn’t their partner, since he’s the one being ridiculous?

This illustrates a chilling thing about responsibility: In any given social system, from romantic relationships to the world economy, responsibility can be shifted around at whim. Depending on your perspective and your ideology, you can pick any section of a social system and put the blame there. The right blames the government and the left blames businesses. Teachers blame students and students blame teachers. Husbands blame wives, sisters blame brothers, employees blame bosses, sailors blame the wind, entrepreneurs blame the economy, brokers blame the market, and on and on and on, an infinite deferment of responsibility.

The odd thing is that every one of these people is right. It’s true that your relationship problems would disappear if your partner just did everything you wanted. It’s true that your boss doesn’t appreciate the work you do. It’s true that your boat wouldn’t have sunk if not for the storm. All these things are true, since every social system is a cyclical network of causes.

When we single out one element and put the blame there, we are thinking about the cause linearly: A is causing B, therefore the fault is with A. Yet so often B is just as much the cause of A as A is of B.

In a relationship, your behaviors influence your partner’s, and vice versa; neither exists in isolation. In our school system, maybe we do have mediocre teachers; and maybe poor-quality teaching is a big cause of our educational problems. And yet to put the blame on the teachers is to overlook all of the systemic flaws in teacher training and recruitment, and the mismatch between what we expect from teachers and the resources we provide them.

So if the causation is cyclical—or, perhaps even more accurately, web-like, with each section influencing every other section—what should we do?

What Burns says about relationships is true of many situations in life. We cannot change our partners, nor can we change our bosses, nor the system as a whole, without first changing ourselves. That is, when trying to change the world for the better, the first step is always to stop deferring responsibility for negative situations, to stop excusing yourself by pointing your finger at all of the flaws around you, and to take responsibility yourself.

Indeed, when you understand how you are contributing to a negative situation and then change your behavior, often the situation improves dramatically without anything else having to change, since your own actions played a crucial role in the interlocking web of causation. And in any case, there is no chance of improving a bad situation if you yourself are contributing to it.

This sounds easy, but in fact it is extremely hard. Burns points out that blaming our partners for our relationship problems is so common because it is self-serving. We don’t have to change our behavior, we get to justify our anger, we get to complain and play the victim, and perhaps we get some sadistic pleasure out of upsetting our partners. These ulterior motivations are rarely acknowledged or discussed, because they are rather ugly, but they are operative. On the other hand, taking responsibility requires painful self-examination and the admission of guilt—which can be very damaging to our self-image.

The same thing happens in other circumstances. Let me continue with the example of a teacher (this will likely be relevant to my life). A frustrated teacher can easily blame her unmotivated students who never do their homework and who always talk in class; she can blame her boss, who has unrealistic expectations and little sympathy; the school system, which pays her very little for long hours. All of these things may be true, and yet it is obvious how self-serving this blaming can be. A teacher cannot transform her students into dedicated scholars or fire her boss or change her pay; she needs to do what she can with what she has.

I should take a step back here. If you choose to be a teacher, I think you have a responsibility to do a good job of it. But for those looking to reform the educational system, blaming all the problems on teachers is just the same thing as when teachers blame the system. It is to avoid responsibility.

I should also make clear that, while I think we should take responsibility for what we can influence, I am not advocating for people to blame themselves. Taking responsibility and blaming yourself, though superficially similar, are really quite different. Responsibility is proactive: it means taking action on what is within your control. Blaming yourself does just the opposite; it makes you feel guilty, and guilt is a bad motivator.

Indeed, you should not accuse yourself of causing the problem, since most likely the problem has many causes. You should acknowledge your part in the solution to the problem, and try to do your part to change things.

This is the closest thing to an answer I have to this unanswerable question, whether circumstances or individuals are morally culpable. The causes of any ethical problem are complex and interlocking—a cyclical interplay of dynamics and personalities—but your only choice is to start with your actions.

Quotes & Commentary #51: Montaigne

I set little store by my own opinions, but just as little by other people’s.

—Michel de Montaigne

Although nobody is free from self-doubt, I have long felt that I have this quality to an inordinate degree. The problem is that I can’t decide whether this is a good or a bad thing.

On the one hand, doubting yourself is one of the keys to moderation and wisdom. If you think you already know everything, you cannot learn. If you are sure that your perspective is right, you cannot empathize. Dogmatism, selfishness, and ignorance result from the inability to doubt the truth of your own opinions.

There are no such things as self-doubting fanatics. The ability to question your own opinions and conclusions is what prevents most people from committing atrocities. I couldn’t kill somebody in the name of an idea, since there is no idea I believe in strongly enough.

And yet, this tendency to doubt my own beliefs and conclusions so often makes me hesitant, indecisive, and occasionally spineless. Never mind killing anyone: I don’t even believe in my political ideals enough to stand up to somebody I find offensive. I doubt the worth of my dreams, the reason of my arguments, the virtue of my actions; and I am not terribly sure about my professional competency or my literary skill.

No matter what I do, I have this nagging feeling that, somewhere out there, there are people who could make me appear ridiculous by comparison. So often I feel out of the loop. I hesitate to submit my writing anywhere because I think a professional editor would cut it to pieces. I hesitate to put forth arguments because I think a real expert could see right through them. I hesitate to commit to a profession because I doubt my own ability to follow through, to perform difficult tasks, and to do my duties responsibly.

My nagging self-doubt is more of a feeling than a thought; but insofar as a feeling can be expressed in words, it goes like this: “Well, maybe there’s something big out there that I don’t know, something important that would render all my knowledge and standards inadequate.”

The odd thing is that I have no evidence that this fear is justified. In fact, I have evidence to the contrary. The more I read and travel, the more people I meet, the more places I work, the less surprised I am by what I find. The contours of daily reality have grown ever more familiar, and yet this fear—the fear that, somehow, I have missed something big—remains.

The perilous side of self-doubt is that it can easily ally itself with baser qualities. I can argue myself out of taking risks because I am unsure whether I really want the goal. I can argue myself out of standing up for what I believe is right by doubting whether it really is right, and whether I could prove it. Self-doubt and fear—fear of failure, fear of rejection, fear of being publicly embarrassed—so often go hand-in-hand.

I can’t say why exactly, but the thought of writing a flawed argument, with a logical fallacy, an unwarranted assumption, or a sloppy generalization, fills me with dread. How mortifying to have my mental errors exposed to the world! Maybe this is from spending so much time in a school environment wherein the number of correct answers was used as a measure of my worth. Or maybe it is simply my personality; being “right” has always been important to me.

This fear of being wrong is particularly irrational, since some of the greatest minds and most influential thinkers in history have been wrong—Galileo, Newton, Darwin, Einstein, all of them have erred. Indeed, the fear of being wrong is not only irrational, but counterproductive to learning, since it sometimes prevents me from exposing my thoughts, increasing the likelihood that I will persist in an error.

Despite the negatives, I admit that I am often proud of my ability to doubt my own conclusions and change my opinions. I see it as a source of my independence of mind, my ability to think differently from others and to come to my own conclusions. After all, doubting yourself is the prerequisite of doubting anything at all. As Plato illustrated in his Socratic dialogues, from the moment we are born our minds are filled with all sorts of assumptions and prejudices which we absorb from our culture. The first step in doubting conventional opinion is thus doubting our own opinions.

But just as often as my self-doubt is a source of pride, it is a source of shame. I am sometimes filled with envy for those rare souls who seem perfectly self-confident. In this connection I think of Benvenuto Cellini, the Renaissance artist who left us his remarkable autobiography. Cocksure, boastful, selfish, prideful, Cellini was in many ways a despicable man. And yet he tells his story with such perfect certainty of himself that you can’t help but be won over.

Logically, self-confidence should come after success, since otherwise it isn’t justified. But so often self-confidence comes beforehand, and is actually the cause of success. In my experience, when you believe in yourself, others are inclined to believe in you. When you are confident you take risks, and these risks often enough pay off. When you are confident you state your opinions boldly and clearly, and thus have a better chance of convincing others.

Confidence is often discussed in dating. Self-confident people are seen as more attractive, and tend to have more romantic success since they take more risks. The ability to look somebody in the eye and say what you think and what you want—these are almost universally seen as attractive qualities; and not only in romance, but in politics, academics, business, and nearly everything else.

The charisma of confidence notwithstanding, this leads to an obvious danger. Many people are confident without substance. They boast more than they can accomplish; they speak with authority and yet have neither evidence nor logic to back their opinions. The world is ruled by such people—usually men—and I think most of us have personal experience with this type. I call it incompetent confidence, and it is rampant.

As Aristotle would say, there must be some ideal middle ground between being confidently clueless and being timidly thoughtful. And yet, in my experience, this middle ground, if it even exists, is difficult to find.

I suppose that, ideally, we would be exactly as confident as the reach of our knowledge permitted: bold where we were sure, hesitating where we were ignorant. In practice, however, this is an impossible ideal. How can we ever be sure of how much we know, or how dependable our theories are? Indeed, this seems to be precisely what we can never know for sure—how much we know.

For the world to function, it seems that it needs doers and doubters. We need confident leaders and skeptical followers. And within our own brains, we need the same division: the ability to act boldly when needed, and to question ourselves when possible. Personally, I tend to err on the side of self-doubt, since it easily allies itself with laziness, inaction, and fear; but now I am starting to doubt my own doubting.

Quotes & Commentary #50: Campbell

You yourself are participating in the evil, or you are not alive. Whatever you do is evil for somebody. This is one of the ironies of the whole creation.

—Joseph Campbell, The Power of Myth

I ended my last post with the dreary thought that we cannot help harming others. Almost inevitably, a system we must operate within—be it the economy, the school system, or a company we work for—will have undesirable consequences: it will exploit the disadvantaged, increase inequality, reinforce the status quo, or any of the other ugly things our social systems do.

Apart from this, it is worth considering that, even if we were operating within a relatively just and fair system, we could still unintentionally harm others. What if I take the job you had your heart set on? Or if I marry the girl you’ve always had a crush on? Or take the last spot at your ideal university? A promotion, the lottery, the last potato chip in the bag—all these are limited resources, and by using them you deprive others.

As long as we live in a finite world with infinite wants, as long as our desires outpace our means, we will inevitably have to compete for some resources; and this competition will make us get in the way of each other’s happiness.

It is easy to get angry or depressed about this. The gazelle who has just been tackled by the lion probably thinks that life is monstrously unfair, and that the lion is being very unjust. The lion, for her part, probably thinks that it is perfectly fair that she eat this gazelle, since he was the slowest in the herd.

And I think they’re both right. From one perspective, life is horribly unfair; and from another, life is fairness itself. As long as there is limited gazelle meat in the world, there will be some competition for its use—the gazelle for its body, the lion for its food.

To be alive means participating in this struggle for resources; and in that respect, being alive means harming others, since any resources you take for yourself are unavailable for others. And if inflicting harm means doing evil, this means that, to some people, sometimes, you are evil. Being alive means participating in this basic, universal evil.

So if evil is inevitable, what does it mean to be moral?

Well, as I’ve discussed elsewhere, I think that morality is a system of behavior that allows individuals to live safely within the same community. This system consists of interpersonal rules: how you need to act towards others. For example, a safe community isn’t logically possible where theft and murder are considered permissible; thus moral rules prohibit these behaviors. These rules are enforced by the community through punishments. This way, behaviors incompatible with safe communal living are discouraged and diminished, allowing each member to live in relative peace and security.

Provided that these communal rules are not flawed (and historically they often have been), then by following them you are by definition a moral person. Accepting a promotion—and, by doing so, depriving a coworker of the same promotion—may be “evil” from your coworker’s perspective, but it is not strictly immoral, since granting and accepting promotions are morally allowable actions. (By “morally allowable” I mean that these actions don’t inflict any harm, other than the unavoidable harm of allocating limited resources; and that they don’t make an exception of anyone, in that they don’t violate anyone’s rights.)

Moral systems (and their offspring, the concept of rights) are how we have learned to negotiate the crisscrossing pattern of desires, the unavoidable conflicts of interest, that exist when any two creatures inhabit the same space. By having general guidelines of conduct, we have an impartial, communally approved standard of deciding what is fair or unfair, a standard that treats every member of the community equally. In a way, a moral system is a way of imposing order onto the tragedy and comedy of all creation. It is a set of rules that tells you what desires you can or can’t satisfy—where, when, and how it is appropriate to obtain what you want. Moral systems legitimate some desires and delegitimate others. 

And the beauty of moral rules is that, by curbing some desires and disallowing certain actions, they actually benefit each community member in the long run, since it is these rules that make the community possible at all. Without them, the community would disintegrate into chaos, or at the very least would need oppressive force to hold it together, both of which are undesirable situations.

The problem is that any moral system, however well-constructed, cannot make life fair. Morality makes social life fair, but not life itself. No matter what, some people will be born with certain talents, some people will be born into wealthy families, some people will be born into privilege, and others will be cursed with abusive parents or struck down by disease. Aside from the accident of birth, luck intervenes at every important junction: relationships, careers, school, friendships, everything.

The omnipresence of luck—the enemy of fairness—and the finitude of life make unhappiness unavoidable, even in a perfectly constructed utopia. Our desires will always outpace our means, and reality will always baffle our attempts to control it. We want the impossible. We want to live forever with all our friends and family, eating wonderful meals five times daily, never feeling any pain or discomfort, bedding every attractive person we see. Of course we know this can’t happen, and so feel little bitterness, usually, that life is very different.

Nevertheless, how often do we feel that life is treating us unfairly? How often do we resent those around us for taking what we want, or shake our fists at the injustice of the universe for giving other people all the luck? This feeling of injustice most often results in anger; indeed, I think anger is the ego’s defense against the feeling of impotence. When we can’t get what we want, and things aren’t going our way, we naturally grow resentful and feel that the situation is somehow wrong. It is not our desires that are wrong—to the contrary, the universe should be cooperating, since the universe created me with these desires!—but the universe that is wrong. Right?

This is the reverse-side of Campbell’s point: Not only will you be evil to somebody, but somebody will also be evil to you, even when everyone is abiding by the dictates of morality. The wise course, I think, is to try to keep the whole in perspective, to realize that what seems unjust to you may seem perfectly just to others, and vice versa.

From up close, life is tragic, since we can never get everything we want, or even a fraction; but from a distance, seen as a whole, life is also comic, because we want the impossible and don’t appreciate what we have. This double-aspect of tragedy and comedy is, indeed, one of the ironies of creation. 

Quotes & Commentary #49: Orwell

All left-wing parties in the highly industrialized countries are at bottom a sham, because they make it their business to fight against something they do not really wish to destroy. They have internationalist aims, and at the same time they struggle to keep up a standard of life with which those aims are incompatible.

—George Orwell, A Collection of Essays

Yesterday I wrote an essay trying to answer this question: What’s the right thing to do in morally compromising circumstances? This is one of the oldest and most vexing questions of human existence; and there’s no way I’m going to crack this nut in one blog post. That’s why I’m writing another one.

As George Orwell points out, this question isn’t confined to any one sphere of our lives, but confronts us every day, in manifold and invisible ways. When we go to the grocery store, when we buy a shirt, when we download a song, when we get the latest model of smartphone, we are supporting business practices that are largely hidden from us, but which may be morally repulsive.

What is life like for the factory workers who made my computer? What are the conditions for the animals whose meat I eat? Where does the material for my jeans come from, how is it processed, and who are the workers who make it? For all I know, I may be patronizing exploitative, abusive, oppressive, and otherwise unethical businesses—and, the more I consider it, the more it seems likely that I do.

Unethical business practices aside, there is the simple fact of inequality. On the left we spend a lot of time criticizing the vast wealth inequality that exists within the United States; and yet we do not often stop to realize how much wealthier most of us are than people elsewhere. Is the first situation unjust, and the second not? Is it right that some countries are wealthier than others? And if not, can we logically desire our present standard of life while maintaining our political ideals?

To the extent that opponents of inequality are immersed in a global economy—and we are, all of us—they are participating in a system whose consequences they find morally wrong. But how can you rebel against a global paradigm? You can try to minimize your damage. You can try to patronize businesses who have more humane business practices. You can become a vegan and buy second-hand clothes.

And yet, it is simply impossible—logistically, just from lack of time and resources—to be absolutely sure of the consequences of all your actions in a system so vast and so complex. It would be a full-time job to be a perfectly conscientious consumer. You can’t personally investigate each factory or tour each farm. You can’t know everything about the company you work for, the bank you store your money in, the supermarkets you buy your food from.

This is the enigma of being immersed in an ethically compromising system. To a certain extent, resist or not, you become complicit in a social system you did not design and whose consequences you don’t approve of. It is one of the tragic but unavoidable facts of human life that good people can still do bad things, simply by being immersed in a bad social system. An economy of saints can still sin.

In economics this has a technical name: the fallacy of composition. This is the fallacy of extrapolating from the qualities of the parts to the qualities of the whole. A nation full of penny-pinchers may still be in debt. A nation full of expert job-seekers may still have high unemployment. Morally, this means a nation of good people may yet do evil.

The question, for me, is this: Where do we draw the line separating the culpability of the individual from the culpability of the system? To illustrate this, let me take two extreme examples.

Since teaching, as a profession, tends to attract idealistic and left-wing people, I think many teachers, old and young, think that the educational system in the United States is deeply flawed. The standardized tests, the inequality between school districts, the way that we evaluate kids and impart knowledge—many aspects of the system seem unfair and ineffective.

And yet, I think very few people would condemn the teachers who continue to work within this system, even if the system tends to reproduce inequality. We naturally blame the policy-makers and not the teachers, who are only doing their best in compromising circumstances.

Take the opposite extreme: soldiers working in a concentration camp. Now, it is clear that these soldiers were not personally responsible for creating the camp, and were following the orders of their superiors. Like the teachers, they are immersed in a situation they did not design, in a system with morally reprehensible results. (Obviously, the results of a concentration camp are incomparably worse than even the most flawed school system.)

In this situation, I’d wager that most of us would maintain that the soldiers had some responsibility and, at the very least, some of the blame. That is, we do not simply blame the system, but blame the individuals who took part in it. The whole situation is so totally, fundamentally, indisputably unacceptable that there are no extenuating circumstances, no deferment of guilt.

Now, there is obviously a very big difference between a system that is (ostensibly at least) designed to reduce inequality and provide education, and a system that is designed to kill people by the thousands and millions. As a result, in both of these situations, the moral verdict seems relatively clear: the noble aims of the first system excuse its flaws, while the horrid aims of the second system condemn its participants.

The problem, for most of us, is that we so often find ourselves in between these two extremes (although, admittedly closer to the case of teachers than Nazi soldiers, I hope). But where exactly do we draw the line? Where does our responsibility—as participants in a system—begin? And in what circumstances are we morally excused by being immersed in a flawed system?

The more I think about it, the more I am led to the conclusion that being alive requires some ethical compromise. In this regard, I often think of something Joseph Campbell said: “You yourself are participating in the evil, or you are not alive. Whatever you do is evil for somebody. This is one of the ironies of the whole creation.”

And this quote, I think, is where I have to stop for now, since it brings me to another Quotes & Commentary.

Review: Othello

Othello by William Shakespeare

My rating: 4 of 5 stars

I had rather be a toad and live upon the vapor of a dungeon than keep a corner in the thing I love for others’ use.

This play recently reasserted itself into my life after I was taken to see it performed here in Madrid. Though I couldn’t understand very much, since it was in elaborate and quick Spanish, I still enjoyed it. (Among other things, the performance featured lots of semi-nudity, men wearing gas masks on dog leashes, and M.I.A.’s “Paper Planes.”) Inspired, I decided to watch the BBC Television Shakespeare version, with Anthony Hopkins (looking suspiciously dark) playing the titular role.

The first time I read this play, I remember being somewhat baffled. Othello was stiff and uncompelling, Desdemona sickly sweet, and Iago operated from no discernible motive to accomplish pointless ends. This time around, I think I have made a little progress.

Othello naturally associates itself in my mind with Julius Caesar. In these plays, the titular characters, both generals, are distant, cold, and simple, and come to be totally overshadowed by other characters. In Julius Caesar, Brutus takes the lead, struggling to live morally in an immoral world; in this play we have Iago, who turns heroes into villains and innocence into carnage.

Who can pay attention to Othello when Iago is on the stage? He is hypnotizing. Shakespeare seems to accomplish the impossible by making one of his own characters the author of the play. Iago directs everything: he sets the plot in motion, manipulates the players’ emotions, controls what happens when, where, how fast, to whom, for what reason, and what it means. He is playwright and stage manager, an artist whose intelligence is so cunning that he can paint upon reality itself.

The really frightening thing about Iago is that he can make you believe him, too, even though you know better. He is so utterly convincing in his lies, so keen in his psychological interpretations, so plausible in his attributions of motive and cause, that I found myself questioning whether Desdemona actually did sleep with Cassio. Nobody in the play stands a chance against such a roving and beguiling genius. Even Othello, brave, noble, commanding, is helpless in Iago’s grip.

The mysterious thing about Iago is what drives him. In the beginning of the play, he attributes his hatred for Othello to rumors about Othello sleeping with his wife. Later on, Iago says he is resentful because Cassio was made Othello’s lieutenant. And yet his plan is not just to besmirch Cassio’s reputation—the self-interested thing to do—but to corrupt and then destroy Othello’s soul—which does not benefit Iago at all, or at least not in worldly terms.

What actuates him seems to be neither jealousy, nor envy, nor egotism, but pure spite: the desire for revenge irrespective of justice or self-interest. Revenge for its own sake. This is so terrifying, and yet so compelling, because spite is such an exquisitely human emotion. It is an emotion that seems to have neither practical benefit nor rational justification; and yet who has not felt the pangs of spite, the evil joy in injuring somebody who has injured you? It is spite that prompts Milton’s Satan to fight against infinite power; and it is spite that spurs Iago onward to destroy Othello, at great personal risk, for no personal benefit other than the joy of seeing Othello suffer for promoting Cassio instead of Iago.

As Harold Bloom points out, this tragedy is notable for having not even one moment of comic relief. It is unrelenting in its horror. We see innocent character after innocent character fall prey to Iago; we see Othello, a flawed but a good man, descend into madness; and finally we see Desdemona, the paragon of faithful love, smothered in her bed. Desdemona’s death scene is particularly hard to watch. She does not scream for help. She does not even protest her innocence as strongly as we’d like. Instead, she begs for one day, one half-hour, one moment of life more, and is denied.

We don’t even get the satisfaction of seeing Iago pay for his crimes, or having him explain himself. “Demand me nothing. What you know, you know. From this time forth I never shall speak word.”

An interesting question is whether Othello and Desdemona’s marriage would have had a crisis even without Iago. They are a particularly ill-starred couple. Othello is a man of war, shaped by camp-life, accustomed to absolute power; he solves his problems with force; he destroys those who challenge or disobey him. Desdemona is love incarnate, faithful, kind, gentle, and totally without malice. She is attracted to Othello for his adventurous life; Othello is attracted to her admiration for him. The story of their courtship—Othello regaling her with his war-stories, and she giving him hints of her interest—makes it sound as though Othello is only attracted to his own reflection in her. This is in keeping with a man who refers to himself in the third person.

Othello’s obvious unsuitability to married life makes him an easy dupe to Iago. Desdemona’s guileless purity makes her the perfect victim. Iago’s only mistake is that he underestimated his own wife—an odd, but telling mistake to make. Is there a moral to this story? I’m not sure. But I’ll be staying away from people named Iago.

Quotes & Commentary #48: Orwell

We have become too civilised to grasp the obvious. For the truth is very simple. To survive you often have to fight, and to fight you have to dirty yourself. War is evil, and it is often the lesser evil.

—George Orwell, A Collection of Essays

What is the right thing to do in morally compromising circumstances? What should you do when, for example, you’re working for a company whose business practices you find exploitative? What if you’re working in a school system that embodies an educational philosophy you think is false or harmful?

Or consider the situation Orwell describes: what should you do when you are forced to choose between fighting in a war and capitulating to fascists?

This is a dreadful choice to make. On the one hand, fascism is ethically intolerable, and allowing fascism to conquer means allowing injustice to reign and persecution to run rampant. But to stop fascism means having to fight; and fighting means getting your hands dirty. “Getting your hands dirty” is, of course, a euphemism for all of the morally compromising actions that war entails. You will have to kill strangers, violently and indiscriminately; and in modern warfare the death of innocent civilians is inevitable, considering the weapons we use.

It is one question (which I don’t intend to address here) whether a war’s ends justify its so-called “collateral damage.” It is another whether the moral benefit of defeating an enemy outweighs the moral damage of participating in warfare. To use religious language for a moment, my question is this: Does inflicting violence for a good cause imperil your soul? Does the justice outweigh the sin?

Orwell thought the answer was yes, and he lived his principles. He fought passionately, both in word and action, against fascism, even taking up arms in the Spanish Civil War. To pick another notable example, Malcolm X also agreed that violent means were justified when used against violent tyranny. If white people were going to violently oppress black people in America, then why shouldn’t black people fight back with any means necessary? Indeed, I think most people nowadays would agree that violence is sometimes justified by the outcome. Despite all the atrocities of the Second World War, fighting against the Nazis was morally preferable to letting them win.

On the other side of this debate are people like Gandhi, Martin Luther King, and James Baldwin. The justification for pacifism is that violence corrupts, both the victim and the attacker. By committing violence, even in the service of a noble cause, we degrade ourselves.

This argument sounds religious, and it often is; but you can make the same argument from a secular perspective. James Baldwin, a man totally disillusioned with Christianity, was nevertheless a pacifist, because he thought violence, injustice, and oppression corrupt their agents. He thought this because the purveyors of violent oppression must create comforting myths for themselves so they don’t have to face their own immorality; and this leads to a disconnect from reality and an inauthentic life.

For my part, the risk with using unethical means for ethical ends is that it forces you to make exceptions in your moral code. You must create an inconsistency in your standards of right and wrong, and this may lead to a slippery slope. In other words, if you make a special rule to use violence against one type of person, this creates a risk that the rule can be abused.

For one, if you decide that violence is allowable against one special class of person—fascist soldiers, let’s say—this leads to the difficulty of determining whether any specific person falls into that class. If you make a mistake, you will commit violence against an innocent person. And it is clear that this rule can be abused (and certainly was during the Spanish Civil War), for example, by anyone who has a score to settle, through a false accusation or other forms of foul play.

The other risk is that, by creating one category of allowable violence, you set a damaging precedent. In the future, perhaps the category is expanded, or other categories of allowable violence are created, citing the first one for authority. In other words, you may unintentionally open the door for unscrupulous people, who wish to cloak their violence in legitimacy rather than use violence to accomplish a noble end.

I am not willing, for the moment, to assert that either Orwell or Baldwin is definitely right (although I admit I’m inclined to pacifism, if only because I’m cowardly). The “right” answer seems to depend heavily on the particular circumstances.

Thankfully, most of us will not have to decide whether to use violence against injustice. But by virtue of living in a society, we will certainly have to make many other, far less dramatic decisions about the right thing to do when given only undesirable options.

This question came to the fore during the 2016 elections, particularly among fans of Bernie Sanders. Many Bernie fans believed that both Hillary Clinton and Donald Trump were morally corrupt, and they were not content to vote for the “lesser of two evils.” Now, in the case of Clinton and Trump, it seemed clear to me that Trump was incomparably worse than Clinton, so the choice wasn’t so hard. But in a general sense this question is certainly worth considering.

When faced with two unethical options, there is always a third option: don’t choose. That is, withdraw and refuse to participate. More generally, when you find yourself in a morally compromising environment, you can either attempt to navigate the environment in the least immoral way possible, or remove yourself from the environment.

Let me be a little more concrete. Imagine you are working in a business whose practices you disapprove of. Maybe you think the business exploits its workers—paying a low salary, with few benefits, and asking employees to work long hours—or maybe the business is selling a product under false pretenses, effectively fooling its customers.

Consider the latter case. To be even more concrete, imagine that you’re a salesperson selling a product you know is poor-quality. Your salary and your job security depend directly on how many units you sell. You have no way to improve the product. To sell it requires, if not lying, at least that you omit information—that is, that you fail to mention that the product is shoddy.

Maybe your first reaction is to say that the moral thing to do is to quit. If there is no moral way to do the job, then you shouldn’t do it, right? However, if you quit, do you really improve the world? The business will hire somebody else to replace you, perhaps somebody with fewer scruples, and the moral balance sheet of the universe will be unaffected. Indeed, by quitting, you inflict harm on yourself by depriving yourself of a salary. And in that case, is quitting really the moral thing to do?

This, I think, is the problem with morally compromising systems. By refusing to participate, all you do is damage yourself while allowing others to fill the same unethical role that you vacated.

True, you do have the option, in the example above, to try to create a movement against the business, to spread the knowledge that its products are shoddy (although this may expose you to legal liability if you signed a non-disclosure agreement). Even so, when you think about it, the fundamental problem isn’t really that one business is selling a poor-quality product. The problem is that businesses can thrive by stretching the truth to sell products. (Or is the problem that consumers are not sufficiently well-informed? Where exactly does the business’s responsibility end and the consumer’s begin?)

Again, I’m unwilling, at least for now, to give a general prescription for conundrums like these. And yet the question cannot be put off. Life is one morally compromising situation after another. How can we balance the need to look out for ourselves with the desire to harm as few people as possible?

Quotes & Commentary #47: Russell

Nothing is so exhausting as indecision, and nothing is so futile.

—Bertrand Russell

A few days ago, I wrote a post about the circumstances in which I’ve found it’s wise to distrust my emotions. Now I want to examine the occasions when I’ve found it’s wise to trust them.

There are few things more daunting, more agonizing, and more frightening for me than making important decisions. Yet life constantly confronts us with difficult choices. Where to go to school? Who to date? Who to marry? What profession to pursue? What job to accept? Where to live? To have kids? How many?

I hate making decisions like these, because it seems as if I’m gambling with my very life. Since I can’t know the future, how can I know I’m making the “right” choice? No matter how much information I collect, I can never be sure whether I have surveyed all the relevant points, nor can I ever be sure that another factor, unforeseeable but decisive, might appear in the future.

And if I could know all the important facts, even then, how could I be sure that my choice will maximize my happiness? What if my priorities change? What if something important to me now seems silly to me in ten years? How can I be certain of my preferences—whether I prefer living in the city or the country, for example—when I haven’t had experience of all the different options?

So you can see that both the relevant factors and the criteria are, to an extent, unknowable. The paradox boils down to this: I’m supposed to make a choice in the present that will bind my future self, without knowing exactly what I’m choosing or what my future self will be like. How can I do the right thing?

Thinking along these lines, it’s easy to fall into a pit of despair. It’s a gamble any way you look at it; and yet this is not money you’re dealing with, but your own life.

One way I’ve found to reduce this despair is to try to remind myself that my happiness does not depend on my external circumstances. As I know from painful experience, my mentality is far, far more important than my surroundings in determining my levels of anxiety and contentment. And the more I cultivate this ability to find joy within me rather than in external things, the less pressure there is to make the “right” choice. I no longer feel as though I’m gambling with my happiness, which reduces the significance of the decision.

Paradoxically, the less pressure you put on yourself—the less you tell yourself that your life hangs in the balance—the more likely you are to make the “right” choice, since anxiety, frustration, and fear are not conducive to clear thinking. Indeed, I think it’s wrong to apply the categories “right” and “wrong” to any choice like this. Life is wide open, and each option carries its own positives and negatives. Besides, no choice is absolutely binding. People change jobs, switch careers, get divorced and remarried, move cities, go back to university, and make a thousand other changes that their younger selves could never have predicted. All these are reasons not to agonize.

This brings me back to the role of emotion. I have found that, in making important life decisions, it is usually wiser to trust my intuition than any conscious analysis. Whether I’m visiting a potential college, going on a first date, or interviewing for a potential job, I have found that it either feels “right,” “wrong,” or somewhere in between, and that this feeling is often (though not always) more trustworthy than any of the factors I am weighing.

Let me give a concrete example. While in college, I took a class on the sociology of relationships. One day, the professor said something that has stuck with me. When looking for a partner, we usually have certain criteria we apply to potential mates, a mental checklist we are trying to tick off. Maybe you want someone who doesn’t smoke, who’s taller than you, who is within a certain age range. These are the criteria we normally apply on dating websites, for example, when judging other people’s profiles.

And yet, there is something besides these criteria, what my professor called “chemistry.” This is the way that two people actually interact: how they behave around each other, whether they make each other laugh, if they feel comfortable or uncomfortable, if they feel energetic or bored.

Chemistry is unpredictable. Somebody may satisfy your every criterion and yet bore you to death; and someone else may be totally unacceptable on paper and yet consistently make you laugh.

I think this notion of chemistry is applicable far beyond relationships. There is always an unpredictable element in your reactions. This is why we have interviews rather than hire people just on their résumés, and why we visit college campuses rather than decide from home. We need to experience something for ourselves, to confront it firsthand, to see how we will react.

This leads to the question: What should you do when your instinctive reaction is out of harmony with your consciously chosen criteria? What if you instinctively like something that is mediocre on paper? Or if you instinctively dislike something that is great on paper?

I can only answer for myself. With decisions, I have learned to trust my gut reaction and to distrust my consciously chosen checklist. With very few exceptions, this strategy has proven satisfactory to me.

If life has taught me anything so far, it is that I am very bad at consciously predicting what I will like. From the university I attended, to the subject I studied, to the people I’ve dated, to the jobs I’ve taken—the most pleasant experiences, and the most satisfying choices, have invariably been the result of unexpected gut feelings. Likewise, the periods in my life when I have felt the worst, and the choices I have most regretted, were times when I was trying to carry out some consciously devised plan.

This leads me to another question: What is intuition? What is this part of my brain, unconscious and inaccessible, that is more trustworthy than my conscious thoughts? This is really a question for psychologists, I suppose, and I feel presumptuous answering it.

I will only say that, judging from my own experience, we are subconsciously aware of far more things than we can consciously take note of. Small details in our environment, little social cues and tics of personality, a thousand details too fine and too subtle to be intentionally investigated—all this is taken in by our brains, automatically and without effort.

Now, I am not a believer in the mystical subconscious, and I do not follow either Freud or Jung. Nevertheless, it seems one of the basic facts of my life that my brain performs far more operations than I am consciously aware of. There is no contradiction or mystery in this. Insects scan their environments with great efficiency without any need of consciousness (or at least, I don’t think insects are conscious). And in any case, to comport myself effectively in a physical environment, coordinating my limbs with my senses, keeping myself away from sudden threats, I need to process many more facts than my poor conscious mind is able to.

(I hope to write more about this in the future, but for now it’s only important that I think we do the majority of our most vital cognitive labor without being consciously aware of it.)

Considering all this, it seems eminently wise to trust my intuition. With regard to dating, for example, I believe my unconscious brain is a far more reliable judge of character than my conscious self. While I am fiddling around with psychological guessing games and simplistic theories, my unconscious brain, honed by thousands of years of social evolution, is producing a sophisticated analysis of the person I’m with, and delivering this information to my conscious brain in the form of intuition and feeling.

I’ve gotten this far, and yet I still haven’t delineated the situations in which our intuition should be trusted, and in which it shouldn’t. The short answer is that everyone must figure this out for themselves. Only experience has shown me when following my intuition gets me into trouble and when it guides me well.

More generally, however, I think that, when making decisions regarding your own happiness, it is necessary to consult your intuition. But when making decisions of wider consequence, it is reckless to rely on intuition alone. Your intuition may let you know what will please you, but not what will please others. In other words, your intuition provides information about what you want, which is a fact about yourself. It does not, and cannot, provide reliable information about the world. This, I think, is a vital distinction to keep in mind.

Quotes & Commentary #46: Wittgenstein

If language is to be a means of communication there must be agreement not only in definitions but also (queer as this may sound) in judgments.

—Ludwig Wittgenstein, Philosophical Investigations

I often think about the relationship between the public and the private. As a naturally introverted person, I feel very keenly the separation of my own experience from the rest of reality. I make music, take pictures, and write this blog as a way of communicating this inner reality—of manifesting my private world in a publicly consumable form.

Having an ‘inner world’ is one of the basic facts of life. Each of us is aware that there is a part of us—the most vital and most mysterious part, perhaps—that is inaccessible to others; we can keep secrets, we can make judgments without anyone else noticing, we can have private pleasures and pains. All of our experience takes place in this space; the only world we ever see, hear, or touch is in our heads.

And yet we are also aware that this reality is, in a sense, insubstantial and ultimately secondary. Our inner world exists in reference to the outer world, the world of objective facts, the world that is publicly known. My senses are not just mental facts, but point outward; my thoughts, actions, and desires are oriented towards a world that does not exist in me. Rather, I exist in it, and my experience is just one interpretation of this world, and one vantage point from which to view it.

How are these two worlds related? How do they interact? Is one more important? What is the relationship of our private minds to our public bodies? These are classic philosophical conundrums, mysterious still after all these millennia.

Historical philosophers aside, most of us, in our more reflective moments, become acutely aware of the division between subjective and objective. When you are, for example, searching for a word—when a word is on the tip of your tongue—you feel as though you are rummaging through your own mind. The word is in you somewhere, and nobody but you can find it.

From this, and other experiences like it, we get the feeling that speaking (and by extension, writing) consists of taking something internal and externalizing it. Language is, in this view, an expression of thought; and words take their significance from cogitations. That is to say, our private mental world is the wellspring of significance; our minds imbue our language with meaning. The word “pizza,” for example, means pizza because I am thinking of pizza when I say it.

And yet, as Wittgenstein tried to show in his later philosophy, this is not how language really works. To the contrary, words are defined by their social use: what they accomplish in social situations. In other words, language is public. The meaning of words is determined, not by referring to any inner thought, nor by referring to any objective facts, but by convention, in a community of speakers. (I don’t have the space here to recapitulate his arguments; but you can see my review of his book here.) The word “pizza” means pizza because you can use it to order in a restaurant.

This may seem to be a merely academic matter; but when you begin to think of meaning as determined socially rather than psychologically, then you realize that your cognitive apparatus is not nearly as private as you are wont to believe. In order to communicate thought, you must transform it into something socially consumable: language. All of our vague notions must be put into boxes, whose dimensions are determined by the community, not by us.

But the social does not only intrude when we try to communicate with others; we also understand ourselves through these same social concepts. That is to say, insofar as we think in words, and we understand our own personalities through language, we are subjecting our deepest selves to public categories; even in our most private moments, we are seeing ourselves in the light of the community. We are social beings to our very core.

This does not only extend to the definitions of words. As Wittgenstein points out, to use language effectively, we must also judge like the community.

Any word, however well-defined, is ambiguous in its application. To apply the word “car” to a vehicle, for example, requires not only that I know the definition—whatever that may be—but that I learn how to differentiate between a car, a truck, a van, and an SUV. Every member of a community is involved in educating one another’s judgment, and keeping their opinions in tune. If I call an SUV a “car,” or a pickup truck a “van,” any fellow speakers will correct me, and in this way they will educate me to judge like a member of the community.

As I learn Spanish, I have firsthand experience of this. To pick a trivial example, the English word “sausage” is broader than any corresponding Spanish word. Here in Spain they differentiate between salchicha and salchichón, a difference that my American mind has a hard time understanding. Although Spaniards have tried to define this difference to me, I have found that the only way for me to learn it is by being corrected every time I apply the wrong word.

More significantly, in order to conjugate properly in Spanish, I must not only learn how to change the ending and so forth, but I must learn when it is appropriate to use each tense. To pick the most troubling example, in English we have only the simple past, whereas in Spanish there is both the imperfecto and the indefinido. I constantly use the wrong form, not because I don’t know their technical usage (it has been explained to me countless times, using various metaphors and examples, and I can recite this technical definition from memory), but because my judgment is out of alignment.

Whether an action is continuous, periodic, completed, ongoing, or occasional—this is not as self-apparent as every native speaker likes to assume, but indeed requires a good deal of interpretation. My judgment has not yet been properly educated by the community, and so, despite my knowing the technical usage of these two forms, I still misuse them.

In a way, this aspect of language learning is somewhat chilling. In order to speak effectively, not only must I use communal vessels to contain my thoughts, but I must learn to judge along the same lines as other members of the community—to interpret, analyze, and distinguish like them. What is left of our private selves when we subtract everything shaped and put there by the community? Am I a self-existent person, or just a reflection of my social milieu?

Yet I do not think that all this is something to dread. Having communally defined categories, and a communally shaped judgment, gives permanence and exactitude to communication. Left on our own, thinking without symbols, communicating with no one but ourselves, there is nothing that grants stability to our reflections; they constantly slip through our fingers, an ever-changing flux tied to nothing. With no fixed points, our judgment flounders in a torrent of ideas, thrashing ineffectually.

When we learn a language, and learn to use it well, we learn how to pour the ambiguous stuff of thought into stable vessels, how to cast the molten metal of our mental life into solid forms. This way, not only can we understand the world better, but we can learn to understand ourselves better. This, I think, is the very purpose of culture itself: to partition reality into sections, to impose structure on ambiguous reality.

Let me give you a common example.

A relationship is a naturally ambiguous thing. The affection and commitment that two people feel for one another exists on a spectrum. And often we do not really know how committed we are to somebody until we examine the relationship in retrospect. And yet, relationships must be defined, and defined early on, for the sake of the community.

Every culture on earth has rituals and categories associated with courtship, for the simple fact that somebody’s relationship status is a big part of their social identity. Ambiguities in social identity are not tolerated, because they impede normal social life; to deal with somebody effectively, you need them to have a recognizable social status—a status that tells you what to expect from them, what you can ask of them, and a million other things.

In modern culture, as we delay marriage ever further into the future, we have developed the need for new relationship categories. Now we are “dating,” and then “in a relationship.” The status of being “boyfriend” or “girlfriend” is now socially understood and approved as one level of commitment.

The interesting thing, to me, is that the decision to be in a relationship, to become boyfriend and girlfriend (or whatever the case may be), seems like a private decision, affecting only two people. And yet, it is really a decision for the benefit of the community. To be in a relationship defines where you stand in relation to everyone else: whether it is appropriate to flirt with you, to ask you out, to dance with you, to ask about your significant other, and so forth.

Now, this is not to say that the decision is solely for the benefit of the community. To put this another way, it also benefits you and your partner, because you are also part of the community. It puts a publicly understood category, indicating a certain level of commitment, on your naturally ambiguous and shifting feelings. In other words, by applying a public category to a private feeling, you are, in effect, imposing a certain level of stability on the feeling.

Look what happens next. This level of commitment, being publicly labeled, is also bolstered. Friends, family, and coworkers treat you differently. You are now in a different category. And this response of the community helps to form and reinforce your private feelings of commitment. Relationships are never wholly private affairs between two people. It takes a village to make a couple.

Again, I am not suggesting that this is a bad thing. To the contrary, I think that having communal definitions is what allows us to understand our own selves at all. This is also why I write these quotes and commentary. By forcing myself to take my ambiguous thoughts and put them into words, into public vessels, not only do I communicate with others, but I find out what I myself think.