Quotes & Commentary #52: Burns

We deny our own role in the conflict because self-examination is so shocking and painful, and because we’re secretly rewarded by the problem we’re complaining about. We want to do our dirty work in the dark so we can maintain a façade of innocence.

—David D. Burns, Feeling Good Together

Lately I’ve been churning over this moral dilemma: to what extent can circumstances excuse immoral actions? Are we just products of our environment, and therefore not personally responsible? Or do we have a personal responsibility that cannot be effaced by outside pressure?

The way you answer this question will largely depend on where you fall on the political spectrum. Those on the right tend to hold individuals responsible; those on the left, circumstances.

Both sides seem to have a point. Obviously, some people must be responsible somewhere if we are to punish wrongdoers and improve society. Indeed, to treat people as helpless in the face of circumstances is tantamount to treating them as non-persons, possessing no moral agency.

On the other hand, holding people to be absolutely responsible can amount to blaming the victim. If somebody grows up in a poor neighborhood, with failing schools, few legitimate opportunities, and an oppressive police force, and then ends up committing a crime, it seems (to me at least) that harshly punishing this individual, without paying any attention to the circumstances, is the opposite of justice.

As is so often the case, both extremes prove dissatisfying. But where should we draw the line?

A few days ago I mentioned that, when thinking about failing school systems, we usually don’t hold the teachers responsible; but when thinking of Nazi death camps, we do blame the soldiers. This generalization about people’s opinions is, on reflection, not as clear-cut as I assumed. In fact, teachers are often blamed for the faults in the educational system (which seems to me just a way of avoiding fixing the problem). And the culpability of soldiers who commit heinous acts under orders is also debated: the famous Milgram experiment showed how easily authority can induce normal people to do terrible things.

Upon further reflection, I realized that this moral dilemma—circumstantial vs. personal responsibility—was similar to something I read in a self-help book about relationships.

When a couple is having problems, it is typical for each of them to blame the other: “Well, maybe I’m doing y, but I wouldn’t if he wasn’t doing x!”

As Burns says, it is very difficult to get past this. Getting people to stop blaming their partners and to change themselves instead is an uphill battle. Admitting your own faults is neither fun nor easy. Many people, when asked to change a negative behavior, point out that their behavior is just a reaction to their partner’s negative behavior. Why should they have to change? Shouldn’t their partner, since he’s the one being ridiculous?

This illustrates a chilling thing about responsibility: In any given social system, from romantic relationships to the world economy, responsibility can be shifted around at whim. Depending on your perspective and your ideology, you can pick any section of a social system and put the blame there. The right blames the government and the left blames businesses. Teachers blame students and students blame teachers. Husbands blame wives, sisters blame brothers, employees blame bosses, sailors blame the wind, entrepreneurs blame the economy, brokers blame the market, and on and on and on, an infinite deferment of responsibility.

The odd thing is that every one of these people is right. It’s true that your relationship problems would disappear if your partner just did everything you wanted. It’s true that your boss doesn’t appreciate the work you do. It’s true that your boat wouldn’t have sunk if not for the storm. All these things are true, since every social system is a cyclical network of causes.

When we single out one element and put the blame there, we are thinking about the cause linearly: A is causing B, therefore the fault is with A. Yet so often B is just as much the cause of A as A is of B.

In a relationship, your behaviors influence your partner’s, and vice versa; neither exists in isolation. In our school system, maybe we do have mediocre teachers; and maybe poor-quality teaching is a big cause of our educational problems. And yet to put the blame on the teachers is to ignore all of the systemic flaws in teacher training and recruitment, and the mismatch between what we expect from teachers and the resources we provide them.

So if the causation is cyclical—or, perhaps even more accurately, web-like, with each section influencing every other section—what should we do?

What Burns says about relationships is true of many situations in life. We cannot change our partners, nor can we change our bosses, nor the system as a whole, without first changing ourselves. That is, when trying to change the world for the better, the first step is always to stop deferring responsibility for negative situations, to stop excusing yourself by pointing your finger at all of the flaws around you, and to take responsibility yourself.

Indeed, when you understand how you are contributing to a negative situation and then change your behavior, often the situation improves dramatically without having to change anything else, since your own actions played a crucial role in the interlocking web of causation. And in any case, there is no chance of improving a bad situation if you yourself are contributing to it.

This sounds easy, but in fact it is extremely hard. Burns points out that blaming our partners for our relationship problems is so common because it is self-serving. We don’t have to change our behavior, we get to justify our anger, we get to complain and play the victim, and perhaps we get some sadistic pleasure out of upsetting our partners. These ulterior motivations are rarely acknowledged or discussed, because they are rather ugly, but they are operative. On the other hand, taking responsibility requires painful self-examination and the admission of guilt—which can be very damaging to our self-image.

The same thing happens in other circumstances. Let me continue with the example of a teacher (this will likely be relevant to my life). A frustrated teacher can easily blame her unmotivated students, who never do their homework and who always talk in class; she can blame her boss, who has unrealistic expectations and little sympathy; or the school system, which pays her very little for long hours. All of these things may be true, and yet it is obvious how self-serving this blaming can be. A teacher cannot transform her students into dedicated scholars, fire her boss, or change her pay; she needs to do what she can with what she has.

I should take a step back here. If you choose to be a teacher, I think you have a responsibility to do a good job of it. But for those looking to reform the educational system, blaming all the problems on teachers is just the same thing as when teachers blame the system. It is to avoid responsibility.

I should also make clear that, while I think we should take responsibility for what we can influence, I am not advocating for people to blame themselves. Taking responsibility and blaming yourself, though superficially similar, are really quite different. Responsibility is proactive. It means taking action on what is within your control. Blaming yourself does just the opposite; it makes you feel guilty, and guilt is a bad motivator.

Indeed, you should not accuse yourself of causing the problem, since most likely the problem has many causes. You should acknowledge your part in the solution to the problem, and try to do your part to change things.

This is the closest thing to an answer I have to this unanswerable question, whether circumstances or individuals are morally culpable. The causes of any ethical problem are complex and interlocking—a cyclical interplay of dynamics and personalities—but your only choice is to start with your actions.

Quotes & Commentary #51: Montaigne

I set little store by my own opinions, but just as little by other people’s.

—Michel de Montaigne

Although nobody is free from self-doubt, I have long felt that I have this quality to an inordinate degree. The problem is that I can’t decide whether this is a good or a bad thing.

On the one hand, doubting yourself is one of the keys to moderation and wisdom. If you think you already know everything, you cannot learn. If you are sure that your perspective is right, you cannot empathize. Dogmatism, selfishness, and ignorance result from the inability to doubt the truth of your own opinions.

There is no such thing as a self-doubting fanatic. The ability to question your own opinions and conclusions is what prevents most people from committing atrocities. I couldn’t kill somebody in the name of an idea, since there is no idea I believe in strongly enough.

And yet, this tendency to doubt my own beliefs and conclusions so often makes me hesitant, indecisive, and occasionally spineless. Never mind killing anyone: I don’t even believe in my political ideals enough to stand up to somebody I find offensive. I doubt the worth of my dreams, the reason of my arguments, the virtue of my actions; and I am not terribly sure about my professional competency or my literary skill.

No matter what I do, I have this nagging feeling that, somewhere out there, there are people who could make me appear ridiculous by comparison. So often I feel out of the loop. I hesitate to submit my writing anywhere because I think a professional editor would cut it to pieces. I hesitate to put forth arguments because I think a real expert could see right through them. I hesitate to commit to a profession because I doubt my own ability to follow through, to perform difficult tasks, and to do my duties responsibly.

My nagging self-doubt is more of a feeling than a thought; but insofar as a feeling can be expressed in words, it goes like this: “Well, maybe there’s something big out there that I don’t know, something important that would render all my knowledge and standards inadequate.”

The odd thing is that I have no evidence that this fear is justified. In fact, I have evidence to the contrary. The more I read and travel, the more people I meet, the more places I work, the less surprised I am by what I find. The contours of daily reality have grown ever-more familiar, and yet this fear—the fear that, somehow, I have missed something big—this fear remains.

The perilous side of self-doubt is that it can easily ally itself with baser qualities. I can argue myself out of taking risks because I am unsure whether I really want the goal. I can argue myself out of standing up for what I believe is right by doubting whether it really is right, and whether I could prove it. Self-doubt and fear—fear of failure, fear of rejection, fear of being publicly embarrassed—so often go hand-in-hand.

I can’t say why exactly, but the thought of writing a flawed argument, with a logical fallacy, an unwarranted assumption, or a sloppy generalization, fills me with dread. How mortifying to have my mental errors exposed to the world! Maybe this is from spending so much time in a school environment wherein the number of correct answers was used as a measure of my worth. Or maybe it is simply my personality; being “right” has always been important to me.

This fear of being wrong is particularly irrational, since some of the greatest minds and most influential thinkers in history have been wrong—Galileo, Newton, Darwin, Einstein, all of them have erred. Indeed, the fear of being wrong is not only irrational, but counterproductive to learning, since it sometimes prevents me from exposing my thoughts, increasing the likelihood that I will persist in an error.

Despite the negatives, I admit that I am often proud of my ability to doubt my own conclusions and change my opinions. I see it as a source of my independence of mind, my ability to think differently from others and to come to my own conclusions. After all, doubting yourself is the prerequisite of doubting anything at all. As Plato illustrated in his Socratic dialogues, from the moment we are born our minds are filled with all sorts of assumptions and prejudices, which we absorb from our culture. The first step of doubting conventional opinion is thus doubting our own opinions.

But just as often as my self-doubt is a source of pride, it is a source of shame. I am sometimes filled with envy for those rare souls who seem perfectly self-confident. In this connection I think of Benvenuto Cellini, the Renaissance artist who left us his remarkable autobiography. Cocksure, boastful, selfish, prideful, Cellini was in many ways a despicable man. And yet he tells his story with such perfect certainty of himself that you can’t help but be won over.

Logically, self-confidence should come after success, since otherwise it isn’t justified. But so often self-confidence comes beforehand, and is actually the cause of success. In my experience, when you believe in yourself, others are inclined to believe in you. When you are confident you take risks, and these risks often enough pay off. When you are confident you state your opinions boldly and clearly, and thus have a better chance of convincing others.

Confidence is often discussed in dating. Self-confident people are seen as more attractive, and tend to have more romantic success since they take more risks. The ability to look somebody in the eye and say what you think and what you want—these are almost universally seen as attractive qualities; and not only in romance, but in politics, academics, business, and nearly everything else.

The charisma of confidence notwithstanding, this leads to an obvious danger. Many people are confident without substance. They boast more than they can accomplish; they speak with authority and yet have neither evidence nor logic to back their opinions. The world is ruled by such people—usually men—and I think most of us have personal experience with this type. I call it incompetent confidence, and it is rampant.

As Aristotle would say, there must be some ideal middle-ground between being confidently clueless, and being timidly thoughtful. And yet, in my experience, this middle-ground, if it even exists, is difficult to find.

I suppose that, ideally, we would be exactly as confident as the reach of our knowledge permitted: bold where we were sure, hesitating where we were ignorant. In practice, however, this is an impossible ideal. How can we ever be sure of how much we know, or how dependable our theories are? Indeed, this seems to be precisely what we can never know for sure—how much we know.

For the world to function, it seems that it needs doers and doubters. We need confident leaders and skeptical followers. And within our own brains, we need the same division: the ability to act boldly when needed, and to question ourselves when possible. Personally, I tend to err on the side of self-doubt, even though it easily allies itself with laziness, inaction, and fear; but now I am starting to doubt my own doubting.

Quotes & Commentary #50: Campbell

You yourself are participating in the evil, or you are not alive. Whatever you do is evil for somebody. This is one of the ironies of the whole creation.

—Joseph Campbell, The Power of Myth

I ended my last post with the dreary thought that we cannot help harming others. Almost inevitably, a system we must operate within—be it the economy, the school system, or a company we work for—will have undesirable consequences: it will exploit the disadvantaged, increase inequality, reinforce the status quo, or any of the other ugly things our social systems do.

Apart from this, it is worth considering that, even if we were operating within a relatively just and fair system, we could still unintentionally harm others. What if I take the job you had your heart set on? Or if I marry the girl you’ve always had a crush on? Or take the last spot in your ideal university? A promotion, the lottery, the last potato chip in the bag—all of these are limited resources, which you deprive others of by using.

As long as we live on a finite world with infinite wants, as long as our desires outpace our means, we will inevitably have to compete for some resources; and this competition will make us get in the way of each other’s happiness.

It is easy to get angry or depressed about this. The gazelle who has just been tackled by the lion probably thinks that life is monstrously unfair, and that the lion is being very unjust. The lion, for her part, probably thinks that it is perfectly fair that she eat this gazelle, since he was the slowest in the herd.

And I think they’re both right. From one perspective, life is horribly unfair; and from another, life is fairness itself. As long as there is limited gazelle meat in the world, there will be some competition for its use—the gazelle for its body, the lion for its food.

To be alive means participating in this struggle for resources; and in that respect, being alive means harming others, since any resources you take for yourself are unavailable for others. And if inflicting harm means doing evil, this means that, to some people, sometimes, you are evil. Being alive means participating in this basic, universal evil.

So if evil is inevitable, what does it mean to be moral?

Well, as I’ve discussed elsewhere, I think that morality is a system of behavior that allows individuals to live safely within the same community. This system consists of interpersonal rules: how you need to act towards others. For example, a safe community isn’t logically possible where theft and murder are considered permissible; thus moral rules prohibit these behaviors. These rules are enforced by the community through punishments. This way, behaviors incompatible with safe communal living are discouraged and diminished, allowing each member to live in relative peace and security.

Provided that these communal rules are not flawed (and historically they often have been), then by following them you are, by definition, a moral person. Accepting a promotion—and, by doing so, depriving a coworker of the same promotion—may be “evil” from your coworker’s perspective, but it is not strictly immoral, since granting and accepting promotions are morally allowable actions. (By “morally allowable” I mean that these actions don’t inflict any harm, other than the unavoidable harm of allocating limited resources; and that they don’t make an exception of anyone, in that they don’t violate anyone’s rights.)

Moral systems (and their offspring, the concept of rights) are how we have learned to negotiate the crisscrossing pattern of desires, the unavoidable conflicts of interest, that exist when any two creatures inhabit the same space. By having general guidelines of conduct, we have an impartial, communally approved standard of deciding what is fair or unfair, a standard that treats every member of the community equally. In a way, a moral system is a way of imposing order onto the tragedy and comedy of all creation. It is a set of rules that tells you what desires you can or can’t satisfy—where, when, and how it is appropriate to obtain what you want. Moral systems legitimate some desires and delegitimate others. 

And the beauty of moral rules is that, by curbing some desires and disallowing certain actions, they actually benefit each community member in the long run, since it is these rules that make the community possible at all. Without them, the community would disintegrate into chaos, or at the very least would need oppressive force to hold it together, both of which are undesirable situations.

The problem is that any moral system, however well-constructed, cannot make life fair. Morality makes social life fair, but not life itself. No matter what, some people will be born with certain talents, some people will be born into wealthy families, some people will be born into privilege, and others will be cursed with abusive parents or struck down by disease. Aside from the accident of birth, luck intervenes at every important junction: relationships, careers, school, friendships, everything.

The omnipresence of luck—the enemy of fairness—and the finitude of life make unhappiness unavoidable, even in a perfectly constructed utopia. Our desires will always outpace our means, and reality will always baffle our attempts to control it. We want the impossible. We want to live forever with all our friends and family, eating wonderful meals five times daily, never feeling any pain or discomfort, bedding every attractive person we see. Of course we know this can’t happen, and so feel little bitterness, usually, that life is very different.

Nevertheless, how often do we feel that life is treating us unfairly? How often do we resent those around us for taking what we want, or shake our fists at the injustice of the universe for giving other people all the luck? This feeling of injustice most often results in anger; indeed, I think anger is the ego’s defense against the feeling of impotence. When we can’t get what we want, and things aren’t going our way, we naturally grow resentful and feel that the situation is somehow wrong. It is not our desires that are wrong—to the contrary, the universe should be cooperating, since the universe created me with these desires!—but the universe that is wrong. Right?

This is the reverse-side of Campbell’s point: Not only will you be evil to somebody, but somebody will also be evil to you, even when everyone is abiding by the dictates of morality. The wise course, I think, is to try to keep the whole in perspective, to realize that what seems unjust to you may seem perfectly just to others, and vice versa.

From up close, life is tragic, since we can never get everything we want, or even a fraction; but from a distance, seen as a whole, life is also comic, because we want the impossible and don’t appreciate what we have. This double-aspect of tragedy and comedy is, indeed, one of the ironies of creation. 

Quotes & Commentary #49: Orwell

All left-wing parties in the highly industrialized countries are at bottom a sham, because they make it their business to fight against something they do not really wish to destroy. They have internationalist aims, and at the same time they struggle to keep up a standard of life with which those aims are incompatible.

—George Orwell, A Collection of Essays

Yesterday I wrote an essay trying to answer this question: What’s the right thing to do in morally compromising circumstances? This is one of the oldest and most vexing questions of human existence; and there’s no way I’m going to crack this nut in one blog post. That’s why I’m writing another one.

As George Orwell points out, this question isn’t confined to any one sphere of our lives, but confronts us every day, in manifold and invisible ways. When we go to the grocery store, when we buy a shirt, when we download a song, when we get the latest model of smartphone, we are supporting business practices that are largely hidden from us, but which may be morally repulsive.

What is life like for the factory workers who made my computer? What are the conditions for the animals whose meat I eat? Where does the material for my jeans come from, how is it processed, who are the workers who make it? For all I know, I may be patronizing exploitative, abusive, oppressive, and otherwise unethical businesses—and, the more I consider it, the more likely it seems that I do.

Unethical business practices aside, there is the simple fact of inequality. On the left we spend a lot of time criticizing the vast wealth inequality that exists within the United States; and yet we do not often stop to realize how much wealthier most of us are than people elsewhere. Is the first situation unjust, and the second not? Is it right that some countries are wealthier than others? And if not, can we logically desire our present standard of life while maintaining our political ideals?

To the extent that opponents of inequality are immersed in a global economy—and we are, all of us—they are participating in a system whose consequences they find morally wrong. But how can you rebel against a global paradigm? You can try to minimize your damage. You can try to patronize businesses with more humane practices. You can become a vegan and buy second-hand clothes.

And yet, it is simply impossible—logistically, just from lack of time and resources—to be absolutely sure of the consequences of all your actions in a system so vast and so complex. It would be a full-time job to be a perfectly conscientious consumer. You can’t personally investigate each factory or tour each farm. You can’t know everything about the company you work for, the bank you store your money in, the supermarkets you buy your food from.

This is the enigma of being immersed in an ethically compromising system. To a certain extent, resist or not, you become complicit in a social system you did not design and whose consequences you don’t approve of. It is one of the tragic but unavoidable facts of human life that good people can still do bad things, simply by being immersed in a bad social system. An economy of saints can still sin.

In economics this has a technical name: the fallacy of composition. This is the fallacy of extrapolating from the qualities of the parts to the qualities of the whole. A nation full of penny-pinchers may still be in debt. A nation full of expert job-seekers may still have high unemployment. Morally, this means a nation of good people may yet do evil.

The question, for me, is this: Where do we draw the line separating the culpability of the individual from the culpability of the system? To illustrate this, let me take two extreme examples.

Since teaching, as a profession, tends to attract idealistic and left-wing people, I think many teachers, old and young, think that the educational system in the United States is deeply flawed. The standardized tests, the inequality between school districts, the way that we evaluate kids and impart knowledge—many aspects of the system seem unfair and ineffective.

And yet, I think very few people would condemn the teachers who continue to work within this system, even if the system tends to reproduce inequality. We naturally blame the policy-makers and not the teachers, who are only doing their best in compromising circumstances.

Take the opposite extreme: soldiers working in a concentration camp. Now, it is clear that these soldiers were not personally responsible for creating the camp, and were following the orders of their superiors. Like the teachers, they are immersed in a situation they did not design, in a system with morally reprehensible results. (Obviously, the results of a concentration camp are incomparably worse than even the most flawed school system.)

In this situation, I’d wager that most of us would maintain that the soldiers had some responsibility and, at the very least, some of the blame. That is, we do not simply blame the system, but blame the individuals who took part in it. The whole situation is so totally, fundamentally, indisputably unacceptable that there are no extenuating circumstances, no deferment of guilt.

Now, there is obviously a very big difference between a system that is (ostensibly at least) designed to reduce inequality and provide education, and a system that is designed to kill people by the thousands and millions. As a result, in both of these situations, the moral verdict seems relatively clear: the noble aims of the first system excuse its flaws, while the horrid aims of the second system condemn its participants.

The problem, for most of us, is that we so often find ourselves in between these two extremes (although, admittedly, closer to the case of the teachers than the Nazi soldiers, I hope). But where exactly do we draw the line? Where does our responsibility—as participants in a system—begin? And in what circumstances are we morally excused by being immersed in a flawed system?

The more I think about it, the more I am led to the conclusion that being alive requires some ethical compromise. In this regard, I often think of something Joseph Campbell said: “You yourself are participating in the evil, or you are not alive. Whatever you do is evil for somebody. This is one of the ironies of the whole creation.”

And this quote, I think, is where I have to stop for now, since it brings me to another Quotes & Commentary.

Review: Othello

Othello by William Shakespeare

My rating: 4 of 5 stars

I had rather be a toad and live upon the vapor of a dungeon than keep a corner in the thing I love for others’ use.

This play recently reasserted itself into my life after I was taken to see it performed here in Madrid. Though I couldn’t understand very much, since it was in elaborate and quick Spanish, I still enjoyed it. (Among other things, the performance featured lots of semi-nudity, men wearing gas masks on dog leashes, and M.I.A.’s “Paper Planes.”) Inspired, I decided to watch the BBC Television Shakespeare version, with Anthony Hopkins (looking suspiciously dark) playing the titular role.

The first time I read this play, I remember being somewhat baffled. Othello was stiff and uncompelling, Desdemona sickly sweet, and Iago operated from no discernable motive to accomplish pointless ends. This time around, I think I have made a little progress.

Othello naturally associates itself in my mind with Julius Caesar. In these plays, the titular characters, both generals, are distant, cold, and simple, and come to be totally overshadowed by other characters. In Julius Caesar, Brutus takes the lead, struggling to live morally in an immoral world; in this play we have Iago, who turns heroes into villains and innocence into carnage.

Who can pay attention to Othello when Iago is on the stage? He is hypnotizing. Shakespeare seems to accomplish the impossible by making one of his own characters the author of the play. Iago directs everything: he sets the plot in motion, manipulates the players’ emotions, controls what happens when, where, how fast, to whom, for what reason, and what it means. He is playwright and stage manager, an artist whose intelligence is so cunning that he can paint upon reality itself.

The really frightening thing about Iago is that he can make you believe him, too, even though you know better. He is so utterly convincing in his lies, so keen in his psychological interpretations, so plausible in his attributions of motive and cause, that I found myself questioning whether Desdemona actually did sleep with Cassio. Nobody in the play stands a chance against such a roving and beguiling genius. Even Othello, brave, noble, commanding, is helpless in Iago’s grip.

The mysterious thing about Iago is what drives him. In the beginning of the play, he attributes his hatred for Othello to rumors about Othello sleeping with his wife. Later on, Iago says he is resentful because Cassio was made Othello’s lieutenant. And yet his plan is not just to besmirch Cassio’s reputation—the self-interested thing to do—but to corrupt and then destroy Othello’s soul—which does not benefit Iago at all, or at least not in worldly terms.

What actuates him seems not to be jealousy, nor envy, nor egotism, but pure spite: the desire for revenge irrespective of justice or self-interest. Revenge for its own sake. This is so terrifying, and yet so compelling, because spite is such an exquisitely human emotion. It is an emotion that seems to have neither practical benefit nor rational justification; and yet who has not felt the pangs of spite, the evil joy in injuring somebody who has injured you? It is spite that prompts Milton’s Satan to fight against infinite power; and it is spite that spurs Iago onward to destroy Othello, at great personal risk, for no personal benefit other than the joy in seeing Othello suffer for promoting Cassio instead of Iago.

As Harold Bloom points out, this tragedy is notable for having not even one moment of comic relief. It is unrelenting in its horror. We see innocent character after innocent character fall prey to Iago; we see Othello, a flawed but good man, descend into madness; and finally we see Desdemona, the paragon of faithful love, smothered in her bed. Desdemona’s death scene is particularly hard to watch. She does not scream for help. She does not even protest her innocence as strongly as we’d like. Instead, she begs for one day, one half-hour, one moment of life more, and is denied.

We don’t even get the satisfaction of seeing Iago pay for his crimes, or having him explain himself. “Demand me nothing. What you know, you know. From this time forth I never shall speak word.”

An interesting question is whether Othello and Desdemona’s marriage would have had a crisis even without Iago. They are a particularly ill-starred couple. Othello is a man of war, shaped by camp-life, accustomed to absolute power; he solves his problems with force; he destroys those who challenge or disobey him. Desdemona is love incarnate, faithful, kind, gentle, and totally without malice. She is attracted to Othello for his adventurous life; Othello is attracted to her admiration for him. The story of their courtship—Othello regaling her with his war-stories, and she giving him hints of her interest—makes it sound as though Othello is only attracted to his own reflection in her. This is in keeping with a man who refers to himself in the third person.

Othello’s obvious unsuitability to married life makes him an easy dupe to Iago. Desdemona’s guileless purity makes her the perfect victim. Iago’s only mistake is that he underestimated his own wife—an odd, but telling mistake to make. Is there a moral to this story? I’m not sure. But I’ll be staying away from people named Iago.

Quotes & Commentary #48: Orwell

We have become too civilised to grasp the obvious. For the truth is very simple. To survive you often have to fight, and to fight you have to dirty yourself. War is evil, and it is often the lesser evil.

—George Orwell, A Collection of Essays

What is the right thing to do in morally compromising circumstances? What should you do when, for example, you’re working for a company whose business practices you find exploitative? What if you’re working in a school system that embodies an educational philosophy you think is false or harmful?

Or consider the situation Orwell describes: What should you do when you are forced to choose between fighting in a war and capitulating to fascists?

This is a dreadful choice to make. On the one hand, fascism is ethically intolerable, and allowing fascism to conquer means allowing injustice to reign and persecution to run rampant. But to stop fascism means having to fight; and fighting means getting your hands dirty. “Getting your hands dirty” is, of course, a euphemism for all of the morally compromising actions that war entails. You will have to kill strangers, violently and indiscriminately; and in modern warfare the death of innocent civilians is inevitable, considering the weapons we use.

It is one question (which I don’t intend to address here) whether a war’s aims justify its so-called “collateral damage.” It is another whether the moral benefit of defeating an enemy outweighs the moral damage of participating in warfare. To use religious language for a moment, my question is this: Does inflicting violence for a good cause imperil your soul? Does the justice outweigh the sin?

Orwell thought the answer was yes, and he lived his principles. He fought passionately, both in word and action, against fascism, even taking up arms in the Spanish Civil War. To pick another notable example, Malcolm X also agreed that violent means were justified when used against violent tyranny. If white people were going to violently oppress black people in America, then why shouldn’t black people fight back by any means necessary? Indeed, I think most people nowadays would agree that violence is sometimes justified by the outcome. Despite all the atrocities of the Second World War, fighting against the Nazis was morally preferable to letting them win.

On the other side of this debate are people like Gandhi, Martin Luther King, and James Baldwin. The justification for pacifism is that violence corrupts both the victim and the attacker. By committing violence, even in the service of a noble cause, we degrade ourselves.

This argument sounds religious, and it often is; but you can make this same argument from a secular perspective. James Baldwin, a man totally disillusioned with Christianity, was nevertheless a pacifist, because he thought violence, injustice, and oppression corrupt their agents: the purveyors of violent oppression must create comforting myths for themselves so they don’t have to face their own immorality, and this leads to a disconnect from reality and an inauthentic life.

For my part, the risk with using unethical means for ethical ends is that it forces you to make exceptions in your moral code. You must create an inconsistency in your standards of right and wrong, and this may lead to a slippery slope. In other words, if you make a special rule to use violence against one type of person, this creates a risk that the rule can be abused.

For one, if you decide that violence is allowable against one special class of person—fascist soldiers, let’s say—this leads to the difficulty of determining whether any specific person falls into this class. If you make a mistake, you will commit violence against an innocent person. And it is clear that this rule can be abused (and certainly was during the Spanish Civil War), for example, by anyone who has a score to settle, through a false accusation or other forms of foul play.

The other risk is that, by creating one category of allowable violence, you set a damaging precedent. In the future, perhaps the category is expanded, or other categories of allowable violence are created, citing the first one for authority. In other words, you may unintentionally open the door for unscrupulous people, who wish to cloak their violence in legitimacy rather than use violence to accomplish a noble end.

I am not willing, for the moment, to assert that either Orwell or Baldwin is definitely right (although I admit I’m inclined to pacifism, if only because I’m cowardly). The “right” answer seems to depend heavily on the particular circumstances.

Thankfully, most of us will not have to decide whether to use violence against injustice. But by virtue of living in a society, we will certainly have to make many other, far less dramatic decisions about the right thing to do when given only undesirable options.

This question came to the fore during the 2016 elections, particularly among fans of Bernie Sanders. Many Bernie fans believed that both Hillary Clinton and Donald Trump were morally corrupt, and they were not content to vote for the “lesser of two evils.” Now, in the case of Clinton and Trump, it seemed clear to me that Trump was incomparably worse than Clinton, so the choice wasn’t so hard. But in a general sense this question is certainly worth considering.

When faced with two unethical options, there is always a third option: don’t choose. That is, withdraw and refuse to participate. More generally, when you find yourself in a morally compromising environment, you can either attempt to navigate the environment in the least immoral way possible, or remove yourself from the environment.

Let me be a little more concrete. Imagine you are working in a business whose practices you disapprove of. Maybe you think the business exploits its workers—paying a low salary, with few benefits, and asking employees to work long hours—or maybe the business is selling a product under false pretenses, effectively fooling its customers.

Consider the latter case. To be even more concrete, imagine that you’re a salesperson selling a product you know is poor-quality. Your salary and your job security depend directly on how many units you sell. You have no way to improve the product. To sell it requires, if not lying, at least that you omit information—that is, that you fail to mention that the product is shoddy.

Maybe your first reaction is to say that the moral thing to do is to quit. If there is no moral way to do the job, then you shouldn’t do it, right? However, if you quit, do you really improve the world? The business will hire somebody else to replace you, perhaps somebody with fewer scruples, and the moral balance sheet of the universe will be unaffected. Indeed, by quitting, you inflict harm on yourself by depriving yourself of a salary. And in that case, is quitting really the moral thing to do?

This, I think, is the problem with morally compromising systems. By refusing to participate, all you do is damage yourself while allowing others to fill the same unethical role that you resigned.

True, you do have the option, in the example above, to try to create a movement against the business, to spread the knowledge that its products are shoddy (although this may expose you to legal liability if you signed a non-disclosure agreement). Even so, when you think about it, the fundamental problem isn’t really that one business is selling a poor-quality product. The problem is that businesses can thrive by stretching the truth to sell products. (Or is the problem that consumers are not sufficiently well-informed? Where exactly does the business’s responsibility end and the consumer’s begin?)

Again, I’m unwilling, at least for now, to give a general prescription for conundrums like these. And yet the question cannot be put off. Life is one morally-compromising situation after another. How can we balance the need to look out for ourselves with the desire to harm as few people as possible?

Quotes & Commentary #47: Russell

Nothing is so exhausting as indecision, and nothing is so futile.

—Bertrand Russell

A few days ago, I wrote a post about the circumstances in which I’ve found it’s wise to distrust my emotions. Now I want to examine the occasions when I’ve found it’s wise to trust them.

There are few things more daunting, more agonizing, and more frightening for me than making important decisions. Yet life constantly confronts us with difficult choices. Where to go to school? Who to date? Who to marry? What profession to pursue? What job to accept? Where to live? To have kids? How many?

I hate making decisions like these, because it seems as if I’m gambling with my very life. Since I can’t know the future, how can I know I’m making the “right” choice? No matter how much information I collect, I can never be sure whether I have surveyed all the relevant points, nor can I ever be sure that another factor, unforeseeable but decisive, might appear in the future.

And if I could know all the important facts, even then, how could I be sure that my choice will maximize my happiness? What if my priorities change? What if something important to me now seems silly to me in ten years? How can I be certain of my preferences—whether I prefer living in the city or the country, for example—when I haven’t had experience of all the different options?

So you can see that both the relevant factors and the criteria are, to an extent, unknowable. The paradox boils down to this: I’m supposed to make a choice in the present that will bind my future self, without knowing exactly what I’m choosing or what my future self will be like. How can I do the right thing?

Thinking along these lines, it’s easy to fall into a pit of despair. It’s a gamble any way you look at it; and yet this is not money you’re dealing with, but your own life.

One way I’ve found to reduce this despair is to try to remind myself that my happiness does not depend on my external circumstances. As I know from painful experience, my mentality is far, far more important than my surroundings in determining my levels of anxiety and contentment. And the more I cultivate this ability to find joy within me rather than in external things, the less pressure there is to make the “right” choice. I no longer feel as though I’m gambling with my happiness, which reduces the significance of the decision.

Paradoxically, the less pressure you put on yourself—the less you tell yourself that your life hangs in the balance—the more likely you are to make the “right” choice, since anxiety, frustration, and fear are not conducive to clear thinking. Indeed, I think it’s wrong to apply the categories “right” and “wrong” to any choice like this. Life is wide open, and each option carries its own positives and negatives. Besides, no choice is absolutely binding. People change jobs, switch careers, get divorced and remarried, move cities, go back to university, and make a thousand other changes that their younger selves could never have predicted. All these are reasons not to agonize.

This brings me back to the role of emotion. I have found that, in making important life decisions, it is usually wiser to trust my intuition than any conscious analysis. Whether I’m visiting a potential college, going on a first date, or interviewing for a potential job, I have found that it either feels “right,” “wrong,” or somewhere in between, and that this feeling is often (though not always) more trustworthy than any of the factors I am weighing.

Let me give a concrete example. While in college, I took a class on the sociology of relationships. One day, the professor said something that has stuck with me. When looking for a partner, we usually have certain criteria we apply to potential mates, a mental checklist we are trying to tick off. Maybe you want someone who doesn’t smoke, who’s taller than you, who is within a certain age-range. These are the criteria we normally use on dating websites, for example, when judging other people’s profiles.

And yet, there is something besides these criteria, what my professor called “chemistry.” This is the way that two people actually interact: how they behave around each other, whether they make each other laugh, if they feel comfortable or uncomfortable, if they feel energetic or bored.

Chemistry is unpredictable. Somebody may satisfy your every criterion and yet bore you to death; and someone else may be totally unacceptable on paper and yet consistently make you laugh.

I think this notion of chemistry is applicable far beyond relationships. There is always an unpredictable element in your reactions. This is why we have interviews rather than hire people just on their résumés, and why we visit college campuses rather than decide from home. We need to experience something for ourselves, to confront it in our own experience, to see how we will react.

This leads to the question: What should you do when your instinctive reaction is out of harmony with your consciously chosen criteria? What if you instinctively like something that is mediocre on paper? Or if you instinctively dislike something that is great on paper?

I can only answer for myself. With decisions, I have learned to trust my gut reaction and to distrust my consciously chosen checklist. With very few exceptions, this strategy has proven satisfactory to me.

If life has taught me anything so far, it is that I am very bad at consciously predicting what I will like. From the university I attended, to the subject I studied, to the people I’ve dated, to the jobs I’ve taken—the most pleasant experiences, and the most satisfying choices, have inevitably been the result of unexpected gut feelings. Likewise, the periods in my life I have felt the worst, the choices I have most regretted, were times when I was trying to carry out some consciously-devised plan.

This leads me to another question: What is intuition? What is this part of my brain, unconscious and inaccessible, that is more trustworthy than my conscious thoughts? This is really a question for psychologists, I suppose, and I feel presumptuous answering it.

I will only say that, judging from my own experience, we are subconsciously aware of far more things than we can consciously take note of. Small details in our environment, little social cues and tics of personality, a thousand details too fine and too subtle to be intentionally investigated—all this is taken in by our brains, automatically and without effort.

Now, I am not a believer in the mystical subconscious, and I do not follow either Freud or Jung. Nevertheless, it seems one of the basic facts of my life that my brain performs far more operations than I am consciously aware of. There is no contradiction or mystery in this. Insects scan their environments with great efficiency without the need of consciousness at all (or at least, I don’t think insects are conscious). And in any case, to effectively comport myself in a physical environment, coordinating my limbs with my senses, keeping myself clear of sudden threats, I need to process many more facts than my poor conscious mind is able to.

(I hope to write more about this in the future, but for now it’s only important that I think we do the majority of our most vital cognitive labor without being consciously aware of it.)

Considering all this, it seems eminently wise to trust my intuition. With regard to dating, for example, I believe my unconscious brain is a far more reliable judge of character than my conscious self. While I am fiddling around with psychological guessing games and simplistic theories, my unconscious brain, honed by thousands of years of social evolution, is producing a sophisticated analysis of the person I’m with, and giving this information to my conscious brain in the form of intuition and feeling.

I’ve gotten this far, and yet I still haven’t delineated the situations in which our intuition should be trusted, and in which it shouldn’t. The short answer is that everyone must figure this out for themselves. Only experience has shown me when following my intuition gets me into trouble, and when it has guided me well.

More generally, however, I think that, when making decisions regarding one’s own happiness, it is necessary to consult your intuition. But when making decisions of wider consequence, it is reckless to rely on intuition alone. Your intuition may let you know what will please you, but not what will please others. In other words, your intuition provides information about what you want, which is a fact about yourself. It does not, and cannot, provide reliable information about the world. This, I think, is a vital distinction to keep in mind.

Quotes & Commentary #46: Wittgenstein

If language is to be a means of communication there must be agreement not only in definitions but also (queer as this may sound) in judgments.

—Ludwig Wittgenstein, Philosophical Investigations

I often think about the relationship between the public and the private. As a naturally introverted person, I feel very keenly the separation of my own experience from the rest of reality. I make music, take pictures, and write this blog as a way of communicating this inner reality—of manifesting my private world in a publicly consumable form.

Having an ‘inner world’ is one of the basic facts of life. Each of us is aware that there is a part of us—the most vital and most mysterious part, perhaps—that is inaccessible to others; we can keep secrets, we can make judgments without anyone else noticing, we can have private pleasures and pains. All of our experience takes place in this space; the only world we ever see, hear, or touch is in our heads.

And yet we are also aware that this reality is, in a sense, insubstantial and ultimately secondary. Our inner world exists in reference to the outer world, the world of objective facts, the world that is publically known. My senses are not just mental facts, but point outward; my thoughts, actions, and desires are oriented towards a world that does not exist in me. Rather, I exist in it, and my experience is just one interpretation of this world, and one vantage point from which to view it.

How are these two worlds related? How do they interact? Is one more important? What is the relationship of our private minds to our public bodies? These are classic philosophical conundrums, mysterious still after all these millennia.

Historical philosophers aside, most of us, in our more reflective moments, become acutely aware of the division between subjective and objective. When you are, for example, searching for a word—when a word is on the tip of your tongue—you feel as though you are rummaging through your own mind. The word is in you somewhere, and nobody but you can find it.

From this, and other experiences like it, we get the feeling that speaking (and by extension, writing) consists of taking something internal and externalizing it. Language is, in this view, an expression of thought; and words take their significance from cogitations. That is to say, our private mental world is the wellspring of significance; our minds imbue our language with meaning. The word “pizza,” for example, means pizza because I am thinking of pizza when I say it.

And yet, as Wittgenstein tried to show in his later philosophy, this is not how language really works. To the contrary, words are defined by their social use: what they accomplish in social situations. In other words, language is public. The meaning of words is determined, not by referring to any inner thought, nor by referring to any objective facts, but by convention, in a community of speakers. (I don’t have the space here to recapitulate his arguments; but you can see my review of his book here.) The word “pizza” means pizza because you can use it to order in a restaurant.

This may seem to be a merely academic matter; but when you begin to think of meaning as determined socially rather than psychologically, then you realize that your cognitive apparatus is not nearly as private as you are wont to believe. In order to communicate thought, you must transform it into something socially consumable: language. All of our vague notions must be put into boxes, whose dimensions are determined by the community, not by us.

But the social does not only intrude when we try to communicate with others; we also understand ourselves through these same social concepts. That is to say, insofar as we think in words, and we understand our own personalities through language, we are subjecting our deepest selves to public categories; even in our most private moments, we are seeing ourselves in the light of the community. We are social beings to our very core.

This does not only extend to the definitions of words. As Wittgenstein points out, to use language effectively, we must also judge like the community.

Any word, however well-defined, is ambiguous in its application. To apply the word “car” to a vehicle, for example, requires not only that I know the definition—whatever that may be—but that I learn how to differentiate between a car, a truck, a van, and an SUV. Every member of a community is involved in educating one another’s judgment, and keeping their opinions in tune. If I call an SUV a “car,” or a pickup truck a “van,” any fellow speakers will correct me, and in this way they will educate me to judge like a member of the community.

As I learn Spanish, I have firsthand experience of this. To pick a trivial example, the English word “sausage” is broader than any corresponding Spanish word. Here in Spain they differentiate between salchicha and salchichón, a difference that my American mind has a hard time understanding. Although Spaniards have tried to define this difference to me, I have found that the only way for me to learn it is by being corrected every time I apply the wrong word.

More significantly, in order to conjugate properly in Spanish, I must not only learn how to change the ending and so forth, but I must learn when it is appropriate to use each tense. To pick the most troubling example, in English we have only the simple past, whereas in Spanish there is both the imperfecto and the indefinido. I constantly use the wrong form, not because I don’t know their technical usage (it has been explained to me countless times, using various metaphors and examples, and I can recite this technical definition from memory), but because my judgment is out of alignment.

Whether an action is continuous, periodic, completed, ongoing, or occasional—this is not as self-apparent as every native speaker likes to assume, but requires a good deal of interpretation. My judgment has not yet been properly educated by the community, and so, despite knowing the technical usage of these two forms, I still misuse them.

In a way, this aspect of language learning is somewhat chilling. In order to speak effectively, not only must I use communal vessels to contain my thoughts, but I must learn to judge along the same lines as other members of the community—to interpret, analyze, and distinguish like them. What is left of our private selves when we subtract everything shaped and put there by the community? Am I a self-existent person, or just a reflection of my social milieu?

Yet I do not think that all this is something to dread. Having communally defined categories, and a communally shaped judgment, gives permanence and exactitude to communication. Left on our own, thinking without symbols, communicating with no one but ourselves, we have nothing to grant stability to our reflections; they constantly slip through our fingers, an ever-changing flux tied to nothing. With no fixed points, our judgment flounders in a torrent of ideas, thrashing ineffectually.

When we learn a language, and learn to use it well, we learn how to pour the ambiguous stuff of thought into stable vessels, how to cast the molten metal of our mental life into solid forms. This way, not only can we understand the world better, but we can learn to understand ourselves better. This, I think, is the very purpose of culture itself: to partition reality into sections, to impose structure on ambiguous reality.

Let me give you a common example.

A relationship is a naturally ambiguous thing. The affection and commitment that two people feel for one another exist on a spectrum. And often we do not really know how committed we are to somebody until we examine the relationship in retrospect. And yet, relationships must be defined, and defined early on, for the sake of the community.

Every culture on earth has rituals and categories associated with courtship, for the simple reason that somebody’s relationship status is a big part of their social identity. Ambiguities in social identity are not tolerated, because they impede normal social life; to deal with somebody effectively, you need them to have a recognizable social status, a status that tells you what to expect from them, what you can ask of them, and a million other things.

In modern culture, as we push marriage ever further into the distance, we have developed the need for new relationship categories. Now we are “dating,” and then “in a relationship.” The status of “boyfriend” or “girlfriend” is now socially understood and approved as one level of commitment.

The interesting thing, to me, is that the decision to be in a relationship, to become boyfriend and girlfriend (or whatever the case may be), seems like a private decision, affecting only two people. And yet, it is really a decision for the benefit of the community. To be in a relationship defines where you stand in relation to everyone else: whether it is appropriate to flirt with you, to ask you out, to dance with you, to ask about your significant other, and so forth.

Now, this is not to say that the decision is solely for the benefit of the community. It also benefits you and your partner, because you are also part of the community. It puts a publicly understood category, indicating a certain level of commitment, on your naturally ambiguous and shifting feelings. In other words, by applying a public category to a private feeling, you are, in effect, imposing a certain level of stability on the feeling.

Look what happens next. This level of commitment, being publicly labeled, is also bolstered. Friends, family, and coworkers treat you differently. You are now in a different category. And this response of the community helps to form and reinforce your private feelings of commitment. Relationships are never wholly private affairs between two people. It takes a village to make a couple.

Again, I am not suggesting that this is a bad thing. To the contrary, I think that having communal definitions is what allows us to understand our own selves at all. This is also why I write these quotes and commentary. By forcing myself to take my ambiguous thoughts and put them into words, into public vessels, not only do I communicate with others, but I find out what I myself think.

Quotes & Commentary #45: Montaigne

Quotes & Commentary #45: Montaigne

No pleasure has any savor for me without communication. Not even a merry thought comes to my mind without my being vexed at having produced it alone without anyone to offer it to.

—Michel de Montaigne, Essays

A few months ago I started an Instagram account, and since then something funny has happened.

Whenever I go out—whether it’s on a walk, a trip to a new city, or sometimes even when I’m alone in my room—a part of my brain keeps on the lookout for a good photo. The shimmering reflections in a pond, the crisscrossing angles and vanishing perspective of a cityscape, or any chance concatenation of color and light: I notice these, and do my best to photograph them when they appear.

Now it has even gotten to the point that, whenever I am taking a photo, I mentally plan how I will digitally edit it, compensating for its defects and accentuating its merits: sharpening some lines and blurring others, turning up the contrast and saturation.

I am not mentioning all this to call attention to my photography skills—which are purely amateurish in any case—but to a pervasive aspect of modern life: the public invasion of privacy.

I have learned to see the world with an imaginary audience by my side, a crowd of silent spectators that follows me around. When I see something beautiful, I do not only savor its beauty, but think to myself: “Wow, there are lots of other people who would find this beautiful, too! I better record it.”

This requires a significant cognitive shift. It means that, aside from my own tastes, I have internalized the tastes of others; or at least I have learned where my tastes and those of others tend to overlap. The consequences of this shift are equally significant. A beautiful dawn, the way a certain window catches the sun’s rays, the flowers in bloom on my walk to work—these are no longer silent joys, but public events.

(Incidentally, as I learn to see the world with the eyes of others, something else is taking place: I am learning to see the world through technology. By now I know when something is too far away to capture, or when the lighting will spoil the photo; and I know which defects I can edit and which ones I can’t. This means I have internalized, not only public tastes, but also technological limitations. My seeing is becoming ever more mediated.)

It is customary to bemoan this development. I have done so myself, many times. Just today, as I walked through Retiro Park, I found myself thinking evil thoughts about all the people taking selfies. “Just stop posing and enjoy the beautiful day!” I thought, and began ruminating on the decline of modern culture.

And indeed, I do think it’s unhealthy, or at the very least in poor taste, to spend all your time on vacation taking photos of yourself and your friends, photos to be uploaded immediately for the benefit of all your other friends. Like anything else, photography can be taken too far.

Nevertheless, I think it is a mistake to see this phenomenon—the public invasion of privacy—as fundamentally new. As Montaigne exemplifies, in a time long before Facebook, nearly everybody has the urge to share their pleasures with others. Social media is just a continuation of this. Before the internet, all of us publicized our private joys the old-fashioned way: telling our friends and family. Adding pictures to Instagram and Snapchat is just an extension of the ancient art of telling anecdotes.

When I was on my trip to Rome, traveling alone, I visited St. Peter’s in the Vatican. It was such an impressive building that I wanted to go “Ooh” and “Aah,” to gush, to blubber in admiration, but I had no one to do this with. Alone, I had to keep my pleasures to myself; and far from being a neutral fact, I think this actually diminished my enjoyment of the experience. Unshared pleasures aren’t quite as sweet.

Why is this? A cynic would say that we share our pleasures as a form of bragging. “Look at the cool thing I’m doing! Bet your life isn’t as good as mine!”—this is what your vacation selfies say to your friends (and rivals) online. I do not deny that the cynic is partially right; bragging is an unavoidable part of social life. Who doesn’t like being envied?

This is the uglier side of the issue; and I think the bragging motivation—never openly said, but operative—is what drives people to take sharing too far. When people see themselves purely in the light of other people, in a giant popularity contest, then they fall prey to a cycle of envy and bragging. Naturally, everybody does their best to be envied; and since only public joys can be envied, somebody in thrall to this mentality will neglect all purely private forms of pleasure.

Yet I do not think the cynic is totally right. We humans are a social species. Even the most introverted among us likes to spend time with others. And an extension of our urge to socialize is an urge to share our private selves.

Partially, we do this for validation: to have our judgments and perspective confirmed, to feel more ‘normal’ and less ‘strange’. If I think something is funny, it is a relief to know that others find it funny, too. Maybe this is a sign of weakness, a lack of self-confidence; but I think even the most confident among us feels a relief and joy when somebody confirms their own judgment.

Apart from validation, however, I think there is a simple joy in sharing. It feels good to make someone else smile or laugh; it even feels good to share a negative emotion. The feeling of being alone is one of the most painful feelings we experience, and yet loneliness so often creeps up on us. The feeling of connection—breaking through your own perspective and reaching another’s—is a natural joy, as inherently enjoyable as ice cream. Montaigne thought so; and this is also why, despite some misgivings, I enjoy using Instagram.

This is also why I write a blog rather than keep my scribbling to myself. Indeed, I find that, whenever I try to write something purely for myself, I can hardly write at all. The sentences come out mangled, the thoughts are confused, and the entire thing is a mess. To do my best writing, I’ve found that I need a public (or at least a theoretical one). The knowledge that someone else might read my writing keeps me focused; it also makes writing far more fun.

Without a reader, writing feels entirely without consequence to me, and is consequently joyless. Montaigne apparently felt the same way, which is why, despite leading a fairly reclusive life in his castle tower, he published several editions of his essays.

It is vanity to seek fame. But is it vanity to wish to share joys and to connect with others, and to be understood by as many people as possible?

 

Quotes & Commentary #44: Montaigne

Quotes & Commentary #44: Montaigne

Who does not see that I have taken a road along which I shall go, without stopping and without effort, as long as there is ink and paper in the world?

—Michel de Montaigne

One thing above all attracts me to Montaigne: we both have an addiction to writing.

It is a rather ugly addiction. I personally find those who love the sound of their own voice nearly intolerable—and unfortunately I fall into this category, too—but to be addicted to writing is far, far worse: Not only do I love airing my opinions in conversation, but I think my views are so valuable that they should be shared with the world and preserved for future generations.

Why do I write so much? Why do I so enjoy running my fingers over a keyboard and seeing letters materialize on the screen? What mad impulse keeps me going at it, day after day, without goal and without end? And why do I think it’s a day wasted if I don’t have time to do my scribbling?

In his essay “Why I Write,” George Orwell famously answered these questions for himself. His first reason was “sheer egoism,” and this certainly applies to me, although I would define it a little differently. Orwell characterizes the egoism of writers as the desire “to seem clever, to be talked about, to be remembered after death,” and in general to live one’s own life rather than to live in the service of others.

I would call this motivation “vanity” rather than “egoism.” Vanity is undeniably one of my motivations to write—especially the desire to seem clever, one of my uglier qualities. But this vanity is rather superficial; there is a deeper egoism at work.

Ever since I can remember, I have had the awareness, at times keen and painful, that the world of my senses, the world that I share with everyone else, is separate and distinct from the world in my head—my feelings, imagination, thoughts, my dreams and fantasies. The two worlds are intimately related, and communicate constantly, but there is still an insuperable barrier cutting one off from the other.

The problem with this was that my internal world was often far more interesting and beautiful to me than the world outside. Everyone around me seemed totally absorbed in things that were, to me, boring and insipid; and I was expected to show interest in these things too, which was frustrating. If only I could express the world in my head, I thought, and bring my internal world into the external world, then people would realize that the things they busy themselves with are silly and would occupy their time with the same things that fascinated me.

But how to externalize my internal world? This is a constant problem. Some of my sweetest childhood memories are of playing all by myself, with sticks, rocks, or action figures, in my room or my backyard, in a landscape of my own imagination. While alone, I could endow my senses with the power of my inner thoughts, and externalize my inner world for myself.

Yet to communicate my inner world to others, I needed to express it somehow. This led to my first artistic habit: drawing. I used to draw with the same avidity as I write now, filling up pages and pages with my sketches. I advanced from drawings of whales and dinosaurs, to medieval arms and armor, to modern weaponry. Eventually this gave way to another passion: video games.

Now, obviously, video games are not a means of self-expression; but I found them addicting nonetheless, and played them with a seriousness and dedication that alarms me in retrospect—so many hours gone!—because they were an escape. When you play a video game you enter another world, in many ways a more exciting and interesting world, a world of someone’s imagination. And you are allowed to contribute to this dream world—in part, at least—and adopt a new identity, in a world that abides by different rules.

Clearly, escapism and self-expression, even if they spring from the same motive, are incompatible; in the first you abandon your identity, and in the second you share it. For this reason, I couldn’t be satisfied for long with gaming. In high school I began to learn guitar, to sing, and eventually to write my own songs. This satisfied me for a while; and to a certain extent it still does.

But music, for me, is primarily a conduit of emotion; and as I am not a primarily emotional person, I’ve always felt, even after writing my own songs, that the part of myself I wanted to express, the internal world I still wanted to externalize, was still getting mostly left behind. It was this that led me to my present addiction: writing.

I should pause here and note that I’m aware how egotistical and clichéd this narrative seems. My internal world is almost entirely a reflection of the world around me—far, far less interesting than the world itself—and my brain, I’m sorry to say, is mostly full of inanities. I am in every way a product—a specifically male, middle-class, suburban product—of my time and place; and even my narrative about trying to express myself is itself a product of my environment. My feeling of being original is unoriginal. My life story is a stereotype.

I know all of this very well, and yet I cannot shake this persistent feeling that I have something I need to share with the world. More than share, I want to shape the world, to mold it, to make it more in accordance with myself. And my writing is how I do that. This is egoism in its purest form: the desire to remake the world in my image.

A blank page is a parallel world, and I am its God. I control what happens and when, how it looks, what its values are, how it ends, and everything else. This feeling of absolute control and complete self-expression is what is so intoxicating about writing, at least for me. Once you get a taste of it, you can’t stop. Montaigne couldn’t, at least: he kept on editing, polishing, revising, and expanding his essays until his death. And I suspect I’ll do the same, in my pitiful way, pottering about with nouns and verbs, eventually running out of new things to write about and so endlessly rehashing old ones, until I finally succumb to the tooth of time.

After mentioning the egoism of writers, Orwell goes on to mention three other motivations: aesthetic enthusiasm, historical impulse, and political purpose. But I think he leaves two things out: self-discovery and thinking.

Our thoughts are fugitive and vague, like shadows flickering on the wall, forever in motion, impossible to get hold of. And even when we do seem to come upon a complete, whole, well-formed thought, as often as not it pops like a soap bubble as soon as we stretch out our fingers to touch it. Whenever I try to think something over silently, without recording my thoughts, I almost inevitably find myself grasping at clouds. Instead of reaching a conclusion, I get swept off track, blown into strange waters, unable to remember even where I started.

Writing is how I take the fleeting vapors of my thoughts and solidify them into definite form. Unless I write down what I’m thinking, I can’t even be sure what I think. This is why I write these quotes and commentary; so far it has been a journey of self-investigation, probing myself to find out my opinions.

When I commit to write, it keeps me on a certain track. Unless you are like Montaigne and write wherever your thoughts take you, writing inevitably means sticking to a handful of subjects and proceeding in an orderly way from one to the other. Since I am recording my progress, and since I am committed to reaching the conclusion, this counteracts my tendency to get distracted or to go off topic, as I do when I think silently.

This essay is a case in point. Although these are things I have often talked and thought about, I had never fully articulated to myself the reasons why I write, or strung all my obsessions into a narrative with a unified motivation, as I did above, until I decided to write about them. No wonder I’m addicted.