It is a great fault of symbolic pseudo-mathematical methods of formalizing a system of economic analysis . . . that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed.
—John Maynard Keynes
I ended my last commentary by swearing to leave off thinking about the coronavirus. Alas, I am weak. The situation is bleak and depressing; it has affected nearly every aspect of my life, from my free time to my work, my exercise routine and my relationships; but it is also, if one can be excused for saying so, quite morbidly absorbing.
What especially occupies me is how those in charge will weigh the costs and benefits of their policies. Because the threat posed by coronavirus is so novel, because these decisions involve human life, and because it is difficult not to feel afraid, I think there is a certain moral repugnance that many feel toward this kind of thinking. However, as I argued in my previous post, I think truly moral action will require a thorough appraisal of all of the many potential consequences of action and inaction. This will make any choice that much more difficult, and I do not envy those who will have to make it.
As anyone familiar with the famous trolley problem knows, moral dilemmas often involve numbers. If the actor must choose between a lower and a higher number of victims, it seems obvious that one should choose the lower. However, there are several refinements of the problem which show the limitations of our moral intuition. For example, respondents are willing to divert a runaway trolley onto a track where it will kill one person rather than five; but respondents are unwilling to push an enormously fat bystander onto the tracks to save five people. We seem to be willing to think in purely numerical terms only about those involved ‘in the situation,’ and unwilling to do so with those we perceive as ‘outside the situation.’
Well, in the case we are facing, virtually everyone is ‘inside the situation’; so this leads us to a numerical treatment. But of course it is not so simple. What should we be measuring and comparing, exactly? I raised the question in my last post of this calculus of harm, and how it seems impossible to compare different types and levels of harm. As hospitals get overwhelmed, however, and care begins to be rationed, doctors are forced to make difficult choices along these lines, giving treatment to the patients with the highest chances of recovery. Politicians are now faced with a kind of society-level triage.
One obvious basis of comparison is the number of lives lost. This is how we think of the trolley problem. But I think there is a case for also considering the number of life-years lost. What is ethically preferable: allowing the death of one person, or allowing the lifespans of 10 people to be reduced by 20 years each? I cannot answer this question, but I do think that the answer is not easy or self-evident. Reducing somebody’s lifespan may not be ethically on a par with letting someone die, but it is still quite a heavy consideration.
Further down the line, ethically speaking, is quality of life. Though it seems egregious to weigh death against quality of life, in practice we do it all the time. Smoking, drinking, and driving carry risks, and a certain number of people will die each year from engaging in these activities; but we accept the cost because, as a society, we have apparently decided that it is “worth it” in terms of our quality of life. Of course, this comparison is not exactly appropriate for the case of coronavirus, since we ourselves make the decision to smoke or drive, whereas the risk of coronavirus is not voluntary. Thus, since we cannot control our exposure to this risk, we should be willing to accept a greater loss in quality of life in order to save lives.
How exactly we choose to weigh or balance these three levels of damage—lives lost, lives shortened, and lives made worse—is not something I am prepared to put into numbers. (I suppose some economist is already doing so.) But I think we are obligated to try to at least take all of them into account.
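To be clear about what such an accounting would even look like, here is a purely illustrative sketch of the kind of weighted comparison an economist might attempt. Every function name and weight in it is my own invention; the point is the structure of the comparison, not the numbers.

```python
# A purely illustrative "calculus of harm"; every weight is invented.
# The structure of the comparison, not the numbers, is the point.
def harm_score(deaths, life_years_lost, impaired_life_years,
               w_death=1.0, w_year=0.05, w_impaired=0.01):
    """Collapse three kinds of damage into one (contestable) number."""
    return (w_death * deaths
            + w_year * life_years_lost
            + w_impaired * impaired_life_years)

# The dilemma from above: one death, versus ten lives shortened
# by twenty years each (200 life-years lost in total).
print(harm_score(deaths=1, life_years_lost=0, impaired_life_years=0))    # 1.0
print(harm_score(deaths=0, life_years_lost=200, impaired_life_years=0))  # 10.0
```

With these invented weights, the shortened lives weigh ten times heavier than the single death; with slightly different weights, the verdict flips. That sensitivity is precisely the difficulty.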
Now, the other set of variables we must consider is empirical. On the medical side, these are the lethality of the virus and the percentage of the population likely to get infected. On the economic side there are obvious factors like unemployment and loss in GDP; there are also factors such as loss in standard of living, homelessness, and the poverty rate; and there are variables still more difficult to calculate, like the rates of suicide and drug addiction likely to result.
One major problem is that we know all of these variables imperfectly, and in some cases very imperfectly. To take an obvious datum, there is the virus’s lethality rate. From the available numbers, the fatality rate in Italy appears to be as high as 8%, while in Germany it is as low as 0.5%. This huge range contains a great deal of uncertainty. On the one hand, there is a good case that Germany gives a more accurate picture of the virus’s lethality, since the country has done the most testing, about 120,000 tests a week; and logically, more testing gives a more accurate result. On the other hand, we should remember that the virus’s lethality is not a single, static number. It affects different demographics differently, and it also depends on the availability of treatment. All of these factors need to be taken into account to establish the virus’s risk.
Complicating the uncertainty is the fact that the virus can create mild or even no symptoms, thus leaving open the question of the total number of cases—a number that must be known to determine the lethality rate. Asked to offer an estimate of the total number of infected people in Spain (the registered number is about 45,000 as of now), mathematicians offered estimates ranging from 150,000 to 900,000—and, of course, these are little more than educated guesses. If the former figure is correct, it would put the lethality at around 2%, while if the latter is correct the lethality is about 0.4%: another big range.
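To make the arithmetic explicit: the lethality rate is just deaths divided by total infections, so the same death toll yields wildly different rates depending on the assumed denominator. In the sketch below, the death count of roughly 3,400 is my own assumption, chosen because it is consistent with the 2% and 0.4% figures above; it is not an official statistic.

```python
# Case fatality rate = deaths / total infections.
# The death toll (~3,400) is an assumed figure, chosen to be
# consistent with the 2% and 0.4% estimates in the text.
deaths = 3_400

scenarios = {
    "registered cases only": 45_000,
    "low estimate": 150_000,
    "high estimate": 900_000,
}

for label, infections in scenarios.items():
    rate = deaths / infections
    print(f"{label:>22}: {infections:>7,} infections -> {rate:.1%} lethality")
```

The same numerator, divided by plausible denominators, spans nearly an order of magnitude; that is the whole problem in miniature.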
Now that Spain is receiving a massive shipment of tests from China, our picture of the virus will likely become much more accurate in the coming days and weeks. (Actually, many of these tests are apparently worthless, so never mind.*) However, one crucial datum is still missing from our knowledge: the total number of people who have already had the virus. To ascertain this, we will need to test for antibodies. It appears we will begin to have information on this front soon as well, since the UK has purchased a great many at-home antibody tests, and I believe other countries are following suit. Not only is this data crucial to accurately estimating the virus’s threat, but it is also of practical value, since those with antibodies will be in far less danger either of catching or of spreading the disease. (In the movie Contagion, those with antibodies are given little bracelets and allowed to travel freely.)
The New York Times has created an interesting tool for roughly estimating the potential toll of the virus. By adjusting the infection and fatality rates, we can examine the likely death toll. Of course, these rough calculations are limited in that they make the mistake Keynes highlights above: they assume the independence of variables. For example, the calculator shows how the coronavirus would match up with expected cancer and heart disease deaths. But of course more coronavirus deaths would likely mean fewer deaths from other causes, since many who die of the coronavirus would otherwise have died of something else. (Other causes of death, like traffic accidents, may also go down because of the lockdown.) The proper way to make a final estimate, I believe, would be to see how many total deaths we have had in a year, and then compare that total with what we would reasonably have expected without the coronavirus (in other words, to measure excess mortality).
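For what it is worth, the kind of back-of-the-envelope calculation such a tool performs is easy to reproduce, and reproducing it makes Keynes’s objection vivid. The sketch below treats the infection rate, the fatality rate, and deaths from other causes as independent; every parameter value is invented for illustration.

```python
# A naive toll estimate that assumes independent variables,
# exactly the simplification Keynes warns against.
# All parameter values are invented for illustration.
population = 47_000_000       # roughly Spain's population
infection_rate = 0.30         # assumed share of population infected
fatality_rate = 0.005         # assumed infection-fatality rate

naive_toll = population * infection_rate * fatality_rate
print(f"Naive estimated deaths: {naive_toll:,.0f}")

# The naive figure overstates excess deaths, because some victims
# would have died of other causes in the same period anyway.
overlap = 0.25                # assumed overlap, pure guesswork
excess = naive_toll * (1 - overlap)
print(f"Rough excess-mortality estimate: {excess:,.0f}")
```

The point is not the output but the fragility: every line multiplies in an assumption, and none of the factors is truly independent of the others.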
As you can see, the problem of coming up with a grand calculation is difficult in the extreme. Even if we can ultimately ascertain all of the information we need—medical, economic, sociological—we will still have only an imperfect grasp of the situation. Indeed, Keynes’s warning is quite pertinent here, since every factor will be influencing every other. Unemployment affects access to health care, an overwhelmed health care system will be less effective across the board, and the fear of the virus alone has economic consequences. This makes the ‘trolley problem’ model misleading, since there are no entirely independent tracks that the trolley can be moving on. Any decision will affect virtually everyone in many different ways; and this makes the arithmetical approach limited.
Trump has said that the cure cannot be worse than the disease. Obviously, however, the decision is not a simple choice between economic and bodily well-being. This is what makes the decision so very subtle and complicated. Not only must we weigh different sorts of damage in our ethical scales, but we must also be able to think synthetically about the whole society—the many ways in which its health and wealth are bound up together—in order to act appropriately.
Once again, I do not envy those who will have to make these choices.
The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.
—John Stuart Mill
Like so many people on this fragile green globe, I have been thinking about this novel crisis. I find myself quite constantly frustrated, largely because we suddenly find ourselves in the position of reacting to a threat of unknown potency using uncertain means. I argued in my last post that our shift from total indifference to all-consuming concern cannot constitute a rational response. In this post I hope to explore what a rational, and ethical, response must comprise.
Governments around the world are in the tragic position of having to choose between saving a relatively small (but ever-growing) portion of the population from severe damage, and inflicting less acute damage on a much larger portion of the population.
In making decisions like this, I believe the only ethical principle we can turn to is the principle of utility. However, I do not think John Stuart Mill’s strong version—that our duty is to promote happiness as much as possible—is feasible, at least not for individuals. My main criticism of such a formulation is that it would impose essentially unlimited duties on a person. Can a thoroughgoing utilitarian really enjoy a quiet tennis match when she could be, say, working in a soup kitchen? I do not think we can embrace an ethic that would demand a population of saints; and, besides, such an ethic would be self-defeating if actually put into practice, since each person trying to create the most happiness would, themselves, potentially be miserable.
Thus, when it comes to individual behavior, I think that a negative version of utilitarianism applies: that we ought to try to refrain from activities that cause pain, harm, or unhappiness. Playing a tennis match, then, is alright; but breaking someone’s arm is not. This, I believe, is the standard that should be applied to individuals.
A government, however, is different; and different standards apply. A government has the obligation not only to avoid doing harm, but to actively reduce misery as much as possible. Unlike an individual—who could not live with an unlimited obligation to reduce unhappiness—a government, as an institution, does not have to balance its own happiness against others’, and so has a greater ethical obligation.
With this principle in view, the requirement for an ethical action on the government’s part would be to reduce suffering as much as possible. Of course, creating an exact calculus of harm is difficult at best. How can you compare, say, one death and a hundred million headaches? Yet since we must act, and since we ought to act as ethically as possible, we have little choice but to make do with a certain amount of imprecision.
Another source of imprecision is, of course, that we cannot know the future, and we know even the present only imperfectly. Because of this ignorance, every act can have unforeseen consequences. This is why we cannot evaluate an action on the ends it achieves alone, but must consider what could be reasonably known about the probable consequences of an action at the time it was taken. This means we have to have a certain ethical lenience for actions taken at a low state of knowledge, especially if the best available knowledge was consulted at the time.
Aside from the test of morality, there is a related test of rationality. To be rational means to be consistent: the same standard is applied to all of our actions, and there are no special categories. An irrational ethic is necessarily an imperfect ethic, since some of its judgments will be more lenient than others for no principled reason. Many of our society’s injustices are cases of irrational ethics: holding people to different standards, giving out different rewards or punishments for the same actions, and so on.
I am trying to define what a rational, ethical response means, exactly, because I think very soon we will have to make more nuanced decisions about this crisis. So far the primary approach has been to institute lockdowns, with the idea of slowing the virus’s spread. Even though I think there is a strong argument that Western governments were culpably unprepared, locking things down now may be a rational response given the potential threat of the virus. But if we find that even thirty or forty days of confinement is not enough to put the virus on the defensive, then we will have to begin to weigh the social costs more carefully. Just as the real effectiveness of a lockdown remains to be seen, so too does its social price remain undetermined.
At the moment we are just coming to grips with the virus, and we are belatedly adopting a “better safe than sorry” policy. We are tracking the rising death tolls of the virus and focusing our attention quite exclusively on this crisis. This may be rational, considering the novelty of the virus and the currently unknown threat that it may pose. But as the crisis wears on, we will be forced to consider other factors. There is, after all, no guarantee that forty, fifty, or even sixty days of lockdown will make it safe for people to return to their daily lives. Michael Osterholm (an expert on infectious diseases) expects that the virus will be around until there is a vaccine, and that a vaccine will take 18 months at minimum.
Now, perhaps extraordinary measures and unlimited resources can reduce the time until we arrive at an effective vaccine. That is unknown. My worry is that a lockdown will begin to have very real negative effects on a huge number of people; and this will almost certainly happen before a vaccine is available. We in Madrid are one week into our lockdown so far. At the moment, the streets are mostly empty, and they are constantly patrolled by police who can issue enormous fines for breaking curfew. Today I heard a loud and violent argument down the street, as one person harassed another for doing something outside (they were out of view). I can occasionally hear neighbors quarrelling through my window.
Arguments and tantrums are the least of our worries. A protracted lockdown will exacerbate mental health problems (some of which are quite serious) and put pressure on marriages, as the spike in Chinese divorces shows. In Spain, open-air exercise, like going for a walk or a run, is forbidden. (Meanwhile, tobacco stores remain open, even though smoking is a known risk factor for the disease.) How will this ban affect children if it is protracted? And how long are we prepared to keep children out of school? Not only will they be learning less, but social interaction is crucial to childhood development. (Osterholm is skeptical of the school closures, since he thinks there is little evidence that children are significant vectors of the disease, and since many health personnel with children might be forced to stay home. The demographic data from Spain—which doubtless overestimates the fatality rate across the board, since testing has been limited—seems to bear him out.)
In Spain, the government has, as yet, not waived rental payments. What will people who live paycheck to paycheck, and who have been laid off, do on April 1st, when rent is due? Even if we are released on April 12th, how many people will be completely out of work by then? How many risk losing their jobs and their homes? Even if and when the coronavirus disappears completely, it may take a damaged economy years to return to normal. How will people get by if thousands of businesses go bankrupt and unemployment remains high for the long term?
Economic damage may sound abstract compared with a health crisis, but it translates into a reduced—sometimes drastically reduced—quality of life for millions. On the scale of human suffering, poverty is not negligible. And if such considerations still seem petty, we must also consider something that Nicholas Kristof (among others) has written about extensively: the rise of “deaths of despair” among America’s working class. These deaths result from drug overdose or suicide, and they have been on the rise because of the worsening plight of working-class America. We must consider, then, that economic damage does not only reduce the quality of life for millions; it can translate directly into fatalities.
More generally, economic failure has a pronounced effect on life expectancy, even when you try to control for other factors. To quote from Bryson’s book on the body: “Someone who is otherwise identical to you but poor—exercises as devotedly, sleeps as many hours, eats a similarly healthy diet, but just has less money in the bank—can expect to die between ten and fifteen years sooner.” So economics indeed translates into years of life.
Now, I am not advocating against the lockdown. As I said before, given our available information, it may be a good and rational response. I am saying that, in the long run, an ethical evaluation requires that we consider the complete social costs of these measures. Just as importantly, if the coronavirus is, indeed, here to stay, then a rational response requires that we act consistently towards this risk.
This is what I mean. At the moment, coronavirus deaths are treated as a special category. This may be rational if we can eliminate the threat in a reasonable amount of time. But if we cannot eliminate this threat, and it is, indeed, here to stay for a year or more, then I am not sure that this attitude can be rationally sustained. A rational ethic will require us to see the coronavirus as one of many potential causes of death—more dangerous, perhaps, but no more or less acceptable than any other cause of death.
After this long and perhaps silly post—which I hope will be the last thing I devote to the virus for a time—I will end on a more practical note.
From what I observe, I fear that we may not be learning quite the right lessons from China’s success. Donald McNeil (a science reporter for the New York Times) explains that the lockdown in China was a necessary but not a sufficient condition for the country’s recovery. The lockdown was complemented by widespread testing and by the government’s ability to isolate infected people from one another. A small town in Italy, Vò, was similarly able to halt the virus by testing all of its residents and isolating the infected. Mike Ryan, an expert at the World Health Organization, has recently offered the same advice: that a lockdown without other measures is not enough. Here in Spain, testing resources are severely limited and available only for serious cases (although this will change soon). People with mild symptoms are not isolated but are told to stay in their homes, which could mean infecting their whole families. I hope, then, that we can learn not only from China’s strictest measures, but also from its most intelligent ones.
So far, about morals, I only know that what is moral is what you feel good after and what is immoral is what you feel bad after.
—Ernest Hemingway
It is an essential part of the process of maturing, I think, to come to terms with your own emotions. Can they be trusted? How far? In what circumstances are they misleading? Do they make you act irrationally or do things you don’t normally do? Are you afraid of your emotions? Are you afraid of communicating them to others? Why? Do you tend to bury your emotions, or to ignore them? With what consequences? All these questions, and more, are unavoidable as we grow older and learn how to deal with ourselves.
It is worth pointing out the odd fact, taken for granted by nearly everyone, that our emotions are discussed as something essentially separate from ourselves. They are things that happen to us, things that strike us, things that affect us like a sickness. And yet they are ourselves, aren’t they? What could be more integrally a part of yourself than your feelings?
Perhaps we think of emotions as outside events, comparable to snowstorms or car accidents, because we recognize that they are universal experiences. Being angry, depressed, or giddy, the feeling of being in love—the triggers of an emotion vary, but the experience itself binds us together. And in that way, the emotions can be said to be objective facts, not the most intimate part of ourselves, because they are the same for everyone.
Or perhaps we talk of our emotions as separate from ourselves because they come and go, sometimes at random, and are often beyond our control. Feeling melancholy on a lonely walk is like being caught in the rain—an event that depends on the whims of fate.
As someone who prides himself on being logical—although heaven knows how silly I can be—my relationship with my emotions has always been rather skeptical, even suspicious. My friends in elementary school used to tease me for being robotic. As I grew up, I lost most of this robotic coldness, but some traces of it remain: I am still quite skeptical of emotions, and I still find my feelings suspect.
In my experience, at least, emotions cannot be trusted as sources of information. A classic example is walking out the door and feeling that you’ve forgotten something, or packing for a trip and feeling sure that there’s something you’re missing. In my case, this feeling is almost inevitably wrong; my feelings of worry or confidence have almost nothing to do with whether I have actually forgotten something.
It was a major discovery—which I only made in university—that my mood had very little to do with the things I normally hold responsible for it. Sometimes I would get angry and think about all the unpleasant things my friends did and said, all the inanities of my roommates, all the annoyances of my classes. Or I would get melancholy and think about things I missed from home, or convince myself that I was lonely and unloved, or castigate myself for being a failure.
And yet all of these things I blamed for my mood were totally irrelevant. Almost inevitably, if I sat down and ate something, or if I had a coffee and a candy bar, my mood improved dramatically. Indeed, after I drink coffee I am often ecstatically happy, and I think equally unrealistic thoughts about how great my life is.
Experiences like these reinforce my skepticism about feelings. I can feel sure I’ve forgotten something, even after checking three times. I can be enraged and curse the world and everyone in it, and yet this is only due to hunger. Feelings come and go, each one seeming to tell me something clear and definite, only to be replaced in the next moment by another feeling that tells me the exact opposite thing. Each one is convincing in its strength, and yet each is totally devoid of substance; they give me the feeling of certainty without any evidence to support it.
In Cognitive Behavioral Therapy, there is a form of cognitive distortion (a pattern of erroneous thinking) called “emotional reasoning.” This consists precisely in trusting your emotions. Depressed people often feel ashamed, worthless, and hopeless, and then reason that these things must be true, since why else would they feel that way so persistently? Similarly, anxious people feel afraid, and believe that this fear is justified and is telling them about a real threat to their safety.
Indeed, it seems to me—or at least it’s been my experience—that getting over anxiety and depression involves learning to distrust your own feelings. I have learned, for example, that my feelings of fear often have nothing to do with something bad that might actually happen; and that my feelings of shame are not a reliable indicator of what other people will actually think.
To a certain extent, I think most people would agree, in theory at least, that emotions can be misleading. Nevertheless, there is one domain in which nearly everyone puts implicit faith in their emotions: morality.
I remember reading a book by Steven Pinker in which he demonstrated the emotional basis of our moral thinking in this way.
Consider this short situation: A family’s dog, who had lived with them for many years, was killed in front of their house by a car. The family heard that dog meat was delicious, so they cut up the dog and cooked him for dinner.
Now, in this situation, did the family act immorally? If you’re like me, you feel somewhat disgusted by this; and maybe you have decided that you’d never want to be friends with this family; and maybe you think it heartless that they could eat their loyal friend and companion. But did they do anything immoral?
I don’t think they did, because they didn’t hurt anyone or violate the categorical imperative. And yet I admit that the first time I read this, I felt disgusted and almost outraged at this family. This illustrates Pinker’s general point: we have moral feelings first, and try to rationalize them later. In other words, our moral reactions are not based on any logical standard, but on gut feeling.
It is, of course, difficult to rationalize morality. Philosophers still struggle with it, and there are no easy answers. Be that as it may, this is no excuse to substitute feeling for thinking. Even a slight acquaintance with history shows that people have thought many things were terribly immoral—mixed-race marriages, or premarital sex—that nowadays don’t raise an eyebrow. The world is full of taboos and prohibitions that, to outsiders, don’t make any sense. We are capable of having strong moral reactions about activities that don’t harm anyone or pose any threat to society.
I do not know why people continue to trust their feelings of disgust and outrage when it has been shown again and again, even in my lifetime, that these feelings are often based on nothing at all. We trust our gut like it’s the Oracle of Delphi, handing out moral verdicts from on high; but our guts often disagree with one another, and just as often contradict themselves on different days.
Take the example of gay love. I remember when I was young, the idea of two men kissing was considered, by nearly everyone I knew, to be absolutely obscene; and now a movie that features homoerotic love has won Best Picture. (I do not mean to suggest, even for a second, that we have overcome homophobia; but we have made progress.) The controversy surrounding trans people seems to be based on this same gut reaction of disgust. The “argument” about the “dangers” of trans people in public restrooms is so devoid of substance that I can only conclude it is a feeble attempt to rationalize a feeling.
And yet I wouldn’t be surprised if, one day, being transgender was considered as unremarkable as gay love. I can see no logical reason to regulate, ban, or even worry about sexuality, gender, and orientation, because they don’t hurt anyone and don’t pose any threat to society. You may not like gay love, you may find the idea of trans people gross, and that’s fine, but this feeling is no valid indication that these things are wrong.
This brings me around to Hemingway’s quote. Hemingway said this in connection with bullfighting. He expected to find bullfighting disgusting, but he loved it, and for that reason didn’t think it was wrong. Well, it’s obvious by now that I don’t agree with this method of telling right from wrong. If bullfighting is right or wrong, we need to explain why, with reference to some standard.
My problem is that I normally think of morality as a relationship between humans, and I actually don’t know how to think about morality regarding animals. A bull cannot understand a duty, an obligation, or the idea of consequences; a bull can’t be reasoned with or convinced. All of these things are necessary, I think, for a creature to be a moral agent, to be bound and protected by a system of moral injunctions. So when we’re dealing with animals, can an action be right or wrong?
My gut feeling is that bullfighting is wrong, because it involves animal cruelty. But this feeling, however intense, is just that: a feeling. Can I rationally believe bullfighting is wrong while continuing to eat hamburgers? I really don’t know. Thus I am in the uncomfortable situation of having a dilemma for which my moral reasoning provides no solution; and this leaves me with nothing but a feeling. I suppose I’ll have to read and think some more about the subject.
It is an unfortunate fact of human nature that it can be extremely difficult to do something when you sense you are being forced into it.
—David D. Burns, Feeling Good
Today I taught a class on modal verbs. This is my favorite subject to teach in English, since modal verbs are the most philosophical area of the language. What is the difference between will and would? Between can and could? Between may and might?
Every time I teach this lesson, I pause on the word “should.” I have the following problem. Very often we use the word “should” for recommendations, such as: “You should avoid eating at McDonalds.” In this situation, there is no moral element; we are telling our friend to avoid McDonalds for his own benefit, not for any ethical reason.
In other situations, “should” has an unambiguously moral connotation, as in: “You should always leave a tip in the United States.” Here, we are being exhorted to do something, not for any personal benefit, but because it is the “right” thing to do.
In many cases, however, it is ambiguous whether the word does or doesn’t carry a moral imperative. This most often occurs when we’re talking to ourselves: “I should really jog more,” or “I should quit smoking,” or “I shouldn’t eat so many donuts.” The situation here is strange, for there is no moral rule involved—is it immoral to eat donuts?—and yet we feel guilty when, as so often happens, we don’t follow our own advice.
David D. Burns, in his popular self-help book on depression, cautions against this last usage of the word should. We are always telling ourselves we “should” be doing this, and “shouldn’t” be doing that. But this leads us into a depressive spiral:
A deadly enemy of motivation is a sense of coercion. You feel under intense pressure to perform—generated from within and without. This happens when you try to motivate yourself with moralistic “shoulds” and “oughts.” You tell yourself, “I should do this” and “I have to do that.” Then you feel obliged, burdened, tense, resentful, and guilty.
The process goes like this. You tell yourself you “should” quit smoking. Then, you create resentment in yourself, since you feel like you’re being forced to do something. This resentment and guilt leads to a spiteful rejection of the advice; smoking becomes, not only a pleasure, but a guilty and rebellious pleasure. The habit thus continues, while your self-esteem is eroded by your inability to do the “right” thing.
I don’t know about you, but this sort of thing happens to me all the time. It was thus a revelation when Burns, in his book, pointed out this common tendency and also explained why it is illogical.
The error originates in a confusion of the first and the second usages of the word “should.” That is, when we tell ourselves we “should” quit smoking, we are really saying that it’s a good idea and that we would benefit in the long run. We are appealing to our self-interest and not to our moral sense. But we feel guilty and resentful nonetheless. Why? Because we are importing the moral imperative of the second usage into our understanding of the first. We are, in other words, judging as an ethical duty something that is only a potentially beneficial activity.
This is very easy to do, I believe, because we don’t tend to think very clearly about obligations. I have heard philosophers say that the metaphysical distinction between the sphere of moral and amoral reality is the distinction between “ought” and “is.” Moral statements, in other words, do not report what the facts are, but how they should be. Many books have been written about where this “ought” comes from and what it says about the universe.
For my part, I do not find anything special or mysterious about “ought” statements. Indeed, I’d argue that, at bottom, the first and second usages of “should” rest on the same basis; that is, recommendations and obligations both rest on self-interest.
That the power of a recommendation rests on self-interest is not controversial. The motivation to follow a recommendation is that you will personally benefit in some way. Usually recommendations consist of suggested ways to satisfy certain long-standing desires. We are recommended to apply to a certain job or to eat at a certain restaurant, and these are strategies for satisfying our insatiable desires for money and food.
The second assertion—that obligations rest on self-interest—is sure to raise an eyebrow. Well, let me give you an example. Imagine that you promised to pick your friend up at the airport, but then your crush invited you to hang out at that same time. You are very tempted to blow off your friend and make him pay for a taxi, but then your mom tells you: “You should always keep your promises.”
Now, at first glance this is obviously not appealing to self-interest. You are being told to do something that will be dreadful instead of something fun. So why “should” you do it? Simply because it’s the “right” thing to do? But why is it “right”?
Now you must ask yourself: Do you want to live in a world where promises exist, or a world where they don’t? Think carefully about this. What if you could never trust somebody’s word, and you could not depend on anybody to follow a verbal agreement? I don’t know about you, but such a world seems unlivably dreadful to me.
The world seems to have come to the same conclusion, since promises exist. And the reason we have agreed to the institution of the promise is that, although occasionally painful in the short term, it is beneficial in the long term to live in a society where you can trust somebody’s word. Thus people make a compromise: accept some incidental annoyances as the price of the boon of general honesty. You gain more than you lose with this bargain.
This is, I think, the nature of all moral rules: they are rules of behavior that, while occasionally painful in the short term, benefit every individual member in the long term by enabling a society wherein people can expect their neighbors to be respectful, peaceful, and honest. But these rules only work if everybody abides by them. For a moral rule to be beneficial to its followers, it must not allow others to take advantage of them, but must lead to a long-term gain. If enough people choose not to follow a rule, and instead take advantage of its followers, then the rule will collapse. All moral action is motivated by long-term self-interest, and morality collapses when compliance is no longer in the long-term self-interest of its members.
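As a toy illustration of that last claim, consider a population in which rule-followers pay a cost when exploited, and defectors pay a reputation cost whenever they meet a follower who can sanction them. With the invented payoffs below (only their ordering matters), following the rule pays while defectors are rare, and stops paying once they pass a threshold; at that point the rule collapses.

```python
# A toy model of a moral rule as long-term self-interest.
# All payoff values are invented; only their ordering matters.
COOPERATE = 3    # payoff when two rule-followers meet
EXPLOITED = 0    # follower meets a defector
EXPLOIT = 5      # defector meets a follower (before sanctions)
STANDOFF = 1     # two defectors meet
SANCTION = 4     # reputation cost a defector pays per follower met

def expected_payoffs(defector_share):
    """Average payoffs when partners are met at random."""
    d = defector_share
    follower = (1 - d) * COOPERATE + d * EXPLOITED
    defector = (1 - d) * (EXPLOIT - SANCTION) + d * STANDOFF
    return follower, defector

for d in (0.1, 0.3, 0.5, 0.7, 0.9):
    f, x = expected_payoffs(d)
    verdict = "rule holds" if f > x else "rule collapses"
    print(f"defectors {d:.0%}: follower {f:.2f}, defector {x:.2f} -> {verdict}")
```

This is only a sketch of the logic, of course, not a model of any actual society; but it shows how a rule can be individually rational right up to the point where too many people stop following it.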
To return to the above example, you must realize that, by breaking your promise, you are making an exception of yourself. You want to live in a world where people keep their promises, but you don’t want to keep yours. Indeed, in a small way you are undermining the institution of the promise, and taking advantage of your friend’s trust. You are choosing to indulge in a short-term pleasure rather than consider the long-term consequences of this action.
To conclude, I think the moral force of the advice “You should always keep your promises” is related directly to self-interest. In almost every situation, the benefits of living in a society where you can trust the word of other people outweigh many times over the benefits of breaking a single promise.
Now, of course, in practice the fabric of society doesn’t collapse when a few promises are broken. Moral systems are human things, and thus imperfect. Moral laws can survive with a surprising amount of noncompliance and hypocrisy. But you also have to consider the potential consequences of acquiring a reputation for being untrustworthy. Besides that, by doing your friend a favor, you earn yourself social goodwill and might be able to call upon him in the future.
This brings me back to my earlier point. A moral obligation is, at base, simply the realization that you have more to gain by following a moral rule than by breaking it. A moral obligation is thus like a piece of especially good advice; and at bottom, the first and second usage of the word “should” are identical.
I have found this way of thinking personally beneficial, since it allows me to avoid the feelings of guilt, bitterness, and resentment that I get when I tell myself “I should do such and such.” Now, I remind myself of how I will personally benefit from the action in the future. I remind myself that the things I “should” do are just ways of satisfying certain long-standing, insatiable desires of mine. And nobody feels guilty when they don’t efficiently satisfy a desire.
Laws and principles are not for the times when there is no temptation: they are for such moments as this, when the body and the soul rise in mutiny against their rigor; stringent are they; inviolable they shall be.
—Jane Eyre, Charlotte Brontë
This passage made a lasting impression on me the first time I read it.
In the story, Jane is at her lowest ebb. She has just agreed to marry Rochester; and at the last moment it is revealed that he is already married. Rochester begs her to run away with him, to flee the hypocritical, pretentious morality of England and to have a happy life together. Jane is sorely tempted. She recognizes the injustice of the situation, and she is deeply in love with Rochester. But in the end her principles overrule her passions, and she forces herself to leave him.
My feelings about this were mixed. On the one hand, it was clear to me, a modern, secular American, that the law preventing Rochester and Jane from marrying was idiotic and unjust. There was simply no logic behind it, just dumb prejudice and unthinking tradition. If I were Jane, I would have run off with Rochester and left all those dimwits to live within the narrow confines of their self-righteous morality. So I was a bit disappointed in Jane, normally a rebellious spirit, for being such a slave to custom.
Nevertheless, I couldn’t help admiring Jane for doing what she thought was right, even though it caused her so much pain. The second time I read the book, I found myself admiring her even more. What seemed at first to be obeisance to an old-fashioned prejudice looked now like loyalty to herself.
Jane knew that the negative opinion of eloping existed for a reason. Even though it was extremely tempting, she knew that running away with Rochester would ultimately be a betrayal of herself. It would be compromising on what she wanted and deserved: to be legally bound with someone she loved, in a union accepted and recognized by the community.
Remember that Jane was poor, and Rochester rich. Running away with him without the sanction of society would thus have put her fully and completely under his power. She would have no recourse if, one day, Rochester suddenly changed his mind and decided to leave her. She would have no claim on him. Thus her apparently unselfish act—to run away from Rochester—was really a more intelligent form of selfishness. (In my opinion, nobility normally consists, not in acting unselfishly, but in being more intelligently selfish.)
This quote and this story encapsulate why humans create moral rules. Most of the time, in daily life, our short-term and long-term desires are in harmony. We can satisfy our immediate desires without jeopardizing our future goals. In these situations, moral rules become rather irrelevant, or at the very least automatic, since the function of moral rules is, at base, to harmonize individual interests with group interests.
For example, no moral injunction is needed for me to go to work; nor is one needed for my employer to hire me. Both of us act selfishly, but in harmony, because each of our desires is satisfied by the other. I have something to gain from work (money), and my employer has something to gain from my work (English classes), so what need is there of any rule?
There are situations in life, however, when our short-term desires are so markedly out of harmony with our long-term goals that rules are needed to guide behavior. Jane Eyre’s situation was one such example; and in the end the choice turned out to be the right one.
The difficulty is that, sometimes, the temptations to have one’s cake and eat it too can be overwhelming. This especially applies in cases where, even if it is against the rules of society, an unethical act will most likely escape detection, and thus escape consequences. Every human has an interest in maintaining the rules of society as far as other people are concerned, and strategically breaking them in their own case. This is why E.O. Wilson, in his book about human nature, said: “It is exquisitely human to make spiritual commitments that are absolute to the very moment they are broken.”
But every breach of the moral code, however carefully concealed, carries a risk of detection. And even if you aren’t detected, the stress associated with concealing a secret can be punishment in itself. Epicurus made this point: “It is impossible for the person who secretly violates any article of the social compact to feel confident that he will remain undiscovered, even if he has already escaped ten thousand times; for right on to the end of his life he is never sure he will not be detected.”
Thus I think it is wise, as much as possible, to be consistent in your words and deeds, to hold yourself to the same code you hold others to, and to act as though everything you do will one day be revealed. But, of course, all this is easier said than done.
Men are mistaken in thinking themselves free; their opinion is made up of consciousness of their own actions, and ignorance of the causes by which they are conditioned. Their idea of freedom, therefore, is simply their ignorance of any cause of their actions. As for their saying that human action depends on the will, this is a mere phrase without any idea to correspond thereto. What the will is, and how it moves the body, none of them know; those who boast of such knowledge, and feign dwellings and habitations for the soul, are wont to provoke either laughter or disgust.
—Baruch Spinoza, Ethics
Few things can make you more skeptical about free will than studying anthropology. For me, this had three components.
The first was cultural. I read about the different customs, rituals, religions, arts, superstitions, and worldviews that have existed around the world. Many “facts” that I assumed were universal, obvious, or unquestionable were shown to be pure prejudice. And many behaviors that I assumed to be “natural” were shown to be products of the cultural environment.
It is unsettling, but nonetheless valuable, to consider all the things you do just because that’s what your neighbors, family, and friends do. These include not only superficial habits, but our most basic opinions and values. Our culture is not like a jacket that we put on when we go out into the world; culture is not a superficial layer on our deeper selves. Rather, culture penetrates to the very core of our beings, shaping our most intimate thoughts and sensations.
The next influence was primatology, the study of primate behavior. This came to me most memorably in the books of Jane Goodall, about the chimpanzees she studied. Chimpanzees are our closest relatives. They are recognizably animals and yet so strangely human. They get jealous, become infatuated, bicker, fight, make up, and joke around. They make tools and solve puzzles.
I remember the story of a small chimp who, while walking through the forest with his group, saw a banana out of the corner of his eye. The rest of his group didn’t notice it; and this chimp knew that the bigger ones would take the banana away if they saw him eating it. So he ran off in another direction, causing everyone to follow him, and then secretly snuck back to get the banana. If that’s not human, I don’t know what is.
Last was the study of human evolution. This also involves archaeology: the material culture that hominins have left behind. I held reproductions of the skulls of human ancestors, and examples of the stone tools made by our smaller-brained predecessors. I saw how the tools became more advanced as brain size increased. Crude choppers became the beautiful hand axes of Homo erectus, and these large axes were refined into serrated blades and arrowheads by later species. Finally, our species began showing evidence of symbolic thinking: burying people, crafting statues, painting caves, carving flutes, and almost certainly using language.
After seeing the obvious influence of evolution on our capacities and tendencies, after learning about the striking similarities between us and our ape cousins, and after witnessing the pervasive effects of culture upon behavior, my belief in free will was in tatters. True, even if we take all these evolutionary and cultural factors into account, we can’t predict the exact moment when I’m going to scratch my nose. But neither can we predict where a fly will land, or which patch of skin a mosquito will bite. Nobody thinks flies or mosquitoes have free will, so why us?
I normally understand “free will” to mean the ability of an organism to fully determine its own actions. In other words, a free organism is one whose actions cannot be predicted or explained by pointing to anything outside, including genes or upbringing. Not DNA, nor culture, nor childhood experiences would be enough to fully explain a free individual’s behavior. A free action is, in principle, unpredictable; and thus the free agent is morally responsible for his actions.
I do not believe in this type of freedom, and I have not for a long time. For my part, I think Spinoza is exactly right: “free will” is just a name for our ignorance of the causes of our own behavior. If we knew these causes, our actions could be predicted like any other natural phenomenon, and “freedom” would disappear.
This ignorance is not difficult to explain. Human behavior is the product, first, of our environment, which is infinitely varied and constantly changing; and, second, of the human brain, one of the most complex things in the universe. Because of the amount and complexity of the data, along with our lack of understanding, we can’t even come close to making predictions on the scale of individual human actions, like scratching one’s nose. But we can’t conclude from this inability that our actions are therefore “free,” any more than we can conclude from our inability to predict where a fly will land that flies possess a mystical “freedom.”
Kurt Vonnegut made this point, with much more wit, in Slaughterhouse Five. His Tralfamadorians, who can see in the time dimension as well as space dimensions, already know everything that will happen. Thus they have no concept of freedom, and find it puzzling that humans do: “I’ve visited thirty-one inhabited planets in the universe, and I have studied reports on one hundred more. Only on Earth is there any talk of free will.”
To me it seems manifest that the traditional definition of freedom has been thoroughly discredited by what we know about the natural and cultural world. Humans are made of matter obeying physical laws, shaped by evolution, subject to genetic influence, and responsive to the cultural environment. The mind is not a mysterious metaphysical substance, but a product of the human brain; thus the mind and its behavior, like the brain, can be understood scientifically, just like any other animal’s.
All this being said, there are nevertheless ways to redefine free will so that it is compatible with what we know about physics, biology, anthropology, and psychology.
Perhaps free will is simply the inability of a thinking organism to predict what it is about to do? Every person has, at one time or another, been surprised by their own actions. This is because, as the philosopher Gilbert Ryle explained, “A prediction of a deed or a thought is a higher order operation, the performance of which cannot be among the things considered in making the prediction.” That is to say, it is logically impossible to predict how the act of predicting an action will alter that action, because the act of predicting cannot itself be among the data of the prediction (you can try to predict how you will predict, but this leads to an infinite regress).
Or perhaps free will is a condition caused by our ignorance of the future? After all, difficult decisions are difficult because we can’t be sure what will happen or how we’ll react. Deciding between two job offers, for example, is only difficult because we can’t be sure which one we’ll like more. If we could be sure—and I mean absolutely sure—which job would make us happier, then there wouldn’t be a decision at all; we would simply take the better job without a dilemma even occurring to us. In this way, our freedom is as much a product of our ignorance of the future as it is our ignorance of the causes of our actions.
What sets humans apart from other animals is not our freedom per se, but our behavioral flexibility. Humans are able to continually adapt to new environments, and to learn new habits, techniques, and concepts throughout their lives. This ability to adapt and to learn, which serves us so well, is not freedom so much as slavery to a different master: our environment. Our genes do not instill in us a specific behavioral pattern, as in ants, but give us the capability to develop many different behavioral patterns in response to our cultural and climatic surroundings. But is it any more “noble” or “free” for our behavior to be determined by social and environmental pressure rather than by genetic predestination?
Probably the best practical definition of freedom I can come up with is this: Humans are free because we are able to alter our behavior based on anticipated consequences. This is what makes morality possible: we can influence people’s behavior by telling them what will happen if they don’t follow the rules. What is more, people can understand that they have more to gain by playing along and helping their neighbors than by acting impulsively and at the expense of their neighbors. Thus our intelligence, by allowing us to understand the consequences of our actions, gives us the ability to be more intelligently selfish: we can weigh long-term benefits with short-term pleasures.
Freedom is, of course, a fundamental concept in our political philosophy. So if we choose to stop believing in freedom as traditionally defined, how are we to proceed? Here is my answer.
The important distinction to be made in political philosophy, regarding freedom, is what separates freedom from coercion. The difference between freedom and coercion is not that one is self-caused and the other caused by the outside—since even the freest person imaginable has been profoundly shaped by their environment, and is making decisions in response to their environment. Rather, there are two important differences: coercion implies force (or the threat of force) while freedom doesn’t; and “free” actions usually benefit the acting individual, while “coerced” actions usually benefit an outside party at the expense of the acting individual.
The difference thus has nothing to do with freedom as such (freedom from environmental influences), but is determined by the type of environmental influence (violent or non-violent), and by the party (actor or not) that receives the benefits. (Even though an altruistic act benefits a party besides the actor, it is not a coerced act because, first, it’s not motivated by threat of violence, and, second, because altruistic acts usually benefit the actor in some way, either socially or psychologically.)
I find that some people become horrified when I tell them about my rejection of freedom. For my part, I find that my disbelief in freedom has made me more tolerant. When I consider that people are products of their environment and their genes, I stop judging and blaming them. I know that, ultimately, they are not responsible for who they are. In a profound sense, they can’t help it. We are each born with certain desires, and throughout our lives other desires are instilled into us. Our behavior is the end product of an internal battle of competing desires.
If you think that morality is impossible with this worldview, I beg you to read Spinoza’s Ethics. You will find that, not only is morality possible, but it is necessary, logical, and beautiful.
If you can love and respect yourself in failure, worlds of adventure and new experiences will open up before you, and your fears will vanish.
—David D. Burns, Feeling Good
It is an interesting statement on contemporary culture that practical, self-help books are often looked down on as lowbrow, unsophisticated, and unworthy of serious consideration. Just note how often in reviews of self-help books you come across the phrase “I don’t normally read books like this,” or the like. Of course, skepticism regarding books of this kind is merited, especially when you take into account the amount of quackery, chicanery, demagoguery, and baloney in print. Indeed, I think it’s fair to say we have a veritable advice industry in our culture today, with a great deal of money to be made, and thus lots of enterprising, unscrupulous people peddling various forms of nonsense, hoping to get rich. Self-help books now sell so well that they have to be excluded from non-fiction sales rankings; if they weren’t, the top ten best sellers would be an endless parade of one self-help book after another.
But why are so many people willing to pay for and devour book after book, getting swept about by the ceaseless winds of doctrine, navigating their lives through fad after fad? Fashionable ways of running and ruining your life have always been with us; yet I think there is another aggravating factor at work in the present day.
Recently I read two history books, one about Ancient Greece and the other about Rome. As I learned about the philosophies of education in those societies, I noticed how central the ideas of ethical and moral teaching were. I don’t mean ethical in the narrow sense of right and wrong, but in the wider Greek sense, used by Aristotle, the Stoics, and the Epicureans: how to cultivate wisdom, how to live a well-regulated life, how to deal with the hardships and misfortunes that are so often thrown our way. These were primary concerns of pedagogy. By contrast, our current education system, at least here in the States, has deemphasized ethical teaching almost completely.
There are, of course, many reasons for this, and many of them are good ones; but I do think it leaves a certain gap in our culture that self-help books partially fill. Unfortunately, from what I can tell, many of these seem rather mediocre—or worse. But this book, by David D. Burns, is for me one of the exceptions. It is an interesting and, for me, an extremely useful book, based on a well-studied and much-tested therapeutic technique.
Burns’s aim in writing this book was to popularize the methods of Cognitive Behavioral Therapy (CBT), a therapeutic technique developed by the psychologist Aaron T. Beck, among others. The premise of CBT is very simple: your moods are caused by your thoughts, so by controlling your thoughts you can control your moods. At first sight, this may seem like complete nonsense; our moods come and go, and our thoughts simply take on the timbre of whatever mood we happened to be in, right? This seems to be what most people assume; certainly I did. Yet consider this scenario, which actually happened to me:
My boss scheduled a meeting with me out of the blue. I immediately started thinking that I hadn’t been doing a good job recently, so I began to panic, sure I was about to get fired. Eventually, this panic turned to indignation, as I convinced myself of the injustice of the situation, since I worked hard and tried my best. So, literally trembling with anxiety and outrage, I went to the meeting and sat down; and my boss said: “We’re giving you a bonus, because you’ve been doing so well. Congratulations!” Suddenly, all my negative feelings turned into joy.
This, I think, well illustrates the central idea behind CBT. All of my negative and positive emotions in this scenario were due to my interpretation of the event, not the event itself. I made the false assumption, based on no evidence, that I was going to be fired. I thought of every mistake and imperfection in my work over the last month or so, and convinced myself that I was doing poorly and that termination was imminent. Then, I persuaded myself that I wasn’t given adequate resources or support, and that the situation was unjust. And when I was finally given the bonus, I interpreted it to mean that I was doing a good job and that I was getting all the support I needed—which were equally tenuous interpretations. Thus you can see how my mood was a direct product of my thoughts.
All of my negative assumptions in the above paragraph contain what Burns calls “warped thoughts,” or cognitive distortions. These are irrational patterns of thinking which have been found to be common in depressed and overly anxious patients. The CBT interpretation of depression is that these thinking patterns are not caused by depression, but actually cause it. In other words, depression results from persistent, unrealistic negative interpretations of one’s life and experience, leading one to focus solely on the bad and to feel hopeless about the future.
Burns gives a list of 10 types of warped thoughts, but in my opinion there is quite a bit of overlap in the categories. The distortions more or less boil down to the following:
—Making negative assumptions, whether about the future or about what someone else is thinking;
—Assuming that one’s emotions accurately reflect reality;
—Over-generalizing a small number of negative occurrences into an inevitable trend;
—Willfully ignoring all of the positives to focus solely on the negative;
—Thinking in black and white categories;
—Making unjustified “should” or “ought” statements about the world without considering other people’s perspectives;
—Feeling that you are responsible for things over which you have no control;
—Labeling oneself and others with vague pejoratives.
The first part of this book is dedicated to helping the reader recognize these types of thoughts and combat them. Most often, this is just a matter of writing the thoughts down and exposing the distortions that lie beneath them. Simple as this sounds, I’ve found it to be remarkably effective. As you might have guessed from the above example, I am rather prone to anxiety; and during this summer, my anxiety was getting to the point that I felt incapacitated. I was driving my friends and family nuts with my constant worrying; and nobody, including myself, knew how to deal with me.
Luckily, I heard about MoodGYM, a website developed by the Australian National University for people dealing with anxiety and depression, using the techniques of CBT. Desperate for some relief, I completed the reading and activities on the website, and found that I felt much, much better. Impressed, I looked for books on CBT techniques, and of course came across this one.
What most intrigued me about CBT was the emphasis on accuracy. The techniques weren’t based on the premise that I was somehow damaged or filled with strange desires, nor did they include any amount of self-delusion or wishful thinking. Quite the reverse: the whole emphasis was on thinking clearly, basing beliefs on evidence, avoiding unreasonable assumptions, and seeing things from multiple points of view.
Take anger. Very often (though not always), our feelings of indignation simply result from seeing an event through a narrowly selfish lens. We don’t get the job we interviewed for, and we feel cheated; someone beat us to that parking spot, and we feel outraged. But when we consider these scenarios from the perspective of the boss or the other driver, the situation suddenly seems much more just and fair; they are pursuing their own interests, just like we are. So simply by looking at the situation from multiple points of view, and thus understanding it more fully, our feelings of anger are cooled.
When I began working through the techniques in the book, I was astounded by how often these types of distortions plagued my thinking. It would almost be funny if it weren’t so unpleasant: I could twist any situation or piece of information into somehow reflecting negatively on my character. Everything bad in the world confirmed my negativity, and everything good only served to reproach me and to make me envious and resentful. The good news was that, when I began to recognize these illogical patterns of thought, it was extremely easy for me to correct them; and for the past month or so I’ve been feeling a great deal happier and calmer.
After teaching the reader several personal and interpersonal techniques—strategies for dealing with oneself and others more effectively—Burns moves on to examining some of the underlying assumptions that give rise to warped thinking. It turns out that these all involve equating one’s “value” or “worth” with some extrinsic good, whether it be approval, love, success, fame, or even skill. There is, of course, nothing wrong with enjoying the approval of others, the thrill of love, the sense of accomplishment, or the satisfaction of a job well done. The problem arises when, instead of enjoying something, we use it to measure ourselves.
To use a somewhat silly but germane example, how many people believe that those who read more books, bigger books, harder books, are somehow “superior” to people who don’t? I’ve certainly been guilty of this; but it is pretty clearly an absurd position when I think about it, and one that I couldn’t possibly defend on any valid moral or intellectual grounds. What on earth does it even mean to be a “superior” person?
Superiority only makes sense when we have some quality we can measure, such as wealth or strength; but when we say “superior” by itself, what quality do we mean? “Worth”? How do you measure that? You can try being clever and say “By the number of books you read” or something, but that’s clearly circular reasoning. If you are a humanist or religious, you might say that you have worth just from the fact of being alive; but then of course everyone is equally worthy and there’s no sense in feeling worthless.
In the non-Goodreads population, I suspect book addiction isn’t a big problem; more often, people feel down because they imagine that approval, love, money, or expertise is necessary to be a worthwhile and happy person. But the absurdity of this kind of thinking is revealed when you consider how many famous, beloved, rich, virtuosic, brilliant, successful people there have been, and still are, who are deeply depressed and feel worthless and hopeless. Short of torture, there are no circumstances in life that guarantee unhappiness; and the same goes for happiness. This is not to say that you shouldn’t try to change or improve your situation, only a reminder that the way you interpret a situation is often as important as the situation itself, if not more so.
I cannot hope to sum up the entire book in the space of this review; but I hope what I have included has convinced you that it’s at least worth looking into. After all, by definition, nothing feels better than happiness.
Of course, the book isn’t perfect. Burns’s writing style is nothing remarkable, and it is occasionally tacky; but I think that’s excusable, considering that he’s a therapist, not a writer, and that he’s trying to reach a popular audience. One flaw I thought less easy to excuse was Burns’s exclusive focus on straight couples in his sections on love and relationships. Burns writes in a purely heteronormative vein, not even acknowledging same-sex couples, which is difficult to justify considering the higher rates of depression and anxiety among gays and lesbians—not to mention others in the LGBTQ community. I hope this is changed in future editions.
A criticism I am tempted to make, but which I actually think is unfair, is that CBT makes people passive, accepting, and more content with the status quo. It sometimes seems as if Burns is telling people not to try to change their circumstances, but rather to accommodate themselves to them. I think this is unfair for a few reasons. No matter how powerful we may be, there will always be things in life which we cannot change and which we simply have to accept; so developing the tools to do so without frustration or anger is useful for everyone. What’s more, real depression and anxiety are not conducive to effective action. Quite the opposite: depression often makes people apathetic and anxiety makes people feel too overwhelmed to do anything. Besides, you can’t solve a problem unless you can see it clearly, and the thinking patterns associated with depression and anxiety lead to a total inability to see problems clearly and to deal with them rationally. So I think accusations that this book is somehow reactionary or that it leads to passivity are unfair.
To sum up this already overlong review, I just hope I’ve convinced you that this book might be extremely valuable to you or to someone you know. It certainly has been for me. Now I no longer feel that I am at the mercy of my moods or emotions, or that my sense of self-worth or confidence is dependent on my circumstances. And I’d say these benefits definitely outweigh the tacky cover and the corny title, don’t you?
(Oh, and if the book seems like too big a commitment, MoodGYM is pretty swell too, despite some additional corniness, of course.)
Ernest Hemingway was, to put it mildly, not an animal rights advocate; but even he felt misgivings before attending his first bullfight—not for the bull, but for the horses. (More on the horses later.) He went for reasons of art; he wanted a chance to see death for himself, to analyze his own feelings about it, in order to escape what he regarded as the trap of the aspiring writer—to feel as you’re expected to feel, not as you actually feel. Much of his book on bullfighting is dedicated to persuading the reader to do the same; he enjoins us to attend at least one show, and to do so with an open mind—to see how it really affects you, instead of how it’s supposed to affect you.
I put down Death in the Afternoon and decided that I would give it a try. But I still felt uneasy about it. Not many things are more controversial in Spain than the bullfight. The country is split between aficionados and those who object on moral grounds. In several parts of Spain, including Catalonia, the bullfight has even been outlawed. It is easy for me to see why people find the custom unethical. Six animals are killed per show, and they are not killed quickly. Nevertheless, from my studies of anthropology I have retained the conviction that you ought to try to understand something before you condemn it. Thus I wanted to see a fight with my own eyes, to analyze my own reactions, before I came to any sort of verdict.
This post will follow that course, first by providing a description, and then my attempt at analysis. Probably everything I say will seem infuriatingly ignorant to the aficionado, but that is unavoidable. I’m a guiri (a foreign tourist) and there’s no escaping that.
The Fight
The big time to see bullfights is in May and June, during the festival of San Isidro. A fight is held every day for eight weeks straight. The fight I saw took place in Madrid’s bullring, Las Ventas. It is a lovely stadium, built in a Neo-Mudéjar style with horseshoe arches, ceramic tiles, and elaborate ornamentation in the red-brick façade. I’d bought the cheapest tickets I could. In any bullring, the price of a ticket depends on the distance from the action, as well as whether the seat is in the sun or the shade (the seats in the shade can be twice as pricey). The seats are hardly seats at all, just a slab of concrete. You can rent a pillow to sit on for €1, which is probably a good idea.
[Image: Las Ventas]
The stadium was completely full; the vast majority of the crowd were not tourists, but Spaniards. Unlike flamenco, the bullfight has retained a strong fandom among the natives here. There were people of all descriptions: young children, teenage girls, twenty-something men, married couples, and senior citizens. Almost everyone was dressed in their Sunday best.
A bullfight is a highly organized affair. Each event has three matadors; each matador fights two bulls—not consecutively, but by turns. The matadors fight in the order of reputation, with the most famous (and presumably most skilled) matador taking the last turn. A complete fight takes less than fifteen minutes. It is divided into three parts, each announced by a trumpet blast.
First the bull runs out, charging into the arena at full speed. The bull is fresh, energetic, and haughty. It charges at anything that moves, trying to dominate its environment. This bull has hardly seen a dismounted man before in its life; it has been reared in isolation, to be both fierce and inexperienced. Before anything can be done with the bull, it must be tested. Thus the matador and his banderilleros begin to provoke it. To do this, they are each equipped with large capes, pink and yellow, which they use to attract the bull’s attention. It runs at them, and they hide for safety behind special nooks in the arena’s edge. Sometimes the bull tries to pursue them, ramming the wooden wall with its horns; but there is nothing the bull can do once they get into the nook.
[Image: Hiding from the bull]
The only person who comes out and stands in the ring is the matador, who performs some passes with his cape. Really impressive capework is impossible with the bull at this stage, since it is too vigorous and belligerent. But these passes are not for show. The matador needs to see how the bull moves, the way it charges, whether the bull favors any specific area of the arena. Each bull is different. Some will charge at anything, and others need to be coaxed. Some are defensive, others offensive. Some slash their horns left and right, and others scoop down and lift up. The matador needs to know the bull to work with it.
[Image: Testing the bull]
(It sometimes happens that they decide the bull is unsuitable. This happened once during my show. Suddenly everyone left the ring, leaving the bull alone. Then the gates opened, and half a dozen heifers ran into the ring. The bull, seeing the heifers, immediately calmed down, and followed them out of the ring. I assume that the bull is killed in this case, since it isn’t useful for anything; a bad bull won’t be bred, and a bull cannot be fought twice, since it learns from experience.)
Next the picadores enter the ring. These are men armed with lances, riding on horseback. The horses are blindfolded and heavily armored with padding. The bull is led by the banderilleros towards the horses and provoked to attack. For whatever reason, the bull always tries to lift the horse on its horns. This doesn’t work, because the horse is significantly bigger than the bull; indeed, the horse seems hardly to react at all to the bull’s attack. Meanwhile, the picador stabs the bull in its back, jabbing his lance into a mound of neck muscle. As the bull ineffectually tries to lift the horse, it drives the spear into its own flesh. The pain is usually enough to discourage the bull after about a minute. By the end of the ordeal, the bull’s back is covered in blood.
[Image: A picador facing a bull]
(In the past, when Hemingway wrote his book, this part of the bullfight was considerably more gory. The horses wore no armor, and were thus often killed. There are some terrible photos of horses being impaled in Hemingway’s book. The bull would rip them apart. The picador thus had a narrow window to do his job, and would often end up on the ground, pinned under his dying horse. I am glad that this isn’t the custom anymore, though doubtless a purist like Hemingway would mourn its passing.)
Once the bull gives up, the picadores leave the ring. Next the banderilleros must further weaken the bull. They do this by stabbing barbs into the same area of the bull’s back. This is a really dangerous job. The bull must be running straight at them in order to drive the barbs deep enough into its muscles. The banderillero runs at an angle to the bull’s charge, holding the barbs high above his head with outstretched arms, and stabs the bull right over its own horns. The pain makes the bull pause for a second—which gives the banderillero much-needed time to get out of there. Even so, the guys have to run like hell, and often end up jumping straight over the wall out of the arena in order to escape. Three pairs of barbs must be speared into the bull. These barbs, which are covered in colorful paper, don’t fall out, but hang from the bull’s back for the rest of the fight.
[Image: A banderillero preparing to attack]
Finally the matador enters the arena. This is the culminating phase, the part that everything else has been leading up to. By now the bull has been thoroughly weakened. It is tired, injured, and, most importantly, disillusioned about its own power. The bull no longer charges at anything that moves, but conserves its strength carefully; it does not heedlessly waste its energy sprinting across the field, but makes more calculated attacks. The bull also holds its head lower, and does not slash with its horns, since its neck muscles have been damaged. In this state, the matador can work with the bull.
[Image: The matador]
With a red cape in one hand and a sword in the other, the matador dominates the bull. It is incredible to see. In just a minute, the bull goes from a dangerous, wild animal to mere clay in the matador’s palm. The matador can let the bull pass within a hair’s breadth of his chest; he can stand a mere footstep in front of the bull’s face; he can turn his back and walk away. The bull is completely under his control. I cannot imagine the amount of time spent around bulls necessary to achieve this seemingly mystical ability.
[Image: Working up close with the bull]
After about three minutes of capework, during which the matador lets the bull come nearer and nearer to him, it is finally time for the kill. The matador walks to the edge of the ring and exchanges his sword for a heavier one. (What was the first one for?) A hush comes over the ring. Hundreds of people hiss, urging all conversation and cheering to stop. The matador stands before the bull, holding the sword above his head. With his left hand, he shakes the cape. The bull charges, the matador lunges with his sword, stabbing the bull over its horns and into its back. The crowd erupts in applause. The bull begins to stagger. The banderilleros come out, sweeping their capes at the bull, which is now too weak to attack properly. Finally the bull gives up. It limps away from its harassers, making its way to the opposite corner of the ring. But soon it loses its strength; its legs collapse and it falls to the ground. A banderillero walks over and finishes it off with a dagger.
The fight is over. The bull’s body is tied to a team of mules, and dragged around the arena in triumph before being removed from the ring.
Reaction
The bullfight is not considered a sport, but an art form. This is important to note, for as a sport the bullfight would fail utterly. There is no winning or losing, only a beautiful or an ugly performance. There is also hardly any element of suspense, since every bullfight follows the same course and ends the same way.
Of course there is a certain unpredictability to a fight, since everyone who enters the ring risks his life. No matter how much you practice around bulls, you cannot eliminate the chance of being gored. During my show alone, the bulls knocked down two people, and probably would have killed them if the others hadn’t quickly drawn the bull away. But the occupational hazard of being killed by the bull, while certainly integral to the fight, is not what excites aficionados. Rather, it is the skill and artfulness of the matador they enjoy.
It does not take an imaginative eye to see symbolism in a bullfight. The bull is a force of nature. It is stronger and faster than any man, a heedless, seemingly indomitable force that will indifferently trample anyone in its path. The bull is elemental. It is fought by men in elaborate costumes, following a prescribed ritual. The bull moves with violent impulse; the men move with elaborate grace. The bull stands on four legs, its dark brown body close to the ground; the men stand on two legs, holding their brightly clad bodies rigidly erect.
The men defeat the bull because they have intelligence. The bull cannot understand the difference between the cape and the man, and thus all its strength is wasted in pointless attacks. The men use an animal they tamed—the horse—as well as tools they invented—the pike, the barb, the cape, the sword—in order to dominate and vanquish the bull. Thus the bullfight dramatizes the triumph of human intelligence over mindless power, the victory of culture over nature.
Or perhaps you can interpret the spectacle as a psychological allegory. Bulls have been a symbol of the beastly side of human nature since the story of the Minotaur in the labyrinth, and probably long before. The bull thus represents unbridled instinct, the untamed animal that lurks within us, the impulses that we have but must repress in order to live in society. The matador controls and then destroys these impulses, restoring us to civilization. In this light, the bullfight represents the triumph of the ego over the id.
In any case, the spectacle is meant to be tragic. The bull is a beautiful, noble animal, who fights with tenacity and courage. The bull is feared, respected, and envied for its power and its freedom. The tragedy is that this sublime animal must be killed. But its death is necessary, for the bull represents everything incompatible with society, everything we must attempt to banish from ourselves in order to live in civilization. To be absolutely free, as free as an untamed bull, and to be civilized are irreconcilable states. Living in society requires that we give up some freedom and remove ourselves from the state of nature. Although we gain in peace and security from this renunciation, it can still be sorely regretted, for it means leaving some impulses forever unsatisfied. Thus we identify with the bull as much as with the matador; and even though we understand that the bull must be killed, we know this is terribly sad, because it means a part of ourselves must be killed.
This is how I understand the bullfight. I am sure many would find this interpretation terribly jejune. But the more important point is that the spectacle is one that can be seriously analyzed for its aesthetics. It is not a mere display of daring and skill, but an artistic performance that touches on themes of life and death, nature and culture, animal and man. It is as ritualized as a Catholic mass, and just as laden with symbolism.
But is it moral? Should it be tolerated? Is it ethical to enjoy the spectacle of an animal getting wounded and then killed? Is it wrong to cheer as a matador successfully stabs a sword into a living creature?
Ernest Hemingway had this to say about the morality of bullfighting:
So far, about morals, I only know that what is moral is what you feel good after and what is immoral is what you feel bad after and judged by these moral standards, which I do not defend, the bullfight is very moral to me because I feel very fine while it is going on and have a feeling of life and death and mortality and immortality, and after it is over I feel very sad but very fine.
If I adopt Hemingway’s view, and take my emotional reaction as the basis of my moral judgments, then I must come to a different conclusion. Of course, I had many emotions as I watched. First I was impressed by the spectacle of the bull charging across the arena. Then I admired the stoicism of the horses as they withstood the bull’s attacks; and I felt pity for the bull as the lance was driven into its back. I was again impressed by the physical courage of the bandilleros as they let the bull charge full speed towards them. And of course I was filled with awe at the skill of the matador, who sometimes seemed more god than man.
But finally I was disgusted. Hemingway described the bull’s death as a tragedy, but for me it was not sad; it was sickening. I felt weak, dizzy, and nauseated. And it was not the type of nausea that I get on long car rides. It was a feeling I’ve had only a few times before. The first time was in the sixth grade. I was performing a dissection on a pig in science class. My partner was a vegetarian, but I was the one who had to leave midway through, because I thought I would vomit.
During that dissection, I felt that I had swallowed a stone, that I was covered in filth, that my blood was rancid, that my skin was alive and crawling. I had this same feeling when I saw a goat have its throat cut open in Kenya, and I had this same feeling as I watched a bull struggle across the arena, its chest heaving, its legs shaking, blood dripping from its mouth, only to collapse into a heap of quivering pain, and die.
If I followed my emotions, I would have to condemn the bullfight as unambiguously immoral. But I have read enough psychology to know that emotional reactions can often be illogical. And I have read enough Nietzsche to know that moral judgments are often hypocritical and self-serving. Indeed, as somebody who eats meat, I feel odd drawing a line between a bullfight and a slaughterhouse. Does it really make such a big difference whether the animal is killed painlessly or not? We do not make this distinction with humans. You simply cannot kill a human “humanely,” though we think we can kill animals that way. So if I want to condemn the bullfight, ought I to become a vegetarian?
Hypocrisy aside, I have trouble deciding how animals should be considered in a moral framework. As I have written elsewhere, I think humans can be held accountable for their actions because they can understand the consequences and alter their behavior accordingly. Bulls obviously cannot do this; a bull cannot reason, “If I kill this man, I will be killed as punishment.” Thus a bull cannot be held accountable in any moral framework; and this also means that a bull cannot enjoy the protection of moral injunctions. The golden rule cannot be applied to an untamed animal—or to any animal, for that matter.
For this reason, I am not against meat eating or hunting (except of endangered species, of course). But bullfighting is distinguished from those two activities by the amount of pain inflicted on the animal, and all for the sake of mere spectacle. Now, I can understand why this didn’t bother anyone in the past. Death and suffering used to be far more integral to people’s lives; infant mortality was high, childbirth was dangerous, and most people lived on farms, constantly surrounded by birth and death. But nowadays, as we have banished death to slaughterhouses and hospitals, seeing an animal stabbed and killed before our eyes is shocking and gruesome. The reason the bullfight is tolerated is that it is cloaked in ritual and hallowed by time. Its tradition and aesthetic refinement stop people from seeing it as animal cruelty.
As I said before, animals cannot operate within a moral system, so they cannot be protected by moral codes. The morality of bullfighting is thus not a question of the bull, but of us. How does it affect us to watch a creature suffering without feeling compunction? How does it change us to witness a ritualized death and to cheer it on? How does it reflect upon us that we can be so desensitized to violence passing right before our eyes? The willingness to turn a creature into an object, and to use pain as a plaything, is not something I want for myself. I do not want to be so totally insensitive to the suffering of a fellow creature.
Nevertheless, I have serious misgivings about condemning the bullfight. For one, it is an art form, and a beautiful one. But more importantly, I feel remarkably hypocritical, not only because I eat meat, but because my modern, luxurious lifestyle allows me to completely banish the killing of animals into the background. Instead of having to witness it, I allow death to happen behind the scenes, as I go about my day blissfully unaware. Perhaps having to witness death is a good thing, to bring me back to reality and to prevent me from living in a kind of bourgeois fantasyland.
In conclusion, then, I have to admit that I don’t really know what to think. I would be sad to see the tradition disappear, but I also find the spectacle sickening. In any case, I’m happy I went, but I do not plan on going again.