Review: The World as Will and Representation

The World as Will and Representation, Vol. 1 by Arthur Schopenhauer

My rating: 3 of 5 stars

To truth only a brief celebration of victory is allowed between the two long periods during which it is condemned as paradoxical, or disparaged as trivial.

Arthur Schopenhauer is possibly the Western philosopher most admired by non-philosophers. Yet although he was revered by figures as diverse as Richard Wagner, Albert Einstein, and Jorge Luis Borges, his influence within philosophy has been comparatively muted. True, Nietzsche absorbed and then repudiated Schopenhauer, while Wittgenstein and Ryle took kernels of thought and elements of style from him. Compared with that of Hegel, however—whom Schopenhauer detested—his influence has been somewhat limited.

For my part, I came to Schopenhauer fully prepared to fall under his spell. He has much to recommend him. A cosmopolitan polyglot, a lover of art, and a writer of clear prose (at a time when obscurity was the norm), Schopenhauer certainly cuts a more dashing and likable figure than the lifeless, professorial, and opaque Hegel. But I must admit, from the very start, that I was fairly disappointed in this book. Before I criticize it, however, I should offer a little summary.

Schopenhauer published The World as Will and Representation when he was only thirty, and held fast to the views expressed in this book for the rest of his life. Indeed, when he finally published a second edition, in 1844, he decided to leave the original just as it was, only writing another, supplementary volume. He was not a man of tentative conclusions.

He was also not a man of humility. One quickly gets a taste of his flamboyant arrogance, as Schopenhauer demands that his reader read his book twice (I declined), as well as several other essays of his (I took a rain check), in order to fully understand his system. He also, for good measure, berates Euclid for being a bad mathematician, Newton for being a bad physicist, and Winckelmann for being a bad art critic, and has nothing but contempt for Fichte, Schlegel, and Hegel. Kant, his intellectual hero, is more abused than praised. But Schopenhauer would not be a true philosopher if he did not believe that all of his predecessors were wrong, and himself wholly right—about everything.

The quickest way into Schopenhauer’s system is through Kant, which means a detour through Hume.

David Hume threw a monkey wrench into the gears of the knowledge process with his problems of causation and induction. In a nutshell, Hume demonstrated that we are never strictly justified in asserting that A caused B, or in concluding that B will always accompany A. As you might imagine, this makes science rather difficult. Kant’s response to this problem was rather complex, but it depended upon his dividing the world into noumena and phenomena. Everything we see, hear, touch, taste, and smell belongs to the phenomena—the world as we know it. This world, Kant said, is fundamentally shaped by our perception of it. And—crucially—our perception imposes causal relationships upon this observed world.

In this way, Hume’s problems are overcome. We are, indeed, justified in concluding that A caused B, or that B always accompanies A, since that is how our perception shapes our phenomenal world. But Kant pays a steep price for this victory over Hume. For the world of the noumena—the world in-itself, as it exists unperceived and unperceivable—is a world where causal thinking does not apply. In fact, none of our concepts apply, not even space and time. The fundamental reality is, in a word, unknowable. By the very fact of perceiving the world, we distort it so completely that we can never achieve true knowledge.

Schopenhauer begins right at this point, with the division of the world into phenomena and noumena. Kant’s phenomena become Schopenhauer’s representation, with only minimal modifications. Kant’s noumena undergo a more notable transformation, and become Schopenhauer’s will. Schopenhauer points out that, if space and time do not exist for the noumena, then plurality must also not exist. In other words, fundamental reality must be single and indivisible. And though Schopenhauer agrees that observation can never reveal anything of significance about this fundamental reality, he believes that our own private experience can. And when we look inside, what we find is will: the urge to move, to act, and to live.

Reality, then, is fundamentally will—a kind of vital urge that springs up out of nothingness. The reality we perceive, the world of space, time, taste, and touch, is merely a kind of collective hallucination, with nothing to tell us about the truly real.

Whereas another philosopher could have turned this ontology into a kind of joyous vitalism, celebrating the primitive urge that animates us all, Schopenhauer arrives at the exact opposite conclusion. The will, for him, is not something to be celebrated, but defeated; for willing leads to desiring, and desiring leads to suffering. All joy, he argues, is merely the absence of suffering. We always want something, and our desires are painful to us. But satisfying desires provides only a momentary relief. After that instant of satiety, desire creeps back in a thousand different forms, to torture us. And even if we do, somehow, manage to satisfy all of our many desires, boredom sets in, and we are no happier.

Schopenhauer’s ethics and aesthetics spring from this predicament. The only escape is to stop desiring, and art is valuable insofar as it allows us to do this. Beauty operates, therefore, by preventing us from seeing the world in terms of our desires, and encouraging us to see it as a detached observer. When we see a real mountain, for example, we may bemoan the fact that we have to climb it; but when we see a painting of a craggy peak, we can simply admire it for what it is. Art, then, has a deep importance in Schopenhauer’s system, since it helps us towards wisdom and enlightenment. Similarly, ethics consists in denying the will-to-live—in a nutshell, asceticism. The more one overcomes one’s desires, the happier one will be.

So much for the summary; on to evaluation.

To most modern readers, I suspect, Schopenhauer’s metaphysics will be the toughest pill to swallow. Granted, his argument that Kant should not have spoken of ‘noumena’ in the plural, but rather of a single unknowable reality, is reasonable; and if we are to equate that deeper reality with something, then I suppose ‘will’ will do. But this is all just a refinement of Kant’s basic metaphysical premises, which I personally do not accept.

Now, it is valid to note that our experience of reality is shaped and molded by our modes of perception and thought. It is also true that our subjective representation of reality is fundamentally different from the reality being represented. But it strikes me as unwarranted to conclude that reality is therefore unknowable. Consider a digital camera that sprang to life. The camera reasons: “The image I see is a two-dimensional representation of a world of light, shape, and color. But this is just a consequence of my lens and software. Fundamental reality, therefore, must not have any of those qualities—it has no dimensions, no light, no shape, and no color! And if I were to stop perceiving this visible world, the world would simply cease to exist, since it is only a representation.”

I hope you can see that this line of reasoning is not sound. While it is true that a camera only detects certain portions of reality, and that a photo of a mountain is a fundamentally different sort of thing than a real mountain, it is also true that cameras use real data from the outside world to create representations—useful, pleasing, and accurate—of that world. If this were not true, we would not buy cameras. And if our senses were not doing something similar, they would not help us to navigate the world. In other words, we can acknowledge that the subjective world of our experience is a kind of interpretive representation of the world-in-itself, without concluding that the world-in-itself has no qualities in common with the world of our representation. Besides, it does seem a violence done to language to insist that the world of our senses is somehow ‘unreal’ while some unknowable shadow realm is ‘really real.’ What is ‘reality’ if not what we can know and experience?

I also think that there are grave problems with Schopenhauer’s ethics, at least as he presents it here. Schopenhauer prizes the ascetics who try to conquer their own will-to-live. Such a person, he thinks, would necessarily be kind to others, since goodness consists in making less distinction between oneself and others. Thus, Schopenhauer’s virtue results from a kind of ego death. However, if all reality, including us, is fundamentally the will to live, what can be gained from fighting it? Some respite from misery, one supposes. But in that case, why not simply commit suicide? Schopenhauer argues that suicide does not overcome the will, but capitulates to it, since it is an action that springs from the desire to be free from misery. Be that as it may, if there is no afterlife, and if life is only suffering punctuated by moments of relief, there does not seem to be a strong case against suicide. There is not even a strong case against murder, since a mass-murderer is arguably ridding the world of more suffering than any sage ever could.

In short, it is difficult to have an ethics if one believes that life is necessarily miserable. But I would also like to criticize Schopenhauer’s argument about desires. It is true that some desires are experienced as painful, and their satisfaction is only a kind of relief. Reading the news is like that for me—mounting terror punctuated by sighs of relief. But this is certainly not true for all desires. Consider my desire for ice cream. There is absolutely nothing painful in it; indeed, I actually take pleasure in looking forward to eating the ice cream. The ice cream itself is not merely a relief but a positive joy, and afterwards I have feelings of delighted satisfaction. This is a silly example, but I think plenty of desires work this way—from seeing a loved one, to watching a good movie, to taking a trip. Indeed, I often find that I have just as much fun anticipating things as actually doing them.

The strongest part of Schopenhauer’s system, in my opinion, is his aesthetics. For I do think he captures something essential about art when he notes that art allows us to see the world as it is, as a detached observer, rather than through the windows of our desires. And I wholeheartedly agree with him when he notes that, when properly seen, anything can be beautiful. But, of course, I cannot agree with him that art merely provides moments of relief from an otherwise torturous life. I think it can be a positive joy.

As you can see, I found very little to agree with in these pages. But, of course, that is not all that unusual when reading a philosopher. Disagreement comes with the discipline. Still, I did think I was going to enjoy the book more. Schopenhauer has a reputation for being a strong writer, and indeed he is, especially compared to Kant or (have mercy!) Hegel. But his authorial personality—the defining spirit of his prose—is so misanthropic and narcissistic, so haughty and bitter, that it can be very difficult to enjoy. And even though Schopenhauer is not an obscure writer, I do think his writing has a kind of droning, disorganized quality that can make him hard to follow. His thoughts do not trail one another in a neat order, building arguments by a series of logical steps, but flow in long paragraphs that bite off bits of the subject to chew on.

Despite all of my misgivings, however, I can pronounce Schopenhauer a bold and original thinker, who certainly made me think. For this reason, at least, I am happy to have read him.




Review: Phenomenology of Spirit

The Phenomenology of Mind by Georg Wilhelm Friedrich Hegel
My rating: 4 of 5 stars

Georg Wilhelm Friedrich Hegel is easily the most controversial of the canonical philosophers. Alternately revered and reviled, worshiped or scorned, he is a thinker whose conclusions are almost universally rejected and yet whose influence is impossible to escape. Like Herodotus, he is either considered to be the Father of History or the Father of Lies. Depending on who you ask, Hegel is the capstone of the grand Western attempt to explain the world through reason, or the commencement of a misguided stream of metaphysical nonsense which has only grown since.

A great deal of this controversy is caused by Hegel’s obscurity, which is proverbial. His writing is a great inky cloud of abstractions, a bewildering mixture of the pedantic and the mystic, a mass of vague mysteries uttered in technical jargon. This obscurity has made Hegel an academic field unto himself. There is hardly anything you can say about Hegel’s ideas that cannot be contested, which leads to the odd situation we see demonstrated in most reviews of his works, wherein people opine positively and negatively without venturing to summarize what Hegel is actually saying. Some people seem to read Hegel with the attitude of a pious Christian hearing a sermon in another language, believing and revering without understanding; while others conclude that Hegel’s language plays the part of a screen in a magician’s act, concealing cheap tricks under a mysterious veil.

For my part, either dismissing or admiring Hegel without making a serious attempt to understand him is unsatisfactory. The proper attitude toward any canonical thinker is respect tinged with skepticism: respect for influence and originality, skepticism towards conclusions. That being said, most people, when confronted with Hegel’s style, will either incline towards the deifying or the despising stance. My inclination is certainly towards the latter. He is immensely frustrating to read, not to mention aggravating to review, since I can hardly venture to say anything about Hegel without risking the accusation of having fundamentally misunderstood him. Well, so be it.

The Phenomenology of Spirit was Hegel’s first published book, and it is widely considered his masterpiece. It is a history of consciousness. Hegel attempts to trace all of the steps that consciousness must go through—Consciousness, Self-Consciousness, Reason, Spirit, and Religion—before it can arrive at the point of fully adequate knowledge (Absolute Knowledge). Nobody had ever attempted anything similar, and even today this project seems ludicrously ambitious. Not only is the subject original, but Hegel also puts forward a new method of philosophy, the dialectical method. In other words, he is trying to do something no one had ever thought of doing before, using a way of thinking no one had thought of using before.

The Phenomenology begins with its justly famous Preface, which was written after the rest of the book was completed. This Preface alone is an important work, and is sometimes printed separately. Since it is easily the most lucid and eloquent section of the book, I would recommend it to those with even a passing interest in philosophy. This is where Hegel outlines his dialectical method.

The dialectical method is a new type of logic, meant to replace deductive reasoning. Ever since Aristotle, philosophers have mainly relied on deductive arguments. The most famous example is the syllogism (All men are mortal, Socrates is a man, etc.). Deduction received renewed emphasis with Descartes, who thought that mathematics (which is deductive) is the most certain form of knowledge, and that philosophy should emulate this certainty.

The problem with syllogisms and proofs, Hegel thought, is that they divorce content from form. Deductive frameworks are formulaic; different propositions (all pigs are animals, all apples are fruit) can be slotted into the framework indifferently, and still produce an internally consistent argument. Even empirically false propositions can be used (all apples are pineapples), and the argument may still be logically correct, even if it fails to align with reality. In other words, the organization of argument is something independent of the order of the world. In the generation before Hegel, Kant took this even further, arguing that our perception and our logic fundamentally shape the world as it appears to us, meaning that pure reason can never tell us anything about reality in itself.

Hegel found this unsatisfactory. In the words of Frederick Copleston, he was a firm believer in the equivalence of content and form. Every notion takes a form in experience; and every formula for knowledge—whether syllogistic, mathematical, or Kantian—alters the content by imposing upon it a foreign form. All attempts to separate content from form, or vice versa, therefore do an injustice to the material; the two are inseparable.

Traditional logic has one further weakness. It conceives of the truth as a static proposition, an unchanging conclusion derived from unchanging premises. But this fails to do justice to the nature of knowledge. Our search to know the truth evolves through a historical process, adopting and discarding different modes of thought in its restless search to grasp reality. Unlike in a deductive process, where incorrect premises will lead to incorrect conclusions, we often begin with an incorrect idea and then, through trial and error, eventually adopt the correct one.

Deductive reasoning not only mischaracterizes the historical growth of knowledge, but it also is unable to deal with the changing nature of reality itself. The world we know is constantly evolving, shifting, coming to being and passing away. No static formula or analysis—Newton’s equations or Kant’s metaphysics, for example—could possibly describe reality adequately. To put this another way, traditional logic is mechanistic; it conceives reality as a giant machine with moving, interlocking parts, and knowledge as being a sort of blue-print or diagram of the machine. Hegel prefers the organic metaphor.

To use Hegel’s own example, imagine that we are trying to describe an oak tree. Traditional logic might take the mature tree, divide it into anatomical sections that correspond with those of other trees, and end with a description in general terms of a static tree. Hegel’s method, by contrast, would begin with the acorn, and observe the different stages it passes through in its growth to maturity; and the terms of the description, instead of being taken from general anatomic descriptions of trees, would emerge of necessity from the observation of the growing tree itself. The final description would include every stage of the tree, and would be written in terms specific to the tree.

This is only an example. Hegel does not intend for his method to be used by biologists. What the philosopher observes is, rather, Mind or Spirit. Here we run into a famous ambiguity, because the German word Geist cannot be comfortably translated as either “mind” or “spirit.” The edition I used translates the title as the Phenomenology of Mind, whereas later translations have called it The Phenomenology of Spirit. This ambiguity is not trivial. The nature of mind—how it comes to know itself and the world, how it is related to the material world—is a traditional inquiry in philosophy, whereas spirit is something quasi-religious or mystical in flavor. For my part, I agree with Peter Singer in thinking that we ought to try to use “mind,” since it leaves Hegel’s meaning more open, while using “spirit” pre-judges Hegel’s intent.

Hegel is an absolute idealist. All reality is mental (or spiritual), and the history of mind consists in its gradual realization of this momentous fact: that mind is reality. As the famous formula goes, the rational is the real and the real is the rational. Hegel’s project in the Phenomenology is to trace the process, using his dialectical method, by which mind passes from ignorance of its true nature to the realization that it comprises the fabric of everything it knows.

How does this history unfold? Many have described the dialectical process as consisting of thesis, antithesis, and synthesis. The problem with this characterization is that Hegel never used those terms; and, as we’ve seen, he disliked logical formulas. Nevertheless, the description does manage to give a taste of Hegel’s procedure. Mind, he thought, evolved through stages, which he calls “moments.” At each of these moments, mind takes a specific form, in which it attempts to grapple with its reality. However, when mind has an erroneous conception of itself or its reality (which is just mind itself in another guise), it reaches an impasse, where it seems to encounter a contradiction. This contradiction is overcome via a synthesis, where the old conception and its contradiction are accommodated in a wider conception, which will in turn reach its own impasse, and so on until the final stage is reached.

This sounds momentous and mysterious (and it is), but let me try to illustrate it with a metaphor.

Imagine a cell awoke one day in the human body. At first, the cell is only aware of itself as a living thing, and therefore considers itself to be the extent of the world. But then the cell notices that it is limited by its environment. It is surrounded by other cells, which restrict its movement and even compete for resources. The cell then learns to define itself negatively, as against its environment. Not only that, but the cell engages in a conflict with its neighbors, fighting for resources and trying to assert its independence and superiority. But this fight is futile. Every time the cell attempts to restrict resources to its neighbors, it simultaneously impedes the flow of blood to itself. Eventually, after much pointless struggle, the cell realizes that it is a part of a larger structure—say, a nerve—and that it is one particular example of a universal type. In other words, the cell recognizes its neighbors as itself and itself as its neighbors. This process then repeats, from nerves to muscles to organs, until the human body is finally understood as one complete whole, an organism which lives and grows, but which nevertheless consists of distinct, co-dependent elements. Once again, Hegel’s model is organic rather than mechanical.

Just so, the mind awakes in the world and slowly learns to recognize the world as itself, and itself as one cell in the world. The complete unity, the world’s “body,” so to speak, is the Absolute Mind.

Hegel begins his odyssey of knowledge at the traditional Cartesian starting point: sense-certainty. We are first aware of sensations—hot, light, rough, sour—and these are immediately present to us, seemingly truth in its naked form. However, when mind tries to articulate this truth, something curious happens. Mind finds that it can only speak in universals, which fail to capture the particularity and the immediacy of its sensations. Mind tries to overcome this by using terms like “This!” or “Here!” or “Now!” But even these will not do, since what is “here” one moment is “there” the next, and what is “this” one moment is “that” the next. In other words, the truth of sense-certainty continually slips away when you try to articulate it.

The mind then begins to analyze its sensations into perceptions—instead of raw data, we get definite objects in time and space. However, we reach other curious philosophical puzzles here. Why do all the qualities of salt—its size, weight, flavor, color—cohere in one location, persist through time, and reappear regularly? What unites these same qualities in this consistent way? Is it some metaphysical substance that the qualities inhere in? Or is the unity of these qualities just a product of the perceiving mind?

At this point, it is perhaps understandable why Hegel thought that mind comprises all reality. From a Cartesian perspective—as an ego analyzing its own subjective experience—this is true: everything analyzed is mental. And, as Kant argued, the world’s organization in experience may well be due to the mind’s action upon the world as perceived. Thus true knowledge would indeed require an understanding of how our mind shapes our experience.

But Hegel’s premise—that the real is rational and the rational is real—becomes much more difficult to accept once we move into the world of intersubjective reality, where individual minds acknowledge other minds as real and existing in the same universe. For my part, I find it convenient to put the question of the natural world to one side. Hegel had no notion of change in nature; his picture of the world had no Big Bang and no biological evolution. In any case, he did not like Newtonian physics (he thinks, quite dumbly, that the Law of Attraction is the general form of all laws, and that it doesn’t explain anything about nature), and he was not terribly interested in natural science. Hegel was far more preoccupied with the social world; and it is in this sphere that his ideas seem more sensible.

In human society, the real is the rational and the rational is the real, in the sense that our beliefs shape our actions, and our actions shape our environments, and our environments in turn shape our beliefs, in a constantly evolving dialogue—the dialectic. The structure of society is thus intimately related to the structure of belief at any given time and place. Let me explain that more fully.

Hegel makes quite an interesting observation about beliefs. (Well, he doesn’t actually say this, but it’s implied in his approach.) Certain mentalities, even if they can be internally consistent for an individual, reveal contradictions when the individual tries to act out these beliefs. In other words, mentalities reveal their contradictions in action and not in argument. The world created by a mentality may not correspond with the world it “wants” to create; and this in turn leads to a change in mentality, which in turn creates a different social structure, which again might not correspond with the world it is aiming for, and so on until full correspondence is achieved. Some examples will clarify this.

The classic Hegelian example is the master and the slave. The master tries to reduce the slave to the level of an object, to negate the slave’s perspective entirely. And yet, the master’s identity as master is tied to the slave having a perspective to negate; thus the slave must not be entirely objectified, but must retain some semblance of perspective in order for the situation to exist at all. Meanwhile, the slave is supposed to be a nullity with no perspective, a being entirely directed by the master. But the slave transforms the world with his work, and by this transformation asserts his own perspective. (This notion of the slave having his work “alienated” from him was highly influential, especially on Marx.)

Hegel then analyzes Stoicism. The Stoic believes that the good resides entirely in his own mental world, while the exterior world is entirely devoid of value. And yet the Stoic recognizes that he has duties in this exterior world, and thus this world has some moral claim on him. Mind reacts to this contradiction by moving to total Skepticism, believing that the world is unreal and entirely devoid of value, recognizing no duties at all. And yet this is a purely negative attitude, a constant denial of something that is persistently there, and this mode of denial collapses when the Skeptic goes about acting within this supposedly unreal world. Mind then decides that the world is unreal and devoid of value, including itself insofar as it is part of the world, but that value exists in a transcendent sphere. This leads us to medieval Christianity and the self-alienated soul, and so on.

I hope you see by now what I mean by a conception not being able to be acted out without a contradiction. Hegel thought that mind progressed from one stage to another until finally the world was adequate to the concept and vice versa; indeed, at this point the world and the concept would be one, and the real would be rational and the rational real. Thought, action, and world would be woven into one harmonious whole, a seamless fabric of reason.

I am here analyzing Hegel in a distinctly sociological light, which is easily possible in many sections of the text. However, I think this interpretation would be difficult to justify in other sections, where Hegel seems to be making the metaphysical claim that all reality (not just the social world) is mental and structured by reason. Perhaps one could make the argument on Kantian grounds that our mental apparatus, as it evolves through time, shapes the world we experience in progressively different ways. But this would seem to require a lot more traditional epistemology than I see here in the text.

In a nutshell, this is what I understand Hegel to be saying. And I have been taking pains to present his ideas (as far as I understand them) in as positive and coherent a light as I can. So what are we to make of all this?

A swarm of criticisms begin to buzz. The text itself is disorganized and uneven. Hegel spends a great deal of time on seemingly minor subjects, and rushes through major developments. He famously includes a long, tedious section on phrenology (the idea that the shape of the skull reveals a person’s personality), while devoting only a few, very obscure pages to the final section, Absolute Knowledge, which is the entire goal of the development. This latter fact is partially explained by the book’s history. Hegel made a bad deal with his publisher, and had to rush the final sections.

As for prose, the style of this book is so opaque that it could not have been an accident. Hegel leaves many important terms hazily defined, and never justifies his assumptions nor clarifies his conclusions. Obscurity is beneficial to thinkers in that they can deflect criticism by accusing critics of misunderstanding; and the ambiguity of the text means that it can be variously interpreted depending on the needs of the occasion. I think Hegel did something selfish and intellectually irresponsible by writing this way, and even now we still hear the booming thunder of his unintelligible voice echoed in many modern intellectuals.

Insofar as I understand Hegel’s argument, I cannot accept it. Although Hegel presents the dialectic as a method of reasoning, I was not convinced of the necessary progression from one moment to the next. Far from tracing a series of necessary developments, the pattern of the text seemed, rather, to be due entirely to Hegel’s whim.

Where Hegel is most valuable, I think, is in his emphasis on history, especially on intellectual history. This is something entirely lacking in his predecessors. He is also valuable for his way of seeing mind, action, and society as interconnected; and for his observation that beliefs and mentalities are embodied in social relations.

In sum, I am left with the somewhat lame conclusion that Hegel’s canonical status is well-deserved, but so is his controversial reputation. He is infuriating, exasperating, and has left a dubious legacy. But his originality is undeniable, his influence is pervasive, and his legacy, good or bad, will always be with us.


Quotes & Commentary #26: Durant

A sense of humor, being born of perspective, bears a near kinship with philosophy; each is the soul of the other.

—Will Durant, The Story of Philosophy

Durant, though not much of a comedian (and hardly more of a philosopher), did have his funny moments. My favorite of his subtle sarcasms is this delicious pun: “Holland boasted of several ladies who courted in Latin, who could probably conjugate better than they could decline.”

I was reminded of this quote while reading Viktor Frankl’s book, Man’s Search for Meaning. In his brief overview of his therapeutic technique, Logotherapy, Frankl mentions that he often uses humor to help his patients deal with neuroses.

The popular cognitive therapist, David D. Burns, also uses humor to help his patients deal with anxiety and depression. One of his techniques for managing fear is to replace a dreadful fantasy with a funny one. This relies on the same principle as the advice commonly given to people with a fear of public speaking: imagine everyone in the crowd in their undergarments. The effect of this is to transform something dreadfully serious and frightening into something absurd, and even fun.

I remember learning from a documentary I saw long ago (I wish I could remember which one) that human babies laugh when something apparently dangerous turns out, upon closer inspection, to be harmless. For example: A mom hides her face behind her hands. The baby gets confused and nervous. He can no longer see her face. Is that still his mom? What’s going on? Then, the mom takes her hands away, revealing a silly smile. The baby giggles with delight. It was mommy all along!

The effectiveness of humor in dealing with anxiety relies, I think, on this same mechanism. When we manage to see the humor in our situation, we see it from a new point of view, a new perspective in which our problems, which looked terrible from up close, now look silly and harmless.

In a way, to find something humorous, we must see the situation from a greater distance. Instead of getting absorbed in a problem, letting its shape occupy our whole field of vision, we place the problem in a landscape and thus contextualize it. When we do this, often we find ourselves laughing, because the problem, which before seemed so huge, is really small and insignificant in the grand scheme of things.

Here’s a recent example. A book review I wrote, of which I am fairly proud, was somehow deleted off Goodreads. At first I got very annoyed and upset. I had put so much effort into writing it! And I lost all the likes and comments! Then, with a smile, I realized that it is a bit absurd to get worked up about an internet book review. People are struggling to find jobs, managing chronic diseases—and for Pete’s sake Trump is president! My lost book review was nothing to get frustrated about.

As Durant points out, it is this quality of humor—seeing the part within the context of the whole—that most approaches philosophy. Durant does not, of course, mean “philosophy” in the strict, modern sense of the word (the subject that deals with problems of metaphysics, epistemology, ethics, and so forth). He means, rather, philosophy in its classic sense, as a method of regulating one’s life and thoughts in order to be more virtuous and happy. Nowadays, instead of philosophy, we have therapy and self-help books to aid us in this quest. But whatever we decide to call the art of life, I think most of us can agree that humor plays no small part in it.

Just the other day, I was having a conversation with a teacher in the philosophy department of my school. I asked if there were any Spanish philosophers she would recommend. She mentioned a couple names, but then she added: “You know, the most profound Spanish philosophy cannot be found in any philosophy book. It’s in Don Quixote.”

I was struck by this comment, because Unamuno, whom I just finished reading, had the same opinion. And I can’t help agreeing. All profound comedy—and there is no comedy more profound than Don Quixote—necessarily carries with it a profound philosophy. I do not mean by this that you can extract from Cervantes anything similar to Kant’s ethics; only that great comedy requires an ability to see things as they really are, within the context of the whole, and to transmit this vision with punch and savor.

The comedian alive who, in my opinion, comes closest to this quixotic ideal is Louis C.K. His comedy is distinctive for its emphasis on self-mockery. Most often he uses himself as the butt of his jokes. But his comedy is saved from narcissism because, despite his wealth and fame, he convincingly adopts an everyman persona. Whenever he makes fun of himself he is making fun of you, because inevitably you think the same thoughts and do the same things. Yet his comedy isn’t threatening because, however denigrating he can be, everyone in the audience is in it together.

This ability to make fun of yourself is one of the qualities I value most highly. It saves you from being arrogant, condescending, and over-serious. It allows you to be humorous without picking on other people. Self-mockery is also, I think, an excellent antidote to many of life’s petty troubles (like deleted book reviews). If you can take a step back from yourself, and honestly see your faults, your pettiness, and your absurdities—not with bitterness but with forgiving humor—then you will be able to see your successes and your failures with the gentle irony that life, a thoroughly silly thing, so richly deserves.

Review: Story of Philosophy

The Story of Philosophy by Will Durant
My rating: 2 of 5 stars

The Story tried to salt itself with a seasoning of humor, not only because wisdom is not wise if it scares away merriment, but because a sense of humor, being born of perspective, bears a near kinship to philosophy; each is the soul of the other.

A long time ago, as I set about learning philosophy, I bought a used copy of this book, which sat, unread, on my shelves for a few years, its yellowed pages only growing more yellow, and its already cracked and broken spine castigating me from my bookshelf every time I passed by. Thus, about four or five months ago, I finally decided to read this book; but I quickly lost interest. Every time I put the book down, I waited a long time before picking it up again; and it was only when I downloaded an audiobook last month that I was able to finish Durant’s popular history of philosophy.

This difficulty in finishing is the clearest indication of how I felt about the book: I was unimpressed. Though it is by no means a bad book, and one with many good qualities, I can’t say I would recommend it to anyone, for I believe Durant does an injustice to his topic. Simply put, this is a poor history of philosophy and a poor introduction to it; it fails to convey adequately what philosophy is, what philosophers do, and how philosophy developed. There is little of intellectual or academic interest in these pages, and despite its eloquence I often found it quite dull.

The trouble comes early on, when Durant makes this announcement:

The author believes that epistemology has kidnapped modern philosophy, and well nigh ruined it; he hopes for the time when the study of the knowledge-process will be recognized as the business of psychology, and when philosophy will again be understood as the synthetic interpretation of all experience rather than the analytic description of the mode and process of experience itself.

The absurdity of the above paragraph is obvious to anyone who has read a fair share of philosophy. Writing a history of philosophy while omitting epistemology is like writing a history of chemistry while refusing to talk about chemical bonds. Epistemology is a central part of philosophy, and, besides, a central concern of the greatest modern philosophers; so any treatment of the subject lacking epistemology is doomed to miss the mark. The above paragraph also reveals an intellectual weakness. How could epistemology be the subject of psychology, a science? Epistemology asks “What is knowledge?” This is clearly not a question that can be investigated empirically or decided scientifically, for scientific investigation already presupposes that knowledge is empirical in nature. So already Durant is showing himself to be a poor philosopher, as well as a poor historian.

When we get into the thick of Durant’s book, we encounter an even more general problem. Durant’s modus operandi throughout this work is to treat the ideas of philosophers as byproducts of their experiences and their personalities. Not only does this often lead him into cheap psychoanalyzing (such as speculating about how Nietzsche’s father and mother influenced his outlook) and into broad, often ridiculous generalizations about peoples and places (the Germans do this, the Jews do that), but, more damningly, it turns systems of philosophy into mere quirks of personality and whims of fancy. In this book, philosophers are artists, not thinkers. Although Durant would have you believe that this is the wise and cosmopolitan perspective on the matter, it fails completely to do justice to these men.

Philosophy is, among other things, the art of argumentation. Philosophers, at least good philosophers, are extremely focused on the logical reasons for their beliefs. This is embodied in that great creation-myth of Western philosophy, Plato’s tales of Socrates, wherein that old sage wanders from citizen to citizen, perpetually demanding to know the reasons why they believe what they do. Plato’s Socrates is always asking, What do you mean by this word? And why do you mean it that way? The final goal of the philosopher is to harbor no dogmatic opinions—and by dogmatic I mean opinions that are accepted without scrutiny—but rather to probe and investigate every assumption, idea, and goal in life.

Durant’s treatment of philosophers does exactly the opposite. In Durant’s hands, philosophers are mere pundits, who spout theories left and right without taking the time to justify them. Durant’s chapters on their ideas are mere litanies of opinions; and the final impression is that philosophy is just the art of having pompous and high-sounding views about grandiose subjects. It is absolutely worthless to know that Plato believed in a world of ideal forms without knowing why he did so; and the same goes for every other philosopher’s view. This emphasis on reason and argument is what separates philosophy from mere philosophizing; but you will find almost exclusively the latter in this book.

I would be unfair if I didn’t acknowledge that many of this book’s faults are due to its genesis. This book was originally published as a series of pamphlets for the Blue Book series, which were inexpensive paperbacks for worker education. This origin largely explains why the book contains such a huge chronological leap, from Aristotle all the way to Francis Bacon, and also why Durant continually emphasizes the practical over the theoretical, the biographical over the intellectual.

Less excusable, perhaps, was Durant’s choice to write a chapter on Voltaire, who wasn’t even a philosopher, and on Herbert Spencer, who was obsolescent even when this book was written. Much better would have been a chapter on John Locke, who formulated many of the ideas later endorsed by Voltaire, and one on John Stuart Mill, a contemporary of Spencer who has had a much more lasting effect on the subsequent history of philosophy. While I’m at it, I think a chapter on Descartes would have been much better than the chapter on Francis Bacon (a fairly minor figure in the history of philosophy), for Descartes was also a pioneer of science, as well as a great mathematician, not to mention the father of modern philosophy.

For these reasons, I would recommend Russell’s History of Western Philosophy far more highly than this book, as Russell, being himself a philosopher, at least does his best to reconstruct the reasons for other philosophers’ views, even if he sometimes falls short in this task. (I also want to note, in passing, that Durant considers Russell’s early work in logic and mathematics to be pure hogwash, whereas most philosophers today consider it to be Russell’s most enduring work.)

The only place that Durant surpasses Russell is in his chapter on Kant, which I think is a truly excellent piece of work, and a good place to start for any students seeking to understand that obscure German metaphysician. Other than this brief flash of sunlight, the rest of this book is nothing but passing storm clouds, rumbling ominously, constantly threatening to rain, and yet passing overhead with nary a drop, leaving us as parched as they found us.


Quotes & Commentary #20: Locke

Let us then suppose the mind to be, as we say, white paper, void of all characters, without any ideas; how comes it to be furnished? Whence comes it by that vast store, which the busy and boundless fancy of man has painted on it, with an almost endless variety? Whence has it all the materials of reason and knowledge? To this I answer, in one word, from experience: in that, all our knowledge is founded; and from that it ultimately derives itself.

—John Locke, Essay Concerning Human Understanding

This passage is one of the most famous formulations of the tabula rasa account of the human mind. Tabula rasa is Latin for “blank slate,” which is the traditional metaphor used to explain the theory. At birth, the mind is like a blank chalkboard, devoid of writing; our experience is the hand that writes upon it; and our knowledge is the end result.

John Locke held that there was nothing in the mind that did not originate in the senses. Yes, we could have abstract ideas, like our notion of a triangle; but these ideas were simply generalizations from individual triangles that we have experienced through our eyes. Thus all knowledge, however general, abstract, or theoretical, was just a summary of our experience.

In Locke’s own lifetime, this idea was contested by Leibniz, who wrote an entire book-length response to Locke’s Essay, arguing that the mind needed certain innate principles in order to acquire knowledge. And this puzzle—the respective roles of experience, sense data, induction, deduction, and abstraction in our knowledge of the world—forms the basis of Kant’s magnificent Critique of Pure Reason.

Locke was a philosopher, and thus his Essay is largely concerned with epistemology—the nature, limit, and acquisition of knowledge. Yet this debate—empiricism versus rationalism, the “blank slate” versus “innate ideas”—is often reframed, in today’s world, as a scientific controversy.

The most famous example of this that I’m aware of is the controversy in linguistics. How much structure must we posit in the human brain in order to account for language acquisition? The classic answer to this question was given by Chomsky. He argued that, contrary to Locke, we can’t imagine the brain at birth as a blank slate, but must assume an enormous amount of complex machinery.

Several arguments led him to this conclusion, the most famous of which was the “poverty of input.” This is the observation that, without some basic assumptions guiding their derivation, children are not exposed to nearly enough examples of language to derive the correct grammar. For the human infant trying to guess the meaning of an unknown sentence, there is an enormous number of logical possibilities. If the language learner had to eliminate each of these possibilities one by one, it would take far too much time. Thus some in-built, innate schema must allow them to guess intelligently.
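To get a rough feel for the scale of the problem, here is a toy Python sketch of my own (an illustration, not anything from Chomsky or the linguistics literature). Even counting only the possible binary-branching tree shapes for a sentence, ignoring word meanings and grammatical categories entirely, the space of candidate structures grows explosively with sentence length; the counts follow the Catalan numbers. The helper name tree_shapes is hypothetical, invented here for the example.

    from math import comb

    def tree_shapes(num_words: int) -> int:
        """Count the distinct binary-branching trees over a sequence of
        num_words leaves: this is the (num_words - 1)th Catalan number."""
        n = num_words - 1
        return comb(2 * n, n) // (n + 1)

    # The hypothesis space explodes long before ordinary sentence lengths.
    for words in (5, 10, 20, 40):
        print(f"{words:>2} words -> {tree_shapes(words):,} possible tree shapes")

A learner that had to test each of these shapes against experience one by one would be swamped almost immediately; an innate schema that rules out most candidates in advance is one way of pruning the search.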

Not only that, but the learner attempting to divine the deep structure from the surface structure must contend with the fact that the surface structure of a language is often misleading. Consider these two sentences: (A) “I expected the doctor to examine John,” and (B) “I persuaded the doctor to examine John.” Now let’s say we transform the first sentence into the passive voice: “I expected John to be examined by the doctor.” Notice that the meaning of this sentence is identical with that of the original.

Suppose the learner, reasoning by analogy, transformed sentence (B) the same way, resulting in “I persuaded John to be examined by the doctor.” Now notice that the meaning of this new sentence is different from the original’s. In the active voice the doctor is being persuaded, and in the passive voice John is. And this despite undergoing what, superficially at least, appears to be the same transformation as sentence (A). Clearly, there is more to the grammar than meets the eye.

From all this, Chomsky concludes that there must be a “Universal Grammar,” which is a schema in the brain that determines which types of grammatical rules are permissible. Put more simply, Universal Grammar is something that allows learners to guess intelligently, rather than randomly, about the structure of language. Clearly such a schema would be a lot of information to be born with. In this, Chomsky resembles Leibniz and Kant far more than Locke and Hume.

But you don’t really need any of Chomsky’s arguments to realize that there must be some innate organization in our brains that allows us to learn language. After all, almost every person learns a language, while dogs and cats, who also have brains, and who are exposed to about as much language, never pull it off. Computers are better at many cognitive tasks than humans; and yet a few minutes with Google Translate is enough to convince anyone that computers haven’t quite gotten the hang of language. Clearly there is something special about the human brain that allows us to acquire language, while cats and computers struggle.

Thus we are left with several interesting questions. First, how much information and organization does the human brain possess at birth? How much of this information consists of general learning strategies, and how much is specific to language acquisition? And what exactly does this information consist of? Chomsky’s model of Universal Grammar, for example, was an attempt to answer this last question, by proposing a set of conditions that all languages must abide by. But his model has of late been criticized, first, for positing too much organization, and second, for failing to account for the structure of certain rare languages.

I am not a linguist, and thus I cannot hope to solve this controversy, or even make an interesting contribution to it. I only want to point out that this debate, although new in form, harks all the way back to Plato and Aristotle. Plato thought all knowledge was buried in the mind, and all philosophers had to do was uncover it; and Aristotle, like Locke, thought that knowledge derived from the senses. It is obvious to everyone by now that either extreme must be wrong. But apparently 2,500 years hasn’t been enough time for us to come to a conclusion.

On Morality

What does it mean to do the right thing? What does it mean to be good or evil?

These questions have perplexed people since people began to be perplexed about things. They are the central questions of one of the longest lines of intellectual inquiry in history: ethics. Great thinkers have tackled them; whole religions have been based around them. But confusion still remains.

Well, perhaps I should be humble before attempting to solve such momentous questions, seeing who has come before me. And indeed, I don’t claim any originality or finality in these answers. I’m sure they have been thought of before, and articulated more clearly and convincingly by others (though I don’t know by whom). Nevertheless, if only for my own sake, I think it’s worthwhile to set down how I tend to think about morality—what it is, what it’s for, and how it works.

I am much less concerned in this essay with asserting how I think morality should work than with describing how it does work—although I think understanding the second is essential to understanding the first. That is to say, I am not interested in fantasy worlds of selfless people performing altruistic acts, but in real people behaving decently in their day-to-day life. But to begin, I want to examine some of the assumptions that have characterized earlier concepts of ethics, particularly with regard to freedom.

Most thinkers begin with a free individual contemplating multiple options. Kantians think that the individual should abide by the categorical imperative and act with consistency; Utilitarians think that the individual should attempt to promote happiness with her actions. What these systems disagree about is the appropriate criterion. But they do both assume that morality is concerned with free individuals and the choices they make. They disagree about the nature of Goodness, but agree that Goodness is a property of people’s actions, making the individual in question worthy of blame or praise, reward or punishment.

The Kantian and Utilitarian perspectives both have a lot to recommend them. But they do tend to produce an interesting tension: the first focuses exclusively on intentions while the second focuses exclusively on consequences. Yet surely both intentions and consequences matter. Most people, I suspect, wouldn’t call somebody moral if they were always intending to do the right thing and yet always failing. Neither would we call somebody moral if they always did the right thing accidentally. Individually, neither of these systems captures our intuitive feeling that both intentions and consequences are important; and yet I don’t see how they can be combined, because the systems have incompatible intellectual justifications.

But there’s another feature of both Kantian and Utilitarian ethics that I do not like, and it is this: Free will. The systems presuppose individuals with free will, who are culpable for their actions because they are responsible for them. Thus it is morally justifiable to punish criminals because they have willingly chosen something wrong. They “deserve” the punishment, since they are free and therefore responsible for their actions.

I’d like to focus on this issue of deserving punishment, because for me it is the key to understanding morality. By this I mean the notion that doing ill to a criminal helps to restore moral order to the universe, so to speak. But before I discuss punishment I must take a detour into free will, since free will, as traditionally conceived, provides the intellectual foundation for this worldview.

What is free will? In previous ages, humans were conceived of as a composite of body and soul. The soul sent directions to the body through the “will.” The body was material and earthly, while the soul was spiritual and holy. Impulses from the body—for example, anger, lust, gluttony—were bad, in part because they destroyed your freedom. To give into lust, for example, was to yield to your animal nature; and since animals aren’t free, neither is the lustful individual. By contrast, impulses from the soul (or mind) were free because they were unconstrained by the animal instincts that compromise your ability to choose.

Thus free will, as it was originally conceived, was the ability to make choices unconstrained by one’s animal nature and by the material world. The soul was something apart and distinct from one’s body; the mind was its own place, and could make decisions independently of one’s impulses or one’s surroundings. It was even debated whether God Himself could predict the behavior of free individuals. Some people held that even God couldn’t, while others maintained that God did know what people would or wouldn’t do, but God’s knowledge wasn’t the cause of their doing it. (And of course, some people believed in predestination.)

It is important to note that, in this view, free will is an uncaused cause. That is, when somebody makes a decision, this decision is not caused by anything in the material world as we know it. The choice comes straight from the soul, bursting into our world of matter and electricity. The decision would therefore be impossible to predict by any scientific means. No amount of brain imaging or neurological study could explain why a person made a certain decision. Nor could the decision be explained by cultural or social factors, since individuals, not groups, were responsible for them. All decisions were therefore caused by individuals, and that’s the essence of freedom.

It strikes me that this is still how we tend to think about free will, more or less. And yet, this view is based on an outdated understanding of human behavior. We now know that human behavior can be explained by a combination of biological and cultural influences. Our major academic debate—nature vs. nurture—presupposes that people don’t have free will. Behavior is the result of the way your genes are influenced by your environment. There is no evidence for the existence of the soul, and there is no evidence that the mind cannot be explained through understanding the brain.

Furthermore, even without the advancements of the biological and social sciences, the old way of viewing things was not philosophically viable, since it left unexplained how the soul affects the body and vice versa. If the soul and the body were metaphysically distinct, how could the immaterial soul cause the material body to move? And how could a pinch in your leg cause a pain in your mind? What’s more, if there really was an immaterial soul that was causing your body to move, and if these bodily movements truly didn’t have any physical cause, then it’s obvious that your mind would be breaking the laws of physics. How else could the mind produce changes in matter that didn’t have any physical cause?

I think this old way of viewing the body and the soul must be abandoned. Humans do not have free will as originally conceived. Humans do not perform actions that cannot be scientifically predicted or explained. Human behavior, just like cat behavior, is not above scientific explanation. The human mind cannot generate uncaused causes, and does not break the laws of physics. We are intelligent apes, not entrapped gods.

Now you must ask me: But if human behavior can be explained in the same way that squirrel behavior can, how do we have ethics at all? We don’t think squirrels are capable of ethical or unethical behavior because they don’t have minds. We can’t hold a squirrel to any ethical standard, and we therefore can’t justifiably praise or censure a squirrel’s actions. If humans aren’t categorically different from squirrels, then don’t we have to give up on ethics altogether?

This conclusion is not justified. Even though I think it is wrong to say that certain people “deserve” punishment (in the Biblical sense), I do think that certain types of consequences can be justified as deterrents. The difference between humans and squirrels is not that humans are free, but that humans are capable of thinking about the long-term consequences of an action before committing it. Individuals should be held accountable, not because they have free will, but because humans have a great deal of behavioral flexibility, which allows their behavior to be influenced by the threat of prison.

This is why it is justifiable to lock away murderers. If it is widely known among the populace that murderers get caught and thrown into prison, this reduces the number of murders. Imprisoning squirrels for stealing peaches, on the other hand, wouldn’t do anything at all, since the squirrel community wouldn’t understand what was going on. With humans, the threat of punishment acts as a deterrent. Prison becomes part of the social environment, and therefore will influence decision-making. But in order for this threat to act as an effective deterrent, it cannot be simply a threat; real murderers must actually face consequences or the threat won’t be taken seriously and thus won’t influence behavior.

To understand how our conception of free will affects the way we organize our society, consider the case of drug addiction. In the past, addicts were seen as morally depraved. This was a direct consequence of the way people thought about free will. If people’s decisions were made independently of their environment or biology, then there were no excuses or mitigating circumstances for drug addicts. Addicts were simply weak, depraved people who mysteriously kept choosing self-destructive behavior. What resulted was the disastrous war on drugs, a complete fiasco. Now we know that it is absurd to throw people into jail for being addicted, because addicts are not capable of acting otherwise. That is the very definition of addiction: one’s decision-making abilities have been impaired.

As we’ve grown more enlightened about drug addiction, we’ve realized that throwing people in jail doesn’t solve anything. Punishment does not act as an effective deterrent when normal decision-making is compromised. By moving to a system where addicts are given treatment and support, we have effectively transitioned from the old view of free will to the new view that human behavior is the result of biology, environment, and culture. We don’t hold addicts “responsible” because we know it would be like holding a squirrel responsible for burying nuts. This is a step forward, and it has been taken by abandoning the old views of free will.

I think we should apply this new view of human behavior to other areas of criminal activity. We need to get rid of the old notions of free will and punishment. We must abandon the idea of punishing people because they “deserve” it. Murderers should be punished, not because they deserve to suffer, but for two reasons: first, because they have shown themselves to be dangerous and should be isolated; and second, because their punishment acts as a deterrent to future murderers. Punishment is just only insofar as these two criteria are met. Once a murderer is made to suffer more than is necessary to deter future crimes, and is isolated more than is necessary to protect others, it is unjustifiable and wrong to punish him further.

In short, we have to give up on the idea that inflicting pain and discomfort on a murderer helps to restore moral balance to the universe. Vengeance in all its forms should be removed from our justice system. It is not the job of us or anyone else to seek retribution for wrongs committed. Punishments are justifiable only insofar as they help to protect the community. The aim of punishing murderers is neither to hurt nor to help them, but to prevent other people from becoming murderers. And this, I think, is why barbarous methods of torture and execution are wrong: I very much doubt that such brutal punishments can be justified by any additional efficacy in deterrence, though I’m sure there is interesting research on this somewhere.

Seen in this way, morality can be understood in the same way we understand language—as a social adaptation that benefits the community as a whole as well as individual members of the community. Morality is a code of conduct imposed by the community on its members, and deviations from this code of conduct are justifiably punished for the safety of the other members of the community. When this code is broken, a person forfeits the protection of the code, and is dealt with in such a way that future deviations from the moral code are discouraged.

Just as Wittgenstein said that a private language is impossible, so I’d argue that a private morality is impossible. A single, isolated individual can be neither moral nor immoral. People are born with a multitude of desires; and every desire is morally neutral. A moral code comes into play when two individuals begin to cooperate. This is because the individuals will almost inevitably have some desires that conflict. A system of behavior is therefore necessary if the two are to live together harmoniously. This system of behavior is their moral code. In just the same way that language results when two people both use the same sounds to communicate the same messages, morality results when two people’s desires and actions are in harmony. Immorality arises when the harmonious arrangement breaks down, and one member of the community satisfies their desire at the expense of the others. Deviations of this kind must have consequences if the system is to maintain itself, and this is the justification for punishment.

One thing to note about this account of moral systems is that they arise for the well-being of their participants. When people are working together, when their habits and opinions are more or less in harmony, when they can walk around their neighborhood without fearing every person they meet, both the individual and the group benefit. This point is worth stressing, since we now know that the human brain is the product of evolution, and therefore we must surmise that universal features of human behavior, such as morality, are adaptive. The fundamental basis for morality is self-interest. What distinguishes moral from immoral behavior is not that the first is unselfish while the second is selfish, but that the first is more intelligently selfish than the second.

It isn’t hard to see how morality is adaptive. One need only consider the basic tenets of game theory. In the short term, cooperating with others may not be as advantageous as simply exploiting them. Robbery is a quicker way to make money than farming. And indeed, the potentially huge advantages of purely selfish behavior explain why unethical behavior occurs: sometimes it benefits individuals more to exploit than to help one another. Either that, or certain individuals—whether from ignorance or desperation—are willing to risk long-term security for short-term gains. Nevertheless, moral behavior tends to be more advantageous in general, if only because selfish behavior is riskier. All unethical behavior, even if carried on in secret, carries the risk of making enemies; and in the long run, enemies are less useful than friends. The funny thing about altruism is that it is often more gainful than selfishness.
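To make the game-theoretic point concrete, here is a toy simulation of the iterated prisoner’s dilemma, the sort of model Dawkins discusses. Everything in it is illustrative: the payoff numbers are the conventional textbook values, not anything from The Selfish Gene, and the two strategies are the standard “always defect” and “tit for tat.”

```python
# A minimal sketch of the iterated prisoner's dilemma, with the
# conventional (and here purely illustrative) payoffs: mutual
# cooperation pays 3 each, mutual defection 1 each, and a lone
# exploiter gets 5 while the exploited party gets 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(history):
    """Pure selfishness: exploit on every encounter."""
    return "D"

def tit_for_tat(history):
    """Intelligent selfishness: cooperate first, then mirror the partner."""
    return history[-1] if history else "C"

def play(strategy_a, strategy_b, rounds=100):
    """Return the total payoff each strategy accumulates over repeated play."""
    history_a, history_b = [], []  # what each player has seen the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

print(play(always_defect, tit_for_tat))    # (104, 99): exploitation barely wins
print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation compounds
print(play(always_defect, always_defect))  # (100, 100): mutual exploitation stagnates
```

Exploitation wins any single encounter (5 against 0), but over a hundred encounters two cooperators accumulate three times what two mutual exploiters do. That, in miniature, is what it means to say that moral behavior is the more intelligently selfish strategy.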

Thus this account of morality can be harmonized with an evolutionary account of human behavior. But what I find most satisfying about this view of morality is that it allows us to see why we care both about intentions and consequences. Intentions are important in deciding how to punish misconduct because they help determine how an individual is likely to behave in the future. A person who stole something intentionally has demonstrated a willingness to break the code, while a person who took something by accident has only demonstrated absent-mindedness. The first person is therefore more of a risk to the community. Nevertheless, it is seldom possible to prove what somebody intended beyond the shadow of a doubt, which is why it is also necessary to consider the consequences of an action. What is more, carelessness as regards the moral code must be forcibly discouraged, otherwise the code will not function properly. This is why, in certain cases, breaches of conduct must be punished even if they were demonstrably unintentional—to discourage other people in the future from being careless.

Let me pause here to sketch out some more philosophical objections to the Utilitarian and Kantian systems, besides the fact that they don’t adequately explain how we tend to think about morality. Utilitarianism does capture something important when it proclaims that actions should be judged insofar as they further the “greatest possible happiness.” Yet taken by itself this doctrine has some problems. The first is that you never know how something is going to turn out, and even the most concerted efforts to help people sometimes backfire. Should these efforts, made in good faith, be condemned as evil if they don’t succeed? What’s more, Utilitarian ethics can lead to disturbing moral questions. For example, is it morally right to kill somebody if you can use his organs to save five other people? Besides this, if the moral injunction is to work constantly towards the “greatest possible happiness,” then we might even have to condemn simple things like a game of tennis, since two people playing tennis certainly could be doing something more humanitarian with their time and energy.

The Kantian system has the opposite problem in that it stresses good intentions and consistency to an absurd degree. If the essence of immorality is to make an exception of oneself—which covers lying, stealing, and murder—then telling a fib is morally equivalent to murdering somebody in cold blood, since both of those actions equally make exceptions of the perpetrator. This is what results if you overemphasize consistency and utterly disregard consequences. What’s more, intentions are, as I said above, basically impossible to prove—and not only to other people, but also to yourself. Can you prove, beyond a shadow of a doubt, that your intentions were pure yesterday when you accidentally said something rude? How do you know your memory and your introspection can be trusted? However, let me leave off with these objections because I think entirely too much time in philosophy is given over to tweezing apart your enemies’ ideas and not enough to building your own.

Thus, to repeat myself, both consequences and intentions, both happiness and consistency, must be part of any moral theory if it is to capture how we do and must think about ethics. Morality is an adaptation. The capacity for morality has evolved because moral systems benefit both groups and individuals. Morality is rooted in self-interest, but it is an intelligent form of self-interest that recognizes that other people are more useful as allies than as enemies. Morality is neither consistency nor pleasure. Morality is consistency for the sake of pleasure. This is why moral strictures that demand that people devote their every waking hour to helping others, or never make exceptions of themselves, are self-defeating: when a moral system is onerous, it isn’t performing its proper function.

But now I must deal with that fateful question: Is morality absolute or relative? At first glance it would seem that my account puts me squarely in the relativist camp, seeing that I point to a community code of conduct. Nevertheless, when it comes to violence I am decidedly a moral absolutist. This is because I think that physical violence can only ever be justified by citing defense. First, to use violence to defend yourself from violent attack is neither moral nor immoral, because at that point the moral code has already broken down. The metaphorical contract has been broken, and you are now in a situation where you must either fight, run, or be killed. The operant rule is now survival, not morality. For the same reason, a whole community may justifiably protect itself from invasion by an enemy force (although capitulating is equally defensible). And lastly, violence (in the form of imprisonment) is justified in the case of criminals, for the reasons I discussed above.

What if there are two communities, community A and community B, living next to one another? Both communities have their own moral codes, which their members abide by. What if a person from community A encounters a person from community B? Is it justifiable for either of them to use violence against the other? After all, each of them is outside the purview of the other’s moral code, since moral codes develop within communities. Well, in practice, situations like this commonly result in violence. Whenever Europeans encountered a new community—whether in the Americas or in Africa—the result was typically disastrous for that community. This isn’t simply due to the wickedness of Europeans; it has been a constant throughout history: when different human communities interact, violence is very often the result. And this, by the way, is one of the benefits of globalization. The more people come to think of humanity as one community, the less violence we will experience.

Nevertheless, I think that violence between people from different communities is ultimately immoral, and here is why. To feel it is permissible to kill somebody just because they are not in your group is to consider that person subhuman—as fundamentally different. This is what we now call “Othering,” and it is what underpins racism, sexism, religious bigotry, homophobia, and xenophobia. But of course we now know that it is untrue that other communities, other religions, other races, women, men, homosexuals, or anyone else are “fundamentally” different or in any way subhuman. It is simply incorrect. And I think the recognition that we all belong to one species—with only fairly superficial differences in opinions, customs, rituals, and so on—is the key to moral progress. Moral systems can be said to be comparatively advanced or backward to the extent that they recognize that all humans belong to the same species. In other words, moral systems can be evaluated by looking at how many types of people they include.

This is the reason why it is my firm belief that the world as it exists today—full as it still is of all sorts of violence and prejudice—is morally superior to any that came before. Most of us have realized that racism was wrong because it was based on a lie; and the same goes for sexism, homophobia, religious bigotry, and xenophobia. These forms of bias were based on misconceptions; they were not only morally wrong, but factually wrong.

Thus we ought to be tolerant of immorality in the past, for the same reason that we excuse people in the past for being wrong about physics or chemistry. Morality cannot be isolated from knowledge. For a long time, the nature of racial and sexual differences was unknown. Europeans had no experience, and thus no understanding, of non-Western cultures. All sorts of superstitions and religious injunctions were believed in, to an extent most of us can’t even appreciate now. Before widespread education and the scientific revolution, people based their opinions on tradition rather than evidence. And in just the same way that it is impossible to justly put someone in prison without evidence of their guilt, it is impossible to be morally developed if your beliefs are based on misinformation. Africans and women were once believed to be mentally inferior; homosexuals were believed to be possessed by evil spirits. Now we know that there is no evidence for these views, and in fact evidence to the contrary, so we can cast them aside; but earlier generations were not so lucky.

To the extent, therefore, that backward moral systems are based on a lack of knowledge, they must be tolerated. This is why we ought to be tolerant of other cultures and of the past. But to the extent that facts are wilfully disregarded in a moral system, that system can be said to be corrupt. Thus the real missionaries are not the ones who spread religion, but the ones who spread knowledge, for an increased understanding of the world allows us to develop our morals.

These are my ideas in their essentials. But for the sake of honesty I have to add that the ideas I put forward above have been influenced by my studies in cultural anthropology, as well as my reading of Locke, Hobbes, Hume, Spinoza, Santayana, Ryle, Wittgenstein, and of course by Mill and Kant. I was also influenced by Richard Dawkins’s discussion of game theory in his book The Selfish Gene. Like most third-rate intellectual work, this essay is, for the most part, a muddled hodgepodge of other people’s ideas.