Review: Autobiography (Darwin)

The Autobiography of Charles Darwin, 1809–82 by Charles Darwin

My rating: 4 of 5 stars

I have attempted to write the following account of myself, as if I were a dead man in another world looking back at my own life. Nor have I found this difficult, for life is nearly over with me. I have taken no pains about my style of writing.

This is the quintessential scientific autobiography, a brief and charming book that Darwin wrote “for nearly an hour on most afternoons” for a little over two months. Originally published in 1887—five years after the naturalist’s death—it was somewhat censored, the more controversial religious opinions being taken out. It was only in 1958, to celebrate the centennial of The Origin of Species, that the full version was restored, edited by one of Darwin’s granddaughters, Nora Barlow.

The religious opinions that Darwin expresses are, nowadays, not enough to raise eyebrows. In short, his travels and his research slowly eroded his faith until all that remained was an untroubled agnosticism. What is interesting is that Darwin attributes to his loss of faith his further loss of sensitivity to music and to grand natural scenes. Apparently, in later life he found himself unable to experience the sublime. His scientific work also caused him to lose his appreciation for music, pictures, and poetry, which he heartily regrets: “My mind seems to have become a kind of machine for grinding general laws out of large collections of facts,” he says, and attributes to this the fact that “for many years I cannot endure to read a line of poetry.”

The most striking and lovable of Darwin’s qualities is his humility. He notes his lack of facility with foreign languages (which partially caused him to refuse Marx’s offer to dedicate Kapital to him), his terrible ear for music, his difficulty with writing, his incompetence in mathematics, and repeatedly laments his lack of higher aesthetic sensitivities. His explanation for his great scientific breakthrough is merely a talent for observation and dogged persistence. He even ends the book by saying: “With such moderate abilities as I possess, it is truly surprising that thus I should have influenced to a considerable extent the beliefs of scientific men on some important point.” It is remarkable that such a modest and retiring man should have stirred up one of the greatest revolutions in Western thought. Few thinkers have been more averse to controversy.

This little book also offers some reflection on the development of his theory—with the oft-quoted paragraph about reading Malthus—as well as several good portraits of contemporary thinkers. But the autobiography is not nearly as full as one might expect, since Darwin skips over his voyage on the Beagle (he had already written an excellent book about it) and since the second half of his life was extremely uneventful. For Darwin developed a mysterious ailment that kept him mostly house-bound, so much so that he did not even go to his father’s funeral. The explanation eluded doctors in his time and has resisted firm diagnosis ever since. But the consensus seems to be that it was at least in part psychological. It did give Darwin a convenient excuse to avoid society and focus on his work.

The final portrait which emerges is that of a scrupulous, methodical, honest, plainspoken, diffident, and level-headed fellow. It is easy to imagine him as a retiring uncle or a reserved high school teacher. That such a man, through a combination of genius and circumstance—and do not forget that he almost did not go on that famous voyage—could scandalize the public and make a fundamental contribution to our picture of the universe, is perhaps the greatest argument that ever was against the eccentric genius trope.

View all my reviews

Review: Why Buddhism is True

Why Buddhism is True: The Science and Philosophy of Meditation and Enlightenment by Robert Wright

My rating: 3 of 5 stars

A far more accurate title for this book would be Why Mindfulness Meditation is Good. For as Wright—who does not consider himself a Buddhist—admits, he is not really here to talk about any form of traditional Buddhism. He does not even present a strictly “orthodox” view of any secular, Western variety of Buddhism. Instead, this is a rather selective interpretation of some Buddhist doctrines in the light of evolutionary psychology.

Wright’s essential message is that the evolutionary process that shaped the human brain did not adequately program us for life in the modern world; and that mindfulness meditation can help to correct this bad programming.

The first of these claims is fairly uncontroversial. To give an obvious example, our love of salt, beneficial when sodium was hard to come by in natural products, has become maladaptive in the modern world where salt is cheap and plentiful. Our emotions, too, can misfire nowadays. Caring deeply that people have a high opinion of you makes sense when you are, say, living in a small village full of people you know and interact with daily; but it makes little sense when you are surrounded by strangers on a bus.

This mismatch between our emotional setup and the newly complex social world is one reason for rampant stress and anxiety. Something like a job interview—trying to impress a perfect stranger to earn a livelihood—simply didn’t exist for our ancestors. This can also help explain tribalism, which Wright sees as the most pressing danger of the modern world. It makes evolutionary sense to care deeply for oneself and one’s kin, with some close friends thrown in; and those who fall outside of this circle should, following evolutionary logic, be treated with suspicion—which explains why humans are so prone to dividing themselves into mutually antagonistic groups.

But how can mindfulness meditation help? Most obviously, it is a practice designed to give us some distance from our emotions. This is done by separating the feeling from its narrative. In daily life, for example, anger is never experienced “purely”; we always get angry about something; and the thought of this event is a huge component of its experience. But the meditator does her best to focus on the feeling itself, to examine its manifestation in her body and brain, while letting go of the corresponding narrative. Stripped of the provoking incident, the feeling itself ceases to be provocative; and the anger may even disappear completely.

Explained in this way, mindfulness meditation is the mirror image of Cognitive-Behavioral Therapy (CBT). In CBT the anger is attacked from the opposite side: by focusing on the narrative and subjecting it to logical criticism. In my experience, at least, the things one tells oneself while angry rarely stand up to cool analysis. And when one ceases to believe in the thought, the feeling disappears. The efficacy of both mindfulness meditation and CBT, then, is based on the interdependence of feeling and thought. If the two are separated—either by focusing on the feeling during meditation, or on the thought through analysis—the emotion disappears.

This, in a nutshell, is how mindfulness meditation can be therapeutic. But Wright wants to make a far more grandiose claim: that mindfulness meditation can reveal truths about the nature of mind, the world, and morality.

One of the central ideas of Buddhism is that of “emptiness”: that the enlightened meditator sees the world as empty of essential form. The first time I encountered this idea in a Buddhist text it made no sense to me; but Wright gives it an intriguing interpretation. Our brain, designed to survive, naturally assigns value to things in our environment based on how useful or harmful they are to us. These evaluations are, according to Wright’s theory, experienced as emotional reactions. I have quite warm and fuzzy feelings about my laptop, for example; and even the communal computers where I work evoke in me a comforting sense of familiarity and utility.

These emotions, which are sometimes very tiny indeed, are what give experiential reality a sense of essence. The emotions, in other words, help us to quickly identify and use objects: I don’t have to think too much about the computers, for example, since the micro-emotion brings its instrumental qualities quickly to my attention. The advantages of this are obvious to anyone in a hurry. Likewise, this emotional registering is equally advantageous in avoiding danger, since taking time to ponder a rattlesnake isn’t advisable.

But the downside is that we can look at the world quite narrowly, ignoring the sensuous qualities of objects in favor of an instrumental view. Visual art actively works against this tendency, I think, by creating images that thwart our normal registering system, thus prompting us into a sensuous examination of the work. Good paintings make us into children again, exploring the world without worrying about making use of things. Mindfulness meditation is supposed to engender this same attitude, not just with regards to a painting, but to everything. Stripped of these identifying emotional reactions, the world might indeed seem “empty”—empty of distinctions, though full of rich sensation.

With objects, it is hard to see why this state of emptiness would be very desirable. (Also, it should be said that this idea of micro-emotions serving as registers of essential distinctions is Wright’s interpretation of the psychological data, and is rather speculative.) But with regards to humans, this mindset might have its advantages. Instead of attributing essential qualities of good and bad to somebody, we might see that their behavior can vary quite a bit depending on circumstances, and this can make us less judgmental and more forgiving.

Wright also has a go at the traditional Buddhist idea that the self is a delusion. According to what we know about the brain, he says, there is no executive seat of consciousness. He cites the famous split-brain experiments, and others like it, to argue that consciousness is not the powerful decision-maker we once assumed, but is more like a publicity agent: making our actions seem more cogent to others.

This is necessary because, underneath the apparent unity of conscious experience, there are several domain-specific “modules”—such as for sexual jealousy, romantic wooing, and so on—that fight amongst themselves in the brain for power and attention. Each module governs our behavior in different ways; and environmental stimuli determine which module is in control. Our consciousness gives a sense of continuity and coherence to this shifting control, which makes us look better in the eyes of our peers—or that’s how the theory goes, which Wright says is well-supported.

In any case, the upshot of this theory still would not be that the self doesn’t exist; only that the self is more fragmented and less executive than we once supposed. Unfortunately, the book steeply declines in quality in the last few chapters—where Wright tackles the most mystical propositions of Buddhism—when the final stage of the no-self argument is given. This leads him into the following speculations:

If our thoughts are generated by a variety of modules, which use emotion to get our attention; and if we can learn to dissociate ourselves from these emotions and see the world as “empty”; if, in short, we can reach a certain level of detachment from our thoughts and emotions: then, perhaps, we can see sensations arising in our body as equivalent to sensations arising from without. And maybe, too, this state of detachment will allow us to experience other people’s emotions as equivalent to our own, like how we feel pain from seeing a loved one in pain. In this case, can we not be said to have seen the true oneness of reality and the corresponding unreality of personal identity?

These lofty considerations aside, when I am struck by a car they had better not take the driver to the emergency room; and when Robert Wright gets a book deal he would be upset if they gave me the money. My point is that this experience of oneness in no way undermines the reality of distinct personal identity, without which we could hardly go a day. And this state of perfect detachment is arguably, contra Wright, a far less realistic way of seeing things, since being genuinely unconcerned as to whom a pain belonged, for example, would make us unable to help. (Also, in this way, contra Wright, it would make us obviously less moral.)

More generally, I think Wright is wrong in insisting that meditation can help us to experience reality more “truly.” Admittedly, I know from experience that meditation can be a great aid to introspection and can allow us to deal with our emotions more effectively. But the notion that a meditative experience can allow us to see a metaphysical truth—the unreality of self or the oneness of the cosmos—I reject completely. An essentially private experience cannot confirm or deny anything, as Wright himself says earlier on.

I also reject Wright’s claim that meditation can help us to see moral reality more clearly. By this he means that the detachment engendered by meditation can allow us to see every person as equally valuable rather than selfishly considering one’s own desires more important.

Now, I do not doubt that meditation can make people calmer and even nicer. But detachment does not lead logically to any moral clarity. Detachment is just that—detachment, which means unconcern; and morality is impossible without concern. Indeed, it seems to me that an enlightened person would be even less likely to improve the world, since they can accept any situation with perfect equanimity. Granted, if everyone were perfectly enlightened there would be no reason to improve anything—but I believe the expression about hell freezing over applies here.

Aside from the intellectual weakness of these later chapters, full as they are of vague hand-waving, the book has other flaws. I often got the sense that Wright was presenting the psychological evidence very selectively, emphasizing the studies and theories that accorded with his interpretations of Buddhism, without taking nearly enough time to give the contrasting views. On the other hand, he interprets the Buddhist doctrines quite freely—so in the end, when he says that modern science is confirming Buddhism, I wonder what is confirming what, exactly. The writing, while quite clear, was too hokey and jokey for me.

Lastly, I found his framing of meditation as a way to save humanity from destructive tribalism to be both naïve and misguided. In brief, I think that we ought to try to create a society in which the selfish interests of the greatest number of people are aligned. Selfish attachment, while potentially narrow, need not be if these selves are enmeshed in mutually beneficial relationships; and some amount of attachment, with its concomitant dissatisfactions, seems necessary for people to exert great effort in improving their station and thus changing our world.

Encouraging people to become selflessly detached, on the other hand, besides being unrealistic, also strikes me as generally undesirable. For all the suffering caused by attachment—of which I am well aware—I am not convinced that life is better without it. As Orwell said: “Many people genuinely do not wish to be saints, and it is probable that some who achieve or aspire to sainthood have never felt much temptation to be human beings.”

View all my reviews

Review: Voyage of the Beagle

Voyage of the Beagle by Charles Darwin

My rating: 4 of 5 stars

This book is really a rare treasure. Is there anything comparable? Here we have the very man whose ideas have completely revolutionized our understanding of life, writing with charm about the very voyage which sparked and shaped his thinking on the subject. And even if this book were not a window into the mind of one of history’s most influential thinkers, it would still be entertaining on its own merits. Indeed, the public at the time thought so, making Darwin into a bestselling author.

I can hardly imagine how fascinating it would have been for a nineteenth-century Englishman to read about the strange men and beasts in different parts of the world. Today the world is so flat that almost nothing can surprise. But what this book has lost in exotic charm, it makes up for in historical interest; for now it is a fascinating glimpse into the world of 150 years ago. Through Darwin’s narrative, we both look out at the world as it was, and into the mind of a charming man. And Darwin was charming. How strange it is that one of today’s most vicious debates—creationism vs. evolution, religion vs. science—was ignited by somebody as mild-mannered and likable as Mr. Darwin.

His most outstanding characteristic is his curiosity; everything Darwin sees, he wants to learn about: “In England any person fond of natural history enjoys in his walks a great advantage, by always having something to attract his attention; but in these fertile climates, teeming with life, the attractions are so numerous, that he is scarcely able to walk at all.”

As a result, the range of topics touched upon in this volume is extraordinary: botany, entomology, geology, anthropology, paleontology—the list goes on. Darwin collects and dissects every creature he can get his hands on; he examines fish, birds, mammals, insects, spiders. (Admittedly, the descriptions of anatomy and geological strata were often so detailed as to be tedious; Darwin, though brilliant, could be very dry.) In the course of these descriptions, Darwin also indulged in quite a bit of speculation, offering an interesting glimpse into both his thought-process and the state of science at that time. (I wonder if any edition includes follow-ups of these conjectures; it would have been interesting to see how they panned out.)

In retrospect, it is almost unsurprising that Darwin came up with his theory of evolution, since he encounters many things that are perplexing and inexplicable without it. Darwin finds fossils of extinct megafauna, and wonders how animals so large could have perished completely. He famously sees examples of one body-plan being adapted—like a theme and variations—in the finches of the Galapagos Islands. He also notes that the fauna and flora on those islands are related to, though quite different from, that in mainland South America. (If life there was created separately, why wouldn’t it be completely different? And if it was indeed descended from the animals on the mainland, what made it change?)

Darwin also sees abundant examples of convergent evolution—two distinct evolutionary lines producing similar results in similar circumstances—in Australia:

A little time before this I had been lying on a sunny bank, and was reflecting on the strange character of the animals in this country as compared with the rest of the world. An unbeliever in everything but his own reason might exclaim, ‘Two distinct Creators must have been at work; their object, however, has been the same & certainly the end in each case is complete.’

More surprisingly, Darwin finds that animals on isolated, uninhabited islands tend to have no fear of humans. And, strangely enough, an individual animal from these islands cannot even be taught to fear humans. Why, Darwin asks, does an individual bird in Europe fear humans, even though it has never been harmed by one? And why can’t you train an individual bird from an isolated island to fear humans? My favorite anecdote is of Darwin repeatedly throwing a turtle into the water, and having it return to him again and again—because, as Darwin notes, its natural predators are ocean-bound, and it has adapted to see the land as a place of safety. Darwin also manages to walk right up to an unwary fox and kill it with his geological hammer.

You can see how all of these experiences, so odd without a theory of evolution, become clear as day when Darwin’s ideas are embraced. Indeed, many are still textbook examples of the implications of his theories.

This book would have been extraordinary just for the light it sheds on Darwin’s early experiences in biology, but it contains many entertaining anecdotes as well. It is almost a Bildungsroman: we see the young Darwin, a respectable Englishman, astounded and amazed by the wide world. He encounters odd creatures, meets strange men, and travels through bizarre landscapes. And, as in all good coming-of-age stories, he often makes a fool of himself:

The main difficulty in using either a lazo or bolas, is to ride so well, as to be able at full speed, and while suddenly turning about, to whirl them so steadily about the head, as to take aim: on foot any person would soon learn the art. One day, as I was amusing myself by galloping and whirling the balls round my head, by accident the free one struck a bush; and its revolving motion being thus destroyed, it immediately fell to the ground, and like magic caught one hind leg of my horse; the other ball was then jerked out of my hand, and the horse fairly secured. Luckily he was an old practiced animal, and knew what it meant; otherwise he would probably have kicked till he had thrown himself down. The Gauchos roared with laughter; they cried they had seen every sort of animal caught, but had never before seen a man caught by himself.

At this point, I am tempted to get carried away and include all of the many quotes that I liked. Darwin writes movingly about the horrors of slavery, he includes some vivid description of “savages,” and even tells some funny stories. But I will leave these passages to be discovered by the curious reader, who, in his voyage through the pages of this book, will indulge in a voyage far more comfortable than, and perhaps half as fascinating as, Darwin’s own. At the very least, the fortunate reader need not fear exotic diseases (Darwin suffered from ill health the rest of his days) or heed Darwin’s warning to the potential traveler at sea: “If a person suffer much from sea-sickness, let him weigh it heavily in the balance. I speak from experience: it is no trifling evil which may be cured in a week.”

View all my reviews

Review: Defending Science—Within Reason

Defending Science—Within Reason: Between Scientism and Cynicism by Susan Haack

My rating: 4 of 5 stars

Is quark theory or kwark theory politically more progressive?—the question makes no sense.

Ever since I can remember I was fascinated by science and its discoveries. Like Carl Sagan and Stephen Jay Gould, I grew up in New York City going routinely to the Museum of Natural History. I wondered at the lions and elephants in the Hall of African Mammals; I gazed in awe at the massive dinosaur fossils, which dwarfed even my dad in height and terror; I spent hours in the Hall of Ocean Life gaping at the dolphins, the sea lions, and the whales. The diorama of a sperm whale fighting a giant squid—two massive, monstrous forms, shrouded in the darkness of the deep sea—held a particular power over my childhood imagination. I must have made half a thousand drawings of that scene, the resolute whale battling the hideous squid in the imponderable depths.

Growing up, I found that not everybody shared my admiration for the process of science and its discoveries. This came as a shock. Even now, no intellectual stance upsets me more than science denial. To me, denying science has always seemed tantamount to denying both the beauty of the world and the power of the human mind. And yet here we are, in a world fundamentally shaped by our scientific knowledge, full of people who, for one reason or another, deny the validity of the scientific enterprise.

The reasons for science denial are manifold. Most obviously there is religious fundamentalism; and not far behind is corporate greed in industries, such as the coal or the cigarette industry, that might be hurt by the discoveries of scientists. These forms of science denial often take the form of anti-intellectualism; but what troubles me more are the various forms of science denial in intellectual circles: sociologists who see scientific discoveries as political myth-making, literary theorists who see science as a rhetoric of power, philosophers who see knowledge as wholly relative. Add to this the more plebeian forms of science denial often encountered on the left—such as skepticism about GMOs and vaccines—and we have a disbelief that extends across the political spectrum, throughout every level of education and socio-economic status.

And all this is not to mention the science-worship that has grown up, partly as a response to this skepticism. So often we see headlines proclaiming “Science Discovers” or “Scientists Have Proved” and so on; and time and again I’ve heard people use “because, science says” as an argument. Scientists are treated as a priestly class, handing out truths from high up above, truths reached by inscrutable methods using arcane theories and occult techniques, which must be trusted on faith. Needless to say, this attitude is wholly alien to the spirit of the scientific enterprise, and ultimately plays into the hands of skeptics who wish to treat modern science as something on par with traditional religion. Also needless to say (I hope), both the supinely adoring and the snobbishly scorning attitudes fail to do justice to what science really is and does.

This is where Susan Haack comes in. In this book, Haack attempts to offer an epistemological account of why the sciences have been effective, as well as a critique of the various responses to the sciences—from skepticism, to cynicism, to paranoia, to worship, to deference—to show how these responses misunderstand or mischaracterize, overestimate or underestimate, what science is really all about. Along the way, Haack also offers her opinions on the relation between the natural and the social sciences, science and the law, science and religion, science and values, and the possible “end of science.”

She begins, as all worthy philosophers must, by criticizing her predecessors. The early philosophers of science made two related errors that prevented them from coming to grips with the enterprise. The first was assuming that there was such a thing as the “scientific method”—a special methodology that sets the sciences apart from other forms of inquiry. The second mistake was assuming that this methodology was a special form of logic—deduction, induction, probability, and so on—used by scientists to achieve their results. In other words, they assumed that they could demarcate science from other forms of inquiry; and that this demarcation was logical in nature.

Haack takes issue with both of these assumptions. She asserts that, contrary to popular belief, there is no such thing as a special “scientific method” used only by scientists and not by any other sort of inquirer. Rather, scientific inquiry is continuous with everyday inquiry, from detective work to historical research to trying to find where you misplaced your keys this morning: it relies on the collection of evidence, coming up with theories to explain a phenomenon, testing different theories against the available evidence and new discoveries, using other inquirers to help check your judgment, and so on.

Because of this, Haack objects to the use of the adjective “scientific” as an honorific, as a term of epistemological praise—such as in “scientifically tested toothpaste”—since “scientific” knowledge is the same sort of knowledge as every other sort of knowledge. The only differences between “scientific” knowledge and everyday knowledge are, most obviously, the subject matter (chemistry and not car insurance rates), and less obviously how scrupulously it has been tested, discussed, and examined. To use her phrase, scientific knowledge is like any other sort of knowledge, only “more so”—the fruit of more dedicated research, and subjected to more exacting standards.

What sets the natural sciences apart, therefore, is not a special form of logic or method, but various helps to inquiry: tools that extend the reach of human sensation; peer-reviewed journals that help both to check the quality of information and to pool research from different times and places; mathematical techniques and computers to help deal with quantitative data; linguistic innovations and metaphors that allow scientists to discuss their work more precisely and to extend the reach of the human imagination; and so on.

Haack’s most original contribution to the philosophy of science is her notion of ‘foundherentism’ (an ugly word), which she explains by the analogy of a crossword puzzle. Scientific theories have connections both with other scientific theories and with the observable world, in much the same way that entries in a crossword puzzle have connections with other entries and with their clues. Thus the strength of any theory will depend on how well it explains the phenomenon in question, whether it is compatible with other theories that explain ‘neighboring’ phenomena, and how well those neighboring theories explain their own phenomena. Scientific theories, in other words, connect with observed reality and with each other at many different points—far more like the intersecting entries of a crossword puzzle than the sequential steps of a mathematical proof—which is why any neat logic cannot do them justice.

It is possible that all this strikes you as either obvious or pointless. But this approach is useful because it allows us to acknowledge the ways that background beliefs affect and constrain our theorizing, without succumbing to pure coherentism, in which the only test of a scientific theory’s validity is how compatible it is with background beliefs. While there is no such thing as a “pure” fact or a “pure” observation untainted by theory, and while it is true that our theories of the world always influence how we perceive the world, all this doesn’t mean that our theories don’t tell us anything about the world. Observation, while never “pure,” still provides a real check and restraint on our theorizing. To give a concrete example, we may choose to interpret a black speck in a photograph as a weather balloon, a bird, a piece of dirt that got on the lens, or a UFO—but we can’t choose not to see the black speck.

Using this subtle picture of scientific knowledge, Haack is able to avoid both the pitfalls of an overly formalistic account of science, such as Popper’s deductivism, and an overly relativistic account of science, such as Kuhn’s theory of scientific revolutions. There may be revolutions when the fundamental assumptions of scientists radically change; but the test of a theory’s worth is not purely in respect to these assumptions but also to the stubborn, observed phenomenon—the black speck. Scientific revolutions might be compared to a team of crossword puzzle-solvers suddenly realizing that the clues make more sense in Spanish than in English. The new background assumption will affect how they read the clues, but not the clues themselves; and the ultimate test of those assumptions—whether the puzzle can be convincingly solved—remains the same.

One of the more frustrating things I’ve heard science skeptics assert is that science requires faith. Granted, to do science you do need to take some things for granted—that there is a real world that exists independently of whether you know it or not, that your senses provide a real, if imperfect, window into this world, that the world is predictable and operates by the same laws in the present as in the past and the future, and so on. But all this is also taken for granted when you rummage through your bag to find the phone you dropped in there that morning, or when you assume your shoelaces will work the same way today as they did yesterday. Attempts to deny objective truth—very popular in the post-modern world—are always self-defeating, since the denial itself presupposes objective truth (is it only subjectively true that objective truth doesn’t exist?).

We simply cannot operate in the world, or say anything about the world, without presupposing that, yes, the world exists, and that we can know something about it. Maybe this sounds obvious to you, gentle reader, but you would be astounded how much intellectual work in the social sciences and humanities is undermined by this inescapable proposition. Haack does a nice job of explaining this in her chapter on the sociology of science—pointing out all the sociologists, literary theorists, and ethnologists of science who, in treating all scientific knowledge as socially constructed, and therefore dubious, undermine their own conclusions (since those, too, are presumably socially constructed by the inquirers)—but I’m afraid Haack, in trying to push back against attempts like these, is pushing back against what I call the “Lotz Theory of Inquiry.”

(The Lotz Theory of Inquiry states that you cannot be a member of any intellectual discipline without presupposing that your discipline is the most important discipline in academe, and that all other disciplines are failed attempts to be your own discipline. Thus, for a sociologist, all physicists are failed sociologists, and so on.)

Because I am relatively unversed in the philosophy of science, I feel unqualified to say anything beyond the fact that I found Haack’s approach, on the whole, reasonable and convincing.

My main criticism is that she puts far too much weight on the idea of “everyday inquiry” or “common sense”—ideas which are far more culturally and historically variable than she seems to assume. This is exemplified in her criticism of religious inquiry as “discontinuous” with everyday forms of inquiry, since it relies on visions, trances, supernatural intervention, and the authority of sacred texts—normally not explanations or forms of evidence we use when explaining why we got food poisoning (the Mexican restaurant, or an act of God?).

While it is true that, nowadays, most people in the ‘developed’ world do not rely on these religious forms of evidence and explanation in their everyday life, it was not always true historically (think of Luther explaining the creaks in the walls as prowling demons), nor is it true across cultures. One has only to read Evans-Pritchard’s Witchcraft, Oracles, and Magic among the Azande to see a society in which even simple explanations and the most routine decisions rely on supernatural intervention. In cultures around the world, trances and visions, spirits and ghosts, are not seen as discontinuous with the everyday world, but as a normal part of sensing and explaining the world.

Thus Haack’s continuity test can’t do the trick of demarcating superstitious or theological inquiry from other (more dependable) forms of inquiry into the observable world. It seems that something like Popper’s falsificationism (if not exactly Popper’s formulation) is needed to show why explanations in terms of invisible spirits and the visions caused by snorting peyote don’t provide us with reliable explanations. In other words, I think Haack needs to say much more about why one theory ought to be preferred to another in order to provide a fully adequate defense of science.

This criticism notwithstanding, I think this is an excellent, refreshing, humane book—and a necessary one. It is not complete (she does not cover the relation between science and philosophy, or between science and mathematics, for example), nor is it likely to appeal to a wide audience—since Haack, though she writes with personality and charm, is prone to fits of academic prolixity and gets into some syntactical tangles (such as when she begins a sentence “It would be less than candid not to admit that this list does not encourage…”). This, by the way, only supports what I call the “Lotz Theory of Academic Writing”—that the quality of prose varies inversely with the number of years spent in academe—but I digress. Yet for all its flaws and shortcomings, this book does an excellent job of capturing what is good in science and defending science from unfair attacks, without going to the opposite extreme of deifying science.

As the recent withdrawal from the Paris Climate Agreement shows, science denial is an all-too-real and all-too-potent force in today’s world. Too many people I know—many of them smart people—don’t understand what scientists do, and misconstrue science as a body of beliefs, with scientists as priests, rather than as a form of inquiry that rests on the same presuppositions they rely on every day. Either that, or they see science as just a “matter of opinion” or as a bit of arm-chair theorizing. Really, there must be something terribly wrong with our education system if these opinions have become so pervasive. But perhaps there are some reasons for modest optimism. The United States shamefully backed out of the Paris Climate Agreement, but nearly every other country in the world signed on.

So maybe we naive people who believe we can know something about the world need to take a hint from the sperm whale, with its enormous head, preparing to descend to the black depths of the ocean to battle the multi-tentacled squid: hold our breath, have patience, and buck up for a struggle. We may get a few tentacle scars, but we’ve pulled through before and we can pull through again.

[Cover image by Breakyunit. Taken from the Wikipedia article on the Museum of Natural History; used under the Creative Commons BY-SA 3.0 license.]

View all my reviews

Review: The Righteous Mind

The Righteous Mind: Why Good People are Divided by Politics and Religion by Jonathan Haidt

My rating: 4 of 5 stars

I expected this book to be good, but I did not expect it to be so rich in ideas and dense with information. Haidt covers far more territory than the subtitle of the book implies. Not only is he attempting to explain why people are morally tribal, but also the way morality works in the human brain, the evolutionary origins of moral feelings, the role of moral psychology in the history of civilization, the origin and function of religion, and how we can apply all this information to the modern political situation—among much else along the way.

Haidt begins with the roles of intuition and reasoning in making moral judgments. He contends that our moral reasoning—the reasons we aver for our moral judgments—consists of mere post hoc rationalizations for our moral intuitions. We intuitively condemn or praise an action, and then search for reasons to justify our intuitive reaction.

He bases his argument on the results of experiments in which the subjects were told a story—usually involving a taboo violation of some kind, such as incest—and then asked whether the story involved any moral breach or not. These stories were carefully crafted so as not to involve harm to anyone (such as a brother and sister having sex in a lonely cabin and never telling anyone, and using contraception to prevent the risk of pregnancy).

Almost inevitably he found the same result: people would condemn the action, but then struggle to find coherent reasons to do so. To use Haidt’s metaphor, our intuition is like a client in a court case, and our reasoning is the lawyer: its job is to win the case for intuition, not to find the truth.

This is hardly a new idea. Haidt’s position was summed up several hundred years before he was born, by Benjamin Franklin: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.” An intuitionist view of morality was also put forward by David Hume and Adam Smith. But Haidt’s account is novel for the evolutionary logic behind his argument and the empirical research used to back his claims. This is exemplified in his work on moral axes.

Our moral intuition is not one unified axis from right to wrong. There are, rather, six independent axes: harm, proportionality, equality, loyalty, authority, and purity. In other words, actions can be condemned for a variety of reasons: for harming others, for cheating others, for oppressing others, for betraying one’s group, for disrespecting authority, and for desecrating sacred objects, beings, or places.

These axes of morality arose because of evolutionary pressure. Humans who cared for their offspring and their families survived better, as did humans who had a greater sensitivity to being cheated by freeloaders (proportionality) and who resisted abusive alpha males trying to exploit them (equality). Similarly, humans who were loyal to their group and who respected a power hierarchy outperformed less loyal and less compliant humans, because they created more coherent groups (this explanation relies on group selection theory; see below). And lastly, our sense of purity and desecration—usually linked to religious and superstitious notions—arose out of our drive to avoid physical contamination (for example, pork was morally prohibited because it was unsafe to eat).

Most people in the world use all six of these axes in their moral systems. It is only in the West—particularly in the leftist West—that we focus mainly on the first three: harm, proportionality, and equality. Indeed, one of Haidt’s most interesting points is that the right tends to be more successful in elections because it appeals to a broader moral palate: it appeals to more “moral receptors” in the brain than left-wing morality (which primarily appeals to the axis of care and harm), and is thus more persuasive.

This brings us to Part III of the book, by far the most speculative.

Haidt begins with a defense of group selection: the theory that evolution can operate on the level of groups competing against one another, rather than on individuals. This may sound innocuous, but it is actually a highly controversial topic in biology, as Haidt himself acknowledges. Haidt thinks that group selection is needed to explain the “groupishness” displayed by humans—our ability to put aside personal interest in favor of our groups—and makes a case for the possibility of group selection occurring during the last 10,000 or so years of our history. He makes the theory seem plausible (to a layperson like me), but I think the topic is too complex to be covered in one short chapter.

True or not, Haidt uses the theory of group selection to account for what he calls the “hiveish” behavior that humans sometimes display. Why are soldiers willing to sacrifice themselves for their brethren? Why do people like to take ecstasy and rave? Why do we waste so much money and energy going to football games and cheering for our teams? All these behaviors are bizarre when you see humans as fundamentally self-seeking; they only make sense, Haidt argues, if humans possess the ability to transcend their usual self-seeking perspective and identify themselves fully with a group. Activating this self-transcendence requires special circumstances, and it cannot be activated indefinitely; but it produces powerful effects that can permanently alter a person’s perspective.

Haidt then uses group selection and this idea of a “hive-switch” to explain religion. Religions are not ultimately about beliefs, he says, even though religions necessarily involve supernatural beliefs of some kind. Rather, the social functions of religions are primarily to bind groups together. This conclusion is straight out of Durkheim. Haidt’s innovation (well, the credit should probably go to David Sloan Wilson, who wrote Darwin’s Cathedral) is to combine Durkheim’s social explanation of religion with a group-selection theory and a plausible evolutionary story (too long to relate here).

As for empirical support, Haidt cites a historical study of communes, which found that religious communes survived much longer than their secular counterparts, thus suggesting that religions substantially contribute to social cohesion and stability. He also cites several studies showing that religious people tend to be more altruistic and generous than their atheistic peers; and this is apparently unaffected by creed or dogma, depending only on attendance rates of religious services. Indeed, for someone who describes himself as an atheist, Haidt is remarkably positive on the subject of religion; he sees religions as valuable institutions that promote the moral level and stability of a society.

The book ends with a proposed explanation of the political spectrum—people genetically predisposed to derive pleasure from novelty and to be less sensitive to threats become left-wing, and vice versa (the existence of libertarians isn’t explained, and perhaps can’t be)—and finally with an application of the book’s theses to the political arena.

Since we are predisposed to be “groupish” (to display strong loyalty towards our own group) and to be terrible at questioning our own beliefs (since our intuitions direct our reasoning), we should expect to be blind to the arguments of our political adversaries and to regard them as evil. But the reality, Haidt argues, is that each side possesses a valuable perspective, and we need to have civil debate in order to reach reasonable compromises. Pretty thrilling stuff.

Well, there is my summary of the book. As you can see, for such a short book, written for a popular audience, The Righteous Mind is impressively vast in scope. Haidt must come to grips with philosophy, politics, sociology, anthropology, psychology, biology, history—from Hume, to Darwin, to Durkheim—incorporating mountains of empirical evidence and several distinct intellectual traditions into one coherent, readable whole. I was constantly impressed by the performance. But for all that, I had the constant, nagging feeling that Haidt was intentionally playing the devil’s advocate.

Haidt argues that our moral intuition guides our moral reasoning, in a book that rationally explores our moral judgments and aims to convince its readers through reason. The very existence of his book undermines his uni-directional model running from intuition to reasoning. Being reasonable is not easy; but we can take steps to approach arguments more rationally. One of these steps is to summarize another person’s argument before critiquing it, which is what I’ve done in this review.

He argues that religions are not primarily about beliefs but about group fitness; but his evolutionary explanation of religion would be rejected by those who deny evolution on religious grounds; and even if specific beliefs don’t influence altruistic behavior, they certainly do influence which groups (homosexuals, biologists) are shunned. Haidt also argues that religions are valuable because of their ability to promote group cohesion; but if religions necessarily involve irrational beliefs, as Haidt admits, is it really wise to base a moral order on religious notions? If religions contribute to the social order by encouraging people to sacrifice their best interest for illogical reasons—such as in the commune example—should they really be praised?

The internal tension continues. Haidt argues that conservatives have an advantage in elections because they appeal to a broader moral palate, not just care and harm; and he argues that conservatives are valuable because their broad morality makes them more sensitive to disturbances of the social order. Religious conservative groups which enforce loyalty and obedience are more cohesive and durable than secular groups that value tolerance. But Haidt himself endorses utilitarianism (based solely on the harm axis) and ends the book with a plea for moral tolerance. Again, the existence of Haidt’s book presupposes secular tolerance, which makes his stance confusing.

Haidt’s arguments with regard to broad morality come dangerously close to the so-called ‘naturalistic fallacy’: equating what is natural with what is good. He compares moral axes to taste receptors; a morality that appeals to only one axis will be unsuccessful, just like a cuisine that appeals to only one taste receptor will fail to satisfy. But this analogy leads directly to a counter-point: we know that we have evolved to love sugar and salt, but this preference is no longer adaptive, indeed it is unhealthy; and it is equally possible that our moral environment has changed so much that our moral senses are no longer adaptive.

In any case, I think that Haidt’s conclusions about leftist morality are incorrect. Haidt asserts that progressive morality rests primarily on the axis of care and harm, and that loyalty, authority, and purity are actively rejected by liberals (“liberals” in the American sense, as leftist). But this is implausible. Liberals can be extremely preoccupied with loyalty—just ask any Bernie Sanders supporter. The difference is not that liberals don’t care about loyalty, but that they tend to be loyal to different types of groups—parties and ideologies rather than countries. And the psychology of purity and desecration is undoubtedly involved in the left’s concern with racism, sexism, homophobia, or privilege (accusing someone of speaking from privilege creates a moral taint as severe as advocating sodomy does in other circles).

I think Haidt’s conclusion is rather an artifact of the types of questions that he asks in his surveys to measure loyalty and purity. Saying the pledge of allegiance and going to church are not the only manifestations of these impulses.

For my part, I think the main difference between left-wing and right-wing morality is the attitude towards authority: leftists are skeptical of authority, while conservatives are skeptical of equality. This is hardly a new conclusion; but it does contradict Haidt’s argument that conservatives think of morality more broadly. And considering that a more secular and tolerant morality has steadily increased in popularity over the last 300 years, it seems prima facie implausible to argue that this way of thinking is intrinsically unappealing to the human brain. If we want to explain why Republicans win so many elections, I think we cannot do it using psychology alone.

The internal tensions of this book can make it frustrating to read, even if it is consistently fascinating. It seems that Haidt had a definite political purpose in writing the book, aiming to make liberals more open to conservative arguments; but in de-emphasizing so completely the value of reason and truth—in moral judgments, in politics, and in religion—he gets twisted into contradictions and risks undermining his entire project.

Be that as it may, I think his research is extremely valuable. Like him, I think it is vital that we understand how morality works socially and psychologically. What is natural is not necessarily what is right; but in order to achieve what is right, it helps to know what we’re working with.

View all my reviews

Quotes & Commentary #42: Montaigne

Quotes & Commentary #42: Montaigne

Everything has a hundred parts and a hundred faces: I take one of them and sometimes just touch it with the tip of my tongue or my fingertips, and sometimes I pinch it to the bone. I jab into it, not as wide but as deep as I can; and I often prefer to catch it from some unusual angle.

—Michel de Montaigne

The pursuit of knowledge has this paradoxical quality: it demands perfection and yet continuously, inevitably, and endlessly fails in its goal.

Knowledge demands perfection because it is meant to be true, and truth is either perfect or nonexistent—or so we like to assume.

Normally, we think about truth like this: I make a statement, like “the cat is on the mat,” and this statement corresponds to something in reality—a real cat on a real mat. This correspondence must be perfect to be valid; whether the cat is standing on the side of the mat or is up a tree, the statement is equally false.

To formulate true statements—about the cosmos, about life, about humanity—this is the goal of scholarship. But can scholarship end? Can we get to a point at which we know everything and we can stop performing experiments and doing research? Can we reach the whole truth?

This would require scholars to create works that were both definitive—unquestioned in their authority—and exhaustive—covering the entire subject. What would this entail? Imagine a scholar writing about the Italian Renaissance, for example, who wants to write the perfect work, the book that totally and completely encapsulates its subject, rendering all additional work unnecessary.

This seems as if it should be theoretically possible, at least. The Italian Renaissance was a sequence of events—individuals born, paintings painted, sculptures sculpted, trips to the toilet, accidental deaths, broken hearts, drunken brawls, late-night conversations, outbreaks of the plague, political turmoil, marriage squabbles, and everything else, great and small, that occurred within a specific period of time and space. If our theoretical historian could write down each of these events, tracing their causation, allotting each its proportional space, neutrally treating each fact, then perhaps the subject could be definitively exhausted.

There are many obvious problems with this, of course. For one, we don’t have all the facts available, but only a highly selective, imperfect, and tentative record, a mere sliver of a fraction of the necessary evidence. Another is that, even if we did have all the facts, a work of this kind would be enormously long—in fact, as long as the Italian Renaissance itself. This alone makes the undertaking impossible. But this is also not what scholars are after.

A book that represented each fact neutrally, in chronological sequence, would not be an explanation, but a chronicle; it would recapitulate reality rather than probe beneath the surface; or rather, it would render all probing superfluous by representing the subject perfectly. It would be a mirror of reality rather than a search for its fundamental form.

And yet our brains are not, and can never be, impartial mirrors of reality. We sift, sort, prod, search for regularities, test our assumptions, and in a thousand ways separate the important from the unimportant. Our attention is selective of necessity, not only because we have a limited mental capacity, but because some facts are much more necessary than others for our survival.

We have evolved, not as impartial observers of the world, but as actors in a contest of life. It makes no difference, evolutionarily speaking, if our senses represent “accurately” what is out there in the world; it is only important that they alert us to threats and allow us to locate food. There is reason to believe, therefore, that our senses cannot be literally trusted, since they are adapted to survival, not truth.

Survival is, of course, not the operative motivation in scholarship. More generally, some facts are more interesting than others. Some things are interesting simply in themselves—charms that strike the sight, or merits that win the soul—while others are interesting in that they seem to hold within themselves the reason for many other events.

A history of the Italian Renaissance that gave equal space to a blacksmith as to Pope Julius II, or equal space to a parish church as to the Sistine Chapel, would be unsatisfactory, not because it was inaccurate, but because its priorities would be in disarray. All intellectual work requires judgment. A historian’s accuracy might be unimpeachable, and yet his judgment so faulty as to render his work worthless.

We have just introduced two vague concepts into our search for knowledge: interest and judgment—interest being the “inherent” value of a fact, and judgment our faculty for discerning interest. Both of these are clearly subjective concepts. So instead of impartially representing reality, our thinkers experience it through a distorted lens—the lens of the senses, further shaped by culture and upbringing—and from this blurry image of the world select what portion of that distorted reality they deem important.

Their opinion of what is beautiful, what is meritorious, what is crucial and what is peripheral, will be based on criteria—either explicit or implicit—that are not reducible to the content itself. In other words, our thinkers will be importing value judgments into their investigation, judgments that will act as sieves, catching some material and letting the rest slip by.

Even more perilous, perhaps, than the selection of facts, will be the forging of generalizations. Since, with our little brains, we simply cannot represent reality in all its complexity, we resort to general statements. These are statements about the way things normally happen, or the characteristics that things of the same type normally have—statements that attempt to summarize a vast number of particulars within one abstract tendency.

All generalizations employ inductive reasoning, and thus are vulnerable to Hume’s critique of induction. A thousand instances of red apples are no proof that the next apple will also be red. And even if we accept that generalizations are always more or less true—true as a rule, with some inevitable exceptions—this leaves undefined how well the generalization fits the particulars. Is it true nine times out of ten, or only seven? How many apples out of a hundred are red? Finally, making a generalization requires selecting one quality—say, the color of apples rather than their size or shape—from among the many that the particulars possess, and that selection is consequently always arbitrary.

More hazardous still is the act of interpretation. By interpretation, I mean deciding what something means. Now, in some intellectual endeavors, such as the hard sciences, interpretation is not strictly necessary; only falsifiable knowledge counts. Thus, in quantum mechanics, it is unimportant whether we interpret the equations according to the Copenhagen interpretation or the Many-Worlds interpretation—whether the wave-function collapses, or reality splits apart—since in any case the equations predict the right result. In other words, we aren’t required to scratch our heads and ask what the equations “mean” if they spit out the right number; this is one of the strengths of science.

But in other fields, like history, interpretation is unavoidable. The historian is dealing with human language, not to mention the vagaries of the human heart. This alone makes any sort of “objective” knowledge impossible in this realm. Interpretation deals with meaning; meaning only exists in experience; experience is always personal; and the personal is, by definition, subjective. Two scholars may differ as to the meaning of, say, a passage in a diplomat’s diary, and neither could prove the other was incorrect, although one might be able to show her interpretation was far more likely than her counterpart’s.

Let me stop and review the many pitfalls on our road to perfect knowledge of the Italian Renaissance. First, we begin with an imperfect record of information; then we must make selections from this imperfect record. This selection will be based on vague judgments of importance and interest—what things are worth knowing, which facts explain other facts. We will also try to make generalizations about these facts—generalizations that are always tentative, arbitrary, and hazardous, and which are accurate to an undetermined extent. After all this, we must interpret: What does this mean? Why did this happen? What is the crucial factor, what is mere surface detail? And remember that, before we even start, we are depending on a severely limited perception of the world, and a perspective warped by innumerable prejudices. Is it any wonder that scholarship goes on indefinitely?

At this point, I am feeling a bit like Montaigne, chasing my thoughts left and right, trying to weave disparate threads into a coherent whole, and wondering how I began this already overlong essay. Well, that’s not so bad, I suppose, since Montaigne is the reason I am writing here in the first place.

Montaigne was a skeptic; he did not believe in the possibility of objective knowledge. For him, the human mind was too shifting, the human understanding too weak, the human lifespan too short, to have any hope of reaching a final truth. Our reasoning is always embodied, he observed, and is thus subject to our appetites, excitements, passions, and fits of lassitude—to all of the fancies, hobbyhorses, prejudices, and vanities of the human personality.

You might think, from the foregoing analysis, that I take a similar view. But I am not quite so cheerfully resigned to the impossibility of knowledge. It is impossible to find out the absolute truth (and even if we could, we couldn’t be sure when or if we did). Through science, however, we have developed a self-correcting methodology that allows us to approach ever nearer to the truth, as evidenced by our increasing ability to manipulate the world around us through technology. To be sure, I am no worshiper of science, and I think science is fallible and limited to a certain domain. But total skepticism regarding science would, I think, be foolish and wrong-headed: science does what it’s supposed to do.

What about domains where the scientific method cannot be applied, like history? Well, here more skepticism is certainly warranted. Since so much interpretation is needed, and since the record is so imperfect, conclusions are always tenuous. Nevertheless, this is no excuse to be totally skeptical, or to regard all conclusions as equally valid. The historian must still make logically consistent arguments, and back up claims with evidence; their theories must still plausibly explain the available evidence, and their generalizations must fit the facts available. In other words, even if a historian’s thesis cannot be falsified, it must still conform to certain intellectual standards.

Unlike in science, however, interpretation does matter, and it matters a great deal. And since interpretation is always subjective, this makes it possible for two historians to propose substantially different explanations for the same evidence, and for both of their theories to be equally plausible. Indeed, in an interpretive field like history, there will be as many valid perspectives as there are practitioners.

This brings us back to Montaigne again. Montaigne used his skepticism—his belief in the subjectivity of knowledge, in the embodied nature of knowing—to justify his sort of dilettantism. Since nobody really knows what they’re talking about, why can’t Montaigne take a shot? This kind of perspective, so charming in Montaigne, can be dangerous, I think, if it leads one to abandon intellectual standards like evidence and argument, or if it leads to an undiscerning distrust of all conclusions.

Universal skepticism can potentially turn into a blank check for fundamentalism, since in the absence of definite knowledge you can believe whatever you want. Granted, this would never have happened to Montaigne, since he was wise enough to be skeptical of himself above all; but I think it can easily befall the less wise among us.

Nevertheless, if proper respect is paid to intellectual standards, and if skepticism is always turned against oneself as well as one’s peers, then I think dilettantism, in Montaigne’s formulation, is not only acceptable but admirable:

I might even have ventured to make a fundamental study if I did not know myself better. Scattering broadcast a word here, a word there, examples ripped from their contexts, unusual ones, with no plan and no promises, I am under no obligation to make a good job of it nor even to stick to the subject myself without varying it should it so please me; I can surrender to doubt and uncertainty and to my master-form, which is ignorance.

Nowadays it is impossible to be an expert in everything. To be well-educated requires that we be dilettantes, amateurs, whether we want to or not. This is not to be wholly regretted, for I think the earnest dilettante has a lot to contribute in the pursuit of knowledge.

Serious amateurs (to use an oxymoron) serve as intermediaries between the professionals of knowledge and the less interested lay public. They also serve as a kind of check on professional dogmatism. Because they have one tiptoe in the subject, and the rest of their body out of it, they are less likely to get swept away by a faddish idea or to conform to academic fashion. In other words, they are less vulnerable to groupthink, since they do not form a group.

I think serious amateurs might also make a positive contribution, at least in some subjects that require interpretation. Although the amateur likely has less access to information and lacks the resources to carry out original investigation, each amateur has a perspective, a perspective which may be highly original; or she may notice something previously unnoticed, which puts old material in a new light.

Although respect must be paid to expertise, and academic standards cannot be lightly ignored, it is also true that professionals do not have a monopoly on the truth—and for all the reasons we saw above, absolute truth is unattainable, anyway—so there will always be room for fresh perspectives and highly original thoughts.

Montaigne is the perfect example: a sloppy thinker, a disorganized writer, a total amateur, who was nonetheless the most important philosopher and man of letters of his time.

On Morality

On Morality

What does it mean to do the right thing? What does it mean to be good or evil?

These questions have perplexed people since people began to be perplexed about things. They are the central questions of one of the longest lines of intellectual inquiry in history: ethics. Great thinkers have tackled them; whole religions have been built around them. But confusion still remains.

Well, perhaps I should be humble before attempting to answer such momentous questions, seeing who has come before me. And indeed, I don’t claim any originality or finality in these answers. I’m sure they have been thought of before, and articulated more clearly and convincingly by others (though I don’t know by whom). Nevertheless, if only for my own sake, I think it’s worthwhile to set down how I tend to think about morality—what it is, what it’s for, and how it works.

I am much less concerned in this essay with asserting how I think morality should work than with describing how it does work—although I think understanding the second is essential to understanding the first. That is to say, I am not interested in fantasy worlds of selfless people performing altruistic acts, but in real people behaving decently in their day-to-day life. But to begin, I want to examine some of the assumptions that have characterized earlier concepts of ethics, particularly with regard to freedom.

Most thinkers begin with a free individual contemplating multiple options. Kantians think that the individual should abide by the categorical imperative and act with consistency; Utilitarians think that the individual should attempt to promote happiness with her actions. What these systems disagree about is the appropriate criterion. But they do both assume that morality is concerned with free individuals and the choices they make. They disagree about the nature of Goodness, but agree that Goodness is a property of people’s actions, making the individual in question worthy of blame or praise, reward or punishment.

The Kantian and Utilitarian perspectives both have a lot to recommend them. But they do tend to produce an interesting tension: the first focuses exclusively on intentions while the second focuses exclusively on consequences. Yet surely both intentions and consequences matter. Most people, I suspect, wouldn’t call somebody moral if they were always intending to do the right thing and yet always failing. Neither would we call somebody moral if they always did the right thing accidentally. Individually, neither of these systems captures our intuitive feeling that both intentions and consequences are important; and yet I don’t see how they can be combined, because the systems have incompatible intellectual justifications.

But there’s another feature of both Kantian and Utilitarian ethics that I do not like, and it is this: Free will. The systems presuppose individuals with free will, who are culpable for their actions because they are responsible for them. Thus it is morally justifiable to punish criminals because they have willingly chosen something wrong. They “deserve” the punishment, since they are free and therefore responsible for their actions.

I’d like to focus on this issue of deserving punishment, because for me it is the key to understanding morality. By this I mean the notion that doing ill to a criminal helps to restore moral order to the universe, so to speak. But before I discuss punishment I must take a detour into free will, since free will, as traditionally conceived, provides the intellectual foundation for this worldview.

What is free will? In previous ages, humans were conceived of as a composite of body and soul. The soul sent directions to the body through the “will.” The body was material and earthly, while the soul was spiritual and holy. Impulses from the body—for example, anger, lust, gluttony—were bad, in part because they destroyed your freedom. To give in to lust, for example, was to yield to your animal nature; and since animals aren’t free, neither is the lustful individual. By contrast, impulses from the soul (or mind) were free because they were unconstrained by the animal instincts that compromise your ability to choose.

Thus free will, as it was originally conceived, was the ability to make choices unconstrained by one’s animal nature and by the material world. The soul was something apart and distinct from one’s body; the mind was its own place, and could make decisions independently of one’s impulses or one’s surroundings. It was even debated whether God Himself could predict the behavior of free individuals. Some people held that even God couldn’t, while others maintained that God did know what people would or wouldn’t do, but God’s knowledge wasn’t the cause of their doing it. (And of course, some people believed in predestination.)

It is important to note that, in this view, free will is an uncaused cause. That is, when somebody makes a decision, this decision is not caused by anything in the material world as we know it. The choice comes straight from the soul, bursting into our world of matter and electricity. The decision would therefore be impossible to predict by any scientific means. No amount of brain imaging or neurological study could explain why a person made a certain decision. Nor could the decision be explained by cultural or social factors, since individuals, not groups, were responsible for them. All decisions were therefore caused by individuals, and that’s the essence of freedom.

It strikes me that this is still how we tend to think about free will, more or less. And yet, this view is based on an outdated understanding of human behavior. We now know that human behavior can be explained by a combination of biological and cultural influences. Our major academic debate—nature vs. nurture—presupposes that people don’t have free will. Behavior is the result of the way your genes are influenced by your environment. There is no evidence for the existence of the soul, and there is no evidence that the mind cannot be explained through understanding the brain.

Furthermore, even without the advancements of the biological and social sciences, the old way of viewing things was not philosophically viable, since it left unexplained how the soul affects the body and vice versa. If the soul and the body were metaphysically distinct, how could the immaterial soul cause the material body to move? And how could a pinch in your leg cause a pain in your mind? What’s more, if there really was an immaterial soul that was causing your body to move, and if these bodily movements truly didn’t have any physical cause, then it’s obvious that your mind would be breaking the laws of physics. How else could the mind produce changes in matter that didn’t have any physical cause?

I think this old way of viewing the body and the soul must be abandoned. Humans do not have free will as originally conceived. Humans do not perform actions that cannot be scientifically predicted or explained. Human behavior, just like cat behavior, is not above scientific explanation. The human mind cannot generate uncaused causes, and does not break the laws of physics. We are intelligent apes, not entrapped gods.

Now you must ask me: But if human behavior can be explained in the same way that squirrel behavior can, how do we have ethics at all? We don’t think squirrels are capable of ethical or unethical behavior because they don’t have minds. We can’t hold a squirrel to any ethical standard, and we therefore can’t justifiably praise or censure a squirrel’s actions. If humans aren’t categorically different from squirrels, then don’t we have to give up on ethics altogether?

This conclusion is not justified. Even though I think it is wrong to say that certain people “deserve” punishment (in the Biblical sense), I do think that certain types of consequences can be justified as deterrents. The difference between humans and squirrels is not that humans are free, but that humans are capable of thinking about the long-term consequences of an action before committing it. Individuals should be held accountable, not because they have free will, but because humans have a great deal of behavioral flexibility, which allows their behavior to be influenced by the threat of prison.

This is why it is justifiable to lock away murderers. If it is widely known among the populace that murderers get caught and thrown into prison, this reduces the number of murders. Imprisoning squirrels for stealing peaches, on the other hand, wouldn’t do anything at all, since the squirrel community wouldn’t understand what was going on. With humans, the threat of punishment acts as a deterrent. Prison becomes part of the social environment, and therefore will influence decision-making. But in order for this threat to act as an effective deterrent, it cannot be simply a threat; real murderers must actually face consequences or the threat won’t be taken seriously and thus won’t influence behavior.

To understand how our conception of free will affects the way we organize our society, consider the case of drug addiction. In the past, addicts were seen as morally depraved. This was a direct consequence of the way people thought about free will. If people’s decisions were made independently of their environment or biology, then there were no excuses or mitigating circumstances for drug addicts. Addicts were simply weak, depraved people who mysteriously kept choosing self-destructive behavior. What resulted from this was the disastrous war on drugs, a complete fiasco. Now we know that it is simply absurd to throw people into jail for being addicted, because addicts are not capable of acting otherwise. This is the very definition of addiction: that one’s decision-making abilities have been impaired.

As we’ve grown more enlightened about drug addiction, we’ve realized that throwing people in jail doesn’t solve anything. Punishment does not act as an effective deterrent when normal decision-making is compromised. By transitioning to a system where addiction is met with treatment and support, we have effectively moved from the old view of free will to the new view that human behavior is the result of biology, environment, and culture. We don’t hold addicts “responsible” because we know it would be like holding a squirrel responsible for burying nuts. This is a step forward, and it has been taken by abandoning the old views of free will.

I think we should apply this new view of human behavior to other areas of criminal activity. We need to get rid of the old notions of free will and punishment. We must abandon the idea of punishing people because they “deserve” it. Murderers should be punished, not because they deserve to suffer, but for two reasons: first, because they have shown themselves to be dangerous and should be isolated; and second, because their punishment acts as a deterrent to future murderers. Punishment is just only insofar as these two criteria are met. Once a murderer is made to suffer more than is necessary to deter future crimes, and is isolated more than is necessary to protect others, then I think it is unjustifiable and wrong to punish him further.

In short, we have to give up on the idea that inflicting pain and discomfort on a murderer helps to restore moral balance to the universe. Vengeance in all its forms should be removed from our justice system. It is not the job of us or anyone else to seek retribution for wrongs committed. Punishments are only justifiable because they help to protect the community. The aim of punishing murderers is neither to hurt nor to help them, but to prevent other people from becoming murderers. And this is, I think, why the barbarous methods of torture and execution are wrong: I very much doubt that their brutality adds anything to their deterrent effect. However, I’m sure there is interesting research somewhere on this.

Seen in this way, morality can be understood in the same way we understand language—as a social adaptation that benefits the community as a whole as well as its individual members. Morality is a code of conduct imposed by the community on its members, and deviations from this code of conduct are justifiably punished for the safety of the other members of the community. When the code is broken, a person forfeits its protection, and is dealt with in such a way that future deviations from the moral code are discouraged.

Just as Wittgenstein said that a private language is impossible, so I’d argue that a private morality is impossible. A single, isolated individual can be neither moral nor immoral. People are born with a multitude of desires; and every desire is morally neutral. A moral code comes into play when two individuals begin to cooperate. This is because the individuals will almost inevitably have some desires that conflict. A system of behavior is therefore necessary if the two are to live together harmoniously. This system of behavior is their moral code. In just the same way that language results when two people both use the same sounds to communicate the same messages, morality results when two people’s desires and actions are in harmony. Immorality arises when the harmonious arrangement breaks down, and one member of the community satisfies their desire at the expense of the others. Deviations of this kind must have consequences if the system is to maintain itself, and this is the justification for punishment.

One thing to note about this account of moral systems is that they arise for the well-being of their participants. When people are working together, when their habits and opinions are more or less in harmony, when they can walk around their neighborhood without fearing every person they meet, both the individual and the group benefit. This point is worth stressing, since we now know that the human brain is the product of evolution, and therefore we must surmise that universal features of human behavior, such as morality, are adaptive. The fundamental basis for morality is self-interest. What distinguishes moral from immoral behavior is not that the first is unselfish while the second is selfish, but that the first is more intelligently selfish than the second.

It isn’t hard to see how morality is adaptive. One need only consider the basic tenets of game theory. In the short term, cooperating with others may not be as advantageous as simply exploiting them. Robbery is a quicker way to make money than farming. And indeed, the potentially huge advantages of purely selfish behavior explain why unethical behavior occurs: sometimes it benefits individuals more to exploit than to help one another. Either that, or certain individuals—whether from ignorance or desperation—are willing to risk long-term security for short-term gains. Nevertheless, moral behaviors in general tend to be more advantageous, if only because selfish behavior is riskier. All unethical behavior, even if carried on in secret, carries the risk of making enemies; and in the long run, enemies are less useful than friends. The funny thing about altruism is that it’s often more gainful than selfishness.
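To make this game-theoretic intuition concrete, here is a minimal sketch in Python of a repeated prisoner’s-dilemma-style game. The payoff numbers and the two strategies are illustrative assumptions of my own, not anything drawn from the literature I mention below: exploiting a cooperator pays best in a single round, but over many rounds a pair of reciprocating cooperators outscores a defector who quickly forfeits the partner’s cooperation.

```python
# Toy model of the point above: defection wins a single round, but
# cooperation with a reciprocating partner wins over many rounds.
# Payoff values are standard illustrative numbers, chosen for clarity.

PAYOFF = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3,   # mutual cooperation
    ("C", "D"): 0,   # I am exploited
    ("D", "C"): 5,   # I exploit
    ("D", "D"): 1,   # mutual defection
}

def play(my_strategy, partner_strategy, rounds=100):
    """Play repeated rounds; each strategy sees only the partner's last move."""
    my_total, partner_total = 0, 0
    my_last, partner_last = None, None
    for _ in range(rounds):
        my_move = my_strategy(partner_last)
        partner_move = partner_strategy(my_last)
        my_total += PAYOFF[(my_move, partner_move)]
        partner_total += PAYOFF[(partner_move, my_move)]
        my_last, partner_last = my_move, partner_move
    return my_total, partner_total

def tit_for_tat(partner_last):
    # Cooperate first, then mirror whatever the partner did last round.
    return "C" if partner_last in (None, "C") else "D"

def always_defect(partner_last):
    # Pure short-term selfishness: exploit on every round.
    return "D"

if __name__ == "__main__":
    print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))
    print("always-defect vs tit-for-tat:", play(always_defect, tit_for_tat))
```

With these made-up numbers, two reciprocating cooperators each earn 300 points over a hundred rounds, while the habitual defector takes a one-time windfall and then scrapes by on mutual defection for about 104: selfishness wins the round and loses the game.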

Thus this account of morality can be harmonized with an evolutionary account of human behavior. But what I find most satisfying about this view of morality is that it allows us to see why we care both about intentions and consequences. Intentions are important in deciding how to punish misconduct because they help determine how an individual is likely to behave in the future. A person who stole something intentionally has demonstrated a willingness to break the code, while a person who took something by accident has only demonstrated absent-mindedness. The first person is therefore more of a risk to the community. Nevertheless, it is seldom possible to prove what somebody intended beyond the shadow of a doubt, which is why it is also necessary to consider the consequences of an action. What is more, carelessness as regards the moral code must be forcibly discouraged, otherwise the code will not function properly. This is why, in certain cases, breaches of conduct must be punished even if they were demonstrably unintentional—to discourage other people in the future from being careless.

Let me pause here to sketch out some more philosophical objections to the Utilitarian and Kantian systems, besides the fact that they don’t adequately explain how we tend to think about morality. Utilitarianism does capture something important when it proclaims that actions should be judged insofar as they further the “greatest possible happiness.” Yet taken by itself this doctrine has some problems. The first is that you never know how something is going to turn out, and even the most concerted efforts to help people sometimes backfire. Should these efforts, made in good faith, be condemned as evil if they don’t succeed? What’s more, Utilitarian ethics can lead to disturbing moral questions. For example, is it morally right to kill somebody if you can use his organs to save five other people? Besides this, if the moral injunction is to work constantly towards the “greatest possible happiness,” then we might even have to condemn simple things like a game of tennis, since two people playing tennis certainly could be doing something more humanitarian with their time and energy.

The Kantian system has the opposite problem in that it stresses good intentions and consistency to an absurd degree. If the essence of immorality is to make an exception of oneself—which covers lying, stealing, and murder—then telling a fib is morally equivalent to murdering somebody in cold blood, since both of those actions equally make exceptions of the perpetrator. This is what results if you overemphasize consistency and utterly disregard consequences. What’s more, intentions are, as I said above, basically impossible to prove—and not only to other people, but also to yourself. Can you prove, beyond a shadow of a doubt, that your intentions were pure yesterday when you accidentally said something rude? How do you know your memory and your introspection can be trusted? However, let me leave off with these objections because I think entirely too much time in philosophy is given over to tweezing apart your enemies’ ideas and not enough to building your own.

Thus, to repeat myself, both consequences and intentions, both happiness and consistency, must be a part of any moral theory if it is to capture how we do and must think about ethics. Morality is an adaptation. The capacity for morality has evolved because moral systems benefit both groups and individuals. Morality is rooted in self-interest, but it is an intelligent form of self-interest that recognizes that other people are more useful as allies than as enemies. Morality is neither consistency nor pleasure. Morality is consistency for the sake of pleasure. This is why moral strictures demanding that people devote their every waking hour to helping others, or never make exceptions of themselves, are self-defeating: when a moral system is that onerous, it isn’t performing its proper function.

But now I must deal with that fateful question: Is morality absolute or relative? At first glance it would seem that my account puts me squarely in the relativist camp, seeing that I point to a community code of conduct. Nevertheless, when it comes to violence I am decidedly a moral absolutist. This is because I think that physical violence can only ever be justified by citing defense. First, to use violence to defend yourself from violent attack is neither moral nor immoral, because at that point the moral code has already broken down. The metaphorical contract has been broken, and you are now in a situation where you must either fight, run, or be killed. The operative rule is now survival, not morality. For the same reason, a whole community may justifiably protect itself from invasion by an enemy force (although capitulating is equally defensible). And lastly, violence (in the form of imprisonment) is justified in the case of criminals, for the reasons I discussed above.

What if there are two communities, community A and community B, living next to one another? Both of these communities have their own moral codes, which their people abide by. What if a person from community A encounters a person from community B? Is it justifiable for either of them to use violence against the other? After all, each of them is outside the purview of the other’s moral code, since moral codes develop within communities. Well, in practice, situations like this commonly result in violence. Whenever Europeans encountered a new community—whether in the Americas or in Africa—the result was typically disastrous for that community. This isn’t simply due to the wickedness of Europeans; it has been a constant throughout history: when different human communities interact, violence is very often the result. And this, by the way, is one of the benefits of globalization. The more people come to think of humanity as one community, the less violence we will experience.

Nevertheless, I think that violence between people from different communities is ultimately immoral, and this is why. To feel it is permissible to kill somebody just because they are not in your group is to consider that person subhuman—as fundamentally different. This is what we now call “Othering,” and it is what underpins racism, sexism, religious bigotry, homophobia, and xenophobia. But of course we now know that it is untrue that other communities, other religions, other races, women, men, or homosexuals or anyone else are “fundamentally” different or in any way subhuman. It is simply incorrect. And I think the recognition that we all belong to one species—with only fairly superficial differences in opinions, customs, rituals, and so on—is the key to moral progress. Moral systems can be said to be comparatively advanced or backward to the extent that they recognize that all humans belong to the same species. In other words, moral systems can be evaluated by looking at how many types of people they include.

This is why it is my firm belief that the world as it exists today—full as it still is of all sorts of violence and prejudice—is morally superior to any that came before. Most of us have realized that racism was wrong because it was based on a lie; and the same goes for sexism, homophobia, religious bigotry, and xenophobia. These forms of bias were based on misconceptions; they were not only morally wrong, but factually wrong.

Thus we ought to be tolerant of immorality in the past, for the same reason that we excuse people in the past for being wrong about physics or chemistry. Morality cannot be isolated from knowledge. For a long time, the nature of racial and sexual differences was unknown. Europeans had no experience, and thus no understanding, of non-Western cultures. All sorts of superstitions and religious injunctions were believed in, to an extent most of us can’t even appreciate now. Before widespread education and the scientific revolution, people based their opinions on tradition rather than evidence. And in just the same way that it is impossible to justly put someone in prison without evidence of their guilt, it is impossible to be morally developed if your beliefs are based on misinformation. Africans and women were once believed to be mentally inferior; homosexuals were believed to be possessed by evil spirits. Now we know that there is no evidence for these views—and in fact evidence to the contrary—so we can cast them aside; but earlier generations were not so lucky.

To the extent, therefore, that backward moral systems are based on a lack of knowledge, they must be tolerated. This is why we ought to be tolerant of other cultures and of the past. But to the extent that facts are wilfully disregarded in a moral system, that system can be said to be corrupt. Thus the real missionaries are not the ones who spread religion, but the ones who spread knowledge, for increased understanding of the world allows us to develop our morals.

These are my ideas in their essentials. But for the sake of honesty I have to add that the ideas I put forward above have been influenced by my studies in cultural anthropology, as well as my reading of Locke, Hobbes, Hume, Spinoza, Santayana, Ryle, Wittgenstein, and of course by Mill and Kant. I was also influenced by Richard Dawkins’s discussion of Game Theory in his book, The Selfish Gene. Like most third-rate intellectual work, this essay is, for the most part, a muddled hodgepodge of other people’s ideas.