Review: Arrival (2016)

 Rating: A-

Language is the foundation of civilization. It is the glue that holds a people together. It is the first weapon drawn in a conflict.

(Cover image taken from the official trailer.)

I have never written a movie review before, so have some patience while I get my bearings. Also, I clearly can’t say much without spoilers, so be warned.

The premise of Arrival intrigued me as soon as I heard it: a science-fiction alien story centered, not on warfare, but on language. Instead of a soldier, the protagonist is a linguist; and instead of defeating aliens she needs to understand them.

After a touching yet cryptic opening sequence—whose relation to the story isn’t revealed until much later—the movie begins with another day in the life of Louise Banks (played by Amy Adams), a professor of linguistics. She walks into a lecture hall, one of those stale and lifeless theaters of knowledge, in order to give a class on the Romance languages—specifically, on why Portuguese sounds so different from the other languages. (She never explains this, which is frustrating, since I genuinely want to know!)

Something is clearly wrong, however, as few students are in class, and their phones keep beeping. The aliens have just arrived, and everybody the world over is in a panic. The confusion and alarm that would accompany the appearance of genuine UFOs is portrayed with subtlety and realism. People are rushing home (but why would home be any safer?), the military is scrambling jets (in a show of force?), and the newscasters are droning on incessantly in their faux-knowledgeable voices, filling up airtime with their lack of information.

We see snatches of Banks’s life here, which give us a taste of her personality. She is a loner, somewhat cold, very quiet. We see her lakeside house—angular, empty, tranquil, and almost sterile. It needn’t even be said that she is single and lives alone. Snatches of a phone call with her mom further characterize her—she is calm, detached, and impatient of folly.

Then, as in any hero’s journey, comes the call to adventure, this time in the form of Army Colonel Weber (played by Forest Whitaker, who gives his colonel a strong Boston accent). The Colonel dramatically puts a device on the table, and plays a chilling recording; it is an unintelligible series of clicks, whooshes, and moans, obviously not human. Can she translate it?

The call to adventure is at first refused (she can’t translate from a recording), and then accepted (as it must be for the movie), and soon enough Banks is snatched away to begin her quest. Next we are shown our first vision of the UFO: it is an oblong black egg that hovers ominously over the landscape, as pitifully small fighter jets fly by. The soundtrack, written by Jóhann Jóhannsson, really shines in this sequence. Unearthly wailing sounds, reminiscent of alien speech, swell in and out over a droning bass as the helicopters approach the monolithic object.

We also meet the other protagonist, Ian Donnelly (played by Jeremy Renner), a theoretical physicist who will work with Banks. The two of them soon begin their task.

The alien spacecraft opens up a hatch every 18 hours, giving the humans a two-hour window to go inside and make contact. (The reason for this pattern is never explained.) I particularly liked the portrayal of the huge number of precautions that the military takes when going inside the UFO. Even though no form of radiation, bacteria, or anything else potentially hazardous is detected, they must receive numerous booster shots, wear hazmat suits with heavy air purifiers, and be decontaminated each time they return.

Finally they go inside. Watching Donnelly’s childlike joy at touching the spacecraft is moving; for all he knows, he’s in a highly dangerous situation, and yet he is like a seven-year-old at a zoo. I think his character is at least partially inspired by Carl Sagan, the alien-obsessed physicist. Like Sagan, Donnelly wants to communicate with the aliens through math, supposedly the universal language; and yet he soon must play second fiddle to the linguist.

The inside of the ship is a large empty black chamber, composed entirely of right angles. On the far end of the chamber is a transparent screen flooded with white light, through which the aliens appear. At first it is difficult to see them, because their side of the chamber is full of black smoke (part of the atmosphere they breathe?), and their form is only revealed gradually. I can’t say I was totally impressed by the design of the aliens. They are called “heptapods,” due to their having seven appendages and seven digits on each appendage; but they basically look like big, black, lumpy squids.

Thus begins the quest to communicate with the heptapods, which is the main drama of the movie. The government needs to ask them why they arrived on earth; and this requires quite a bit of linguistic prep work, since not only do our heroes need to make the question intelligible, but enough vocabulary is needed to make the answer meaningful. As far as I know, putting translation in the center of an alien movie is unique. In Independence Day (1996), for example—which I watched obsessively as a kid—the attempt to communicate with the giant UFOs lasts about three seconds. (They fly a helicopter near the alien craft to flash lights as a way of making contact; a laser blast promptly destroys the helicopter.)

Banks quickly realizes that verbal communication is a non-starter, since human vocal cords can’t reproduce heptapod speech. So she opts for written communication, and soon discovers that the heptapods have their own written language. This language is quite different from our own. It does not correspond with what the heptapods “say”; it is not, in other words, a transcription of speech. This means that the meaning is not sequenced in time.

Like a sentence in any other language, an English sentence has a front end and a back end, and must be read in the correct order to make proper sense. When we speak, we obviously must start at some time and end later; and so do our written sentences. Not so the heptapod system, wherein meaning is encoded, as it were, directly, with reference purely to ideas. It has the same meaning forward and backwards; and its meaning can be understood at a glance, like a picture.

It’s easy to see how simple nouns and verbs—lions, helicopters, walking, giving—could be represented this way; but it is difficult for me to imagine how complex logical relationships or temporal sequences could be transcribed so that the message is the same forwards and backwards. The movie does not get into the mechanics of the language, however, which is just as well.

While I’m at it, I also wonder if linguistic communication would be possible at all with creatures from another planet. Wittgenstein famously said “If a lion could speak, we could not understand him”—meaning, I think, that our language is so tied up in our human experience of the world that it could never serve as a bridge across different species. Put another way, Wittgenstein thought that our language does not and cannot refer to pure ideas—notions that would be the same as understood by any creature.

Our experience of the world is so filtered through our senses, our biology, our specifically human brains, that it seems to me that an alien—from a planet with a vastly different ecosystem, breathing a different atmosphere, with senses adapted to different conditions and a nervous system built on entirely different principles—might conceptualize the world in such different terms that any real communication would be nearly impossible. All this is a massive digression, of course. But a movie that can prompt such ponderings is certainly worth watching.

Soon enough, Banks is coming to grips with the heptapod written language. The visual design of this language is excellent: it is written in inky smoke, and takes the form of a circular swirl with complex bulges and branches. Meanwhile, Banks is beginning to have strange visions, all featuring an unidentified little girl—the same girl from the opening sequence. It is clear that Banks is her mother; and these can’t be memories, since Banks has never had children. Is Banks cracking from sleep deprivation?

While Banks is working on the translation, the world situation is growing ever-more tense. There are twelve of these “shells” (as they’re called), and each country is taking a different approach to communicating with the heptapods. People everywhere are panicking. An image of one of the creatures is leaked and goes viral. China in particular is full of military bluster, and seems constantly on the verge of attacking its shell; and the longer the situation persists, the more people seem to think that the wise thing to do is take military action.

This brings us to one of the movie’s major themes: confronting the unknown. The only thing threatening about the shells is that they are mysterious. Who are the aliens? Where did they come from? What do they want? They don’t attack; they don’t cause any damage; they just hover above the landscape. And yet, the mere presence of unknown visitors causes riots, protests, looting, cult suicides—total panic. It almost seems as if people would prefer that the aliens demonstrated some malicious intent; at least then they’d know what to do. In this situation of total ambiguity, people’s fears fill up the vacuum of knowledge. Never mind that the aliens likely have technology far in advance of humans. We have the urge to attack, not because it’s wise, but to end this terrifying doubt.

What should you do when you confront the unknown? Understand it, or destroy it? This is the movie’s essential question. Banks represents the first solution. The main drama of the movie takes place in the shell’s chamber. There, the confrontation is given stark visual form: Banks stands and stares straight into the blinding light at the other end. The aliens are literally unreachable, separated by a partition. They communicate by imposing form onto nebulous clouds. Language is the tool through which Banks and the heptapods bridge the gap that separates them from one another.

Captain Marks, who works with Banks and Donnelly, represents the other solution. We see him listening to conservative talk radio—an obvious parody of Rush Limbaugh—whose host castigates the Army for not having enough guns, and recommends a “shot across the bow” as a demonstration of human military might. This is probably the movie’s wryest cultural comment: the tendency of the right to use blustering, macho rhetoric even in highly delicate and complex situations. Captain Marks, spooked by this and by his wife’s fears, decides to go rogue and attack the ship. His attack fails to accomplish anything, however; it results only in his own death (or imprisonment?) and makes Banks’s job that much more difficult.

Another major theme of the movie is our inability to work together, even in the direst of circumstances. Although it is obviously within each country’s best interest to share their data and collaborate—a “non-zero sum game,” to quote the movie—communication ultimately breaks down between nations as suspicion and paranoia take hold.

As Banks repeatedly shows, communication requires trust, which is exactly why she is so skilled at it. Instead of being scared of contamination and frightened of approaching the heptapods, she removes her protective suit and puts her hand on the glass. In other words, she chooses to trust the heptapods. Communication breaks down between the nations of the world precisely because of this lack of trust; they are afraid that the aliens are trying to get them to attack one another.

Full crisis mode ensues when Banks finally asks what the aliens are doing on earth, and gets the response “Offer Weapon.” Thus begins the dramatic final sequence, during which Banks has to rush to interpret this message before other nations of the world begin bombing their shells. After a final visit to the shell, the heptapods explain to Banks that the “weapon” is their own language, which, because it is the same forwards and backwards, allows you to see the future when you learn it. They are offering it to humanity because they will need humanity’s help in 3,000 years (which they know because they can see the future).

By the way, the idea that learning a non-temporal language could so fundamentally alter your perception of time, allowing you to see into the future, is based on the Sapir-Whorf hypothesis, otherwise known as linguistic relativity. This is a real theory, put forward in the 1950s, which argued that your language fundamentally shapes your perception of the world. The most famous (and also most infamously incorrect) example of this is the supposedly huge number of words for “snow” among the Inuit, reportedly allowing them to see fine differences between types of snow. Strong versions of this hypothesis—in which one’s language totally shapes one’s cognitive processes—have been ruled out; but it is true, I believe, that our language influences our thought in manifold subtle ways.

Banks, now aware of her new ability, looks into the future in order to see how she can prevent the impending catastrophe, and stop the Chinese from attacking their shell. Like all time travel, this presents some interesting paradoxes of causality. Can knowledge of the future, already determined by the present, influence the present? If the only reason that Banks could obtain the information she needed was because she had already used it, what causes what? This paradox is sort of glossed over, and that’s fine by me.

The crisis resolved, the heptopods mysteriously vanish—having accomplished their goal of uniting the peoples of the world and teaching humanity their language—and Banks is left to live her life. This leads, predictably, to a romantic entanglement with physicist Ian Donnelly. He is the man with whom Banks has her daughter, an adorable little girl who is fated to die from a “really rare disease” sometime in her adolescence.

This brings us to the movie’s second major theme: confronting the known. Because she can see the future, Banks is forced to live her life with full awareness of how everything will turn out. Her marriage to Donnelly will end in divorce, and her daughter will die young. Indeed, Donnelly wants a divorce precisely because he thinks they shouldn’t have had a daughter if Banks knew she would die.

The odd fact is that total knowledge is, in a way, far more terrifying than total mystery. It is one thing to try something when you’re not sure you’ll succeed; it requires far more courage to try when you know you will fail. And yet, Banks embraces her fate, and lives her life anyway. This is the most literal illustration of Nietzsche’s amor fati, love of fate, that I’ve ever seen: instead of trying to change anything, Banks tries to appreciate each moment for what it is.

As far as acting goes, the standout performance is Amy Adams’s. Her portrayal of Banks is subtle and sensitive. Banks is quiet without being timid, highly observant but fiercely independent, and incredibly strong without being overpowering. She speaks in a soft voice, nearly a whisper, and her face is usually deadpan calm. And yet this makes the emotional moments of the film that much more touching.

I am glad that such a thoughtful, tasteful movie is finding both commercial and critical success nowadays. While arguably somewhat derivative of Kubrick’s work—the visuals and sound-effects were polished and excellent, but hardly groundbreaking—Arrival manages to ask many deep questions within a gripping and accessible plot. All in all, a truly excellent film.

 

Directed by Denis Villeneuve

Written by Eric Heisserer

Starring Amy Adams, Jeremy Renner, and Forest Whitaker

 

Quotes & Commentary #46: Wittgenstein

If language is to be a means of communication there must be agreement not only in definitions but also (queer as this may sound) in judgments.

—Ludwig Wittgenstein, Philosophical Investigations

I often think about the relationship between the public and the private. As a naturally introverted person, I feel very keenly the separation of my own experience from the rest of reality. I make music, take pictures, and write this blog as a way of communicating this inner reality—of manifesting my private world in a publicly consumable form.

Having an ‘inner world’ is one of the basic facts of life. Each of us is aware that there is a part of us—the most vital and most mysterious part, perhaps—that is inaccessible to others; we can keep secrets, we can make judgments without anyone else noticing, we can have private pleasures and pains. All of our experience takes place in this space; the only world we ever see, hear, or touch is in our heads.

And yet we are also aware that this reality is, in a sense, insubstantial and ultimately secondary. Our inner world exists in reference to the outer world, the world of objective facts, the world that is publicly known. My senses are not just mental facts, but point outward; my thoughts, actions, and desires are oriented towards a world that does not exist in me. Rather, I exist in it, and my experience is just one interpretation of this world, and one vantage point from which to view it.

How are these two worlds related? How do they interact? Is one more important? What is the relationship of our private minds to our public bodies? These are classic philosophical conundrums, mysterious still after all these millennia.

Historical philosophers aside, most of us, in our more reflective moments, become acutely aware of the division between subjective and objective. When you are, for example, searching for a word—when a word is on the tip of your tongue—you feel as though you are rummaging through your own mind. The word is in you somewhere, and nobody but you can find it.

From this, and other experiences like it, we get the feeling that speaking (and by extension, writing) consists of taking something internal and externalizing it. Language is, in this view, an expression of thought; and words take their significance from cogitations. That is to say, our private mental world is the wellspring of significance; our minds imbue our language with meaning. The word “pizza,” for example, means pizza because I am thinking of pizza when I say it.

And yet, as Wittgenstein tried to show in his later philosophy, this is not how language really works. To the contrary, words are defined by their social use: what they accomplish in social situations. In other words, language is public. The meaning of words is determined, not by referring to any inner thought, nor by referring to any objective facts, but by convention, in a community of speakers. (I don’t have the space here to recapitulate his arguments; but you can see my review of his book here.) The word “pizza” means pizza because you can use it to order in a restaurant.

This may seem to be a merely academic matter; but when you begin to think of meaning as determined socially rather than psychologically, then you realize that your cognitive apparatus is not nearly as private as you are wont to believe. In order to communicate thought, you must transform it into something socially consumable: language. All of our vague notions must be put into boxes, whose dimensions are determined by the community, not by us.

But the social does not only intrude when we try to communicate with others; we also understand ourselves through these same social concepts. That is to say, insofar as we think in words, and we understand our own personalities through language, we are subjecting our deepest selves to public categories; even in our most private moments, we are seeing ourselves in the light of the community. We are social beings to our very core.

This does not only extend to the definitions of words. As Wittgenstein points out, to use language effectively, we must also judge like the community.

Any word, however well-defined, is ambiguous in its application. To apply the word “car” to a vehicle, for example, requires not only that I know the definition—whatever that may be—but that I learn how to differentiate between a car, a truck, a van, and an SUV. Every member of a community is involved in educating one another’s judgment, and keeping their opinions in tune. If I call an SUV a “car,” or a pickup truck a “van,” any fellow speakers will correct me, and in this way they will educate me to judge like a member of the community.

As I learn Spanish, I have firsthand experience of this. To pick a trivial example, the English word “sausage” is broader than any corresponding Spanish word. Here in Spain they differentiate between salchicha and salchichón, a difference that my American mind has a hard time understanding. Although Spaniards have tried to explain this difference to me, I have found that the only way for me to learn it is by being corrected every time I apply the wrong word.

More significantly, in order to conjugate properly in Spanish, I must not only learn how to change the ending and so forth, but I must learn when it is appropriate to use each tense. To pick the most troubling example, in English we have only the simple past, whereas in Spanish there is both the imperfecto and the indefinido. I constantly use the wrong form, not because I don’t know their technical usage (it has been explained to me countless times, using various metaphors and examples, and I can recite this technical definition from memory), but because my judgment is out of alignment.

Whether an action is continuous, periodic, completed, ongoing, or occasional—this is not as self-apparent as every native speaker likes to assume, but indeed requires a good deal of interpretation. My judgment has not yet been properly educated by the community, and so, despite my knowing the technical usage of these two forms, I still misuse them.

In a way, this aspect of language learning is somewhat chilling. In order to speak effectively, not only must I use communal vessels to contain my thoughts, but I must learn to judge along the same lines as other members of the community—to interpret, analyze, and distinguish like them. What is left of our private selves when we subtract everything shaped and put there by the community? Am I a self-existent person, or just a reflection of my social milieu?

Yet I do not think that all this is something to dread. Having communally defined categories, and a communally shaped judgment, gives permanence and exactitude to communication. Left on our own, thinking without symbols, communicating with no one but ourselves, there is nothing that grants stability to our reflections; they constantly slip through our fingers, an ever-changing flux tied to nothing. With no fixed points, our judgment flounders in a torrent of ideas, thrashing ineffectually.

When we learn a language, and learn to use it well, we learn how to pour the ambiguous stuff of thought into stable vessels, how to cast the molten metal of our mental life into solid forms. This way, not only can we understand the world better, but we can learn to understand ourselves better. This, I think, is the very purpose of culture itself: to partition reality into sections, to impose structure on ambiguous reality.

Let me give you a common example.

A relationship is a naturally ambiguous thing. The affection and commitment that two people feel for one another exists on a spectrum. And often we do not really know how committed we are to somebody until we examine the relationship in retrospect. And yet, relationships must be defined, and defined early-on, for the sake of the community.

Every culture on earth has rituals and categories associated with courtship, for the simple fact that somebody’s relationship status is a big part of their social identity. Ambiguities in social identity are not tolerated, because they impede normal social life; to deal with somebody effectively, you need them to have a recognizable social status, a status that tells you what to expect from them, what you can ask of them, and a million other things.

In modern culture, as we delay marriage ever further into the distant future, we have developed the need for new relationship categories. Now we are “dating,” and then “in a relationship.” The status of being “boyfriend” or “girlfriend” is now socially understood and approved as one level of commitment.

The interesting thing, to me, is that the decision to be in a relationship, to become boyfriend and girlfriend (or whatever the case may be), seems like a private decision, affecting only two people. And yet, it is really a decision for the benefit of the community. To be in a relationship defines where you stand in relation to everyone else: whether it is appropriate to flirt with you, to ask you out, to dance with you, to ask about your significant other, and so forth.

Now, this is not to say that the decision is solely for the benefit of the community. Put another way, it also benefits you and your partner, because you too are part of the community. It puts a publicly understood category, indicating a certain level of commitment, on your naturally ambiguous and shifting feelings. In other words, by applying a public category to a private feeling, you are, in effect, imposing a certain level of stability on the feeling.

Look what happens next. This level of commitment, being publicly labeled, is also bolstered. Friends, family, and coworkers treat you differently. You are now in a different category. And this response of the community helps to form and reinforce your private feelings of commitment. Relationships are never wholly private affairs between two people. It takes a village to make a couple.

Again, I am not suggesting that this is a bad thing. To the contrary, I think that having communal definitions is what allows us to understand our own selves at all. This is also why I write these quotes and commentary. By forcing myself to take my ambiguous thoughts and put them into words, into public vessels, not only do I communicate with others, but I find out what I myself think.

Review: Philosophical Investigations

Philosophical Investigations by Ludwig Wittgenstein

My rating: 5 of 5 stars

If you read first Wittgenstein’s Tractatus, and then follow it with his Philosophical Investigations, you will treat yourself to perhaps the most fascinating intellectual development in the history of philosophy. Wittgenstein has the distinct merit of producing, not one, but two enormously influential systems of philosophy—systems, moreover, that are at loggerheads with one another.

In fact, I wouldn’t recommend attempting to tackle this work without first reading the Tractatus, as the Investigations is essentially one long refutation and critique of his earlier, somewhat more conventional, views. But because I wish to give a short summary of some of Wittgenstein’s later views here, I will first give a little précis of the earlier work.

In the Tractatus, Wittgenstein argues that language has one primary function: to state facts. Language is a logical picture of the world. A given proposition mirrors a given state of affairs. This leads Wittgenstein to regard a great many types of utterances as strictly nonsense. For example, since ethics is not any given state of affairs, language couldn’t possibly picture it; therefore, all propositions in the form of “action X is morally good” are nonsense.

Wittgenstein honestly believed that this solved all the problems of philosophy. Long-standing problems about causation, truth, the mind, goodness, beauty, etc., were all attempts to use language to picture something which it could not—because beauty, truth, etc., are not states of affairs. Philosophers only need stop the attempt to transcend the limits of language, and the problems would disappear. In his words: “The solution of the problem of life is seen in the vanishing of this problem.”

After publishing this work and taking leave of professional philosophy (as he thought it had been dealt with) Wittgenstein began to have some doubts. Certain everyday uses of language seemed hard to account for if you regarded language as purely a truth-stating tool. These doubts eventually culminated in a return to Cambridge, and to philosophy. His posthumously published Investigations represents the fullest expression of his later views.

So what are these views? Well, first let us compare the styles of the two works. The writing in both the Tractatus and the Investigations is extraordinary. Wittgenstein is one of the very finest writers of philosophy, in a league with Nietzsche and Plato. He uses almost no technical terms, and very simple sentence-structures; yet his phrases can stick in the mind for months, years, after first reading them. Just the other day, I was having a conversation with my German tutor about learning a foreign language. To something I said, she responded, “Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt.” (“The limits of my language are the limits of my world”—a quote from the Tractatus.)

Although the writing in both works is equally compelling, the structures are quite different. In the Tractatus, Wittgenstein’s argument is unified, complete; he even numbers his sentences as primary, secondary, and tertiary in terms of their importance to the argument. In that work, we can clearly see the influence of Bertrand Russell’s logicism: language is reduced to logical propositions, and the argument is organized along logical grounds.

The reader of the Investigations will encounter something quite different. Wittgenstein writes in similarly terse aphorisms; he even retains a numbering-system for his points—each individual point getting its own numbered paragraph. The numbering of these paragraphs, however, is cumulative, and does not express anything about their significance to his larger design. It is almost as if Wittgenstein wrote down his thoughts on numbered flash cards, and simply constructed the book by moving the flash cards around. Unlike the Tractatus, which resolves itself into a unified whole, the Investigations is fragmentary.

I begin with style because the contrast in writing is a clue to the differences in thought between the earlier and later works. Unlike the Tractatus, the Investigations is rather a collection of observations and ideas. The spirit of Wittgenstein’s later enterprise is anti-systematic, rather than systematic. Wittgenstein aims not at erecting a whole edifice of thought, but at destroying other edifices. Thus, the text jumps from topic to topic, without any explicit connections or transitions, now attacking one common philosophical idea, now another. The experience can often be exasperating, since Wittgenstein is being intentionally oblique rather than direct. In the words of John Searle, reading the Investigations is “like getting a kit for a model airplane without any explanation for how to put it together.”

Let me attempt to put some of these pieces together—at least the pieces that were especially useful to me.

Wittgenstein replaces his old picture metaphor with a new tool metaphor. Instead of a word being meaningful because it pictures a fact, the meaning of a word is—at least most of the time—synonymous with the social use of that word. For example, the word “pizza” does not mean pizza because it names the food; rather, it means pizza because you can use the word to order the food at a restaurant. So instead of the reference to a type of object being primary, the social use is primary.

This example reveals a general quality of Wittgenstein’s later thought: the replacement of the objective/subjective dichotomy with the notion of public, social behavior.

Philosophers have traditionally posited theories of meaning that are either internal or external. For example, pizza can mean the particular food either because the word points to the food, or because the word points to our idea, or sensation, of the food. Either language is reporting objective states of affairs, or subjective internal experiences.

Wittgenstein destroys the external argument with a very simple observation. Take the word “game.” If the external theory of meaning is correct, the word game must mean what it does because it points to something essential about games. But what is the essential quality that makes games games? Is there any? Some games are not social (think of solitaire), some games are not trivial (think of the Olympic Games), some games are not consequence-free (think of compulsive gambling), and some games are social, trivial, and consequence-free. Is a game something that you play? But you also play records and trombones. So what is the essential, single quality of “game” that our word refers to?

Wittgenstein says there isn’t any. Rather, the word “game” takes on different meanings in different social contexts, or modes of discourse. Wittgenstein calls these different modes of discourse “language-games.” Some examples of language games are that of mimicking, of joking, of mourning, of philosophizing, of religious discourse. Every language game has its own rules; therefore, any proposed all-encompassing theory of language (like Wittgenstein’s own Tractatus) will fail, because it attempts to reduce the irreducible. You cannot reduce chess, soccer, solitaire, blackjack, and tag to one set of rules; the same is true (says Wittgenstein) of language.

Another popular theory of meaning is the internal theory. This theory holds that propositions mean things by referring to thoughts or sensations. When I refer to pain, I am referring to an internal object; when I refer to a bunny, I am referring to a set of visual sensations that I have learned to call ‘bunny’.

Wittgenstein makes short work of this argument too. Let’s start with the argument about sensations. Wittgenstein points out that our ‘sensations’ of an object—say, a bunny—are not something that we experience, as it were, purely. Rather, our interpretations alter the sensations themselves. To illustrate this, Wittgenstein uses perhaps the funniest example in all of philosophy, the duck-rabbit:

duckrabbit

As you can see, whether you interpret this conglomeration of shapes, lines, and spaces as a rabbit or a duck depends on your interpretation; and, if you had never seen a duck or a rabbit in your life, the picture would look rather strange. Ernst Gombrich summed up this point quite nicely in his Story of Art: “If we look out of the window we can see the view in a thousand different ways. Which of them is our sense impression?”

The point of all this is that trying to make propositions about sense-impressions is like trying to hit a moving target—since you only see something a certain way because of certain beliefs or experiences you already hold.

The argument about inner feelings is equally weak. For example, when we learned the word ‘pain’, did someone somehow point to the feeling and name it? Clearly, that’s impossible. What actually happens is that we (or someone else) exhibited normal behavioral manifestations of pain—crying, moaning, tearing up, clutching the afflicted area. The word ‘pain’, then, is used (at least originally) to refer to pain-behavior, and we later use the word ‘pain’ as a replacement for our infantile pain-behavior—instead of moaning and clutching our arm, we tell someone we have a pain, and that it’s in our arm. This shows that the internal referent of the word ‘pain’ is not fundamental to its meaning, but is derivative of its more fundamental, public use.

This may seem trivial, but this line of argument is a powerful attack on the entire Cartesian tradition. Let me give you an example.

René Descartes famously sat in his room and tried to doubt the whole world. He got down to his own ego, and tried to build the world back up from there. This line of thought places the individual at the center of the epistemological question, and makes all other phenomena derivative of the fundamental, subjective experience of certainty.

But let us, as Wittgenstein advises, examine the normal use of the word “to know.” You say, “I know Tom,” or “I know American history.” If someone asked you, “What makes you say you know Tom and American history?” you might say something like “I can pick Tom’s face out of a crowd,” or “I could pass a history test.” Already, you are giving social criteria for what it means to know. In fact, the word “to know” presupposes the ability to verify something with something that is not yourself. You would never verify something you remember by pointing to another thing you remember—that would be absurd, since your memory is the thing being tested. Instead, you indicate an independent criterion for determining whether or not you know something. (The social test of knowledge is also explicit in science, since experiments must be repeatable and communicable; if a scientist said “I know this, but I can’t prove it again,” that would not be science.)

So because knowing anything apparently requires some kind of social confirmation, the Cartesian project of founding knowledge on subjective experience is doomed from the start. Knowing anything requires at least two people—since you couldn’t know if you were right or wrong without some kind of social confirmation.

Wittgenstein brings this home with his discussion of private language. Let’s say you had a feeling that nobody has told you how to name. As a result, you suspect that this feeling is unique to yourself, and so you create your own name for it. Every time you have the feeling, you apply this made-up name to it. But how do you know if you’re using the name correctly? How do you know that every time you use your private name you are referring to the same feeling? You can’t check it against your memory, since your memory is the very thing being doubted. You can’t ask somebody else, because nobody else knows this name or has this sensation. Therefore, merely thinking you’re using the name consistently and actually using the name consistently would be indistinguishable experiences. You could never really know.

Although Wittgenstein’s views changed dramatically from the early to the late phase of his career, you can see some intriguing similarities. One main current of Wittgenstein’s thought is that all philosophical problems result from the misuse of language. Compare this statement from the Tractatus, “All philosophy is ‘Critique of language’,” with this, from the Investigations: “Philosophy is a battle against the bewitchment of our intelligence by means of language.” In both works, Wittgenstein is convinced that philosophical problems only arise because of the misuses of language; that philosophers either attempt to say the unsayable, or confuse the rules of one language-game with another—producing nonsense.

I cannot say I’ve thought through Wittgenstein’s points fully enough to say whether I agree or disagree with them. But, whether wrong or right, Wittgenstein already has the ultimate merit of any philosopher—provoking thought about fundamental questions. And even if he was wrong about everything, his books would be worth reading for the writing alone. Reading Wittgenstein can be very much like taking straight shots of vodka—it burns on the way down, it addles your brain, it is forceful and overwhelming; but after all the pain and toil, the end result is pleasant elation.


On Morality

What does it mean to do the right thing? What does it mean to be good or evil?

These questions have perplexed people since people began to be perplexed about things. They are the central questions of one of the longest lines of intellectual inquiry in history: ethics. Great thinkers have tackled them; whole religions have been based around them. But confusion still remains.

Well, perhaps I should be humble before attempting to solve such momentous questions, seeing who has come before me. And indeed, I don’t claim any originality or finality in these answers. I’m sure they have been thought of before, and articulated more clearly and convincingly by others (though I don’t know by whom). Nevertheless, if only for my own sake, I think it’s worthwhile to set down how I tend to think about morality—what it is, what it’s for, and how it works.

I am much less concerned in this essay with asserting how I think morality should work than with describing how it does work—although I think understanding the second is essential to understanding the first. That is to say, I am not interested in fantasy worlds of selfless people performing altruistic acts, but in real people behaving decently in their day-to-day life. But to begin, I want to examine some of the assumptions that have characterized earlier concepts of ethics, particularly with regard to freedom.

Most thinkers begin with a free individual contemplating multiple options. Kantians think that the individual should abide by the categorical imperative and act with consistency; Utilitarians think that the individual should attempt to promote happiness with her actions. What these systems disagree about is the appropriate criterion. But they do both assume that morality is concerned with free individuals and the choices they make. They disagree about the nature of Goodness, but agree that Goodness is a property of people’s actions, making the individual in question worthy of blame or praise, reward or punishment.

The Kantian and Utilitarian perspectives both have a lot to recommend them. But they do tend to produce an interesting tension: the first focuses exclusively on intentions while the second focuses exclusively on consequences. Yet surely both intentions and consequences matter. Most people, I suspect, wouldn’t call somebody moral if they were always intending to do the right thing and yet always failing. Neither would we call somebody moral if they always did the right thing accidentally. Individually, neither of these systems captures our intuitive feeling that both intentions and consequences are important; and yet I don’t see how they can be combined, because the systems have incompatible intellectual justifications.

But there’s another feature of both Kantian and Utilitarian ethics that I do not like, and it is this: Free will. The systems presuppose individuals with free will, who are culpable for their actions because they are responsible for them. Thus it is morally justifiable to punish criminals because they have willingly chosen something wrong. They “deserve” the punishment, since they are free and therefore responsible for their actions.

I’d like to focus on this issue of deserving punishment, because for me it is the key to understanding morality. By this I mean the notion that doing ill to a criminal helps to restore moral order to the universe, so to speak. But before I discuss punishment I must take a detour into free will, since free will, as traditionally conceived, provides the intellectual foundation for this worldview.

What is free will? In previous ages, humans were conceived of as a composite of body and soul. The soul sent directions to the body through the “will.” The body was material and earthly, while the soul was spiritual and holy. Impulses from the body—for example, anger, lust, gluttony—were bad, in part because they destroyed your freedom. To give into lust, for example, was to yield to your animal nature; and since animals aren’t free, neither is the lustful individual. By contrast, impulses from the soul (or mind) were free because they were unconstrained by the animal instincts that compromise your ability to choose.

Thus free will, as it was originally conceived, was the ability to make choices unconstrained by one’s animal nature and by the material world. The soul was something apart and distinct from one’s body; the mind was its own place, and could make decisions independently of one’s impulses or one’s surroundings. It was even debated whether God Himself could predict the behavior of free individuals. Some people held that even God couldn’t, while others maintained that God did know what people would or wouldn’t do, but God’s knowledge wasn’t the cause of their doing it. (And of course, some people believed in predestination.)

It is important to note that, in this view, free will is an uncaused cause. That is, when somebody makes a decision, this decision is not caused by anything in the material world as we know it. The choice comes straight from the soul, bursting into our world of matter and electricity. The decision would therefore be impossible to predict by any scientific means. No amount of brain imaging or neurological study could explain why a person made a certain decision. Nor could the decision be explained by cultural or social factors, since individuals, not groups, were responsible for them. All decisions were therefore caused by individuals, and that’s the essence of freedom.

It strikes me that this is still how we tend to think about free will, more or less. And yet, this view is based on an outdated understanding of human behavior. We now know that human behavior can be explained by a combination of biological and cultural influences. Our major academic debate—nature vs. nurture—presupposes that people don’t have free will. Behavior is the result of the way your genes are influenced by your environment. There is no evidence for the existence of the soul, and there is no evidence that the mind cannot be explained through understanding the brain.

Furthermore, even without the advancements of the biological and social sciences, the old way of viewing things was not philosophically viable, since it left unexplained how the soul affects the body and vice versa. If the soul and the body were metaphysically distinct, how could the immaterial soul cause the material body to move? And how could a pinch in your leg cause a pain in your mind? What’s more, if there really was an immaterial soul that was causing your body to move, and if these bodily movements truly didn’t have any physical cause, then it’s obvious that your mind would be breaking the laws of physics. How else could the mind produce changes in matter that didn’t have any physical cause?

I think this old way of viewing the body and the soul must be abandoned. Humans do not have free will as originally conceived. Humans do not perform actions that cannot be scientifically predicted or explained. Human behavior, just like cat behavior, is not above scientific explanation. The human mind cannot generate uncaused causes, and does not break the laws of physics. We are intelligent apes, not entrapped gods.

Now you must ask me: But if human behavior can be explained in the same way that squirrel behavior can, how do we have ethics at all? We don’t think squirrels are capable of ethical or unethical behavior because they don’t have minds. We can’t hold a squirrel to any ethical standard, and we therefore can’t justifiably praise or censure a squirrel’s actions. If humans aren’t categorically different from squirrels, then don’t we have to give up on ethics altogether?

This conclusion is not justified. Even though I think it is wrong to say that certain people “deserve” punishment (in the Biblical sense), I do think that certain types of consequences can be justified as deterrents. The difference between humans and squirrels is not that humans are free, but that humans are capable of thinking about the long-term consequences of an action before committing it. Individuals should be held accountable, not because they have free will, but because humans have a great deal of behavioral flexibility, thus allowing their behavior to be influenced by the threat of prison.

This is why it is justifiable to lock away murderers. If it is widely known among the populace that murderers get caught and thrown into prison, this reduces the number of murders. Imprisoning squirrels for stealing peaches, on the other hand, wouldn’t do anything at all, since the squirrel community wouldn’t understand what was going on. With humans, the threat of punishment acts as a deterrent. Prison becomes part of the social environment, and therefore will influence decision-making. But in order for this threat to act as an effective deterrent, it cannot be simply a threat; real murderers must actually face consequences or the threat won’t be taken seriously and thus won’t influence behavior.

To understand how our conception of free will affects the way we organize our society, consider the case of drug addiction. In the past, addicts were seen as morally depraved. This was a direct consequence of the way people thought about free will. If people’s decisions were made independently of their environment or biology, then there were no excuses or mitigating circumstances for drug addicts. Addicts were simply weak, depraved people who mysteriously kept choosing self-destructive behavior. What resulted from this was the disastrous war on drugs, a complete fiasco. Now we know that it is absurd to throw people into jail for being addicted, because addicts are not capable of acting otherwise. This is the very definition of addiction: that one’s decision-making abilities have been impaired.

As we’ve grown more enlightened about drug addiction, we’ve realized that throwing people in jail doesn’t solve anything. Punishment does not act as an effective deterrent when normal decision-making is compromised. By transitioning to a system where addiction is given treatment and support, we have effectively transitioned from the old view of free will to the new view that human behavior is the result of biology, environment, and culture. We don’t hold addicts “responsible” because we know it would be like holding a squirrel responsible for burying nuts. This is a step forward, and it has been taken by abandoning the old views of free will.

I think we should apply this new view of human behavior to other areas of criminal activity. We need to get rid of the old notions of free will and punishment. We must abandon the idea of punishing people because they “deserve” it. Murderers should be punished, not because they deserve to suffer, but for the following two reasons: first, because they have shown themselves to be dangerous and should be isolated; and second, because their punishment helps to act as a deterrent to future murderers. Punishment is just only insofar as these two criteria are met. Once a murderer is made to suffer more than is necessary to deter future crimes, and is isolated more than is necessary to protect others, then I think it is unjustifiable and wrong to punish him further.

In short, we have to give up on the idea that inflicting pain and discomfort on a murderer helps to restore moral balance to the universe. Vengeance in all its forms should be removed from our justice system. It is not our job, or anyone else’s, to seek retribution for wrongs committed. Punishments are only justifiable because they help to protect the community. The aim of punishing murderers is neither to hurt nor to help them, but to prevent other people from becoming murderers. And this, I think, is why the barbarous methods of torture and execution are wrong: I very much doubt that brutal punishments can be justified by any additional efficacy in deterrence. However, I’m sure there is interesting research somewhere on this.

Seen in this way, morality can be understood in the same way we understand language—as a social adaptation that benefits the community as a whole as well as individual members of the community. Morality is a code of conduct imposed by the community on its members, and deviations from this code of conduct are justifiably punished for the safety of the other members of the community. When this code is broken, a person forfeits the protection of the code, and is dealt with in such a way that future deviations from the moral code are discouraged.

Just as Wittgenstein said that a private language is impossible, so I’d argue that a private morality is impossible. A single, isolated individual can be neither moral nor immoral. People are born with a multitude of desires; and every desire is morally neutral. A moral code comes into play when two individuals begin to cooperate. This is because the individuals will almost inevitably have some desires that conflict. A system of behavior is therefore necessary if the two are to live together harmoniously. This system of behavior is their moral code. In just the same way that language results when two people both use the same sounds to communicate the same messages, morality results when two people’s desires and actions are in harmony. Immorality arises when the harmonious arrangement breaks down, and one member of the community satisfies their desire at the expense of the others. Deviations of this kind must have consequences if the system is to maintain itself, and this is the justification for punishment.

One thing to note about this account of moral systems is that they arise for the well-being of their participants. When people are working together, when their habits and opinions are more or less in harmony, when they can walk around in their neighborhood without fearing every person they meet, both the individual and the group benefit. This point is worth stressing, since we now know that the human brain is the product of evolution, and therefore we must surmise that universal features of human behavior, such as morality, are adaptive. The fundamental basis for morality is self-interest. What distinguishes moral from immoral behavior is not that the first is unselfish while the other is selfish, but that the first is more intelligently selfish than the second.

It isn’t hard to see how morality is adaptive. One need only consider the basic tenets of game theory. In the short term, to cooperate with others may not be as advantageous as simply exploiting others. Robbery is a quicker way to make money than farming. And indeed, the potentially huge advantages of purely selfish behavior explain why unethical behavior occurs: sometimes it benefits individuals more to exploit rather than to help one another. Either that, or certain individuals—either from ignorance or desperation—are willing to risk long-term security for short-term gains. Nevertheless, in general, moral behaviors tend to be more advantageous, if only because selfish behavior is riskier. All unethical behavior, even if carried on in secret, carries the risk of making enemies; and in the long run, enemies are less useful than friends. The funny thing about altruism is that it’s often more gainful than selfishness.
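The game-theoretic point here can be made concrete with a short simulation of the iterated prisoner's dilemma. This is a sketch, not anything from the essay itself: the payoff values and the two strategies (tit-for-tat and always-defect) are standard textbook examples. Exploitation wins a single round, but over repeated encounters the cooperator who retaliates comes out far ahead of mutual defection.

```python
# Illustrative iterated prisoner's dilemma: why cooperation can
# out-earn exploitation over repeated encounters.
# Payoffs are the conventional textbook values (an assumption).

PAYOFFS = {  # (my move, opponent's move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, opponent exploits me
    ("D", "C"): 5,  # I exploit a cooperator
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Pure exploitation: defect every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return the total payoffs of two strategies over repeated rounds."""
    hist_a, hist_b = [], []  # each list records the opponent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # two cooperators: (300, 300)
print(play(always_defect, tit_for_tat))  # defector's edge lasts one round: (104, 99)
print(play(always_defect, always_defect))  # mutual defection: (100, 100)
```

The defector gains only in the first round; thereafter retaliation locks both players into the low mutual-defection payoff, and both end up far below the mutual cooperators. That is the "intelligently selfish" logic of the paragraph above in miniature.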

Thus this account of morality can be harmonized with an evolutionary account of human behavior. But what I find most satisfying about this view of morality is that it allows us to see why we care both about intentions and consequences. Intentions are important in deciding how to punish misconduct because they help determine how an individual is likely to behave in the future. A person who stole something intentionally has demonstrated a willingness to break the code, while a person who took something by accident has only demonstrated absent-mindedness. The first person is therefore more of a risk to the community. Nevertheless, it is seldom possible to prove what somebody intended beyond the shadow of a doubt, which is why it is also necessary to consider the consequences of an action. What is more, carelessness as regards the moral code must be forcibly discouraged, otherwise the code will not function properly. This is why, in certain cases, breaches of conduct must be punished even if they were demonstrably unintentional—to discourage other people in the future from being careless.

Let me pause here to sketch out some more philosophical objections to the Utilitarian and Kantian systems, besides the fact that they don’t adequately explain how we tend to think about morality. Utilitarianism does capture something important when it proclaims that actions should be judged insofar as they further the “greatest possible happiness.” Yet taken by itself this doctrine has some problems. The first is that you never know how something is going to turn out, and even the most concerted efforts to help people sometimes backfire. Should these efforts, made in good faith, be condemned as evil if they don’t succeed? What’s more, Utilitarian ethics can lead to disturbing moral questions. For example, is it morally right to kill somebody if you can use his organs to save five other people? Besides this, if the moral injunction is to work constantly towards the “greatest possible happiness,” then we might even have to condemn simple things like a game of tennis, since two people playing tennis certainly could be doing something more humanitarian with their time and energy.

The Kantian system has the opposite problem in that it stresses good intentions and consistency to an absurd degree. If the essence of immorality is to make an exception of oneself—which covers lying, stealing, and murder—then telling a fib is morally equivalent to murdering somebody in cold blood, since both of those actions equally make exceptions of the perpetrator. This is what results if you overemphasize consistency and utterly disregard consequences. What’s more, intentions are, as I said above, basically impossible to prove—and not only to other people, but also to yourself. Can you prove, beyond a shadow of a doubt, that your intentions were pure yesterday when you accidentally said something rude? How do you know your memory and your introspection can be trusted? However, let me leave off with these objections because I think entirely too much time in philosophy is given over to teasing apart your enemies’ ideas and not enough to building your own.

Thus, to repeat myself, both consequences and intentions, both happiness and consistency, must be a part of any moral theory if it is to capture how we do and must think about ethics. Morality is an adaptation. The capacity for morality has evolved because moral systems benefit both groups and individuals. Morality is rooted in self-interest, but it is an intelligent form of self-interest that recognizes that other people are more useful as allies than as enemies. Morality is neither consistency nor pleasure. Morality is consistency for the sake of pleasure. This is why moral strictures that demand that people devote their every waking hour to helping others, or never make exceptions of themselves, are self-defeating: when a moral system is onerous, it isn’t performing its proper function.

But now I must deal with that fateful question: Is morality absolute or relative? At first glance it would seem that my account would put me squarely in the relativist camp, seeing that I point to a community code of conduct. Nevertheless, when it comes to violence I am decidedly a moral absolutist. This is because I think that physical violence can only ever be justified by citing defense. First, to use violence to defend yourself from violent attack is neither moral nor immoral, because at this point the moral code has already broken down. The metaphorical contract has been broken, and you are now in a situation where you must either fight, run, or be killed. The operant rule is now survival and not morality. For the same reason a whole community may justifiably protect itself from invasion by an enemy force (although capitulating is equally defensible). And lastly, violence (in the form of imprisonment) is justified in the case of criminals, for the reasons I discussed above.

What if there are two communities, community A and community B, living next to one another? Both of these communities have their own moral codes which the people abide by. What if a person from community A encounters a person from community B? Is it justifiable for either of them to use violence against the other? After all, each of them is outside the purview of the other’s moral code, since moral codes develop within communities. Well in practice situations like this do commonly result in violence. Whenever Europeans encountered a new community—whether in the Americas or in Africa—the result was typically disastrous for that community. This isn’t simply due to the wickedness of Europeans; it has been a constant throughout history: When different human communities interact, violence is very often the result. And this, by the way, is one of the benefits of globalization. The more people come to think of humanity as one community, the less violence we will experience.

Nevertheless, I think that violence between people from different communities is ultimately immoral, and this is why. To feel it is permissible to kill somebody just because they are not in your group is to consider that person subhuman—as fundamentally different. This is what we now call “Othering,” and it is what underpins racism, sexism, religious bigotry, homophobia, and xenophobia. But of course we now know that it is untrue that other communities, other religions, other races, women, men, or homosexuals or anyone else are “fundamentally” different or in any way subhuman. It is simply incorrect. And I think the recognition that we all belong to one species—with only fairly superficial differences in opinions, customs, rituals, and so on—is the key to moral progress. Moral systems can be said to be comparatively advanced or backward to the extent that they recognize that all humans belong to the same species. In other words, moral systems can be evaluated by looking at how many types of people they include.

This is the reason why it is my firm belief that the world as it exists today—full as it still is of all sorts of violence and prejudice—is morally superior to the world of any earlier era. Most of us have realized that racism was wrong because it was based on a lie; and the same goes for sexism, homophobia, religious bigotry, and xenophobia. These forms of bias were based on misconceptions; they were not only morally wrong, but factually wrong.

Thus we ought to be tolerant of immorality in the past, for the same reason that we excuse people in the past for being wrong about physics or chemistry. Morality cannot be isolated from knowledge. For a long time, the nature of racial and sexual differences was unknown. Europeans had no experience, and thus no understanding, of non-Western cultures. All sorts of superstitions and religious injunctions were believed in, to an extent most of us can’t even appreciate now. Before widespread education and the scientific revolution, people based their opinions on tradition rather than evidence. And in just the same way that it is impossible to justly put someone in prison without evidence of their guilt, it is impossible to be morally developed if your beliefs are based on misinformation. Africans and women used to be believed to be mentally inferior; homosexuals used to be believed to be possessed by evil spirits. Now we know that there is no evidence for these views, and in fact evidence to the contrary, so we can cast them aside; but earlier generations were not so lucky.

To the extent, therefore, that backward moral systems are based on a lack of knowledge, they must be tolerated. This is why we ought to be tolerant of other cultures and of the past. But to the extent that facts are wilfully disregarded in a moral system, that system can be said to be corrupt. Thus the real missionaries are not the ones who spread religion, but the ones who spread knowledge, for an increased understanding of the world allows us to develop our morals.

These are my ideas in their essentials. But for the sake of honesty I have to add that the ideas I put forward above have been influenced by my studies in cultural anthropology, as well as my reading of Locke, Hobbes, Hume, Spinoza, Santayana, Ryle, Wittgenstein, and of course by Mill and Kant. I was also influenced by Richard Dawkins’s discussion of Game Theory in his book, The Selfish Gene. Like most third-rate intellectual work, this essay is, for the most part, a muddled hodgepodge of other people’s ideas.

Review: The Concept of Mind


The Concept of Mind by Gilbert Ryle

My rating: 5 of 5 stars

Men are not machines, not even ghost-ridden machines. They are men—a tautology which is sometimes worth remembering.

The problem of mind is one of those philosophical quandaries that give me a headache and prompt an onset of existential angst whenever I try to think about them. How does consciousness arise from matter? How can a network of nerves create a perspective? And how can this consciousness, in turn, influence the body it inhabits? When we look at a brain, or anywhere else in the physical world, we cannot detect consciousness; only nerves firing and blood rushing. Where is it? The only evidence for consciousness is my own awareness. So how do I know anybody else is conscious? Could it be just me?

If you think about the problem in this way, I doubt you will make any progress either, because it is insoluble. This is where Gilbert Ryle enters the picture. According to Ryle, the philosophy of mind was put on a shaky foundation by Descartes and his followers. When Descartes divided the world into mind and matter, the one private and the other public, he created several awkward problems: How do we know other people have minds? How do the realms of matter and mind interact? How can the mind be sure of the existence of the material world? And so on. This book is an attempt to break away from the assumptions that led to these questions.

Ryle’s philosophy is often compared with that of the later Wittgenstein, and justly so. The main thrusts of their arguments are remarkably similar. This may have been due simply to the influence of Wittgenstein on Ryle, or vice versa—there appears to be some doubt. Regardless, it is appropriate to compare them, as their ideas, taken together, shed light on one another’s philosophy.

Both Wittgenstein and Ryle are extraordinary writers. Wittgenstein is certainly the better of the two, though this is not due to any defect on Ryle’s part. Wittgenstein is aphoristic, sometimes oblique, employing numerous allegories and similes to make his point. Ryle is sharp, direct, and epigrammatic. Wittgenstein is in the same tradition as Nietzsche and Schopenhauer, while Ryle is the direct descendant of Jane Austen. But both of them are witty, quotable, and brilliant. They have managed to create excellent works of philosophy without using any jargon and while avoiding all obscurity. Why can’t philosophy always be written so well?

There is no contradiction, or even paradox, in describing someone as bad at practising what he is good at preaching. There have been thoughtful and original literary critics who have formulated admirable canons of prose style in execrable prose. There have been others who have employed brilliant English in the expression of the silliest theories of what constitutes good writing.

Ryle also has the quality—unusual among philosophers—of being apparently quite extroverted. His eyes are turned not toward himself, but to his surroundings. He speaks with confidence and insight about the way people normally behave and talk, and in general prefers this everyday understanding of things to the tortured theories of his introverted colleagues.

Teachers and examiners, magistrates and critics, historians and novelists, confessors and non-commissioned officers, employers, employees and partners, parents, lovers, friends and enemies all know well enough how to settle their daily questions about the qualities of character and intellect of the individuals with whom they have to do.

This book, his most famous, is written not as a monograph or an analysis, but as a manifesto. Ryle piles epigram upon epigram until you are craving just one qualification, just one admission that he might be mistaken. He even seems to get carried away by the force of his own pen, leading to some needlessly long and repetitious sections. What is more, his style has the defect of all epigrammatists: he is utterly convincing in short gasps, but ultimately leaves his reader grasping for something more systematic.

Ryle is often called an ordinary language philosopher, and the label suits him. Like Wittgenstein, he thinks that philosophical puzzles come about by the abuse of words; philosophers fail to correctly analyze the logical category of words, and thus use them inappropriately, leading to false paradoxes. The Rylean philosopher’s task is to undo this damage. Ryle likens his own project to that of a cartographer in a village. The residents of the village are perfectly able to find their way around and can even give directions. But they might not be able to create an abstract representation of the village’s layout. This is the philosopher’s job: to create a map of the logical layout of language. This will keep strangers to the village from getting lost.

Ryle begins by pointing out some obvious problems with the Cartesian picture—a picture he famously dubs the ‘Ghost in the Machine.’ First, we have no idea how these two metaphysically distinct realms of mind and matter interact. Thus by attempting to explain the nature of human cognition, the Cartesians cordon it off from the familiar world and banish it to a shadow world, leaving unexplained how the shadow is cast.

Second, the Cartesian picture renders all acts of communication into a kind of impossible guessing game. You would constantly have to fathom the significance of a word or gesture by making conjectures as to what is happening in a murky realm behind an impassable curtain (another person’s mind). Conjectures of this kind would be fundamentally dissimilar to other conjectures because there would be, in principle, no way to check them. In the Cartesian picture, people’s minds are absolutely cut off from all outside observation.

Ryle is hardly original in pointing out these two problems, although he does manage to emphasize these embarrassing conundrums with special force. His more original critique is what has been dubbed “Ryle’s Regress.” This is made against what Ryle calls the “intellectualist legend,” which is the notion that all intelligent behaviors are the products of thoughts.

For example, if you produce a grammatically correct English sentence, it means (according to the “legend”) that you have properly applied the criteria of English grammar. However, this must mean that you applied the proper criteria to the criteria, i.e. you applied the meta-criteria that allowed you to choose the rules for English grammar and not the rules for Spanish grammar. But what meta-meta-criteria allowed you to pick the correct meta-criteria for the criteria for the English sentence? (I.e., what anterior rule allowed you to pick the rule that allowed you to choose the rule for determining whether English or Spanish rules should be used instead of the rule for choosing whether salt or sugar should be added to a recipe?—sorry, that’s a mouthful.)

The point is that we are led down an infinite regress if we require rules to precede action. This is one of the classic arguments against cognitive theories of the mind. (I believe Hubert Dreyfus used this same argument in his criticisms of artificial intelligence and cognitive psychology. Considering the strides that A.I. has made since then, I’m sure there must be some way around this regress, though I don’t know what. Hopefully somebody can explain it to me.)
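The shape of the regress can be seen in a minimal sketch (my own illustration, not anything from Ryle’s text): if applying any rule first requires applying a meta-rule that licenses it, the selection procedure never bottoms out.

```python
# Toy model of the "intellectualist legend": applying a rule at any level
# first requires selecting a meta-rule at the level above, with no base case.
def select_rule(level=0):
    # To justify the rule at `level`, we must first select the
    # meta-rule at `level + 1` -- and so on without end.
    return select_rule(level + 1)

# select_rule() never returns; Python halts the regress externally
# with a RecursionError when the interpreter's stack limit is hit.
```

The interpreter’s stack limit plays the role Ryle assigns to the argument itself: the regress is only ever cut off from outside, never resolved from within.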

These are his most forceful reasons for rejecting the Ghost in the Machine. From reading the other reviews here, I gather that many people are fairly convinced by these arguments. Nonetheless, some have accused Ryle of failing to replace the Cartesian picture with anything else. This is not a fair criticism. Ryle does his best to rectify the mistaken picture with his own view, though you may not find this view very satisfying.

Having done his best to discredit the Cartesian picture, Ryle devotes the rest of the book to demonstrating his view that none of the ways we ordinarily use language necessitates or even implies that “the mind is its own place.” This is where he most nearly approaches Wittgenstein, for his main contentions are the following: First, it is only when language is misused by philosophers (and laypeople) that we get the impression that the mind is a metaphysically distinct thing. Second, our intellectual and emotional lives are in fact not cut off and separate from the world; rather, public behavior is at the very core of our being.

Here is just one example. According to the Cartesian view, a person “really knows” how to divide if, when given a problem—let’s say, 144 divided by 24—his mind goes through the necessary steps. Let us say a professor gives a student this problem, and the student correctly responds: 6. The professor conjectures that the student’s mind has gone through the appropriate operation. But what if the professor asks the exact same question five minutes later, and the student responds: 8? And what if he asks yet again, and the student responds: 3? The following dialogue ensues:

PROFESSOR: Ah, you’re just saying random numbers. You really don’t know how to divide.

STUDENT: But my mind performed the correct operation when you asked me the first time. I forgot how to do it after that.

PROFESSOR: How do you know your mind performed the correct operation the first time?

STUDENT: Introspection.

PROFESSOR: But if you can’t remember how to do it now, how can you be sure that you did know previously?

STUDENT: Introspection, again.

PROFESSOR: I don’t believe you. I don’t think you ever knew.

The point of the dialogue is this. According to the Cartesian view, introspection provides not merely the best, but the only true window into the mind. You are the only person who can know your own mind, and everyone else knows it via conjecture. Thus the student, and only the student, would really know if his mind performed the proper operation, and thus he alone would really know if he could divide. Yet this is not the case. We say somebody “knows how to divide” if they can consistently answer questions of division correctly.

Thus, Ryle argues, to “know how to divide” is a disposition. And a disposition cannot be analyzed into episodes. In other words, “knowing how to divide” is not a collection of discrete times when a mind went through the proper operations. Similarly, if I say “the glass is fragile,” I do not mean that it has broken or even that it will necessarily break, just that it would break easily. Fragility, like knowing long division, is a disposition.
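The dispositional reading can be put in a toy sketch (the function and names are my own illustration, not Ryle’s): “knows how to divide” is judged by consistent success across occasions, not by any single inner episode.

```python
# Toy dispositional test (my own sketch): attributing "knows how to divide"
# tracks reliable performance over many trials, not one correct answer.
def knows_how_to_divide(answer, trials):
    return all(answer(a, b) == a // b for a, b in trials)

trials = [(144, 24), (100, 5), (90, 9)]
reliable = knows_how_to_divide(lambda a, b: a // b, trials)  # consistent: True
lucky = knows_how_to_divide(lambda a, b: 6, trials)          # one-off "6": False
```

The student in the dialogue is the `lucky` case: a single correct episode, with no disposition behind it.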

According to Ryle, when philosophers misconstrued what it meant to know how to divide (and other things), they committed a “category mistake.” They miscategorized the phrase; they mistook a disposition for an episode. More generally, the Cartesians mix up two different sorts of knowledge: knowing how and knowing that. They confuse dispositions, capacities, and propensities for rules, facts, and criteria. This leads them into all sorts of muddles.

Here is a classic example. Since Berkeley, philosophers have been perplexed by the mind’s capacity to form abstract ideas. The word “red” encompasses many different particular shades, and is thus abstract. Is our idea of red some sort of vague blend of all particular reds? Or is it a collection of different, distinct shades we bundle together into a group? Ryle contends that this question makes the following mistake: Recognizing the color red is a knowing how. It is a skill we learn, just like recognizing melodies, foreign accents, and specific flavors. It is a capacity we develop; it is not the forming of a mental object, an “idea,” that sits somewhere in a mental space.

Ryle applies this method to problem after problem, which seem to dissolve in the acid of his gaze. It is an incredible performance, and a great antidote for a lot of the conundrums philosophers like to tie themselves up in. Nevertheless, you cannot shake the feeling that for all his directness, Ryle dances around the main question: How does awareness arise from the brain?

Well, I’m not positive about this, but I believe it was never Ryle’s intention to explain this, since he considers the question outside the proper field of philosophy. It is a scientific, not a philosophical question. His goal was, rather, to show that the mind/body problem is not an insoluble mystery or evidence of metaphysical duality, and that the mind is not fundamentally private and untouchable. Humans are social creatures, and it is only with great effort that we keep some things to ourselves.

I certainly cannot keep this review to myself. This was the best work of philosophy I have read since finishing Wittgenstein’s Philosophical Investigations in 2014, and I hope you get a chance to read it too. Is it conclusive? No. Is it irrefutable? I doubt it. But it is witty, eloquent, original, and devoid of nonsense. This is as good as philosophy gets.
