Quotes & Commentary #19: King

This is a short book because most books about writing are filled with bullshit. Fiction writers, present company included, don’t understand very much about what they do – not why it works when it’s good, not why it doesn’t when it’s bad.

—Stephen King, On Writing

Stephen King’s book about writing is among a handful of books whose reading has permanently changed my day-to-day life. This is partly because, as King himself says, it is a book with admirably little bullshit in it.

In this quote, for example, King points out something that is commonly overlooked: being good at doing something is no guarantee of being good at teaching or analyzing it. This applies with special force to artists. Few things are more disappointing than hearing a great artist talk about his work.

I thought about this most recently while watching a movie inspired by Bob Dylan’s life, I’m Not There. In one scene, Dylan (played by Cate Blanchett) is questioned by an intellectual from the BBC. He is asked questions about social and political issues, to which Dylan gives characteristically curt and flippant responses. The intellectual gets angry and concludes that Dylan is a poser; and Dylan, in turn, gets frustrated because the intellectual is obviously missing the point.

The inability of artists to articulate the principles or ideas embodied in their works is just one example of the distinction, made famous by Gilbert Ryle, between knowing how (possessing a skill) and knowing that (possessing factual knowledge). The difference between knowing how to write a protest song about racism, and knowing about the mechanics of racist institutions, is not a difference of degree, but a difference of kind; and there is no contradiction, or even irony, in somebody being able to write good protest songs without being able to explain how he does it, and without having a particularly deep knowledge of what he is protesting.

Stephen King, although certainly no philosopher, is well aware of the difference between knowing how and knowing that. Learning to write is learning a skill; the knowledge is embodied in practice. Thus good writing cannot be reduced to a set of rules, maxims, and principles. And even if such rules did exist, it would not be necessary for a novelist to be able to learn and articulate the rules in order to produce good art, in the same way that it isn’t necessary for children to learn a theory of bike riding to ride a bike.

It is true that, when teaching beginners in any skill, teachers often resort to providing rules. These rules are inevitably simplifications, meant to ease the pupil’s progress. But at a certain point the pupil becomes so adept at the task that it is unnecessary—not to mention impossible, for lack of time—to consciously consult these rules during practice. Not only that, but the pupil learns (largely unconsciously) when and how to interpret the rules (because all rules need interpretation), where to apply them (which does not depend on another rule), and when to break them (because all rules can be broken). This is what it means to be an expert.

Because of this strange ineffability of expert knowledge, at a certain point the learner must resort to observation and imitation. Rather than trying to articulate rules, the learner simply watches what experts do, and tries to recreate it. This is why, as Stephen King says, the only way to become a good writer is to read, read, read, and then write, write, write. No style guide will compensate; no set of rules will suffice; no magic formula exists. Writing, like baking, basketball, and playing the banjo, is an embodied skill, and thus must be learned through imitation.

This is why Stephen King emphasizes, again and again, that aspiring writers must write daily. He advises setting yourself a minimum word count, and then making sure you write that many words, come hell or high water. King’s word count is 2,000, but that’s quite high. For a while I forced myself to write 1,000 words a day. It was very hard at first—some days it was excruciating—but it gradually became easier. Nowadays I’m more lax; 500 is enough to satisfy me. But I will be forever grateful to King for dispensing with the bullshit, for forgetting about the rules, and for encouraging me to put pen to paper.

Quotes & Commentary #17: Spinoza

Men are mistaken in thinking themselves free; their opinion is made up of consciousness of their own actions, and ignorance of the causes by which they are conditioned. Their idea of freedom, therefore, is simply their ignorance of any cause of their actions. As for their saying that human action depends on the will, this is a mere phrase without any idea to correspond thereto. What the will is, and how it moves the body, none of them know; those who boast of such knowledge, and feign dwellings and habitations for the soul, are wont to provoke either laughter or disgust.

—Baruch Spinoza, Ethics

Few things can make you more skeptical about free will than studying anthropology. For me, this had three components.

The first was cultural. I read about the different customs, rituals, religions, arts, superstitions, and worldviews that have existed around the world. Many “facts” that I assumed were universal, obvious, or unquestionable were shown to be pure prejudice. And many behaviors that I assumed to be “natural” were shown to be products of the cultural environment.

It is unsettling, but nonetheless valuable, to consider all the things you do just because that’s what your neighbors, family, and friends do. These include not only superficial habits, but our most basic opinions and values. Our culture is not like a jacket that we put on when we go out into the world; culture is not a superficial layer on our deeper selves. Rather, culture penetrates to the very core of our beings, shaping our most intimate thoughts and sensations.

The next influence was primatology, the study of primate behavior. This came to me most memorably in the books of Jane Goodall, about the chimpanzees she studied. Chimpanzees are our closest relatives. They are recognizably animals and yet so strangely human. They get jealous, become infatuated, bicker, fight, make up, and joke around. They make tools and solve puzzles.

I remember the story of a small chimp who, while walking through the forest with his group, saw a banana out of the corner of his eye. The rest of his group didn’t notice it; and this chimp knew that the bigger ones would take the banana away if they saw him eating it. So he ran off in another direction, causing everyone to follow him, and then secretly snuck back to get the banana. If that’s not human, I don’t know what is.

Last was the study of human evolution. This also involves the study of archaeology: the material culture that hominins have left behind. I held reproductions of the skulls of human ancestors, and examples of the stone tools made by our smaller-brained predecessors. I saw how the tools became more advanced as brain size increased. Crude choppers became the beautiful hand axes of Homo erectus, and these large axes were refined into serrated blades and arrowheads by later species. Finally our species began showing evidence of symbolic thinking: burying people, crafting statues, painting caves, carving flutes, and almost certainly using language.

After seeing the obvious influence of evolution on our capacities and tendencies, after learning about the striking similarities between us and our ape cousins, and after witnessing the pervasive effects of culture upon behavior, my belief in free will was in tatters. True, even if we take all these evolutionary and cultural factors into account, we can’t predict the exact moment when I’m going to scratch my nose. But neither can we predict where a fly will land, or which patch of skin a mosquito will bite. Nobody thinks flies or mosquitoes have free will, so why us?

I normally understand “free will” to mean the ability of an organism to fully determine its own actions. In other words, a free organism is one whose actions cannot be predicted or explained by pointing to anything outside, including genes or upbringing. Not DNA, nor culture, nor childhood experiences would be enough to fully explain a free individual’s behavior. A free action is, in principle, unpredictable; and thus the free agent is morally responsible for his actions.

I do not believe in this type of freedom, and I have not for a long time. For my part, I think Spinoza is exactly right: “free will” is just a name for our ignorance of the causes of our own behavior. If we knew these causes, our actions could be predicted like any other natural phenomenon, and “freedom” would disappear.

This ignorance is not difficult to explain. Human behavior is the product, first, of our environment, which is infinitely varied and constantly changing; and, second, of the human brain, one of the most complex things in the universe. Because of the amount and complexity of the data, along with our lack of understanding, we can’t even come close to making predictions on the scale of individual human actions, like scratching one’s nose. But we can’t conclude from our inability that our actions are thus “free,” any more than we can conclude from our inability to predict where a fly will land that flies possess a mystical “freedom.”

Kurt Vonnegut made this point, with much more wit, in Slaughterhouse-Five. His Tralfamadorians, who can see in the time dimension as well as the spatial dimensions, already know everything that will happen. Thus they have no concept of freedom, and find it puzzling that humans do: “I’ve visited thirty-one inhabited planets in the universe, and I have studied reports on one hundred more. Only on Earth is there any talk of free will.”

To me it seems manifest that the traditional definition of freedom has been thoroughly discredited by what we know about the natural and cultural world. Humans are made of matter obeying physical laws, shaped by evolution, subject to genetic influence, and responsive to the cultural environment. The mind is not a mysterious metaphysical substance, but a product of the human brain; thus the mind and its behavior, like the brain, can be understood scientifically, just like any other animal’s.

All this being said, there are nevertheless ways to redefine free will so that it is compatible with what we know about physics, biology, anthropology, and psychology.

Perhaps free will is simply the inability of a thinking organism to predict what it is about to do? Every person has, at one time or another, been surprised by their own actions. This is because, as the philosopher Gilbert Ryle explained, “A prediction of a deed or a thought is a higher order operation, the performance of which cannot be among the things considered in making the prediction.” That is to say that it is logically impossible to predict how the act of predicting an action will alter the action, because the prediction itself cannot factor into the prediction (you can try to predict how you will predict, but this leads to an infinite regress).

Or perhaps free will is a condition caused by our ignorance of the future? After all, difficult decisions are difficult because we can’t be sure what will happen or how we’ll react. Deciding between two job offers, for example, is only difficult because we can’t be sure which one we’ll like more. If we could be sure—and I mean absolutely sure—which job would make us happier, then there wouldn’t be a decision at all; we would simply take the better job without a dilemma even occurring to us. In this way, our freedom is as much a product of our ignorance of the future as it is our ignorance of the causes of our actions.

What sets humans apart from other animals is not our freedom per se, but our behavioral flexibility. Humans are able to continually adapt to new environments, and to learn new habits, techniques, and concepts throughout their lives. This ability to adapt and to learn, which serves us so well, is not freedom so much as slavery to a different master: our environment. Our genes do not instill in us a specific behavioral pattern, as in ants, but give us the capability to develop many different behavioral patterns in response to our cultural and climatic surroundings. But is it any more “noble” or “free” for our behavior to be determined by social and environmental pressure rather than by genetic predestination?

Probably the best practical definition of freedom I can come up with is this: Humans are free because we are able to alter our behavior based on anticipated consequences. This is what makes morality possible: we can influence people’s behavior by telling them what will happen if they don’t follow the rules. What is more, people can understand that they have more to gain by playing along and helping their neighbors than by acting impulsively and at the expense of their neighbors. Thus our intelligence, by allowing us to understand the consequences of our actions, gives us the ability to be more intelligently selfish: we can weigh long-term benefits against short-term pleasures.

Freedom is, of course, a fundamental concept in our political philosophy. So if we choose to stop believing in freedom as traditionally defined, how are we to proceed? Here is my answer.

The important distinction to be made in political philosophy, regarding freedom, is the one between freedom and coercion. The difference between freedom and coercion is not that one is self-caused and the other caused from outside—since even the freest person imaginable has been profoundly shaped by their environment, and is making decisions in response to their environment. Rather, there are two important differences: coercion implies force (or the threat of force) while freedom doesn’t; and “free” actions usually benefit the acting individual, while “coerced” actions usually benefit an outside party at the expense of the acting individual.

The difference thus has nothing to do with freedom as such (freedom from environmental influences), but is determined by the type of environmental influence (violent or non-violent), and by the party (actor or not) that receives the benefits. (Even though an altruistic act benefits a party besides the actor, it is not a coerced act because, first, it’s not motivated by threat of violence, and, second, because altruistic acts usually benefit the actor in some way, either socially or psychologically.)

I find that some people become horrified when I tell them about my rejection of freedom. For my part, I find that my disbelief in freedom has made me more tolerant. When I consider that people are products of their environment and their genes, I stop judging and blaming them. I know that, ultimately, they are not responsible for who they are. In a profound sense, they can’t help it. We are each born with certain desires, and throughout our lives other desires are instilled into us. Our behavior is the end product of an internal battle of competing desires.

If you think that morality is impossible with this worldview, I beg you to read Spinoza’s Ethics. You will find that, not only is morality possible, but it is necessary, logical, and beautiful.

On Egotism and Education

A while ago a friend asked me an interesting question.

As usual, I was engrossed in some rambling rant about a book I was reading—no doubt enlarging upon the author’s marvelous intellect (and, by association, my own). My poor friend, who is by now used to this sort of thing, suddenly asked me:

“Do you really think reading all these books has made you a better person?”

“Well, yeah…” I stuttered. “I think so…”

An awkward silence took over. I could truthfully say that reading had improved my mind, but that wasn’t the question. Was I better? Was I more wise, more moral, calmer, braver, kinder? Had reading made me a more sympathetic friend, a more caring partner? I didn’t want to admit it, but the answer seemed to be no.

This wasn’t an easy thing to face up to. My reading was a big part of my ego. I was immensely proud, indeed even arrogant, about all the big books I’d gotten through. Self-study had strengthened a sense of superiority.

But now I was confronted with the fact that, however much more knowledgeable and clever I had become, I had no claim to superiority. In fact—although I hated even to consider the possibility—reading could have made me worse in some ways, by giving me a justification for being arrogant.

This phenomenon is by no means confined to myself. Arrogance, condescension, and pretentiousness are ubiquitous qualities in intellectual circles. I know this both at first- and second-hand. While lip-service is often given to humility, the intellectual world is rife with egotism. And often I find that the better educated someone is, the more likely they are to assume a condescending tone.

This is the same condescending tone that I sometimes found myself using in conversations with friends. But condescension is of course more than a tone; it is an attitude towards oneself and the world. And this attitude can be fostered and reinforced by habits you pick up through intellectual activity.

One of these habits is argumentativeness, which for me is most closely connected with reading philosophy. Philosophy is, among other things, the art of argument; and good philosophers are able to bring to their arguments a level of rigor, clarity, and precision that is truly impressive. The irony here is that there is far more disagreement in philosophy than in any other discipline. To be fair, this is largely due to the abstract, mysterious, and often paradoxical nature of the questions philosophers investigate—which resist even the most thorough analysis.

Nevertheless, given that their professional success depends upon putting forward the strongest argument to a given problem, philosophers devote a lot of time to picking apart the theories and ideas of their competitors. Indeed, the demolition of a rival point of view can assume supreme importance. A good example of this is Gilbert Ryle’s Concept of Mind—a brilliant and valuable book, but one that is mainly devoted to debunking an old theory rather than putting forward a new one.

This sort of thing isn’t confined to philosophy, of course. I have met academics in many disciplines whose explicit goal is to quash another theory rather than to provide a new one. I can sympathize with this, since proving an opponent wrong can feel immensely powerful. To find a logical fallacy, an unwarranted assumption, an ambiguous term, an incorrect generalization in a competitor’s work, and then to focus all your firepower on this structural weakness until the entire argument comes tumbling down—it’s really satisfying. Intellectual arguments can have all the thrill of combat, with none of the safety hazards.

But to steal a phrase from the historian Richard Fletcher, disputes of this kind usually generate more heat than light. Disproving a rival claim is not the same thing as proving your own claim. And when priority is given to finding the weaknesses rather than the strengths of competing theories, the result is bickering rather than the pursuit of truth.

To speak from my own experience, in the past I’ve gotten to the point where I considered it a sign of weakness to agree with somebody. Endorsing someone else’s conclusions without reservations or qualifications seemed spineless. And to fail to find the flaws in another thinker’s argument—or, worse yet, to put forward your own flawed argument—was simply mortifying for me, a personal failing. Needless to say, this mentality is not desirable or productive, either personally or intellectually.

Besides being argumentative, another condescending attitude that intellectual work can reinforce is name-dropping.

In any intellectual field, certain thinkers reign supreme. Their theories, books, and even their names carry a certain amount of authority; and this authority can be commandeered by secondary figures through name-dropping. This is more than simply repeating a famous person’s name (although that’s common); it involves positioning oneself as an authority on that person’s work.

Two books I read recently—Mortimer Adler’s How to Read a Book, and Harold Bloom’s The Western Canon—are prime examples of this. Both authors wield the names of famous authors like weapons. Shakespeare, Plato, and Newton are bandied about, used to cudgel enemies and to cow readers into submission. References to famous thinkers and writers can even be used as substitutes for real argument. This is the infamous argument from authority, a fallacy easy to spot when explicit, but much harder when used in the hands of a skilled name-dropper.

I have certainly been guilty of this. Even while I was still an undergraduate, I realized that big names have big power. If I even mentioned the names of Dante or Milton, Galileo or Darwin, Hume or Kant, I instantly gained intellectual clout. And if I found a way to connect the topic under discussion to any famous thinker’s ideas—even if that connection was tenuous and forced—it gave my opinions weight and made me seem more “serious.” Of course I wasn’t doing this intentionally to be condescending or lazy. At the time, I thought that name-dropping was the mark of a dedicated student, and perhaps to a certain extent it is. But there is a difference between appropriately citing an authority’s work and using their work to intimidate people.

There is a third way that intellectual work can lead to condescending attitudes, and that is, for lack of a better term, political posturing. This particular attitude isn’t very tempting for me, since I am by nature not very political, but this habit of mind is extremely common nowadays.

By political posturing I mean several related things. Most broadly, I mean when someone feels that people (himself included) must hold certain beliefs in order to be acceptable. These can be political or social beliefs, but they can also be more abstract, theoretical beliefs. In any group—be it a university department, a political party, or just a bunch of friends—a certain amount of groupthink is always a risk. Certain attitudes and opinions become associated with the group, and they become a marker of identity. In intellectual life this is a special hazard because proclaiming fashionable and admirable opinions can replace the pursuit of truth as the criterion of acceptability.

At its most extreme, this kind of political posturing can lead to a kind of gang mentality, wherein disagreement is seen as evil and all dissent must be punished with ostracism and mob justice. This can be observed in the Twitter shame campaigns of recent years, but a similar thing happens in intellectual circles.

During my brief time in graduate school, I felt an intense and ceaseless pressure to espouse leftist opinions. This pressure seemed to be ubiquitous: students and professors sparred with one another, in person and in print, by trying to prove that their rivals were not genuinely right-thinking (or “left-thinking,” as the case may be). Certain thinkers could not be seriously discussed, much less endorsed, because their works had intolerable political ramifications. Contrariwise, questioning the conclusions of properly left-thinking people could leave you vulnerable to accusations about your fidelity to social justice or economic equality.

But political posturing has a milder form: know-betterism. Know-betterism is political posturing without the moral outrage, and its victims are smug rather than indignant.

The book Language, Truth and Logic by A.J. Ayer comes to mind, wherein the young philosopher, still in his mid-twenties, simply dismisses the work of Plato, Aristotle, Spinoza, Kant, and others as hogwash, because it doesn’t fit into his logical positivist framework.

Indeed, logical positivism is an excellent example of the pernicious effects of know-betterism. In retrospect, it seems incredible that so many brilliant people endorsed it, because logical positivism has crippling and obvious flaws. But not only did people believe it, but they thought it was “The Answer”—the solution to every philosophical problem—and considered anyone who thought otherwise a crank or a fool, somebody who couldn’t see the obvious. This is the danger of groupthink: when everyone “in the know” believes something, it can seem obviously right, regardless of the strength of the ideas.

The last condescending attitude I want to mention is rightness—the obsession with being right. Now of course there’s nothing wrong with being right. Getting nearer to the truth is the goal of all honest intellectual work. But to be overly preoccupied with being right is, I think, both an intellectual and a personal shortcoming.

As far as I know, the only area of knowledge in which real certainty is possible is mathematics. The rest of life is riddled with uncertainty. Every scientific theory might, and probably will, be overturned by a better theory. Every historical treatise is open to revision when new evidence, priorities, and perspectives arise. Philosophical positions are notoriously difficult to prove, and new refinements are always around the corner. And despite the best efforts of the social sciences, the human animal remains a perpetually surprising mystery.

To me, this uncertainty in our knowledge means that you must always be open to the possibility that you are wrong. The feeling of certainty is just that—a feeling. Our most unshakeable beliefs are always open to refutation. But when you have read widely on a topic, studied it deeply, thought it through thoroughly, it gets more and more difficult to believe that you are possibly in error. Because so much effort, thought, and time has gone into a conclusion, it can be personally devastating to think that you are mistaken.

This is human, and understandable, but it can also clearly lead to egotism. For many thinkers, it becomes their goal in life to impose their conclusions upon the world. They struggle valiantly for the acceptance of their opinions, and grow resentful and bitter when people disagree with or, worse, ignore them. Every exchange thus becomes a struggle to push your views down another person’s throat.

This is not only an intellectual shortcoming—since it is highly unlikely that your views represent the whole truth—but it is also a personal shortcoming, since it makes you deaf to other people’s perspectives. When you are sure you’re right, you can’t listen to others. But everyone has their own truth. I don’t mean that every opinion is equally valid (since there are such things as uninformed opinions), but that every opinion is an expression, not only of thoughts, but of emotions, and emotions can’t be false.

If you want to have a conversation with somebody instead of giving them a lecture, you need to believe that they have something valuable to contribute, even if they are disagreeing with you. In my experience it is always better, personally and intellectually, to try to find some truth in what someone is saying than to search for what is untrue.

Lastly, being overly concerned with being right can make you intellectually timid. Going out on a limb, disagreeing with the crowd, putting forward your own idea—all this puts you at risk of being publicly wrong, and thus will be avoided out of fear. This is a shame. The greatest adventure you can take in life and thought is to be extravagantly wrong. Name any famous thinker, and you will be naming one of the most gloriously incorrect thinkers in history. Newton, Darwin, Einstein—every one of them has been wrong about something.

For a long time I have been guilty of all of these mentalities—argumentativeness, name-dropping, political posturing, know-betterism, and rightness—and to a certain extent I probably always will be. What makes them so easy to fall into is that they are positive attitudes taken to excess. It is admirable and good to subject claims to logical scrutiny, to read and cite major authorities, to advocate for causes you think are right, to respect the opinions of your peers and colleagues, and to prioritize getting to the truth.

But taken to excess, these habits can lead to egotism. They certainly have with me. This is not a matter of simple vanity. Not only can egotism cut you off from real intimacy with other people, but it can lead to real unhappiness, too.

When you base your self-worth on beating other people in argument, being more well read than your peers, being on the morally right side, being in the know, being right and proving others wrong, then you put yourself at risk of having your self-worth undermined. To be refuted will be mortifying, to be questioned will be infuriating, to be contradicted will be intolerable. Simply put, such an attitude will put you at war with others, making you defensive and quick-tempered.

An image that springs to mind is of a giant castle with towering walls, a moat, and a drawbridge. On the inside of this castle, in the deepest chambers of the inner citadel, is your ego. The fortifications around your ego are your intellectual defenses—your skill in rhetoric, logic, argument, and debate, and your impressive knowledge. All of these defenses are necessary because your sense of self-worth depends on certain conditions: being perceived, and perceiving oneself, as clever, correct, well-educated, and morally admirable.

Intimacy is difficult in these circumstances. You let down the drawbridge for people you trust, and let them inside the walls. But you test people for a long time before you get to this point—making sure they appreciate your mind and respect your opinions—and even then, you don’t let them come into the inner citadel. You don’t let yourself be totally vulnerable, because even a passing remark can lead to crippling self-doubt when you equate your worth with your intellect.

Thus the fundamental mindset that leads to all of the bad habits described above is the belief that being smart, right, or knowledgeable is the source of your worth as a human being. This is dangerous, because it means that you constantly have to reinforce the idea that you have all of these qualities in abundance. Life then becomes a constant performance, an act for others and for yourself. And because a part of you knows that it’s an act—a voice you try to ignore—it also leads to considerable bad faith.

As for the solution, I can only speak from my own experience. The trick, I’ve found, is to let down my guard. Every time you defend yourself you make yourself more fragile, because you tell yourself that there is a part of you that needs to be defended. When you let go of your anxieties about being wrong, being ignorant, or being rejected, your intellectual life will be enriched. You will find it easier to learn from others, to consider issues from multiple points of view, and to propose original solutions.

Thus I can say that reading has made me a better person, not because I think intellectual people are worth more than non-intellectuals, but because I realized that they aren’t.

Review: The Concept of Mind

The Concept of Mind by Gilbert Ryle

My rating: 5 of 5 stars

Men are not machines, not even ghost-ridden machines. They are men—a tautology which is sometimes worth remembering.

The problem of mind is one of those philosophical quandaries that give me a headache and prompt an onset of existential angst whenever I try to think about them. How does consciousness arise from matter? How can a network of nerves create a perspective? And how can this consciousness, in turn, influence the body it inhabits? When we look at a brain, or anywhere else in the physical world, we cannot detect consciousness; only nerves firing and blood rushing. Where is it? The only evidence for consciousness is my own awareness. So how do I know anybody else is conscious? Could it be just me?

If you think about the problem in this way, I doubt you will make any progress either, because it is insoluble. This is where Gilbert Ryle enters the picture. According to Ryle, the philosophy of mind was put on a shaky foundation by Descartes and his followers. When Descartes divided the world into mind and matter, the first private and the other public, he created several awkward problems: How do we know other people have minds? How do the realms of matter and mind interact? How can the mind be sure of the existence of the material world? And so on. This book is an attempt to break away from the assumptions that led to these questions.

Ryle’s philosophy is often compared with that of the later Wittgenstein, and justly so. The main thrusts of their arguments are remarkably similar. This may have been due simply to the influence of Wittgenstein on Ryle, or vice versa—there appears to be some doubt. Regardless, it is appropriate to compare them, since their ideas, taken together, help to shed light on one another’s philosophies.

Both Wittgenstein and Ryle are extraordinary writers. Wittgenstein is certainly the better of the two, though this is not due to any defect on Ryle’s part. Wittgenstein is aphoristic, sometimes oblique, employing numerous allegories and similes to make his point. Ryle is sharp, direct, and epigrammatic. Wittgenstein is in the same tradition as Nietzsche and Schopenhauer, while Ryle is the direct descendant of Jane Austen. But both of them are witty, quotable, and brilliant. They have managed to create excellent works of philosophy without any jargon and without any obscurity. Why can’t philosophy always be written so well?

There is no contradiction, or even paradox, in describing someone as bad at practising what he is good at preaching. There have been thoughtful and original literary critics who have formulated admirable canons of prose style in execrable prose. There have been others who have employed brilliant English in the expression of the silliest theories of what constitutes good writing.

Ryle also has the quality—unusual among philosophers—of being apparently quite extroverted. His eyes are turned not toward himself, but to his surroundings. He speaks with confidence and insight about the way people normally behave and talk, and in general prefers this everyday understanding of things to the tortured theories of his introverted colleagues.

Teachers and examiners, magistrates and critics, historians and novelists, confessors and non-commissioned officers, employers, employees and partners, parents, lovers, friends and enemies all know well enough how to settle their daily questions about the qualities of character and intellect of the individuals with whom they have to do.

This book, his most famous, is written not as a monograph or an analysis, but as a manifesto. Ryle piles epigram upon epigram until you are craving just one qualification, just one admission that he might be mistaken. He even seems to get carried away by the force of his own pen, leading to some needlessly long and repetitious sections. What is more, his style has the defect of all epigrammatists: he is utterly convincing in short gasps, but ultimately leaves his reader grasping for something more systematic.

Ryle is often called an ordinary language philosopher, and the label suits him. Like Wittgenstein, he thinks that philosophical puzzles come about through the abuse of words; philosophers fail to correctly analyze the logical category of words, and thus use them inappropriately, leading to false paradoxes. The Rylean philosopher’s task is to undo this damage. Ryle likens his own project to that of a cartographer in a village. The residents of the village are perfectly able to find their way around and can even give directions. But they might not be able to create an abstract representation of the village’s layout. This is the philosopher’s job: to create a map of the logical layout of language, so that newcomers do not get lost.

Ryle begins by pointing out some obvious problems with the Cartesian picture—a picture he famously dubs the ‘Ghost in the Machine.’ First, we have no idea how these two metaphysically distinct realms of mind and matter interact. Thus by attempting to explain the nature of human cognition, the Cartesians cordon it off from the familiar world and banish it to a shadow world, leaving unexplained how the shadow is cast.

Second, the Cartesian picture renders all acts of communication into a kind of impossible guessing game. You would constantly have to fathom the significance of a word or gesture by making conjectures as to what is happening in a murky realm behind an impassable curtain (another person’s mind). Conjectures of this kind would be fundamentally dissimilar to other conjectures because there would be, in principle, no way to check them. In the Cartesian picture, people’s minds are absolutely cut off from all outside observation.

Ryle is hardly original in pointing out these two problems, although he does manage to emphasize these embarrassing conundrums with special force. His more original critique is what has been dubbed “Ryle’s Regress.” This is made against what Ryle calls the “intellectualist legend,” which is the notion that all intelligent behaviors are the products of thoughts.

For example, if you produced a grammatically correct English sentence, it means (according to the “legend”) that you have properly applied the correct criteria for English grammar. However, this must mean that you applied the proper criteria to the criteria, i.e. you applied the meta-criteria that allowed you to choose the rules for English grammar and not the rules for Spanish grammar. But what meta-meta-criteria allowed you to pick the correct meta-criteria for the criteria for the English sentence? (I.e., what anterior rule allowed you to pick the rule that allowed you to choose the rule for determining whether English or Spanish rules should be used instead of the rule for choosing whether salt or sugar should be added to a recipe?—sorry, that’s a mouthful.)

The point is that we are led down an infinite regress if we require rules to precede action. This is one of the classic arguments against cognitive theories of the mind. (I believe Hubert Dreyfus used this same argument in his criticisms of artificial intelligence and cognitive psychology. Considering the strides that A.I. has made since then, I’m sure there must be some way around this regress, though I don’t know what it is. Hopefully somebody can explain it to me.)
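To make the shape of the regress concrete, here is a small toy sketch in Python (my own illustration, not anything drawn from Ryle’s text). On the intellectualist legend, doing anything intelligently first requires consulting the right rule; but choosing which rule to consult is itself an intelligent act that needs its own prior rule, and the chain never bottoms out.

```python
# A toy illustration of Ryle's Regress (my own gloss, not Ryle's formulation).
# If every intelligent act must be preceded by consulting a rule, then picking
# that rule is itself an act needing its own prior rule, and so on forever.

def perform(action, depth=0, max_depth=4):
    indent = "  " * depth
    if depth == max_depth:
        print(indent + "...and so on, forever: the regress never bottoms out")
        return
    print(indent + f"to do {action!r}, first pick a rule for it")
    # But picking a rule is itself an act, so it too needs a prior rule:
    perform(f"pick a rule for {action!r}", depth + 1, max_depth)

perform("form a grammatical English sentence")
```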

These are his most forceful reasons for rejecting the Ghost in the Machine. From reading the other reviews here, I gather that many people are fairly convinced by these arguments. Nonetheless, some have accused Ryle of failing to replace the Cartesian picture with anything else. This is not a fair criticism. Ryle does his best to rectify the mistaken picture with his own view, though you may not find this view very satisfying.

After doing his best to discredit the Cartesian picture, the rest of the book is devoted to demonstrating Ryle’s view that none of the ways we ordinarily use language necessitate or even imply that “the mind is its own place.” This is where he most nearly approaches Wittgenstein, for his main contentions are the following: First, it is only when language is misused by philosophers (and laypeople) that we get the impression that the mind is a metaphysically distinct thing. Second, our intellectual and emotional lives are in fact not cut off and separate from the world; rather, public behavior is at the very core of our being.

Here is just one example. According to the Cartesian view, a person “really knows” how to divide if, when given a problem—let’s say, 144 divided by 24—his mind goes through the necessary steps. Let us say a professor gives a student this problem, and the student correctly responds: 6. The professor conjectures that the student’s mind has gone through the appropriate operation. But what if the professor asks him the exact same question five minutes later, and the student responds: 8? And what if he did it again, and the student responds: 3? The following dialogue ensues:

PROFESSOR: Ah, you’re just saying random numbers. You really don’t know how to divide.

STUDENT: But my mind performed the correct operation when you asked me the first time. I forgot how to do it after that.

PROFESSOR: How do you know your mind performed the correct operation the first time?

STUDENT: Introspection.

PROFESSOR: But if you can’t remember how to do it now, how can you be sure that you did know previously?

STUDENT: Introspection, again.

PROFESSOR: I don’t believe you. I don’t think you ever knew.

The point of the dialogue is this. According to the Cartesian view, introspection provides not merely the best, but the only true window into the mind. You are the only person who can know your own mind, and everyone else knows it via conjecture. Thus the student, and only the student, would really know if his mind performed the proper operation, and thus he alone would really know if he could divide. Yet this is not the case. We say somebody “knows how to divide” if they can consistently answer questions of division correctly.

Thus, Ryle argues, to “know how to divide” is a disposition. And a disposition cannot be analyzed into episodes. In other words, “knowing how to divide” is not a collection of discrete times when a mind went through the proper operations. Similarly, if I say “the glass is fragile,” I do not mean that it has broken or even that it will necessarily break, just that it would break easily. Fragility, like knowing long division, is a disposition.
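If a sketch helps, here is one rough way to picture the dispositional point (a toy example of my own, not something from Ryle’s book): we ascribe “knows how to divide” on the basis of reliably correct performance across many questions, without ever appealing to a private inner episode.

```python
# A toy sketch of skill-as-disposition (my own example, not from Ryle's book).
# "Knows how to divide" is ascribed on the basis of reliably correct
# performance, not by inspecting any private mental episode.

import random

def knows_how_to_divide(answer, trials=20):
    """Ascribe the skill iff the candidate reliably answers division questions."""
    for _ in range(trials):
        quotient, divisor = random.randint(1, 12), random.randint(1, 12)
        if answer(quotient * divisor, divisor) != quotient:
            return False
    return True  # consistent success across trials: a disposition, not an episode

print(knows_how_to_divide(lambda x, y: x // y))                  # reliably correct: True
print(knows_how_to_divide(lambda x, y: random.randint(1, 12)))   # guessing: almost surely False
```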

According to Ryle, when philosophers misconstrued what it meant to know how to divide (and other things), they committed a “category mistake.” They miscategorized the phrase; they mistook a disposition for an episode. More generally, the Cartesians mix up two different sorts of knowledge: knowing how and knowing that. They confuse dispositions, capacities, and propensities for rules, facts, and criteria. This leads them into all sorts of muddles.

Here is a classic example. Since Berkeley, philosophers have been perplexed by the mind’s capacity to form abstract ideas. The word “red” encompasses many different particular shades, and is thus abstract. Is our idea of red some sort of vague blend of all particular reds? Or is it a collection of different, distinct shades we bundle together into a group? Ryle contends that this question rests on a mistake: recognizing the color red is a case of knowing how. It is a skill we learn, just like recognizing melodies, foreign accents, and specific flavors. It is a capacity we develop; it is not the forming of a mental object, an “idea,” that sits somewhere in a mental space.

Ryle applies this method to problem after problem, which seem to dissolve in the acid of his gaze. It is an incredible performance, and a great antidote for a lot of the conundrums philosophers like to tie themselves up in. Nevertheless, you cannot shake the feeling that for all his directness, Ryle dances around the main question: How does awareness arise from the brain?

Well, I’m not positive about this, but I believe it was never Ryle’s intention to explain this, since he considers the question outside the proper field of philosophy. It is a scientific, not a philosophical question. His goal was, rather, to show that the mind/body problem is not an insoluble mystery or evidence of metaphysical duality, and that the mind is not fundamentally private and untouchable. Humans are social creatures, and it is only with great effort that we keep some things to ourselves.

I certainly cannot keep this review to myself. This was the best work of philosophy I have read since finishing Wittgenstein’s Philosophical Investigations in 2014, and I hope you get a chance to read it too. Is it conclusive? No. Is it irrefutable? I doubt it. But it is witty, eloquent, original, and devoid of nonsense. This is as good as philosophy gets.
