Review: From a Logical Point of View


From a Logical Point of View: Nine Logico-Philosophical Essays by Willard Van Orman Quine

My rating: 4 of 5 stars

This book is difficult for me to review, mainly because there were so many parts of it that I did not fully understand. Quine is not writing for the general reader; he is writing for professional philosophers—a category that excludes people such as myself, who have not taken a single course in formal logic. Nevertheless, there are some parts of this book—particularly the first two essays, “On What There Is” and “Two Dogmas of Empiricism”—which can be understood by the persistent amateur.

I will try to explain what I think I know about Quine, subject to the very important caveat that these are the general impressions of somebody who is not an expert. I might easily be wrong.

Quine is an American, and so is very literal; he likes things he can touch, or at least can clearly define. This leads him to a kind of ontological puritanism: he wishes to admit as few types of entities into existence as possible. The most obvious token of this is his materialism. Quine thinks the world is fundamentally matter; thus, he rejects the existence of spirits, and, more surprisingly, of minds—at least minds as distinctly different metaphysical objects. (He is fine with keeping mentalistic terminology, so long as it is understood as paraphrases of behavioral phenomena.) This also prompts Quine to reject other, more banal, sorts of things like meanings and properties. In fact, Quine only acknowledges the existence of two sorts of things: physical objects, and sets (or classes). If I am not mistaken, Quine’s belief in something so abstract as a logical set is motivated by his famous indispensability argument—that we ought to believe in the types of things our theories of the world need.

Quine’s materialism is tied to two other -isms: holism and naturalism. By naturalism, I mean that Quine thinks that our knowledge comes from observation, from experience, from science; furthermore, that this is the only type of knowledge we have available. Quine would never attempt anything like what Descartes did, seeking to ground all of the contingent assertions of science in an unquestionable first principle (in Descartes’ case, this being that he thinks, and therefore is). Quine is even uncomfortable with doctrines such as Wittgenstein’s, which hold philosophy to be a sort of second-level activity, a discipline which tackles questions of a fundamentally different sort than those investigated by scientists. For Quine, there are no fundamentally different sorts of questions: all questions are questions about the natural world, and thus on identical epistemological and ontological footing. The only difference between philosophy and science, for Quine, is that philosophers ask more general questions.

Quine’s holism is, perhaps, the most interesting aspect of his views. The logical positivists thought that individual statements could be accepted or rejected based on our experiences. In other words, we make a statement about the physical world, and then go about trying to verify it with some experience. But Quine points out that this is far too simple an account. Our statements do not exist in isolation, but are tied to an entire web of beliefs—some very abstract and remote from any experience.

Keep this in your mind’s eye: a huge, floating hunk of miscellaneous trash, adrift in the ocean. Now, only some of this trash directly touches the ocean; these are the parts of our knowledge that directly ‘touch’ the experiential world. A great part of this trash, however, lies in the center of the mass, far away from the water; and this is analogous to our most abstract beliefs. If this gigantic trash island were to hit something—let us say, a big boat—two things could happen. The boat could be destroyed, and its wreckage simply added onto the floating trash island; or, the boat could tear its way through the trash island, changing its shape dramatically. These are, roughly, the two things that can happen when we face a novel experience: we can somehow assimilate it into our old beliefs, or we can reconfigure our whole web of beliefs to accommodate this new information.

I will drop the metaphor. What Quine is saying is that there are no beliefs of ours that cannot be revised—nothing is sacred. We have even considered revising our principles of logic, previously so unquestionable, in the face of quantum weirdness. There are also no experiences that could not, in principle, be explained away: we could cite hallucinations or mental illness or human error as the reason behind the anomalous experience.

Keeping Quine’s naturalism and holism in mind, it is pretty clear why he rejects the main tenets of logical positivism. First, Quine points out the vagueness of what philosophers mean when they talk about ‘analytic statements’. The classic case of an analytic statement is “all bachelors are unmarried,” which is true by definition: since a bachelor is defined as an unmarried man, it could not be otherwise that bachelors are unmarried. But note that this relies on the idea that ‘bachelor’ has the same ‘meaning’ as the phrase ‘unmarried man’. But what is a ‘meaning’? It sounds like a mental phenomenon; and because Quine does not hold minds to exist, he is very skeptical about ‘meanings’. So in what sense do ‘meanings’ exist? Can they be paraphrased into behavioral terminology? Quine does not exactly rule it out, but is rather dubious.

Quine’s holism is also at odds with the project of logical positivism. For, as already noted, the logical positivists regard the meaning of a statement as its method of verification; but Quine believes—and I think quite rightly—that statements do not exist in isolation, but rely on a whole background web of beliefs and doctrines. Here is a concrete example. Let us say we wanted to go out and verify the statement ‘flying saucers are real.’ We wander around with our camera, and then suddenly see a shiny disk floating through the air. We snap some photos, and pronounce our statement ‘verified’. But will people believe us? Scientists look at the object, and say that it is a weather balloon; psychologists examine us, and say that we are demented. The statement has thus not been verified at all by our experience; and even if we had better evidence of flying saucers than a few photographs, it is at least conceivable that we could go on finding alternative explanations—secret government aircraft, some mad scientist’s invention, an elaborate prank, etc.

I will stop trying to summarize his arguments here, because I feel like I am already in over my head. I will say, however, that Quine’s argument against logical positivism seems to rely on his own presumptions about knowledge and the world—which may, after all, be quite reasonable, but this still does not make for a conclusive argument. In short, Quine may be arguing against the dogmas of logical empiricism with dogmas of his own. I often had this experience while reading Quine: at first I would disagree; but then, after formulating my disagreement, I would realize I was only begging the question, and that we were starting with very different assumptions.

Quine is preoccupied with the idea of ontological commitment. He is exercised by the felt necessity of postulating the existence of the things used in discourse, like meanings, mathematical objects, and so forth. These are, no doubt, important questions; yet I do not find them terribly interesting to think about. In my experience, wondering about whether something ‘really exists’ often leads up dark intellectual alleys. When it comes to things like UFOs, the question is doubtless a vital one to ask; but when it comes to things like ‘sets’ and ‘meanings’, it does not excite me: for what would be the difference if sets ‘really existed’ or if they were just tools used in discourse with no existence outside of names and thought? I will leave these desert landscapes of logic for ones more verdant.

To conclude, Quine was obviously a brilliant man; he was, in fact, so brilliant, that I cannot understand how brilliant he was.




Review: The Complete Essays


The Complete Essays by Michel de Montaigne

My rating: 5 of 5 stars

e’ssay. (2) A loose sally of the mind; an irregular indigested piece; not a regular and orderly composition.
—From Samuel Johnson’s Dictionary of the English Language.

Now I finally have an answer to the famous “desert island book” question: This book. It would have to be. Not that Montaigne’s Essays is necessarily the greatest book I’ve ever read—it’s not. But here Montaigne managed to do something that has eluded even our most advanced modern science: to preserve a complete likeness of a person. Montaigne lives and breathes in these pages, just as much as he would if he’d been cryogenically frozen and brought back to life before your eyes.

Working your way through this book is a little like starting a relationship. At first, it’s new and exciting. But eventually the exhilaration wears off. You begin looking for other books, missing the thrill of first love. But what Montaigne lacks in bells and whistles, he more than compensates for with his constant companionship. You learn about the intimacies of his eating habits and bowel movements, his philosophy of sex as well as science, his opinion on doctors and horsemanship. He lets it all hang out. And after a long and stressful day, you know Montaigne will be waiting on your bedside table to tell you a funny anecdote, to have easygoing conversation, or to just pass the time.

To quote Francis Bacon’s Essays: “Some books are to be tasted, others to be swallowed, and some few to be chewed and digested.” Montaigne’s essays are to be sipped. This book took me a grand total of six months to read. I would dip into it right before bed—just a few pages. Sometimes, I tried to spend more time on the essays, but I soon gave it up. Montaigne’s mind drifts from topic to topic like a sleepwalker. He has no attention span for longwinded arguments or extended exposition. It’s not quite stream-of-consciousness, but almost. As a result, whenever I tried to spend an hour on his writing, I got bored.

Plus, burning your way through this book would ruin the experience of it. Another reviewer called Montaigne’s Essays the “introvert’s Bible”. This is a very perceptive comment. For me, there was something quasi-religious in the ritual of reading a few pages of this book right before bed—night after night after night. Whatever Montaigne lacks in intelligence, patience, diligence, and humility, he makes up for with his exquisite sanity. I can find no other word to describe it. Dipping into his writing is like dipping a bucket into a deep well of pure, blissful sanity. It almost seems like a contradiction to call someone “profoundly down-to-earth,” but that’s just it. Montaigne makes the pursuit of living a reasonable life into high art.

Indeed, I find something in Montaigne’s quest for self-knowledge strangely akin to religious thinking. In Plato’s system, self-knowledge leads to knowledge of the abstract realm of ideals; and in the Upanishads, self-knowledge leads to the conception of the totality of the cosmos. For Montaigne, self-knowledge is the key to knowledge of the human condition. In his patient cataloging of his feelings and opinions, Montaigne shows that there is hardly anything like an unchanging ‘self’ at the center of our being, but we are rather an ever-changing flux of emotions, thoughts, memories, anxieties, hopes, and sensations. Montaigne is a Skeptic one moment, an Epicurean another, a Stoic still another, and finally a Christian.

And isn’t this how it always is? You may take pride in a definition of yourself—a communist, a musician, a vegan—but no simple label ever comes close to pinning down the chaotic stream that is human life. We hold certain principles near and dear one moment, and five minutes later these principles are forgotten with the smell of lunch. The most dangerous people, it seems, are those that do try to totalize themselves under one heading or one creed. How do you reason with a person like that?

I’ve read too much Montaigne—now I’m rambling. To return to this book, I’m both sorry that I’ve finished it, and excited that it’s done. Now I can move on to another bedside book. But if I ever feel myself drifting towards radicalism, extremism, or if I start to think abstract arguments are more important than the real stuff of human life, I will return to my old friend Montaigne. This is a book that could last you a lifetime.

Narcissus, by Caravaggio


Quotes & Commentary #49: Orwell


All left-wing parties in the highly industrialized countries are at bottom a sham, because they make it their business to fight against something which they do not really wish to destroy. They have internationalist aims, and at the same time they struggle to keep up a standard of life with which those aims are incompatible.

—George Orwell, A Collection of Essays

Yesterday I wrote an essay trying to answer this question: What’s the right thing to do in morally compromising circumstances? This is one of the oldest and most vexing questions of human existence; and there’s no way I’m going to crack this nut in one blog post. That’s why I’m writing another one.

As George Orwell points out, this question isn’t confined to any one sphere of our lives, but confronts us every day, in manifold and invisible ways. When we go to the grocery store, when we buy a shirt, when we download a song, when we get the latest model of smartphone, we are supporting business practices that are largely hidden from us, but which may be morally repulsive.

What is life like for the factory workers who made my computer? What are the conditions for the animals whose meat I eat? Where does the material from my jeans come from, how is it processed, who are the workers who make it? For all I know, I may be patronizing exploitative, abusive, oppressive, and otherwise unethical businesses—and, the more I consider it, the more it seems likely that I do.

Unethical business practices aside, there is the simple fact of inequality. On the left we spend a lot of time criticizing the vast wealth inequality that exists within the United States; and yet we do not often stop to realize how much wealthier most of us are than people elsewhere. Is the first situation unjust, and the second not? Is it right that some countries are wealthier than others? And if not, can we logically desire our present standard of life while maintaining our political ideals?

To the extent that opponents of inequality are immersed in a global economy—and we are, all of us—they are participating in a system whose consequences they find morally wrong. But how can you rebel against a global paradigm? You can try to minimize your damage. You can try to patronize businesses who have more humane business practices. You can become a vegan and buy second-hand clothes.

And yet, it is simply impossible—logistically, just from lack of time and resources—to be absolutely sure of the consequences of all your actions in a system so vast and so complex. It would be a full-time job to be a perfectly conscientious consumer. You can’t personally investigate each factory or tour each farm. You can’t know everything about the company you work for, the bank you store your money in, the supermarkets you buy your food from.

This is the enigma of being immersed in an ethically compromising system. To a certain extent, resist or not, you become complicit in a social system you did not design and whose consequences you don’t approve of. It is one of the tragic but unavoidable facts of human life that good people can still do bad things, simply by being immersed in a bad social system. An economy of saints can still sin.

In economics this has a technical name: the fallacy of composition. This is the fallacy of extrapolating from the qualities of the parts to the qualities of the whole. A nation full of penny-pinchers may still be in debt. A nation full of expert job-seekers may still have high unemployment. Morally, this means a nation of good people may yet do evil.

The question, for me, is this: Where do we draw the line separating the culpability of the individual from the culpability of the system? To illustrate this, let me take two extreme examples.

Since teaching, as a profession, tends to attract idealistic and left-wing people, I think many teachers, old and young, think that the educational system in the United States is deeply flawed. The standardized tests, the inequality between school districts, the way that we evaluate kids and impart knowledge—many aspects of the system seem unfair and ineffective.

And yet, I think very few people would condemn the teachers who continue to work within this system, even if the system tends to reproduce inequality. We naturally blame the policy-makers and not the teachers, who are only doing their best in compromising circumstances.

Take the opposite extreme: soldiers working in a concentration camp. Now, it is clear that these soldiers were not personally responsible for creating the camp, and were following the orders of their superiors. Like the teachers, they are immersed in a situation they did not design, in a system with morally reprehensible results. (Obviously, the results of a concentration camp are incomparably worse than even the most flawed school system.)

In this situation, I’d wager that most of us would maintain that the soldiers had some responsibility and, at the very least, some of the blame. That is, we do not simply blame the system, but blame the individuals who took part in it. The whole situation is so totally, fundamentally, indisputably unacceptable that there are no extenuating circumstances, no deferment of guilt.

Now, there is obviously a very big difference between a system that is (ostensibly at least) designed to reduce inequality and provide education, and a system that is designed to kill people by the thousands and millions. As a result, in both of these situations, the moral verdict seems relatively clear: the noble aims of the first system excuse its flaws, while the horrid aims of the second system condemn its participants.

The problem, for most of us, is that we so often find ourselves in between these two extremes (although closer, I hope, to the case of the teachers than to that of the Nazi soldiers). But where exactly do we draw the line? Where does our responsibility—as participants in a system—begin? And in what circumstances are we morally excused by being immersed in a flawed system?

The more I think about it, the more I am led to the conclusion that being alive requires some ethical compromise. In this regard, I often think of something Joseph Campbell said: “You yourself are participating in the evil, or you are not alive. Whatever you do is evil for somebody. This is one of the ironies of the whole creation.”

And this quote, I think, is where I have to stop for now, since it brings me to another Quotes & Commentary.

Quotes & Commentary #44: Montaigne


Who does not see that I have taken a road along which I shall go, without stopping and without effort, as long as there is ink and paper in the world?

—Michel de Montaigne

One thing above all attracts me to Montaigne: we both have an addiction to writing.

It is a rather ugly addiction. I personally find those who love the sound of their own voice nearly intolerable—and unfortunately I fall into this category, too—but to be addicted to writing is far, far worse: not only do I love airing my opinions in conversation, but I think my views are so valuable that they should be shared with the world and preserved for future generations.

Why do I write so much? Why do I so enjoy running my fingers over a keyboard and seeing letters materialize on the screen? What mad impulse keeps me going at it, day after day, without goal and without end? And why do I think it’s a day wasted if I don’t have time to do my scribbling?

In his essay “Why I Write,” George Orwell famously answered these questions for himself. His first reason was “sheer egoism,” and this certainly applies to me, although I would define it a little differently. Orwell characterizes the egoism of writers as the desire “to seem clever, to be talked about, to be remembered after death,” and in general to live one’s own life rather than to live in the service of others.

I would call this motivation “vanity” rather than “egoism.” Vanity is undeniably one of my motivations to write—especially the desire to seem clever, one of my uglier qualities. But this vanity is rather superficial; there is a deeper egoism at work.

Ever since I can remember, I have had the awareness, at times keen and painful, that the world of my senses, the world that I share with everyone else, is separate and distinct from the world in my head—my feelings, imagination, thoughts, my dreams and fantasies. The two worlds were intimately related, and communicated constantly, but there was still an insuperable barrier cutting off one from the other.

The problem with this was that my internal world was often far more interesting and beautiful to me than the world outside. Everyone around me seemed totally absorbed in things that were, to me, boring and insipid; and I was expected to show interest in these things too, which was frustrating. If only I could express the world in my head, I thought, and bring my internal world into the external world, then people would realize that the things they busy themselves with are silly and would occupy their time with the same things that fascinated me.

But how to externalize my internal world? This is a constant problem. Some of my sweetest childhood memories are of playing all by myself, with sticks, rocks, or action figures, in my room or my backyard, in a landscape of my own imagination. While alone, I could endow my senses with the power of my inner thoughts, and externalize my inner world for myself.

Yet to communicate my inner world to others, I needed to express it somehow. This led to my first artistic habit: drawing. I used to draw with the same avidity as I write now, filling up pages and pages with my sketches. I advanced from drawings of whales and dinosaurs, to medieval arms and armor, to modern weaponry. Eventually this gave way to another passion: video games.

Now, obviously, video games are not a means of self-expression; but I found them addicting nonetheless, and played them with a seriousness and dedication that alarms me in retrospect—so many hours gone!—because they were an escape. When you play a video game you enter another world, in many ways a more exciting and interesting world, a world of someone’s imagination. And you are allowed to contribute to this dream world—in part, at least—and adopt a new identity, in a world that abides by different rules.

Clearly, escapism and self-expression, even if they spring from the same motive, are incompatible; in the first you abandon your identity, and in the second you share it. For this reason, I couldn’t be satisfied for long with gaming. In high school I began to learn guitar, to sing, and eventually to write my own songs. This satisfied me for a while; and to a certain extent it still does.

But music, for me, is primarily a conduit of emotion; and as I am not a primarily emotional person, I’ve always felt, even after writing my own songs, that the part of myself I wanted to express, the internal world I still wanted to externalize, was still getting mostly left behind. It was this that led me to my present addiction: writing.

I should pause here and note that I’m aware how egotistical and cliché this narrative seems. My internal world is almost entirely a reflection of the world around me—far, far less interesting than the world itself—and my brain, I’m sorry to say, is mostly full of inanities. I am in every way a product—a specifically male, middle-class, suburban product—of my time and place; and even my narrative about trying to express myself is itself a product of my environment. My feeling of being original is unoriginal. My life story is a stereotype.

I know all of this very well, and yet I cannot shake this persistent feeling that I have something I need to share with the world. More than share, I want to shape the world, to mold it, to make it more in accordance with myself. And my writing is how I do that. This is egoism in its purest form: the desire to remake the world in my image.

A blank page is a parallel world, and I am its God. I control what happens and when, how it looks, what are its values, how it ends, and everything else. This feeling of absolute control and complete self-expression is what is so intoxicating about writing, at least for me. Once you get a taste of it, you can’t stop. Montaigne couldn’t, at least: he kept on editing, polishing, revising, and expanding his essays until his death. And I suspect I’ll do the same, in my pitiful way, pottering about with nouns and verbs, eventually running out of new things to write about and so endlessly rehashing old ones, until I finally succumb to the tooth of time.

After mentioning the egoism of writers, Orwell goes on to mention three other motivations: aesthetic enthusiasm, historical impulse, and political purpose. But I think he leaves two things out: self-discovery and thinking.

Our thoughts are fugitive and vague, like shadows flickering on the wall, forever in motion, impossible to get hold of. And even when we do seem to come upon a complete, whole, well-formed thought, as often as not it pops like a soap bubble as soon as we stretch out our fingers to touch it. Whenever I try to think something over silently, without recording my thoughts, I almost inevitably find myself grasping at clouds. Instead of reaching a conclusion, I get swept off track, blown into strange waters, unable to remember even where I started.

Writing is how I take the fleeting vapors of my thoughts and solidify them into definite form. Unless I write down what I’m thinking, I can’t even be sure what I think. This is why I write these quotes and commentary; so far it has been a journey of self-investigation, probing myself to find out my opinions.

When I commit to write, it keeps me on a certain track. Unless you are like Montaigne and write wherever your thoughts take you, writing inevitably means sticking to a handful of subjects and proceeding in an orderly way from one to the other. Since I am recording my progress, and since I am committed to reaching the conclusion, this counteracts my tendency to get distracted or to go off topic, as I do when I think silently.

This essay is a case in point. Although these are things I have often talked and thought about, I had never fully articulated to myself the reasons why I write, or strung all my obsessions into a narrative with a unified motivation, as I did above, until I decided to write about them. No wonder I’m addicted.

Quotes & Commentary #42: Montaigne


Everything has a hundred parts and a hundred faces: I take one of them and sometimes just touch it with the tip of my tongue or my fingertips, and sometimes I pinch it to the bone. I jab into it, not as wide but as deep as I can; and I often prefer to catch it from some unusual angle.

—Michel de Montaigne

The pursuit of knowledge has this paradoxical quality: it demands perfection and yet continuously, inevitably, and endlessly fails in its goal.

Knowledge demands perfection because it is meant to be true, and truth is either perfect or nonexistent—or so we like to assume.

Normally, we think about truth like this: I make a statement, like “the cat is on the mat,” and this statement corresponds to something in reality—a real cat on a real mat. This correspondence must be perfect to be valid; whether the cat is standing just beside the mat or is up a tree, the statement is equally false.

To formulate true statements—about the cosmos, about life, about humanity—this is the goal of scholarship. But can scholarship end? Can we get to a point at which we know everything and we can stop performing experiments and doing research? Can we reach the whole truth?

This would require scholars to create works that were both definitive—unquestioned in their authority—and exhaustive—covering the entire subject. What would this entail? Imagine a scholar writing about the Italian Renaissance, for example, who wants to write the perfect work, the book that totally and completely encapsulates its subject, rendering all additional work unnecessary.

This seems as if it should be theoretically possible, at least. The Italian Renaissance was a sequence of events—individuals born, paintings painted, sculptures sculpted, trips to the toilet, accidental deaths, broken hearts, drunken brawls, late-night conversations, outbreaks of the plague, political turmoil, marriage squabbles, and everything else, great and small, that occurred within a specific period of time and space. If our theoretical historian could write down each of these events, tracing their causation, allotting each its proportional space, neutrally treating each fact, then perhaps the subject could be definitively exhausted.

There are many obvious problems with this, of course. For one, we don’t have all the facts available, but only a highly selective, imperfect, and tentative record, a mere sliver of a fraction of the necessary evidence. Another is that, even if we did have all the facts, a work of this kind would be enormously long—in fact, as long as the Italian Renaissance itself. This alone makes the undertaking impossible. But this is also not what scholars are after.

A book that represented each fact neutrally, in chronological sequence, would not be an explanation, but a chronicle; it would recapitulate reality rather than probe beneath the surface; or rather, it would render all probing superfluous by representing the subject perfectly. It would be a mirror of reality rather than a search for its fundamental form.

And yet our brains are not, and can never be, impartial mirrors of reality. We sift, sort, prod, search for regularities, test our assumptions, and in a thousand ways separate the important from the unimportant. Our attention is selective of necessity, not only because we have a limited mental capacity, but because some facts are much more necessary than others for our survival.

We have evolved, not as impartial observers of the world, but as actors in a contest of life. It makes no difference, evolutionarily speaking, if our senses represent “accurately” what is out there in the world; it is only important that they alert us to threats and allow us to locate food. There is reason to believe, therefore, that our senses cannot be literally trusted, since they are adapted to survival, not truth.

Survival is, of course, not the operative motivation in scholarship. More generally, some facts are more interesting than others. Some things are interesting simply in themselves—charms that strike the sight, or merits that win the soul—while others are interesting in that they seem to hold within themselves the reason for many other events.

A history of the Italian Renaissance that gave equal space to a blacksmith as to Pope Julius II, or equal space to a parish church as to the Sistine Chapel, would be unsatisfactory, not because it was inaccurate, but because its priorities would be in disarray. All intellectual work requires judgment. A historian’s accuracy might be unimpeachable, and yet his judgment so faulty as to render his work worthless.

We have just introduced two vague concepts into our search for knowledge: interest and judgment—interest being the “inherent” value of a fact, and judgment our faculty for discerning interest. Both of these are clearly subjective concepts. So instead of impartially representing reality, our thinkers experience reality through a distorted lens—the lens of our senses, further shaped by culture and upbringing—and from this blurry image of the world, select what portion of that distorted reality they deem important.

Their opinion of what is beautiful, what is meritorious, what is crucial and what is peripheral, will be based on criteria—either explicit or implicit—that are not reducible to the content itself. In other words, our thinkers will be importing value judgments into their investigation, judgments that will act as sieves, catching some material and letting the rest slip by.

Even more perilous, perhaps, than the selection of facts, will be the forging of generalizations. Since, with our little brains, we simply cannot represent reality in all its complexity, we resort to general statements. These are statements about the way things normally happen, or the characteristics that things of the same type normally have—statements that attempt to summarize a vast number of particulars within one abstract tendency.

All generalizations employ inductive reasoning, and thus are vulnerable to Hume’s critique of induction. A thousand instances of red apples is no proof that the next apple will also be red. And even if we accept that generalizations are always more or less true—true as a rule, with some inevitable exceptions—this leaves undefined how well the generalization fits the particulars. Is it true nine times out of ten, or only seven? How many apples out of a hundred are red? Finally, to make a generalization requires selecting one quality—say, the color of apples, rather than their size or shape—among many that the particulars possess, and is consequently always arbitrary.

More hazardous still is the act of interpretation. By interpretation, I mean deciding what something means. Now, in some intellectual endeavors, such as the hard sciences, interpretation is not strictly necessary; only falsifiable knowledge counts. Thus, in quantum mechanics, it is unimportant whether we interpret the equations according to the Copenhagen interpretation or the Many-Worlds interpretation—whether the wave-function collapses, or reality splits apart—since in any case the equations predict the right result. In other words, we aren’t required to scratch our heads and ask what the equations “mean” if they spit out the right number; this is one of the strengths of science.

But in other fields, like history, interpretation is unavoidable. The historian is dealing with human language, not to mention the vagaries of the human heart. This alone makes any sort of “objective” knowledge impossible in this realm. Interpretation deals with meaning; meaning only exists in experience; experience is always personal; and the personal is, by definition, subjective. Two scholars may differ as to the meaning of, say, a passage in a diplomat’s diary, and neither could prove the other was incorrect, although one might be able to show her interpretation was far more likely than her counterpart’s.

Let me stop and review the many pitfalls on our road to perfect knowledge of the Italian Renaissance. First, we begin with an imperfect record of information; then we must make selections from this imperfect record. This selection will be based on vague judgments of importance and interest—what things are worth knowing, which facts explain other facts. We will also try to make generalizations about these facts—generalizations that are always tentative, arbitrary, and hazardous, and which are accurate to an undetermined extent. After doing all this, we must interpret: What does this mean? Why did this happen? What is the crucial factor, what is mere surface detail? And remember that, before we even start, we are depending on a severely limited perception of the world, and a perspective warped by innumerable prejudices. Is it any wonder that scholarship goes on infinitely?

At this point, I am feeling a bit like Montaigne, chasing my thoughts left and right, trying to weave disparate threads into a coherent whole, and wondering how I began this already overlong essay. Well, that’s not so bad, I suppose, since Montaigne is the reason I am writing here in the first place.

Montaigne was a skeptic; he did not believe in the possibility of objective knowledge. For him, the human mind was too shifting, the human understanding too weak, the human lifespan too short, to have any hope of reaching a final truth. Our reasoning is always embodied, he observed, and is thus subjected to our appetites, excitements, passions, and fits of lassitude—to all of the fancies, hobbyhorses, prejudices, and vanities of the human personality.

You might think, from the foregoing analysis, that I take a similar view. But I am not quite so cheerfully resigned to the impossibility of knowledge. It is impossible to find out the absolute truth (and even if we could, we couldn’t be sure when or if we did). Through science, however, we have developed a self-correcting methodology that allows us to approach ever-nearer to the truth, as evidenced by our increasing ability to manipulate the world around us through technology. To be sure, I am no worshiper of science, and I think science is fallible and limited to a certain domain. But total skepticism regarding science would, I think, be foolish and wrong-headed: science does what it’s supposed to do.

What about domains where the scientific method cannot be applied, like history? Well, here more skepticism is certainly warranted. Since so much interpretation is needed, and since the record is so imperfect, conclusions are always tenuous. Nevertheless, this is no excuse to be totally skeptical, or to regard all conclusions as equally valid. The historian must still make logically consistent arguments, and back up claims with evidence; their theories must still plausibly explain the available evidence, and their generalizations must fit the facts available. In other words, even if a historian’s thesis cannot be falsified, it must still conform to certain intellectual standards.

Unlike in science, however, interpretation does matter, and it matters a great deal. And since interpretation is always subjective, this makes it possible for two historians to propose substantially different explanations for the same evidence, and for both of their theories to be equally plausible. Indeed, in a heuristic field, like history, there will be as many valid perspectives as there are practitioners.

This brings us back to Montaigne again. Montaigne used his skepticism—his belief in the subjectivity of knowledge, in the embodied nature of knowing—to justify his sort of dilettantism. Since nobody really knows what they’re talking about, why can’t Montaigne take a shot? This kind of perspective, so charming in Montaigne, can be dangerous, I think, if it leads one to abandon intellectual standards like evidence and argument, or if it leads to an undiscerning distrust in all conclusions.

Universal skepticism can potentially turn into a blank check for fundamentalism, since in the absence of definite knowledge you can believe whatever you want. Granted, this would never have happened to Montaigne, since he was wise enough to be skeptical of himself above all; but I think it can easily befall the less wise among us.

Nevertheless, if proper respect is paid to intellectual standards, and if skepticism is always turned against oneself as well as one’s peers, then I think dilettantism, in Montaigne’s formulation, is not only acceptable but admirable:

I might even have ventured to make a fundamental study if I did not know myself better. Scattering broadcast a word here, a word there, examples ripped from their contexts, unusual ones, with no plan and no promises, I am under no obligation to make a good job of it nor even to stick to the subject myself without varying it should it so please me; I can surrender to doubt and uncertainty and to my master-form, which is ignorance.

Nowadays it is impossible to be an expert in everything. To be well-educated requires that we be dilettantes, amateurs, whether we want to or not. This is not to be wholly regretted, for I think the earnest dilettante has a lot to contribute in the pursuit of knowledge.

Serious amateurs (to use an oxymoron) serve as intermediaries between the professionals of knowledge and the less interested lay public. They also serve as a kind of check on professional dogmatism. Because they have one tiptoe in the subject, and the rest of their body out of it, they are less likely to get swept away by a faddish idea or to conform to academic fashion. In other words, they are less vulnerable to groupthink, since they do not form a group.

I think serious amateurs might also make a positive contribution, at least in some subjects that require interpretation. Although the amateur likely has less access to information and lacks the resources to carry out original investigation, each amateur has a perspective, a perspective which may be highly original; or she may notice something previously unnoticed, which puts old material in a new light.

Although respect must be paid to expertise, and academic standards cannot be lightly ignored, it is also true that professionals do not have a monopoly on the truth—and for all the reasons we saw above, absolute truth is unattainable, anyway—so there will always be room for fresh perspectives and highly original thoughts.

Montaigne is the perfect example: a sloppy thinker, a disorganized writer, a total amateur, who was nonetheless the most important philosopher and man of letters of his time.

Review: How to Live, a Life of Montaigne


How to Live: Or A Life of Montaigne in One Question and Twenty Attempts at an Answer by Sarah Bakewell
My rating: 4 of 5 stars

It had the perfect commercial combination: startling originality and easy classification.

With the state of the world—especially of the United States—growing more unsettling and absurd by the day, I felt a need to return to Montaigne, the sanest man in history. Luckily, I had Bakewell’s book tucked away in the event of any crisis of this kind; and I’m happy to report it did take the edge off.

How to Live is a beguiling mixture. While purportedly a biography of Montaigne, it is also, as many reviewers have noted, a biography of Montaigne’s Essays, tracking how they have been reread and reinterpreted in the centuries since their publication. This double-biography is structured as a series of answers to the question: How to live? In the hands of a less able writer, this organizational principle could easily have become a cheap, tacky gimmick; but Bakewell’s skill and taste allow the book to transcend biography into philosophy—or, at the very least, into self-help.

Bakewell herself is hardly a Montaignesque writer. Her prose is disciplined and controlled; and though she must weave philosophy, history, literary criticism, and biography into a coherent narrative, she keeps her material on a tight rein. While Montaigne serves as the “massive gravitational core” of his own essays, holding all the disparate topics together by the force of his personality, Bakewell herself is mostly absent from these pages. Instead, she gives us a loving portrait of Montaigne—the man, his times, and his book. And this was especially interesting for me, since Montaigne, despite writing reams about himself, never manages to give his readers a coherent picture of his life or his society. Bakewell’s book is thus most recommended as a complement to Montaigne’s Essays, providing a background for Montaigne’s rambles.

Montaigne himself was interesting enough. Best-selling author; modern-day sage; dissatisfied lawyer; literary executor for his deceased friend, Étienne de la Boétie; translator of the obscure theologian, Raymond Sebond; and the reluctant mayor of Bordeaux: Montaigne wore many hats, and most of them well. He even played an important role in the negotiations and maneuverings that took place after the death of Henri III over the question of succession. Today, however, Montaigne is remembered more for his painful descriptions of his kidney stones than his political accomplishments.

The career of Montaigne’s reception was, for me, even more interesting than the story of his life. At first, he was interpreted as a latter-day Stoic sage, a Seneca for the sixteenth century. In the next generation, neither Pascal nor Descartes liked him, the former because Montaigne was too cheerful, the latter because he was too comfortable with uncertainty. The philosophes were fond of Montaigne’s secularism, though they had a very different conception of good prose. Rousseau and the romantics liked Montaigne for his praise of naturalness, his fondness for exotic customs, and his exploration of his own personality. Later, more puritanical generations chided Montaigne for his open attitude towards sex and his detached attitude toward society. Nowadays Montaigne is seen as a prophet of the postmodern, with his emphasis on shifting perspectives and the subjectivism of truth.

As far as Montaigne’s pieces of advice go, I’m happy to report that I was already putting most of them into practice. I don’t worry too much about death (no. 1), I like to travel (no. 14), and, to the best of my knowledge, I have been born (no. 3). I am particularly adept at number 4, “Read a lot, forget most of what you read, and be slow-witted,” though I’m still working on number 13, “Do something no one has done before.” Well, as much as I’d like to be original, I’m happy following in Montaigne’s footsteps; indeed, I agree with Bakewell in thinking that Montaigne’s example is more useful now than ever. I will let her have the final word:

The twenty-first century has everything to gain from a Montaignean sense of life, and, in its most troubled moments so far, it has been sorely in need of a Montaignean politics. It could use his sense of moderation, his love of sociability and courtesy, his suspension of judgment, and his subtle understanding of the psychological mechanisms involved in confrontation and conflict. It needs his conviction that no vision of heaven, no imagined Apocalypse, and no perfectionist fantasy can ever outweigh the tiniest of selves in the real world.


Quotes & Commentary #35: Bacon


Revenge is a kind of wild justice; which the more man’s nature runs to, the more ought law to weed it out.

—Francis Bacon, “Of Revenge”

The thirst for revenge is one of our ugliest, most satisfying, and most basic tendencies. It isn’t hard to see why.

The urge to revenge ourselves is a straightforward consequence of the urge to preserve ourselves. If somebody has hurt us in some way—by stealing a mate, by physical violence, or merely by a rude remark—then they have clearly shown themselves to be a threat, a dangerous person who can’t be trusted. The logical thing to do then becomes to neutralize this threat, to diminish or destroy his capacity to further hinder us.

This counter-attack will serve two purposes: first, it will harm the enemy, reducing his capacity to harm you in the future; second, by publicly revenging yourself on an enemy, it will signal to others that you are not one to be trifled with, and that you will retaliate if anybody tries something funny. The practical benefits of revenge are thus preventative.

It is paradoxical, therefore, that revenge is not often thought of as oriented towards future security, but instead toward bygone injuries. The purpose of revenge, we feel in our bones, is to right the wrongs of the universe, to put the cosmic scale of justice back to zero, balancing a good action for a bad one.

When revenge is conceived this way, as retaliatory and not as preventative, then it can lead to absurdly unproductive actions, notable only for the resources they waste. In this connection, I can’t help thinking of Iñigo Montoya from The Princess Bride, whose obsessive quest to kill the murderer of his father consumed decades of his life.

Ask anyone to tell you about their ex, and there’s a good chance you will be met with the same vengeful fixation. The revenge-intoxicated man is something of a narrative cliché, repeated ten thousand times in novels and television and movies. I would guess that revenge is second only to romantic love as the emotional engine of drama.

The folly of orienting your life around getting back at an enemy is clear to anyone with a healthy sense of perspective. The best form of revenge, after all, is being happy, and all-consuming quests for personal justice are not conducive to happiness.

Even as a preventative measure—to incapacitate an enemy and prevent others from springing up—revenge often backfires. This is for two reasons.

First, if you attempt to render an enemy incapable of harming you in the future, there is always a risk you will fall short of full incapacitation. This is dangerous because, if you don’t succeed in fully disabling your enemy—whether psychologically, politically, logistically, socially, or physically—then there is a good chance that you will only embitter him, and he will counter-attack once he recovers his strength.

The second risk, related to the first, is the question of third-parties. If you succeed in fully disabling your enemy, there is still the possibility that he may have powerful friends. The friend of every enemy is another potential enemy, and can be mobilized against you. After successful revenge, you may yourself be the victim of a vengeful act by one of the enemy’s allies. If this revenge against you is successful, then one of your friends might retaliate against this new foe. 

This logic of attack and counterattack is how feuds start. Every act of vengeance can breed another, until half the world is embroiled in a bitter, pointless war against the other half. The most emblematic of these vindictive conflicts was the feud between the Hatfields and the McCoys, but you see this sort of thing in every section and at every scale of human life.

Revenge, as you can see, is a strategy of limited utility. It would, however, be untrue to say that revenge is always futile. In a situation similar to Hobbes’s “State of Nature,” vengeful acts are hardly avoidable. If there is no structure in place to resolve disputes, no laws and thus no method to punish law-breakers, then each party must enforce their own version of right and wrong.

Remember that, for each individual, taken separately, right and wrong are products of self-interest. In other words, in the absence of law, “right” is simply what helps you, “wrong” what hurts you; and without any legal system, you must enforce your own version of right and wrong, since no one else will.

In order to survive in an anarchic world, you must retaliate against those who interfere with your self-interest. If not, it will send the message to those around you that you are a pushover, and that they can take advantage of you without any risk; and you can only expect more enemies to interfere with you in the future. (I teach adolescents, so I know something about an anarchic world.) Some retaliation is therefore necessary. But care must be taken not to take vengeance too readily or too forcefully, or you may be the victim of revenge yourself.

Humans were born into anarchy and we still have the instincts that helped us get through it. This is why revenge comes so naturally to us, and why it tastes so sweet. But this emotional armory does not help us when we live in a society governed by law.

Law is a substitute for revenge, with all of its advantages and none of its defects. With recourse to legal prosecution—organized retaliation, approved by the community—we can neutralize threats and protect ourselves from future harm, with only a minimal chance that our enemies’ friends will try to get back at us. Law replaces private desire with public safety; and because the will of the community sanctions the law’s consequences, the law is joined with overwhelming force, to protect its adherents and attack its antagonists.

Living, as we all do, in states governed by law, the emotional urge to take revenge becomes a hindrance rather than an asset. If you are wronged, you can seek legal retribution. But if that is not available, then it is usually unwise to take matters into your own hands, since this makes it possible that legal retribution can be used against you.

True, there are many things that fall outside the confines of the law, the most notable of these being romance. And as expected,  vindictiveness is alive and well in matters of the heart. You still find people revenging themselves on their exes and their rivals, waxing indignant at perceived wrongs and organizing their friends in concerted actions of revenge. Having no social structure to resolve disputes, people fall into anarchy.

Yet I would argue that, even in these cases, revenge is a poor strategy. The revenge mentality is only justified, I think, in anarchic situations, specifically when the consequences for not retaliating are potentially severe. But in the case of romance, there is no chance that you will be seriously damaged. Heartbreak hurts, but it is seldom fatal.

In cases like these—where you can be sure of surviving any enemy attack—then I think another strategy is called for: returning love for hate. This sounds Biblical, but its justification is logical.

Keep in mind that I am talking of situations like romance, in which harm cannot incapacitate either you or your enemies. (By “incapacitate” I mean render them unable to do future harm.) Since harming your enemies cannot disable them, it can only embitter them and potentially create new enemies; and since you cannot be disabled by being harmed, you have nothing to fear by not retaliating.

Returning harm for harm is thus clearly a poor long-term strategy, even if it might be satisfying in the short-term. You are left with two options: do nothing, or return help for harm. The first option seems superficially like the more logical one. By doing nothing, you don’t risk creating new enemies, and you don’t use resources to benefit your enemy that could be used elsewhere.

The second strategy, returning help for harm, is quite obviously more expensive, not to mention less satisfying. (Who likes to see their enemies happy?) Yet I think it is wiser as a long-term strategy, since it is by returning help for harm that enemies are converted into friends. A friend, after all, is somebody who acts in our interest; and it would be a stubborn enemy indeed who could persist in hating somebody who showed them only love and kindness.

Revenge, born of anarchy, is both a social and a personal ill. It is rendered obsolete as soon as people begin living in a society governed by law. It is a waste of resources and a poor survival strategy, and has no place in a just legal system or in the conduct of a wise individual.

Quotes & Commentary #1: Montaigne


“I only quote others to better quote myself.”

—Michel de Montaigne

For many years now, it’s been my habit to write down my favorite quotes from the books I read. These can be passages that are particularly pungent, sentences with a memorable turn of phrase, or an insight that, for whatever reason, resonated with me. This collection of quotes has gradually grown into a sprawling mass, hundreds of pages long.

At present my hoard resembles an attic of precious jewelry, old antique furniture, forgotten knick-knacks, and fading photographs, collecting dust from age and neglect. It is time I put this attic into good order. To do this, I will select a quote I find especially appealing and write a short commentary on it, briefly explaining why I like it, what I think it means, and why I think it’s valuable.

This quote, by Montaigne, is the perfect place to start. On a verbal level, it is arresting because of his phrase “quote myself,” which is intentionally paradoxical. In my experience, this paradox is so true: We learn to express ourselves by imitating others. This is how we learn to speak, sing, paint, and write.

Montaigne did a great deal of quoting in his Essays, most often from ancient authors like Plutarch, Seneca, and Lucretius. Although the effect is sometimes pedantic, by the end you see how Montaigne’s style, tone, and perspective gradually emerge from these influences. He teaches himself how to write by selecting and digesting the writings of others. And this is what I hope to do, too.