Review: Heidegger’s Basic Writings


Basic Writings by Martin Heidegger

My rating: 4 of 5 stars

Every valuing, even where it values positively, is a subjectivizing. It does not let things: be.

A Gentle Warning

In matters philosophical, it is wise to be skeptical of interpretations. An interpretation can be reasonable or unreasonable, interesting or uninteresting, compelling or uncompelling; but an interpretation, by its very nature, can never be false or true. Thus, we must be very careful when relying on secondary literature; for what is secondary literature but a collection of interpretations? Personally, I don’t like anybody to come between me and a philosopher. When a philosopher’s views are being explained to me, I feel as if I’m on the wrong end of a long game of telephone. Even if an interpreter is excellent—quoting extensively and making qualified assertions—his interpretation is, like all interpretations, an argument from authority; to interpret a text is to assert that one is an authority on the text, and thus should be believed.

Over generations, these interpretations can harden into dogmas; we are taught the “received interpretation” of a philosopher, and not the philosopher himself. This is dangerous; for what makes a classic book classic is that it can be read repeatedly—not just in one lifetime, but down the centuries—while continuing to yield new and interesting interpretations. In other words, a philosophical classic is a book that can be validly and compellingly interpreted in a huge number of ways. So if you subscribe to another person’s interpretation, you are depriving the world of something invaluable: your own take on the matter.

In matters philosophical, I say that it is better to be stupid with one’s own stupidity than smart with another’s smarts. To put the matter another way, to read a great book of philosophy is not, I think, like reading a science textbook; the goal is not simply to assimilate a certain body of knowledge, but to have a genuine encounter with the thinker. In this way, reading a great work of philosophy is much more like travelling someplace new: what matters is the experience of having been there, and not the snapshots you bring back from the trip. Even if you go someplace where you can’t speak the language, where you are continually baffled by strange customs and incomprehensible speech, it is more valuable than just sitting at home and reading guide books. So go and be baffled, I say!

This is all just a way of warning you not to take what I will say too seriously, for what I will offer is my own interpretation, my own guide-book, so to speak. I will make some assertions, but I’d like you to be very skeptical. After all, I’m just some dude on the internet.


An Attempt at a Way In

The best advice I’ve ever gotten in regard to Heidegger came at my previous job. My boss was a professor from Europe, a very well educated man, who naturally liked to talk about books with me. At around this time, I was reading Being and Time, and floundering. When I complained of the book’s difficulty, this is what he said:

“In the Anglophone tradition, they think of language as a tool for communication. But in the European tradition, they think of language as a tool to explore the world.” He said this last statement as he reached out his arm in front of him, as if grabbing at something far away, to make it clear what he meant.

Open one of Heidegger’s books, and you will be confronted with something strange. First is the language. He invents new words; and, more frustratingly, he uses old words in unfamiliar ways, often relying on obscure etymological connections and German puns. Even more frustrating is the way Heidegger does philosophy: he doesn’t make logical arguments, and he doesn’t give straightforward definitions for his terms. Why does he write like this? And how can a philosopher do philosophy without attempting to persuade the reader with arguments? You’re right to be skeptical; but, in this review, I will try to provide you with a way into Heidegger’s philosophy, so at least his compositional and intellectual decisions make sense, even if you disagree with them. Since Heidegger’s frustrating and exasperating language is extremely conspicuous, let us start there.

Imagine a continuum of attitudes towards language. On the far end, towards the left, is the scientific attitude. There, we find linguists talking of phonemes, morphemes, syntax; we find analytic philosophers talking about theories of meaning and reference. We see sentences being diagrammed; we hear researchers making logical arguments. Now, follow me to the middle of this continuum. Here is where most speech takes place. Here, language is totally transparent. We don’t think about it, we simply use it in our day to day lives. We argue, we order pizzas, we make excuses to our bosses, we tell jokes; and sometimes we write book reviews. Then, we get to the other end of the spectrum. This is the place where lyric poetry resides. Language is not here being used to catalogue knowledge, nor is it transparent; here, in fact, language is somehow mysterious, foreign, strange: we hear familiar words used in unfamiliar ways; rules of syntax and semantics are broken here; nothing is as it seems.

Now, what if I ask you, what attitude gets to the real essence, the real fundamentals of language? If you’re like me, you’d say the first attitude: the scientific attitude. It seems commonsensical to think that you understand language more deeply the more you rigorously study it; and one studies language by setting up abstract categories, such as ‘syntax’ and ‘phoneme’. But this is where Heidegger is in fundamental disagreement; for Heidegger believes that poetry reveals the essence of language. In his words: “Language itself is poetry in the essential sense.”

But isn’t this odd? Isn’t poetry a second or third level phenomenon? Doesn’t poetry presuppose the usual use of language, which itself presupposes the factual underpinning of language investigated by science? In trying to understand why Heidegger might think this, we are led to his conception of truth.

If you are like me, you have a commonsense understanding of what makes a statement true or false. A statement is “true” if it corresponds to something in reality; if I say “the glass is on the table,” it is only true if the glass really is on the table. Heidegger thinks this is entirely wrong; and in place of this conception of truth, Heidegger proposes the Greek word “aletheia,” which he defines as “unconcealment,” or “letting things reveal themselves as themselves.”

It’s hard to describe what this means abstractly, so let me give you an example. Let’s say you are a peasant, and a rich nobleman just invited you to his house. You get lost, and wander into a room. It is filled with strange objects that you’ve never seen before. You pick something up from a table. You hold it in your hands, entranced by the strange shape, the odd colors, the weird noises it emits. You are totally lost in contemplation of the object, when suddenly the nobleman waltzes into the room and says “Oh, I see you’ve found my watch.” According to Heidegger, what the nobleman just did was to cover up the watch in a kind of veneer of obviousness. It is simply a watch, he says, just one among many of its kind, and therefore obvious. The peasant, meanwhile, was experiencing the object as an object, and letting it reveal itself to him.

This kind of patina of familiarity is, for Heidegger, what prevents us from engaging in serious thinking. This is why Heidegger spends so much time talking about the dangers of conformity, and also why he is ambivalent about the scientific project: for what is science but the attempt to make what is not obvious, obvious? To bring the unfamiliar into the realm of familiarity? Heidegger thinks that this feeling of unfamiliarity is, on the contrary, the really valuable thing; and this is why Heidegger talks about moods—such as anxiety, which, he says, discloses the “Nothing.” Now, it is a favorite criticism of some philosophers to dismiss Heidegger as foolish by treating “Nothing” as something; but this misses his point. When Heidegger is talking of anxiety as the mood that discloses the “Nothing” to us, he means that our mood of anxiety is the subrational realization of the bizarreness of existence. That is, our anxiety is the way that the question faces us: “Why is there something rather than nothing?”

This leads us quite naturally to Heidegger’s most emblematic question, the question of Being: what does it mean to be? Heidegger contends that this question has been lost to history. But has it? Philosophers have been discussing metaphysics for millennia. We have idealism, materialism, monism, monadism—aren’t these answers to the question of Being? No, Heidegger says, and for the following reason. When one asserts, for example, that everything is matter, one is asserting that everything is, at base, one type of thing. But the question of Being cannot be answered by pointing to a specific type of being; so we can’t answer the question, “what does it mean to be?” by saying “everything is mind,” or “everything is matter,” since that misses the point. What does it mean to be at all?

So now we have to circle back to Heidegger’s conception of truth. If you are operating with the commonsense idea of truth as correspondence, you will quite naturally say: “The question of ‘Being’ is meaningless; ‘Being’ is the most empty of categories; you can’t give any further analysis to what it ‘means’ to exist.” In terms of correspondence, this is quite true; for how can any statement correspond with the answer to that question? A statement can only correspond to a state of affairs; it cannot correspond to the “stateness” of affairs: that’s meaningless. However, if you are thinking of truth along Heidegger’s lines, the question becomes more sensible; for what Heidegger is really asking is “How can we have an original encounter with Being? How can I experience what it means to exist? How can I let the truth of existence open itself up to me?”

To do this, Heidegger attempts to peel back the layers of familiarity that, he feels, prevent this genuine encounter from happening. He tries to strip away our most basic commonsense notions: true vs. false, subject vs. object, opinion vs. fact, and virtually any other dichotomy you can name. In so doing, Heidegger tries to come up with ways of speaking that do not presuppose these categories. So in struggling through his works, you are undergoing a kind of therapy to rid yourself of your preconceptions, in order to look at the world anew. In his words: “What is strange in the thinking of Being is its simplicity. Precisely this keeps us from it. For we look for thinking—which has its world-historical prestige under the name “philosophy”—in the form of the unusual, which is accessible only to initiates.”

What on earth are we to make of all this? Is this philosophy or mystical poetry? Is it nonsense? That’s a tough question. If by “philosophy” we mean the examination of certain traditional questions, such as those of metaphysics and epistemology, then it might be fair to say that Heidegger wasn’t a philosopher—at least, not exactly. But if by “philosophy” we mean thinking for the sake of thinking, then Heidegger is a consummate philosopher; for, in a sense, this is the point of his whole project: to get us to question everything we take for granted, and to rethink the world with fresh minds.

So should we accept Heidegger’s philosophy? Should we believe him? And what does it even mean to “believe” somebody who purposely doesn’t make assertions or construct arguments? Is this acceptable in a thinker? Well, I can’t speak for you, but I don’t accept his picture of the world. To sum up my disagreement with Heidegger as pithily as possible, I disagree with him when he says: “Ontology is only possible as phenomenology.” On the contrary, I do not think that ontology necessarily has anything to do with phenomenology; in other words, I don’t think that our experiences of the world necessarily disclose the world in a fundamental way. For example, Heidegger thinks that everyday sounds are more basic than abstract acoustical signals, and he argues this position like so:

We never really first perceive a throng of sensations, e.g., tones and noises, in the appearance of things—as this thing-concept alleges; rather we hear the storm whistling in the chimney, we hear the three-motored plane, we hear the Mercedes in immediate distinction from the Volkswagen. Much closer to us than all sensations are the things themselves. We hear the door shut in the house and never hear acoustical sensations or even mere sounds. In order to hear a bare sound we have to listen away from things, divert our ear from them, i.e., listen abstractly.

To Heidegger, the very fact that we perceive sounds this way implies that this is more fundamental. But I cannot accept this. Hearing “first” the door shut is only a fact of our perception; it does not tell us anything about how our brains process auditory signals, nor what sound is, for that matter. This is why I am a firm believer in science, because it seems that the universe doesn’t give up its secrets lightly, but must be probed and prodded! When we leave nature to reveal itself to us, we aren’t left with much.

It was already clear from my introduction that I’m not a Heideggerian. As the opening quote shows, he was partly remonstrating against our dichotomy of subjective opinion vs. objective fact; whereas this notion is the very one I began my review with. You’ve been hoodwinked from the start, dear reader; for by acknowledging that this is just one opinion among many, you have, willingly or unwillingly, disagreed with Heidegger.

So was reading Heidegger a waste of time for me? If I disagree with him on almost everything, what did I gain from reading him? Well, for one thing, as a phenomenologist pure and simple, Heidegger is excellent; he gets to the bottom of our experience of the world in a way few thinkers can. What’s more, even if we reject his ontology, many of Heidegger’s points are interesting as pure cultural criticism; by digging down deep into many of our preconceptions, Heidegger manages to reveal some major biases and assumptions we make in our daily lives. But the most valuable part of Heidegger is that he makes you think: agree or disagree, whether you decide he is a loony or a genius, he will make you think, and that is invaluable.

So, to bring this review around to this volume, I warmly push it into your hands. Here is an excellent introduction to the work and thought of an original mind—much less imposing than Being and Time. I must confess that I was pummeled by Heidegger’s first book—I was beaten senseless. This book was, by contrast, often pleasant reading. It seems that Heidegger jettisoned a lot of his jargon later in life; he even occasionally comes close to being lucid and graceful. I especially admire “The Origin of the Work of Art.” I think it’s easily one of the greatest reflections on art that I’ve had the good fortune to read.

I think it’s only fair to give Heidegger the last word:

… if man is to find his way once again into the nearness of Being he must first learn to exist in the nameless. In the same way he must recognize the seductions of the public realm as well as the impotence of the private. Before he speaks man must first let himself be claimed again by Being, taking the risk that under this claim he will seldom have much to say.

View all my reviews

Review: Pascal’s Pensées


Pensées by Blaise Pascal

My rating: 4 of 5 stars

Pascal seems to have been born for greatness. At a young age he displayed an intense talent for mathematics, apparently deducing a few propositions of Euclid by himself; and he matured into one of the great mathematical minds of Europe, making fundamental contributions to the science of probability. While he was at it, he invented an adding machine: the beginning of our adventures in computing.

Later on in his short life, after narrowly escaping a carriage accident, the young man had an intense conversion experience; and he devoted the rest of his energies to religion. A committed Jansenist (Jansenism being a Catholic movement deeply influenced by Calvinism), he set out to defend his community from the hostile Jesuits. This resulted in his Provincial Letters, a series of polemical epistles now considered a model and a monument of French prose. This was not all. His most ambitious project was a massive apology for the Christian faith. But disease struck him down before he could bring his book to term; and now all we are left with are fragments—scattered bits of thought.

Strangely, it is this unfinished book—not his polished prose, not his contributions to mathematics—which has become Pascal’s most lasting work. It is a piece of extraordinary passion and riveting eloquence. Yet it is also disorganized, tortured, incomplete, uneven, abrupt—at times laconic to the point of inscrutability, at times rambling, diffuse, and obscure. How are we to judge such a book?

Pascal alternates between two fundamental moods in the text: the tortured doubter, and the zealous convert. Inevitably I found the former sections to be far more compelling. Pascal was an avid reader of Montaigne, and seems to have taken that French sage’s skepticism to heart. Yet Pascal could never emulate Montaigne’s easy acceptance of his own ignorance; the mathematician wanted certainty, and was driven to despair by Montaigne’s gnawing doubt.

Montaigne’s influence runs very deep in Pascal. Harold Bloom famously called the Pensées “a bad case of indigestion in regard to Montaigne,” noting the many passages of Pascal which directly echo Montaigne’s words. Will Durant goes even further, writing that Pascal was driven nearly to madness by Montaigne’s skepticism. There is, indeed, a shadow of mania and mental imbalance that falls over this work. Pascal gives the impression of one who is profoundly unhappy; and this despair both propels him to his heights and drags him to his depths.

At his best, Pascal strikes one as a kind of depressed charismatic genius, writing in the mood of a Hamlet. Cynicism at times overwhelms him, as he notes how our vanity leads us to choose our professions and our habits just to receive praise from other people. He can also be a pessimist—noting, like Schopenhauer, that all earthly pleasures are unsatisfactory and vain. Pascal had a morbid streak, too.

Imagine a number of men in chains, all under sentence of death, some of whom are each day butchered in the sight of the others; those remaining see their own condition in that of their fellows, and looking at each other with grief and despair await their turn. This is an image of the human condition.

We also have the misanthrope, in which mood he most nearly approached the Danish prince:

What sort of freak then is man! How novel, how monstrous, how chaotic, how paradoxical, how prodigious! Judge of all things, feeble earthworm, repository of truth, sink of doubt and error, glory and refuse of the universe!

But I think even more moving than these moods is Pascal’s metaphysical despair. He wants certainty with every inch of his soul, and yet the universe only inspires doubt and anguish: “The eternal silence of these infinite spaces fills me with dread.” As a scientist during the age of Galileo, Pascal is painfully aware of humanity’s smallness in relation to the vast void of the universe. He struggles to establish our dignity: “Man is only a reed, the weakest in nature, but he is a thinking reed.” Yet his existential desperation continually reasserts itself, no matter how often he defends himself against it:

This is what I see and what troubles me. I look around in every direction and all I see is darkness. Nature has nothing to offer me that does not give rise to doubt and anxiety. If I saw no sign there of a Divinity I should decide on a negative solution: if I saw signs of a Creator everywhere I should peacefully settle down in the faith.

He finds neither negative nor positive confirmation, however, and so must resort to a frenzied effort to believe. Perhaps this is where the famous idea of the wager arose. Pascal’s Wager is simple: if you choose to be religious you have very much to gain and comparatively little to lose, so it is an intelligent bet. Of course there are many problems with this line of thinking. For one, would not an omniscient God know that you are choosing religion for calculated self-interest? Pascal’s solution is that, if you force yourself to undergo the rituals of religion—fasting, confession, mass, and the rest—the belief will gradually become genuine.

Perhaps. Yet there are many other problems with the wager. Most noticeable, nowadays, is Pascal’s treatment of the religious problem as a binary choice—belief or unbelief—whereas we in fact have hundreds of religions to choose from. Further, Pascal’s insistence that we have everything to gain and nothing to lose is difficult to accept. For we do have something to lose: our life. Living a strictly religious life is no easy thing, after all. Also, his insistence that the finite existence of our life is nothing compared to the potential infinity of heavenly life leaves out one crucial thing: if there is no afterlife, then our finite existence is infinitely more valuable than the nothingness that awaits. So the wager does not clarify anything.
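(For the curious, the wager’s logic can be put in crude expected-value terms. The numbers below are my own illustrative inventions—Pascal gives no figures—but they show why an infinite reward swamps any finite stake, and why the bet collapses once that reward is struck out.)

```python
import math

def expected_value(p_god, payoff_if_god, payoff_if_not):
    """Crude expected payoff of a choice, given a probability that God exists."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

p = 1e-6  # any nonzero probability will do, on Pascal's reasoning

# Believe: an infinite reward if God exists, a finite cost (a strict life) if not.
ev_believe = expected_value(p, math.inf, -1.0)
# Disbelieve: no reward if God exists, a finite life enjoyed if not.
ev_disbelieve = expected_value(p, 0.0, 1.0)
print(ev_believe, ev_disbelieve)     # inf vs. ~1.0 -- belief "wins" for any p > 0

# The objection above: strike the infinite reward, and the believer simply
# forfeits the finite stake, which is then all there is.
print(expected_value(p, 0.0, -1.0))  # roughly -1.0
```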

In any case, it is unclear what use Pascal wished to make of his wager. The rest of this book does not make any mention of this kind of strategic belief. Indeed, at times Pascal seems to directly contradict this idea of an intellectually driven faith, particularly in his emphasis on the role of emotion: “It is the heart which perceives God and not the reason. That is what faith is: God perceived by the heart, not by the reason.” Or, more pithily: “The heart has its reasons of which the reason knows nothing.”

This, for me, summarizes the more enjoyable sections of the book. But there is a great deal to criticize. Many of the arguments that Pascal makes for belief are frankly bad. He notes, for example, that Christianity has been around since the beginning of the world—something that only a convinced young-earther could believe nowadays. There are many passages about the Jews, most of which are difficult to read. One of his most consistent themes is that God hardened the hearts of the Jews against Christ, in order that they be unwilling “witnesses” to future generations. But what kind of divine justice is it to sacrifice a whole people, intentionally blinding them to the truth?

Indeed, virtually every statement Pascal makes about other religions reveals both an ignorance and a hostility greatly unbecoming of the man. And his explanation of the existence of other religions, as a kind of specious temptation, is both absurd and disrespectful: “If God had permitted only one religion, it would have been too easily recognizable. But, if we look closely, it is easy to distinguish the true religion amidst all this confusion.”

I suppose this is one of the great paradoxes of any kind of religious faith: Why did God allow so many to go astray? But conceiving of other religions as snares deliberately placed by God seems extremely cruel on God’s part (as well as wholly dismissive of other faiths). In any case, it is just one example of Pascal’s pitiless piety. He himself warns of the danger of the moral sense armed with certainty: “We never do evil so fully and cheerfully as when we do it out of conscience.” And yet his own religious convictions can seem cruel, at least psychologically: dwelling obsessively on the need to hate oneself, and insisting that “I am culpable if I make anyone love me.”

Pascal also has a habit of dwelling on prophecy, repeatedly noting that the Old Testament prefigured the coming and the life of Jesus—which is clear if we interpret the text in the “right” way. Of course, this is open to the obvious objection that any text can predict anything if it is interpreted in the “right” way. Pascal’s response to this is that God is intentionally mysterious, and it would have been too obvious to have literally predicted Jesus and his works. The ability to see the prophecy differentiates those to whom God sheds light from those whom God blinds. Once again, therefore, we have this strangely cruel conception of God, as a Being which arbitrarily prevents His creatures from seeing the truth.

As I think is clear from the frantic tone, and the many different and contradictory ways that Pascal tries to justify belief, he himself was not fully convinced by any of them. His final desperate intellectual move is to abandon the principle of logical consistency altogether. As he says: “A hundred contradictions might be true.” Or elsewhere he tells us: “All their principles are true, skeptics, stoics, atheists, etc. … but their conclusions are false, because the contrary principles are also true.” Yet if he had taken this idea seriously, he would have seen that it completely erodes the possibility of justifying any belief. All we have left is to go where the “heart” guides us; but what if my heart guides me towards Chinese ancestor worship?

Another reviewer on this site noted Pascal’s power to convince religious skeptics. But, as you can see, I found the opposite to be true. Pascal’s morbid unhappiness, his frantic doubt, his shoddy reasoning, do not inspire any wish to join him. To the contrary, one regrets that such a fine mind was driven to such a self-destructive fixation. Still, this book deserves its canonical status. Though at times nearly unreadable, in its finest passages the Pensées is as sublime as anything in literature. And, though Pascal falls short of Montaigne in many respects, he is able to capture the one element of experience forbidden to the benign essayist: an all-consuming despair.

View all my reviews

Review: Being and Nothingness


Being and Nothingness by Jean-Paul Sartre

My rating: 3 of 5 stars

Slime is the agony of water.

I first heard of this book from my dad. “I had to read this in college,” he told me. “We looked at every type of being. Being-in-myself, being-for-myself, being-of-myself, being-across-myself, being-by-myself. I went crazy trying to read that thing.” Ever since that memorable description, this book has held a special allure for me. It has everything to attract a self-styled intellectual: a reputation for difficulty, a hefty bulk, a pompous title, and the imprimatur of a famous name. Clearly I had to read it.

Jean-Paul Sartre was the defining intellectual of his time, at least on the European continent. He did everything: writing novels and plays, founding and editing a journal, engaging in political activism, and pioneering a philosophical school: existentialism. This book is the defining monument of that school. An eight-hundred-page treatise on ontology which, somehow, became widely read—or at least widely talked about. Nearly eighty years later, we are still talking about this book. In 2016 Sarah Bakewell released a best-selling book about Sartre’s movement; and a new translation of Being and Nothingness will be released next year. Interest in existentialism has not abated.

Yet what is existentialism? And how has it weathered the passing years? This is what I set out to determine, and this review will show whether my attempt bore fruit.

One should begin by examining the subtitle of this book: “A Phenomenological Essay on Ontology.” Already we have a contradiction. Phenomenology is a philosophical school founded by Edmund Husserl, which attempted to direct philosophers’ attention back “to the things themselves”—that is, to their own experience of the world. One of Husserl’s most insistent commandments was that the philosopher should “bracket,” or set aside, the old Cartesian question of the reality of these experiences (is the world truly as I perceive it?); rather, the philosopher should simply examine the qualities of the experience itself. Thus, Sartre’s promise of a phenomenological ontology (ontology being the investigation of the fundamental nature of reality) is a flagrant violation of Husserl’s principles.

Still, it does have a lot to tell us about Sartre’s method. This book is an attempt to deduce the fundamental categories of being from everyday experience. And this attempt leads Sartre to the two most basic categories of all: being and nothingness. Being is all around us; it is manifest in every object we experience. Sartre defines existing objects as those which are self-identical—that is, objects which simply are what they are—and he dubs this type of being the “in-itself.” Humans, by contrast, cannot be so defined; they are constantly shifting, projecting themselves into an uncertain future. Rather than simply existing, they observe their own existence. Sartre calls this type of human existence the “for-itself.”

Already we see the old Cartesian dualism reappearing in these categories. Are we not confronted, once again, with the paradoxes of matter and mind? Not exactly. For Sartre does not consider the in-itself and the for-itself to be two different types of substances. In fact, the for-itself has no existence at all: it is a nothingness. To use Sartre’s expressions, human consciousness can be compared to “little pools of non-being that we encounter in the heart of being,” or elsewhere he says that the for-itself “is like a hole in being at the heart of Being.” The for-itself (a consciousness) is a particular privation of a specific in-itself (a human body), which functions as a nihilation that makes the world appear: for there would not be a “world” as we know it without perception, and perception is, for Sartre, a type of nihilation.

Putting aside all of the difficulties with this view, we can examine the consequences which Sartre draws from these two sorts of being. If the for-itself is a nothingness, then it is forever removed from the world around it. That is, it cannot be determined, either by its past or by its environment. In short, it is free—inescapably free. Human behavior can thus never be adequately explained or even excused, since all explanations or excuses presuppose that humans are not fundamentally self-determining. But of course we explain and excuse all the time. We point to economic class, occupation, culture, gender, race, sexuality, upbringing, genetic background, mood—to a thousand different factors in order to understand why people act the way they do.

This attempt to treat humans as things rather than free beings Sartre calls “bad faith.” This constitutes the fundamental sin of existentialism. He gives the example of a waiter who so embraces his role as a waiter that his motions become calculated and mechanical; the waiter tries to embody himself in his role to the extent that he gives up his individual freedom and becomes a kind of automaton whose every movement is predictable. But of course life is full of examples of bad faith. I excuse my mistake by saying I hadn’t had my coffee yet; my friend cheats on his girlfriend, but it was because his father cheated on his mother; and so on.

This is the basic situation of the for-itself. Yet there is another type of being which Sartre later introduces us to: the for-others. Sartre introduces this category with a characteristically vivid example: Imagine a peeping Tom is looking through a keyhole into a room. His attention is completely fixed on what he sees. Then, suddenly, he hears footsteps coming down the hall; and he immediately becomes aware of himself as a body, as a thing. Sartre considers experiences like this to prove that we cannot doubt the existence of others, since being perceived by others totally changes how we experience ourselves.

This allows Sartre to launch into an analysis of human interaction, and particularly into love and sexuality. This analysis bears the obvious influence of Hegel’s famous Master-Slave dialectic, and it centers on the same sorts of paradoxes: the contradictory urges to subjugate and be subjugated, to be embodied and desired, to be free and to be freely chosen, and so on. However, Sartre’s best writing in this vein is not to be found here, but in his great play No Exit, where each character exhibits a particular type of bad faith. All three of the characters wish to be looked at in a particular way, yet each of them is stuck with others whose own particular sort of bad faith renders them unable to look in the “right” way.

Sartre concludes from all this that our most fervent desire, and the reason we so often slip into bad faith, is that we wish to be an impossible combination of the in-itself and the for-itself. We want to be the foundation of our own being, a perfect self-identical creature, and yet absolutely free. We want to become gods. But, for Sartre, this is self-contradictory: the in-itself and the for-itself can never coexist. Thus, the idea of God arises as a sort of wish-fulfillment; but God is impossible by definition. As a result, human life “is a useless passion”—a relentless striving to be something which cannot exist.

All this may be clearer if we avoid Sartre’s terminology and, instead, compare his philosophy to that of Buddhism (at least, the type of Western Buddhism I’m acquainted with). The mind is constantly searching for a sense of permanent identity. Though the mind is, by nature, groundless, we are uncomfortable with this; we want to put ground under our feet. So we seek to identify ourselves with our jobs, our families, our marriages, our hobbies, our success, our money—with any external good that lets us forget that our consciousness is constantly shifting and flowing, and that our identities can never be absolutely determined. So far, Buddhism and Sartrean existentialism have similar diagnoses of our problems. But Buddhism prescribes detachment, while Sartre prescribes the embrace of absolute freedom and the adoption of complete responsibility for our actions.

No summary of the book would be complete without Sartre’s critique of Freud. Sartre was clearly intrigued by Freud’s theories and wanted to use them in some way. However, Freud’s unconscious motivations and superconscious censorship are clearly incompatible with Sartre’s philosophy of freedom. In particular, Sartre found it self-contradictory to say that there could be a part of the mind which “wants” without us knowing it, or a part that is able to hide information from our awareness. For Sartre, all consciousness is self-consciousness, and it therefore does not make sense to “want” or “know” something unconsciously.

In place of Freud’s psychoanalysis, then, Sartre proposes an existential psychoanalysis. For Sartre, every person is defined by a sort of fundamental choice that determines their stance towards the world (though, strangely, it seems that most people are not aware of having made this choice). It is the task of the existential psychoanalyst to uncover this fundamental choice by a close examination of everyday actions. Indeed, Sartre believes that everything from one’s preference for onions to one’s aversion to cold water is a consequence of this fundamental choice. Sartre even goes so far as to insist that some things, by virtue of being so clearly suggestive of metaphor, have a universal meaning for the for-itself. As an example of this, he gives “slime”—a viscous liquid which he thinks inspires a universal horror of the weight of existence.

This fairly well rounds out a summary of the book. So what are we to make of this?

The comparison with Heidegger is unavoidable. Sartre himself seems to have encouraged the comparison by giving his metaphysical tome a title redolent of the German professor’s magnum opus. The influence is clear: Sartre wrote Being and Nothingness after reading Being and Time during his brief imprisonment in a prisoner-of-war camp; and Heidegger is referenced throughout the book. Nevertheless, I think it would be inaccurate to describe Sartre as a follower of Heidegger, or his philosophy merely as an interpretation of Heidegger’s. Indeed, I think that the superficial similarities between the two thinkers (stylistic obscurity, disregard of religion and ethics, a focus on human experience, a concern with “being”) mask far more important differences.

Heidegger’s project, insofar as I understand it, is radically anti-Cartesian. He sought to replace the thinking and observing ego with the Dasein, a being thrown into the world, a being fundamentally ensconced in a community and surrounded by tools ready-to-hand. For Heidegger, the Cartesian perspective—of withdrawing from the world and deliberately reflecting and reasoning—is derivative of, and inferior to, this far more fundamental relationship to being. Sartre could not be further from this. Sartre’s perspective, to the contrary, is insistently Cartesian and subjectivist; it is the philosophy of a single mind urgently investigating its experience. Further, the concept of “freedom” plays almost no role in Heidegger’s philosophy; indeed, I believe he would criticize the very idea of free choice as enmeshed in the Cartesian framework he hoped to destroy.

In method, then, Sartre is far closer to Husserl—another professed Cartesian—than to Heidegger. However, as we observed above, Sartre breaks Husserl’s most fundamental tenet by using subjective experiences to investigate being; and this he clearly did under the influence of Heidegger. These two, along with Freud and Hegel, constitute the major intellectual influences on Sartre.

It should be no surprise, then, that Sartre’s style often verges on the obscure. Many passages in this book are comparable in ugliness and density to those of the German masters of opacity (Freud excluded). Heidegger is the most obvious influence here: for Sartre, like Heidegger, enjoys using clunky hyphenated terms and repurposing quotidian words in order to give them a special meaning. There is an important difference, however. When I did decipher Sartre’s more difficult passages, I usually found that the inky murkiness was rather unnecessary.

Believe me when I say that I am no lover of Heidegger’s writing. Nevertheless, I think Heidegger’s tortured locutions are more justifiable than Sartre’s, for Heidegger was attempting to express something that is truly counter-intuitive, at least in the Western philosophical tradition; whereas Sartre’s philosophy, whatever novelties it possesses, is far more clearly in the mainline of Cartesian thinking. As a result, Sartre’s adventures in jargon come across as mere displays of pomp—a bejewelled robe he dons in order to appear more weighty—and, occasionally, as mere abuses of language, concealing simple points in false paradoxes.

This is a shame, for when Sartre wished, he could be quite a powerful writer. And, indeed, the best sections of this book are when Sartre switches from his pseudo-Heideggerian tone to that of the French novelist. The most memorable passages in this book are Sartre’s illustrations of his theories: the aforementioned waiter, or the Peeping Tom, or the passage on skiing. Whatever merit Sartre had as a philosopher, he was undoubtedly a genius in capturing the intricacies of subjective experience—the turns of thought and twinges of emotion that rush through the mind in everyday situations.

But what are we to make of his system? To my mind, the most immediately objectionable aspect is his idea of nothingness. Nothing is just that—nothing: a complete lack of qualities, attributes, or activity of any kind. Indeed, if a nothingness can be defined at all, it must be via elimination: by excluding every existing thing. It seems incoherent, then, to say that the human mind is a nothingness, and is therefore condemned to be free. Consciousness has many definite qualities and, besides that, is constantly active and (in Sartre’s opinion at least) able to choose itself and change the world. How can a nothingness do that? And this is putting to the side the striking question of how the human brain can produce a complete absence of being. Maybe I am taking Sartre’s point too literally; but it is fair to say that he provides no account of how this nothingness came into being.

Once this idea of nothingness is called into question, the rest of Sartre’s conclusions are on extremely shaky ground. Sartre’s idea of freedom is especially suspect. If human consciousness is not separated from the world and from its past by a nothingness, then Sartre’s grand pronouncements of total freedom and total responsibility become dubious. To me it seems unlikely to the highest degree that, of all the known objects in the universe, including all of the animals (some of which are closely related to us), humans are the only things that are exempt from the chain of causality that binds everything together.

Besides finding it implausible, I also cannot help finding Sartre’s idea of total freedom and responsibility to be morally dubious. He himself, so far as I know, never managed to make his system compatible with a system of ethics. In any case, an emphasis on total responsibility can easily lead to a punitive mentality. According to Sartre, everyone deserves their fate.

Admittedly I do think his conception of “bad faith” is useful. Whether or not we are metaphysically “free,” we often have more power over a situation than we admit. Denying our responsibility can lead to inauthenticity and immorality. And Sartre’s embrace of freedom can be a healthy antidote to an apathetic despair. Still, I do not think an elaborate ontological system is necessary in order to make this point.

Reading Sartre nowadays, I admit that it is difficult to take his conclusions seriously. For one, the next generation of French intellectuals set to work demonstrating that our freedom is constrained by society (Bourdieu), psychology (Lacan), language (Derrida), and history (Foucault), among other factors. (Of course, these intellectual projects were not necessarily any more solid than Sartre’s.) More importantly, Sartre’s system seems to be so completely bound up in both his times and his own psychology—two things which he denied could determine human behavior—that it ironically belies his conclusions. (As an example of the latter influence, Sartre’s revulsion at, and even horror of, sex is apparent throughout the book, especially in the strange section on “slime.”)

In the end I was somewhat disappointed by this work. And I think my disappointment is ultimately a consequence of Sartre’s method: phenomenological ontology. It is simply incorrect to believe that we can closely interrogate our own experiences to determine the fundamental categories of being. Admittedly, Sartre is not entirely averse to making logical arguments; but too many of his conclusions rest on the shaky ground of these narrations of subjective experience. Sartre is, indeed, a brilliant observer of this experience, and his descriptions are worth reading for their psychological insight alone. Nevertheless, as a system of ontology, I do not think it can stand on its own two feet.



View all my reviews

Review: Either/Or


Either/Or: A Fragment of Life by Søren Kierkegaard

My rating: 4 of 5 stars

Of course, a critic resembles a poet to a hair, except that he has no anguish in his heart, no music on his lips.

This is one of those rare unclassifiable books, whose genre was born the day it was published and which has since left no heirs. Kierkegaard gives us what appears, at first, to be a sort of literary experiment: the papers of two imaginary characters, found inside an escritoire by a third imaginary character. These two characters—referred to as ‘A’ and ‘B’—serve as the titular either/or; and their writings are a study in contrast. Specifically, Kierkegaard uses these two personages to juxtapose the aesthetic with the ethical modes of life, presumably asking the reader to choose between them. You might say it is a ‘choose your own adventure’ book of philosophy, except the adventure chosen turns out to be your life.

Part 1, by A, gives us the aesthetic man. We are presented with extracts from a journal, essays on Mozart’s Don Giovanni and ancient tragedy, a study of boredom, and the famous Seducer’s Diary: A’s record of his carefully planned seduction of a young girl. Part 2 is more focused, consisting of two long letters sent by B (who is supposed to be a middle-aged judge) to A, both exhorting the latter to turn towards a more ethical view of life. The styles of the two writers are suitably different: A is excitable, hyperbolic, and aphoristic, while B is more staid and focused. Nevertheless, it is never difficult to tell that Kierkegaard is the true author.

Neatly summarizing the difference in perspectives would be difficult, since Kierkegaard tends to be flexible with his own definitions. Perhaps the best way to capture the contrast is with the book’s central metaphor: seduction vs. marriage. In the first, A is concerned with attaining a maximum of pleasure. He is not a hedonist, and is not very interested in sex. Rather, he is interested in avoiding boredom by carefully shaping his developing relationship like a well-plotted novel, ensuring that each emotion is felt to the utmost. His primary concern, in other words, is to avoid the stale, the cliché, the repetitive. The judge, by contrast, sees marriage as far preferable to seduction, since it is through commitments like marriage that the inner self develops and becomes fully actualized. While the aesthete prefers to live in the moment, the ethical man notes that, even if every moment is novel, the self remains the same. Change requires commitment.

Interpreting the book is difficult. Are we being asked to make a choice in values? Such a choice could have no basis but chance or personal whim, since no pre-existing value could guide us between two incompatible value-systems. This, you might say, is the existentialist interpretation of the book: the primacy of choice over values. Yet other options are available. For example, despite Kierkegaard’s famous opposition to Hegel’s philosophy, this text is open to a Hegelian reading. Specifically, B’s perspective seems in many respects superior to A’s, since B demonstrates that he is able to understand A, while A presumably cannot understand B. Thus, you can perhaps regard B as the Hegelian antithesis to A’s thesis; and perhaps both of these can be united in a wider perspective, such as in Kierkegaard’s Knight of Faith—a religious unity of inner feeling and outer obligation. There is also the unmistakable autobiographical element in this writing, since Kierkegaard had not long before broken off his own engagement.

This is just to scrape the surface of possibility. And this shows both the strength and weakness of Kierkegaard’s writing. On the one hand, this book is highly rich and suggestive, with brilliant passages buried amid piles of less compelling material. On the other hand, to call a book “rich” and “suggestive” is also to call it confused. Since no clear message emerges, and since there are no arguments to guide the way, the book can easily yield interpretations consonant with pre-conceived opinions. In other words, it is hard for me to imagine somebody being convinced to change their mind by reading this. But Kierkegaard can perhaps better be likened to a good art critic than to a systematic philosopher, for the value in his writing consists more in illuminating comments than in a final conclusion.

On the whole, however, I must say that I emerged with a distaste for Kierkegaard’s writing. At times he rises to commanding eloquence; but so often he seems to wallow in confusing and repetitive intricacies. More to the point, I find the general tenor of his writing to be anti-rationalist; and this is exemplified in the complete lack of argument in his writings. But nobody could deny that, all told, this is an extraordinary book and a worthy addition to the philosophical tradition.



View all my reviews

Review: At the Existentialist Café


At the Existentialist Café: Freedom, Being, and Apricot Cocktails by Sarah Bakewell

My rating: 4 of 5 stars

Ideas are interesting, but people are vastly more so.

Sarah Bakewell has followed her lovely book about Montaigne with an equally lovely book about the existentialist movement. Comparing the books, one can see an obvious theme emerge in Bakewell’s writing: the interest in practical philosophy. Montaigne and the existentialists share the tendency to write about their own lives and, in various ways, to attempt to live out the tenets of their philosophies. This makes Bakewell’s biographical method especially revealing and rewarding, while at the same time adding a subtle, highbrow self-help aspect to her books—life lessons with the imprimatur of big names and fine prose.

Bakewell attempts to tell the story of the existentialist movement from its twentieth-century beginnings (skipping over precursors such as Dostoyevsky and Kierkegaard) to its apparent end, with the deaths of its principal architects. The four main protagonists are Martin Heidegger, Maurice Merleau-Ponty, Simone de Beauvoir, and Jean-Paul Sartre (who, unsurprisingly, is the dominant personality), along with shorter appearances by other thinkers: Husserl, Camus, Raymond Aron, Karl Jaspers, and Simone Weil, to name the most prominent. When you consider the sheer amount of biographical and philosophical material this list represents, you realize the magnitude of the task set before Bakewell, and the consequent skill she demonstrated in producing a readable, elegant, and stimulating book.

I am sorry to say that I have read very little of the writings of the principal actors, with the exception of Heidegger. Bakewell’s account of him mostly confirmed my own experiences with the infuriating metaphysician, especially in his disturbing lack of character and, indeed, of basic humanity. Sartre comes across as far more human, if not exactly more likable. Few people could hear of Sartre’s enormous philosophical, biographical, journalistic, and literary output, over so many years, without feeling a sense of awe. Nevertheless, Sartre’s opinions rarely struck me as measured or reasonable. Though I often mourn the decline of the public intellectual, Sartre’s example gives me pause, for his influence on contemporary politics was not necessarily salubrious. Perhaps it is true that intellectuals, seeking consistency and clarity, are naturally inclined towards extreme positions. Sartre certainly was, in any case, and it led him into some foolish and even reprehensible stances.

In contrast to these two giants, Beauvoir and Merleau-Ponty come off rather well in this story. The former tempered her political opinions with a greater subtlety, thoroughness, and empathy; while the latter lived a quietly productive and happy life, creating a philosophy that Bakewell argues constitutes the greatest intellectual legacy of the bunch.

Just as Bakewell argued that Montaigne’s writings are newly relevant for his sense of moderation, so she argues that the existentialists are newly relevant for exploring the questions of authenticity and freedom. Not having read most of their work, I cannot comment on this. But what I found most inspiring was their burning desire to think and to write—and to write like mad, for hours each day, in every genre, for decades on end. Though most of this writing was born today to die tomorrow, each one of them produced a magisterial tome for future readers to beat their heads against. I suppose I will have to pick them up sometime soon.



View all my reviews

Review: ¿Qué es filosofía?


¿Qué es filosofía? by José Ortega y Gasset

My rating: 3 of 5 stars

I have always believed that clarity is the courtesy of philosophy…

When I picture Ortega to myself, I imagine a man seated in the middle of a room full of books—the atmosphere smoky from frequent cigarettes—banging furiously away at a typewriter, going at it from morning till evening, rapidly accumulating piles of written pages by his side. Ortega was so prolific, and wrote about so many different things, that he could have filled an entire journal by himself—and nearly did. I have read only a fraction of his collected works, but this has included: an analysis of love, a political reckoning of Spain, a diagnosis of the social ills of Europe, and essays on literature and modern art. Now added to this list is an introduction to philosophy.

What I most admire in Ortega is this flexibility and fluency: his omnivorous interest in the world and his ability to write smooth prose about complex issues. What I most deprecate is his tendency to rush headlong into a problem, sweep away controversy with grand gestures, and then drop it at once. In other words, he is profligate with ideas but stingy with systems. His theories are always germinal; he leaves to others the difficult work of rigorous arguments and concrete applications. This is not damaging in cases such as aesthetic criticism, where rigor is hardly possible anyway; but it is ruinous in the case of philosophy, where logical consistency is so crucial.

The result of his approach is this series of lectures, which does not give a coherent view of philosophy’s history or its method. Instead, Ortega offers an essayistic series of opinions about the shortcomings of previous incarnations of philosophy and where he thinks philosophy should go next. I say “opinions” because, crucially, Ortega does not offer anything resembling a formal argument. This makes it difficult to accept his conclusions and, worse, makes it difficult to understand his opinions in the first place, since without the supporting skeleton of an argument his views remain formless.

Nevertheless, a short summary is still possible. Ortega derides science for being concerned with merely “secondary” problems, and mysticism for being irrational. Materialists metastasize existence into something inhuman and discrete, while idealists (such as Descartes) divorce the subject from his surroundings. Ortega’s solution is his phrase, “I am myself and my surroundings,” considering human experience—composed of the interpenetration of subject and surrounding circumstances—the basic fact of philosophy. In this, as in his emphasis on human freedom, he fits in well with existentialists like Heidegger and Sartre. But he differs from them, first, in writing legibly; and second, in his strong emphasis on reason.

I think there are the germs of some worthy ideas contained here; but in order to really understand the ontological and epistemological ramifications of his positions, he would have to argue for them in a way entirely absent from this book.

View all my reviews

Review: The New Organon


The New Organon by Francis Bacon

My rating: 4 of 5 stars

Since I’ve lately read Aristotle’s original, I thought I’d go ahead and read Bacon’s New Organon. The title more or less says it all. For this book is an attempt to recast the method of the sciences in a better mold. Whereas Aristotle spends pages and pages enumerating the various types of syllogisms, Bacon dismisses it all with one wave of the hand—away with such scholarly nonsense! Because Aristotle is so single-mindedly deductive, his scientific research came to naught; or, as Bacon puts it, “Aristotle, who made his natural philosophy a mere bond servant to his logic, thereby [rendered] it contentious and well-nigh useless.”

What is needed is not deduction—which draws trivial conclusions from absurd premises—but induction. More specifically, what is needed is a great many experiments, the results of which the careful scientist can sort into air-tight conclusions. Down with the syllogism; up with experiment. Down with the schoolmen; up with the scientists.

In my (admittedly snotty) review of Bacon’s Essays, I remarked that he would have done better to have written a work entirely in aphorisms. Little did I know that Bacon did just that, and it is this book. Whatever Bacon’s defects were as a politician or a philosopher, Bacon is the undisputed master of the pithy, punchy maxim. In fact, his writing style can be almost sickening, so dense is it with aphorism, so rich is it with metaphor, so replete is it with compressed thought.

In the first part of his New Organon all of the defects of Bacon’s style are absent, and all of his strengths are present in full force. Indeed, if this work consisted of only the first part, it would have merited five stars, for it is a tour de force. Bacon systematically goes through all of the errors the human mind is prone to when investigating nature, leaving no stone unturned and no vice unexamined, damning them all in epigram after epigram. The reader hardly has time to catch his breath from one astonishing insight before Bacon is on to another.

Among these insights are, of course, Bacon’s famous four idols. We have the Idols of the Tribe, which consist of the errors humans are wont to make by virtue of their humanity. For our eyes, our ears, and our very minds distort reality in a systematic way—something earlier philosophers had, so far as I know, neglected to account for. We have then the Idols of the Cave, which are the foibles of the individual person, over and above the common limitations of our species: certain pet theories, preferences, accidents of background, peculiarities of taste. And then finally we have the Idols of the Market Place, which are caused by the deceptive nature of language and words, as well as the Idols of the Theater, which consist of the various dogmas present in the universities and schools.

Bacon also displays a remarkable insight into psychology. He points out that humans are pattern-seeking animals, which leads us sometimes to see patterns that aren’t there: “The human understanding is of its own nature prone to suppose the existence of more order and regularity in the world than it finds.” He draws the distinction, made so memorable in Isaiah Berlin’s essay, between foxes and hedgehogs: “… some minds are stronger and apter to mark the differences of things, others to mark their resemblances.” And he offers, in terms no psychologist could fault, a description of confirmation bias:

The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.

Part two, on the other hand, is a tedious, rambling affair, which makes the patient reader almost forget the greatness of the first half. Here, Bacon moves on from condemning the errors of others to setting up his own system. In his opinion, scientific enquiry is a simple matter of tabulation: make a table of every situation in which a given phenomenon is always found, and then make a table of every situation in which a given phenomenon is never found; finally, make a table of every situation in which said phenomenon is sometimes found, shake well, and out comes your answer.
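Just to make vivid how mechanical Bacon imagines this process to be, here is a minimal sketch in Python of his tables of presence and absence (the observations and candidate “natures” are invented for illustration, and his third table, of degrees, is omitted):

```python
# A toy version of Baconian induction by tabulation; the data are invented.
observations = [
    {"situation": "flame",         "heat": True,  "light": True,  "motion": True},
    {"situation": "sunlight",      "heat": True,  "light": True,  "motion": True},
    {"situation": "moonlight",     "heat": False, "light": True,  "motion": False},
    {"situation": "boiling water", "heat": True,  "light": False, "motion": True},
]

# Table of presence: every situation in which heat is found.
table_of_presence = [o for o in observations if o["heat"]]
# Table of absence: every situation in which heat is missing.
table_of_absence = [o for o in observations if not o["heat"]]

# A candidate "nature" survives if it appears in every instance of presence
# and in no instance of absence -- roughly Bacon's method of exclusion.
candidates = [nature for nature in ("light", "motion")
              if all(o[nature] for o in table_of_presence)
              and not any(o[nature] for o in table_of_absence)]

print(candidates)  # ['motion']
```

Even this toy version happens to return Bacon’s own conclusion that heat is a kind of motion; the trouble, as the next paragraph argues, lies in deciding which situations and which “natures” are worth tabulating in the first place.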

The modern reader will not recognize the scientific method in this process. For we now know that Bacon’s induction is not sufficient. (Though he does use his method to draw an accurate conclusion about the nature of heat: “Heat is a motion, expansive, restrained, and acting in its strife upon the smaller particles of bodies.”) What Bacon describes is more or less what we’d now call ‘natural history’, a gathering up of facts and a noting of regularities. But the scientific method proper requires the framing of hypotheses. The hypothesis is key, because it determines what facts need to be collected, and what relationship those facts will have with the theory in question. Otherwise, the buzzing world of facts is too lush and fecund to tabulate; there are simply too many facts. Furthermore, Bacon makes the somewhat naïve—though excusable, I think—assumption that a fact is simply a fact, whereas we now know that facts are basically meaningless unless contextualized; and, in science, it is the theory in question which contextualizes said facts.

The importance of hypotheses also makes deduction far more important than Bacon acknowledges. For the aspiring experimentalist must often go through a long chain of deductive reasoning before he can determine what experiment should be performed in order to test a theory. In short, science relies on both deductive and inductive methods, and theory and data are far more intertwined than Bacon apparently thinks. (As a side note, I’d also like to point out that Bacon wasn’t much of a scientist himself; he brings up the Copernican heliocentric system many times, only to dismiss it as ridiculous, and he seems curiously unaware of the other scientific advances of his day.)

In a review of David Hume’s Enquiry Concerning the Principles of Morals, I somewhat impertinently remarked that the English love examples—or, to use a more English word, instances. I hope not to offend any English readers, but Bacon confirms me in this prejudice—for the vast bulk of this work is a tedious enumeration of twenty-seven (yes, that’s almost thirty) types of ‘instances’ to be found in nature. Needless to say, this long and dry list of the different sorts of instances makes for both dull reading and bad philosophy, for I doubt any scientist in the history of the world ever made progress by sorting his results into one of Bacon’s categories.

So the brilliant, brash, and brazen beginning of this book fizzles out into pedantry that, ironically enough, rivals even Aristotle’s original Organon. So, to repeat myself, the title of this book more or less says it all.

View all my reviews

Review: Organon (Aristotle)

Review: Organon (Aristotle)

Organon by Aristotle

My rating: 3 of 5 stars

Aristotle continues to provoke conflicting reactions in me. I am always torn between realizing his tremendous originality and historical importance, and suffering from his extraordinary dullness. This book exemplifies both sides of the coin. Seeing a man single-handedly create the field of logic, ex nihilo, is tremendous; yet reading through these treatises could put a coffee-addict in a coma.

I am not insensitive to the appeals of philosophy. Far from it; I think reading philosophy is thrilling. Some of my most acute aesthetic experiences have come from contemplating some philosopher’s idea. Yet I have never had this reaction to Aristotle’s writings. Part of this is due to his formidable difficulty; another part, to the nature of the works (which, I must constantly remind myself, are lecture-notes).

Nevertheless, Aristotle had a prosaic mind; even when faced with the most abstract phenomena in the universe, his first reaction was to parcel everything into neat categories, and to go on making lists and explanations of these categories. He does make logical arguments, but they are often brief, and almost as often unsatisfactory. Much of the time the student is faced with the dreary task of working his way through Aristotle’s system, simply because it is his system, and not because it is empirically or logically compelling.

(Every time I write a review for Aristotle, it comes out so disappointed. Let me try to be more positive.)

My favorite piece in this collection was the Posterior Analytics, which is a brilliant treatise on epistemology, logic, and metaphysics. Aristotle succinctly presents an entire theory of knowledge, and it’s incomparably more rigorous and detailed than anything Plato could have produced. I also particularly liked the Topics, as there we see Aristotle as a seasoned debater, in addition to a bumbling professor. Of course, there is much of strictly philosophic interest in this work as well; a particularly memorable problem is that of the future naval battle.

For me, Aristotle is at his best when he is discussing the acquisition of knowledge. For Aristotle, whatever his faults, more perfectly embodied the love of knowledge than any other thinker in history. He wanted to know all; and, considering his historical limitations, he came damn close. We poor moderns have to content ourselves with either a mastery of one tiny slice of reality, or a dilettante acquaintance with all of it; Aristotle had the whole world at his fingertips.

View all my reviews

Review: The Structure of Scientific Revolutions

Review: The Structure of Scientific Revolutions

The Structure of Scientific Revolutions by Thomas S. Kuhn

My rating: 5 of 5 stars

Observation and experience can and must drastically restrict the range of admissible scientific belief, else there would be no science. But they cannot alone determine a particular body of such belief. An apparently arbitrary element, compounded of personal and historical accident, is always a formative ingredient of the beliefs espoused by a given scientific community at a given time.

This is one of those wonderfully rich classics, touching on many disparate fields and putting forward ideas that have become permanent fixtures of our mental furniture. Kuhn synthesizes insights from history, sociology, psychology, and philosophy into a novel conception of science—one which, though seemingly nobody agrees with it entirely, has become remarkably influential. Indeed, this book made such an impact that the contemporary reader may have difficulty seeing why it was so controversial in the first place.

Kuhn’s fundamental conception is the paradigm. A paradigm is a research program that defines a discipline, perhaps briefly, perhaps for centuries. It is not only a dominant theory, but a set of experimental methodologies, ontological commitments, and shared assumptions about standards of evidence and explanation. These paradigms usually trace their existence to a breakthrough work, such as Newton’s Principia or Lavoisier’s Elements of Chemistry; and they persist until the research program is thrown into crisis by stubborn anomalies (phenomena that cannot be accounted for within the theory). At this point a new paradigm may arise and replace the old one, as Einstein’s system replaced Newton’s.

Though Kuhn is often spoken of as responding to Popper, I believe his book is really aimed at undermining the old positivistic conception of science, according to which science consists of a body of verified statements, and discoveries and innovations cause this body of statements to grow gradually. What this view leaves out is the interconnection and interdependence of these beliefs, and the reciprocal relationship between theory and observation. Our background orients our vision, telling us where to look and what to look for; and we naturally do our best to integrate a new phenomenon into our preexisting web of beliefs. Thus we may extend, refine, and elaborate our vision of the world without undermining any of our fundamental theories. This is what Kuhn describes as “normal science.”

During a period of “normal science” it may be true that scientific knowledge gradually accumulates. But when the dominant paradigm reaches a crisis, and the community finds itself unable to accommodate certain persistent observations, a new paradigm may take over. This cannot be described as a mere quantitative increase in knowledge, but is a qualitative shift in vision. New terms are introduced, older ones redefined; previous discoveries are reinterpreted and given a new meaning; and in general the web of connections between facts and theories is expanded and rearranged. This is Kuhn’s famous “paradigm shift.” And since the new paradigm so reorients our vision, it will be impossible to directly compare it with the older one; it will be as if practitioners from the two paradigms speak different languages or inhabit different worlds.

This scandalized some and delighted others, for the same reason: Kuhn seemed to be arguing that scientific knowledge is socially solipsistic. That is to say, scientific “truth” is true only because it is given credence by the scientific community. Thus no paradigm can be said to be objectively “better” than another, and science cannot be said to really “advance.” Science is reduced to a series of fashionable ideas.

Scientists were understandably peeved by the notion, and social scientists concomitantly delighted, since it meant their discipline was at the crux of scientific knowledge. But Kuhn repeatedly denied being a relativist, and I think the text bears him out. It must be said, however, that Kuhn does not guard against this relativistic interpretation of his work as much as, in retrospect, he should have. I believe this was because Kuhn’s primary aim was to undermine the positivistic, gradualist account of science—which was fairly universally held in the past—and not to replace it with a fully worked-out theory of scientific progress himself. (And this is ironic since Kuhn himself argues that an old paradigm is never abandoned until a new paradigm takes its place.)

Though Kuhn does say a good deal about this, I think he could have emphasized more strongly the ways that paradigms contribute positively to reliable scientific knowledge. For we simply cannot look on the world as neutral observers; and even if we could, we would not be any the wiser for it. The very process of learning involves limiting possibilities. This is literally what happens to our brains as we grow up: the confused mass of neural connections is pruned, leaving only the ones which have proven useful in our environment. If our brains did not quickly and efficiently analyze environmental stimuli into familiar categories, we could hardly survive a day. The world would be a swirling, jumbled chaos.

Reducing ambiguities is so important to our survival that I think one of the primary functions of human culture is to further eliminate possibilities. For humans, being born with considerable behavioral flexibility, must learn to become inflexible, so to speak, in order to live effectively in a group. All communication presupposes a large degree of agreement within members of a community; and since we are born lacking this, we must be taught fairly rigid sets of assumptions in order to create the necessary accord. In science this process is performed in a much more formalized way, but nevertheless its end is the same: to allow communication and cooperation via a shared language and a shared view of the world.

Yet this is no argument for epistemological relativism, any more than the existence of incompatible moral systems is an argument for moral relativism. While people commonly call themselves cultural relativists when it comes to morals, few people are really willing to argue that, say, unprovoked violence is morally praiseworthy in certain situations. What people mean by calling themselves relativists is that they are pluralists: they acknowledge that incompatible social arrangements can nevertheless be equally ethical. Whether a society has private property or holds everything in common, whether it is monogamous or polygamous, whether burping is considered polite or rude—these may vary, and yet create coherent, mutually incompatible, ethical systems. Furthermore, acknowledging the possibility of equally valid ethical systems also does not rule out the possibility of moral progress, as any given ethical system may contain flaws (such as refusing to respect certain categories of people) that can be corrected over time.

I believe that Kuhn would argue that scientific cultures may be thought of in the same pluralistic way: paradigms can be improved, and incompatible paradigms can nevertheless both have some validity. Acknowledging this does not force one to abandon the concept of “knowledge,” any more than acknowledging cultural differences in etiquette forces one to abandon the concept of “politeness.”

Thus accepting Kuhn’s position does not force one to embrace epistemological relativism—or at least not the strong variety, which reduces knowledge merely to widespread belief. I would go further, and argue that Kuhn’s account of science—or at least elements of his account—can be made to fit together even with the system of his reputed nemesis, Karl Popper. For both conceptions have the scientist beginning, not with observations and facts, but with certain arbitrary assumptions and expectations. This may sound unpromising; but these assumptions and expectations, by orienting our vision, allow us to realize when we are mistaken, and to revise our theories. The Baconian inductivist or the logical positivist, by beginning with a raw mass of data, has little idea how to make sense of it and thus no basis upon which to judge whether an observation is anomalous or not.

This is not where the resemblance ends. According to both Kuhn and Popper (though the former is describing while the latter is prescribing), when we revise our theories we should, if possible, modify or discard the least fundamental part, while leaving the underlying paradigm unchanged. This is Kuhn’s “normal science.” So when irregularities were observed in Uranus’ orbit, scientists could have discarded either Newton’s theories (fundamental to the discipline) or the assumption that Uranus was the furthest planet in the solar system (a superficial fact); obviously the latter was preferable, and this led to the discovery of Neptune. Science could not survive if scientists too willingly overturned the discoveries and theories of their discipline. A certain amount of stubbornness is a virtue in learning.

Obviously, the two thinkers also disagree about much. One issue is whether two paradigms can be directly compared or definitively tested. Popper envisions conclusive experiments whose outcome can unambiguously decide whether one paradigm or another is to be preferred. There are some difficulties with this view, however, which Kuhn points out. One is that different paradigms may attach very different importance to certain phenomena. Thus for Galileo (to use Kuhn’s example) a pendulum is a prime exemplar of motion, while to an Aristotelian a pendulum is a highly complex secondary phenomenon, unfit to demonstrate the fundamental properties of motion. Another difficulty in comparing theories is that terms may be defined differently. Einstein said that massive objects bend space; but in Newton’s system space is an inert, absolute background, not the sort of thing that can be bent at all.

Granting the difficulties of comparing different paradigms, I nevertheless think that Kuhn is mistaken in his insistence that they are as separate as two languages. I believe his argument rests, in part, on his conceiving of a paradigm as beginning with definitions of fundamental terms (such as “space” or “time”) which are circular (such as “time is that which is measured by clocks,” etc.); so that comparing two paradigms would be like comparing Euclidean and non-Euclidean geometry to see which is more “true,” though both are equally true to their own axioms (while mutually incompatible). Yet such terms in science do not merely define, but denote phenomena in our experience. Thus (to continue the example) while Euclidean and non-Euclidean geometries may both be equally valid according to their premises, they may not be equally valid according to how they describe our experience.

Kuhn’s response to this would be, I believe, that we cannot have neutral experiences, but that all our observations are already theory-laden. While this is true, it is also true that theory does not totally determine our vision; and clever experimenters can often, I believe, devise tests that differentiate between paradigms to most practitioners’ satisfaction. Nevertheless, as both Kuhn and Popper would admit, the decision to abandon one theory for another can never be a wholly rational affair, since there is no way of telling whether the old paradigm could, with sufficient ingenuity, be made to accommodate the anomalous data; and in any case a strange phenomenon can always be tabled as a perplexing but unimportant deviation for future researchers to tackle. This is how an Aristotelian would view Galileo’s pendulum, I believe.

Yet this fact—that there can be no objective, fool-proof criteria for switching paradigms—is no reason to despair. We are not prophets; every decision we take involves the risk that it will not pan out; and in this respect science is no different. What makes science special is not that it is purely rational or wholly objective, but that our guesses are systematically checked against our experience and debated within a community of dedicated inquirers. All knowledge contains an imaginative and thus an arbitrary element; but this does not mean that anything goes. To use a comparison, a painter working on a portrait will have to make innumerable little decisions during her work; and yet—provided the painter is working within a tradition that values literal realism—her work will be judged, not for the taste displayed, but for the perceived accuracy. Just so, science differs from other cultural realms not in lacking arbitrary elements, but in the shared values that determine how the final result is judged.

I think that Kuhn would assent to this; and I think it was only the widespread belief that science was as objective, asocial, and unimaginative as a camera taking a photograph that led him to emphasize the social and arbitrary aspects of science so strongly. This is why, contrary to his expectations, so many people read his work as advocating total relativism.

It should be said, however, that Kuhn’s position does alter how we normally think of “truth.” In this I also find him strikingly close to his reputed nemesis, Popper. For here is the Austrian philosopher on the quest for truth:

Science never pursues the illusory aim of making its answers final, or even probable. Its advance is, rather, towards the infinite yet attainable aim of ever discovering new, deeper, and more general problems, and of subjecting its ever tentative answers to ever renewed and ever more rigorous tests.

And here is what his American counterpart has to say:

Later scientific theories are better than earlier ones for solving puzzles in the often quite different environments to which they are applied. That is not a relativist’s position, and it displays the sense in which I am a convinced believer in scientific progress.

Here is another juxtaposition. Popper says:

Science is not a system of certain, or well-established, statements; nor is it a system which steadily advances towards a state of finality. Our science is not knowledge (episteme): it can never claim to have attained truth, or even a substitute for it, such as probability. … We do not know: we can only guess. And our guesses are guided by the unscientific, the metaphysical (though biologically explicable) faith in laws, in regularities which we can uncover—discover.

And Kuhn:

One often hears that successive theories grow ever closer to, or approximate more and more closely to, the truth… Perhaps there is some other way of salvaging the notion of ‘truth’ for application to whole theories, but this one will not do. There is, I think, no theory-independent way to reconstruct phrases like ‘really there’; the notion of a match between the ontology of a theory and its ‘real’ counterpart in nature now seems to me illusive in principle.

Though there are important differences, to me it is striking how similar their accounts of scientific progress are: the ever-increasing expansion of problems, or puzzles, that the scientist may investigate. And both thinkers are careful to point out that this expansion cannot be understood as an approach towards an ultimate “true” explanation of everything, and I think their reasons for saying so are related. For since Popper begins with theories, and Kuhn with paradigms—both of which stem from the imagination of scientists—their accounts of knowledge can never be wholly “objective,” but must contain the aforementioned arbitrary element. This necessarily leaves open the possibility that an incompatible theory may yet do an equal or better job of making sense of an observation, or that a heretofore undiscovered phenomenon may violate the theory. And this being so, we can never say that we have reached an “ultimate” explanation, where our theory can be taken as a perfect mirror of reality.

I do not think this notion jeopardizes the scientific enterprise. To the contrary, I think that science is distinguished from older, metaphysical sorts of enquiry in that it is always open-ended, and makes no claim to possessing absolute “truth.” It is this very corrigibility of science that is its strength.

This review has already gone on far too long, and much of it has been spent riding my own hobby-horse without evaluating the book. Yet I think it is a testament to Kuhn’s work that it is still so rich and suggestive, even after many of its insights have been absorbed into the culture. Though I have tried to defend Kuhn from accusations of relativism and of undermining science, one must admit that this book has many flaws. One is Kuhn’s firm line between “normal” science and paradigm shifts. In his model, the first consists of mere puzzle-solving while the second involves a radical break with the past. But I think experience does not bear out this hard dichotomy; discoveries and innovations may be revolutionary to different degrees, which I think undermines Kuhn’s picture of science evolving by punctuated equilibrium.

Another weakness of Kuhn’s work is that it does not do justice to the way that empirical discoveries may cause unanticipated theoretical revolutions. In his model, major theoretical innovations are the products of brilliant practitioners who see the field in a new way. But this does not accurately describe what happened when, say, DNA was discovered. Watson and Crick worked within the known chemical paradigm, and operated like proper Popperians in brainstorming and eliminating possibilities based on the evidence. And yet the discovery of DNA’s double helix, while not overturning any major theoretical paradigms, nevertheless had such far-reaching implications that it caused a revolution in the field. Kuhn has little to say about events like this, which shows that his model is overly simplistic.

I must end here, after thrashing about ineffectually in multiple disciplines in which I am not even the rankest amateur. What I hoped to re-capture in this review was the intellectual excitement I felt while reading this little volume. In somewhat dry (though not technical) academic prose, Kuhn caused a revolution that is still forceful enough to make me dizzy.

View all my reviews

Review: The Logic of Scientific Discovery

Review: The Logic of Scientific Discovery

The Logic of Scientific Discovery by Karl R. Popper

My rating: 4 of 5 stars

We do not know: we can only guess.

Karl Popper originally wrote Logik der Forschung (The Logic of Research) in 1934. This original version—published in haste to secure an academic position and escape the threat of Nazism (Popper was of Jewish descent)—was heavily condensed at the publisher’s request; and because of this, and because it remained untranslated from the German, the book did not receive the attention it deserved. This had to wait until 1959, when Popper finally released a revised and expanded English translation. Yet this condensation and subsequent expansion have left their mark on the book. Popper makes his most famous point within the first few dozen pages; and much of the rest of the book is given over to dead controversies, criticisms and rejoinders, technical appendices, and extended footnotes. It does not make for the most graceful reading experience.

This hardly matters, however, since it is here that Popper put forward what has arguably become the most famous concept in the philosophy of science: falsification.

This term is widely used; but its original justification is not, I believe, widely understood. Popper’s doctrine must be understood as a response to inductivism. Now, in 1620 Francis Bacon released his brilliant Novum Organum. Its title alludes to Aristotle’s Organon, a collection of logical treatises, mainly focusing on how to make accurate deductions. This Aristotelian method—built around the syllogism: deriving conclusions from given premises—dominated the study of nature for millennia, with precious little to show for it. Francis Bacon hoped to change all that with his new doctrine of induction. Instead of beginning with premises (‘All men are mortal’), and reasoning to conclusions (‘Socrates is mortal’), the investigator must begin with experiences (‘Socrates died,’ ‘Plato died,’ etc.) and then generalize a conclusion (‘All men are mortal’). This was how science was to proceed: from the specific to the general.

This seemed all fine and dandy until, in 1739, David Hume published his Treatise of Human Nature, in which he explained his infamous ‘problem of induction.’ Here is the idea. If you see one, two, three… a dozen… a thousand… a million white swans, and not a single black one, it is still illogical to conclude “All swans are white.” Even if you investigated every swan in the world but one, and they all proved white, you still could not conclude with certainty that the last one would be white. Aside from modus tollens (reasoning from the falsity of a particular consequence to the falsity of the general statement that implies it), there is no logically justifiable way to proceed from specific instances to a general conclusion. To this argument, many are tempted to respond: “But we know from experience that induction works. We generalize all the time.” Yet this is to use induction to prove that induction works, which is circular. Hume’s problem of induction has proven to be a stumbling block for philosophers ever since.

In the early part of the 20th century, the doctrine of logical positivism arose in the philosophical world, particularly in the ‘Vienna Circle’. This had many proponents and many forms, but the basic idea, as explained by A.J. Ayer, is the following. The meaning of a sentence is its method of verification; and verification is performed through experience. Thus the sentence “The cat is on the mat” can be verified by looking at the mat; it is a meaningful utterance. But the sentence “The world is composed of mind” cannot be verified by any experience; it is meaningless. Using this doctrine the positivists hoped to eliminate all metaphysics. Unfortunately, however, the doctrine also eliminates much of human knowledge, since, as Hume showed, generalizations can never be verified. No experience corresponds, for example, to the statement “Gravitational attraction is proportional to the product of the masses and the inverse square of the distance between them,” since this is an unrestrictedly general statement, and experiences are always particular.

Karl Popper’s falsificationism is meant to solve this problem. First, it is important to note that Popper is not, like the positivists, proposing a criterion of ‘meaning’. That is to say that, for Popper, unfalsifiable statements can still be meaningful; they just do not tell us anything about the world. Indeed, he continually notes how metaphysical ideas (such as Kepler’s idea that circles are more ‘perfect’ than other shapes) have inspired and guided scientists. This is itself an important distinction because it prevents him from falling into the same paradox as the positivists. For if only the statements with empirical content have meaning, then the statement “only the statements with empirical content have meaning” is itself meaningless. Popper, for his part, regarded himself as the enemy of linguistic philosophy and considered the problem of epistemology quite distinct from language analysis.

To return to falsification, Popper’s fundamental insight is that verification and falsification are not symmetrical. While no general statement can be proved by a specific instance, a general statement can indeed be disproved by a specific instance. A thousand white swans do not prove that all swans are white; but one black swan disproves it. (This is the aforementioned modus tollens.) All this may seem trivial; but as Popper realized, it changes the nature of scientific knowledge as we know it. For science, then, is far from what Bacon imagined it to be—a carefully sifted catalogue of experiences, a collection of well-founded generalizations—and is rather a collection of theories which spring up, as it were, from the imagination of the scientist in the hope of uniting several observed phenomena under one hypothesis. Or to put it more bluntly: a good scientific theory is a guess that has not yet proved wrong.
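The asymmetry can be put schematically (the notation here is mine, not Popper’s). Writing S(x) for “x is a swan” and W(x) for “x is white”:

\[
S(s_1)\wedge W(s_1),\ \dots,\ S(s_n)\wedge W(s_n)\ \not\vdash\ \forall x\,\bigl(S(x)\to W(x)\bigr),
\qquad\text{whereas}\qquad
S(a)\wedge\neg W(a)\ \vdash\ \neg\,\forall x\,\bigl(S(x)\to W(x)\bigr).
\]

No finite stock of confirming instances entails the universal statement; but a single counterexample contradicts it outright, since it falsifies the instance S(a) → W(a) that the universal statement implies.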

With his central doctrine established, Popper goes on to the technicalities. He discusses what composes the ‘range’ or ‘scope’ of a theory, and how some theories can be said to encompass others. He provides an admirable justification for Occam’s Razor—the preference for simpler over more complex explanations—since theories with fewer parameters are more easily falsified and thus, in his view, more informative. The biggest section is given over to probability. I admit that I had some difficulty following his argument at times, but the gist of his point is that probability must be interpreted ‘objectively,’ as frequency distributions, rather than ‘subjectively,’ as degrees of certainty, in order to be falsifiable; and also that the statistical results of experiments must be reproducible in order to avoid the possibility of statistical flukes.
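Popper’s point about simplicity can be illustrated with a toy numerical example of my own (not one he gives): a two-parameter theory, a straight line, is already refuted by three awkwardly placed data points, while a three-parameter theory, a parabola, can accommodate them exactly, and therefore forbids less.

```python
# Toy illustration (mine, not Popper's): simpler theories are easier to falsify.
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 4.0])          # three non-collinear observations

line = np.polyfit(x, y, deg=1)         # two free parameters
parabola = np.polyfit(x, y, deg=2)     # three free parameters

line_error = np.abs(np.polyval(line, x) - y).max()
parabola_error = np.abs(np.polyval(parabola, x) - y).max()

print(line_error > 1e-9)       # True: the line cannot fit the data -- falsified
print(parabola_error < 1e-9)   # True: the parabola accommodates the data exactly
```

The simpler theory sticks its neck out further; and on Popper’s view that exposure to refutation is precisely its scientific virtue.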

All this leads up to a strangely combative section on quantum mechanics. Popper apparently was in the same camp as Einstein, and was put off by Heisenberg’s uncertainty principle. Like Einstein, Popper was a realist and did not like the idea that a particle’s properties could be actually undetermined; he wanted to see the uncertainty of quantum mechanics as a byproduct of measurement or of ‘hidden variables’—not as representing something real about the universe. And like Einstein (though less famously) Popper proposed an experiment to decide the issue. The original experiment, as described in this book, was soon shown to be flawed; but a revised experiment was finally conducted in 1999, after Popper’s death. Though the experiment agreed with Popper’s prediction (showing that measuring an entangled photon does not affect its pair), it had no bearing on Heisenberg’s uncertainty principle, which restricts arbitrarily precise measurements on a single particle, not a pair of particles.

Incidentally, it is difficult to see why Popper is so uncomfortable with the uncertainty principle. Given his own dogma of falsifiability, the belief that nature is inherently deterministic (and that probabilistic theories are simply the result of a lack of our own knowledge) should be discarded as metaphysical. This is just one example of how Popper’s personality was out of harmony with his own doctrines. An advocate of the open society, he was famously authoritarian in his private life, which led to his own alienation. This is neither here nor there, but it is an interesting comment on the human animal.

Popper’s doctrine, like all great ideas, has proven both influential and controversial. For my part I think falsification a huge advance over Bacon’s induction or the positivists’ verification. And despite the complications, I think that falsifiability is a crucial test to distinguish, not only science from pseudo-science, but all dependable knowledge from myth. For both pseudo-science and myth generally distinguish themselves by fitting the available data admirably while resisting falsification. Freud’s theories, for example, can accommodate themselves to any set of facts we throw at them; likewise for intelligent design, belief in supernatural beings, or conspiracy theories. All of these seem to explain everything—and in a way they do, since they fit the observable data—but really explain nothing, since they can accommodate any new observation.

There are some difficulties with falsification, of course. The first is observation. For what we observe, or even what we count as an ‘observation’, is colored by our background beliefs. Whether to regard a dot in the sky as a plane, a UFO, or an angel is shaped by the beliefs we already hold; thus it is possible to disregard observations that run counter to our theories, rather than falsifying the theories. What is more, theories never exist in isolation, but in an entire context of beliefs; so if one prediction is definitively falsified, it can still be unclear what we must change in our interconnected edifice of theories. Further, it is rare for experimental predictions to agree exactly with results; usually they are approximately correct. But where do we draw the line between falsification and approximate correctness? And last, if we formulate a theory which withstands test after test, predicting their results with extreme accuracy time and again, must we still regard the theory as a provisional guess?

To give Popper credit, he responds to all of these points in this work, though perhaps not with enough discussion. But none of these criticisms changes the fact that so much of the philosophy of science written after Popper has taken his work as a starting point, whether attempting to amplify, modify, or (dare I say it?) falsify his claims. For my part, though I was often bored by the dry style and baffled by the technical explanations, I found myself admiring Popper’s careful methodology: responding to criticisms, making fine distinctions, building up his system piece by piece. Here is a philosopher deeply committed to the ideal of rational argument and deeply engaged with understanding the world. I am excited to read more.

View all my reviews