Review: Stoic Pragmatism

Stoic Pragmatism by John Lachs
My rating: 3 of 5 stars

The questions of philosophy will continue to haunt us so long as we remain finite, baffled animals. The fact that philosophy offers no final answers is not an impediment but a lesson. That first great lesson of philosophy is that we must learn to live with uncertainty.

Since it’s that time of year, I’ve lately been seeing many of my friends—struggling artists, mostly—reposting graduation speeches by famous actors, musicians, entrepreneurs, and other celebrities. So many of these communal pep talks boil down to one message: persist. Every artist worth her salt has a story about how she struggled in the purgatory of unsuccessful oblivion for ten centuries—eating ramen and living in a closet—before finally ascending to the paradise of fame. Jonathan Goldsmith, for example—now famous as the Most Interesting Man in the World from the Dos Equis ads—was an obscure actor for over forty years before his “breakout” role.

But success stories and inspiring graduation speeches all have one obvious, debilitating shortcoming: survivor bias. Of course every successful person was once unsuccessful and then became successful; for them, hard work paid off. But the vital question is not whether hard work ever pays off, but how often, and for whom. History has been the silent witness of generations of brilliant musicians and talented actors who remained obscure all their lives. The world is simply stuffed with artists of all kinds, many mediocre, but a fair number extremely talented—far more than will ever be able to support themselves in comfort with their craft. The plain fact is that, even if every budding artist in those ceremonies follows the advice to persist, not even half will achieve anything close to the success of the person on the podium.

And, indeed, even if there is an appealing wisdom in carrying on in the teeth of disappointment and failure, there is also a wisdom in throwing in the towel. Better to cut your losses and do something else, rather than struggle pointlessly for years on end. The real difficulty, though, is knowing which choice to make. What if you give up right when you’re on the cusp of a breakthrough? Or what if you persist for years and get nowhere? And this isn’t just a question for young artists; it is one of the basic questions of life. I recently encountered it in the philosophy of science: When should a hypothesis be abandoned or pursued? An overly tractable scientist may give up on a truly promising theory at the first hint of difficulty; an overly stubborn scientist may spend a career working on a bankrupt idea, in the vain hope of making it work.

Seeking an analysis of this dilemma, I picked up John Lachs’s book, Stoic Pragmatism, which explicitly promises to address just this question. Lachs is attempting here to combine the pragmatist doctrine that we must improve the world with the stoic resignation to the inevitable. Unfortunately, he does not get any further than noting what I hope is obvious—that we should improve what we can and resign ourselves to what we can’t change. This is true; but of course we very often have no idea what we can or can’t change, what will or won’t work, whether we’ll be successful or not, which leaves us in the same baffled place we started. Insofar as truly answering this question would require knowing the future, it is unanswerable. Uncertainty about success and the need to commit to potentially doomed actions are inescapable elements of our existential situation. The best we can hope for, I think, are a few good rules of thumb; and these will likely depend on personal preference.

In any case, this book is far more than an analysis of this common dilemma; it is an attempt to give a complete picture of Lachs’s philosophical perspective. Lachs promises a new philosophical system, but delivers only a disorganized gallimaufry of opinions that do not cohere. For example, Lachs begins by denigrating the professionalization of philosophy, holding that philosophy is not a discipline that seeks the truth—he asserts that not a single proposition would command the assent of a majority of practitioners (though I disagree!)—rather, philosophy is better thought of as intellectual training that helps us to make sense of other activities. But the book includes lengthy analyses of ethics, ontology, and epistemology, so apparently Lachs does see value in answering the traditional problems of philosophy. To make matters worse, Lachs continually excoriates philosophers who do not practice what they preach; and then he goes on to outline an ethical system wholly compatible with a middle-class, bourgeois lifestyle (our main obligations are to do our jobs and to leave other people alone, it seems).

I am being unfairly satirical. I actually agree with most of what Lachs says; and this of course means I must make fun of him. (According to the “Lotz Theory of Agreement” no intellectual will permit herself to simply agree with another intellectual, but will search out any small point of difference, even a difference in attitude or emphasis, in order to seem superior.) Lachs is an inspiring example of an academic trying to address himself to broader problems using more accessible language. He is an attractive thinker and a skilled writer, a humane intellectual capable of fine prose.

Nevertheless, I must admit that this book makes me despair a little. Here we have a man explicitly and repeatedly repudiating his profession and trying to write for non-specialists; and yet Lachs is so palpably an academic that he simply cannot do it. The book begins with his opinions about the canonical philosophers, frequently breaks off to criticize fellow professors and intellectual movements, and includes academic controversies (such as how to interpret Santayana’s use of the word “matter” in his ontological work) of no interest to a general reader. Lachs tries to come up with an ethical system that he can follow himself as an example of a committed intellectual, and then ends up creating an ethical system with no obligations other than to do one’s job (which, in his case, consists of writing books and advising graduate students). Lachs’s primary example of committed moral action, to which he returns again and again, is signing a petition to remove the president of his university (and he notes that most of his colleagues refused to do even this!).

I am being unduly harsh on Lachs. Really, he is one of the very best examples of what academics can and should do to engage with the world around them. And yet his example demonstrates, to me, the enormous gap that separates academia from the rest of society. Lachs dwells again and again on the pointless abstractions of professional philosophers and the wisdom of everyday people, and then the next moment he launches into an analysis of the concept of the individual in the metaphysics of Josiah Royce—a figure of little interest even to most professional philosophers, much less the general public—and all this in the context of a book that emphasizes self-consistency over and over again.

This makes me sad, because I think we really do need more intellectuals in the public sphere, intellectuals who are capable of communicating clearly and elegantly to non-specialists about problems of wide interest. And yet our age seems to be conspicuously bereft of anyone resembling a public intellectual. Yes, we have popularizers, but that’s a different thing entirely.

Seeking an explanation for this absence, I usually return to the model of specialization in the university.

To get a doctorate, you need to write a dissertation on something, usually a topic of excessive, often ludicrous specificity—the upper-arm tattoos of Taiwanese sailors, the arrangement of furniture inside French colonial homes in North Africa in the 1890s, and so on. This model originated in German research universities, I believe; and indeed it makes perfect sense for many disciplines, particularly the natural sciences. But I do not think this model is at all suited to the humanities, where seeing human things in a wide context is so important. This is not to deny that specialized research can make valuable contributions in the humanities—indeed, I think it is necessary, especially in fields like history—but I do not think it should be the only, or even the dominant, pattern for academics in the humanities.

If I can put forward my own very modest proposal in this review, it would be the creation of another class of academic—let’s call them “scholars”—who would focus, not on specialized research, but on general coverage in several related fields (I’m thinking specifically of philosophy, literature, and history, but this is just one possibility). These scholars would be mainly responsible for teaching courses, not publishing research; and this would give them an incentive to communicate to undergraduates, and by extension the general public, rather than to disappear into arcane regions of the inky night.

These scholars could also be responsible for writing reviews and critiques of research. Their more general knowledge might make them more capable of seeing connections between fields; and by acting as gatekeepers to publication (in the role of reviewers), they could serve as a check on the groupthink, and the lack of accountability, that can prevail within a discipline whose research is sometimes so obscure that nobody outside the community can adequately judge it (thus providing a shield for shoddy work).

I’m sure my own proposal is impractical, has already been tried, is already widespread, or is just plain bad. (Even if you agree with it, the Lotz Theory of Agreement will apply.) But whatever the solution, I think it is a palpable and growing problem that so much intellectual work—especially in the humanities, where there is far less excuse for unintelligibility and sterile specialization—is totally disconnected from the wider society, and is unreadable and uninteresting to most people, even well-educated people. We simply cannot have a functioning society where intellectuals only talk to each other in their own special language. Lachs, to his credit, is doing his best to break this pattern. But this book, to me, is evidence that the problem is far too serious for well-intentioned individuals to solve on their own.


Quotes & Commentary #42: Montaigne

Everything has a hundred parts and a hundred faces: I take one of them and sometimes just touch it with the tip of my tongue or my fingertips, and sometimes I pinch it to the bone. I jab into it, not as wide but as deep as I can; and I often prefer to catch it from some unusual angle.

—Michel de Montaigne

The pursuit of knowledge has this paradoxical quality: it demands perfection and yet continuously, inevitably, and endlessly fails in its goal.

Knowledge demands perfection because it is meant to be true, and truth is either perfect or nonexistent—or so we like to assume.

Normally, we think about truth like this: I make a statement, like “the cat is on the mat,” and this statement corresponds to something in reality—a real cat on a real mat. This correspondence must be perfect to be valid: whether the cat is standing beside the mat or is stuck up a tree, the statement is equally false.

To formulate true statements—about the cosmos, about life, about humanity—this is the goal of scholarship. But can scholarship end? Can we get to a point at which we know everything and we can stop performing experiments and doing research? Can we reach the whole truth?

This would require scholars to create works that were both definitive—unquestioned in their authority—and exhaustive—covering the entire subject. What would this entail? Imagine a scholar writing about the Italian Renaissance, for example, who wants to write the perfect work, the book that totally and completely encapsulates its subject, rendering all additional work unnecessary.

This seems as if it should be theoretically possible, at least. The Italian Renaissance was a sequence of events—individuals born, paintings painted, sculptures sculpted, trips to the toilet, accidental deaths, broken hearts, drunken brawls, late-night conversations, outbreaks of the plague, political turmoil, marriage squabbles, and everything else, great and small, that occurred within a specific period of time and space. If our theoretical historian could write down each of these events, tracing their causation, allotting each its proportional space, neutrally treating each fact, then perhaps the subject could be definitively exhausted.

There are many obvious problems with this, of course. For one, we don’t have all the facts available, but only a highly selective, imperfect, and tentative record, a mere sliver of a fraction of the necessary evidence. Another is that, even if we did have all the facts, a work of this kind would be enormously long—in fact, as long as the Italian Renaissance itself. This alone makes the undertaking impossible. But this is also not what scholars are after.

A book that represented each fact neutrally, in chronological sequence, would not be an explanation, but a chronicle; it would recapitulate reality rather than probe beneath the surface—or rather, it would render all probing superfluous by representing the subject perfectly. It would be a mirror of reality rather than a search for its fundamental form.

And yet our brains are not, and can never be, impartial mirrors of reality. We sift, sort, prod, search for regularities, test our assumptions, and in a thousand ways separate the important from the unimportant. Our attention is selective of necessity, not only because we have a limited mental capacity, but because some facts are much more necessary than others for our survival.

We have evolved, not as impartial observers of the world, but as actors in a contest of life. It makes no difference, evolutionarily speaking, if our senses represent “accurately” what is out there in the world; it is only important that they alert us to threats and allow us to locate food. There is reason to believe, therefore, that our senses cannot be literally trusted, since they are adapted to survival, not truth.

Survival is, of course, not the operative motivation in scholarship. More generally, some facts are more interesting than others. Some things are interesting simply in themselves—charms that strike the sight, or merits that win the soul—while others are interesting in that they seem to hold within themselves the reason for many other events.

A history of the Italian Renaissance that gave equal space to a blacksmith as to Pope Julius II, or equal space to a parish church as to the Sistine Chapel, would be unsatisfactory, not because it was inaccurate, but because its priorities would be in disarray. All intellectual work requires judgment. A historian’s accuracy might be unimpeachable, and yet his judgment so faulty as to render his work worthless.

We have just introduced two vague concepts into our search for knowledge: interest and judgment—interest being the “inherent” value of a fact, and judgment our faculty for discerning interest. Both are clearly subjective. So, rather than impartially representing reality, our thinkers experience it through a distorted lens—the lens of our senses, further shaped by culture and upbringing—and from this blurry image of the world they select whatever portion of that distorted reality they deem important.

Their opinion of what is beautiful, what is meritorious, what is crucial and what is peripheral, will be based on criteria—either explicit or implicit—that are not reducible to the content itself. In other words, our thinkers will be importing value judgments into their investigation, judgments that will act as sieves, catching some material and letting the rest slip by.

Even more perilous, perhaps, than the selection of facts, will be the forging of generalizations. Since, with our little brains, we simply cannot represent reality in all its complexity, we resort to general statements. These are statements about the way things normally happen, or the characteristics that things of the same type normally have—statements that attempt to summarize a vast number of particulars within one abstract tendency.

All generalizations employ inductive reasoning, and thus are vulnerable to Hume’s critique of induction. A thousand instances of red apples are no proof that the next apple will also be red. And even if we accept that generalizations are always more or less true—true as a rule, with some inevitable exceptions—this leaves undefined how well the generalization fits the particulars. Is it true nine times out of ten, or only seven? How many apples out of a hundred are red? Finally, making a generalization requires selecting one quality—say, the color of apples rather than their size or shape—from among the many that the particulars possess, and that selection is consequently always somewhat arbitrary.

More hazardous still is the act of interpretation. By interpretation, I mean deciding what something means. Now, in some intellectual endeavors, such as the hard sciences, interpretation is not strictly necessary; only falsifiable knowledge counts. Thus, in quantum mechanics, it is unimportant whether we interpret the equations according to the Copenhagen interpretation or the Many-Worlds interpretation—whether the wave-function collapses, or reality splits apart—since in any case the equations predict the right result. In other words, we aren’t required to scratch our heads and ask what the equations “mean” if they spit out the right number; this is one of the strengths of science.

But in other fields, like history, interpretation is unavoidable. The historian is dealing with human language, not to mention the vagaries of the human heart. This alone makes any sort of “objective” knowledge impossible in this realm. Interpretation deals with meaning; meaning only exists in experience; experience is always personal; and the personal is, by definition, subjective. Two scholars may differ as to the meaning of, say, a passage in a diplomat’s diary, and neither could prove the other was incorrect, although one might be able to show her interpretation was far more likely than her counterpart’s.

Let me stop and review the many pitfalls on our road to perfect knowledge of the Italian Renaissance. First, we begin with an imperfect record of information; then we must make selections from this imperfect record. This selection will be based on vague judgments of importance and interest—what things are worth knowing, which facts explain other facts. We will also try to make generalizations about these facts—generalizations that are always tentative, arbitrary, and hazardous, and which are accurate to an undetermined extent. After all this, we must interpret: What does this mean? Why did this happen? What is the crucial factor, and what is mere surface detail? And remember that, before we even start, we are depending on a severely limited perception of the world, and a perspective warped by innumerable prejudices. Is it any wonder that scholarship goes on indefinitely?

At this point, I am feeling a bit like Montaigne, chasing my thoughts left and right, trying to weave disparate threads into a coherent whole, and wondering how I began this already overlong essay. Well, that’s not so bad, I suppose, since Montaigne is the reason I am writing here in the first place.

Montaigne was a skeptic; he did not believe in the possibility of objective knowledge. For him, the human mind was too shifting, the human understanding too weak, the human lifespan too short, to have any hope of reaching a final truth. Our reasoning is always embodied, he observed, and is thus subject to our appetites, excitements, passions, and fits of lassitude—to all of the fancies, hobbyhorses, prejudices, and vanities of the human personality.

You might think, from the foregoing analysis, that I take a similar view. But I am not quite so cheerfully resigned to the impossibility of knowledge. True, we cannot attain the absolute truth (and even if we could, we couldn’t be sure when or whether we had). Through science, however, we have developed a self-correcting methodology that allows us to approach ever nearer to the truth, as evidenced by our increasing ability to manipulate the world around us through technology. To be sure, I am no worshiper of science, and I think science is fallible and limited to a certain domain. But total skepticism regarding science would, I think, be foolish and wrong-headed: science does what it’s supposed to do.

What about domains where the scientific method cannot be applied, like history? Well, here more skepticism is certainly warranted. Since so much interpretation is needed, and since the record is so imperfect, conclusions are always tenuous. Nevertheless, this is no excuse to be totally skeptical, or to regard all conclusions as equally valid. The historian must still make logically consistent arguments, and back up claims with evidence; their theories must still plausibly explain the available evidence, and their generalizations must fit the facts available. In other words, even if a historian’s thesis cannot be falsified, it must still conform to certain intellectual standards.

Unlike in science, however, interpretation does matter, and it matters a great deal. And since interpretation is always subjective, it is possible for two historians to propose substantially different explanations for the same evidence, and for both of their theories to be equally plausible. Indeed, in an interpretive field like history, there will be as many valid perspectives as there are practitioners.

This brings us back to Montaigne. Montaigne used his skepticism—his belief in the subjectivity of knowledge, in the embodied nature of knowing—to justify his sort of dilettantism. Since nobody really knows what they’re talking about, why can’t Montaigne take a shot? This kind of perspective, so charming in Montaigne, can be dangerous, I think, if it leads one to abandon intellectual standards like evidence and argument, or if it leads to an undiscerning distrust of all conclusions.

Universal skepticism can potentially turn into a blank check for fundamentalism, since in the absence of definite knowledge you can believe whatever you want. Granted, this would never have happened to Montaigne, since he was wise enough to be skeptical of himself above all; but I think it can easily befall the less wise among us.

Nevertheless, if proper respect is paid to intellectual standards, and if skepticism is always turned against oneself as well as one’s peers, then I think dilettantism, in Montaigne’s formulation, is not only acceptable but admirable:

I might even have ventured to make a fundamental study if I did not know myself better. Scattering broadcast a word here, a word there, examples ripped from their contexts, unusual ones, with no plan and no promises, I am under no obligation to make a good job of it nor even to stick to the subject myself without varying it should it so please me; I can surrender to doubt and uncertainty and to my master-form, which is ignorance.

Nowadays it is impossible to be an expert in everything. To be well-educated requires that we be dilettantes, amateurs, whether we want to or not. This is not to be wholly regretted, for I think the earnest dilettante has a lot to contribute in the pursuit of knowledge.

Serious amateurs (to use an oxymoron) serve as intermediaries between the professionals of knowledge and the less interested lay public. They also serve as a kind of check on professional dogmatism. Because they have one tiptoe in the subject, and the rest of their body out of it, they are less likely to get swept away by a faddish idea or to conform to academic fashion. In other words, they are less vulnerable to groupthink, since they do not form a group.

I think serious amateurs might also make a positive contribution, at least in some subjects that require interpretation. Although the amateur likely has less access to information and lacks the resources to carry out original investigation, each amateur has a perspective, a perspective which may be highly original; or she may notice something previously unnoticed, which puts old material in a new light.

Although respect must be paid to expertise, and academic standards cannot be lightly ignored, it is also true that professionals do not have a monopoly on the truth—and for all the reasons we saw above, absolute truth is unattainable, anyway—so there will always be room for fresh perspectives and highly original thoughts.

Montaigne is the perfect example: a sloppy thinker, a disorganized writer, a total amateur, who was nonetheless the most important philosopher and man of letters of his time.