Review: The Poetry of John Keats

Keats: Poems by John Keats

My rating: 4 of 5 stars

A thing of beauty is a joy for ever

As a dedicated book reviewer, I am obliged to say why I like certain books and dislike others. When it comes to nonfiction, this is reasonably straightforward: if the exposition is clear, if the arguments are logical, if the ideas are reasonable—then it is a worthy book. Nonfiction aims for truth, and truth can at least be tested. With literature, however, the task is somewhat more fraught. Beauty is an unfalsifiable hypothesis. We can break down a novel’s strengths and weaknesses by category—good prose, bad pacing, fine dialogue, shallow characterization—but ultimately these evaluations, however much we justify them, rest upon gut reactions.

Why does one sequence of musical notes create a pleasing melody, another a forgettable ditty, and a third a nonsensical jangle? Why do certain combinations of words strike the ear as just right, and others as discordant? Formal analysis can clarify and categorize the sorts of sounds and structures that people tend to enjoy. But it can never explain why we enjoy them in the first place, nor why different people enjoy them to different extents. If literary criticism is to be a worthwhile exercise, then, it requires that the gut reactions of audience members be at least roughly alike—that we are similarly constituted as regards beauty.

Shared education contributes to this similarity; as does, presumably, the basic resemblance of our natures. But is this bedrock of shared taste durable and permanent enough that we may say a great artist hits upon the “truth” of art—appeals to something permanent in ourselves—in the same way that a scientist may hit upon a “truth” of nature? Many have thought so. And it strikes me that something like this must be the case if we wish to call any form of art “universal”—namely, that it is a true expression of what we share.

I mention this because the relationship of beauty to truth is one of the great themes of Keats’ poetry. At the end of his “Ode on a Grecian Urn” he tells us that “Beauty is truth, truth beauty”—a line that has been endlessly analyzed. Certainly the widespread and steady popularity of his poems may argue that, indeed, Keats hit upon some basic truth of art. But what could that mean?

The issue of translation may bear on the question. It is often said that poetry is untranslatable; and the bilingual edition I read ironically proved the point. The Spanish consistently failed to evoke the sublimity of the original. Here, for example, are two famous lines from Keats’ “On First Looking into Chapman’s Homer”:

Then felt I like some watcher of the skies
When a new planet swims into his ken

And here is the Spanish translation:

Entonces me sentí como un astrónomo
cuando ve frente a sí un nuevo planeta

Translated back into English this reads something like: “Then I felt like an astronomer when he sees a new planet in front of him.” Despite preserving the literal meaning, this obviously loses all of the magic of the line. “Watcher of the skies” is infinitely more romantic than “astronomer,” and “sees in front of him” has none of the mystery of “swims into his ken.” In short, the rich beauty of the language does not survive; and the poem becomes a rather bland statement about enjoying a new edition of Homer, rather than an evocation of the grandeur of nature and art.

(I do not think this was the translator’s fault. Spanish is a very different language to versify in than English; and a literal Spanish translation can preserve the meaning only at the expense of the rhythm.)

Yet if Keats’ poetry is truly untranslatable, then how could it contain truth? After all, one could translate Newton’s work into Spanish, French, German, or Japanese, and it would contain just as much truth (or untruth) as the original. Science is not linguistically bound. Admittedly, the barrier of translation is not equally high for all forms of poetry. Homer’s works are still riveting in English; and Dante’s vision survives (at least partially) its journey from medieval Florentine. Lyric poetry seems to fare worst of all.

The obvious difference between Homer and Keats is that Homer’s appeal lies in the story, while Keats’ lies in his linguistic brilliance. And, for my part, it is easier to see how a story, rather than a beautiful string of words, can contain a semblance of “truth.” Assuming that some experiences in life are universal, that some emotional crises are recurring, that some existential state is inescapable, a great story may be able to capture something common and durable about the human condition. A beautiful poetic line, on the other hand, has a purely formal appeal—charming not in what it says, but in how it says it—and this perfection of expression, being untranslatable, must fall short of universal art.

Nevertheless, to describe Keats as merely a brilliant wordsmith would be an absurd underestimation. As his letters prove, he was thoroughly educated and keenly intelligent. His poems abound with perplexing classical references. And, in any case, words are never mere sounds; they are laden with meaning; and even the briefest of lyrical poems are pregnant with thought. Contemplation permeates Keats’ work. In his poems we find the focused musings of a highly original man as he meditates on entirely common occurrences: Autumn, Melancholy, Nature, Art—the list goes on.

Here is where Keats’ art may be said to be “universal”—and, in some sense, “true” to the human condition. For many of us have stood, amazed, before a work of art, or felt thrilled upon opening a book, or listened yearningly to a bird singing outside a window—or any number of comparable experiences. Yet only Keats and his ken have taken these fleeting twinges of emotion, reflected deeply upon them, and captured them in words so felicitous that they are impossible to forget once heard. Like the revelers on the Grecian Urn, Keats has frozen time.

It may be that this lyrical form of art, being so bound up in brilliance of expression, is less universal and less durable than works of narrative. But for those who are, by chance, linguistically equipped to enter Keats’ world, his poems contain just as much artistic “truth” as the oldest tales and the finest melodies.

Quotes & Commentary #42: Montaigne

Everything has a hundred parts and a hundred faces: I take one of them and sometimes just touch it with the tip of my tongue or my fingertips, and sometimes I pinch it to the bone. I jab into it, not as wide but as deep as I can; and I often prefer to catch it from some unusual angle.

—Michel de Montaigne

The pursuit of knowledge has this paradoxical quality: it demands perfection and yet continuously, inevitably, and endlessly fails in its goal.

Knowledge demands perfection because it is meant to be true, and truth is either perfect or nonexistent—or so we like to assume.

Normally, we think about truth like this: I make a statement, like “the cat is on the mat,” and this statement corresponds to something in reality—a real cat on a real mat. This correspondence must be perfect to be valid; whether the cat is standing beside the mat or is up a tree, the statement is equally false.

To formulate true statements—about the cosmos, about life, about humanity—this is the goal of scholarship. But can scholarship end? Can we get to a point at which we know everything and we can stop performing experiments and doing research? Can we reach the whole truth?

This would require scholars to create works that were both definitive—unquestioned in their authority—and exhaustive—covering the entire subject. What would this entail? Imagine a scholar writing about the Italian Renaissance, for example, who wants to write the perfect work, the book that totally and completely encapsulates its subject, rendering all additional work unnecessary.

This seems as if it should be theoretically possible, at least. The Italian Renaissance was a sequence of events—individuals born, paintings painted, sculptures sculpted, trips to the toilet, accidental deaths, broken hearts, drunken brawls, late-night conversations, outbreaks of the plague, political turmoil, marriage squabbles, and everything else, great and small, that occurred within a specific period of time and space. If our theoretical historian could write down each of these events, tracing their causation, allotting each its proportional space, neutrally treating each fact, then perhaps the subject could be definitively exhausted.

There are many obvious problems with this, of course. For one, we don’t have all the facts available, but only a highly selective, imperfect, and tentative record, a mere sliver of a fraction of the necessary evidence. Another is that, even if we did have all the facts, a work of this kind would be enormously long—in fact, as long as the Italian Renaissance itself. This alone makes the undertaking impossible. But this is also not what scholars are after.

A book that represented each fact neutrally, in chronological sequence, would not be an explanation, but a chronicle; it would recapitulate reality rather than probe beneath the surface; or rather, it would render all probing superfluous by representing the subject perfectly. It would be a mirror of reality rather than a search for its fundamental form.

And yet our brains are not, and can never be, impartial mirrors of reality. We sift, sort, prod, search for regularities, test our assumptions, and in a thousand ways separate the important from the unimportant. Our attention is selective of necessity, not only because we have a limited mental capacity, but because some facts are much more necessary than others for our survival.

We have evolved, not as impartial observers of the world, but as actors in a contest of life. It makes no difference, evolutionarily speaking, if our senses represent “accurately” what is out there in the world; it is only important that they alert us to threats and allow us to locate food. There is reason to believe, therefore, that our senses cannot be literally trusted, since they are adapted to survival, not truth.

Survival is, of course, not the operative motivation in scholarship. More generally, some facts are more interesting than others. Some things are interesting simply in themselves—charms that strike the sight, or merits that win the soul—while others are interesting in that they seem to hold within themselves the reason for many other events.

A history of the Italian Renaissance that gave equal space to a blacksmith as to Pope Julius II, or equal space to a parish church as to the Sistine Chapel, would be unsatisfactory, not because it was inaccurate, but because its priorities would be in disarray. All intellectual work requires judgment. A historian’s accuracy might be unimpeachable, and yet his judgment so faulty as to render his work worthless.

We have just introduced two vague concepts into our search for knowledge: interest and judgment—interest being the “inherent” value of a fact, and judgment our faculty for discerning interest. Both of these are clearly subjective concepts. So, far from impartially representing reality, our thinkers experience it through a distorted lens—the lens of the senses, further shaped by culture and upbringing—and from this blurry image of the world they select the portion they deem important.

Their opinion of what is beautiful, what is meritorious, what is crucial and what is peripheral, will be based on criteria—either explicit or implicit—that are not reducible to the content itself. In other words, our thinkers will be importing value judgments into their investigation, judgments that will act as sieves, catching some material and letting the rest slip by.

Even more perilous, perhaps, than the selection of facts, will be the forging of generalizations. Since, with our little brains, we simply cannot represent reality in all its complexity, we resort to general statements. These are statements about the way things normally happen, or the characteristics that things of the same type normally have—statements that attempt to summarize a vast number of particulars within one abstract tendency.

All generalizations employ inductive reasoning, and thus are vulnerable to Hume’s critique of induction. A thousand instances of red apples are no proof that the next apple will also be red. And even if we accept that generalizations are always more or less true—true as a rule, with some inevitable exceptions—this leaves undefined how well the generalization fits the particulars. Is it true nine times out of ten, or only seven? How many apples out of a hundred are red? Finally, making a generalization requires selecting one quality—say, the color of apples, rather than their size or shape—from among the many that the particulars possess, and is consequently always arbitrary.

More hazardous still is the act of interpretation. By interpretation, I mean deciding what something means. Now, in some intellectual endeavors, such as the hard sciences, interpretation is not strictly necessary; only falsifiable knowledge counts. Thus, in quantum mechanics, it is unimportant whether we interpret the equations according to the Copenhagen interpretation or the Many-Worlds interpretation—whether the wave-function collapses, or reality splits apart—since in any case the equations predict the right result. In other words, we aren’t required to scratch our heads and ask what the equations “mean” if they spit out the right number; this is one of the strengths of science.

But in other fields, like history, interpretation is unavoidable. The historian is dealing with human language, not to mention the vagaries of the human heart. This alone makes any sort of “objective” knowledge impossible in this realm. Interpretation deals with meaning; meaning only exists in experience; experience is always personal; and the personal is, by definition, subjective. Two scholars may differ as to the meaning of, say, a passage in a diplomat’s diary, and neither could prove the other was incorrect, although one might be able to show her interpretation was far more likely than her counterpart’s.

Let me stop and review the many pitfalls on our road to perfect knowledge of the Italian Renaissance. First, we begin with an imperfect record of information; then we must make selections from this imperfect record. This selection will be based on vague judgments of importance and interest—what things are worth knowing, which facts explain other facts. We will also try to make generalizations about these facts—generalizations that are always tentative, arbitrary, and hazardous, and which are accurate to an undetermined extent. After doing all this, we must interpret: What does this mean? Why did this happen? What is the crucial factor, and what is mere surface detail? And remember that, before we even start, we are depending on a severely limited perception of the world, and a perspective warped by innumerable prejudices. Is it any wonder that scholarship goes on indefinitely?

At this point, I am feeling a bit like Montaigne, chasing my thoughts left and right, trying to weave disparate threads into a coherent whole, and wondering how I began this already overlong essay. Well, that’s not so bad, I suppose, since Montaigne is the reason I am writing here in the first place.

Montaigne was a skeptic; he did not believe in the possibility of objective knowledge. For him, the human mind was too shifting, the human understanding too weak, the human lifespan too short, to have any hope of reaching a final truth. Our reasoning is always embodied, he observed, and is thus subject to our appetites, excitements, passions, and fits of lassitude—to all of the fancies, hobbyhorses, prejudices, and vanities of the human personality.

You might think, from the foregoing analysis, that I take a similar view. But I am not quite so cheerfully resigned to the impossibility of knowledge. Absolute truth may be beyond our reach (and even if we could reach it, we could never be sure that we had). Through science, however, we have developed a self-correcting methodology that allows us to approach ever nearer to the truth, as evidenced by our increasing ability to manipulate the world around us through technology. To be sure, I am no worshiper of science, and I think science is fallible and limited to a certain domain. But total skepticism regarding science would, I think, be foolish and wrong-headed: science does what it’s supposed to do.

What about domains where the scientific method cannot be applied, like history? Well, here more skepticism is certainly warranted. Since so much interpretation is needed, and since the record is so imperfect, conclusions are always tenuous. Nevertheless, this is no excuse to be totally skeptical, or to regard all conclusions as equally valid. The historian must still make logically consistent arguments, and back up claims with evidence; their theories must still plausibly explain the available evidence, and their generalizations must fit the facts available. In other words, even if a historian’s thesis cannot be falsified, it must still conform to certain intellectual standards.

Unlike in science, however, interpretation does matter, and it matters a great deal. And since interpretation is always subjective, two historians may propose substantially different explanations for the same evidence, and both of their theories may be equally plausible. Indeed, in an interpretive field like history, there will be as many valid perspectives as there are practitioners.

This brings us back to Montaigne. Montaigne used his skepticism—his belief in the subjectivity of knowledge, in the embodied nature of knowing—to justify his sort of dilettantism. Since nobody really knows what they’re talking about, why can’t Montaigne take a shot? This kind of perspective, so charming in Montaigne, can be dangerous, I think, if it leads one to abandon intellectual standards like evidence and argument, or if it leads to an undiscerning distrust of all conclusions.

Universal skepticism can potentially turn into a blank check for fundamentalism, since in the absence of definite knowledge you can believe whatever you want. Granted, this would never have happened to Montaigne, since he was wise enough to be skeptical of himself above all; but I think it can easily befall the less wise among us.

Nevertheless, if proper respect is paid to intellectual standards, and if skepticism is always turned against oneself as well as one’s peers, then I think dilettantism, in Montaigne’s formulation, is not only acceptable but admirable:

I might even have ventured to make a fundamental study if I did not know myself better. Scattering broadcast a word here, a word there, examples ripped from their contexts, unusual ones, with no plan and no promises, I am under no obligation to make a good job of it nor even to stick to the subject myself without varying it should it so please me; I can surrender to doubt and uncertainty and to my master-form, which is ignorance.

Nowadays it is impossible to be an expert in everything. To be well-educated requires that we be dilettantes, amateurs, whether we like it or not. This is not to be wholly regretted, for I think the earnest dilettante has a lot to contribute to the pursuit of knowledge.

Serious amateurs (to use an oxymoron) serve as intermediaries between the professionals of knowledge and the less interested lay public. They also serve as a kind of check on professional dogmatism. Because they have one tiptoe in the subject, and the rest of their body out of it, they are less likely to get swept away by a faddish idea or to conform to academic fashion. In other words, they are less vulnerable to groupthink, since they do not form a group.

I think serious amateurs might also make a positive contribution, at least in some subjects that require interpretation. Although the amateur likely has less access to information and lacks the resources to carry out original investigation, each amateur has a perspective, a perspective which may be highly original; or she may notice something previously unnoticed, which puts old material in a new light.

Although respect must be paid to expertise, and academic standards cannot be lightly ignored, it is also true that professionals do not have a monopoly on the truth—and for all the reasons we saw above, absolute truth is unattainable, anyway—so there will always be room for fresh perspectives and highly original thoughts.

Montaigne is the perfect example: a sloppy thinker, a disorganized writer, a total amateur, who was nonetheless the most important philosopher and man of letters of his time.

Quotes & Commentary #39: Emerson

Each soul is a soul or an individual by virtue of its having or I may say being a power to translate the universe into some particular language of its own.

—Ralph Waldo Emerson

What does it mean for something to be subjective? It means that it depends upon a perspective to exist.

Pleasure and pain are subjective, for example, since they cannot exist independently of an observer; they must be felt to be real. Mt. Everest, on the other hand, exists objectively—or at least we think it does—since that hunk of rock and snow would persist even if there were no humans left to climb it and plant flags on its summit.

Humans, of course, can never get out of their own perspectives and know, for good and certain, that anything exists objectively. Thus “objective” facts are really inter-subjective; that is, they can be verified by other observers.

Contrary to common belief, facts cannot be verified purely through experience, since experience is always personal and therefore private. This is why we are justified in disbelieving mystic visions and reports of miracles.

Two things must happen for raw experience to be turned into objective knowledge.

First, the experience must be communicated to another observer through language. Language is a bridge between observers, allowing them to compare, in symbolic form, the reality they perceive. Language is a highly imperfect bridge, to be sure, and much information is lost by turning our raw experience into symbols; nevertheless it is the best we have.

Second, another observer must try to have an experience that matches the experience of the first one. This verification is, again, constrained by the vagueness of language.

Somebody points and says “Look, a helicopter!” Their friend looks up into the sky and says “I see it too!” This correspondence of experience, communicated through words, is the basis for our notion of the objective world.

(There is, of course, the thorny Cartesian question: How can we know for certain that both the helicopter and our friend aren’t hallucinations? We can’t.)

Subjective and objective knowledge nevertheless share one quality: our knowledge of the external world—whether a fleeting sensation of a chilly breeze, or a scientific doctrine repeatedly checked—is always symbolic.

A symbol is an arbitrary representation. All words are symbols. The relationship between the word “tree” and actual trees is arbitrary; we could also say árbol or Baum and accomplish the same end. By saying that knowledge is symbolic, I mean that the relationship between the objective facts and our representation of those facts is arbitrary.

First, the relationship between the external stimulus and our private sensation is an arbitrary one.

Light in itself is electromagnetic radiation; it doesn’t look like anything until photosensitive eyes evolve to see it. Our visual cortex represents the photons that strike our eyes as colors. There is only a symbolic connection between the objective radiation and the internal sensation. The red I experience is only a symbol of a certain wavelength of light that my eyes pick up.

As Santayana said, the senses are poets and only portray the external world as a singer communicates his love: in metaphors. This is the basis for the common observation that there is no way of knowing whether the red I experience is the same as the red you experience. Since the connection between the objective stimulus and the subjective representation is arbitrary, and since it is only me who can observe the result, we can never know for certain how colors look to other individuals.

Second, when we communicate our experiences to others, we translate our direct experience, which is already a symbolic representation of the world, into still more general symbols. As I said above, much information is lost during this second translation. We can, for example, say that we’re seeing the color red, but we cannot say exactly what it looks like.

Modern science, not content with the vagueness of daily speech, uses a stricter language: mathematics. It also uses a stricter method of confirmation: controlled experiments through which rival hypotheses are tested. Nevertheless, while stricter, scientific knowledge isn’t any less symbolic. To the contrary, modern physics is distinguished by being abstract in the extreme.

To call knowledge symbolic is not to discredit it; it is merely to acknowledge the arbitrariness of our representations of the natural world. Nature can be understood, but first we must translate her into our language. The truth can be spoken, but always with an accent.