My friend Hui Liang interviewed me for her blog about my experience living and traveling in Europe. She has a great travel blog herself, which you might want to check out.
Here’s the link!
I was fortunate enough to be featured in the latest episode of the podcast Thought Stack. Jon Stenstrom, its host, interviewed me about reading—how to read, what to read, why to read, when to read—and also asked me for a few writing tips. Here’s your chance to hear my sonorous voice!
Click here to listen.
There is really no way for a man to put his arms around a big house plant and still remain a gentleman.
E.B. White’s name, along with Will Strunk’s, is now synonymous with good style. If that isn’t a compliment to a writer, I don’t know what is.
My first encounter with the duo came in my junior-year high school English class. My teacher was old-fashioned enough to believe that we should learn how to use punctuation. This came as a shock, since none of her predecessors had spared so much as a moment on a semicolon. It was with bewilderment and wonder, then, that I opened up The Elements of Style and encountered this sentence: “The colon has more effect than the comma, less power to separate than the semicolon, and more formality than the dash.” How often is so much instruction packed into so few words?
In college I picked up the habit of rereading Strunk and White at least once a year. Probably I should do so more often, since verbal profligacy—Strunk’s sworn enemy, the capital sin of writing—is something that I can’t seem to shake, no matter how often I try. One of the reasons I picked up this book was the hope that, by observing White at work, I might learn from his example what his precepts had failed to teach me.
With White, the style is the man; and any discussion of his works inevitably becomes an analysis of his prose. To begin with, White is not what I’d call a vocal writer. A vocal writer is one whose writing seems to come alive and speak, whose writing cannot be read in your own voice, only in the author’s own accent. White’s writing, while personable, charming, and full of feeling, does not leap from the page into your living room. It is writerly writing.
His style is conversational, not aphoristic. His sentences are not pointed, his wit is not barbed, his lines are not militantly memorable. His writing is loose; it breathes like a cotton shirt; it is drafty like an old wooden cabin. You might say that his essays are a controlled ramble, a balancing act that looks like a casual stroll. They take their time. Like a scatterbrained errand boy, they pause in a thousand places for momentary rendezvous and covert dalliances before reaching their destinations.
White seldom speaks in abstractions, and hardly ever makes an argument. His writing is held together not by the logic of ideas but by the tissue of memory. This is partly why the style is inseparable from the content. There is no thesis to take away. He is not trying to make a point, but to communicate his perspective, to encapsulate a piece of his personality.
White’s personality is delightful. Modest and gently humorous, he is animated by a curiosity about the little things that make up his world. He can study a train schedule with avidity, he can spend hours gazing at a spider’s web, he can write poetry on the life-cycle of a pig. This is what makes him such a consummate essayist. In the humdrum facts and quotidian occurrences of life he hears music and meaning, and spiderlike weaves his own web to stitch them into a delicate structure:
As I sat at table, gnawing away at a piece of pie, snow began falling. At first it was an almost imperceptible spitting from the gray sky, but it soon thickened and came driving in from the northeast. I watched it catch along the edge of the drive, powder the stone wall, and whiten the surface of the dark frozen pond, and I knew that all along the coast from Kittery on, the worst mistakes of men were being quietly erased, the lines of their industrial temples softened, and U.S. 1 crowned with a cold, inexpensive glory.
There is not much to be said against these essays, except what can be said against all stylists. Since what White says is less important than the way he says it, upon finishing the reader is left with nothing but echoes and aftertastes. Yet it is a delicious aftertaste, tart and tangy with a touch of smoke, and it whets my appetite for more.
Revenge is a kind of wild justice; which the more man’s nature runs to, the more ought law to weed it out.
The thirst for revenge is one of our ugliest, most satisfying, and most basic tendencies. It isn’t hard to see why.
The urge to avenge ourselves is a straightforward consequence of the urge to preserve ourselves. If somebody has hurt us in some way—by stealing a mate, by physical violence, or merely by a rude remark—then they have clearly shown themselves to be a threat, a dangerous person who can’t be trusted. The logical thing to do then becomes to neutralize this threat, to diminish or destroy his capacity to further hinder us.
This counter-attack will serve two purposes: first, it will harm the enemy, reducing his capacity to harm you in the future; second, by publicly revenging yourself on an enemy, it will signal to others that you are not one to be trifled with, and that you will retaliate if anybody tries something funny. The practical benefits of revenge are thus preventative.
It is paradoxical, therefore, that revenge is not often thought of as oriented towards future security, but instead toward bygone injuries. The purpose of revenge, we feel in our bones, is to right the wrongs of the universe, to put the cosmic scale of justice back to zero, balancing a good action for a bad one.
When revenge is conceived this way, as retaliatory and not as preventative, then it can lead to absurdly unproductive actions, notable only for the resources they waste. In this connection, I can’t help thinking of Iñigo Montoya from The Princess Bride, whose obsessive quest to kill the murderer of his father consumed decades of his life.
Ask anyone to tell you about their ex, and there’s a good chance you will be met with the same vengeful fixation. The revenge-intoxicated man is something of a narrative cliché, repeated ten thousand times in novels and television and movies. I would guess that revenge is second only to romantic love as the emotional engine of drama.
The folly of orienting your life around getting back at an enemy is clear to anyone with a healthy sense of perspective. The best form of revenge, after all, is being happy, and all-consuming quests for personal justice are not conducive to happiness.
Even as a preventative measure—to incapacitate an enemy and prevent others from springing up—revenge often backfires. This is for two reasons.
First, if you attempt to render an enemy incapable of harming you in the future, there is always a risk you will fall short of full incapacitation. This is dangerous because, if you don’t succeed in fully disabling your enemy—whether psychologically, politically, logistically, socially, or physically—then there is a good chance that you will only embitter him, and he will counter-attack once he recovers his strength.
The second risk, related to the first, is the question of third-parties. If you succeed in fully disabling your enemy, there is still the possibility that he may have powerful friends. The friend of every enemy is another potential enemy, and can be mobilized against you. After successful revenge, you may yourself be the victim of a vengeful act by one of the enemy’s allies. If this revenge against you is successful, then one of your friends might retaliate against this new foe.
This logic of attack and counterattack is how feuds start. Every act of vengeance can breed another, until half the world is embroiled in a bitter, pointless war against the other half. The most emblematic of these vindictive conflicts was the feud between the Hatfields and the McCoys, but you see this sort of thing in every section and at every scale of human life.
Revenge, as you can see, is a strategy of limited utility. It would, however, be untrue to say that revenge is always futile. In a situation similar to Hobbes’s “State of Nature,” vengeful acts are hardly avoidable. If there is no structure in place to resolve disputes, no laws and thus no method to punish law-breakers, then each party must enforce their own version of right and wrong.
Remember that, for each individual, taken separately, right and wrong are products of self-interest. In other words, in the absence of law, “right” is simply what helps you, “wrong” what hurts you; and without any legal system, you must enforce your own version of right and wrong, since no one else will.
In order to survive in an anarchic world, you must retaliate against those who interfere with your self-interest. If not, it will send the message to those around you that you are a pushover, and that they can take advantage of you without any risk; and you can only expect more enemies to interfere with you in the future. (I teach adolescents, so I know something about an anarchic world.) Some retaliation is therefore necessary. But care must be taken not to take vengeance too readily or too forcefully, or you may be the victim of revenge yourself.
Humans were born into anarchy, and we still have the instincts that helped us get through it. This is why revenge comes so naturally to us, and why it tastes so sweet. But this emotional armory does not help us when we live in a society governed by law.
Law is a substitute for revenge, with all of its advantages and none of its defects. With recourse to legal prosecution—organized retaliation, approved by the community—we can neutralize threats and protect ourselves from future harm, with only a minimal chance that our enemies’ friends will try to get back at us. Law replaces private desire with public safety; and because the will of the community sanctions the law’s consequences, the law is joined with overwhelming force, to protect its adherents and attack its antagonists.
Living, as we all do, in states governed by law, the emotional urge to take revenge becomes a hindrance rather than an asset. If you are wronged, you can seek legal retribution. But if that is not available, then it is usually unwise to take matters into your own hands, since this makes it possible that legal retribution can be used against you.
True, there are many things that fall outside the confines of the law, the most notable of these being romance. And as expected, vindictiveness is alive and well in matters of the heart. You still find people revenging themselves on their exes and their rivals, waxing indignant at perceived wrongs and organizing their friends in concerted actions of revenge. Having no social structure to resolve disputes, people fall into anarchy.
Yet I would argue that, even in these cases, revenge is a poor strategy. The revenge mentality is only justified, I think, in anarchic situations, specifically when the consequences for not retaliating are potentially severe. But in the case of romance, there is no chance that you will be seriously damaged. Heartbreak hurts, but it is seldom fatal.
In cases like these—where you can be sure of surviving any enemy attack—then I think another strategy is called for: returning love for hate. This sounds Biblical, but its justification is logical.
Keep in mind that I am talking of situations like romance, in which harm cannot incapacitate either you or your enemies. (By “incapacitate” I mean render them unable to do future harm.) Since harming your enemies cannot disable them, it can only embitter them and potentially create new enemies; and since you cannot be disabled by being harmed, you have nothing to fear by not retaliating.
Returning harm for harm is thus clearly a poor long-term strategy, even if it might be satisfying in the short-term. You are left with two options: do nothing, or return help for harm. The first option seems superficially like the more logical one. By doing nothing, you don’t risk creating new enemies, and you don’t use resources to benefit your enemy that could be used elsewhere.
The second strategy, returning help for harm, is quite obviously more expensive, not to mention less satisfying. (Who likes to see their enemies happy?) Yet I think it is wiser as a long-term strategy, since it is by returning help for harm that enemies are converted into friends. A friend, after all, is somebody who acts in our interest; and it would be a stubborn enemy indeed who could persist in hating somebody who showed them only love and kindness.
Revenge, born of anarchy, is both a social and a personal ill. It is rendered obsolete as soon as people begin living in a society governed by law. It is a waste of resources and a poor survival strategy, and has no place in a just legal system or in the conduct of a wise individual.
This at least of flamelike our life has, that it is but the concurrence, renewed from moment to moment, of forces parting sooner or later on their ways.
So ends 2016, already a proverbially bad year. Both in the world at large and in my private life, this year has been one of disappointment and disruption. Things previously taken for granted have crumbled and collapsed; the inconceivable has happened, the impossible is already normal. History, instead of ending, has been frustratingly persistent.
Yet this year has easily been one of the best of my life. And this, not in spite of the disappointment and disruption, but because of it. Now I feel immunized against life’s bitter flavor, or at least toughened against it, since I have come to terms with impermanence. By this I do not mean that I have become embittered and fatalistic; rather, I have learned to enjoy myself more, to drink life’s pleasures to the dregs, to take the cash and let the credit go. Endings will do that; and what has this year been but a series of endings?
The basic theme of this year’s reading has been practice. I have endeavored, as far as I could, to read things that applied directly to my day-to-day life. This endeavor has taken many forms. One has been to read about Spain, her history, her people, and her culture, and this has been one of my most intellectually rewarding projects. Another was a flirtation with spiritual practices, during which I sampled Christian prayer and Hindu meditation, and became a daily meditator. This emphasis on practice even influenced my reading of fiction, leading me to focus on the moral lessons that could be learned from novels.
The mirror-image of this focus on finding the practical in my reading was finding the stories in my actions. This took the form of travel writing. I traveled like mad this year, dragging myself through dozens of cities, climbing walls, ransacking castles, profaning cathedrals with my presence, sampling strange dishes, trying to find a wink of sleep on buses, trains, and planes, and walking, walking, always walking, through fields and meadows, down dark alleys and cobblestone streets, and after each trip I tried to write something about what I did. I am not especially proud of this writing. But the very act of writing was a form of meditation, when I put my memories into order and reflected on what I saw. And just as in book reviewing, this retrospective travel writing allowed me to appreciate my travels more keenly. Indeed, I think travel writing is much like book reviewing, each city a different volume in the world’s library.
The biggest event in my reading and writing life, however, has been learning Spanish. Although very far from fluent, and still bumbling and confused much of the time, I have managed to learn enough Spanish to read at a high level. True, this reading is painful, slow, and difficult, but every day it gets easier, and some of the best books I’ve read this year have been in my new language.
There has been another result of living in Spain. Because of the abundance of beautiful monuments and museums, and perhaps the clearer sunlight and unclouded skies of Madrid, I have belatedly developed an appreciation of visual art. Before this year, I derived very little pleasure from paintings, sculptures, and architecture; but this year I have been moved and shaken to the core by what I have seen.
In that spirit, I will leave you with an image with which I began 2016. Last January I visited Granada to see the Alhambra. On a sunny but chilly day, I stood in the gardens of the Generalife and looked out across the hill at the Moorish palace. The Alhambra is the flower of an entire civilization, the product of a people who built up their knowledge year by year, slowly accumulating sophistication and resources until, in their hour of decadence, they could leave that enchanted place as a monument. Those people are now gone, their civilization vanished; and one day, hopefully far in the future, the Alhambra will crumble too.
I thought about this as I looked at the decaying walls, crowded on the hillside, slowly succumbing to the tooth of time, and felt melancholy in the winter breeze. How tragic, I thought, that nothing lasts. But now I don’t think of this as tragic. I think it is the very principle of beauty.
Thanks to all of you for being a part of this terrible and wonderful year. I look forward to the next one.
In fact, I believe the best definition of man is the ungrateful biped.
As part of my job as a professional American (being an English teacher in Spain is little more than being a professional American), I had to give a presentation on Thanksgiving for my class.
Thanksgiving is really the quintessential American holiday. We watch American football—our defining sport, which involves taking land by force. We watch the Macy’s Parade—which consists of giant cartoons floating above our heads, a combination of our love of pop culture and excessive size. And finally we eat, and eat and eat. And then, the next day, we shop. No series of activities could more perfectly encapsulate the American identity.
The most conspicuous absence from this list of activities is being thankful. Theoretically, at least, we all know we’re supposed to be thankful; but we have no specific ritual of thanksgiving. In my classes, I tried to get my students to say one thing they’re thankful for. Some were very forthcoming, but most were extremely shy.
Why are people shy about thanking others? Being thankful is difficult because it requires vulnerability. To thank someone sincerely is to acknowledge a debt—not just a material but an emotional debt—a debt that perhaps cannot be repaid. To seriously communicate this gratitude requires that you let down your guard, something easier said than done.
For whatever reason, most of us go through life pretending that we are self-sufficient. We don’t like to think we owe anything to anybody. Instead, like Satan in Paradise Lost, we like to pretend that we are self-generated, self-sufficient, self-caused:
“I disdained subjection, and thought one step higher / Would set me highest, and in a moment quit / The debt immense of endless gratitude, / So burdensome still paying, still to owe; / Forgetful what from him I still received.”
Ingratitude is Satan’s principal sin. He does not want to live a life of gratitude, constantly and eternally singing a hosanna to God. He doesn’t want to acknowledge that he owes anything to anybody, not even the creator of the universe.
It must be admitted that owing a debt can be humiliating and crushing. Here I am reminded of the potlatch, a ritualized form of combat practiced by the natives of the Northwest Coast of Canada. During a potlatch, the headmen from competing groups would do symbolic battle by giving each other ostentatious gifts. The loser would be the man who received more than he could reciprocate.
This sounds bizarre, but consider: have you ever received a gift from somebody you’re not very fond of? I have, and I know that receiving gifts can engender bitterness as well as gratefulness. Being the recipient of a gift puts you under the giver’s power; and few people are grateful to be under somebody else’s power.
But is this necessarily true? Is gift giving necessarily aggressive? John Milton’s Satan goes on to say “And [I] understood not that a grateful mind / By owing owes not, but still pays, at once / Indebted and discharged.”
Here Satan realizes what many people forget. Being thankful is not a sign of weakness—although it often appears so to the egotistical mind—but a sign of strength. It is a sign of strength because it requires sincerity, and being sincere always involves being vulnerable, letting your guard down. Being grateful means dispensing with the illusion that you’re self-caused and self-sufficient, and revealing your weaknesses to the world. Nothing requires more strength than showing weakness.
So I’d like to take this opportunity to personally thank the universe and everyone in it. I’m luckier than I could ever put into words.
Let us then suppose the mind to be, as we say, white paper, void of all characters, without any ideas; how comes it to be furnished? Whence comes it by that vast store, which the busy and boundless fancy of man has painted on it, with an almost endless variety? Whence has it all the materials of reason and knowledge? To this I answer, in one word, from experience: in that, all our knowledge is founded; and from that it ultimately derives itself.
This passage is one of the most famous formulations of the tabula rasa account of the human mind. Tabula rasa is Latin for “blank slate,” which is the traditional metaphor used to explain the theory. At birth, the mind is like a blank chalkboard, devoid of writing; our experience is the hand that writes upon us; and our knowledge is the end result.
John Locke held that there was nothing in the mind that did not originate in the senses. Yes, we could have abstract ideas, like our notion of a triangle; but these ideas were simply generalizations from individual triangles that we have experienced through our eyes. Thus all knowledge, however general, abstract, or theoretical, was just a summary of our experience.
In Locke’s own lifetime, this idea was contested by Leibniz, who wrote an entire book-length response to Locke’s Essay, arguing that the mind needed certain innate principles in order to acquire knowledge. And this puzzle—the respective roles of experience, sense data, induction, deduction, and abstraction in our knowledge of the world—forms the basis of Kant’s magnificent Critique of Pure Reason.
Locke was a philosopher, and thus his Essay is largely concerned with epistemology—the nature, limit, and acquisition of knowledge. Yet this debate—empiricism versus rationalism, the “blank slate” versus “innate ideas”—is often reframed, in today’s world, as a scientific controversy.
The most famous example of this that I’m aware of is the controversy in linguistics. How much structure must we posit in the human brain in order to account for language acquisition? The classic answer to this question was given by Chomsky. He argued that, contrary to Locke, we can’t imagine the brain at birth as a blank slate, but must assume an enormous amount of complex machinery.
Several arguments led him to this conclusion, the most famous of which was the “poverty of input.” This is the observation that children are not exposed to nearly enough examples of language to derive the correct grammatical forms, unless some basic assumptions guide their inferences. For the human infant trying to guess the meaning of an unknown sentence, there are an enormous number of logical possibilities. If the language learner had to eliminate each one of these possibilities one by one, then it would take far too much time. Thus some in-built, innate schema must allow them to guess intelligently.
Not only that, but the learner attempting to divine the deep structure from the surface structure must contend with the fact that the surface structure of a language is often misleading. Consider these two sentences: (A) “I expected the doctor to examine John,” and (B) “I persuaded the doctor to examine John.” Now let’s say we transform the first sentence into the passive voice: “I expected John to be examined by the doctor.” Notice that the meaning of this sentence is identical to that of the earlier sentence.
Suppose the learner, reasoning by analogy, transformed sentence (B) the same way, resulting in “I persuaded John to be examined by the doctor.” Now notice that the meaning of this new sentence is different from that of the original. In the active voice the doctor is being persuaded; in the passive voice, John is. And this, despite undergoing what, superficially at least, appears to be the same transformation as sentence (A). Clearly, there is more to the grammar than meets the eye.
From all this, Chomsky concludes that there must be a “Universal Grammar,” which is a schema in the brain that determines which types of grammatical rules are permissible. Put more simply, Universal Grammar is something that allows learners to guess intelligently, rather than randomly, about the structure of language. Clearly such a schema would be a lot of information to be born with. In this, Chomsky resembles Leibniz and Kant far more than Locke and Hume.
But you don’t really need any of Chomsky’s arguments to realize that there must be some innate organization in our brains that allows us to learn language. After all, almost every person learns a language, while dogs and cats, who also have brains, and who are exposed to about as much language, never pull it off. Computers are better at many cognitive tasks than humans; and yet a few minutes with Google Translate is enough to convince anyone that computers haven’t quite gotten the hang of language. Clearly there is something special about the human brain that allows us to acquire language, while cats and computers struggle.
Thus we are left with several interesting questions. First, how much information and organization does the human brain possess at birth? How much of this information consists of general learning strategies, and how much is specific to language acquisition? And what exactly does this information consist of? Chomsky’s model of Universal Grammar, for example, was an attempt to answer this last question, by proposing a set of conditions that all languages must abide by. But his model has of late been criticized, first, for positing too much organization, and second, for failing to account for the structure of certain rare languages.
I am not a linguist, and thus I cannot hope to solve this controversy, or even make an interesting contribution to it. I only want to point out that this debate, although new in form, harks all the way back to Plato and Aristotle. Plato thought all knowledge was buried in the mind, and all philosophers had to do was uncover it; and Aristotle, like Locke, thought that knowledge derived from the senses. It is obvious to everyone by now that either extreme must be wrong. But apparently 2,500 years hasn’t been enough time for us to come to a conclusion.
The first thing we see as we travel around the world is our own filth, thrown into the face of mankind.
Lévi-Strauss made this exclamation while he was describing the increasingly pervasive influence of Western culture on the rest of the world. It is worth quoting the preceding sentences:
Our great Western civilization, which has created the marvels we now enjoy, has only succeeded in producing them at the cost of corresponding ills. The order and harmony of the Western world, its most famous achievement, and a laboratory in which structures of a complexity as yet unknown are being fashioned, demand the elimination of a prodigious mass of noxious by-products which now contaminate the globe.
Lévi-Strauss wrote this in 1955, and it has only gotten more true. Of increasing concern is the damage we have done to the natural world. We have polluted the air, changed the climate, and succeeded in imperiling our own survival with our machines. We have hunted species to extinction, we have introduced other invasive species to wreak havoc, and we have disrupted whole ecosystems. Truly, it is impossible to exaggerate the extent to which we have altered the globe—all too often causing problems for other species.
When is the last time you went somewhere truly natural? Have you ever? I don’t mean a park, a nature reserve, or a forest. I mean places where you can’t see any signs of human tampering. The closest I have ever come to this has been in Canada, when I have paddled a rowboat to the shore of a lake and walked into the pine forest. But even there, without any humans around for miles, I could still hear jet skis and speed boats humming in the distance. And even if I couldn’t hear or see any signs of human activity, the very forest has already been altered by human activity. Both moose and bear are hunted in those parts.
Ironically enough, this environmental damage—damage that now poses a grave danger to us—has been caused by our miraculous technology, the same technology that allows us to lead such comfortable lives. Our addiction to convenience will someday cause us a very great inconvenience.
But Lévi-Strauss was not primarily interested in the environment. Rather, he was thinking about culture. He was bemoaning the emergence of a global culture, primarily Western in origin: a culture that would soon swallow up all the traditional cultures that anthropologists like Lévi-Strauss were interested in studying. To quote Lévi-Strauss once more: “Mankind has opted for monoculture; it is in the process of creating a mass civilization, as beetroot is grown in the mass.”
To an enormous extent, this has already happened. I know this very well. Once, while I was studying in Turkana, a remote part of Kenya, I walked into a store. On the radio was Rihanna; on the shelves were products I recognized: Oreos, Pringles, Coca Cola.
Here in Spain, English is slowly taking over. There are English slogans in advertisements, there are hundreds and hundreds of English language academies, and more and more public schools are bilingual. And Spain is comparatively behind in this regard, partly because Spaniards already speak an international language. If you go to Portugal or Germany, for example, where American movies and shows are consumed in the original language, seemingly everyone can speak English, or at least understand it. Western culture is taking over the globe, and American culture is taking over the West.
It would be unreasonable to regard this as an unambiguously bad thing. At the very least, it has the potential to make the world more peaceful. When we become more similar; when we eat the same foods, watch the same shows, and wear the same clothes; when we speak the same language and have the same values; when, in short, we are all part of the same culture, it will be more difficult to persuade people to dress up in uniforms and kill each other. Well, I hope so at least. And besides, there’s nothing necessarily nefarious about this process. People have voted with their wallets, and voluntarily opted into this mass culture. Every time somebody watches an American show or wears Western clothes, they are reinforcing this process, regardless of their ideological beliefs.
Even so, I find something terribly sad about this growing uniformity of the world. There are no wild places anymore, and even foreign cultures are less foreign. Many people, myself included, are still afflicted with Wanderlust; but where can we wander to? Travel is cheaper than ever; for that reason, more people than ever are traveling; for that reason, traveling is no longer an escape. This is why I loved studying anthropology, and why I loved reading Lévi-Strauss, with his tales of adventure and hunter-gatherers in the rainforest. Such things promised a more substantial escape, at least in imagination.
To quote Lévi-Strauss once again, “I can understand the mad passion for travel books and their deceptiveness. They create the illusion of something which no longer exists but still should exist, if we were to have any hope of avoiding the overwhelming conclusion that the history of the last twenty thousand years is irrevocable.”
My rating: 4 of 5 stars
Nostalgia is the enemy of historical understanding.
After reading and being disappointed with Menocal’s famous book on Moorish Spain, The Ornament of the World, I decided to take another crack at the subject with this book. And I am happy to report that Fletcher’s book is much better.
While Menocal is wistful and romantic, Fletcher is more detached and occasionally wry. While Menocal hardly acknowledges her sources, Fletcher is usually careful to note where he is getting his information from, even if this book lacks a scholarly bibliography. I found this a great relief, as I have been discovering that Moorish Spain is one of the most persistently mythologized periods in history. Washington Irving set the tone for this in his Tales of the Alhambra, but other writers have been following in his romantic footsteps ever since. Thus Fletcher’s dispassionate treatment was refreshing.
The main drawbacks of this book are that it is too short and too scholarly. Fletcher was explicitly aiming for a popular audience, but the book he wrote would be better suited for an undergraduate class than a tourist. You cannot, for example, find many good vacation ideas in these pages; indeed, if this were your introduction to Moorish Spain, you might not even want to travel there at all.
Rather than focusing on intellectual and cultural history, the majority of this text deals with political and military history—the invasions, battles, territorial expansions, and so on. Admittedly, Fletcher also quotes poems and autobiographies, and includes pictures of famous buildings; he even has a whole chapter on the relations between Christians and Muslims during this time. But this information jostles for space among dozens of unfamiliar names of rulers whom I do not much care to remember. Probably, if he wanted a better-selling book, he could have both expanded it and included more of a personal touch. He is a fine writer and rather opinionated, so it would have served him well, I think, to have written something less formal.
In any case, I doubt there are any better books on the market for the history-hungry tourist visiting Andalusia. This book will give you an overview of the period, and in the process inoculate you against much of the nonsense that gets thrown around about al-Andalus. It was not a paradise of tolerance, nor was it a perpetual war of faith against faith. As Fletcher said: “The past, like the present, is for most of the time rather flavourless.”
To this day, the most interesting research project that I’ve ever done was the very first. It was on the Homeric Question.
I was a sophomore in college—a student with (unfortunate) literary ambitions who had just decided to major in anthropology. By this point, I had at least tacitly decided that I wanted to be a professor. In my future lay the vast and unexplored ocean of academia. What was the safest vessel for traveling into that forbidding wine-dark sea? Research.
I signed up for a reading project with an anthropology professor. Although I was too naïve to sense it at the time, he was a man thoroughly sick of his job. Lucky for him, he was on the cusp of retirement. So his world-weariness manifested itself as a total, guilt-free indifference to his teaching duties. Maybe that’s why I liked him so much. I envied a man who could apparently care so little about professional advancement. That’s what I wanted.
In any case, now I had to come up with a research topic. I had just switched into the major, and so had little idea what typical anthropology research projects were like. And because my advisor was so indifferent, I received no guidance from him. The onus lay entirely on me. One night, as I groped half-heartedly through Wikipedia pages, I stumbled on something fascinating, something that I hadn’t even considered before.
Who was Homer? Nobody knew. Nobody could know. The man—if man he was—was lost to the abyss of time. No trace of him existed. We can’t even pin down in which century he lived. And yet, we have these glorious poems—poems at the center of our history, the roots of the Western literary canon. Stories of the Greek gods had fascinated me since my childhood; Zeus and Athena were as familiar as Little Red Riding Hood and the Big Bad Wolf. That the person (or persons) responsible could be so totally lost to history baffled me—intrigued me.
But I was not majoring in literature or the humanities. I was in anthropology, and so had to do a proper anthropological project. At the very least, I needed an angle.
Milman Parry and Albert Lord duly provided this angle. The two men were classicists—scholars of ancient Greece. But instead of staying in their musty offices reading dusty manuscripts, they did something no classicist had done before: they attempted to answer the Homeric Question with fieldwork.
At the time (and perhaps now?) a vibrant oral tradition existed in the Serbo-Croatian-speaking Balkans. Oral poets (guslars, they’re called there) would tell massive stories at public gatherings, some stories even approaching the length of the Homeric poems. But what was most fascinating was that these stories were apparently improvised.
In our decadent culture, we have a warped idea of improvisation. Many of us believe improvisation to be the spontaneous outflowing of creative energies, manifesting themselves in something totally new. Like God shaping the Earth out of the infinite void, these imaginary improvisers shape their art from nothing whatsoever. Unfortunately, this never happens.
Whether you’re a jazz saxophonist playing on a Coltrane tune, a salesperson dealing with a new client, or an oral bard telling a tale, improvisation is done via a playful recombining of preexisting, formulaic elements. This was Parry and Lord’s great discovery. By carefully transcribing hundreds of these Serbo-Croatian poems, they discovered that—although a single poem may vary from person to person, place to place, or performance to performance—the variation took place within predictable boundaries.
The poets’ minds were full of stock phrases (“when dawn with her rose-red fingers shone once more”), common epithets (“much-enduring Odysseus”), and otherwise formulaic verses that allowed them to quickly put together their poems. Individual scenes, in turn, also followed stereotypical outlines—feasts, banquets, catalogues of forces, battles, athletic contests, etc. Of course, this is not to say that the poets were not original. Rather, it is to say that they were just as original as John Coltrane or Charlie Parker—individuals working within a tradition. These formulas and stereotypical scenes were the raw material with which the poet worked. They allowed him to compose material quickly enough to keep up the performance, and not break his rhythm.
But could poems as long as The Odyssey and The Iliad come wholly from an oral tradition? It seems improbable: they would take multiple days to recite, and the bard would have to pick up where he left off. But Parry and Lord, during their fieldwork, managed to put these fears to rest. They found a singer who could (and did) compose poems equal in length to Homer’s. (I actually read one. It’s called The Wedding of Smailagic Meho, and was recited by a poet named Avdo. It’s no Odyssey, but still entertaining.)
All this is impressive, but one question remained: how could the oral poems get on paper? Did an oral poet—Homer, presumably—learn to write, and copy them down? Not possible, says Albert Lord, in his book The Singer of Tales. According to him, once a person becomes literate, the frame of mind required to learn the art of oral poetry cannot be achieved. A literate person thinks of language in an entirely different way from a non-literate one, and so the poems couldn’t have been written by a literate poet who had learned from his oral predecessors.
According to Lord, this left only one option: Homer must have been a master oral poet, and his poems must have been transcribed by someone else. (This is how the aforementioned poem by Avdo was taken down by the researchers.) At the time, this struck me as perfectly likely—indeed, almost certain. But the more I think about it, the less I can imagine an oral poet submitting himself to sit with a scribe, laboriously writing out verse in the still-new Greek alphabet, for the dozens and dozens of hours it would have taken to transcribe these poems. It’s possible, but seems unlikely.
But according to Ruth Finnegan, Albert Lord’s insistence that literacy destroys the capacity to improvise poems is mistaken. An anthropologist, Finnegan found many cases in Africa of semi-literate or fully literate people who remained capable of improvising poetry. So it’s at least equally possible that Homer was an oral poet who learned to write, and then decided to commit the poems to paper (or whatever they were writing on back then).
I submit this long-winded overview of the Homeric Question because, despite my usual arrogance, I cannot even imagine writing a ‘review’ for this poem. I feel like that would be equivalent to ‘reviewing’ one’s own father and mother. For me, and everyone alive in the Western world today, The Odyssey is flesh of my flesh, blood of my blood. Marvelously sophisticated, fantastically exciting, it is the alpha and omega of our tradition. From Homer we sprang, and unto Homer shall we return.
[Note: I’d also like to add that this time, my third or fourth time through the poem, I decided to go through it via audiobook. Lucky for me, the Fagles translation (a nice one if you’re looking for readability) is available as an audiobook, narrated by the great Sir Ian McKellen. It was a wonderful experience, not only because Sir Ian has such a beautiful voice (he’s Gandalf, after all), but because hearing it read rather than reading it recreated, however dimly, the original experience of the poem: as a performance. I highly recommend it.]