Review: Debt

Debt: The First 5,000 Years by David Graeber

My rating: 4 of 5 stars

For a very long time, the intellectual consensus has been that we can no longer ask Great Questions. Increasingly, it’s looking like we have no other choice.

Three years ago, I went on vacation in the north of Spain, to the city of A Coruña. There, perched on the jagged rocks below the Roman lighthouse, I read Oswald Spengler’s Decline of the West. The crashing sound of ocean waves just seemed an appropriate accompaniment to Spengler’s grandiose attempt to analyze all of human history.

As it happened, I ended up reading David Graeber’s Debt in the exact same circumstances. And perhaps this coincidence highlighted the odd similarities between Graeber’s book and Spengler’s. On the surface, the two men are quite radically opposed: Spengler is mystical, conservative, and mainly preoccupied with ‘high culture,’ while Graeber is conversational, leftist, and usually focused on more humdrum human affairs. But both The Decline of the West and Debt are sweeping scholarly exercises which attempt to completely alter our view of history. As a consequence, the books have similar merits—a large perspective, unusual connections, an original angle—while suffering from the same basic weakness: the attempt to strap history into a Procrustean bed.

But I am getting ahead of myself, as I should explain what this book is about. Graeber set out to write about debt, partly as a response to the 2008 financial crash, but also to respond to a certain moral confusion he noticed in the general culture. This is the notion that one always ought to ‘pay one’s debts.’ Most of us, I suspect, would agree that this is the right and proper thing to do. But there are many cases in which debt can be morally questionable. Consider a man who had an unexpected heart attack and was taken to a hospital out of his insurance network, or a young student who took out college loans but then had to drop out because her father had a heart attack, or a family who had agreed to a predatory mortgage for a house that the bank knew they could not afford, or a poor country forced by the IMF to adopt austerity policies in order to pay its debts to richer countries—in any of these cases, is it moral to pay one’s debts?

As Graeber points out, standard economic theory does not hold that all debts must be repaid. Rather, both the lender and the debtor enter into an arrangement with a certain amount of risk. The loan is, in a sense, an investment like buying stock, and may or may not yield money according to the fortunes of the debtor. But this is not how we typically treat debt. Bolstered by our moral sense that debts should be paid, we accept a moral lopsidedness in the relationship, giving lenders quite extraordinary powers (garnishing wages, confiscating property) to extract money from debtors. Yet Graeber is not an economist, and does not want to restore a balance to the arrangement. Rather, he is disturbed by the very concept of debt. For what sets debt apart from an obligation is that it can be precisely quantified. This means debts require a system of money.

This leads Graeber to examine the origins of money, which for me was easily the strongest section of the book. Most economics textbooks explain money by pointing out that it solves the problem of the double coincidence of wants. That is, if I have some extra boots, and I would like to trade them for some beer, it is quite possible the brewer already has all the boots he needs. But if I can sell the boots for money, and the brewer accepts cash payments, then we are in business. The problem with this story is that there is no historical evidence that such a thing happened. Indeed, this hypothetical situation is rather bizarre—essentially taking a world very much like our own, and then removing the money.

Instead, it appears from the historical record that credit systems developed before actual money. These could be formal or quite informal. As an example of the latter, imagine you are living in a small village. One day, you see your neighbor wearing a nice pair of boots, and you ask if he has any extras. He does, and offers them to you as a gift. Next month, you make a big brew of beer and then give him a jug of it, offering it as a gift. The key is that, using such a credit system, you effectively get around the double coincidence of wants, since there is a very good chance that you will eventually have something your neighbor wants, and vice versa. This is just one informal example of how such a credit system could work with ‘virtual money.’ Graeber, being an anthropologist, is full of fun examples of exchange practices from around the world, all of which fly in the face of our idealized notions of purely economic transactions.

Having quite effectively demolished what he calls the ‘myth of barter,’ Graeber embarks on a grand tour of history. And here is where the book went off the rails for me. Now, this is not to say I did not enjoy the ride: Graeber is an engaging writer and is full of fascinating factoids and radical notions. But I was constantly bugged by the sensation that either I was misunderstanding Graeber, or that he was not proving what he thought he was proving. To give you a smattering of Graeber’s points, he argues that the use of coinage influenced ancient Greek philosophers’ concepts of matter, that religions emphasizing selfless charity arose in reaction to markets emphasizing selfish acquisition, that our notions of property derive through Roman law from slavery, that money was actually introduced by kings who used it to debt-finance wars, and that the Spanish conquistadores were driven to commit such atrocities because they were in debt.

As you can see, that is an awful lot of material to cover; and this is just a sample. Each of these arguments is, in my opinion, quite interesting (if not always convincing). But, again, I was always unsure as to the larger point that Graeber was trying to make. On the one hand, Graeber seemed to be saying that money and debt are inextricably bound up in an ugly history of violence; but on the other, Graeber demonstrates that debt financing is a remarkably old and persistent practice, and is partly responsible for what we (pretentiously) call ‘civilization.’ At the end of the book, Graeber states that his purpose was to give his readers a wider taste of what is possible, so that we can reimagine our society. However, one of Graeber’s main insights is that history is cyclical: alternating between periods of hard money (like precious metals) and virtual money (like IOUs and fiat currency)—though both of these systems involve debt. If anything, then, this book left me with the impression that debt is an inescapable part of life.

Allow me, if you please, to mention one of my pet peeves here. Graeber is a big fan of etymologies. This book is peppered with words and their unexpected origins, which Graeber often uses as evidence in his arguments. In my opinion, this is a very lazy and unconvincing way of arguing. Do not misunderstand me: I like a good etymology as much as anyone. But the fact that a word once meant one thing and now means another does not, in my opinion, prove that these two concepts are somehow secretly connected. I would have much preferred more detailed examinations of historical evidence; but Graeber actually goes out of his way in the afterword to criticize historians for being overly empirical. This is not a message I can get behind.

But enough of that. I am sorry to be writing even a moderately critical review in the wake of Graeber’s tragic passing. For all of this book’s (perceived) faults, I am very glad to have read it. Like Spengler, Graeber had a mind full of fire, and was always letting off sparks in every direction. He was, in advertising parlance, an idea man; and this book is full of bold new ways of seeing our past and present. And even if Graeber’s grand theories about society and history do not, ultimately, pan out, one can say of Graeber what Walter Pater said of aesthetic theorists:

Many attempts have been made by writers on art and poetry to define beauty in the abstract, to express it in the most general terms, to find some universal formula for it. The value of these attempts has most often been in the suggestive and penetrating things said by the way.

View all my reviews

Quotes & Commentary #46: Wittgenstein

If language is to be a means of communication there must be agreement not only in definitions but also (queer as this may sound) in judgments.

—Ludwig Wittgenstein, Philosophical Investigations

I often think about the relationship between the public and the private. As a naturally introverted person, I feel very keenly the separation of my own experience from the rest of reality. I make music, take pictures, and write this blog as a way of communicating this inner reality—of manifesting my private world in a publicly consumable form.

Having an ‘inner world’ is one of the basic facts of life. Each of us is aware that there is a part of us—the most vital and most mysterious part, perhaps—that is inaccessible to others; we can keep secrets, we can make judgments without anyone else noticing, we can have private pleasures and pains. All of our experience takes place in this space; the only world we ever see, hear, or touch is in our heads.

And yet we are also aware that this reality is, in a sense, insubstantial and ultimately secondary. Our inner world exists in reference to the outer world, the world of objective facts, the world that is publicly known. My senses are not just mental facts, but point outward; my thoughts, actions, and desires are oriented towards a world that does not exist in me. Rather, I exist in it, and my experience is just one interpretation of this world, and one vantage point from which to view it.

How are these two worlds related? How do they interact? Is one more important? What is the relationship of our private minds to our public bodies? These are classic philosophical conundrums, mysterious still after all these millennia.

Historical philosophers aside, most of us, in our more reflective moments, become acutely aware of the division between subjective and objective. When you are, for example, searching for a word—when a word is on the tip of your tongue—you feel as though you are rummaging through your own mind. The word is in you somewhere, and nobody but you can find it.

From this, and other experiences like it, we get the feeling that speaking (and by extension, writing) consists of taking something internal and externalizing it. Language is, in this view, an expression of thought; and words take their significance from cogitations. That is to say, our private mental world is the wellspring of significance; our minds imbue our language with meaning. The word “pizza,” for example, means pizza because I am thinking of pizza when I say it.

And yet, as Wittgenstein tried to show in his later philosophy, this is not how language really works. To the contrary, words are defined by their social use: what they accomplish in social situations. In other words, language is public. The meaning of words is determined, not by referring to any inner thought, nor by referring to any objective facts, but by convention, in a community of speakers. (I don’t have the space here to recapitulate his arguments; but you can see my review of his book here.) The word “pizza” means pizza because you can use it to order in a restaurant.

This may seem to be a merely academic matter; but when you begin to think of meaning as determined socially rather than psychologically, then you realize that your cognitive apparatus is not nearly as private as you are wont to believe. In order to communicate thought, you must transform it into something socially consumable: language. All of our vague notions must be put into boxes, whose dimensions are determined by the community, not by us.

But the social does not only intrude when we try to communicate with others; we also understand ourselves through these same social concepts. That is to say, insofar as we think in words, and we understand our own personalities through language, we are subjecting our deepest selves to public categories; even in our most private moments, we are seeing ourselves in the light of the community. We are social beings to our very core.

This extends beyond the definitions of words. As Wittgenstein points out, to use language effectively, we must also judge like the community.

Any word, however well-defined, is ambiguous in its application. To apply the word “car” to a vehicle, for example, requires not only that I know the definition—whatever that may be—but that I learn how to differentiate between a car, a truck, a van, and an SUV. Every member of a community is involved in educating one another’s judgment, and keeping their opinions in tune. If I call an SUV a “car,” or a pickup truck a “van,” my fellow speakers will correct me, and in this way they will educate me to judge like a member of the community.

As I learn Spanish, I have firsthand experience of this. To pick a trivial example, the English word “sausage” is broader than any corresponding Spanish word. Here in Spain they differentiate between salchicha and salchichón, a difference that my American mind has a hard time understanding. Although Spaniards have tried to define this difference to me, I have found that the only way for me to learn it is by being corrected every time I apply the wrong word.

More significantly, in order to conjugate properly in Spanish, I must not only learn how to change the ending and so forth, but I must learn when it is appropriate to use each tense. To pick the most troubling example, in English we have only the simple past, whereas in Spanish there is both the imperfecto and the indefinido. I constantly use the wrong form, not because I don’t know their technical usage (it has been explained to me countless times, using various metaphors and examples, and I can recite this technical definition from memory), but because my judgment is out of alignment.

Whether an action is continuous, periodic, completed, ongoing, or occasional—this is not as self-apparent as every native speaker likes to assume, but indeed requires a good deal of interpretation. My judgment has not yet been properly educated by the community, and so, despite my knowing the technical usage of these two forms, I still misuse them.

In a way, this aspect of language learning is somewhat chilling. In order to speak effectively, not only must I use communal vessels to contain my thoughts, but I must learn to judge along the same lines as other members of the community—to interpret, analyze, and distinguish like them. What is left of our private selves when we subtract everything shaped and put there by the community? Am I a self-existent person, or just a reflection of my social milieu?

Yet I do not think that all this is something to dread. Having communally defined categories, and a communally shaped judgment, gives permanence and exactitude to communication. Left on our own, thinking without symbols, communicating with no one but ourselves, there is nothing that grants stability to our reflections; they constantly slip through our fingers, an ever-changing flux tied to nothing. With no fixed points, our judgment flounders in a torrent of ideas, thrashing ineffectually.

When we learn a language, and learn to use it well, we learn how to pour the ambiguous stuff of thought into stable vessels, how to cast the molten metal of our mental life into solid forms. This way, not only can we understand the world better, but we can learn to understand ourselves better. This, I think, is the very purpose of culture itself: to partition reality into sections, to impose structure on ambiguous reality.

Let me give you a common example.

A relationship is a naturally ambiguous thing. The affection and commitment that two people feel for one another exists on a spectrum. And often we do not really know how committed we are to somebody until we examine the relationship in retrospect. And yet, relationships must be defined, and defined early on, for the sake of the community.

Every culture on earth has rituals and categories associated with courtship, for the simple fact that somebody’s relationship status is a big part of their social identity. Ambiguities in social identity are not tolerated, because they impede normal social life; to deal with somebody effectively, you need them to have a recognizable social status, a status that tells you what to expect from them, what you can ask of them, and a million other things.

In modern culture, as we push marriage ever further into the distant horizon, we have developed the need for new relationship categories. Now we are “dating,” and then “in a relationship.” The status of being “boyfriend” or “girlfriend” is now socially understood and approved as one level of commitment.

The interesting thing, to me, is that the decision to be in a relationship, to become boyfriend and girlfriend (or whatever the case may be), seems like a private decision, affecting only two people. And yet, it is really a decision for the benefit of the community. To be in a relationship defines where you stand in relation to everyone else: whether it is appropriate to flirt with you, to ask you out, to dance with you, to ask about your significant other, and so forth.

Now, this is not to say that the decision is solely for the benefit of the community. To put this another way, it also benefits you and your partner, because you are also part of the community. It puts a publicly understood category, indicating a certain level of commitment, on your naturally ambiguous and shifting feelings. In other words, by applying a public category to a private feeling, you are, in effect, imposing a certain level of stability on the feeling.

Look what happens next. This level of commitment, being publicly labeled, is also bolstered. Friends, family, and coworkers treat you differently. You are now in a different category. And this response of the community helps to form and reinforce your private feelings of commitment. Relationships are never wholly private affairs between two people. It takes a village to make a couple.

Again, I am not suggesting that this is a bad thing. To the contrary, I think that having communal definitions is what allows us to understand our own selves at all. This is also why I write these quotes and commentary. By forcing myself to take my ambiguous thoughts and put them into words, into public vessels, not only do I communicate with others, but I find out what I myself think.

Review: People of the Sierra

The People of the Sierra by Julian Alfred Pitt-Rivers
My rating: 4 of 5 stars

Writing is an activity which links a person with the world of formality.

Julian A. Pitt-Rivers was, in the words of his mentor E.E. Evans-Pritchard, “in every sense a son of Oxford and an Oxford anthropologist.” Julian was the descendant of an aristocratic family. His grandfather, Augustus, the pioneering archaeologist, was along with Sir Edward Tylor a founder of the anthropology department at Oxford. (The famous anthropology museum in Oxford is named after him.*) Julian’s father, whose absurdly long name I will not write—alright, fine, it is George Henry Lane-Fox Pitt-Rivers—was enormously wealthy. A vigorous anti-Semite and eugenics proponent, George was jailed during the Second World War. Julian himself, among his other accomplishments, was tutor to the King of Iraq.

With a pedigree like that, it’s easy to see how his book became a classic in the field.

My interest in The People of the Sierra was sparked, naturally, by its subject: a village in Spain. But for those with an interest in anthropology, such as myself, the book is significant regardless of regional specialty. This is because it was one of the first ethnographies published about a community in Europe. True, it was a small, poor, agricultural community, and it was in a region of Europe commonly regarded as exotic, but it is Europe nonetheless. As such, the book is a landmark in the field.

The book’s classic status is due not only to its groundbreaking subject matter, however, but also to its high quality. Julian Pitt-Rivers was a true disciple of his advisor, E.E. Evans-Pritchard. Everything from the writing style to the analysis bears the traces of EP’s influence. And this is a good thing, for EP was one of the great masters of ethnography.

The central theme of Pitt-Rivers’s analysis is the contrast between the local and the national forces that shape the pueblo. In nearly every sector of life, there are two social structures at play. The first is that of the pueblo; it is self-contained. Moral rules are enforced by the community; there are certain—unwritten but universally known—appropriate ways of acting, and infractions are punished by loss of respect. The second structure is that of the state. Its authority is derived from somewhere far outside of the pueblo. Its laws are explicit, and infractions are punished with fines or jail time.

This theme is explored from a variety of different angles. One chapter, for example, explains the practice of giving members of the town nicknames. These nicknames are never used to a person’s face, and yet everybody in the village knows them. Indeed, you might know a person’s nickname without knowing their surname. Surnames are important, most of all, in dealings with the state. Thus you can see the contrast between local and national, informal and official, even in people’s names.

Tension exists in this state of affairs, because these two systems are often out of alignment. Many things are regarded as immoral which are not illegal, and vice versa. An important concept, for example, is vergüenza, shame, which is the regard that one pays to the social norms of the pueblo. To call someone a sinvergüenza is a serious insult; to be without shame is to be almost inhuman, since it puts you beyond the realm of society; it is to be a pariah. By contrast, it’s obviously not illegal to be without shame, and many of these pariahs are employed by the state as informers.

This is the book’s theme in a nutshell. For me, however, the book’s lasting value has far more to do with its style than its substance. Pitt-Rivers’s writing is remarkable more for what it excludes than for what it includes. There is not a word of jargon in these pages; a polysyllabic word is never used when a shorter one will do; sentences are crisp and short; there is no pretentious name-dropping, no unnecessary citations.

The book itself is brief, and yet Pitt-Rivers’s writing is so economical that he manages to give a full-blooded picture of the community. The first two sentences give an adequate taste of what follows: “This book is about a Spanish town. More precisely, it examines the social structure of a rural community in the mountains of southern Spain.”

Why social scientists no longer write like this, I cannot say. So read this, if only to remember a time when clear, strong English was used in anthropology.

*Thanks to Wastrel for bringing this to my attention.

View all my reviews

Review: People of the Plain

The People of the Plain by David D. Gilmore

My rating: 4 of 5 stars

Since the day of this altercation they have not communicated; however, each serves as an excellent source of gossip about the other.

What drew me to this book was not only its subject—a village in Spain—but its author.

David D. Gilmore was my first anthropology professor. His classes were unforgettable. Standing over six feet tall, solidly built, he towered over the lecture hall. Professor Gilmore was the very picture of a professor. He had a preference for tweed jackets, complete with elbow patches; and his hair was equally professorial, an electrified shock of snowy white. His voice needed no amplification; it boomed throughout the space, keeping even the most sleep-deprived students semi-conscious.

What I remember most, however, was not his appearance, but his attitude. He had an understated ironic humor, and couldn’t help punctuating his classes with sardonic comments. After one of these comments, he would pause and grin very slightly. A few of the students, myself included, snickered; the majority scrunched up their brows, unsure if it was a joke. Unperturbed, Professor Gilmore then continued the class.

To my youthful eyes, this ironic sensibility seemed to pervade his entire attitude towards life. It was not only a sense of humor, but a philosophy, allowing him to maintain a sense of perspective and take nothing too seriously. I could not help concluding that studying anthropology—living abroad, in another culture, away from his native prejudices—engendered this witty sort of wisdom. By the end of the semester I had switched my major to anthropology, and Professor Gilmore was my advisor.*

I was delighted, therefore, upon reading his first book, to find that Professor Gilmore was an excellent anthropologist in addition to a striking professor.

As the title page indicates, and the rest of the pages make clear, this book is largely a response to Pitt-Rivers’s classic ethnography, The People of the Sierra. In that book, Pitt-Rivers maintained that the pueblos of Andalusia are governed by a powerful egalitarian ethos. Class is not acknowledged to exist, and friendships crosscut differences of wealth and power. Indeed, Pitt-Rivers considered all recognized forms of authority to be imposed from outside the pueblo, not arising within it.

The pueblo that Gilmore studied was quite different. Significantly larger, and situated on the plains rather than in the mountains, Gilmore’s village—which he calls by the pseudonym Fuenmayor—is remarkably stratified. Three distinct social classes exist: the señoritos, the rich, landowning gentry; the mayetes, the middle class; and the jornaleros, the landless, migrant working poor. These classes had existed for at least a hundred years, and their contours were ingrained into the culture of the village.

The mutual isolation of the classes borders on absurdity. Friendships and marriages between members of different classes are nonexistent. Brother will shun brother if he “loses class.” There are three different seating sections in the movie theater, three different sections of pews in the church, and three different sections in the town cemetery. Señoritos are patriarchal, whereas jornalero women have far more power than men in the home. Señoritos are piously Catholic, while jornaleros rarely attend mass and openly scorn the church. The rich view the poor with contempt, and the poor view the rich as hateful oppressors. The mayetes, for their part, focus on keeping themselves afloat.

The picture that emerges is of a society strongly divided, almost bursting at the seams with social tensions, kept together only by the oppressive force of Franco’s regime. (The fieldwork was done in 1973.)

The book, although short, is stuffed with information and anecdotes. Gilmore is always careful to compare the opinions of his informants with objective data, including statistics of land ownership, crop growth, and church attendance. Thankfully these data are usually illustrated with field anecdotes (which are half the fun of any ethnography). I especially appreciated these, because his ironic sensibility shone through:

The mayete is also known occasionally to affect a broad-brimmed fedora hat, which for some reason the workers find indescribably hilarious. One day I appeared in a working-class tavern sporting a new straw fedora, purchased earlier in the city. After a moment of amused silence, one of the laborers shouted, ‘Hey, look at this new mayete we have here!’ Loud, prolonged laughter followed; yet my friends could not explain their merriment.

As an academic work about the anthropology of Andalusia, this book is therefore excellent: well-written, thoroughly researched, and original. But as a piece of personal nostalgia, it is priceless.


*This is how I introduced myself to Professor Gilmore. In this first class, we did a unit on monsters. One of these monsters was the Windigo, a cannibalistic beast from Algonquian folklore. I found this monster fascinating and wrote a silly poem about him. I believe these were the first two lines: “Oh, Windigo, Windigo / Beast of blue and indigo.” It was certainly not a masterpiece. Nevertheless, one day after class I showed Professor Gilmore a copy of the poem. He seemed genuinely amused. It was an auspicious beginning.

View all my reviews

Review: Modern Romance

Modern Romance by Aziz Ansari

My rating: 4 of 5 stars


One firm takeaway from all our interviews with women is that most dudes out there are straight-up bozos.

My introduction to modern romance was abrupt and unexpected. I was back in New York for the holidays, drinking with a few friends, sipping and gulping the wonderful IPAs that I miss when I’m here in Spain.

Sometime deep into the night, one of my friends, who is a gay man—this is relevant to the story; you should also know that I’m a straight guy—asked if anyone wanted to go on his Tinder. “I do!” I said, and soon found myself face to face with the infamous app for the first time in my life.

Now, for the three remaining people who don’t know how Tinder works, it’s very simple: You look at pictures of people, and swipe left if you don’t want to talk to them, right if you do. (In this respect it’s like the Last Judgment.) If someone you’ve approved of also approves of you, then you are both given the option to send messages.

My friend was obviously a stud, because I was getting matches left and right (well, only right). One of these matches was a young man who I’ll call Woodrow Wilson. With permission from my friend, I sent Woodrow a message. The conversation went something like this:

Me: What’s your favorite tree?

Woodrow Wilson: Uh, White Pines are pretty cool I guess.

Me: White Pines? So cliché.

Woodrow Wilson: You’re right, I was only testing the waters. I’m really fond of Quaking Aspens. You?

Me: Now we’re talking. I’ve always been fond of the Shagbark Hickory.

The conversation proceeded like this for about four days, by which time it was clear that I had found my soul mate through my gay friend’s Tinder. Unfortunately, many barriers stood in the way—I’m straight, I was going back to Spain, and I was basically deceiving him—so I didn’t meet Woodrow Wilson. (If you ever read this—hello, and sorry!) But the experience was enough to make me curious about the opportunities and hazards of romance in the modern world.

Being a reluctant single, a very reluctant millennial, and a very, very reluctant member of the modern world, you can imagine I was, well, reluctant to tackle this topic. This book enticed me, not because it was written by Aziz Ansari—I didn’t consider myself a fan, and in college I even passed up the opportunity to see him live on campus—but because he teamed up with a sociologist, Eric Klinenberg, to write it. I listened to the audiobook, nasally narrated by Aziz.

The most striking thing about this book is that, despite its lighthearted tone and frequent funny asides, it is basically a serious and even an earnest book. Sociological statistics, psychological studies, and anthropological analyses are mixed with anecdotes and interviews and a bit of humor to give a quick but surprisingly thorough tour of romance in the contemporary world.

Aziz begins by pointing out that dating in today’s world is strikingly different from dating in my grandparents’ or even my parents’ generation. This is not only because of advances in technology but, more importantly, because of shifts in values. We have now developed what you might call a perfectionistic attitude towards finding a partner. We want to find a “soul mate,” “the one,” somebody who fulfills us and thrills us. Aziz contrasts this with what he calls the “good enough” marriages of yesteryear—finding a partner who satisfies some basic criteria, like having a job and a shiny pocket watch.

I myself have noticed this shift from studying anthropology and history. In cultures all around the world—and in the West until quite recently—marriages were considered a communal affair. Aziz’s own parents had an arranged marriage, and according to him have had a long, successful relationship. (To be honest the idea of an arranged marriage has always been strangely appealing to me, since I don’t think any decision of such importance should be left in my hands. But the rest of my generation disagrees, apparently, so now I’m left to rummage through apps.)

Connected to this rise in the “soul mate” marriage is a rise in our preoccupation with romantic love. According to the biological anthropologist Helen Fisher, there are two distinct types of love in the human brain: romantic and companionate. Romantic love is the kind that writes bad poetry; companionate love is the kind that does the dishes. Romantic love hits early in a relationship and lasts up to a year and a half; companionate love grows slowly over time, perhaps over decades. This division accords well with my own experience.

(Parenthetically, I have long been skeptical, even morbidly suspicious, of romantic love: that kind of idealizing, gushing, delicious, walking on air feeling. To me it seems to be a form of self-deception, convincing yourself that your partner is perfect, even divine, and that nobody else in the world could make you so happy—when the truth is that your partner is a flawed person, only one of many flawed people who could induce the same delirious sensation. Wow, I sound really bitter in this paragraph.)

This cultural shift has been bolstered by our new dating technology. Now we do not only have the expectation that we can find the perfect partner, but we have the tools to do the searching. I can, and sometimes do, scroll through hundreds of faces on my phone per day. All this is very exciting; never before could I have so many romantic options at my fingertips.

But there are some major drawbacks to this. One is what the psychologist Barry Schwartz called the “paradox of choice.” Although you’d think having more options would make people more satisfied, in fact the reverse occurs. I remember that watching TV was a lot more fun when I was a kid and only had a few dozen channels; when we upgraded to hundreds of channels, it became stressful—what if there was something better on? Similarly, after spending three months in a camp in Kenya, eating whatever I was given, I found it overwhelming to go to a pizza place and order. How could I choose from so many toppings?

Along with these broader observations is a treasure trove of statistics and anecdotes that, if you’re like me, you’ll be quoting and misquoting for weeks. I found the little vignettes on the dating cultures in Japan, where there’s a sex crisis; Buenos Aires, where there’s a machismo crisis; and Paris, where there’s lots of infidelity but apparently no crisis, to be particularly memorable.

These anecdotes are not just for mental titillation, but are used to support several tenets of dating advice. Here are just a few takeaways. Check your punctuation before you send a text. When you ask someone out on a date, include a specific time and location, not “wanna hang out some time?” vagueness. Texting people is not a reliable way to gauge if you’ll like them in person; it’s best to ask them out sooner and not prolong a meaningless texting conversation. Take the time to get to know people; rarely do you see the more interesting side of someone’s personality on a first date.

As you can see, this book is quite a rare hybrid: part social science, part self-help, and part comedy. And yet the book rarely feels disorganized or scatterbrained. Aziz keeps a tight rein on his materials; the writing is compact, clever, and informative. With the notable limitation that this book deals only with heterosexual couples, and covers no topic in serious depth, I can say that it’s hard for me to imagine how any such short book could give so complete a picture of modern romance.

Most impressive is the human touch. What could have potentially been a mere smattering of facts and stories, Aziz makes into a coherent whole by grounding everything in the day-to-day frustrations and realities of the dating world. Aziz knows firsthand how much dating can suck, how tiresome, uncomfortable, and stressful it can be. Yet, for all this, the book is ultimately hopeful.

Beneath all these shifts in values and demographics, all the innovations in dating technologies and changes in romantic habits, all the horror stories and the heartbreaks, beyond the lipstick and the cologne, below the collared shirts and high heeled shoes, above the loud music and the strong liquor, pushing every button and writing every text, is the universal human itch to connect.

This itch has always been with us and always will be. Each generation just learns to scratch it in new and interesting ways.

(If interested in setting something up, please direct all inquiries to my mom.)

View all my reviews

Quotes & Commentary #17: Spinoza

Men are mistaken in thinking themselves free; their opinion is made up of consciousness of their own actions, and ignorance of the causes by which they are conditioned. Their idea of freedom, therefore, is simply their ignorance of any cause of their actions. As for their saying that human action depends on the will, this is a mere phrase without any idea to correspond thereto. What the will is, and how it moves the body, none of them know; those who boast of such knowledge, and feign dwellings and habitations for the soul, are wont to provoke either laughter or disgust.

—Baruch Spinoza, Ethics

Few things can make you more skeptical about free will than studying anthropology. For me, this had three components.

The first was cultural. I read about the different customs, rituals, religions, arts, superstitions, and worldviews that have existed around the world. Many “facts” that I assumed were universal, obvious, or unquestionable were shown to be pure prejudice. And many behaviors that I assumed to be “natural” were shown to be products of the cultural environment.

It is unsettling, but nonetheless valuable, to consider all the things you do just because that’s what your neighbors, family, and friends do. These include not only superficial habits, but our most basic opinions and values. Our culture is not like a jacket that we put on when we go out into the world; culture is not a superficial layer on our deeper selves. Rather, culture penetrates to the very core of our beings, shaping our most intimate thoughts and sensations.

The next influence was primatology, the study of primate behavior. This came to me most memorably in the books of Jane Goodall, about the chimpanzees she studied. Chimpanzees are our closest relatives. They are recognizably animals and yet so strangely human. They get jealous, become infatuated, bicker, fight, make up, and joke around. They make tools and solve puzzles.

I remember the story of a small chimp who, while walking through the forest with his group, saw a banana out of the corner of his eye. The rest of his group didn’t notice it; and this chimp knew that the bigger ones would take the banana away if they saw him eating it. So he ran off in another direction, causing everyone to follow him, and then secretly snuck back to get the banana. If that’s not human, I don’t know what is.

Last was the study of human evolution. This also involves the study of archaeology: the material culture that hominins have left behind. I held reproductions of the skulls of human ancestors, and examples of the stone tools made by our smaller-brained predecessors. I saw how the tools became more advanced as brain size increased. Crude choppers became the beautiful hand axes of Homo erectus, and these large axes were refined into serrated blades and arrowheads by later species. Finally our species began showing evidence of symbolic thinking: burying the dead, crafting statues, painting caves, carving flutes, and almost certainly using language.

After seeing the obvious influence of evolution on our capacities and tendencies, after learning about the striking similarities between us and our ape cousins, and after witnessing the pervasive effects of culture upon behavior, my belief in free will was in tatters. True, even if we take all these evolutionary and cultural factors into account, we can’t predict the exact moment when I’m going to scratch my nose. But neither can we predict where a fly will land, or which patch of skin a mosquito will bite. Nobody thinks flies or mosquitoes have free will, so why us?

I normally understand “free will” to mean the ability of an organism to fully determine its own actions. In other words, a free organism is one whose actions cannot be predicted or explained by pointing to anything outside, including genes or upbringing. Not DNA, nor culture, nor childhood experiences would be enough to fully explain a free individual’s behavior. A free action is, in principle, unpredictable; and thus the free agent is morally responsible for his actions.

I do not believe in this type of freedom, and I have not for a long time. For my part, I think Spinoza is exactly right: “free will” is just a name for our ignorance of the causes of our own behavior. If we knew these causes, our actions could be predicted like any other natural phenomenon, and “freedom” would disappear.

This ignorance is not difficult to explain. Human behavior is the product, first, of our environment, which is infinitely varied and constantly changing; and, second, of the human brain, one of the most complex things in the universe. Because of the amount and complexity of the data, along with our lack of understanding, we can’t even come close to making predictions on the scale of individual human actions, like scratching one’s nose. But we can’t conclude from our inability that our actions are thus “free,” any more than we can conclude from our inability to predict where a fly will land that flies possess a mystical “freedom.”

Kurt Vonnegut made this point, with much more wit, in Slaughterhouse Five. His Tralfamadorians, who can see in the time dimension as well as space dimensions, already know everything that will happen. Thus they have no concept of freedom, and find it puzzling that humans do: “I’ve visited thirty-one inhabited planets in the universe, and I have studied reports on one hundred more. Only on Earth is there any talk of free will.”

To me it seems manifest that the traditional definition of freedom has been thoroughly discredited by what we know about the natural and cultural world. Humans are made of matter obeying physical laws, shaped by evolution, subject to genetic influence, and responsive to the cultural environment. The mind is not a mysterious metaphysical substance, but a product of the human brain; thus the mind and its behavior, like the brain, can be understood scientifically, just like any other animal’s.

All this being said, there are nevertheless ways to redefine free will so that it is compatible with what we know about physics, biology, anthropology, and psychology.

Perhaps free will is simply the inability of a thinking organism to predict what it is about to do? Every person has, at one time or another, been surprised by their own actions. This is because, as the philosopher Gilbert Ryle explained, “A prediction of a deed or a thought is a higher order operation, the performance of which cannot be among the things considered in making the prediction.” That is to say, it is logically impossible to predict how the act of predicting an action will alter the action, because the prediction itself cannot factor into the prediction (you can try to predict how you will predict, but this leads to an infinite regress).

Or perhaps free will is a condition caused by our ignorance of the future? After all, difficult decisions are difficult because we can’t be sure what will happen or how we’ll react. Deciding between two job offers, for example, is only difficult because we can’t be sure which one we’ll like more. If we could be sure—and I mean absolutely sure—which job would make us happier, then there wouldn’t be a decision at all; we would simply take the better job without a dilemma even occurring to us. In this way, our freedom is as much a product of our ignorance of the future as it is our ignorance of the causes of our actions.

What sets humans apart from other animals is not our freedom per se, but our behavioral flexibility. Humans are able to continually adapt to new environments, and to learn new habits, techniques, and concepts throughout their lives. This ability to adapt and to learn, which serves us so well, is not freedom so much as slavery to a different master: our environment. Our genes do not instill in us a specific behavioral pattern, as in ants, but give us the capability to develop many different behavioral patterns in response to our cultural and climatic surroundings. But is it any more “noble” or “free” for our behavior to be determined by social and environmental pressure rather than by genetic predestination?

Probably the best practical definition of freedom I can come up with is this: Humans are free because we are able to alter our behavior based on anticipated consequences. This is what makes morality possible: we can influence people’s behavior by telling them what will happen if they don’t follow the rules. What is more, people can understand that they have more to gain by playing along and helping their neighbors than by acting impulsively and at the expense of their neighbors. Thus our intelligence, by allowing us to understand the consequences of our actions, gives us the ability to be more intelligently selfish: we can weigh long-term benefits with short-term pleasures.

Freedom is, of course, a fundamental concept in our political philosophy. So if we choose to stop believing in freedom as traditionally defined, how are we to proceed? Here is my answer.

The important distinction to be made in political philosophy, regarding freedom, is what separates freedom from coercion. The difference between freedom and coercion is not that one is self-caused and the other caused by the outside—since even the freest person imaginable has been profoundly shaped by their environment, and is making decisions in response to their environment. Rather, there are two important differences: coercion implies force (or the threat of force) while freedom doesn’t; and “free” actions usually benefit the acting individual, while “coerced” actions usually benefit an outside party at the expense of the acting individual.

The difference thus has nothing to do with freedom as such (freedom from environmental influences), but is determined by the type of environmental influence (violent or non-violent), and by the party (actor or not) that receives the benefits. (Even though an altruistic act benefits a party besides the actor, it is not a coerced act because, first, it’s not motivated by threat of violence, and, second, because altruistic acts usually benefit the actor in some way, either socially or psychologically.)

I find that some people become horrified when I tell them about my rejection of freedom. For my part, I find that my disbelief in freedom has made me more tolerant. When I consider that people are products of their environment and their genes, I stop judging and blaming them. I know that, ultimately, they are not responsible for who they are. In a profound sense, they can’t help it. We are each born with certain desires, and throughout our lives other desires are instilled into us. Our behavior is the end product of an internal battle of competing desires.

If you think that morality is impossible with this worldview, I beg you to read Spinoza’s Ethics. You will find that, not only is morality possible, but it is necessary, logical, and beautiful.

Review: Pride and Prejudice

Pride and Prejudice by Jane Austen
My rating: 5 of 5 stars

Jane Austen’s tools are tweezers and a nail file. Tolstoy built cityscapes; Dostoyevsky dug sewer systems; Joyce made funhouses; Kafka put up prisons; Cervantes created carnivals. Austen crafts ivory figurines, incredibly lifelike portraits that fit in the palm of your hand.

When you read Pride and Prejudice you see the novel in its barest form. Her prose has no pyrotechnics, her descriptions have little poetry. Word-play, fantastic events, bizarre characters, cliff-hangers, and elaborate plots are similarly absent. When you read this book you realize that these tools are not only unnecessary, but can distract from the craft of the novel.

In all of the annals of social science—the ethnographies of anthropology, the studies of sociology—there has never been an observer of social life more keen than Ms. Austen. What we have here is one of the best accounts of marriage customs ever written. That information alone would make this book invaluable.

But of course, this is no academic treatise; it is a novel, and a brilliant one at that. Unlike other authors, who use the dialogue to present information about the plot, for Austen the dialogue is the plot. It is a story of information and misunderstanding. Who thinks what, who knows what, who tells what to whom—all form the intimate tapestry of events that propel this book forward to its merry conclusion.

Austen also excels at omitting unnecessary information. She never tells instead of shows. She does not belabor the reader with descriptions of personalities, or even of appearances. Descriptions of setting are similarly kept to the barest minimum; in fact, they are almost apologetic.

The scenes of our greatest struggles and triumphs aren’t always aboard a whaling ship or before the gates of Troy. Sometimes they are conversations held over the soft plunking of a piano-forte.

View all my reviews

Review: The Odyssey

The Odyssey by Homer
My rating: 5 of 5 stars

To this day, the most interesting research project that I’ve ever done was the very first. It was on the Homeric Question.

I was a sophomore in college—a student with (unfortunate) literary ambitions who had just decided to major in anthropology. By this point, I had at least tacitly decided that I wanted to be a professor. In my future lay the vast and unexplored ocean of academia. What was the safest vessel to travel into that forbidding wine-dark sea? Research.

I signed up for a reading project with an anthropology professor. Although I was too naïve to sense it at the time, he was a man thoroughly sick of his job. Lucky for him, he was on the cusp of retirement. So his world-weariness manifested itself as a total, guilt-free indifference to his teaching duties. Maybe that’s why I liked him so much. I envied a man that could apparently care so little about professional advancement. That’s what I wanted.

In any case, now I had to come up with a research topic. I had just switched into the major, and so had little idea what typical anthropology research projects were like. And because my advisor was so indifferent, I received no guidance from him. The onus lay entirely on me. One night, as I groped half-heartedly through Wikipedia pages, I stumbled on something fascinating, something that I hadn’t even considered before.

Who is Homer? Nobody knew. Nobody could know. The man—if man he was—was lost to the abyss of time. No trace of him existed. We can’t even pin down in which century he lived. And yet, we have these glorious poems—poems at the center of our history, the roots of the Western literary canon. Stories of the Greek gods had fascinated me since my childhood; Zeus and Athena were as familiar as Little Red Riding Hood and the Big Bad Wolf. That the person (or persons) responsible could be so totally lost to history baffled me—intrigued me.

But I was not majoring in literature or the humanities. I was in anthropology, and so had to do a proper anthropological project. At the very least, I needed an angle.

Milman Parry and Albert Lord duly provided this angle. The two men were classicists—scholars of ancient Greece. But instead of staying in their musty offices reading dusty manuscripts, they did something no classicist had done before: they attempted to answer the Homeric question with field work.

At the time (and perhaps now?) a vibrant oral tradition existed in the Serbo-Croatian-speaking regions of the Balkans. Oral poets (guslars, they’re called there) would tell massive stories at public gatherings, some stories even approaching the length of the Homeric poems. But what was most fascinating was that these stories were apparently improvised.

In our decadent culture, we have a warped idea of improvisation. Many of us believe improvisation to be the spontaneous outflowing of creative energies, manifesting themselves in something totally new. Like God shaping the Earth out of the infinite void, these imaginary improvisers shape their art from nothing whatsoever. Unfortunately, this never happens.

Whether you’re a jazz saxophonist playing on a Coltrane tune, a salesperson dealing with a new client, or an oral bard telling a tale, improvisation is done via a playful recombining of preexisting, formulaic elements. This was Parry and Lord’s great discovery. By carefully transcribing hundreds of these Serbo-Croatian poems, they discovered that—although a single poem may vary from person to person, place to place, or performance to performance—the variation took place within predictable boundaries.

The poets’ brains were full of stock phrases (“when dawn with her rose-red fingers shone once more”), common epithets (“much-enduring Odysseus”), and otherwise formulaic verses that allowed them to quickly put together their poems. Individual scenes, in turn, also followed stereotypical outlines—feasts, banquets, catalogues of forces, battles, athletic contests, etc. Of course, this is not to say that the poets were not original. Rather, it is to say that they were just as original as John Coltrane or Charlie Parker—individuals working within a tradition. These formulas and stereotypical scenes were the raw material with which the poet worked. They allowed him to compose material quickly enough to keep up the performance, and not break his rhythm.

But could poems as long as The Odyssey and The Iliad come wholly from an oral tradition? It seems improbable: it would take multiple days to recite, and the bard would have to pick up where he left off. But Parry and Lord, during their fieldwork, managed to put these fears at rest. They found a singer who could (and did) compose poems equal in length to Homer’s. (I actually read one. It’s called The Wedding of Smailagic Meho, and was recited by a poet named Avdo. It’s no Odyssey, but still entertaining.)

All this is impressive, but one question remained: how could the oral poems get on paper? Did an oral poet—Homer, presumably—learn to write, and copy them down? Not possible, says Albert Lord, in his book The Singer of Tales. According to him, once a person becomes literate, the frame of mind required to learn the art of oral poetry cannot be achieved. A literate person thinks of language in an entirely different way from a non-literate one, and so the poems couldn’t have been written by a literate poet who had learned from his oral predecessors.

According to Lord, this left only one option: Homer must have been a master oral poet, and his poems must have been transcribed by someone else. (This is how the aforementioned poem by Avdo was taken down by the researchers.) At the time, this struck me as perfectly likely—indeed, almost certain. But the more I think about it, the less I can imagine an oral poet submitting himself to sit with a scribe, laboriously writing in the newly adopted Greek alphabet, for the dozens and dozens of hours it would have taken to transcribe these poems. It’s possible, but seems unlikely.

But according to Ruth Finnegan, Albert Lord’s insistence that literacy destroys the capacity to improvise poems is mistaken. An anthropologist, Finnegan found many cases in Africa of semi-literate or fully literate people who remained capable of improvising poetry. So it’s at least equally possible that Homer was an oral poet who learned to read, and then decided to commit the poems to paper (or whatever they were writing on back then).

I submit this longwinded overview of the Homeric Question because, despite my usual arrogance, I cannot even imagine writing a ‘review’ for this poem. I feel like that would be equivalent to ‘reviewing’ one’s own father and mother. For me, and everyone alive in the Western world today, The Odyssey is flesh of my flesh, blood of my blood. Marvelously sophisticated, fantastically exciting, it is the alpha and omega of our tradition. From Homer we sprang, and unto Homer shall we return.

[Note: I’d also like to add that this time, my third or fourth time through the poem, I decided to go through it via audiobook. Lucky for me, the Fagles translation (a nice one if you’re looking for readability) is available as an audiobook, narrated by the great Sir Ian McKellen. It was a wonderful experience, not only because Sir Ian has such a beautiful voice (he’s Gandalf, after all), but because hearing it read rather than reading it recreated, however dimly, the original experience of the poem: as a performance. I highly recommend it.]

View all my reviews