Review: The End of Policing

The End of Policing by Alex S. Vitale

My rating: 5 of 5 stars

A kinder, gentler, and more diverse war on the poor is still a war on the poor.

Like many white Americans, I was complacent about the problem of police violence for many years. I figured that there would always be tragic accidents, always a few bad officers, and that we must make allowances for people doing what is, no doubt, a very difficult job. My attitude started to change when I left the country, and realized that the levels of police violence and incarceration in America are exceptionally high. Still, I figured that the United States was always going to be a uniquely violent country, and that an over-aggressive police force was simply one aspect of this.

The killing of George Floyd and the recent protests have been a turning point for me, as they have been for many people in the country. The death of yet another unarmed black man in police custody—yet another citizen choked to death by government workers, as he repeated that he could not breathe—was gruesome enough. But the seemingly infinite videos of flagrant police abuse that surfaced during the protests pushed me from complacent, to skeptical, to indignant. Peaceful protesters and journalists were shoved, beaten, sprayed, gassed, shot with “less-lethal” ammunition, and arrested.

Few people, I hope, can watch the video of Martin Gugino—a 75-year-old man pushed to the ground by Buffalo police, bleeding from his head as officers march past him indifferently—without a sense of outrage. The only way to rationalize such an obviously unnecessary use of force is to embrace ridiculous conspiracy theories, as the president recently has. Meanwhile, the police response to this incident was entirely typical: after the two offending officers were suspended, the remaining members of the 57-person emergency response team resigned from the unit in protest.

It is in this context that the rallying cry “Defund the Police” has begun to circulate. In other circumstances, such a statement would strike me as absurdly Utopian; but once I learned that its proponents were not proposing to eliminate policing entirely, but to reduce it and divert resources to other social services, it began to sound far more reasonable. (I do fear the slogan is poorly chosen, however, since it gives many people the mistaken idea that nobody will be around to solve murders or investigate thefts. If a slogan requires a lengthy clarification, then it is not an effective slogan; and it risks alienating people by making the idea seem more radical than it really is. Personally, I think something like “Reimagine Policing” may capture the idea much better, even if it sounds a bit twee.)

This book is an excellent resource for those who wish to reimagine the role of police in America. (It is now available for free download from Verso.) Alex Vitale, a professor at Brooklyn College, examines the many ways that police are asked to do jobs they are ill-suited for, as well as proposals for replacing them. His first essential point is that the problem goes far beyond the conventional discourse about police reform. Body cameras, implicit bias training, and diversifying police forces do not reliably reduce police violence. Certainly, there are reforms that can and should be made—such as ending the 1033 program, which transfers military equipment to police departments, or changing the training regimes that instill a “warrior” mentality in police officers—but even the best of these reforms miss the point.

As with the issues of healthcare and higher education financing, there is a tendency in America to frame the issue of policing in terms of technocratic fixes, as if value-neutral reforms could be instituted that would make the police a perfect institution. But this ignores the greater moral and philosophical question: What do we have police for?

The police are distinguished from other public servants in being armed and authorized to use violence. Their presence is warranted when somebody poses a violent threat—as in the case of an assault, a sexual predator, or someone on a shooting spree—and even then, it is their responsibility to use a minimum of force. The problem, however, is that the vast bulk of police work does not consist of dealing with violent threats; it consists of traffic stops, border patrol, noise complaints, domestic abuse calls, drug busts, school fights, and prostitution. What connects so many of these situations is not the threat of violence, but poverty—which in America is inevitably racialized.

The life of George Floyd exemplifies the problem with policing. Born into difficult circumstances, he had many run-ins with the police during his life, none of which helped him. He served ten months in a state prison for a $10 drug deal, and then five more years after a plea deal for armed robbery. In the incident that led to his death, he was allegedly trying to pay for cigarettes with a fake $20 bill. What ties these together is that they are crimes of poverty—and that the only government intervention available came in the form of a punitive criminal justice system.

Nobody is in favor of robbery or counterfeit money; but I think that such crimes are inevitable if people are forced to endure a low standard of living with few legitimate economic opportunities to improve their situations. The question we need to ask, then, is whether locking people away, or saddling them with police records—or, in the case of Floyd, outright murder—is the right way to improve our country. Put another way, the essential question is whether a criminal justice mentality—which treats crime as an individual choice, subject to moral sanctions—is appropriate for the many social problems besetting our communities.

The case of police in schools is illustrative of how this mentality is applied to social problems. In the United States, we have apparently come to accept the constant possibility of school shootings; and partly as a response to this, armed police officers have been stationed in tens of thousands of schools across the country. In fact, two-thirds of American high school students attend a school with at least one police officer present; many schools have officers but lack counselors or nurses.

In too many cases, the police are not present merely to prevent violence, but actively take part in disciplining students. In this way, schools become a microcosm of American society: Inequality of opportunity (since schools are funded by property taxes) and an increasingly narrow metric of success (in this case, standardized tests) lead to undesirable behavior, which is dealt with through increasingly punitive measures. Who benefits from this system?

Another clear illustration of the criminal-justice approach is the war on prostitution and drugs. One does not need to be in favor of either of these activities to see that criminalizing them has not worked. Anyone who wants to buy drugs or sex can do so, just as any college student under 21 (the legal drinking age in America) can find a way to buy alcohol. Meanwhile, this approach has resulted in millions of people—most of whom are non-violent—being thrown into prison. Not only does our approach fail to address the problem, then, but we multiply the social harm into the bargain.

Any visitor to Amsterdam can see that the legalization of prostitution and marijuana has not caused the social order to descend into chaos. On the contrary, the condition of sex workers in places where sex work is legal and regulated, such as New Zealand, is far better than in the United States (even though we justify our approach as preventing human rights abuses). The case of Portugal’s drug policy is even stronger evidence of the failure of our approach. After decriminalizing drug use in 2001, and treating it as a public health issue, Portugal now has the lowest drug mortality rate in Europe, roughly one-fiftieth of the rate in the United States—and this is on top of a huge reduction in drug-related arrests.

As a final point, we also must remember that America’s War on Drugs has not only had devastating consequences domestically, but has contributed to drug-related violence around the world. Indeed, the destabilizing effects of these policies have, in part, driven unauthorized immigration, a problem that we have chosen to address using—of course—more policing.

Prostitution, drug use, and policing in schools are just three of the examples that Vitale examines. In these as in so many other cases—such as homelessness and mental illness—we must ask: Should a police officer be handling this problem? That is to say, should we have armed personnel, authorized to use violent force, treating these problems as matters of individual choice that deserve punishment? In so many cases, I believe the answer is no. I am sure that many police officers try to do these jobs conscientiously and diligently, but a gun, a baton, and handcuffs are simply not the proper tools, and imprisonment is not the proper approach.

If we are to learn from the current pandemic, I think it should be that a public health approach to social problems is both more rational and more humane. We would, of course, never throw somebody in jail for testing positive for COVID-19, even if having the disease can put other people’s lives at risk. When it comes to disease, we do not think of it as a problem of individual choice, personal responsibility, and deserved punishment. Just so, I think that we should see drug use, prostitution, school misbehavior, petty theft, and unauthorized immigration as processes that are driven by factors that go far beyond individual choice, and which merit coordinated social support rather than criminal prosecution. Imagine if the thousands of dollars that were spent sending George Floyd to jail for a $10 drug deal were instead spent on improving his situation.

As one final point, I think there is a significant factor of police violence that is not addressed in this book: gun ownership. If we choose to live in a society where, at any moment, somebody can open fire into a crowd, then I think this puts serious constraints on the degree to which we can disarm or reduce police forces. So many stories of police killings involve somebody being killed for reaching into their pocket, holding a shiny object, or even for a car backfiring. In places where gun ownership is rare, this almost never happens. This is another issue that could benefit from a public health approach. But even if we eliminated all civilian guns in the country, we would still be left with policing practices that exacerbate, rather than alleviate, the immense social divides in America. With a little bit of imagination, I think we can find a better way.

View all my reviews

Review: Paying the Price

Paying the Price: College Costs, Financial Aid, and the Betrayal of the American Dream by Sara Goldrick-Rab

My rating: 4 of 5 stars

Soaring rhetoric about the value of hard work obscures the fact that family money has long been one of the best predictors of college success.

Among the so-called developed nations, the United States is unique in many respects, not least in higher education. Whereas going to university is either free or quite cheap in most of Europe, in America college can often be a serious financial burden. Admittedly, America also has many of the best universities in the world, so perhaps the extra cost is justified for the small fraction of students who attend these elite institutions. But costs are also high in public universities; indeed, even our community colleges are expensive by European standards.

This book is not primarily concerned with why the cost is so high. Instead, sociologist Sara Goldrick-Rab focuses her attention on how students go about paying for it. To do this, she conducted the Wisconsin Scholars Longitudinal Study, in which she and her team followed 6,000 low-income students from 2008 onwards. The study involved many surveys and much statistical work, but also in-depth interviews with selected students. Her aim was to find out how the various forms of financial aid—grants, loans, work-study—affect graduation rates. With such a wealth of evidence, Goldrick-Rab can speak with quite a bit of authority about the challenges faced by low-income students.

The picture that emerges is of a complicated and often ineffective financial aid bureaucracy, which allows too many low-income students to fall through the gaps. Much of this is due to financial aid being outpaced by the rising cost of college. Consider the Pell Grant, the primary subsidy the federal government provides to low-income students. When the program was created, in the early 1970s, the maximum grant covered roughly 80% of the cost of attending a public four-year college. Nowadays, a Pell Grant covers less than a third of that cost. Meanwhile, contributions by state governments have steadily dropped off, leaving students with ever-higher tuition bills.

Federal loans have arisen as a way to bridge the gap between the rising price of attending college and the decreasing purchasing power of grants. Though better than private student loans, even federal loans cannot be discharged through bankruptcy. To qualify, the student must fill out a FAFSA application every year, which determines what types of loans the student qualifies for, how large a loan, and the “expected family contribution”—the amount the government expects the student’s family to be able to pay.

In Goldrick-Rab’s telling, there are many ways in which this system fails. One obvious difficulty is the expected family contribution. Even if a student’s family has the money available, they may not be willing to pay; and of course the government cannot force families to make the expected contribution. What is more, the formulas used by the government to determine how much a student’s family can pay are not always realistic. They do not account for the reality that, rather than being financially supported by their families, many students financially contribute to their families while they are studying.

Basing financial aid on the previous year’s tax returns can also subject students to sudden shifts in support. If a parent gets a new job, for example, a student may find that they no longer qualify for grants or subsidized loans, and that their expected family contribution has risen drastically. To take an example from my own life, the FAFSA also factors in whether a student has any siblings in college; the graduation of a sibling can cause a dramatic reduction in financial support.

The call for increased financial aid has sometimes been met with the response that students ought to work more during university. But this advice is misguided for a multitude of reasons. For one, students are already working a great deal; and the days when they could pay their way through college by stacking boxes during the summers are long gone. The students in this book did work, often for long hours, making the minimum wage of $7.25 an hour. At that wage, it is difficult to make a significant dent in the cost of attending college (the average debt of a graduating student is about $37,000).
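
To put some rough numbers on this (my own back-of-the-envelope arithmetic, not a figure from the book, and assuming a student who manages twenty hours of work a week, every week of the year, at the federal minimum wage):

$$20\ \text{hours/week} \times 52\ \text{weeks} \times \$7.25/\text{hour} = \$7{,}540\ \text{per year, before taxes}$$

Even doubling those hours, at the cost of study and sleep, would not come close to covering tuition, housing, and food at most four-year institutions, which is precisely the author’s point: work alone cannot close the gap.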

Working while in school presents other difficulties, too. Scheduling is an obvious example. Most college classes are during normal work hours, which forces many of the students in the study to work night shifts and weekends. Time spent working is time spent not studying, and often time spent not sleeping, which hinders academic progress. The requirements on loans and grants only compound the problem of balancing work and study, since many forms of financial aid require a minimum GPA and full-time enrollment. This can result in a double-bind, since if a student drops one class to focus on the others, she may switch from full-time to part-time; and if she keeps all her classes, her GPA may suffer.

The Federal Work-Study program provides financial assistance to students working on campus, and according to Goldrick-Rab is a beloved program. However, the rules for applying to the program are somewhat confusing. Many students assume that acceptance into the program means that they will be given a job. But this only means that they can start applying to jobs that are covered under the program. Aside from this, work-study depends on the amount of funds and work available at a given moment, and so has not proved to be a reliable way for most students to pay for college.

College is meant to be a stepping-stone to a greater life. But for many of the people in this book, their time at university became a weight around their necks. As low-income students struggled to balance family, jobs, and study—negotiating a complex financial aid system that relies heavily on loans—they often found themselves unable to keep up with their classes and unable to pay for even basic living expenses. Indeed, since there is no university equivalent of subsidized school lunch programs, many of the students were literally too hungry to focus.

Worst of all, if a student decides that they cannot complete their studies, they not only lose the chance at a degree; they are also saddled with debt, without the improved job prospects needed to pay it off. As a result, the decision to study at university carries considerable financial risk in the United States, especially for low-income students. If a student succeeds in graduating, they will have significantly better job opportunities than their peers; but if they fail, they will be significantly worse off than their peers who did not even try.

To me it seems clear that the American system for funding higher education is not working. Student debt is the fastest-growing type of debt, and the second-largest category of private debt in the country, after home mortgages. This was not true twenty years ago. Collectively, some $1.5 trillion is owed by 45 million borrowers—about 7.5% of America’s GDP. Roughly one in ten students defaults on their loan payments within three years, and, unsurprisingly, default rates are three times higher among students who did not complete their degrees. When you consider that education has one of the highest returns on investment of any government expenditure, our failure to support it is especially baffling.
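
For scale (again my own arithmetic, assuming a U.S. GDP of roughly $20 trillion at the time these figures were reported):

$$\frac{\$1.5\ \text{trillion}}{\$20\ \text{trillion}} = 7.5\% \qquad\qquad \frac{\$1.5\ \text{trillion}}{45\ \text{million borrowers}} \approx \$33{,}000\ \text{per borrower}$$

That per-borrower average is broadly consistent with the roughly $37,000 owed by the typical graduating student cited above.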

Why has higher education become so outrageously expensive? I am still unclear on this, and it is not the main focus of this book. But we are not powerless to change it. In so many other countries, the decision to study at university is not nearly so fraught with financial peril. Indeed, the phrase “Student Debt,” like “Medical Debt,” is virtually unknown here in Spain. Of course, this is not an isolated problem. Even if we fixed financial aid, students from poorer neighborhoods would still be at a serious disadvantage, not least because American public schools are funded by property taxes (and so reflect a neighborhood’s wealth). But making higher education affordable and financially risk-free would go a long way toward bolstering the economy and widening opportunity.



View all my reviews

Review: A War on Normal People

The War on Normal People: The Truth About America’s Disappearing Jobs and Why Universal Basic Income Is Our Future by Andrew Yang

My rating: 4 of 5 stars

I admit that I hardly paid attention to Andrew Yang during the primaries. I knew that he was for Universal Basic Income (UBI), but little else; and it did not seem to matter, given his poll numbers. But during the economic fallout caused by the coronavirus lockdowns, UBI has started to look far more reasonable (especially after I received a direct deposit from the federal government!). So I decided that it was time to take a second look.

This book could easily have been mushy pap—a boilerplate campaign book published only for publicity. Yang could have gone on and on about his good work with Venture for America, all the inspiring young people he met, all the businesses he helped grow, and all of the wonderful places he visited across America. He could have talked about his own story, from first-generation American to entrepreneur and politician. Some of that is in here, of course, but not nearly as much as one might expect. Instead, Yang has written a serious work on the problems facing America.

Yang covers a remarkable amount of ground in this short book—video game addiction, the importance of malls to communities, the rising cost of university—but his primary message is fairly simple: automation is going to eliminate many millions of jobs, and we need to transform the economy accordingly. As someone with many friends in Silicon Valley, Yang speaks convincingly on the subject of automation. An obvious example is self-driving cars. Once the technology becomes reliable enough, virtually all driving jobs are threatened. Considering the number of people whose work involves transporting either passengers or cargo, the impact of this alone could be dramatic. What would happen to all the taxi, bus, and truck drivers of the world?

But according to Yang, self-driving cars would be only the beginning. While automation may call to mind robotic arms laboring in factories, white-collar jobs are also susceptible to automation. Chances are, if you work in an office, at least some of your work is rote and repetitive; and that means a computer could potentially do it, and do it far better than you can. While most of us are far removed from the world of artificial intelligence, those within that world routinely seem alarmed by the prospect of increasingly powerful A.I. Every year a new program accomplishes another “impossible” task, such as mastering the ancient Chinese game of Go. Yang even mentions computer-written symphonies and computer-generated artwork! (I was happy to note, however, that Yang did not seem to think teachers could be automated away.)

This will result in still more intense economic stratification. Many parts of America have already been hollowed out by recent economic trends. Most of the country’s factories have closed, destroying some of the best-compensated blue-collar jobs. The rise of online retail—only accelerated by the coronavirus crisis—threatens to permanently destroy much more employment. Of course, when some jobs are eliminated, other types of jobs come into being. But we cannot rely on this process to correct the imbalance—first, because automation destroys more jobs than it creates (think of the single troubleshooter overseeing five self-checkout registers), and second, because the new jobs usually require different skills and exist in different parts of the country.

There are many proposed solutions in this book, but Yang’s signature idea is UBI. This would be a monthly payment of $1,000, or $12,000 a year, to every citizen over the age of 18, and it would be given a very patriotic name: the Freedom Dividend. Yang proposes to pay for this with a Value Added Tax (VAT) of 10%. (Before reading this book I was unaware of the difference between a VAT and a sales tax: a VAT is levied at every step of the production process, not just at the final sale. This has the added advantage of taxing automated industries, since robots do not pay income tax.) But the hefty price tag of UBI would also be partially offset by the reduction or elimination of other government welfare programs. And, of course, if you put more money into the hands of consumers, most of them will spend rather than save it, and this will in turn increase tax revenue.
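
To get a sense of the scale involved (my own rough arithmetic, not Yang’s official costing; the figure of roughly 250 million American adults is an assumption):

$$250\ \text{million adults} \times \$12{,}000\ \text{per year} = \$3\ \text{trillion per year, gross}$$

Yang’s argument, as I read it, is that the VAT revenue, the consolidation of existing welfare programs, and the extra tax receipts from increased consumer spending would together offset a large share of that gross figure.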

One obvious objection to UBI is that, by giving money indiscriminately, we will inevitably be giving it to people who do not need it. The most apposite reply to this objection, for me, is that subjecting government assistance to means-testing creates a host of problems. For one, there is a great deal of cumbersome bureaucracy involved in determining whether a particular person ‘deserves’ aid—bureaucracy that would be rendered entirely redundant by UBI, since the checks could simply be sent out through the IRS. Indeed, this bureaucracy only creates added waste, since many NGOs exist simply to help people navigate the complex government paperwork. Of every, say, $100 spent on welfare, what portion goes to those in need, and what portion to the paychecks of bureaucrats laboring to determine who gets the money and how they can spend it?

Indiscriminate giving would also eliminate the pesky problem of disincentivizing work. At the moment, Republicans and Democrats are in a dispute over this very issue, as Republicans are arguing that the extra $600 of unemployment money (as part of the coronavirus aid package) will encourage people not to work. While some on the left disagree, personally I think this is a rather strong objection—not to giving people money, but to making the money conditional on not having a job. The same issue is present in many other sorts of government aid, such as disability payments, which cease as soon as the recipient becomes employed. If the money were unconditional, however, then people would have no disincentive to work; on the contrary, they would be able to substantially improve their economic situation by working, perhaps even making enough to start saving and investing.

UBI, then, has potential appeal both for those on the left and for those on the right. Those on the left may like it because it is a way of redistributing wealth, while those on the right may like it because it is a way of shrinking the government. The latter statement might seem more far-fetched, but I do think that a solid conservative case could be made for UBI. After all, Milton Friedman was an avid supporter of the closely related idea of a negative income tax, for a multitude of reasons: it shrinks government, it reduces paternalism, it promotes both work and consumption, and it avoids dividing people into different categories.

This last point merits some comment. Presently, a great deal of anti-welfare rhetoric is concerned with parasitism—the idea that lazy people are simply ‘on the dole,’ dragging down the rest of society. It is the perfect recipe for shame and resentment, since inevitably it divides up society into groups of givers and takers; and even the best government bureaucracy in the world could not hope to distribute money in the fairest way possible. Inevitably, some people who ‘deserve’ aid will not get it; and others who do not ‘deserve’ it will—since no definition of ‘deserving’ will be perfect, and in any case there is no way of perfectly measuring how much somebody ‘deserves.’

UBI works against this psychology in a powerful way, by being entirely indiscriminate. Though the rich would be paying more in taxes than they receive back, they too would receive their monthly payment, and I think this fact alone would help create an added sense of social solidarity. UBI would be something shared by everyone, everywhere, rather than something that marks you out as being poor and dependent, a mark of stigma and shame. This strikes me as quite a positive thing in the age of dramatic political polarization.

Another aspect of UBI that I find deeply appealing is that it will give people the freedom to pursue less well-remunerated, but more socially beneficial, work. As Yang points out, many of the most humanly important jobs—being a parent, an artist, or even an online book reviewer—are quite poorly compensated, if they are compensated at all. An economist might argue that this is justified, since the free market determines the value of work based on supply-and-demand. But I think that this logic will become less appealing as robots start to out-compete humans. Indeed, perhaps automation will erode our faith in the wisdom of markets and meritocracy, since it will be difficult to believe that a delivery drone is more deserving than a delivery driver, even if it gets more work done.

There are, of course, many objections to UBI, one being that it will encourage widespread free-loading. But the evidence for this is quite weak. As Yang demonstrates, in the many UBI trials that have been conducted, work reduction was quite low, mostly taking place among new mothers. And as I mentioned above, our current welfare system arguably encourages free-loading far more effectively than UBI would, since UBI does not disincentivize work. In any case, I think all of us—especially new mothers!—could do with a modest reduction in work hours, given that study after study shows that long hours do not benefit productivity. Instead of having humans emulate machines, then, it would be far better to automate as much work as possible—since machines never sleep, never eat, and never get sick—and to focus on the remaining work that really does require a human touch.

Yang addresses many other objections to UBI, and most of his arguments are convincing. I do have one nagging question, however, and it is this: If the purchasing power of the general population is increased across the board, will the prices of food and housing rise correspondingly? Though I am economically naïve, it strikes me that this is bound to happen, at least to some extent, and it may partially offset the gains of UBI. But perhaps I am mistaken. Another question is whether automation will go as far as Yang predicts. I found most of his forecasts—particularly about self-driving vehicles—quite compelling. But it does seem possible that the effects will be less sweeping than Yang supposes. For example, I cannot imagine couples turning to an A.I. marriage counselor with the voice of Morgan Freeman, as Yang somewhat fancifully imagines.

In any case, while Yang’s twin themes of automation and UBI are his central message, his book has far more to offer. I particularly appreciated his portrayal of the economic plight facing many parts of America, and the increasingly stark divide between those with and without a college degree. For example, I often find myself forgetting that the majority of American adults do not have degrees, if only because almost all of my friends and family have one. Considering how many jobs—including low-skilled jobs—require a degree, this is a major economic disadvantage nowadays.

The fact that I can forget about this economic disadvantage is a measure of the degree to which different parts of the country are insulated from one another. And the university system is not helping to level the playing field. After all, most of the people who do obtain degrees are already from comparatively better-off families. Nor does the university system add to economic diversification, since students are pursuing an increasingly narrow range of majors; and after college, most graduates move to one of a handful of large cities. The result is an increasingly stark divide between Americans with college degrees, living in large cities and working in a shrinking number of industries, and those without degrees, living in more rural areas or hollowed-out cities. It is a pernicious process.

Yang also deserves credit for his mental flexibility. Besides UBI, this book contains a range of proposals, all of them quite new to me. Considering the degree to which political debate is dominated by decades-old proposals, I found this extremely refreshing. Admittedly, I do think that Yang’s Silicon Valley message failed to resonate with the voting public for a reason. While he has much to say about the future of America’s economy, he is less convincing on problems besetting many Americans now, most notably health care. Yang does favor a version of universal coverage, and he has some very intriguing things to say about how technology can change the role of the doctor, but I think it is fair to say that this was a minor part of his book.

Yet if this book fails as political marketing, it succeeds in being both a thoughtful meditation on the problems facing the average American, and a set of bold proposals to address these problems. While so many politicians come across as blindly ideological, stupidly partisan, or simply as creatures of the political system, Yang is intelligent, imaginative, and unconventional. I hope that this is not the last we hear from him.



View all my reviews

Review: Priced Out

Priced Out: The Economic and Ethical Costs of American Health Care by Uwe Reinhardt

My rating: 4 of 5 stars

Mantras about the virtues of markets are no substitute for serious ethical convictions.

There are a great many things for Americans to feel embarrassed about. Depending on your politics, you may bemoan the rise of identity politics and the snowflake culture predominating on college campuses; or perhaps you rage against racist policing or our lax gun laws. But I think that, as Americans, we can all come together and feel a deep and lasting shame over our health care system—specifically, how we finance it. According to Reinhardt, our system is so bad that it is routinely invoked at international conferences as a kind of bogeyman, an example of what to avoid. And after reading this book, it is easy to see why.

I did not suspect that our system was quite so bad until I left the country. But, in retrospect, the evidence was quite apparent. Virtually all of my friends have expressed anxiety about their health care at some point—high premiums, high deductibles, or simply no health insurance at all. I have seen family members spend weeks negotiating with insurance companies for payment of their medicines (and even after the insurance chips in, the cost is still breathtaking). Meanwhile, in my five years here, I have yet to hear a single Spaniard express anxiety over how they will pay for a medicine or a medical procedure. Here, as in most of Europe, this type of anxiety is quite uncommon.

The U.S. system fails on many different fronts. Most simply, there is coverage. Millions of Americans have no coverage, and millions more have inadequate coverage (such as many of my friends, whose deductibles are so high that they may as well not have insurance). Second is cost. Both medical procedures and medicines are significantly more expensive in the United States. For example, the drug Xarelto (for blood clots) costs $101 in Spain and $292 in the U.S. The average cost of an appendectomy is $2,003 in Spain and $15,930 in America. A third failure—closely related to cost—is waste. Our byzantine payment system requires doctors and hospitals to spend enormous amounts of time and money haggling with insurance companies, and those costs are ultimately passed on to the consumer.

But the most fundamental failure is a failure of ethics. Or perhaps it is better to say a lack of ethical vision. As Reinhardt explains, much of the debate on health care in America concerns itself with technicalities—risk pools, risk exposure, whether premiums should be actuarially fair or community-rated, and so on—and this debate conceals the fact that we have yet to reach a consensus on the moral foundations of health care. Most of the world’s developed nations have established their systems on the premise that health care is a social good. In the United States, on the other hand, we are muddled, at times treating health care as if it were a commodity, yet unwilling to face up to the implications of that choice—such as letting poor people die without treatment.

Aside from the ethical issues involved, health care has many features that make it unlike a typical commodity, and thus poorly governed by supply and demand. If I want to buy a car, for example, typically I am not in a great rush to do so. I can shop around, test-drive cars, compare prices across companies and locations, and read reviews. I can even decide that I do not want to buy a car after all, and buy a train pass instead. All of this helps to keep the price of cars in check, and it incentivizes car companies to give us the best value for our money.

None of this is the case in our health care system. The demand is non-negotiable and, very often, time-sensitive. Furthermore, most patients lack the knowledge needed to evaluate what procedures or tests are justified or not, so oftentimes we cannot even be fully aware of our own ‘demand.’ Besides that, we have no ability to compare prices or to compare treatment efficacy. And even if we are careful to go to a hospital in our insurance network, there may be doctors ‘out of network’ working there, leading to the ugly phenomenon of surprise medical bills.

Taken together, it is as if a car salesman blindfolded me, put a gun to my head, told me I had to choose a car in five minutes, and made himself the only source of information about which car I needed (and medical bills can be quite as expensive as cars!). This is the position of the American “consumer” of health care.

My own brief experience with emergency medical care highlights the situation. The only time that I have ever been taken to an emergency room, I was unconscious. I woke up after being transported by the ambulance. Luckily, I was quickly discharged, and I also had insurance. But even though my insurance covered the hospital bill, it did not cover the ambulance, which I had to pay out of pocket. Again, I was lucky, since I was able to afford it. Many cannot, however, and have the experience of waking up from an accident, an injury, or an operation in debt. How can you be an intelligent consumer when you are unconscious?

The helplessness of the consumer creates a perverse incentive in our system. There is little downward pressure on prices. Instead, what results is a kind of arms race between health care providers and insurers. Insurers are incentivized to put up as many barriers as possible to paying out, which requires doctors and hospitals to invest ever-more resources into their billing departments, which of course only increases the cost to the patient. In many hospitals, there are more billing clerks than hospital beds; and when you realize that these billing clerks have their own counterparts in the insurance companies, you can get some idea of the enormous bloat created by our financing system.

I think there is a particular irony to this situation, since our American insistence on market values has created a labyrinthine network of incomprehensible rules, endless paperwork, and legions of bureaucrats—the very thing that capitalist principles were supposed to eliminate. Indeed, ironies abound in our system. For example, we endlessly discuss the affordability of government programs, while the tax incentives for employment-based insurance (which cost the federal and state governments some $300 billion a year in foregone revenue) are rarely mentioned. What is more, while the insurance mandates of Obamacare were roundly criticized for forcing the healthy to subsidize the unhealthy, the exact same cross-subsidy, as Reinhardt points out, occurs in employment-based insurance. And as a final irony:

It is fair to ask why, if socialized medicine is so bad, Americans for almost a century now have preserved precisely that construct for their military Veterans, and, indeed, why the latter are so defensive and protective of that socialized medicine system.

After reading this review, you may be excused for thinking that this book is a fiery manifesto about the evils of the system. Far from it. Uwe Reinhardt was a prominent economist and much of this book consists of tables and graphs. The writing is, if anything, on the dry side, and the tone is one of intellectual criticism rather than passionate outrage. Yet, strangely, this is why I found the book so effective. It is one thing for an arm-swinging socialist to condemn the evils of the system, but quite another for a calm economist to go through the data, point by point, and explain how it all works and how it compares with other countries’ performance.

You may also be excused for thinking that, given all this, Reinhardt would be an advocate for a single-payer system in the United States. After all, he was one of the architects of Taiwan’s single-payer system, which costs about 6% of the country’s GDP. (For comparison, America’s system costs us 17% of GDP!) But Reinhardt thinks that such a system would not work on American soil. For one, the libertarian streak in our culture runs too deep for such a system to be broadly acceptable. More importantly, however, Reinhardt thinks that our campaign finance system is so corrupt that the health care lobby would be able to exert a heavy influence on the government, thus canceling the benefit.

He instead advocates for an ‘all-payer’ system. The idea is to consolidate the market power of consumers by having standard prices set either by the government or by associations of care providers and insurers. This would, at the very least, eliminate the wild price variability that can be found within even a single American city. It also helps to bring costs down, as demonstrated in Maryland, which has had an all-payer system for decades. Japan’s system is built on the same principle, and the country spends far less per capita on health care, despite having a significantly older population than the United States.

In normal times, I was not exactly optimistic about the prospect of reforming our broken health care system. But in the wake of this pandemic, it does seem as if major reforms might be not only possible but inevitable. Employment-based insurance makes little sense if people lose their jobs during a major health crisis, as has already happened to many millions of Americans. And high unemployment may persist for some time. What is more, a major health crisis, resulting in many thousands of additional hospital stays, will put pressure on private insurers and lead to a significant rise in premiums. Basically, higher-risk patients create higher costs, and a pandemic puts far more people into the high-risk category. The added strain on an already teetering system may be the straw that breaks the camel’s back. We shall see.

View all my reviews

Review: The Uninhabitable Earth

The Uninhabitable Earth: Life After Warming by David Wallace-Wells

My rating: 4 of 5 stars

You do not need to consider worst-case scenarios to become alarmed.

In normal times, the apocalypse bored me. Any discussion of catastrophic events put me in a mood of defensive skepticism. This was true whether I was considering an asteroid, a supervolcano, or rampant artificial intelligence—events that are so far out of my control that I would immediately dismiss them from my mind. However, the current coronavirus situation prompted me to read a book by the epidemiologist Michael Osterholm, which includes a detailed prediction of what the next pandemic could look like. The experience of reading a scientist predict, with considerable accuracy, what I was living through was profoundly eerie. And as a result, I was prepared to take a book about how climate change will play out a little more seriously.

The picture is not rosy. Much like a pandemic, climate change is not localized; it strikes everywhere at once. If a bad hurricane hits one city, we can rush resources to the area. But if we are battered by massive hurricanes, destructive floods, severe droughts, raging wildfires, and deadly heatwaves, in many different parts of the country, over and over again, then we cannot effectively respond. To put it mildly, dealing with ever-escalating natural disasters will take an increasingly severe economic toll—measured in the hundreds of trillions of dollars, by Wallace-Wells’s calculation—and will also put huge pressure on political systems, further reducing our ability to respond.

Wallace-Wells paints this coming future in such vividly chilling detail that even the most stoic reader will have an elevated pulse. Indeed, in this future world, the term “natural disaster” will start to lose its meaning, since disasters will be so common as to simply become “weather”—and they will be, in part, caused by human activity. Millions may be displaced by rising sea levels, at a time when we face food and freshwater shortages from drought and desertification. This is not a world that any of us would freely choose.

The tone lightens somewhat—from pitch black to ashen gray—in the third section, where Wallace-Wells shifts from painting the looming threat to predicting how our culture might respond. His conclusion, in a nutshell, is that the scope of the threat is so all-encompassing that we cannot psychologically come to terms with it. For example, many Americans are tempted to blame climate change on Republicans; but this obscures the fact that the Republican Party is the only major climate-change-denying party in the world, and that the United States does not produce the majority of the world’s carbon emissions—not by a long shot.

Other common reactions to the crisis come in for criticism as well. One is to focus on individual behavior: reducing consumption and purchasing the most eco-friendly products possible. But individual choices do not, Wallace-Wells thinks, have the potential to make more than the tiniest difference. For example, though there is much scolding of people for, say, watering their lawns during a drought, personal water consumption is only a small fraction of society’s total. Only large-scale changes in infrastructure can make the necessary difference, and those must come from political pressure.

In both of the above cases, the common thread is the inability of our normal moral circuitry to deal with the problem. We want to tell a story with heroes and villains who are directly responsible, through their personal choices, for the crisis we are facing. But the reality, as Wallace-Wells says, is that culpability is widely dispersed and our responsibility is collective, not individual. This goes sharply against the grain of our psychology, which I think partly accounts for our inaction.

One more common reaction is to assume that technology will save us. There is, of course, very little evidence for this; what it amounts to is using a blind faith in timely innovation to justify inaction. At the moment, carbon-capture technology is so inefficient that we would need hundreds of acres of such plants to make a difference. Another option is to start pumping sulfur dioxide into the atmosphere, in order to make it more reflective of sunlight—but of course this would have quite awful effects on the environment and human health. And as Wallace-Wells points out, we have reached an odd stage in history when these ideas strike people as more practicable than reducing our consumption of fossil fuels—as if the market were a more unbending force than the climate itself.

Advocates have a variety of emotions to choose from if they wish to motivate people—hope, outrage, and fear being the most common. Wallace-Wells leans heavily on fear, which arguably puts this book in the same tradition as Rachel Carson’s Silent Spring. But is fear the right choice? Certainly Carson was effective; and since The Uninhabitable Earth was a #1 best-seller, it seems that Wallace-Wells achieved his goal. However, the challenge of getting rid of a pesticide pales to nothingness in comparison with the challenge of reconfiguring our economies, infrastructures, and ways of life. Can fear propel us through this great transition? Personally, I found the tone of the book so bleak that I was exhausted before I even reached the end. But I suppose everyone will react in their own way.

Will the COVID-19 pandemic make us more inclined to trust the warnings of scientists? I hope so, though perhaps that is asking too much. And as we collectively recover from the economic downturn, will we use the opportunity to pass something like the Green New Deal? I hope so, too, though I very much doubt it. Indeed, for me, one of the biggest lessons of this pandemic has been that we need not resort to selfish evil as an explanation for climate inaction. Virtually nobody had anything to gain from the pandemic, and it came anyway, catching every Western nation with its proverbial pants down—despite repeated warnings from epidemiologists. Human stupidity, then, is a sufficient explanation for our climate inaction. And, unfortunately, that will be around until the earth truly is uninhabitable.

[I did not scrupulously check for errors, but I still caught two. Wallace-Wells says that “1 in 6” people die from air pollution, but the true figure is about 6-7%—still a lot, but much less than 1 in 6. Later on, Wallace-Wells says: “H.G. Wells’s The Time Machine, which depicted a distant future in which most humans were enslaved troglodytes, laboring underground for the benefit of a pampered and very small aboveground elite…” But of course this is an incorrect description of the book: most humans are not enslaved, but prey; and they live above ground, while the predators live below. It seems odd to me that he would reference a book that he clearly did not read. I am sure there are more errors lurking about.]


View all my reviews

Review: Mosquito

Mosquito: A Natural History of Our Most Persistent and Deadly Foe by Andrew Spielman

My rating: 4 of 5 stars

This is a thoroughly fascinating book about one of my least favorite things in the world. And I am one of the lucky ones. Even when those around me are getting eaten alive, I am normally spared the worst of the mosquito onslaught, for reasons that remain largely elusive. Indeed, when I was an undergraduate studying in Kenya, one of my classmates did a small study on us, counting our bites and trying to see if they correlated with blood type or with other variables like perfume or shampoo. Since all of us kept the same schedule, it seemed a promising study. But, alas, no insight was gained, though I was surprised to find that some of us had well over 60 bites while I had fewer than 10.

Yet mosquitoes are more than annoyances; they are major vectors of disease, as I was reminded daily when I took my malaria prophylactic. After giving the reader some basic facts of mosquito biology, the book shifts its focus to disease control. There was much I did not know. For example, I had no idea that malaria was once present in New Jersey and New York, until aggressive government policies in the early 1900s eliminated the scourge. Similarly, I had no notion of the role that the Tennessee Valley Authority played in freeing the American South from the malarial menace, largely by destroying mosquito breeding sites.

I also learned more about the story of yellow fever in the Americas. Though it may seem obvious to us nowadays that a disease can be transmitted by a mosquito bite, this was quite a controversial claim in the year 1900. It took careful work by a team of doctors in Cuba to prove that mosquitoes, not blood or bile, communicated the illness. This insight quickly led to the program of insect control that was instrumental in the building of the Panama Canal—a project that had proven impossible for the French, who labored in ignorance of the disease’s cause and had to abandon the effort as thousands of workers succumbed.

The authors also have much to say on the subject of DDT. Having read only Rachel Carson’s Silent Spring, I had been exposed only to the argument against this popular pesticide. But Spielman and D’Antonio make a good case that, when the pesticide is used responsibly, its benefits far outweigh its health risks. Unfortunately, DDT was used so extensively during the anti-malaria campaigns of the 1950s that it has lost much of its efficacy through accumulated resistance in mosquito populations. Spielman (the book’s entomologist) believes that this effort was ill-conceived, since it aimed at the impossible goal of total vector elimination, and it only resulted in the blunting of DDT, our most powerful weapon (not to mention a loss of acquired resistance in human populations, owing to the temporary reduction in malaria rates).

Malaria remains a major problem in vast areas of the world. We do not have an effective vaccine, and the Plasmodium parasite that causes the disease can evolve in response to drug treatments just as mosquitoes can evolve in response to DDT. And while those in temperate climates may be inclined to view malaria as a distant concern, this may soon prove not to be the case, as global warming expands the range of malaria-carrying mosquitoes northward. For my part, I think we are due for another big anti-malaria push, this time using smarter methods. But like the mosquito itself, the malaria parasite is one of our oldest enemies, having evolved alongside us for millions of years; so it will not be easy.

The authors close with a modern example of a tropical disease reaching a temperate zone: the 1999 West Nile outbreak in the New York City region. Surprisingly, I can remember this, even though I was only eight years old at the time. My mother told me that I had to stay inside on a beautiful summer night because they were spraying for mosquitoes; soon, the helicopter came roaring by, dusting the area with insecticide. My brother remembers the playground at his kindergarten being covered with a tarp to keep it from getting sprayed. Such efforts did not succeed in eliminating West Nile from the United States, and it now circulates in local bird and mosquito populations, closely monitored.

If the current pandemic helps spur us toward more aggressive public health measures, then I think mosquito control should be near the top of the agenda. As Spielman himself notes, mosquitoes do not serve any crucial function in ecosystems—not as pollinators, not even as prey—and they are the most significant animal vectors of disease on the planet. Indeed, the mosquito is so perfectly useless and so perfectly dreadful that you wonder how anyone can maintain faith in an almighty and infinitely loving God when faced with such a horrid product of blind evolution. They really are awful little things. And though we can never hope to eliminate them entirely, there is hope that we can break the chain of disease transmission long enough to make their bites mere itchy annoyances rather than harbingers of doom.



View all my reviews

Review: Plagues and Peoples

Plagues and Peoples by William H. McNeill

My rating: 4 of 5 stars

Looked at from the point of view of other organisms, humankind therefore resembles an acute epidemic disease, whose occasional lapses into less virulent forms of behavior have never sufficed to permit any really stable, chronic relationship to establish itself.

It is risky to write a book like this. When William H. McNeill set out to analyze the manifold ways that infectious diseases have shaped world history, it was an almost entirely novel venture. Though people had been writing history for millennia, specialized works on how civilizations have been shaped by illness were few and far between. This seems rather strange when you consider that it was only in the twentieth century that disease began reliably causing fewer wartime casualties than enemy action.

Perhaps thinking about faceless enemies like viruses and bacteria simply does not come naturally to us. We personify the heavens readily enough, and do our best to appease them. But it is more difficult to personify a disease: it strikes too randomly, too mysteriously, and often too suddenly. It is, in other words, a completely amoral agent; and the thought that we are at the mercy of such an agent is painful to consider.

This tendency to leave disease out of history books has come down to our own day. The 1918 flu pandemic receives a fraction of the coverage in standard textbooks that the First World War does, even though the former caused more deaths. Curiously, that terrible disease did not even leave a lasting impression on those who survived it, judging by its absence from the works of the major writers of the day. It seems that the memory of disease fades fast, at least most of the time. The 1968 Hong Kong flu killed 100,000 Americans that year (which would translate to about 160,000 today), and yet neither of my parents remembers it.

This is why I think this book was a risky venture: there was not much precedent for successful histories of disease. Further, since there was little prior research, much of the book must perforce consist of speculation built on the spotty records that exist. While this leaves the historian open to the criticism of making unfounded claims, as McNeill himself says, such speculations can usefully precede more thorough inquiry, since they at least give researchers an orientation in the form of theories to test. Indeed, in my opinion, speculative works have just as important a role as careful research in the advancement of knowledge.

McNeill most certainly cannot be accused of a lack of ambition. He had completed an enormous amount of research to write his seminal book on world history, The Rise of the West; and this book has an equally catholic orientation. He begins with the emergence of our species and ends with the twentieth century, examining every inhabited continent (though admittedly not in equal detail). The result is a tantalizing view of how the long arc of history has been bent and broken by creatures lighter than a dust mite.

Some obvious patterns emerge. The rise of agriculture and cities created population densities capable of supporting endemic diseases unknown to hunter-gatherers. Living near large masses of domesticated animals contributed much to our disease regimes; and the lack of such animals was decisive in the New World, leaving indigenous populations vulnerable to the invading Europeans’ microbes. Another recurring pattern is that of equilibrium and disturbance. Whenever a new disease breaks in upon a virgin population, the results are disastrous. But eventually stasis is achieved, and the population begins to rebound.

One of McNeill’s most interesting claims is that the great population growth that began in the 18th century was partly a result of a new disease regime. By that time, fast overland and sea travel had exposed most major urban centers to common diseases from around the world, thus rendering them less vulnerable to new shocks. I was also surprised to learn that it was only the rise of modern sanitation and medicine—in the mid 19th century—that allowed city populations to be self-sustaining. Before this, cities were population sinks because of endemic diseases, and required constant replenishment from the countryside in order to maintain their numbers.

As I hope you can see, almost fifty years after publication, this book still puts forward a compelling view of world history. And I think it is a view that we still have trouble digesting, since it challenges our basic sense of self-determination. Perhaps one small benefit of the current crisis will be an increased general curiosity about how we still are, and have always been, mired in the invisible web of the microscopic world.



View all my reviews

Review: A Journal of the Plague Year

Review: A Journal of the Plague Year

A Journal of the Plague Year by Daniel Defoe

My rating: 3 of 5 stars

It was a very ill time to be sick in…

My pandemic reading continues with this classic work about one of the worst diseases in European history: bubonic plague. Daniel Defoe wrote this account when the boundaries between fiction and non-fiction were looser. He freely mixes invention, hearsay, anecdote, and real statistics, in pursuit of a gripping yarn. Defoe himself was only a young boy when the Great Plague struck London, in 1664-6; but he writes the story in the person of a well-to-do, curious, if somewhat unimaginative burgher, with the initials “H.F.” The result is one of literature’s most enduring portraits of a city besieged by disease.

Though this account purports to be a “journal,” it is not written as a series of dated entries, but as one long scrawl. What is more, Defoe’s narrator is not the most orderly of writers, and frequently repeats himself or gets sidetracked. The book is thus rather slow and painful to read, since it lacks any conspicuous structure to grasp onto and instead approaches a kind of bumbling stream of consciousness. Even so, there are so many memorable details and stories in this book that it is worth the time one spends with it.

The Great Plague carried off one fourth of London’s population—about 100,000 souls—and it was not even the worst outbreak of plague in the city. The original wave of the Black Death, in the Middle Ages, was undoubtedly worse. Still, losing a quarter of a city’s population is something that is difficult for most of us even to imagine. And when you consider that the Great Fire of London was quick on the plague’s heels, you come to the conclusion that this was not the best time to be a Londoner.

What is most striking about reading this book now is how familiar it is. The coronavirus is no bubonic plague, but it seems our reactions to disease have not changed much in the centuries since. There are, of course, the scenes of desolation: empty streets and mass graves. The citizens anxiously read the statistics in the newspaper, to see if the numbers are trending upwards or downwards. And then there are the quacks and mountebanks, selling sham remedies and magical elixirs to the desperate. We also see the ways that disease affects the rich and the poor differently: the rich could afford to flee the city, while the poor faced disease and starvation. And the economic consequences were dreadful—shutting up businesses, leaving thousands unemployed, and halting commerce.

Medical science was entirely useless against the disease. Nowadays, we can effectively treat the plague with antibiotics (though the mortality rate is still 10%). But at the time, little could be done. Infection with the bacillus causes swollen lymph nodes—in the groin, armpits, and neck—called buboes, and it was believed that the swellings had to be punctured and drained. This likely did more harm than good, and in practice the plague doctors’ only useful purpose was to keep records of the dead.

Quite interesting to observe were the antique forms of social distancing (a term that of course did not exist) that the Londoners practiced. As now, people tried to avoid going out of their homes as much as possible, and if they did go out they tried to keep a distance from others and to avoid touching anything. Defoe describes people picking up their own meat at the butcher’s and dropping their money into a pan of vinegar to disinfect it. There was also state-mandated quarantining, as any house with an infection got “shut up”—meaning the inhabitants could not leave.

Ironically, though these measures would have been wise had the disease been viral, they made little sense for a disease communicated by rat fleas. (Defoe does mention, by the way, that the people put out rat poison—which probably helped more than all of the distancing.)

One more commonality is that the disease outlasted people’s patience and prudence. As soon as an abatement was observed in the weekly deaths, citizens rushed out to embrace each other and resume normal life, despite the warnings of the town’s physicians. Not much has changed, after all.

So while not exactly pleasant to read, A Journal of the Plague Year is at least humbling for the contemporary reader, as it reminds us that perhaps we have not come so far as we thought. And it is also a timely reminder that, far from a novel and unpredictable event, the current crisis is one of many plagues that we have weathered in our time on this perilous globe.

[Cover photo by Rita Greer; licensed under FAL; taken from Wikimedia Commons.]

View all my reviews

Review: The Plague

Review: The Plague

The Plague by Albert Camus

My rating: 5 of 5 stars

Officialdom can never cope with something really catastrophic.

As with all of Camus’s books, The Plague is a seamless blend of philosophy and art. The story tells of an outbreak of plague—bubonic and pneumonic—in the Algerian city of Oran. The narration tracks the crisis from beginning to end, noting the different psychological reactions of the townsfolk; and it must be said, now that we are living through a pandemic, that Camus is remarkably prescient in his portrayal of a city under siege from infection. Compelling as the story is, however, I think its real power resides in its meaning as a parable of Camus’s philosophy.

Camus’s philosophy is usually called absurdism, and explained as a call to embrace the absurdity of existence. But this is not as simple as giving up church on Sundays. Absurdism is, indeed, incompatible with conventional religion. Camus makes this abundantly clear in his passage on the priest’s sermon—which argues that the plague is God’s punishment for our sins—an idea that Camus finds incompatible with the randomness of the disaster, which appears out of nowhere and strikes down children and adults alike. But absurdism is also incompatible with traditional humanism.

The best definition of humanism is perhaps Protagoras’s famous saying: “Man is the measure of all things.” In many respects this seems to be true. Gold is valuable because we value it; an elephant is big and a mouse is small relative to human size; and so on. However, on occasion, the universe throws something our way that is not made to man’s measure. A plague is a perfect example of this: an ancient organism, too small to see, which can colonize our bodies, causing sickness and death and shutting down conventional life as we know it. Whenever a natural disaster makes life impossible, we are reminded that, far from being the measure of all things, we exist at the mercy of an uncaring universe.

This idea is painful to contemplate. Nobody likes to feel powerless; and the idea that our suffering and striving do not, ultimately, mean anything is downright depressing. Understandably, most of us prefer to ignore this situation. And of course economies and societies invite us to do so—to focus on human needs, human goals, human values—to be, in short, humanists. But there are moments when the illusion fades, and it does not take a pandemic. A simple snowstorm can be enough. I remember watching snow fall out of an office window, creating a blanket of white that forced us to close early, go home, and stay put the next day. A little inclement weather is all it takes to make our plans seem small and irrelevant.

A plague, then, is an ideal situation for Camus to explore his philosophy. But absurdism does not merely consist in realizing that the universe is both omnipotent and indifferent. It is also a reaction to this realization. In this book, Camus is particularly interested in what it means to be moral in such a world. And he presents a model of heroism very different from the one we are used to. The humanist hero is powerful and free—a person who could easily have chosen not to be a hero, but who chose to out of goodness.

The hero of this story, Dr. Bernard Rieux, does not fit this mold. His heroism is far humbler: it is the heroism of “common decency,” of “doing my job.” For the truth is that Rieux and his fellows do not have much of a choice. Their backs are against the wall, leaving them only the choice to fight or give up. An absurdist hero is thus not choosing between good and evil, but between a long and ultimately doomed fight against death—and death itself. It is far better, in Camus’s view, to take up the fight, since it is only in a direct confrontation with death that we become authentically alive.

You might even say that, for Camus, life itself is the only real ethical principle. This becomes apparent in the speech of Tarrou, Rieux’s friend, who is passionately against the death penalty. Capital punishment represents the height of denying the absurd: it decrees that a human value system is more valid than the basic condition of existence, and that we have the right to decide when existence is warranted and when it is not. To see the world with clear eyes means, for Camus, to see that life is something beyond any value system—just as the entire universe is. And the only meaningful ethical choice, for Camus, is whether one chooses to fight for life.

This book is brilliant because its lessons can be applied to a natural disaster, like a plague, or a human disaster, like the Holocaust. Indeed, before the current pandemic, the book was normally read as a reaction to that all-too-human evil. In either case, our obligation is to fight for life. This means rejecting ideologies that decree when life is or is not warranted; it means not giving up or giving in; and it means, most of all, doing one’s job.



View all my reviews

Review: And the Band Played On

Review: And the Band Played On

And the Band Played On: Politics, People, and the AIDS Epidemic by Randy Shilts

My rating: 5 of 5 stars

The story of these first five years of AIDS in America is a drama of national failure, played out against a backdrop of needless death.

Though this book has been on my list for years, it took a pandemic to get me to finally pick it up. I am glad I did. And the Band Played On is both a close look at one medical crisis and an examination of how humans react when faced with something that does not fit into any of our mental boxes—not our ideas of civil liberty, not our categories of people, and not our notions of government responsibility. As such, this book has a lot to teach us, especially these days.

Randy Shilts was working as a reporter for the San Francisco Chronicle, a position that allowed him to track the spread of the disease from nearly the very beginning. Putting this story together was a work of exemplary journalism, involving a lot of snooping and a lot more interviewing. What emerges is a blow-by-blow history of the crisis as it unfolded in its first five years, from 1980 to 1985. And Shilts’s lens is broad: he examines the gay community, the epidemiologists, the press, the blood banks, the medical field, the research scientists, and the politicians. After all, a pandemic is not just caused by a virus; it is the sum of a virus and a society that allows it to spread.

The overarching theme of this book is individual heroism in the face of institutional failure. There are many admirable people in these pages: epidemiologists trying to raise the alert, doctors struggling to treat a mysterious ailment, gay activists trying to educate their communities, and a few politicians who take the disease seriously. But the list of failures is far longer: from the scientists squabbling over claims of priority, to the academic bureaucracies squashing funding requests, to the blood bankers refusing to test their blood, to the government—on every level—failing to take action or set aside sufficient funding.

A lot of these failures were due simply to the sorts of people who typically caught AIDS: gay men and intravenous drug users. Because both of these groups were (and to some extent still are) social pariahs, major newspapers simply did not cover the epidemic. This was crucial in many respects, since it gave the impression that the disease simply was not worth worrying about (the news sets the worry agenda, after all), giving politicians an excuse to do nothing and giving people at risk an excuse not to take any precautions. The struggle within the gay community over how to proceed was particularly vexing, since it was their very efforts to preserve their sexual revolution that cost time and lives. As we are seeing nowadays, balancing civil liberties and disease control is not an easy thing.

But what made these failures depressing, rather than simply frustrating, was the constant drumbeat of death. So many young men lost their lives to this disease, dying slow and agonizing deaths while baffled doctors tried to treat them. When these deaths were occurring among gay men and drug users, the silence of the country was deafening. It was only when the disease showed the potential to infect heterosexuals and movie stars—people who matter—that society suddenly spurred itself into action. This seems to be a common theme of pandemics: society only responds when “normal” people are at risk.

Another common theme of pandemics is the search for a panacea. At the beginning of the AIDS crisis, there were many claims of “breakthroughs” and promises of vaccines. But we still have neither a cure nor a vaccine. Fortunately, treatment for HIV/AIDS has improved dramatically since this book was written, when a diagnosis meant death. Pills are now available (pre-exposure prophylaxis, or PrEP) which, if taken daily, can reduce the chance of contracting HIV through sex by about 99 percent. And effective antiretroviral therapies exist for anyone who has been infected, greatly extending lifespans.

Unfortunately, these resources are mostly available in the “developed” world. In Sub-Saharan Africa, where resources are scarce, the epidemic is still growing, taking many lives in the process. Once again, a disease is allowed to ravage communities that the world can comfortably ignore.

One day, a hardworking journalist will write a similar book about the current coronavirus crisis and our institutions’ response to it. And I am sure there will be just as much failure to account for. But there will also be just as much heroism.



View all my reviews