Review: A War on Normal People

The War on Normal People: The Truth About America’s Disappearing Jobs and Why Universal Basic Income Is Our Future by Andrew Yang

My rating: 4 of 5 stars

I admit that I hardly paid attention to Andrew Yang during the primaries. I knew that he was for Universal Basic Income (UBI), but little else; and it did not seem to matter, given his poll numbers. But during the economic fallout caused by the coronavirus lockdowns, UBI is starting to look all the more reasonable (especially after I received a direct deposit from the federal government!). So I decided that it was time to take a second look.

This book could easily have been mushy pap—a boilerplate campaign book only published for publicity. Yang could have gone on and on about his good work in Venture for America, all the inspiring young people he met, all the businesses he helped grow, and all of the wonderful places he visited across America. He could have talked about his own story from first generation American to entrepreneur and politician. Some of that is in here, of course; but not nearly as much as one might expect. Instead, Yang has written a serious work on the problems facing America.

Yang covers a remarkable amount of ground in this short book—video game addiction, the importance of malls in communities, the rising cost of universities—but his primary message is fairly simple: Automation is going to eliminate many millions of jobs, and we need to transform the economy accordingly. As someone with many friends in Silicon Valley, Yang speaks convincingly on the subject of automation. An obvious example is self-driving cars. Once the technology becomes reliable enough, virtually all driving jobs are threatened. Considering the number of people whose work involves transporting either passengers or cargo, the impact of this alone could be dramatic. What would happen to all the taxi, bus, and truck drivers of the world?

But according to Yang, self-driving cars would only be the beginning. While automation may call to mind robotic arms laboring in factories, white-collar jobs are also liable to be automated. Chances are, if you work in an office, at least some of your work is rote and repetitive; and that means a computer could potentially do it, and do it far better than you can. While most of us are far removed from the world of artificial intelligence, those in that community routinely seem alarmed by the prospect of increasingly powerful A.I. Every year a new program accomplishes another “impossible” task, such as mastering the Chinese game Go. Yang even mentions computer-written symphonies and computer-generated artwork! (I was happy to note, however, that Yang did not seem to think teachers could be automated away.)

This will result in still more intense economic stratification. Many parts of America have already hollowed out as a result of recent economic trends. Most of the country’s factories have closed, destroying some of the most well-compensated blue-collar jobs. The rise of online retail—only accelerated by the coronavirus crisis—threatens to permanently destroy much more employment. Of course, when some jobs are eliminated, other types of jobs come into being. But we cannot rely on this process to correct the imbalance—first, because automation destroys more jobs than it creates (think of the one trouble-shooter for every five self-checkout registers), and second, because the new jobs usually require different skills, and exist in different parts of the country.

There are many proposed solutions in this book, but Yang’s signature idea is UBI. This would be a monthly payment of $1,000, or $12,000 a year, to every citizen over the age of 18; and it would be given a very patriotic name: the Freedom Dividend. Yang proposes to pay for this with a Value Added Tax (VAT) of 10%. (I was actually unaware of the difference between a VAT and a sales tax before reading this book: a VAT is collected at every step in the production process, on the value added at each stage, rather than only at the final sale. This has the added advantage of taxing automated industries, since robots do not pay an income tax.) But the hefty price tag of UBI would also be partially offset by the reduction or elimination of other government welfare programs. And, of course, if you put more money into the hands of consumers, most of them will spend rather than save it, and this will in turn increase tax revenue.
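
Since I had to work the VAT mechanics out for myself, here is a minimal sketch—using toy numbers of my own invention, not Yang’s—of how a 10% VAT remitted on the value added at each stage compares with a 10% sales tax collected only at the final sale:

```python
# A minimal sketch of the VAT mechanics described above, with made-up numbers.
# A 10% VAT is remitted at every stage of production, on the value added at
# that stage, whereas a 10% sales tax is collected only once, at the final sale.

RATE = 0.10

# Hypothetical supply chain: (stage, price at which that stage sells its output)
stages = [
    ("raw materials", 20.0),
    ("manufacturer", 50.0),
    ("retailer", 100.0),
]

vat_collected = 0.0
previous_price = 0.0
for name, price in stages:
    value_added = price - previous_price   # what this stage adds to the product
    tax = RATE * value_added               # VAT remitted by this stage
    vat_collected += tax
    print(f"{name:13s} adds ${value_added:6.2f} in value, remits ${tax:5.2f} in VAT")
    previous_price = price

sales_tax = RATE * stages[-1][1]           # sales tax falls only on the final sale

print(f"Total VAT collected:  ${vat_collected:.2f}")
print(f"Equivalent sales tax: ${sales_tax:.2f}")
```

Both routes raise the same amount on the same final price; the difference is that the VAT touches every business along the chain—including heavily automated ones—which is precisely why Yang favors it.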

One obvious objection to UBI is that, by giving money indiscriminately, we will inevitably be giving it to people who do not need it. The most apposite reply to this objection, for me, is that subjecting government assistance to means-testing creates a host of problems. For one, there is a great deal of cumbersome bureaucracy involved in determining whether a particular person ‘deserves’ aid—bureaucracy that would be rendered entirely redundant by UBI, since the checks can be sent out through the IRS. Indeed, this cumbersome bureaucracy only creates added waste, since many NGOs exist simply to help people navigate the complex government paperwork. Of every, say, $100 spent on welfare, what portion goes to those in need, and what portion to the paychecks of bureaucrats laboring to determine who gets the money and how they can spend it?

Indiscriminate giving would also eliminate the pesky problem of disincentivizing work. At the moment, Republicans and Democrats are in a dispute over this very issue, as Republicans are arguing that the extra $600 of unemployment money (as part of the coronavirus aid package) will encourage people not to work. While some on the left disagree, personally I think this is a rather strong objection—not to giving people money, but to making the money conditional on not having a job. The same issue is present in many other sorts of government aid, such as disability payments, which cease as soon as the recipient becomes employed. If the money were unconditional, however, then people would have no disincentive to work; on the contrary, they would be able to substantially improve their economic situation by working, perhaps even making enough to start saving and investing.

UBI, then, has potential appeal for both those on the left and on the right. Those on the left may like it because it is a way of redistributing wealth, while those on the right may like it since it is a way of shrinking the government. The latter statement might seem more far-fetched, but I do think that a solid, conservative case could be made for UBI. After all, Milton Friedman was quite an avid supporter of the concept, for a multitude of reasons: it shrinks government, it reduces government paternalism, it promotes both work and consumption, and it would avoid dividing people into different categories.

This last point merits some comment. Presently, a great deal of anti-welfare rhetoric is concerned with parasitism—the idea that lazy people are simply ‘on the dole,’ dragging down the rest of society. It is the perfect recipe for shame and resentment, since inevitably it divides up society into groups of givers and takers; and even the best government bureaucracy in the world could not hope to distribute money in the fairest way possible. Inevitably, some people who ‘deserve’ aid will not get it; and others who do not ‘deserve’ it will—since no definition of ‘deserving’ will be perfect, and in any case there is no way of perfectly measuring how much somebody ‘deserves.’

UBI works against this psychology in a powerful way, by being entirely indiscriminate. Though the rich would be paying more in taxes than they receive back, they too would receive their monthly payment, and I think this fact alone would help create an added sense of social solidarity. UBI would be something shared by everyone, everywhere, rather than something that marks you out as being poor and dependent, a mark of stigma and shame. This strikes me as quite a positive thing in the age of dramatic political polarization.

Another aspect of UBI that I find deeply appealing is that it will give people the freedom to pursue less well-remunerated, but more socially beneficial, work. As Yang points out, many of the most humanly important jobs—being a parent, an artist, or even an online book reviewer—are quite poorly compensated, if they are compensated at all. An economist might argue that this is justified, since the free market determines the value of work based on supply-and-demand. But I think that this logic will become less appealing as robots start to out-compete humans. Indeed, perhaps automation will erode our faith in the wisdom of markets and meritocracy, since it will be difficult to believe that a delivery drone is more deserving than a delivery driver, even if it gets more work done.

There are, of course, many objections to UBI, one being that it will encourage widespread freeloading. But the evidence for this is quite weak. As Yang demonstrates, in the many UBI trials that have been conducted, work reduction was quite low, mostly taking place among new mothers. And as I mentioned above, our current welfare system arguably encourages freeloading far more effectively than UBI would, since UBI does not disincentivize work. In any case, I think all of us—especially new mothers!—could do with a modest reduction in work hours, given the fact that study after study shows that long hours do not benefit productivity. Instead of having humans emulate work machines, then, it would be far better to automate as much work as possible—since machines never sleep, never eat, and never get sick—and focus on the remaining work which really does require a human touch.

Yang addresses many other objections to UBI, and most of his arguments are convincing. I do have one nagging question, however, and it is this: If the purchasing power of the general population is increased across the board, will prices of food and housing correspondingly increase? Though I am economically naïve, it strikes me that this is bound to happen, at least somewhat; and this may partially offset the gains of UBI. But perhaps I am mistaken. Another question is whether automation will go as far as Yang predicts. I found most of his forecasts—particularly about self-driving vehicles—quite compelling. But it does seem possible that the effects will be less sweeping than Yang supposes. For example, I cannot imagine couples turning to an A.I. marriage counselor with the voice of Morgan Freeman, as Yang somewhat fancifully imagines.

In any case, while Yang’s twin themes of automation and UBI are his central message, his book has far more to offer. I particularly appreciated his portrayal of the economic plight facing many parts of America, and the increasingly stark divide between those with and without a college degree. For example, I often find myself forgetting that the majority of American adults do not have degrees, if only because almost all of my friends and family have one. Considering how many jobs—including low-skilled jobs—require a degree, this is a major economic disadvantage nowadays.

The fact that I can forget about this economic disadvantage is a measure of the degree to which different parts of the country are insulated from one another. And the university system is not helping to even the playing field. After all, most of the people who do obtain degrees are already from comparatively better-off families. The university system also does not add to economic diversification, since students are pursuing an increasingly narrow range of majors; and after college, most graduates move to one of a handful of large cities. The result is an increasingly stark economic divide between Americans with college degrees living in large cities, working in a shrinking number of industries, and those without degrees living in more rural areas or hollowed-out cities. It is a pernicious process.

Yang also deserves credit for his mental flexibility. Besides UBI, this book contains a range of proposals, all of them quite new to me. Considering the degree to which political debate is dominated by decades-old proposals, I found this extremely refreshing. Admittedly, I do think that Yang’s Silicon Valley message failed to resonate with the voting public for a reason. While he has much to say about the future of America’s economy, he is less convincing on problems besetting many Americans now, most notably health care. Yang does favor a version of universal coverage, and he has some very intriguing things to say about how technology can change the role of the doctor, but I think it is fair to say that this was a minor part of his book.

Yet if this book fails as political marketing, it succeeds in being both a thoughtful meditation on the problems facing the average American, and a set of bold proposals to address these problems. While so many politicians come across as blindly ideological, stupidly partisan, or simply as creatures of the political system, Yang is intelligent, imaginative, and unconventional. I hope that this is not the last we hear from him.




Review: Priced Out

Priced Out: The Economic and Ethical Costs of American Health Care by Uwe Reinhardt

My rating: 4 of 5 stars

Mantras about the virtues of markets are no substitute for serious ethical convictions.

There are a great many things for Americans to feel embarrassed about. Depending on your politics, you may bemoan the rise of identity politics and the snowflake culture predominating on college campuses; or perhaps you rage against racist policing or our lax gun laws. But I think that, as Americans, we can all come together and feel a deep and lasting shame over our health care system—specifically, how we finance it. According to Reinhardt, our system is so bad that it is routinely invoked in international conferences as a kind of bogeyman, an example of what to avoid. And after reading this book, it is easy to see why.

I did not suspect that our system was quite so bad until I left the country. But, in retrospect, the evidence was quite apparent. Virtually all of my friends have expressed anxiety about their health care at some point—high premiums, high deductibles, or simply no health insurance at all. I have seen family members spend weeks negotiating with insurance companies for payment of their medicines (and even after the insurance chips in, the cost is still breathtaking). Meanwhile, in my five years here, I have yet to hear a single Spaniard express anxiety over how they will pay for a medicine or a medical procedure. Here, as in most of Europe, this type of anxiety is quite uncommon.

The U.S. system fails on many different fronts. Most simply, there is coverage. Millions of Americans have no coverage, and millions more have inadequate coverage (such as many of my friends, whose deductibles are so high that they may as well not have insurance). Second is cost. Both medical procedures and medicines are significantly more expensive in the United States. For example, the drug Xarelto (for blood clots) costs $101 in Spain and $292 in the U.S.; the average cost of an appendectomy is $2,003 in Spain and $15,930 in America. A third failure—closely related to cost—is waste. Our byzantine payment system requires doctors and hospitals to spend enormous amounts of time and money communicating with insurance companies—costs that are, of course, passed on to the consumer.

But the most fundamental failure is a failure of ethics. Or perhaps it is better to say a lack of ethical vision. As Reinhardt explains, while much of the debate on health care in America concerns itself with technicalities—risk pools, risk exposure, whether premiums should be actuarially fair or community-rated, etc.—this debate conceals the fact that we have yet to come to a consensus on the moral foundations of health care. Most of the world’s developed nations have established their systems on the presumption that health care is a social good. In the United States, on the other hand, we are rather muddled, at times treating health care as if it were a commodity, yet unwilling to face up to the implications of that choice—such as letting poor people die without treatment.

Aside from the ethical issues involved, health care has many features that make it unlike a typical commodity, and thus poorly governed by supply-and-demand. If I want to buy a car, for example, typically I am not in a great rush to do so. I can shop around, test-drive cars, compare prices across companies and locations, and read reviews. I can even decide that I do not want to buy a car after all, and instead buy a train pass. All of this contributes to control the price of cars, and incentivizes car companies to give us the best value for our money.

None of this is the case in our health care system. The demand is non-negotiable and, very often, time-sensitive. Furthermore, most patients lack the knowledge needed to evaluate what procedures or tests are justified or not, so oftentimes we cannot even be fully aware of our own ‘demand.’ Besides that, we have no ability to compare prices or to compare treatment efficacy. And even if we are careful to go to a hospital in our insurance network, there may be doctors ‘out of network’ working there, leading to the ugly phenomenon of surprise medical bills.

Added together, it is as if the car salesman blindfolded me, put a gun to my head, and told me I had to choose a car in five minutes—all while being my only source of information about what car I needed (and medical bills can be quite as expensive as cars!). This is the position of the American “consumer” of health care.

My own brief experience with emergency medical care highlights the situation. The only time that I have ever been taken to an emergency room, I was unconscious. I woke up after being transported by the ambulance. Luckily, I was quickly discharged, and I also had insurance. But even though my insurance covered the hospital bill, it did not cover the ambulance, which I had to pay out of pocket. Again, I was lucky, since I was able to afford it. Many cannot, however, and have the experience of waking up from an accident, an injury, or an operation in debt. How can you be an intelligent consumer when you are unconscious?

The helplessness of the consumer creates a perverse incentive in our system. There is little downward pressure on prices. Instead, what results is a kind of arms race between health care providers and insurers. Insurers are incentivized to put up as many barriers as possible to paying out, which requires doctors and hospitals to invest ever-more resources into their billing departments, which of course only increases the cost to the patient. In many hospitals, there are more billing clerks than hospital beds; and when you realize that these billing clerks have their own counterparts in the insurance companies, you can get some idea of the enormous bloat created by our financing system.

I think there is a particular irony to this situation, since our American insistence on market values has created a labyrinthine network of incomprehensible rules, endless paperwork, and legions of bureaucrats—the very things that capitalist principles were supposed to eliminate. Indeed, ironies abound in our system. For example, we endlessly discuss the affordability of government programs, while the tax incentives for employment-based insurance (which cost the federal and state governments an annual $300 billion in foregone revenues) are never mentioned. What is more, while the insurance mandates of Obamacare were roundly criticized for forcing the healthy to subsidize the unhealthy, as Reinhardt points out, the exact same thing occurs in any insurance-based system. And as a final irony:

It is fair to ask why, if socialized medicine is so bad, Americans for almost a century now have preserved precisely that construct for their military Veterans, and, indeed, why the latter are so defensive and protective of that socialized medicine system.

After reading this review, you may be excused for thinking that this book is a fiery manifesto about the evils of the system. Far from it. Uwe Reinhardt was a prominent economist and much of this book consists of tables and graphs. The writing is, if anything, on the dry side, and the tone is one of intellectual criticism rather than passionate outrage. Yet, strangely, this is why I found the book so effective. It is one thing for an arm-swinging socialist to condemn the evils of the system, but quite another for a calm economist to go through the data, point by point, and explain how it all works and how it compares with other countries’ performance.

You may also be excused for thinking that, given all this, Reinhardt would be an advocate for a single-payer system in the United States. After all, he was one of the architects of Taiwan’s single-payer system, which costs about 6% of the country’s GDP. (For comparison, America’s system costs us 17% of GDP!) But Reinhardt thinks that such a system would not work on American soil. For one, the libertarian streak in our culture runs too deep for such a system to be broadly acceptable. More importantly, however, Reinhardt thinks that our campaign finance system is so corrupt that the health care lobby would be able to exert a heavy influence on the government, thus canceling the benefit.

He instead advocates for an ‘all-payer’ system. The idea is to consolidate the market power of consumers by having standard prices set either by the government, or by associations of care providers and insurers. This would, at the very least, avoid the wild price variability that can be found in even a single city in the United States. It also helps to bring costs down, as demonstrated in Maryland, which has had an all-payer system for quite a while. Japan’s system is also established on this principle, and spends far less money per capita on its health care system, despite having a significantly older population than the United States.

In normal times, I was not exactly optimistic about the prospect of reforming our broken health care system. But in the wake of this pandemic, it does seem as if major reforms might be not only possible, but inevitable. Employment-based insurance makes little sense if people lose their jobs during a major health crisis, as has already happened to many millions of Americans. And high unemployment may persist for some time. What is more, a major health crisis, resulting in many thousands of additional hospital stays, will put pressure on private insurance firms and lead to a significant rise in insurance premiums. Basically, higher-risk patients create higher costs, and a pandemic puts far more people into the high-risk category. The greater strain on an already teetering system may be the proverbial straw that breaks the camel’s back. We shall see.


Review: The Uninhabitable Earth

The Uninhabitable Earth: Life After Warming by David Wallace-Wells

My rating: 4 of 5 stars

You do not need to consider worst-case scenarios to become alarmed.

In normal times, the apocalypse bored me. Any discussion of catastrophic events put me in a mood of defensive skepticism. This was true whether I was considering an asteroid, a supervolcano, or rampant artificial intelligence—events so far out of my control that I would immediately dismiss them from my mind. However, the current coronavirus situation prompted me to read a book by epidemiologist Michael Osterholm, which includes a detailed prediction of what the next pandemic could look like. The experience of reading a scientist predicting, with considerable accuracy, what I was living through was profoundly eerie. And as a result, I was prepared to take a book about how climate change will play out a little more seriously.

The picture is not rosy. Much like a pandemic, climate change is not localized, but strikes everywhere at once. If a bad hurricane hits one city, we can rush resources to the area. But if we are battered by massive hurricanes, destructive floods, severe droughts, raging wildfires, and deadly heatwaves, in many different parts of a country, over and over again, then we cannot effectively respond. To put it mildly, dealing with ever-escalating natural disasters will take an increasingly severe economic toll—measured in the hundreds of trillions, by Wallace-Wells’s calculation—and will also put huge pressure on political systems, thus further reducing our ability to respond.

Wallace-Wells paints this coming future in such vividly chilling detail that even the most stoic reader will have an elevated pulse. Indeed, in this future world, the term “natural disaster” will start to lose its meaning, since disasters will be so common as to simply become “weather,” and they will be, in part, caused by human activity. Millions may be displaced by rising sea levels, at a time when we face food and freshwater shortages from drought and desertification. This is not a world that any of us would freely choose.

The tone lightens somewhat—from pitch-black to ashen gray—in the third section, where Wallace-Wells shifts from painting the looming threat to making some predictions about how our culture might respond. His conclusion, in a nutshell, is that the scope of the threat is so all-encompassing that we cannot psychologically come to terms with it. For example, many Americans are tempted to blame climate change on Republicans; but while the Republican party is the only major climate-change-denying party in the world, the United States does not produce the majority of the world’s carbon emissions—not by a long shot.

Other common reactions to the crisis come in for criticism as well. One is to focus on individual behavior, trying to reduce consumption and to purchase the most eco-friendly items possible. But individual choices do not, Wallace-Wells thinks, have the potential to make more than the tiniest difference. For example, though there is much scolding of people for, say, watering their lawns during a drought, personal water consumption is only a small fraction of society’s total. Only large-scale changes in infrastructure can make the difference, and those must come from political pressure.

In both of these above cases, the common thread is the inability of our normal moral circuitry to deal with the problem. We want to tell a story with heroes and villains who are directly responsible through their personal choices for the crisis we are facing. But the reality, as Wallace-Wells says, is that culpability is widely-dispersed and our responsibility is collective, not individual. This goes sharply against the grain of our psychology, which I think partly accounts for our inaction.

One more common reaction is to think that technology will save us. There is, of course, very little evidence for this, and what it amounts to is using a blind faith in timely innovation to justify inaction. At the moment, carbon-capture technology is so inefficient that we would need hundreds of acres of such plants to make a difference. Another option is to start pumping sulfur dioxide into the atmosphere, in order to make it more reflective of sunlight. But of course this would have quite awful effects on the environment and human health. And as Wallace-Wells points out, we have reached quite an odd stage in history when these ideas strike people as more practicable than reducing consumption of fossil fuels—as if the market were a more unbending force than the climate itself.

Advocates have a variety of emotions to choose from if they wish to motivate people—hope, outrage, and fear being the most common. Wallace-Wells leans heavily on fear, which arguably puts this book in the same tradition as Rachel Carson’s Silent Spring. But is fear the right choice? Certainly Carson was effective; and since The Uninhabitable Earth was a #1 best-seller, it seems that Wallace-Wells achieved his goal. However, the challenge of getting rid of pesticides pales to nothingness in comparison with the challenge of reconfiguring our economies, infrastructures, and ways of life. Can fear propel us through this great transition? Personally, I found the tone of the book so bleak that I was exhausted even before reaching the end. But I suppose everyone will react in their own way.

Will the COVID-19 pandemic make us more inclined to trust the warnings of scientists? I hope so, though perhaps that is asking too much. And as we collectively recover from the economic downturn, will we use the opportunity to pass something like the Green New Deal? I hope so, too, though I very much doubt it. Indeed, for me, one of the biggest lessons of this pandemic has been that we need not resort to selfish evil as an explanation for climate inaction. Virtually nobody had anything to gain from the pandemic, and it came anyway, catching every Western nation with its proverbial pants down—despite repeated warnings from epidemiologists. Human stupidity, then, is a sufficient explanation for our climate inaction. And, unfortunately, that will be around until the earth truly is uninhabitable.

[I did not scrupulously check for errors, but I still caught two. Wallace-Wells says that “1 in 6” people die from air pollution, but the true figure is about 6-7%—still a lot, but much less than 1 in 6. Later on, Wallace-Wells says: “H.G. Wells’s The Time Machine, which depicted a distant future in which most humans were enslaved troglodytes, laboring underground for the benefit of a pampered and very small aboveground elite…” But of course this is an incorrect description of the book: most humans are not enslaved, but prey; and they live above ground, while the predators live below. It seems odd to me that he would reference a book that he clearly did not read. I am sure there are more errors lurking about.]




Review: Mosquito

Mosquito: A Natural History of Our Most Persistent and Deadly Foe by Andrew Spielman

My rating: 4 of 5 stars

This is a thoroughly fascinating book about one of my least favorite things in the world. And I am one of the lucky ones. Even when those around me are getting eaten alive, I am normally spared the worst of the mosquito onslaught, for reasons that remain largely elusive. Indeed, when I was an undergraduate studying in Kenya, one of my classmates did a small study on us, counting our bites and trying to see whether they correlated with blood type or other variables like perfume or shampoo. Since all of us had the same schedule, it seemed a promising study. But, alas, no insight was gained, though I was surprised to find that some of us had well over 60 bites, while I had fewer than 10.

Yet mosquitoes are more than annoyances; they are major vectors of disease, as I was reminded daily when I took my malaria prophylactic. And after giving the reader some basic facts of mosquito biology, the book switches focus to disease control. There was much I did not know. For example, I had no idea that malaria was once present in New Jersey and New York, until aggressive government policies in the early 1900s eliminated the scourge. Similarly, I had no notion of the role that the Tennessee Valley Authority played in freeing America’s South from the malarial menace, largely by destroying mosquito breeding sites.

I also learned more about the story of Yellow Fever in the Americas. Though it may seem obvious to us nowadays that a disease can be transmitted by a mosquito bite, this was quite a controversial claim in the year 1900. It took careful work by a team of doctors in Cuba to prove that mosquitoes, not blood or bile, communicated the illness. This insight quickly led to the program of insect control that was instrumental in the building of the Panama Canal—a project that had proven impossible for the French, who labored under ignorance of the disease’s cause, and had to abandon the project as thousands of workers succumbed.

The authors of the book also have much to say on the subject of DDT. Having only read Rachel Carson’s Silent Spring, I had only been exposed to the argument against this popular pesticide. But Spielman and D’Antonio make a good case that, when used responsibly, the potential benefits of DDT far outweigh its health risks. Unfortunately, the pesticide was used to such a huge extent during the anti-malaria campaigns of the 1950s that it has lost much of its efficacy through accumulated resistance in mosquito populations. Spielman (the book’s entomologist) believes that this effort was ill-conceived, since it aimed for the impossible goal of total vector elimination, and it only resulted in the blunting of DDT, our most powerful weapon (not to mention decreased acquired resistance in the human population from the temporary reduction in malaria rates).

Malaria remains a major problem in vast areas of the world. We do not have an effective vaccine, and the plasmodium which causes the disease can evolve in response to drug treatments in just the same way that mosquitoes can evolve in response to DDT. And while those in temperate climates may be inclined to view it as a distant concern, this may soon prove not to be the case, as global warming expands the range of malaria-carrying mosquitoes northward. For my part, I think we are due for another big anti-malaria push, this time using smarter methods. But like the mosquito itself, the malaria parasite is one of our oldest enemies, having evolved with us for millions of years; so it may not be easy.

The authors close with a modern example of a tropical disease making it to a temperate zone: the 1999 West Nile outbreak in the New York City region. Surprisingly, I can remember this, even though I was only eight years old at the time. My mother told me that I had to stay inside on a beautiful summer night because they were spraying for mosquitoes. Soon, the helicopter came roaring by, dusting the area with insecticide. My brother remembers the entire playground in his kindergarten being covered with a tarp to keep it from getting sprayed. Such efforts did not succeed in eliminating West Nile in the United States, and now it circulates in the local bird and mosquito populations, closely monitored.

If the current pandemic helps to spur us to more aggressive public health measures, then I think mosquito control should be close to the top of the agenda. As Spielman himself notes, the mosquito does not serve any crucial function in ecosystems—not as a pollinator or even as prey—and it is the most significant animal vector of disease on the planet. Indeed, the mosquito is so perfectly useless and so perfectly dreadful that you wonder how anyone can maintain their faith in an almighty and infinitely loving God when faced with such a horrid product of blind evolution. They really are awful little things. And though we can never hope to eliminate them entirely, there is hope that we can break the chain of disease transmission long enough to at least make their bites mere itchy annoyances rather than harbingers of doom.




Review: Plagues and Peoples

Plagues and Peoples by William H. McNeill

My rating: 4 of 5 stars

Looked at from the point of view of other organisms, humankind therefore resembles an acute epidemic disease, whose occasional lapses into less virulent forms of behavior have never sufficed to permit any really stable, chronic relationship to establish itself.

It is risky to write a book like this. When William H. McNeill set out to analyze the manifold ways that infectious diseases have shaped world history, it was an almost entirely novel venture. Though people had been writing history for millennia, specialized works focusing on the ways that civilizations have been shaped by illness were few and far between. This seems rather strange when you consider that it was only in the twentieth century that disease began reliably causing fewer wartime casualties than enemy action.

Perhaps thinking about faceless enemies like viruses and bacteria simply does not come naturally to us. We personify the heavens readily enough, and do our best to appease them. But it is more difficult to personify a disease: it strikes too randomly, too mysteriously, and often too suddenly. It is, in other words, a completely amoral agent; and the thought that we are at the mercy of such an agent is painful to consider.

This tendency to leave diseases out of history books has come down to our own day. The 1918 flu pandemic is given a fraction of the coverage in standard textbooks that the First World War receives, even though the former caused more casualties. Curiously, however, that terrible disease did not even leave a lasting impression on those who survived it, judging by its absence from the works of the major writers of the day. It seems that the memory of disease fades fast, at least most of the time. The 1968 Hong Kong flu killed 100,000 Americans that year (which would translate to 160,000 today), and yet neither of my parents remembers it.

This is why I think this book was a risky venture: there was not much precedent for successful books written about the history of diseases. Further, since there was not much in the way of prior research, much of this book must perforce consist of speculation using the spotty records that existed. While this does leave the historian open to the criticism of making unfounded claims, as McNeill himself says, such speculations can usefully precede a more thorough inquiry, since at least it gives researchers an orientation in the form of theories to test. Indeed, in my opinion, speculative works have just as important a role as careful research in the advancement of knowledge.

McNeill most certainly cannot be accused of a lack of ambition. He had completed an enormous amount of research to write his seminal book on world history, The Rise of the West; and this book has an equally catholic orientation. He begins with the emergence of our species and ends with the twentieth century, examining every inhabited continent (though admittedly not in equal detail). The result is a tantalizing view of how the long arc of history has been bent and broken by creatures lighter than a dust mite.

Some obvious patterns emerge. The rise of agriculture and cities created population densities capable of supporting endemic diseases, unknown to hunter-gatherers. Living near large masses of domesticated animals contributed much to our disease regimes; and the lack of such animals was decisive in the New World, leaving indigenous populations vulnerable to the invading Europeans’ microbes. Another recurring pattern is that of equilibrium and disturbance. Whenever a new disease breaks in upon a virgin population, the results are disastrous. But eventually stasis is achieved, and population begins to rebound.

One of McNeill’s most interesting claims is that the great population growth that began in the 18th century was partly a result of a new disease regime. By that time, fast overland and sea travel had exposed most major urban centers to common diseases from around the world, thus rendering them less vulnerable to new shocks. I was also surprised to learn that it was only the rise of modern sanitation and medicine—in the mid 19th century—that allowed city populations to be self-sustaining. Before this, cities were population sinks because of endemic diseases, and required constant replenishment from the countryside in order to maintain their numbers.

As I hope you can see, almost fifty years after publication, this book still puts forward a compelling view of world history. And I think it is a view that we still have trouble digesting, since it challenges our basic sense of self-determination. Perhaps one small benefit of the current crisis will be an increased general curiosity about how we still are, and have always been, mired in the invisible web of the microscopic world.




Review: A Journal of the Plague Year

A Journal of the Plague Year by Daniel Defoe

My rating: 3 of 5 stars

It was a very ill time to be sick in…

My pandemic reading continues with this classic work about one of the worst diseases in European history: bubonic plague. Daniel Defoe wrote this account when the boundaries between fiction and non-fiction were looser. He freely mixes invention, hearsay, anecdote, and real statistics, in pursuit of a gripping yarn. Defoe himself was only a young boy when the Great Plague struck London, in 1664-6; but he writes the story in the person of a well-to-do, curious, if somewhat unimaginative burgher, with the initials “H.F.” The result is one of literature’s most enduring portraits of a city besieged by disease.

Though this account purports to be a “journal,” it is not written as a series of dated entries, but as one long scrawl. What is more, Defoe’s narrator is not the most orderly of writers, and frequently repeats himself or gets sidetracked. The book is thus rather slow and painful to read, since it lacks any conspicuous structure to grasp onto and instead approaches a kind of rambling stream of consciousness. Even so, there are so many memorable details and stories in this book that it is worth the time one spends with it.

The Great Plague carried off one fourth of London’s population—about 100,000 souls—and it was not even the worst outbreak of plague in the city. The original wave of the Black Death, in the middle ages, was undoubtedly worse. Still, losing a quarter of a city’s population is something that is difficult for most of us to even imagine. And when you consider that the Great Fire of London was quick on the plague’s heels, you come to the conclusion that this was not the best time to be a Londoner.

What is most striking about reading this book now is how familiar it is. The coronavirus is no bubonic plague, but it seems our reactions to disease have not changed much. There are, of course, the scenes of desolation: empty streets and mass graves. The citizens anxiously read the statistics in the newspaper, to see if the numbers are trending upwards or downwards. And then there are the quacks and mountebanks, selling sham remedies and magical elixirs to the desperate. We also see the ways that disease affects the rich and the poor differently: the rich could afford to flee the city, while the poor faced disease and starvation. And the economic consequences were dreadful—shutting up businesses, leaving thousands unemployed, and halting commerce.

Medical science was entirely useless against the disease. Nowadays, we can effectively treat the plague with antibiotics (though the mortality rate is still 10%). But at the time, little could be done. Infection with the bacillus causes swollen lymph nodes—in the groin, armpits, and neck—called buboes, and it was believed that the swellings had to be punctured and drained. This likely did more harm than good, and in practice the plague doctors’ only useful purpose was to keep records of the dead.

Quite interesting to observe were the antique forms of social distancing (a term that of course did not exist) that the Londoners practiced. As now, people tried to avoid going out of their homes as much as possible, and if they did go out they tried to keep a distance from others and to avoid touching anything. Defoe describes people picking up their own meat at the butcher’s and dropping their money into a pan of vinegar to disinfect it. There was also state-mandated quarantining, as any house with an infection got “shut up”—meaning the inhabitants could not leave.

Ironically, though these measures would have been wise had the disease been viral, they made little sense for a disease communicated by rat fleas. (Defoe does mention, by the way, that the people put out rat poison—which probably helped more than all of the distancing.)

One more commonality is that the virus outlasted people’s patience and prudence. As soon as an abatement was observed in the weekly deaths, citizens rushed out to embrace each other and resume normal life, despite the warning of the town’s physicians. Not much has changed, after all.

So while not exactly pleasant to read, A Journal of the Plague Year is at least humbling for the contemporary reader, as it reminds us that perhaps we have not come so far as we thought. And it is also a timely reminder that, far from a novel and unpredictable event, the current crisis is one of many plagues that we have weathered in our time on this perilous globe.

[Cover photo by Rita Greer; licensed under FAL; taken from Wikimedia Commons.]


Review: The Plague

The Plague by Albert Camus

My rating: 5 of 5 stars

Officialdom can never cope with something really catastrophic.

As with all of Camus’s books, The Plague is a seamless blend of philosophy and art. The story tells of an outbreak of plague—bubonic and pneumonic—in the Algerian city of Oran. The narration tracks the crisis from beginning to end, noting the different psychological reactions of the townsfolk; and it must be said, now that we are living through a pandemic, that Camus is remarkably prescient in his portrayal of a city under siege from infection. Compelling as the story is, however, I think its real power resides in its meaning as a parable of Camus’s philosophy.

Camus’s philosophy is usually called absurdism, and explained as a call to embrace the absurdity of existence. But this is not as simple as giving up church on Sundays. Absurdism is, indeed, incompatible with conventional religion. Camus makes this abundantly clear in his passage on the priest’s sermon—which argues that the plague is God’s punishment for our sins—an idea that Camus thinks incompatible with the randomness of the disaster: appearing out of nowhere, striking down children and adults alike. But absurdism is also incompatible with traditional humanism.

The best definition of humanism is perhaps Protagoras’s famous saying: “Man is the measure of all things.” In many respects this seems to be true. Gold is valuable because we value it; an elephant is big and a mouse is small relative to human size; and so on. However, on occasion, the universe throws something our way that is not made to man’s measure. A plague is a perfect example of this: an ancient organism, too small to see, which can colonize our bodies, causing sickness and death and shutting down conventional life as we know it. Whenever a natural disaster makes life impossible, we are reminded that, far from being the measure of all things, we exist at the mercy of an uncaring universe.

This idea is painful to contemplate. Nobody likes to feel powerless; and the idea that our suffering and striving do not, ultimately, mean anything is downright depressing. Understandably, most of us prefer to ignore this situation. And of course economies and societies invite us to do so—to focus on human needs, human goals, human values—to be, in short, humanists. But there are moments when the illusion fades, and it does not take a pandemic. A simple snowstorm can be enough. I remember watching snow fall out of an office window, creating a blanket of white that forced us to close early, go home, and stay put the next day. A little inclement weather is all it takes to make our plans seem small and irrelevant.

A plague, then, is an ideal situation for Camus to explore his philosophy. But absurdism does not merely consist in realizing that the universe is both omnipotent and indifferent. It also is a reaction to this realization. In this book, Camus is particularly interested in what it means to be moral in such a world. And he presents a model of heroism very different from that which we are used to. The humanist hero is one who is powerful and free—a person who could have easily chosen not to be a hero, but who chose to because of their goodness.

The hero of this story, Dr. Bernard Rieux, does not fit this mold. His heroism is far humbler and more modest: it is the heroism of “common decency,” of “doing my job.” For the truth is that Rieux and his fellows do not have much of a choice. Their backs are against the wall, leaving them only the choice to fight or give up. An absurdist hero is thus not choosing between good and evil, but between a long and ultimately doomed fight against death—or death itself. It is far better, in Camus’s view, to take up the fight, since it is only in a direct confrontation with death that we become authentically alive.

You might even say that, for Camus, life itself is the only real ethical principle. This becomes apparent in the speech of Tarrou, Rieux’s friend, who is passionately against the death sentence. Capital punishment crystallizes this denial of the absurd: it decrees that a human value system is more valid than the basic condition of existence, and that we have a right to rule on when existence is or is not warranted. To see the world with clear eyes means, for Camus, to see that life is something beyond any value system—just as the entire universe is. And the only meaningful ethical choice, for Camus, is whether one chooses to fight for life.

This book is brilliant because its lessons can be applied to a natural disaster, like a plague, or a human disaster, like the Holocaust. Indeed, before the current pandemic, the book was normally read as a reaction to that all-too-human evil. In either case, our obligation is to fight for life. This means rejecting ideologies that decree when life is or is not warranted, it means not giving up or giving in, and it means, most of all, doing one’s job.




Review: And the Band Played On

And the Band Played On: Politics, People, and the AIDS Epidemic by Randy Shilts

My rating: 5 of 5 stars

The story of these first five years of AIDS in America is a drama of national failure, played out against a backdrop of needless death.

Though this book has been on my list for years, it took a pandemic to get me to finally pick it up. I am glad I did. And the Band Played On is both a close look at one medical crisis and an examination of how humans react when faced with something that does not fit into any of our mental boxes—not our ideas of civil liberty, not our categories of people, and not our notions of government responsibility. As such, this book has a lot to teach us, especially these days.

Randy Shilts was working as a reporter for the San Francisco Chronicle. This position allowed him to track the spread of this disease from nearly the very beginning. Putting this story together was a work of exemplary journalism, involving a lot of snooping and a lot more interviewing. What emerges is a blow-by-blow history of the crisis as it unfolded in its first five years, from 1980-85. And Shilts’s lens is broad: he examines the gay community, the epidemiologists, the press, the blood banks, the medical field, the research scientists, and the politicians. After all, a pandemic is not just caused by a virus; it is the sum of a virus and a society that allows it to spread.

The overarching theme of this book is individual heroism in the face of institutional failure. There are many admirable people in these pages: epidemiologists trying to raise the alert, doctors struggling to treat a mysterious ailment, gay activists trying to educate their communities, and a few politicians who take the disease seriously. But the list of failures is far longer: from the scientists squabbling over claims of priority, to the academic bureaucracies squashing funding requests, to the blood bankers refusing to test their blood, to the government—on every level—failing to take action or set aside sufficient funding.

A lot of these failures were due simply to the sorts of people who normally caught AIDS: gay men and intravenous drug users. Because both of these groups were (and to some extent still are) social pariahs, major newspapers simply did not cover the epidemic. This was crucial in many respects, since it gave the impression that the disease simply was not worth worrying about (the news sets the worry agenda, after all), giving politicians an excuse to do nothing and giving people at risk an excuse not to take any precautions. The struggle within the gay community over how to proceed was particularly vexing, since it was their very efforts to preserve their sexual revolution that cost time and lives. As we are seeing nowadays, balancing civil liberties and disease control is not an easy thing.

But what made these failures depressing, rather than simply frustrating, was the constant drumbeat of death. So many young men lost their lives to this disease, dying slow and agonizing deaths while baffled doctors tried to treat them. When these deaths were occurring among gay men and drug users, the silence of the country was deafening. It was only when the disease showed the potential to infect heterosexuals and movie stars—people who matter—that society suddenly spurred itself into action. This seems to be a common theme of pandemics: society only responds when “normal” people are at risk.

Another common theme of pandemics is the search for a panacea. At the beginning of the AIDS crisis, there were many claims of “breakthroughs” and promises of vaccines. But we still have neither a cure nor a vaccine. Fortunately, treatment for HIV/AIDS has improved dramatically since this book was written, when a diagnosis meant death. Pills are now available (pre-exposure prophylaxis, or PrEP) which, if taken daily, can reduce the chance of contracting HIV through sex by about 99 percent. And effective anti-retroviral therapies exist for anyone who has been infected, greatly extending lifespans.

Unfortunately, these resources are mostly available in the “developed” world. In Sub-Saharan Africa, where resources are scarce, the disease is still spreading, taking many lives in the process. Once again, a disease is allowed to ravage communities that the world can comfortably ignore.

One day, a hardworking journalist will write a similar book about the current coronavirus crisis and our institutions’ response to it. And I am sure there will be just as much failure to account for. But there will also be just as much heroism.




Review: The Great Influenza

The Great Influenza: The Story of the Deadliest Pandemic in History by John M. Barry

My rating: 4 of 5 stars

People write about war. They write about the Holocaust. They write about the horrors that people inflict on people. Apparently they forget the horrors that nature inflicts on people, the horrors that make humans least significant.

Like so many people nowadays, I have been scrambling to wrap my mind around the current pandemic. This led me, naturally, to the last major worldwide outbreak: the 1918 influenza. I have a distant connection to this disease. My great-grandfather (after whom I was named) was drafted out of Cornell’s veterinary school to work as a nurse in a temporary hospital set up for flu victims. I read the letters he sent to his mother, describing the experience.

John Barry’s account of this virulent flu is sobering, to say the least. In a matter of months, the flu spread across the world and caused between 50 and 100 million deaths. More American soldiers died from this flu than in the entire Vietnam War. In most places the mortality rate hovered around two percent, but it struck much more fiercely elsewhere. In the Fiji Islands, 14 percent of the population succumbed; in Western Samoa, 22 percent; and in Labrador, a third of the population died. And because the disease mainly struck young people—people in their twenties and thirties—thousands of children were left orphans.

Barry’s book is not, however, simply a record of deaths. He sets the historical scene by giving a brief overview of contemporary medicine. In the early 1900s, modern medicine was just coming into its own. After centuries in which it was thought that bad air (“miasma”) caused illness, and in which bleeding was the most popular “cure,” researchers were beginning to discover viruses and bacteria, and were beginning to understand how the immune system combats these germs. Major public health initiatives were just getting underway. The Johns Hopkins School of Public Health had been founded, and the Rockefeller Institute was making new types of research possible. It was not the Dark Ages.

The other major piece of historical context is, of course, the First World War. Undoubtedly this played a major role in the epidemic. Not only did troop movements help to spread the disease, but press censorship virtually guaranteed that communities were unprepared. Barry notes how newspapers all across the country consistently downplayed the danger, which ironically only further increased panic. (The pandemic is sometimes called the “Spanish flu,” because the press in neutral Spain was uncensored, and so reported freely on the disease.) The war effort overrode all of the warnings of disease experts; and by the time the disease struck many communities, most of the available doctors and nurses had been sent to the military.

Barry’s narration mainly focuses on the United States. Partly this is because he believes the disease originated there (there are several competing theories), partly because the disease’s impact in Europe was overshadowed by the war, and partly because of the abundance of easily available sources. I did wish he had spent more time on other countries—especially on India, which suffered horribly. The sections on science—both the history of science and a summary of what we now know about flu viruses—were in general quite strong. What was lacking, for me, was a discussion of the cultural impact of the disease.

But perhaps there is not much cultural impact to discuss. As Barry notes, no major novelist of the time—Hemingway, Fitzgerald, Lawrence—mentioned the pandemic in their works. I have noticed the same thing myself. I cannot recall a single mention of this flu in the biographies and autobiographies of people who lived through the pandemic, such as John Maynard Keynes or even John D. Rockefeller (who personally funded research on the disease). This is perhaps understandable in Europe, where the deaths from the pandemic were swallowed up in news of the war; but it seems odd elsewhere. What is more, the pandemic did not seem to exacerbate existing racial or class tensions. In many ways the virus seems to have swept through communities and then disappeared from memory.

(Barry does make one fairly controversial claim in the book: that Woodrow Wilson contracted the flu while negotiating the Treaty of Versailles, and that it caused him to capitulate to Clemenceau’s demands. If true, this would be a consequence of major historical importance.)

It is illuminating to compare the 1918 pandemic to the current crisis. There are many similarities. Both are caused by easily transmissible viruses, and both spread around the world. The H1N1 flu virus and the SARS-CoV-2 virus both infect the respiratory system, causing fever, coughing, and in severe cases pneumonia and ARDS (acute respiratory distress syndrome). In both cases, no vaccine is available and no known treatment is effective. As in 1918, doctors are turning desperately to other therapies and medicines—those developed for other, unrelated diseases like malaria—and as in 1918, researchers are publishing at a frantic pace, with no time for peer review. Police are again wearing masks, hospitals are again overrun, and officials are struggling to catch up with the progress of the virus.

But of course, there are many important differences, too. One is the disease itself. The 1918 flu was almost certainly worse than the novel coronavirus. It was more deadly in general, and it killed younger people in far greater numbers—which resulted in a much bigger dip in life expectancy. (Young people died because their immune systems overreacted in what is called a “cytokine storm.”) The 1918 flu also had a far shorter incubation period: the gap between infection and the first symptoms was often under 24 hours, and patients deteriorated far more quickly. Barry describes people being struck down within mere hours of showing their first symptoms. The challenge of the SARS-CoV-2 virus, however, is its very long incubation period—potentially up to two weeks—during which people may be infectious and yet show no symptoms. This makes it very difficult to keep track of who has it.

The explanation for this difference lies in the nature of the virus. A virus is basically a free-floating piece of genetic code encased in a protein shell. It needs to hijack animal cells in order to reproduce; and it infiltrates cells using proteins that link up with structures on the cells’ surface. Once inside, the virus replicates until the cell literally bursts, spilling virus into adjacent cells, which in turn become infected, and which in turn burst. Each burst can release thousands of copies. The rate at which the virus replicates within cells determines the incubation period (the time between first infection and first symptoms); and coronaviruses replicate significantly more slowly in animal cells, which explains the slower onset of symptoms. The flu virus’s greater speed also means that it changes faster, undergoing antigenic drift and antigenic shift, so that new strains are inevitable. The novel coronavirus is (likely) more stable.
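
To make this concrete, here is a toy back-of-the-envelope sketch of my own (Barry gives no such calculation, and every number below is an invented assumption) showing how the length of the replication cycle translates into the length of the incubation period:

```python
# Purely illustrative toy model: exponential within-host growth with invented
# parameters. The point is qualitative only: a longer replication cycle means
# more time to reach a symptom-causing viral load.
import math

def hours_to_symptomatic_load(burst_size, cycle_hours, start=1.0, threshold=1e9):
    """Hours needed for `start` virions, multiplying by `burst_size`
    every `cycle_hours`, to exceed `threshold` virions."""
    cycles_needed = math.log(threshold / start) / math.log(burst_size)
    return cycles_needed * cycle_hours

# Same (hypothetical) burst size; only the replication-cycle length differs.
print(hours_to_symptomatic_load(burst_size=1000, cycle_hours=8))   # ~24 hours
print(hours_to_symptomatic_load(burst_size=1000, cycle_hours=40))  # ~120 hours (about 5 days)
```

Real within-host dynamics are of course far messier than this, but the qualitative point stands: slower replication, slower onset of symptoms.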

Another potential difference is seasonality. Flu viruses come in seasonal waves. The 1918 virus struck first in spring, receded in summer, and then returned in autumn and one last time in the winter of 1919. Every wave hit very quickly—and then left just as quickly. Most cities saw a sharp drop-off in cases within about six weeks of their first patients. The seasonality of the 1918 flu was partly a result of the antigenic drift just mentioned, as the different waves were all at least subtly different strains of the virus. Atmospheric conditions—humidity and temperature—also presumably make some difference in the flu virus’s spread. COVID-19 may exhibit a very different pattern. It may, perhaps, be less affected by atmospheric conditions; and if it mutates and reproduces more slowly, it may linger in one long wave rather than several short ones. This is just my speculation.

Well, so much for the virus. How about us? The world has changed a lot since 1918, but not all of those changes have made us better prepared. Fast and cheap air travel has allowed the virus to spread more quickly. And economic globalization has not helped, either: both medicines and medical equipment are often produced overseas and then imported, leaving countries more vulnerable to supply-chain disruption than in the past. As countries and states compete for supplies, this vulnerability has become very apparent.

But of course we have many advantages, too. Many of the deaths in both pandemics come not from the viral infection itself, but from the secondary bacterial pneumonias to which the virus leaves us vulnerable. Antibiotics (which did not exist in 1918) can save many of these lives. Another advantage is medical care. The most severe cases in both pandemics progressed to ARDS, a condition with a nearly 100 percent mortality rate for those who do not receive intensive care (mechanical ventilation). Doctors in 1918 could administer oxygen, but far less effectively than we can now. Even so, with the best intensive care, the survival rate for ARDS is only around 40 to 60 percent. And our ability to administer intensive care is quite limited: the ventilator shortage has become a global emergency in itself, as hospitals are overrun.

Medical science has also advanced considerably. Now we can isolate the virus (which could not be done in 1918), test individuals for it, and work on a vaccine. However, testing has so far been unable to keep up with the virus, and the most optimistic estimates put an available vaccine about a year away. Arguably a much bigger advantage is information technology. The press is not censored—so citizens have a much better idea of the risks involved—and experts can communicate with each other in real time. We can coordinate large-scale societal responses to the pandemic, and can potentially even use technology to track individual cases. As we come to understand the virus better, we will be able to use more sophisticated statistical methods to track its progress. None of this was possible in 1918.

One thing that we will have to contend with—something that is hardly even mentioned in Barry’s book—is the economic toll that this virus will take. Even in the ugliest days of the 1918 pandemic, governments did not require businesses or restaurants to close. War preparations went on unabated. (In 1918, after years of slaughter and at the height of the war, life was simply cheaper than it is now.) Our societal response will likely mitigate the health crisis but will create a secondary economic crisis that may ultimately be more difficult to solve. The solutions to this crisis could be our most lasting legacies. Already Spain’s government is talking of adopting universal basic income. Though of course it is far too early to predict anything with confidence.

Comparisons with 1918 are partly depressing, and partly uplifting. Depressing, because we knew this was possible and did not prepare. Depressing, because so many governments have gone through the same cycle of early denial and disorganized response as they did back then. Uplifting, because we do know much more than we did. Uplifting, because—after our early fumbles—we are finally coordinating as a global community to deal with the crisis. Perhaps most uplifting of all, despite some ugly stories here and there, the crisis has revealed a basic sense of solidarity in the face of a universal threat. Hopefully, unlike in 1918, we will not do our best to forget this one.



View all my reviews

Review: Deadliest Enemy

Review: Deadliest Enemy

Deadliest Enemy: Our War Against Killer Germs by Michael T. Osterholm

My rating: 5 of 5 stars

This is a critical point in history. Time is running out to prepare for the next pandemic. We must act now with decisiveness and purpose. Someday, after the next pandemic has come and gone, a commission much like the 9/11 Commission will be charged with determining how well government, business, and public health leaders prepared the world for the catastrophe when they had clear warning. What will be the verdict?

If I had read this book in more normal circumstances, I do not know how I would have responded. Perhaps I would have been slightly unnerved, but I think I would have been able to sleep soundly by dismissing most of it as alarmist. In fact, I did just that a few months ago, when I read Bill Bryson’s book on the body and scoffed at his claim that another 1918-style pandemic was easily possible. Nowadays, however, reading this book is more depressing than anything. Those in the field saw this crisis coming from miles away, but few of us listened. The epidemiological community must feel rather like Cassandra right about now: uttering prophecies that nobody heeds.

(As Osterholm was responsible for most of the ideas in this book, and as it is written from his perspective, I will refer to him as the author in this review.)

This book attempted to be the Silent Spring for infectious diseases. That it did not succeed in doing so is attributable just as much to human nature as to the book itself. Limiting the use of pesticides is fairly easy and relatively painless for most of us. But mobilizing the political will necessary to prepare for health crises in the hypothetical future—preparations that would involve a great deal of money and many institutional changes—is not such an easy sell, especially since we had been lulled into a false sense of security. As is the case with climate change, the dangers seemed so remote and theoretical that for most of us it was difficult to even imagine them.

After witnessing what this new coronavirus has done to our entire way of life in a few short weeks, I was quite disposed to take Osterholm seriously. And I think the entire content of the book—not just the warnings about a potential pandemic—is valuable. Osterholm turns his attention to a wide array of threats: Zika, AIDS, yellow fever, typhoid, malaria, Ebola, MERS. We are vulnerable on many fronts, and we are generally not doing much to prepare.

One example is the growing list of diseases transmitted by mosquito bites. As modern transportation introduces disease-carrying mosquitos into ever more parts of the world, and global warming expands their geographic range, this will be an increasing concern. (Silent Spring may, ironically, have contributed to this problem.) Another worry is bio-terrorism. Now that we can see how paralyzing even a moderately lethal virus can be, imagine the damage that could be inflicted by a genetically modified virus. And the technology to edit genes is becoming cheaper by the year. We have already experienced bio-terrorism in the US on a relatively small scale, with the 2001 anthrax attacks; this is just a taste of what is possible. According to Osterholm, a mere kilogram of anthrax bacteria could potentially kill more people than an atomic bomb. And it would be far cheaper to acquire.

But these are not even the biggest threats. According to Osterholm, we face two virtual certainties: another flu pandemic, and the imminent ineffectiveness of antibiotics.

The latter is quite terrifying to consider. Antibiotics are not easy to discover, and our arsenal is limited. Meanwhile, bacteria constantly evolve in response to environmental pressures, including the use of antibiotics. It is inevitable that resistance to available antibiotics will increase; and this could have a profound effect on modern medicine. Even routine operations like knee replacements would be unsafe without effective antibiotics. Slight injuries—a scratch from a garden rose bush—could result in amputation or even death. And yet antibiotics continue to be widely prescribed for ailments they cannot treat, and given indiscriminately to livestock, which only accelerates the growth of bacterial resistance.

The other major threat (as we are learning) is a pandemic. Now, Osterholm was not precisely correct in predicting the cause of the next pandemic, since he thought it would be a flu virus (though he does have a good chapter on coronaviruses, and in any case a flu pandemic is still just as possible). But he is certainly correct in identifying our many structural weaknesses. He notes our lack of stockpiles and correctly predicts shortages of protective gear, face masks, and ventilators in the event of a pandemic. And though medical science has advanced a great deal since 1918, in many ways we are even more vulnerable than we were back then, most notably because of our supply chains. Since so many of our medicines and so much of our medical equipment—among other things—are produced overseas, shortages are inevitable if trade is disrupted.

Osterholm is quite illuminating in his discussion of pharmaceutical companies and their incentives. As private businesses, they have little to gain by investing in preventative vaccines or in new antibiotics. In the case of vaccines, this is because they must undergo thorough testing and pass FDA approval, requiring millions in investment, only to face uncertain demand once they hit the market. The case of SARS is instructive. After the disease was identified in 2002, companies rushed to develop a vaccine; but when SARS receded, interest disappeared, and the companies, cutting their losses, stopped work. We still do not have a SARS vaccine.

The incentive system is just as ineffective when it comes to antibiotics. Finding new antibiotics is costly; and since there are currently many cheap antibiotics on the market, a new patented antibiotic probably would not turn a large profit. Besides, effective antibiotic stewardship requires that we use new antibiotics sparingly, further limiting their profit potential. Drug companies have much more to gain by creating products that require continuous use, such as treatments for chronic conditions. Letting the free market decide which drugs get developed, therefore, is not the wisest policy. Osterholm advocates the approach the government already takes with weapons contracts, in which it essentially guarantees payment for any product that meets specifications.

Osterholm’s most ambitious idea for government funding is a new universal flu vaccine. The flu vaccine we are all familiar with is based on old technology, and can only provide protection against a few strains of flu. Scientists must essentially guess which strains will be circulating in a year; and they must do so every year. But Osterholm thinks there is good reason to believe that a universal flu vaccine is possible, and he recommends that we devote at least as much money to it as we devote to AIDS research. This seems very sensible to me, since the next pandemic may well come from a flu virus.

I have been summarizing Osterholm’s book, but I do not think I am doing justice to its emotional power. Now that I am living through the events that Osterholm predicts (in surprising detail), I feel a strange mixture of outrage and fear: outrage that governments did not listen when they had time, and fear that we will repeat the same mistakes once this current crisis is over. I cannot help but be reminded of another situation in which we comfortably ignore the dire warnings of scientists: climate change. My biggest hope for the current crisis, then, is that afterwards we will be more willing to heed the warnings of these nerds in lab coats.



View all my reviews