Priceless Benefits, Costly Mistakes:
What’s Wrong With Cost-Benefit Analysis?1
Frank Ackerman (Global Development and Environment Institute, Tufts University, USA)
© Copyright 2004 Frank Ackerman
The critique of economic theory is not just a theoretical problem. In the hands of conservatives such as the Bush administration, simplistic and misleading economic abstractions are incorporated into structures of political power. Ill-founded economic theories provide a seemingly scientific rationale for doing the wrong thing, time after time.
Consider the current abuse of cost-benefit analysis, which is now said to be essential for evaluation of health and environmental protection. John Graham, formerly head of the Harvard Center for Risk Analysis, is the Bush administration’s “regulatory czar,” charged with evaluating regulations proposed by federal agencies to be sure that the costs do not exceed the benefits. Graham has frequently sent regulations back for revision or additional analysis when he concluded that the proposed rules would fail a cost-benefit test. Unsurprisingly, the end result has been a slowing and weakening of environmental protection.
The concept of cost-benefit analysis has a soothingly reasonable sound to it: why shouldn’t we check that the benefits exceed the costs before adopting a new regulation? But move beyond comfortable rhetoric to rigorous theory, and the case for cost-benefit analysis of regulations fails on at least three grounds.
Failure #1: Incremental movement toward an unattainable theoretical ideal may not be desirable. Cost-benefit analysis of health and environmental measures requires monetization of non-monetary benefits, a process that is the source of most of the difficulties in the analysis (as described below). It might appear that monetizing and internalizing environmental externalities is bringing the economy closer to the welfare optimum described by the Arrow-Debreu “fundamental theorems of welfare economics.” Yet that optimum depends on a host of unrealistic assumptions, including perfect competition among small, powerless firms in every industry, perfect information for all market participants, universal adherence to an implausible and unattractive model of consumer behavior, and perfect internalization of all externalities (not just the few that environmental economists have studied and politicians have accepted).
Even if all these assumptions are granted, economic theorists have known for thirty years that the market equilibrium may be neither unique nor dynamically stable.2 Perhaps most damning of all, the “theory of the second best,” known to economists since the 1950s, shows that if any aspects of the free-market ideal are fundamentally unattainable (as is of course the case), then incremental movement toward that ideal is not necessarily a welfare improvement.3 This point is not limited to environmental policy: the theory of the second best is a powerful argument against incremental market-based or market-oriented policy measures of any type. Such measures may or may not be desirable on other grounds, but they cannot logically be defended as small steps on the road to an idealized competitive market, since that ideal is clearly unattainable.
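The logic of the second best can be made concrete with a toy numerical sketch (my own construction, not from the text): two substitute goods, each with an uncorrected externality, where “internalizing” only one of them makes matters worse. All prices, damages, and quantities here are invented for illustration.

```python
# Toy second-best example: goods A and B are perfect substitutes,
# and each generates an uncorrected externality. The prices, damage
# figures, and fixed demand are assumptions chosen for clarity.

def total_damage(tax_a: float, tax_b: float) -> float:
    PRICE_A, PRICE_B = 1.00, 1.10    # pre-tax prices (assumed)
    DAMAGE_A, DAMAGE_B = 1.0, 3.0    # marginal external damage per unit (assumed)
    QUANTITY = 100                   # fixed total demand, split all-or-nothing
    # Consumers buy whichever substitute is cheaper after taxes.
    if PRICE_A + tax_a <= PRICE_B + tax_b:
        return QUANTITY * DAMAGE_A
    return QUANTITY * DAMAGE_B

uncorrected = total_damage(0.0, 0.0)   # everyone buys A: damage = 100
partial_fix = total_damage(1.0, 0.0)   # tax A at its marginal damage: damage = 300
```

Taxing A alone, even at exactly its marginal damage, drives all consumption to B, whose uncorrected externality is three times worse; the “incremental movement toward the ideal” raises total damage from 100 to 300. Only correcting both distortions at once improves welfare, which is precisely the point of Lipsey and Lancaster's result.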
Failure #2: There is no crisis of excessive regulatory costs that needs to be addressed. The argument for cost-benefit analysis of public policy often involves the suggestion that we can’t afford to do (regulate) everything, so we should be sure we’re getting the most bang for the buck. This claim fails for two distinct reasons. First, there is no single budget, no lump sum of resources that is being allocated to one regulation or another by cost-benefit analysis. Most of the costs of environmental compliance are borne by the private sector, typically by the firms that cause pollution. Cost-benefit analysis of cleaning up the Hudson River in New York involves costs that might be imposed on the industrial corporations that pollute the river. Cost-benefit analysis of the use of harmful pesticides in California agriculture involves costs that might be imposed on agribusiness, in a different industry from the Hudson polluters and thousands of miles away from New York. If one of these measures passes a cost-benefit test and the other does not, no funds are transferred from one industry to the other; one industry just ends up with less regulation, more freedom to pollute, and more profits.
In some ultimate sense, it is true that overall resources are limited and we can’t afford to spend everything we’ve got on environmental protection. However, no society has ever approached this limit; no significant policy proposal has ever advocated anything of the sort. The limit on aggregate resources is so far from being a binding constraint on environmental policy that it can be ignored in practice, just as our inability to exceed the speed of light can be ignored in the process of automobile design.
Second, the most common evidence for the crisis of regulatory costs is simply erroneous. The tables showing widely differing costs per life saved by different regulations are so consistent with the worldview of mainstream economics that they have been repeatedly reprinted with little or no critical scrutiny. As my co-author Lisa Heinzerling has demonstrated, these tables and their claims of regulatory inefficiency rest on just a few widely cited studies, which commit a series of empirical errors in their haste to establish their desired conclusion.4 For example, many of the expensive-looking regulations in the familiar tables of regulatory costs are actually proposals that were never adopted, whereas the more cost-effective rules, such as removal of lead from gasoline, have often been completed and cannot be repeated for additional savings. There are no lives or money to be saved by moving imaginary resources from expensive proposals that were never adopted to cheaper regulations that have already been completed.
Failure #3: Compensation tests and “potential Pareto improvement” do not justify cost-benefit analysis. One of the underlying assumptions of cost-benefit analysis is that distribution can be ignored: costs and benefits to all economic agents are indiscriminately added together in calculating the bottom-line evaluation for society. This indifference to distribution is justified by the Kaldor-Hicks compensation tests: if the winners from a policy could compensate the losers, leaving everyone as well or better off, then the policy is a potential Pareto improvement. There is no requirement that the winners actually pay compensation, and all too often, they choose not to do so; the Pareto improvement normally remains purely potential. As Amartya Sen has insisted, this potential improvement may not in fact be desirable. A policy that makes the rich much richer and the poor a little poorer is a potential Pareto improvement, but with enough of such improvements, the poor will starve. (If compensation is paid to the losers, then the policy becomes an actual, not just a potential, Pareto improvement.)
This and other problems with the Kaldor-Hicks compensation tests have long been known to theorists. Yet the practice of cost-benefit analysis continues to be justified in terms of the theory of compensation tests, along with the supposed crisis of regulatory costs and the general desirability of moving toward a competitive optimum. An old joke describes economists as seeing something working in practice, and asking whether it is possible in theory. In this case the joke is being told in reverse: having established that cost-benefit analysis of environmental protection is impossible in theory, its advocates have set out to see if it works in practice.5
Why Benefits Are Priceless
In practice, cost-benefit analysis of health and environmental protection rests on an implausible process of monetization of priceless benefits. Human life, health, the natural world, and the well-being of future generations are priceless – not infinite in value, but fundamentally incommensurable with money. Here I will only summarize some of the arguments that Lisa Heinzerling and I have made at greater length elsewhere:6
It is not meaningful to put a dollar value on human life. The benefits of many environmental regulations include avoided human deaths; the attempt to monetize benefits and compare them to costs requires a dollar value for life and death. Under the Clinton administration, the US Environmental Protection Agency (EPA) concluded that the answer was $6.1 million, based on a literature review of a number of empirical studies. Most of the studies looked at the risk premium in wages for jobs that had differing risks of death, holding everything else constant. If the average male blue-collar worker gets a risk premium of about 30 cents per hour over equivalent risk-free work, that is arithmetically equivalent to $6 million per life.
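The arithmetic behind that equivalence can be reconstructed in a few lines. The 30-cent hourly premium comes from the text; the 2,000-hour work year and the 1-in-10,000 extra annual fatality risk are illustrative assumptions, chosen so the calculation reproduces the $6 million figure:

```python
# Back-of-the-envelope "value of a statistical life" from wage-risk data.
hourly_premium = 0.30              # dollars per hour for the riskier job (from text)
hours_per_year = 2_000             # a full-time work year (assumed)
annual_premium = hourly_premium * hours_per_year      # $600 per year

extra_fatality_risk = 1 / 10_000   # additional annual risk of death (assumed)

# If 10,000 workers each accept $600/year to bear one expected death
# among them, the implied price of that statistical death is:
value_of_statistical_life = annual_premium / extra_fatality_risk
# 600 / (1/10,000) = 6,000,000 dollars
```

The fragility of the method is visible in the sketch itself: halve the assumed risk differential or double the assumed hours, and the “value of life” swings by millions of dollars.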
The Bush administration, leaving no methodology unturned in its quest for lower benefits and weakened environmental protection, decided that it preferred the results of studies in which people are asked to assign monetary values to small hypothetical risks of death; this yields numbers as low as $3.7 million per head, or, in a particularly controversial version, only $2.6 million for those over 70. These numbers do not offer a reasonable description of society’s obligation to control and eliminate life-threatening health and environmental hazards. Indeed, there is no reason to think that society should spend the same amount of money on avoiding every type of preventable death, ignoring the many differences in context that determine the meaning of and responsibility for these deaths.
Valuation of non-fatal health hazards is conceptually and technically flawed. An enormous number of diseases and health conditions are affected by environmental policy measures; there is little hope of valuing them all. Health economists’ attempts to measure QALYs (Quality Adjusted Life Years) have led to paradoxes and inconsistencies, and have not been widely accepted. Willingness-to-pay measures favored by environmental economists have foundered on the impossibly large data requirements, as well as underlying conceptual flaws. In EPA’s cost-benefit analysis of removing arsenic from drinking water, the analysts could not find a value for avoiding a non-fatal case of bladder cancer, and (as usual) did not have sufficient time or budget to do a new empirical study. So they simply used a value that had been developed for chronic bronchitis more than a decade earlier – based on a shopping mall survey in which respondents were asked whether they preferred their current neighborhood, or a similar one with a lower cost of living and a higher rate of bronchitis.
Borrowing of values estimated for other externalities is called “benefits transfer” by practitioners. If, in elementary or high school, you copied someone else’s homework when you didn’t have time to do your own, you were engaged in “homework transfer.” As the practitioners discover at times, homework transfer can lead to grief if you do it carelessly and copy the answer to the wrong question. Despite its proclivity for similar mistakes, benefits transfer is ubiquitous in cost-benefit analysis, since in practice there is never enough time or funding to do a new, full-blown contingent valuation study for each relevant externality.
The natural world has a very large but nonquantifiable value to many people. In valuing impacts on nature, economists distinguish between use values and non-use values, such as the value placed on the existence of a species or wilderness. Use values are sometimes well-defined, but often small. Non-use values are often large, but poorly defined. In the case of the Exxon Valdez oil spill in Alaska, the losses to people who worked and lived in the affected area were estimated at $300 million, while the existence value of the area to the US population – the amount that American households were reportedly willing to pay to prevent a similar oil spill in a similar area – was $9 billion, or 30 times as large. If protection against oil spills is judged by a cost-benefit test, the existence value of the affected region justifies 30 times as much environmental protection as the use value.
But precise numerical existence values are conceptually problematical, as demonstrated by a brief digression on whales. The “use value” of whales is reflected in the amounts that people pay to go on whale-watching trips. This is an established tourist industry, with annual revenues of $160 million in the US. On the other hand, the existence of just one species, humpback whales, is, according to one study, worth $18 billion to the US population – more than 100 times the total revenues of whale-watching trips.
Suppose that you have bought the last ticket on a whale-watching trip, and someone offers to buy your ticket from you for twice the price you paid for it. You may or may not accept, but the offer is not offensive. Now suppose that someone offers $36 billion for the right to hunt and kill all the humpback whales in the ocean. Although this offer is twice the existence value, it would strike most people as offensive. The differing reactions reveal that the two types of “prices” are not comparable. The use value of whales is a real number; a seat on a whale-watching trip is a commodity with a meaningful market price. The existence of whales is enormously valuable to many people, but the $18 billion figure contains no quantitative information; it is not the price of a commodity that can be bought or sold. Existence values are real, but they are not really numbers. Some other way must be found to reflect those values in public policy.
Discounting distorts and trivializes future health and environmental outcomes. The process of discounting future costs and benefits is essential for short- and medium-term financial calculations. But the same mathematical techniques yield nonsensical results when applied to the far future, and to non-monetary values. There are two distinct problems that result from inappropriate discounting of the environment.
First, discounting is often used to suggest that events a century or two in the future don’t matter today. Discounting at any positive interest rate makes serious intergenerational harms such as the future impact of climate change look relatively small in present value terms. The conceptual error here stems from forgetting the rationale behind discounting: the calculation assumes that a single observer compares (usually) costs now and benefits later, coming to his/her own conclusion about whether to accept the tradeoff. But there is no individual who will have personal experience of both the costs of climate change mitigation today and the benefits that will be enjoyed one hundred years from now. Another method is needed for decision-making about future generations.
Second, in the analysis of exposure to toxic chemicals, it has become common to discount diseases such as cancer over their latency period. Since cancers often show up 20 years or more after the exposure that causes them, discounting has the effect of sharply reducing the “present value” of the health benefits from controlling carcinogens. Advocates of risk analysis and cost-benefit analysis argue that the benefits should be interpreted as the reduction of risk of death for large numbers of people, not the reduction of actual deaths for a much smaller number. While this argument is itself problematical (it ignores the different experience of the people who will actually die), it implies that health benefits should not be discounted over the latency period. Risk is reduced at the time when exposure to carcinogens is reduced, typically soon after a policy change – not decades later when there is a reduction in the appearance of cancers.
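The scale of both distortions is easy to check with the standard discounting formula, PV = FV / (1 + r)^t. A minimal sketch; the 5% and 3% rates are illustrative, and the $6.1 million figure simply echoes the EPA life value cited earlier:

```python
def present_value(future_value: float, rate: float, years: int) -> float:
    """Standard exponential discounting: PV = FV / (1 + r)^years."""
    return future_value / (1.0 + rate) ** years

# $1 million of climate damage a century from now, at a 5% discount rate,
# shrinks to under $8,000 in present-value terms.
century = present_value(1_000_000, 0.05, 100)

# A cancer death valued at $6.1 million, "discounted" over a 20-year
# latency period at 3%, loses nearly half its weight in the analysis.
latency = present_value(6_100_000, 0.03, 20)
```

The same formula that sensibly compares a payment today with one next year thus reduces a century-distant catastrophe to pocket change, and quietly writes down the deaths of future cancer victims by the length of their disease's latency.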
Theoretical Critiques and Practical Alternatives
Criticism of cost-benefit analysis inevitably leads to questions about the alternatives. If monetization of externalities, in the style favored by most environmental economists, is not a reliable basis for public policy, then how should decisions be made? One answer is that there is no need for a new decision-making system, since the old one works so well. The environmental laws and regulations of the last thirty-odd years have been extremely successful, reducing pollution and protecting health and nature; although adopted, for the most part, without complex economic calculations, none of these protective measures have bankrupted us or proved unaffordable.
While this simple response has considerable merit, there is more that can be said about right and wrong ways to make policy decisions. Three strands of theoretical critique of the cost-benefit methodology point toward desirable features of an alternative.
Values of risks and damages depend on context; they cannot be measured in general. Underlying cost-benefit analysis, and the related field of risk analysis, is the assumption that equal damages should be valued equally in every context. If a death is worth X dollars, whatever X may be, then 10 deaths are always worth 10X, regardless of how and why they occur. It turns out that people do not think this way: 20 times as many Americans died from diabetes in 2001 as from terrorism on September 11, yet there is no doubt which of these categories of deaths mattered more to public life and policy. To cite another example, the risk of death in the US is almost identical from working in the construction industry and from downhill skiing (about one death per two million person-days), but there is a much greater public responsibility to protect construction workers on the job than skiers on the slopes.
The implication of this critique is that there is no hope of creating a purely quantitative, context-independent system of decision-making. Context is everything in evaluating health and environmental damages; externalities have to be valued and addressed “in the field,” in the context in which they actually occur, not collected for later study in the laboratory. A political, not an economic, process is required to make the intrinsically context-dependent policy decisions.
Disaggregation of benefits makes the comparison of costs and benefits more opaque. There is a tautological sense in which everyone does “cost-benefit analysis” all the time – not monetizing benefits, but implicitly comparing costs and benefits of possible actions, perhaps according to rules of thumb or unarticulated personal standards. In this broad sense, every democratic decision can be said to have passed a cost-benefit test: policies are only adopted if the voters prefer the benefits of the policies to the costs.
The formal application of cost-benefit analysis to public policy employs a much narrower and more controversial methodology, assuming that the best way to compare costs and benefits is to disaggregate benefits into “elementary particles” of value – numbers of deaths and serious diseases avoided, hectares of wetlands preserved, and so on. Then the analysts supposedly can monetize each particle of value, and finally reassemble them into complex molecules of benefits, to be weighed against the costs.
This disaggregated methodology has failed in practice. It does not yield transparent or objective evaluations of benefits; rather, it renders the discussion of benefits obscurely technical, excluding all but specialists from participation. At the same time, political debate continues behind the veil of technicalities, as rival experts battle over esoteric valuation problems.
Rather than engaging in the hopeless effort to refine the disaggregated benefit estimates, we could ask people to judge costs and benefits on a more aggregated or holistic basis. Consider a policy proposal, debated in 2002-03, that would have increased the costs of many US power plants, in order to reduce the huge number of fish killed by their cooling water intake systems. One could, as EPA did, spend several person-years of effort in modeling the wide variety of fish populations and aquatic ecosystems, and in exploring intricately indirect ways to assign precise monetary values to the many affected categories of fish (most of which are not sold in markets). This led, in practice, only to more debate and disagreement about the minutiae of fish valuation. Or one could present the information on the costs of protecting fish, and the expected effect on electric bills, along with a description of the millions of fish that could be saved annually. Then voters, or their representatives, could decide whether the benefits as a whole – not monetized, but described in their natural units – justified the costs as a whole.
Precise estimates of future environmental impacts are frequently unavailable. Cost-benefit calculations rest on the best available estimates of health and environmental impacts. Much of the effort in cost-benefit analysis is required to develop these estimates; important effects are often omitted for lack of sufficiently precise data. EPA’s analysis of arsenic in drinking water recognized that at least a dozen serious diseases are linked to arsenic, but found sufficient data to estimate the numerical incidence of only two diseases, bladder and lung cancer. For lack of data, the other ten diseases were implicitly valued at zero.
An apparently common-sense, intuitively Bayesian approach to statistics can be seen here: why not use whatever information we have to develop the best possible estimates of impacts? But the focus on precise point estimates distracts attention from the tremendous uncertainty that surrounds many important impacts. Public health and environmental policy have always been matters of decision-making under uncertainty. The more uncertain we are, the more important it becomes to plan for the credible worst-case outcome. People act this way in daily life, in buying insurance against low-probability but high-cost outcomes like house fires or car crashes. (It’s possible in theory, too: just assume that people are liquidity constrained and risk averse, and the math works out perfectly.) Even such ordinary steps as arriving early at the airport or for an important appointment reflect precautionary approaches, based on planning for the worst, not playing the averages.
Cost-benefit analysis typically asks, what is the absolutely most likely outcome? But recognizing the pervasive uncertainty in our estimates and forecasts, we should instead be asking, what is the worst outcome that is at least as likely as risks that people normally pay to insure themselves against? Environmental activists are increasingly discussing the “precautionary principle” as a basis for decision-making; they might make more headway referring to it as the insurance principle.
Finally, in addition to these new directions, it is important to remember that the environmental decision-making of recent decades has been a remarkable success, without help from sophisticated new decision-making techniques. It may be a novel experience for critics of established economic theory to find themselves in the classically conservative role of defending history and tradition. (I’ve hardly been able to adjust to it myself.) But in the arena of US environmental policy, the radicals who want a sweeping, fundamental break with past practice are to be found in the White House and the halls of Congress, not outside in the street. The Clean Air Act, the Clean Water Act, and all the rest have, at entirely affordable cost, made you and your family much healthier. Don’t leave home without them.
1. This article draws extensively on a book I have recently co-authored: Frank Ackerman and Lisa Heinzerling, Priceless: On Knowing the Price of Everything and the Value of Nothing (The New Press, 2004).
2. Frank Ackerman, “Still Dead After All These Years: Interpreting the Failure of General Equilibrium Theory”, Journal of Economic Methodology 9 no. 2 (June 2002), reprinted in Frank Ackerman and Alejandro Nadal, The Flawed Foundations of General Equilibrium: Critical Essays on Economic Theory (Routledge, 2004).
3. R. G. Lipsey and Kelvin Lancaster, “The General Theory of Second Best”, Review of Economic Studies 24 (1956), 11-32.
4. Lisa Heinzerling, “Regulatory Costs of Mythic Proportions”, 107 Yale Law Journal (1998); Lisa Heinzerling and Frank Ackerman, “The Humbugs of the Anti-Regulatory Movement”, 87 Cornell Law Review, 648-670 (2002); Lisa Heinzerling, “Five-Hundred Life-Saving Interventions and Their Misuse in the Debate Over Regulatory Reform”, 13 Risk: Health, Safety & Environment 151 (Spring 2002). For a summary of this work, see Priceless, Chapter 3.
5. This point was made, in almost these words (though not as a joke), by Eric Posner, a legal scholar and leading advocate of cost-benefit analysis, in a recent debate on the subject at the University of Chicago. After acknowledging the theoretical weakness of the case for cost-benefit analysis, Posner maintained that it was nonetheless important to use it in practice.
6. The points made in this section are elaborated and documented in Priceless.