What economists don’t get about climate change

Economists tend to see climate change as a big optimization problem: Weigh the potential costs of future disasters against the benefits of fossil-fueled economic growth, and find a price of carbon that will balance the two. Unfortunately, it’s an illusory goal.

Consider, for example, a recent study by Yale University’s Kenneth Gillingham and colleagues. Using a collection of so-called “integrated” models of climate and the economy, they seek to get a better handle on how various uncertainties — in weather, population growth and technological development — might affect the price that policy makers should put on carbon. Their conclusion: No matter what happens, the optimal price in 2020 would probably be no more than about $50 per ton.

The paper’s appearance may be timed to influence policy makers at the United Nations Climate Change Conference in Paris, which begins at the end of this month. It really shouldn’t, because it feigns certainty in areas where none is to be had.

Granted, such integrated models include some realistic climate physics and economics. Yet their builders inevitably face crucial questions about which we know very little. For example, just how sensitive are global temperatures to the addition of further carbon dioxide? And how much economic damage can we expect from a temperature rise of, say, 2 degrees or 5 degrees?


Climate scientists believe that, given the current state of the earth, average temperatures should rise about 1 to 2.1 degrees Celsius for every trillion tons of carbon dioxide added. But the trajectory could point sharply upward if a hotter planet kicked up nonlinear feedbacks, such as melting ice sheets, which could accelerate warming by making polar regions absorb more sunlight. Predicting the economic damage from any particular rise in temperature entails similar problems.
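To see what that roughly linear relationship implies, here is a back-of-envelope sketch. The sensitivity range (1 to 2.1 degrees per trillion tons of CO2) comes from the paragraph above; the emissions figure fed in is an arbitrary illustration, and the whole calculation assumes no nonlinear feedbacks ever kick in.

```python
def warming_range(trillion_tons_co2):
    """Return a (low, high) warming estimate in degrees Celsius,
    assuming warming scales linearly with cumulative CO2 emissions."""
    low_sensitivity = 1.0   # deg C per trillion tons CO2 (low end)
    high_sensitivity = 2.1  # deg C per trillion tons CO2 (high end)
    return (low_sensitivity * trillion_tons_co2,
            high_sensitivity * trillion_tons_co2)

# Illustrative input: two trillion tons of cumulative emissions.
low, high = warming_range(2.0)
print(f"Estimated warming: {low:.1f} to {high:.1f} deg C")
```

Even under the linearity assumption, the uncertainty band more than doubles between the low and high sensitivities, and any feedback would push the high end further up.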

Then there’s the issue of discounting — that is, figuring out the present value of future costs and benefits. Different choices can lead to wildly divergent recommendations. In 2006, UK economist Nicholas Stern, using one form of discounting, concluded that a strong and immediate reduction in emissions was needed. In contrast, U.S. economist William Nordhaus (a co-author of Gillingham’s) has used a stronger form of discounting to argue that it’s optimal to do little about climate change now, and more later. This conclusion, as the recent Nobel Prize winner Angus Deaton noted, reflects the extreme view that markets can be trusted as infallible guides.
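To make the stakes of that choice concrete, here is a minimal sketch of standard exponential discounting. The damage figure and the two rates below are illustrative stand-ins for the low and high ends of the debate, not numbers taken from Stern’s or Nordhaus’s actual analyses.

```python
def present_value(future_cost, annual_rate, years):
    """Discount a cost incurred `years` from now back to today's dollars."""
    return future_cost / (1 + annual_rate) ** years

damage = 1e12  # hypothetical $1 trillion in climate damage, 100 years out

pv_low_rate = present_value(damage, 0.014, 100)   # low discount rate
pv_high_rate = present_value(damage, 0.05, 100)   # high discount rate

print(f"At 1.4%: ${pv_low_rate / 1e9:.0f} billion today")
print(f"At 5.0%: ${pv_high_rate / 1e9:.0f} billion today")
print(f"Ratio: {pv_low_rate / pv_high_rate:.0f}x")
```

The low rate values that future trillion-dollar loss at hundreds of billions today; the high rate shrinks it to single-digit billions — a difference of more than an order of magnitude from one parameter choice, before any climate physics enters the picture.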

To produce any result at all, integrated assessment models must make specific assumptions about all these parameters. The result is what computer scientists call “garbage in, garbage out” — the answer merely reflects the choices made by the model’s builders. MIT economist Robert Pindyck pointed out this problem a couple of years ago, concluding that the models are “close to useless” for informing policy. Nonetheless, research programs lumber onward whether they’re really useful or not.

It’s good to see economists trying to acknowledge uncertainty, but they need to be bolder. Problems like climate change are far too complex to lend themselves to optimal solutions. In fact, psychologists widely agree that individuals facing complex problems generally make better decisions using simple heuristics or rules of thumb, rather than falsely precise calculations. That’s the wise strategy in situations where one can’t even list all the possible alternatives and consequences, let alone their probabilities.

A few years ago, scientists at the RAND Corporation developed a set of practical principles for coping with complexity and uncertainty — to give us a chance to learn, to adapt, and to shape the future to our liking. In tackling climate change, such a flexible approach can yield policies that, though not optimal for any particular future, will serve us well regardless of which one arrives.
