Prometheus: Scientific Assessments Archives

A brief account of an aborted contribution to an ill-conceived debate
   in Author: Others | Climate Change | Science + Politics | Scientific Assessments July 25, 2008

The IPCC, Scientific Advice and Advocacy
   in Author: Pielke Jr., R. | Climate Change | Science + Politics | Scientific Assessments | The Honest Broker July 09, 2008

What the CCSP Extremes Report Really Says
   in Author: Pielke Jr., R. | Climate Change | Disasters | Scientific Assessments June 20, 2008

Why Costly Carbon is a House of Cards
   in Author: Pielke Jr., R. | Climate Change | Energy Policy | Science + Politics | Scientific Assessments | Technology Policy June 12, 2008

Visually Pleasing Temperature Adjustments
   in Author: Pielke Jr., R. | Climate Change | Risk & Uncertainty | Scientific Assessments June 02, 2008

Real Climate on Meaningless Temperature Adjustments
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments June 01, 2008

Does the IPCC’s Main Conclusion Need to be Revisited?
   in Author: Pielke Jr., R. | Climate Change | Science + Politics | Scientific Assessments May 29, 2008

Homework Assignment: Solve if you Dare
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments May 23, 2008

Nature Letters on PWG
   in Author: Pielke Jr., R. | Climate Change | Energy Policy | Scientific Assessments | Technology Policy May 22, 2008

World Bank and UK Government on Climate Change Implications of Development
   in Author: Pielke Jr., R. | Climate Change | International | Scientific Assessments | Technology and Globalization May 22, 2008

An *Inconsistent With* Spotted, and Defended
   in Author: Pielke Jr., R. | Climate Change | Disasters | Prediction and Forecasting | Scientific Assessments May 21, 2008

Do IPCC Temperature Forecasts Have Skill?
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments May 19, 2008

Old Wine in New Bottles
   in Author: Pielke Jr., R. | Climate Change | Energy Policy | Prediction and Forecasting | Scientific Assessments May 19, 2008

The Politicization of Climate Science
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Science + Politics | Scientific Assessments May 16, 2008

Comparing Distributions of Observations and Predictions: A Response to James Annan
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments May 15, 2008

Lucia Liljegren on Real Climate's Approach to Falsification of IPCC Predictions
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments May 14, 2008

How to Make Two Decades of Cooling Consistent with Warming
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Risk & Uncertainty | Scientific Assessments May 12, 2008

Blinded By Assumptions
   in Author: Pielke Jr., R. | Scientific Assessments May 01, 2008

Climate Experts Debating the Role of Experts in Policy
   in Author: Pielke Jr., R. | Science + Politics | Scientific Assessments | The Honest Broker January 31, 2008

Updated IPCC Forecasts vs. Observations
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 26, 2008

New Measures for Innovation
   in Author: Bruggeman, D. | Scientific Assessments January 23, 2008

Temperature Trends 1990-2007: Hansen, IPCC, Obs
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 18, 2008

UKMET Short Term Global Temperature Forecast
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 16, 2008

Verification of IPCC Sea Level Rise Forecasts 1990, 1995, 2001
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 15, 2008

James Hansen on One Year's Temperature
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 14, 2008

Updated Chart: IPCC Temperature Verification
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 14, 2008

Pachauri on Recent Climate Trends
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 14, 2008

Verification of IPCC Temperature Forecasts 1990, 1995, 2001, and 2007
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 14, 2008

Real Climate's Two Voices on Short-Term Climate Fluctuations
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 11, 2008

Verification of 1990 IPCC Temperature Predictions
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 10, 2008

Forecast Verification for Climate Science, Part 3
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 09, 2008

Forecast Verification for Climate Science, Part 2
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 08, 2008

Forecast Verification for Climate Science
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 07, 2008

Is there any weather inconsistent with the scientific consensus on climate?
   in Author: Pielke Jr., R. | Climate Change | Scientific Assessments January 01, 2008

On the Political Relevance of Scientific Consensus
   in Author: Pielke Jr., R. | Climate Change | Risk & Uncertainty | Science + Politics | Scientific Assessments December 21, 2007

Rajendra Pachauri, IPCC, Science and Politics
   in Author: Pielke Jr., R. | Science + Politics | Scientific Assessments | The Honest Broker December 19, 2007

A Second Response from RMS
   in Author: Pielke Jr., R. | Disasters | Prediction and Forecasting | Scientific Assessments December 17, 2007

RMS Response to Forecast Evaluation
   in Author: Others | Disasters | Prediction and Forecasting | Scientific Assessments December 07, 2007

Revisiting The 2006-2010 RMS Hurricane Damage Prediction
   in Author: Pielke Jr., R. | Disasters | Prediction and Forecasting | Risk & Uncertainty | Scientific Assessments December 06, 2007

New Publication
   in Author: Pielke Jr., R. | Climate Change | Disasters | Scientific Assessments August 17, 2007

Reorienting U.S. Climate Science Policies
   in Author: Pielke Jr., R. | Climate Change | R&D Funding | Scientific Assessments May 10, 2007

What does Consensus Mean for IPCC WGIII?
   in Author: Pielke Jr., R. | Climate Change | Science + Politics | Scientific Assessments April 23, 2007

Some Views of IPCC WGII Contributors That You Won't Read About in the News
   in Author: Pielke Jr., R. | Climate Change | Scientific Assessments April 18, 2007

Laurens Bouwer on IPCC WG II on Disasters
   in Author: Pielke Jr., R. | Climate Change | Disasters | Scientific Assessments April 17, 2007

Who Said This? No Cheating!
   in Author: Pielke Jr., R. | Climate Change | Scientific Assessments January 06, 2007

Misrepresenting Literature on Hurricanes and Climate Change
   in Author: Pielke Jr., R. | Climate Change | Disasters | Scientific Assessments December 18, 2006

Useable Information for Policy
   in Author: Pielke Jr., R. | Climate Change | Science + Politics | Scientific Assessments December 15, 2006

Inside the IPCC's Dead Zone
   in Author: Pielke Jr., R. | Climate Change | Science Policy: General | Scientific Assessments December 08, 2006

A(nother) Problem with Scientific Assessments
   in Author: Pielke Jr., R. | Climate Change | Scientific Assessments June 23, 2006

July 25, 2008

A brief account of an aborted contribution to an ill-conceived debate

A guest post by Dennis Bray and Hans von Storch

The July 2008 newsletter of the American Physical Society (APS) opened a debate concerning the IPCC consensus on anthropogenically induced climate change. We responded with a brief comment on the state, and the changing state, of that consensus as indicated by two surveys of climate scientists. Data were presented concerning climate scientists' assessments of the understanding of atmospheric physics and climate-related processes, their level of agreement with the IPCC as representative of consensus, and their level of belief in anthropogenic warming. (The full manuscript is available here.) Our comment was summarily dismissed by the editors as polemic, political and unscientific. The following is a brief account of this episode.

The APS Forum on Physics and Society describes itself as "a place for discussion and disagreement on scientific and policy matters". Its Newsletter of July 2008 began a debate "concerning one of the main conclusions" of the IPCC. The intent of the debate was clearly evident in the statement:

There is a considerable presence within the scientific community of people who do not agree with the IPCC conclusion that anthropogenic CO2 emissions are very probably likely to be primarily responsible for global warming ...

There is no reference indicating how this statement was determined or how its validity might be known. It is very probably likely to be primarily ethereal.

The intended debate seemed to be aimed at prompting a discussion, or perhaps, as the two papers to date seem to suggest, an evaluation of the methods employed in reaching the IPCC conclusion. Two invited articles were published to set off the debate, one pro and one contra to the IPCC conclusion. Oddly enough, neither paper appears to be authored by a climate scientist per se, although both present a detailed discussion of atmospheric physics. Subsequent contributions were invited from the "physics" community for "comments or articles that are scientific in nature."

So here we have two editors (who are themselves not climate scientists) soliciting invited papers from authors who, as far as we know, have never had any peer-reviewed publications pertaining to climate science, setting off a debate concerning the consensus in the climate sciences by what appears to be a mere declaration of the current state of the consensus. The editors of the newsletter should be commended, however, for at least stating that the "correctness or fallacy of that [the IPCC] conclusion has immense implications for public policy."

Our interest was drawn by two statements found on the web page: 1. the Forum's declaration that it is "a place for discussion and disagreement on scientific and policy matters", and 2. the statement "There is a considerable presence within the scientific community of people who do not agree with the IPCC conclusion that anthropogenic CO2 emissions are very probably likely to be primarily responsible for global warming ...". We have been working for some time in the area of assessing levels of consensus in the climate science community and therefore decided to submit a brief (and rapidly rejected) comment (PDF) to the debate.

Our stance concerning "consensus" (on any matter) is:

1. Consensus and certainty are two different concepts, which sometimes are parallel, although often not.

2. Consensus is simply a level of agreement among practitioners and might be subject to change over time.

3. Consensus is a level of agreement in belief of the relevance of the theory to the issue and the causal relationship inherent in the theory,

and in particular reference to climate science

4. Climate change science is considered to be multidisciplinary, and therefore the knowledge claims comprising the consensus are considered to be multidimensional, that is, not able to be captured in a single statement.

In short, consensus is not as simple as a yes-or-no response. It is a negotiated outcome of multiple levels of expertise.

Now, returning to our submission, or more precisely, the rejection of our submission, the first rejection arrived in a matter of hours. Short and to the point, it said:

The original invitation was for participation in a scientific debate, not a political one. As your attached piece is not primarily of a scientific nature, we cannot consider it for publication in our newsletter. In my editorial comments for the July 2008 issue, I emphasized that we are not interested in publishing anything of a polemical or political nature.

The "emphasized" points are of interest. The paper was neither polemical nor political, as we invite the readers of this blog to verify. However, giving the editors the benefit of the doubt, we asked for clarification. Again the APS response was quite rapid:

Your article [...] is not about technical issues concerning climate research. Instead, it is about the opinions of scientists. I would be glad to consider publication of articles, comments, or letters from you that address specific technical issues connected with climate research.

Now, aren’t the "opinions of scientists" the foundation of consensus? The "opinions of scientists" in our analysis represent not a political statement but a scientific comment. The data are empirical, and the paper was deliberately devoid of political or polemical statements. Our paper definitely does not address a single specific technical issue, but it does provide a collective peer assessment of a number of specific technical issues (such as the representation of hydrodynamics and greenhouse gases). Indeed, our concern was to substantiate quantitatively the loose assertion of an anonymous APS officer:

There is a considerable presence within the scientific community of people who do not agree with the IPCC conclusion that anthropogenic CO2 emissions are very probably likely to be primarily responsible for global warming.

An estimate based on data can be read in our short comment.

July 09, 2008

The IPCC, Scientific Advice and Advocacy

For some time the leadership of the IPCC have sought to use the institution's authority to promote a specific political agenda in the climate debate. The comments made yesterday by Rajendra Pachauri, head of the IPCC, place the organization in opposition to the G8 leaders' position on climate change:

RK Pachauri, head of the Intergovernmental Panel on Climate Change (IPCC), on Tuesday slammed developed countries for asking India and China to cut greenhouse gas emissions while they themselves had not taken strong steps to cut down pollution.

"India can not be held for any emission control. They (developed countries) should get off the back of India and China," Pachauri told reporters here.

"We are an expanding economy. How can we levy a cap when millions are living with deprivation? To impose any cap (on India) at a time when others (industrialised countries) are saying that they will reach the 1990 level of emission by 2025 is hazardous," Pachauri said.

He said countries like the US and Canada should accept their responsibilities and show leadership in reducing green house gases like carbon dioxide and methane.

Pachauri said millions of Indians do not have access to electricity and their per capita income is much less. At this point, you cannot ask a country to "stop developing".

Who does Dr. Pachauri speak for as head of the "policy neutral" IPCC?

It is as if the head of the CIA (or any other intelligence agency) decided to publicly criticize the government of Iran (or any other country). Such behavior would seriously call into question the ability of the intelligence agency to perform its duties, which depend upon an ability to leave advocacy to other agencies. The United States has a Department of State responsible for international relations. The CIA collects intelligence in support of decision makers. These agencies have different roles in the policy process -- honest broker and issue advocate.

The IPCC seems to want to both gather intelligence and decide what to do based on that intelligence. This is not a recipe for effective expert advice. Leaders in many areas would not stand for this conflation of advice and advocacy, so why does it continue to occur in the climate arena with little comment?

June 20, 2008

What the CCSP Extremes Report Really Says

Yesterday the U.S. Climate Change Science Program released an assessment report titled "Weather and Climate Extremes in a Changing Climate" (PDF) with a focus on the United States. This post discusses some interesting aspects of this report, with an emphasis on what it does not show and does not say. It does not show a clear picture of ever increasing extreme events in the United States. And it does not clearly say why damage has been steadily increasing.

First, let me emphasize that the focus of the report is on changes in extremes in the United States, and not on climate changes more generally. Second, my comments below refer to the report’s discussion of observed trends. I do not discuss predictions of the future, which the report also covers. Third, the report relies a great deal on research that I have been involved in and obviously know quite well. Finally, let me emphasize that anthropogenic climate change is real, and deserving of significant attention to both adaptation and mitigation.

The report contains several remarkable conclusions that somehow did not make it into the official press release.

1. Over the long-term U.S. hurricane landfalls have been declining.

Yes, you read that correctly. From the appendix (p. 132, emphases added):

The final example is a time series of U.S. landfalling hurricanes for 1851-2006 . . . A linear trend was fitted to the full series and also for the following subseries: 1861-2006, 1871-2006, and so on up to 1921-2006. As in preceding examples, the model fitted was ARMA (p,q) with linear trend, with p and q identified by AIC.

For 1871-2006, the optimal model was AR(4), for which the slope was -.00229, standard error .00089, significant at p=.01. For 1881-2006, the optimal model was AR(4), for which the slope was -.00212, standard error .00100, significant at p=.03. For all other cases, the estimated trend was negative, but not statistically significant.
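The report's procedure (a linear trend with autoregressive noise, order chosen by AIC) can be sketched in miniature. The code below is an illustrative simplification, not the report's ARMA machinery: it fits the trend by ordinary least squares, models the residuals as AR(p), and selects p with a rough Gaussian AIC, checked against a made-up declining series.

```python
import numpy as np

def fit_trend_ar(y, max_p=4):
    """Fit y_t = a + b*t + e_t with e_t ~ AR(p); choose p by a rough AIC.

    Returns (trend slope b, selected AR order p).
    """
    n = len(y)
    t = np.arange(n, dtype=float)
    # Step 1: linear trend by ordinary least squares.
    X = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Step 2: choose the AR order of the residuals by Gaussian AIC.
    best_aic, best_p = None, 0
    for p in range(max_p + 1):
        if p == 0:
            sigma2 = resid.var()
            k = 1
        else:
            # Lag matrix: column i holds resid lagged by i+1 steps.
            Z = np.column_stack([resid[p - i - 1 : n - i - 1] for i in range(p)])
            target = resid[p:]
            phi, *_ = np.linalg.lstsq(Z, target, rcond=None)
            sigma2 = np.mean((target - Z @ phi) ** 2)
            k = p + 1
        aic = n * np.log(sigma2) + 2 * k
        if best_aic is None or aic < best_aic:
            best_aic, best_p = aic, p
    return beta[1], best_p

# Synthetic check: a declining series with AR(1) noise should yield a
# negative fitted slope, analogous to the landfall counts in the report.
rng = np.random.default_rng(0)
n = 150
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal()
y = 5.0 - 0.02 * np.arange(n) + noise
slope, order = fit_trend_ar(y)  # slope comes out negative
```

The point of modeling the noise as AR(p) rather than assuming independent errors, as the report does with its ARMA formulation, is that serial correlation inflates the apparent significance of a trend if ignored.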

2. Nationwide there have been no long-term increases in drought.

Yes, you read that correctly. From p. 5:

Averaged over the continental U.S. and southern Canada the most severe droughts occurred in the 1930s and there is no indication of an overall trend in the observational record . . .

3. Despite increases in some measures of precipitation (pp. 46-50, pp. 130-131), there have not been corresponding increases in peak streamflows (high flows above 90th percentile).

From p. 53 (emphasis added):

Lins and Slack (1999, 2005) reported no significant changes in high flow above the 90th percentile. On the other hand, Groisman et al. (2001) showed that for the same gauges, period, and territory, there were statistically significant regional average increases in the uppermost fractions of total streamflow. However, these trends became statistically insignificant after Groisman et al. (2004) updated the analysis to include the years 2000 through 2003, all of which happened to be dry years over most of the eastern United States.

4. There have been no observed changes in the occurrence of tornadoes or thunderstorms

From p. 77:

There is no evidence for a change in the severity of tornadoes and severe thunderstorms, and the large changes in the overall number of reports make it impossible to detect if meteorological changes have occurred.

5. There have been no long-term increases in strong East Coast winter storms (ECWS), called Nor’easters.

From p. 68:

They found a general tendency toward weaker systems over the past few decades, based on a marginally significant (at the p=0.1 level) increase in average storm minimum pressure (not shown). However, their analysis found no statistically significant trends in ECWS frequency for all nor’easters identified in their analysis, specifically for those storms that occurred over the northern portion of the domain (>35°N), or those that traversed full coast (Figure 2.22b, c) during the 46-year period of record used in this study.

6. There are no long-term trends in either heat waves or cold spells, though there are trends within shorter time periods in the overall record.

From p. 39:

Analysis of multi-day very extreme heat and cold episodes in the United States were updated from Kunkel et al. (1999a) for the period 1895-2005. The most notable feature of the pattern of the annual number of extreme heat waves (Figure 2.3a) through time is the high frequency in the 1930s compared to the rest of the years in the 1895-2005 period. This was followed by a decrease to a minimum in the 1960s and 1970s and then an increasing trend since then. There is no trend over the entire period, but a highly statistically significant upward trend since 1960. . . Cold waves show a decline in the first half of the 20th century, then a large spike of events during the mid-1980s, then a decline. The last 10 years have seen a lower number of severe cold waves in the United States than in any other 10-year period since record-keeping began in 1895 . . .

From the excerpts above it should be obvious that there is not a pattern of unprecedented weather extremes in recent years or a long-term secular trend in extreme storms or streamflow. Yet the report shows data in at least three places showing that the damage associated with weather extremes has increased dramatically over the long-term. Here is what the report says on p. 12:

. . . the costs of weather-related disasters in the U.S. have been increasing since 1960, as shown in Figure 1.2. For the world as a whole, "weather-related [insured] losses in recent years have been trending upward much faster than population, inflation, or insurance penetration, and faster than non-weather-related events" (Mills, 2005a). Numerous studies indicate that both the climate and the socioeconomic vulnerability to weather and climate extremes are changing (Brooks and Doswell, 2001; Pielke et al., 2008; Downton et al., 2005), although these factors’ relative contributions to observed increases in disaster costs are subject to debate.

What debate? The report offers not a single reference to justify that there is a debate on this subject. In fact, a major international conference that I helped organize along with Peter Hoeppe of Munich Re came to a consensus position among experts as varied as Indur Goklany and Paul Epstein. Further, I have seen no studies that counter the research I have been involved in on trends in hurricane and flood damage in relation to climate and societal change. Not one. That probably explains the lack of citations.

They reference Mills 2005a, but fail to acknowledge my comment published in Science on Mills 2005a (found here in PDF) and yet are able to fit in a reference to Mills 2005b, titled "Response to Pielke" (responding to my comment). How selective. I critiqued Mills 2005a on this blog when it came out, writing some strong things: "shoddy science, bad peer review and a failure of the science community to demand high standards is not the best recipe for helping science to contribute effectively to policy."

The CCSP report continues:

For example, it is not easy to quantify the extent to which increases in coastal building damage is due to increasing wealth and population growth in vulnerable locations versus an increase in storm intensity. Some authors (e.g., Pielke et al., 2008) divide damage costs by a wealth factor in order to "normalize" the damage costs. However, other factors such as changes in building codes, emergency response, warning systems, etc. also need to be taken into account.

This is an odd editorial evaluation and dismissal of our work. (Based on what? Again, not a single citation to the literature.) In fact, the study that I was lead author on and that is referenced (PDF) shows quantitatively that our normalized damage record matches the trend in the landfall behavior of storms, providing clear evidence that we have indeed appropriately adjusted for the effects of societal change in the historical record of damages.
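For readers unfamiliar with what "normalization" means here: the basic idea is to scale a historical loss to current conditions by the growth in prices, wealth, and exposed population since the event. The sketch below illustrates that arithmetic only; the multipliers are invented for the example, and the actual normalization studies use region-specific data series.

```python
# Illustrative loss "normalization": express a historical damage figure in
# current-year terms by scaling for inflation, real per-capita wealth, and
# population of the affected area. All numbers below are hypothetical.

def normalize_loss(loss, infl_ratio, wealth_ratio, pop_ratio):
    """Return the historical loss scaled to current-year conditions."""
    return loss * infl_ratio * wealth_ratio * pop_ratio

# A $1 billion loss in 1950, scaled to a hypothetical 2005 baseline:
loss_2005 = normalize_loss(
    loss=1.0e9,
    infl_ratio=8.0,    # price level, 2005 vs. 1950 (hypothetical)
    wealth_ratio=2.5,  # real wealth per capita ratio (hypothetical)
    pop_ratio=3.0,     # affected-area population ratio (hypothetical)
)
# loss_2005 is $60 billion: the same storm striking today's coastline
# would cause far larger losses even with no change in storm behavior.
```

This is why a flat normalized-loss record alongside rising raw losses is evidence that societal change, not storm change, drives the damage trend.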

The CCSP report then offers this interesting claim, again with the apparent intention of dismissing our work:

At this time, there is no universally accepted approach to normalizing damage costs (Guha-Sapir et al., 2004).

The reference used to support this claim can be found here in PDF. Perhaps surprisingly, given how it is used, Guha-Sapir et al. contains absolutely no discussion of normalization methodologies, but instead, a general discussion of damage estimation. It is therefore improperly cited in support of this claim. However, Guha-Sapir et al. 2004 does say the following on p. 53:

Are natural hazards increasing? Probably not significantly. But the number of people vulnerable and affected by disasters is definitely on the increase.

Sound familiar?

In closing, the CCSP report is notable because of what it does not show and what it does not say. It does not show a clear picture of ever increasing extreme events in the United States. And it does not clearly say why damage has been steadily increasing.

Overall, this is not a good showing by the CCSP.

June 12, 2008

Why Costly Carbon is a House of Cards

How can the world achieve economic growth while at the same time decarbonizing the global economy?

This question is important because there is apt to be little public or political support for mitigation policies that increase the costs of energy in ways that are felt in reduced growth. Consider this description of reactions around the world to the recent increasing costs of fuel:

Concerns were growing last night over a summer of coordinated European fuel protests after tens of thousands of Spanish truckers blocked roads and the French border, sparking similar action in Portugal and France, while unions across Europe prepared fresh action over the rising price of petrol and diesel. . .

Protests at rising fuel prices are not confined to Europe. A succession of developing countries have provoked public outcry by ordering fuel price increases. Yesterday Indian police forcibly dispersed hundreds of protesters in Kashmir who were angry at a 10% rise introduced last week. Protests appeared likely to spread to neighbouring Nepal after its government yesterday announced a 25% rise in fuel prices. Truckers in South Korea have vowed strike action over the high cost of diesel. Taiwan, Sri Lanka and Indonesia have all raised pump prices. Malaysia's decision last week to increase prices generated such public fury that the government moved yesterday to trim ministers' allowances to appease the public.

Advocates for a response to climate change based on increasing the costs of carbon-based energy skate around the fact that people react very negatively to higher prices by promising that action won’t really cost that much. For instance, our frequent debating partner Joe Romm says of a recent IEA report (emphasis added):

. . . cutting global emissions in half by 2050 is not costly. In fact, the total shift in investment needed to stabilize at 450 ppm is only about 1.1% of GDP per year, and that is not a "cost" or hit to GDP, because much of that investment goes towards saving expensive fuel.

And Joe tells us that even these "not costly" costs are "overestimated."

If action on climate change is indeed "not costly," then it would logically follow that the only reasons for anyone to question a strategy based on increasing the costs of energy are complete ignorance and/or a crass willingness to destroy the planet for private gain. Indeed, accusations of "denial" and "delay" are now staples of any debate over climate policy.

There is another view: specifically, that the current range of actions at the forefront of the climate debate, focused on putting a price on carbon in order to motivate action, is misguided and cannot succeed. The argument goes as follows: in order for action to occur, costs must be significant enough to change incentives and thus behavior. Without the sugarcoating, pricing carbon (whether via cap-and-trade or a direct tax) is designed to be costly. In this basic principle lies the seed of failure. Policy makers will do (and have done) everything they can to avoid imposing higher costs of energy on their constituents, via dodgy offsets, overly generous allowances, safety valves, hot air, and whatever other gimmicks they can come up with.

Analysts and advocates allow this house of cards to stand: when trying to sell higher costs of energy to a skeptical public, they provide analyses that support a conclusion that acting to cut future emissions is "not costly."

The "not costly" argument is based on underestimating the future growth of emissions, so that the resulting challenge does not appear so large. We have discussed such scenarios on many occasions here and explored their implications in a commentary in Nature (PDF).

One widely-known example is the stabilization wedge analysis of Stephen Pacala and Robert Socolow (PDF). The stabilization wedge analysis concluded that the challenge of stabilizing emissions was not so challenging:

Humanity already possesses the fundamental scientific, technical, and industrial know-how to solve the carbon and climate problem for the next half-century. A portfolio of technologies now exists to meet the world’s energy needs over the next 50 years and limit atmospheric CO2 to a trajectory that avoids a doubling of the preindustrial concentration. . . But it is important not to become beguiled by the possibility of revolutionary technology. Humanity can solve the carbon and climate problem in the first half of this century simply by scaling up what we already know how to do.

In a recent interview the lead author of that paper, Pacala provided a candid and eye-opening explanation of the reason why they wrote the paper (emphases added):

The purpose of the stabilization wedges paper was narrow and simple – we wanted to stop the Bush administration from what we saw as a strategy to stall action on global warming by claiming that we lacked the technology to tackle it. The Secretary of Energy at the time used to give a speech saying that we needed a discovery as fundamental as the discovery of electricity by Faraday in the 19th century.

We also wanted to stop the group of scientists that were writing what I thought were grant proposals masquerading as energy assessments. There was one famous paper published in Science [Hoffert et al. 2002] that went down the list [of available technologies] fighting them one by one but never asked "what if we put them all together?" It was an analysis whose purpose was to show we lacked the technology, with a call at the end for blue sky research.

I saw it as an unhealthy collusion between the scientific community who believed that there was a serious problem and a political movement that didn’t. I wanted that to stop and the paper for me was surprisingly effective at doing that. I’m really happy with how it came out – I wouldn’t change a thing.

That doesn’t mean that there aren’t things wrong with it and that history won’t prove it false. It would be astonishing if it weren’t false in many ways, but what we said was accurate at the time.

So let's take a second to reflect on what you just read. Pacala is claiming that he wrote a paper to serve a political purpose, and he admits that history may very well prove its analysis to be "false." But he judges the paper successful not because of its analytical soundness, but because it served its political function by severing the relationship between a certain group of scientific experts and decision makers whose views he opposed.

Why is this problematic? NYU’s Marty Hoffert has explained that the Pacala and Socolow paper was simply based on flawed assumptions. Repeating different analyses with similar assumptions doesn’t make the resulting conclusions any more correct. Hoffert says (emphases added):

The problem with the formulation of Pacala and Socolow in their Science paper, and the later paper by Socolow in Scientific American issue that you cite, is that they both indicate that seven "wedges" of carbon emission reducing energy technology (or behavior) -- each of which creates a growing decline in carbon emissions relative to a baseline scenario equal to 25 billion tonnes less carbon over fifty years -- is enough to hold emissions constant over that period. . . .

A table is presented in the wedge papers of 15 "existing technology" wedges, leading virtually all readers to conclude the carbon and climate problem is soluble with near-term technology; and so, by implication, a major ramp-up of research and development investments in alternate energy technology like the "Apollo-like" R&D Program that we call for, is unnecessary. . . .

The actual number of wedges to hold carbon dioxide below 450 ppm is about 18, not 7, for Pacala-Socolow scenario assumptions, as Rob well knows; in which case we're much further from having the technology we need. The problem is actually much worse than that, since the number of emission-reducing wedges needed to avoid greater than two degree Celsius warming explodes after the mid-century mark if world GDP continues to grow three percent per year under a business-as-usual scenario.

The figure below is from a follow-on paper by Socolow in 2006 (PDF) and clearly indicates the need for 11 additional wedges of emissions reductions from 2005 to 2055. These are called "virtual wedges," which is ironic, because their existence is very real and in fact necessary for the stabilization of emissions to actually occur. (Cutting emissions by half would require another 4 wedges, or 22 total.)

If Pacala and Socolow admit that we need 18 wedges to stabilize emissions, and 22 wedges to cut them by half, and this is based on a rosy assumption of only 1.5% growth in emissions to 2055, then why would anyone believe that we need fewer? If it is conceivable that emissions might grow faster than 1.5% per year, then we will need even more than the 22 wedges. Perhaps many more. But analysts seeking to impose a price on carbon won't tell you this. Instead, some will resort to demagoguery, and others will simply repeat over and over again the consequences of assuming rosy scenarios. None of this will make the mitigation challenge any easier. But as Pacala says in the excerpt above, such strategies may keep more sound analyses out of the debate.
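The wedge arithmetic above can be checked with a quick back-of-envelope sketch. The numbers below (current emissions of roughly 8 GtC/yr, a 1.5% annual growth rate, a 50-year horizon, one wedge delivering 1 GtC/yr of reduction by year 50) are illustrative assumptions, not the authors' exact model, and this count excludes the 11 "virtual wedges" measured against a frozen-technology baseline:

```python
# Back-of-envelope wedge count (illustrative assumptions, not Pacala-Socolow's model).
E0 = 8.0          # assumed current global emissions, GtC/yr
growth = 0.015    # assumed business-as-usual growth rate per year
years = 50        # planning horizon
wedge = 1.0       # one wedge delivers 1 GtC/yr of reduction at year 50

bau_2055 = E0 * (1 + growth) ** years   # BAU emissions at the end of the horizon
gap_flat = bau_2055 - E0                # reduction needed just to hold emissions flat
gap_half = bau_2055 - E0 / 2            # reduction needed to cut emissions in half

print(f"BAU emissions after {years} yr: {bau_2055:.1f} GtC/yr")
print(f"Wedges to hold flat:   {gap_flat / wedge:.0f}")
print(f"Wedges to cut in half: {gap_half / wedge:.0f}")
```

Even this rough count is quite sensitive to the growth assumption, which is the point above: at growth faster than 1.5% per year, the required number of wedges climbs well past these figures.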

Policies based on the argument that putting a price on carbon will be "not costly" are a house of cards, resting on a range of assumptions that could easily be judged very optimistic. Look around: the minute that energy prices rise high enough to be felt by the public, action will indeed occur, but it will not be the action desired by the climate intelligentsia. It will be demands for lower-priced energy. And policy makers will listen to these demands and respond. Climate policy analysts should listen as well, because there will be no tricking of the public with rosy scenarios built on optimistic assumptions.

Virtual Triangle.png

June 02, 2008

Visually Pleasing Temperature Adjustments

This is a follow up to our continuing discussion of the possible implications of changes to mid-century global average temperatures for conclusions reached by the IPCC AR4, and how scientists react to such changes.

Over at Real Climate they pointed to the following figure as representing "a good first guess at what the change will look like" and asserted that it would have no meaningful implications for the trends in temperature rise since mid-century presented by the IPCC.

independent graph.jpg

Since there was some disagreement here in the comments of an earlier post about how to interpret this graph, I decided to replicate the smoothing myself and then see whether I could exactly reproduce the graph from the Independent. The data are available here.

The first thing to note is that the Independent graph has a major error, which Real Climate did not point out: it says that the smooth curve represents a 5-year average when in fact it represents a 21-point binomial filter. The difference in smoothing is critically important for interpreting what the graph actually says, and the error confused me and at least one climate scientist writing in our comments.
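For readers who want to see the difference concretely, here is a minimal sketch of the two smoothers. The key point is that a 21-point binomial filter spreads each year's value across roughly a decade on either side, unlike a 5-year running mean:

```python
import numpy as np
from math import comb

# 21-point binomial filter: weights are C(20, k) for k = 0..20, normalized to sum to 1.
w_binom = np.array([comb(20, k) for k in range(21)], dtype=float)
w_binom /= w_binom.sum()

# The 5-year running mean the Independent's caption (incorrectly) describes.
w_5yr = np.full(5, 1.0 / 5.0)

def smooth(series, weights):
    """Apply a symmetric moving filter; endpoints where the window
    runs off the data are dropped."""
    return np.convolve(series, weights, mode="valid")

# A single-year spike illustrates the difference: the binomial filter smears it
# across 21 years of output, the running mean across only 5.
spike = np.zeros(41)
spike[20] = 1.0
print(np.count_nonzero(smooth(spike, w_binom) > 1e-9))  # 21 years affected
print(np.count_nonzero(smooth(spike, w_5yr) > 1e-9))    # 5 years affected
```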

Here is a replication of the 21-point smoothing generated from the annual values, which will allow for my effort to replicate the graph from the Independent.

smooth seas.jpg

So far so good. But replication of the adjusted curve is a bit tricky, as changing the data for any one year has implications for the shape of the curve 10 years before and 10 years after that year. Upon trying to create an exact replication of the graph from The Independent, I realized right away that there was a major problem: adding any increment where Thompson et al. said the adjustment should begin (in 1945) instantly raised the adjusted curve above the unadjusted curve. And as you can see in the Independent graph, at no point does the adjusted curve rise above the unadjusted curve, much less by the significant amount implied by Thompson et al.

So right away it seems clear that we are not trying to make an adjustment that actually draws on the guidance from Thompson et al. This might seem odd, since the graph is supposed to show a proposed "guess" at the implications of Thompson et al. In any event, with that constraint removed I simply tried to get the best visual fit to the Independent graph that I could. And here is what I came up with.


Now, given the complicated smoothing routine, there are certainly any number of combinations of weird adjustments that would result in a very similar looking curve. (And if anyone from CRU is reading and wants to share with us exactly what you used, and the basis for it, please do so.) The adjustments I used are as follows:

1945 0
1946 0
1947 0
1948 0.1
1949 0.25
1950 0.18
1951 0.18
1952 0.18
1953 0.18
1954 0.16
1955 0.16
1956 0
1957 0
1958 0
1959 0
1960 0

Oh yeah, the effect of these visually pleasing adjustments on the IPCC trend from 1950? Not that it actually means anything given the obvious incorrectness, but it would reduce the trend by about 15%.
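For anyone who wants to check the trend effect themselves, here is a sketch, assuming you have the annual global anomalies in hand; the adjustment table is the one listed above:

```python
import numpy as np

# Adjustments from the table above (years not listed get zero).
adjustments = {1948: 0.10, 1949: 0.25, 1950: 0.18, 1951: 0.18, 1952: 0.18,
               1953: 0.18, 1954: 0.16, 1955: 0.16}

def decadal_trend(anomalies, start=1950):
    """Least-squares trend in deg C per decade from `start` onward.
    `anomalies` maps year -> annual global anomaly in deg C."""
    yrs = sorted(y for y in anomalies if y >= start)
    slope = np.polyfit(yrs, [anomalies[y] for y in yrs], 1)[0]  # deg C per year
    return 10.0 * slope

def apply_adjustments(anomalies):
    return {y: v + adjustments.get(y, 0.0) for y, v in anomalies.items()}

# Percent reduction in the post-1950 trend:
#   100 * (1 - decadal_trend(apply_adjustments(a)) / decadal_trend(a))
# Raising the early-1950s values pulls the fitted slope down, which is why
# the adjustments reduce the trend.
```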

June 01, 2008

Real Climate on Meaningless Temperature Adjustments

[UPDATE] Real Climate did not like the figure shown below, so I responded to them with the following request, submitted as a comment on their site:

Hi Gavin-

I’d be happy to work from a proposed adjustment directly from you, rather than rely on the one proposed by Steve McIntyre or the one you point to from The Independent.

Thompson et al. write: "The new adjustments are likely to have a substantial impact on the historical record of global-mean surface temperatures through the middle part of the twentieth century."

It is hard to see how temperatures around 1950 can change "substantially" with no effect on trends since 1950, but maybe you have a different view. Let's hear it. Give me some better numbers and I'll use them.

Their response was to dodge the request:

Response: Nick Rayner, Liz Kent, Phil Jones etc. are perfectly capable of working it out and I’d suggest deferring to their experience in these matters. Whatever they come up with will be a considered and reasonable approach that will include the buoy and drifter issues as well as the post WW-II canvas bucket transition. Second guessing how that will work out in the absence of any actual knowledge would be foolish. - gavin

But doesn't speculating that no changes will be needed to the IPCC trend estimates, or pointing to a graph in The Independent as likely being correct, count as "second guessing"?

Similarly, in the comments below climate scientist James Annan criticized the graph in this post and when asked to provide an alternative adjustment, he declined to do so.

If these guys know what is "wrong" then they must have an idea about what is "right".

Real Climate writes an entire post responding to Steve McIntyre's recent discussions of buckets and sea surface temperatures, explaining why the issue doesn't really matter, but for some weird reason they can't seem to mention him by name or provide a link to what they are in fact responding to. (If the corrections don't matter, then one wonders, why do them? Thompson et al. seemed to think that the issue matters.)

Real Climate does seem to have mastered a passive-voice writing style, however. Since they did have the courtesy to link here before calling me "uninformed" (in deniable passive voice, of course), I thought a short response was in order.

Real Climate did not like our use of a proposed correction suggested by He Who Will Not Be Named, so Real Climate proposed another correction, based on a graphic printed in The Independent. Never mind that this correction doesn't seem to jibe with the one proposed by Thompson et al.; since we used the one suggested by Mr. Not-To-Be-Named, let's use Real Climate's as well and see what difference it makes to temperature trends since 1950. Based on what Real Climate asserts (but oddly does not show with numbers), you'd think that their proposed adjustment results in absolutely no change to mid-20th century trends, and indeed that anyone suggesting otherwise is an idiot or of ill will. Well, let's see what the numbers show.

The graph below shows a first guess at the effects of the Real Climate adjustments (based on a decreasing adjustment from 1950-60) based on the graphic in The Independent.

Real Climate Adjustment.jpg

What difference to trends since 1950 does it make? Instead of the roughly 50% reduction in the 1950-2007 trend from the first rough guess from you-know-who, Real Climate's first guess results in a reduction of the trend by about 30%. A 30% reduction in the IPCC's estimate of temperature trends since 1950 would be just as important as a 50% reduction, and questions about its significance would seem appropriate to ask. But perhaps a 30% reduction in the trend would be viewed as being "consistent with" the original trend ;-)

Try again Real Climate. And next time, his name is STEVE MCINTYRE -- and his blog is called CLIMATE AUDIT. There is a lot of science and civil discussion there, with a healthy mix of assorted experts and a range of ordinary folks. Questioning scientific conclusions is a lot healthier for science than rote defense, but we all learned that in grad school, didn't we?

May 29, 2008

Does the IPCC’s Main Conclusion Need to be Revisited?

Yesterday Nature published a paper by Thompson et al. which argues that a change in the observational techniques for taking the temperatures of the oceans led to a cold bias in temperatures beginning in the 1940s. The need for the adjustment raises an interesting, and certainly sensitive, question related to the sociology and politics of science: Does the IPCC's main conclusion need to be revisited?

The Nature paper states of the effects of the bias on temperature measurements:

The adjustments immediately after 1945 are expected to be as large as those made to the pre-war data (~0.3 °C; Fig. 4), and smaller adjustments are likely to be required in SSTs through at least the mid-1960s.

Thompson et al. do not provide a time series estimate on the effects of the bias on the global temperature record, but Steve McIntyre, who is building an impressive track record of analyses outside the peer-review system, discussed this topic on his weblog long before the paper appeared in Nature, and has proposed an adjustment to the temperature record (based on discussions with participants on his blog). Steve’s adjustment is based on assuming:

that 75% of all measurements from 1942-1945 were done by engine inlets, falling back to business as usual 10% in 1946 where it remained until 1970 when we have a measurement point - 90% of measurements in 1970 were still being made by buckets as indicated by the information in Kent et al 2007- and that the 90% phased down to 0 in 2000 linearly.
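The measurement-mix schedule in that quote can be sketched as follows. The 0.3 °C bucket bias is an assumed round number for illustration (taken from the Thompson et al. magnitude quoted above), not McIntyre's fitted value:

```python
# Measurement-mix schedule from the quote above (illustrative sketch).
def bucket_fraction(year):
    """Fraction of SST measurements made by (cold-biased) buckets."""
    if 1942 <= year <= 1945:
        return 0.25                          # 75% engine inlets during the war
    if 1946 <= year <= 1970:
        return 0.90                          # back to ~90% buckets
    if 1970 < year < 2000:
        return 0.90 * (2000 - year) / 30.0   # linear phase-down to zero by 2000
    if year >= 2000:
        return 0.0
    return None                              # pre-1942 mix not specified here

BUCKET_BIAS = 0.3  # deg C cold bias per bucket measurement, assumed for illustration

def sst_adjustment(year):
    """Implied upward SST adjustment, proportional to the bucket fraction."""
    f = bucket_fraction(year)
    return None if f is None else BUCKET_BIAS * f
```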

The effects of McIntyre’s proposed adjustments (on the UKMET global temperature record) are shown in the following figure.


Other adjustments are certainly plausible, and will certainly be proposed and debated in the literature and on blogs (McIntyre discusses possible implications of the adjustments in this post.). But given how much research has been based on the existing global temperature record, it seems likely that many studies will be revisited in light of the Nature paper. In a comment in Nature that accompanies Thompson et al., Forest and Reynolds suggest:

The SST adjustment around 1945 is likely to have far-reaching implications for modelling in this period.

In the figure above, the trend in the unadjusted data (1950-present) is 0.11 Deg C per decade (slightly lower than reported by IPCC AR4, due to the recent downturn), and after the adjustments are applied the trend drops by just about half, to 0.06 Deg C per decade.

And this brings us to the IPCC. In 2007 the IPCC (PDF) concluded that:

Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations

I interpret "mid-20th century" to be 1950, and "most" to be >50%. This means that the 2007 IPCC attributed more than 0.06 Deg per decade of the temperature increase since 1950 to increasing greenhouse gases. But we now know that the trend since 1950 included a spurious factor due to observational discontinuities, which reduces the entire trend to 0.06. So logically, if the proposed adjustment is in the ballpark, one of the following statements must be true:

A. The entire trend of 0.06 per decade since 1950 should now be attributed to greenhouse gases (the balance of 0.06 per decade)

B. Only >0.03 per decade can be attributed to greenhouse gases (the "most" from the original statement)

C. The proposed adjustment is wildly off (I’d welcome other suggestions for an adjustment)

D. The IPCC statement needs to be fundamentally recast

So which is it?
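The arithmetic behind those four options is simple enough to lay out explicitly (0.055 is half the unadjusted trend, which the post rounds to 0.06):

```python
orig_trend = 0.11                 # deg C/decade since 1950, unadjusted
adj_trend = 0.06                  # deg C/decade after the proposed adjustment
ipcc_ghg_min = 0.5 * orig_trend   # "most" of the observed increase, i.e. > 0.055

# If the adjustment is in the ballpark, the GHG-attributed share (> 0.055)
# nearly equals the entire adjusted trend (0.06) -- option A -- unless "most"
# is re-read against the adjusted trend (> 0.03, option B), the adjustment is
# wildly off (option C), or the statement is recast (option D).
print(ipcc_ghg_min, adj_trend)
```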

PS. To ensure that this blog post is not misinterpreted, note that none of the mitigation or adaptation policies that I have advocated are called into question based on the answer that one gives to the question posed in the title.

May 23, 2008

Homework Assignment: Solve if you Dare


The graph above shows three trend lines.

BLUE: Temperature Trend prediction from the 1990 IPCC report
RED: Temperature Trend prediction from the 2007 IPCC report
GREEN: Observed Trend for 2001-2007 (from average of four obs datasets)

All data are as described in this correspondence (PDF).

Your assignment:

Which IPCC prediction is the trend observed 2001-2007 more consistent with and why? Show your work!

You are free to bring in whatever information and use whatever analysis that you want.

May 22, 2008

Nature Letters on PWG

The 8 May 2008 issue of Nature published four letters in response to the Pielke, Wigley, and Green commentary on IPCC scenarios (PDF). This post provides a few excerpts from, and reactions to, these letters.

Vaclav Smil of the University of Manitoba writes:

I largely agree with the overall conclusion of Pielke et al. that the IPCC assessment is overly optimistic, but I fear that the situation is even worse than the authors imply.

Smil is realistic about the challenge of mitigation:

The speed of transition from a predominantly fossil-fuelled world to conversions of renewable flows is being grossly overestimated: all energy transitions are multigenerational affairs with their complex infrastructural and learning needs. Their progress cannot substantially be accelerated either by wishful thinking or by government ministers’ fiats.

But pessimistic about action:

Consequently, the rise of atmospheric CO2 above 450 parts per million can be prevented only by an unprecedented (in both severity and duration) depression of the global economy, or by voluntarily adopted and strictly observed limits on absolute energy use. The first is highly probable; the second would be a sapient action, but apparently not for this species.

Christopher Field, of Stanford University, agrees with our analysis and its implications:

The trends towards increased carbon and energy intensity may or may not continue. In either case, we need new technologies and strategies for both endogenous and policy-driven intensity improvements. Given recent trends, it is hard to see how, without a massive increase in investment, the requisite number of relevant technologies will be mature and available when we need them.

Richard Richels, of the Electric Power Research Institute, Richard Tol, of the Economic and Social Research Institute (Ireland), and Gary Yohe, of Wesleyan University support our analysis and our interpretation of its significance:

Pielke et al. show that the 2000 Special Report on Emissions Scenarios (SRES) reflects unrealistic progress on both the supply and demand sides of the energy sector. These unduly optimistic baselines cause serious underestimation of the costs of policy-induced mitigation required to achieve a given stabilization level.

This is well known among experts but perhaps not to the public, which may explain why some politicians overstate the impact of their (plans for) climate policy, and why others argue incorrectly that ‘available’ off-the-shelf technologies can reduce emissions at very little or no cost.

They also make an absolutely critical point about climate policy – it is necessarily incremental and adaptive:

The focus of policy analysis should not be on what to do over the next 100 years, but on what to do today in the face of many important long-term uncertainties. The minute details of any particular scenario for 2100 are then not that important. This can be achieved through an iterative risk management approach in which uncertain long-term goals are used to develop short-term emission targets. As new information arises, emission scenarios, long-term goals and short-term targets are adjusted as necessary. Analyses would be conducted periodically (every 5–10 years), making it easier to distinguish autonomous trends from policy-induced developments — a major concern of Pielke and colleagues. If actual emissions are carefully monitored and analysed, the true efficacy and costs of past policies would be revealed and estimates of the impact of future policy interventions would be less uncertain.

Such an approach would incorporate recent actions by developed and developing countries. In an ‘act then learn’ framework, climate policy is altered in response to how businesses change their behavior in reaction to existing climate policies and in anticipation of future ones. This differs from SRES-like analyses, which ignore the dynamic nature of the decision process and opportunities for mid-course corrections as they compare scenarios without policy with global, century-long plans.

Ottmar Edenhofer, Bill Hare, Brigitte Knopf, and Gunnar Luderer of the Potsdam Institute for Climate Impact Research (Germany) suggest that the range of rates for the future decarbonization of energy in the IPCC reports is in fact appropriate:

Over the past 30 years, the decrease in energy intensity has been 1.1% a year — well above the 0.6% a year assumed in 75% of the energy scenarios assessed by the IPCC.

Developments in China since 2000 do raise concerns that the rate of decrease in energy and carbon intensity could slow down, or even be reversed. However, similar short-term slow-downs in technical progress have occurred in the past, only for periods of more rapid development to compensate for them. India, for example, does not show the decreasing trend in energy efficiency seen in China.

The claim that 75% of the scenarios assessed by the IPCC assume a 0.6% per year decrease in energy intensity is difficult to interpret. But here is what the IPCC itself says on this (WGIII Ch. 3, p. 183 PDF):

In all scenarios, energy intensity improves significantly across the century – with a mean annual intensity improvement of 1%. The 90% range of the annual average intensity improvement is between 0.5% and 1.9% (which is fairly consistent with historic variation in this factor). Actually, this range implies a difference in total energy consumption in 2100 of more than 300% – indicating the importance of the uncertainty associated with this ratio.
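The compounding arithmetic in that quoted range is easy to verify; a quick sketch at fixed GDP:

```python
# Check the quoted claim: 0.5%/yr vs 1.9%/yr intensity improvement, compounded
# over a century, implies a difference in total energy use of "more than 300%".
low, high, years = 0.005, 0.019, 100

intensity_low = (1 - low) ** years    # intensity remaining after 100 yr at 0.5%/yr
intensity_high = (1 - high) ** years  # intensity remaining after 100 yr at 1.9%/yr

# At fixed GDP, energy use scales with intensity, so the ratio of the two paths:
ratio = intensity_low / intensity_high
print(f"{ratio:.1f}x")  # roughly 4x -- consistent with the IPCC's "more than 300%"
```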

So if only 5% of the scenarios fall below a 0.5% annual improvement, it is hard to understand what the authors mean by "0.6% a year assumed in 75% of the energy scenarios assessed by the IPCC." Contrary to the other letters, Edenhofer et al. conclude:

The IPCC’s main policy conclusions stand: present technologies can stop the rise in global emissions.

The final letter is from Joseph Romm, of the Center for American Progress. He chooses to parse what is meant by the term "climate policies" in the vernacular of the IPCC:

They criticize the IPCC for implicitly assuming that the challenge of reducing future emissions will mostly be met without climate policies. But the IPCC’s Special Report on Emissions Scenarios makes clear that, although the scenarios don’t technically have climate policies, they can and do have energy efficiency and decarbonization policies, which amount to the same thing

It is not clear why this semantic point matters, as it has no implications for either our technical analysis or its interpretation. Of course, the IPCC defined the notion of "climate policies" quite precisely for a reason: the energy-efficiency and decarbonization policies that the IPCC assumes to occur in its scenarios in the absence of climate policy are, by definition, implemented with no effort focused on the stabilization of greenhouse gases in the atmosphere (no cap and trade, no Kyoto, no carbon tax, etc.). These policies, whatever they are, would happen spontaneously, without any concern for climate. This assumption was explicit in the terms of reference for the IPCC SRES exercise, precisely so that the marginal costs and benefits of climate-specific policies could be clearly identified.

Romm then simply repeats the conclusions of the IPCC:

the IPCC report makes clear that we have the necessary technologies, or soon will, and focuses on creating the conditions for rapid technological deployment

Interestingly, with a letter in Nature, Romm, who has been a strong critic of our paper on his blog, had a perfect opportunity to explain what might be incorrect in our technical analysis, and did not. We can assume that he was unable to find any flaws and thus chose to focus on the implications of the analysis, which he does not engage, choosing simply to restate a position he held before our paper came out.

As can be seen clearly in the letters, there is not a consensus among energy policy experts on the role of technological innovation in efforts to mitigate climate change. This is a debate which has only just begun, and for which there are a range of legitimate and informed points of view, despite the efforts of some to demagogue anyone who disagrees with their views.

World Bank and UK Government on Climate Change Implications of Development


The World Bank and UK government issued a report today titled, "Strategies For Sustained Growth And Inclusive Development." Here is what the report says about the implications for climate change of development in the developing world (p. 86), something that the report calls absolutely necessary:

Clearly the advanced countries are at per capita [carbon dioxide] output levels that, if replicated by the developing world, would be dramatically in excess of safe levels. World carbon emissions are now at about twice the safe level, meaning that if the current output is sustained, the CO2 stock in the atmosphere will rise above safe levels in the next 40 years. The figures for a range of countries, including developing countries, are shown in Figure 9.

If the developing countries did not grow, then safe levels of emissions would be achieved by reducing advanced country emissions by a factor of two or a little more. But with the growth of the developing countries, the incremental emissions are very large because of the size of the populations. To take the extreme case, if the whole world grew to advanced country incomes and converged on the German levels of emissions per capita, then to be safe from a warming standpoint, emissions per capita would need to decline by a factor of four. Reductions of this magnitude with existing technology are either not possible, or so costly as to be certain of slowing global and developing country growth.

What these calculations make clear is that technology is the key to accommodating developing country and global growth. We need to lower the costs of mitigation. Put differently, we need to build more economic value on top of a limited energy base. For that we need new knowledge.

What actions does the report call for (p. 90)?

The Commission recommends the following nine steps. Taken together, they will cut emissions, thereby staving off some of the worst dangers of global warming. They will reveal more about the cost of cutting emissions, and they will encourage new technologies that reduce these costs. These steps are also fair.

1. The advanced economies should cut emissions first and they should do so aggressively. This will slow the accumulation of carbon in the atmosphere. It will also reveal a great deal about how much it truly costs to cut carbon emissions.

2. More generous subsidies should be paid to energy-efficient technologies and carbon reduction technologies, which will reduce the cost of mitigation.

3. Advanced economies should strive to put a price on carbon.

4. The task of monitoring emissions cuts and other mitigation measures should be assigned to an international institution, which should begin work as soon as possible.

5. Developing countries, while resisting long-term target-setting, should offer to cut carbon at home if other countries are willing to pay for it. Such collaborations take place through the Clean Development Mechanism provisions in the Kyoto protocol. Rich countries can meet their Kyoto commitments by paying for carbon cuts in poorer countries.

6. Developing countries should promise to remove fuel subsidies, over a decent interval. These subsidies encourage pollution and weigh heavily on government budgets.

7. All countries should accept the dual criteria of efficiency and fairness in carbon mitigation. In particular, richer countries, at or near high-income levels, should accept that they will each have the same emissions entitlements per head as other countries.

8. Developing countries should educate their citizens about global warming. Awareness is already growing, bringing about changes in values and behavior.

9. International negotiations should concentrate on agreeing to carbon cuts for more advanced economies, to be achieved 10 or 15 years hence.
These mitigation efforts should be designed so as to reveal the true costs of mitigation.

Interestingly, the report calls for developing countries to "resist long-term target setting" while at the same time acknowledging that the "true costs of mitigation" are not yet known. The report shows that there is a wide range of views on what sort of mitigation actions make sense in the debate over climate policy, a range that cannot be captured by the facile "denialist-alarmist" dichotomy that some observers would like to enforce on the debate. One oversight is that the report does not address the issue of adaptation.

May 21, 2008

An *Inconsistent With* Spotted, and Defended

Readers following recent threads know that I've been looking for instances where scientists make claims that some observations are "inconsistent with" the results from climate models. The reason for such a search is that it is all too easy for modelers to claim that anything and everything under the sun is "consistent with" their predictions, sometimes to avoid the perception of a loss of credibility in the political battle over climate change.

I am happy to report that claims of "inconsistent with" do exist. Here is an example from a paper just out by Knutson et al. in Nature Geoscience:

Our results using the ensemble-mean global model projections (Fig. 4) are inconsistent with the notion of large, upward trends in tropical storm and hurricane frequency over the twentieth century, driven by greenhouse warming.

The climate modelers at Real Climate apparently don't like the phrase "inconsistent with" in the context of models and try to airbrush it away when they write of Knutson et al.:

. . .we know that (i) the warming [of the oceans] is likely in large part anthropogenic, and (ii) that the recent increases in TC frequency are related to that warming. It hardly seems a leap of faith to put two-and-two together and conclude that there is likely a relationship between anthropogenic warming and increased Atlantic TC activity.

Knutson et al. respond in the comments that this in fact is not how to interpret their paper, and -- kudos to them -- take strong, public issue with the weaselly words implying a connection that they don't show (emphasis added in the below, and I've copied the whole comment for the entire context):

Mike [Mann],

Statement (i), that "the warming [of the tropical Atlantic Ocean] is likely in large part anthropogenic." is reasonable, taking "anthropogenic" to mean "greenhouse gas", given the work of Santer et al (2006, PNAS), Knutson et al (2006, J. Clim.), and Gillett et al (2008, G.R.L.). To quote from Gillett et al:

…our results indicate that greenhouse gas increases are indeed likely the dominant cause of [tropical Atlantic] warming…

However, statement (ii), that "the recent increases in [Atlantic] TC (tropical cyclone) frequency are related to that warming" is vague – with "related to" allowing an interpretation that includes anything from a negative relationship, to a minor contribution, to local SST warming being the dominant dynamical control on TC frequency increase. Some might interpret "related to" to mean "are dominantly controlled by", and we think the evidence does not justify such a strong statement. In particular, the results of Knutson et al (2008) do not support such an attribution statement, if one focuses on the greenhouse gas part of the anthropogenic signal. Quoting from page 5 of the paper:

Our results using the ensemble-mean global model projections (Fig. 4) are inconsistent [emphasis added] with the notion of large, upward trends in tropical storm and hurricane frequency over the twentieth century, driven by greenhouse warming

We agree that TC activity and local Atlantic SSTs are correlated but do not view this correlation as implying causation. The alternative, consistent with our results, is that there is a causal nonlocal relationship between Atlantic TC activity and the tropical SST field. The simplest version uses the difference between Atlantic and Tropical-mean SST changes as the predictor (Swanson 2008, Non-locality of Atlantic tropical cyclone intensities, G-cubed, 9, Q04V01). This picture is also consistent with non-local control on wind shear (e.g. Latif et al 2007, G.RL.), atmospheric stability (e.g., Shen et al 2000, J. Clim.) and maximum potential intensity (e.g., Vecchi and Soden, 2007, Nature).

We view the SST change in the tropical Atlantic relative to the rest of the tropics as the key to these questions. Warming in recent decades has been particularly prominent in the northern tropical Atlantic, but such a pattern is not evident in the consensus of simulations of the response to increasing greenhouse gases. So, whether changes in Atlantic SST relative to the rest of the tropics - that according to our hypothesis have resulted in the changes in hurricane activity - were primarily caused by changes in radiative forcing, or whether they were primarily caused by internal climate variability, or (most likely) whether both were involved, is obviously an important issue, but this is not addressed by our paper.

Now a word of caution -- Knutson et al. 2008 is by no means the last word on hurricanes and global warming, and the issue remains highly contested, and will remain so for a long time. Of course, you heard that (accurate) assessment of the state of this particular area of climate science here a long time ago (PDF;-)

Knutson et al. is notable because it clearly identifies observations "inconsistent with" what the models report, which should give us greater confidence in research focused on generating climate predictions. We should have greater confidence because if practically everything observed is claimed to be "consistent with" model predictions, then climate models are pretty useless tools for decision making.

May 19, 2008

Do IPCC Temperature Forecasts Have Skill?

[UPDATE: Roger Pielke, Sr. tells us that we are barking up the wrong tree looking at surface temperatures anyway. He says that the real action is in looking at oceanic heat content, for which predictions have far less variability over short terms than do surface temperatures. And he says that observations of accumulated heat content over the past 4 years "are not even close" to the model predictions. For the details, please see for yourself at his site.]

"Skill" is a technical term in the forecast verification literature that means the ability to beat a naïve baseline when making forecasts. If your forecasting methodology can’t beat some simple heuristic, then it will likely be of little use.

What are examples of such naïve baselines? In weather forecasting historical climatology is often used. So if the average temperature in Boulder for May 20 is 75 degrees, and my prediction is for 85 degrees, then any observed temperature below 80 degrees will mean that my forecast had no skill. In the mutual fund industry, stock indexes are examples of naïve baselines used to evaluate the performance of fund managers. Of course, no forecasting method can always show skill in every forecast, so the appropriate metric is the degree of skill present in your forecasts. Like many other aspects of forecast verification, skill is a matter of degree, and is not black or white.
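To make the definition concrete, here is a minimal sketch in Python (my own illustration, not standard verification tooling); the function name and the toy temperatures are invented, and skill is measured as the fractional reduction in mean squared error relative to the naive baseline:

```python
# Toy skill score: 1 - MSE_forecast / MSE_baseline.
# Positive values mean the forecast beat the naive baseline;
# all numbers below are invented for illustration.

def skill_score(forecasts, baselines, observations):
    """Mean-squared-error skill score relative to a naive baseline."""
    n = len(observations)
    mse_f = sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / n
    mse_b = sum((b - o) ** 2 for b, o in zip(baselines, observations)) / n
    return 1.0 - mse_f / mse_b

# Climatology (the naive baseline) says 75 F every day; the forecaster says 85 F.
obs = [78.0, 74.0, 81.0, 77.0]
fcst = [85.0, 85.0, 85.0, 85.0]
clim = [75.0, 75.0, 75.0, 75.0]
print(skill_score(fcst, clim, obs))  # negative: worse than climatology, i.e., no skill
```

A perfect forecast scores 1, matching the baseline scores 0, and doing worse than the baseline goes negative, which matches the idea that skill is a matter of degree rather than black or white.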

Skill is preferred to "consistency" if only because the addition of bad forecasts to a forecasting ensemble does not improve skill unless it improves forecast accuracy, which is not the case with certain measures of "consistency," as we have seen. Skill also provides a clear metric of success for forecasts, once a naïve baseline is agreed upon. As time goes on, forecasts such as those issued by the IPCC should tend toward increasing skill, as the gap between a naive forecast and a prediction grows. If a forecasting methodology shows no skill then it would be appropriate to question the usefulness and/or accuracy of the forecasting methodology.

In this post I use the IPCC forecasts of 1990, 2001, and 2007 to illustrate the concept of skill, and to explain why it is a much better metric than "consistency" for evaluating the forecasts of the IPCC.

The first task is to choose a naïve baseline. This choice is subjective and people often argue over it. People making forecasts usually want a baseline that is easy to beat; people using or paying for forecasts often want a more rigorous baseline. For this exercise I will use the observed temperature trend over the 100 years ending in 2005, as reported by the 2007 IPCC, which is 0.076 degrees per decade. So in this exercise the baseline that the IPCC forecasts have to beat is the naïve assumption that future temperatures will increase at the same rate as has been observed over the past 100 years. Obviously, one could argue for a different naïve baseline, but this is the one I’ve chosen to use.

I will also use the ensemble average "best guess" from the IPCC for the most appropriate emissions scenario as the prediction. And for observations I will use the average value from the four main groups tracking global temperature trends. These choices could be made differently, and a more comprehensive analysis would explore different ways to do the analysis.
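For the trend comparisons that follow, the test reduces to asking whether the forecast trend lands closer to the observed trend than the naive extrapolation does. A sketch, where only the 0.076 degrees/decade baseline comes from the text; the example trends are placeholders, not the actual values behind the figures:

```python
# Naive baseline from the text: the observed 100-year trend, 0.076 C/decade.
BASELINE = 0.076

def beats_baseline(forecast_trend, observed_trend, baseline=BASELINE):
    """A trend forecast is skillful if it lands closer to the observed
    trend than a naive extrapolation of the historical trend does."""
    return abs(forecast_trend - observed_trend) < abs(baseline - observed_trend)

# e.g. a forecast of 0.3 C/decade against an observed 0.2 C/decade
# over-predicts, yet still beats the 0.076 baseline (illustrative numbers):
print(beats_baseline(0.3, 0.2))  # → True
```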

So then, using these metrics how does the IPCC 1990 best estimate forecast for future increases in temperature compare for 1990-2007? The figure below shows that the IPCC forecast, while over-predicting the observed trend, outperformed this naïve baseline. So the forecast can be claimed to be skillful, but not by very much.


A more definitive example of a skillful forecast is the 2001 IPCC prediction, which the following figure shows demonstrated a high degree of skill.


Similarly, the 2000-2007 forecast of the IPCC 2007 also shows a high degree of skill, as seen in the next figure.


But in 2008 things get interesting. With data from 2008 included, rather than ending in 2007, the 2007 IPCC forecast is no longer skillful, as shown below.


If one starts the IPCC predictions in 2001, then the lack of skill is even greater, as seen below.


What does all of this mean for the ability of the IPCC to predict longer-term climate change? Perhaps nothing, as many scientists would claim that it makes no sense to discuss IPCC predictions on time scales less than 20 or 30 years. If so, then it would also be inappropriate to claim that IPCC forecasts on the shorter scales are skillful or accurate. One way to interpret the recent Keenlyside et al. paper in Nature is that their analysis suggests that the IPCC predictions of future temperature evolution won't be skillful unless they account for various factors not included in the IPCC predictions.

The point of this exercise is to show that there are simple, unambiguous alternatives to using the notion of "consistency" as the basis for comparing IPCC forecasts with observations. "Consistency" between models and observations is a misleading, and I would say fairly useless, way to talk about climate forecasts. Measures of skill provide an unambiguous way to evaluate how the IPCC is doing over time.

But make no mistake, the longer the IPCC forecasts lie in a zone of "no skill" -- which the most recent ones (2007) currently do (for the time of the forecast to present) -- the more interest they will receive. This time period may be for only one more month, or perhaps many years. I don't know. This situation creates interesting incentives for forecasters who want their predictions to show skill.

Old Wine in New Bottles

The IPCC will be using new scenarios for its future work, updating those produced in 2000, the so-called SRES scenarios. This would be good news, since, as we argued in Nature last month, the IPCC scenarios contain some dubious assumptions (PDF). But from the looks of it, it does not appear that much has changed, except the jargon. The figure below compares the new scenarios as presented in a report from a meeting of the IPCC held last month (source: PDF) with those from the 2000 IPCC SRES report. I have presented the two sets of scenarios on the same scale to facilitate comparison. Do they look much different to you?


May 16, 2008

The Politicization of Climate Science

[Update: The ever helpful David Roberts of Grist Magazine points out that an op-ed in the Washington Times yesterday makes the same logical error by Patrick Michaels that I point out in this post below -- namely, that short-term predictive failures obviate the need for action. The op-ed quotes me and says that I am "not previously a global warming skeptic," which is correct, but implies that somehow I am now . . . sorry, wrong. It also quotes my conclusion that climate models are "useless" without the important qualifiers **for decision making in the short term when specific decisions must be made**. Such models are great exploratory scientific tools, and were helpful in bringing the issue of greenhouse gases to the attention of decision makers. I've emailed the author making these points, asking him to correct his piece.]

Here I'd like to explain why one group of people, which we might call politically active climate scientists and their allies, seek to shut down a useful discussion with intimidation, bluster, and name-calling. It is, as you might expect, a function of the destructive politics of science in the global warming debate.

We've had a lot of interest of late in our efforts to explore what would seem to be a simple question:

What observations of the global climate system (over what time scale, with what certainty, etc.) would be inconsistent with predictions of the IPCC AR4?

The motivation for asking this question is of course the repeated claims by climate scientists that this or that observation is "consistent with" such predictions. For claims of consistency between observations and predictions to have any practical meaning whatsoever, they must be accompanied by knowledge of what observations would be inconsistent with predictions. This is a straightforward logical claim, and should be uncontroversial.

Yet efforts to explore this question have been met with accusations of "denialism," of believing that human-caused global warming is "not a problem," of being a "conspiracy theorist." More constructive responses have claimed that questions of inconsistency cannot really be addressed for 20-30 years (which again raises the question why claims of consistency are appropriate on shorter timescales), have focused attention on the various ways to present uncertainty in predictions from a suite of models and also on uncertainties in observation systems, and have focused attention on the proper statistical tests to apply in such situations. In short, there are a lot of interesting subjects to discuss. Some people think that they have all of the answers, which is not at all problematic, as it makes this issue no different than most any other discussion you'll find on blogs (or in academia for that matter).

But why is it that some practicing climate scientists and their allies in the blogosphere appear to be trying to shut down this discussion? After all, isn't asking and debating interesting questions one of the reasons most of us decided to pursue research as a career in the first place? And in the messy and complicated science/politics of climate change wouldn't more understanding be better than less?

The answer to why some people react so strongly to this subject can be gleaned from an op-ed in today's Washington Times by one Patrick Michaels, a well-known activist skeptical of much of the claims made about the science and politics of climate change. Here is what Pat writes:

On May Day, Noah Keenlyside of Germany's Leipzig Institute of Marine Science, published a paper in Nature forecasting no additional global warming "over the next decade."

Al Gore and his minions continue to chant that "the science is settled" on global warming, but the only thing settled is that there has not been any since 1998. Critics of this view (rightfully) argue that 1998 was the warmest year in modern record, due to a huge El Nino event in the Pacific Ocean, and that it is unfair to start any analysis at a high (or a low) point in a longer history. But starting in 2001 or 1998 yields the same result: no warming.

Michaels is correct in his assertion of no warming starting in these dates, but one would reach a different conclusion starting in 1999 or 2000. He continues,

The Keenlyside team found that natural variability in the Earth's oceans will "temporarily offset" global warming from carbon dioxide. Seventy percent of the Earth's surface is oceanic; hence, what happens there greatly influences global temperature. It is now known that both Atlantic and Pacific temperatures can get "stuck," for a decade or longer, in relatively warm or cool patterns. The North Atlantic is now forecast to be in a cold stage for a decade, which will help put the damper on global warming. Another Pacific temperature pattern is forecast not to push warming, either.

Science no longer provides justification for any rush to pass drastic global warming legislation. The Climate Security Act, sponsored by Joe Lieberman and John Warner, would cut emissions of carbon dioxide — the main "global warming" gas — by 66 percent over the next 42 years. With expected population growth, this means about a 90 percent drop in emissions per capita, to 19th-century levels.

He has laid out the bait, complete with reference to Al Gore, claiming that recent trends of no warming plus a forecast of continued lack of warming mean that there is no scientific basis for action on climate change.

There are several ways that one could respond to these claims.

One very common response to these sorts of arguments would be to attack Michaels' putative scientific basis for his policy arguments. Some would argue that he has cherrypicked his starting dates for asserting no trend. Others would observe that the recent trends in temperature are in fact consistent with predictions made by the IPCC. This latter strategy is exactly the approach used by the bloggers at Real Climate when I first started comparing 2007 IPCC predictions (from 2000) with temperature observations.

The "consistent with" strategy is a potential double-edged sword because it grants Pat Michaels a large chunk of territory in the debate. Once you attack the scientific basis for political arguments that are justified in those terms, you are accepting Michaels claim that the political arguments are in fact a function of the science. So in this case, by attacking Michaels scientific claims, you would be in effect saying

"Yes while it is true that these policies are justified on scientific conclusions, Pat Michaels has his science wrong. Getting the science right would lead to different political conclusions that Michaels arrives at."

Here at Prometheus for a long time we've observed how this dynamic shifts political debates onto scientific debates. And I discuss this in detail in my book, The Honest Broker (now on sale;-).

Now, the "consistent with" strategy is a double-edged sword because the future is uncertain. It could very well be the case that there is no additional warming over the next decade or longer, or perhaps a cooling. Given such uncertainty, scientists with an eye on the politics of climate change are quick to define pretty much anything that could be observed in the climate system as "consistent with" IPCC predictions in order to maintain their ability to deflect the sort of claims made by Patrick Michaels. For if everything observed is consistent with IPCC predictions, there is no reason to then call into question the scientific basis used to justify policies.

But this strategy runs a real risk of damaging the credibility of the scientific community. It is certainly possible to claim, as some of our commenters and the folks at RC have, that 20 years of cooling is "consistent with" IPCC predictions, but I can pretty much guarantee that if the world experiences cooling for 20 years from the late 1990s to the 2000-teens, then the political dynamics of climate change and the standing of skeptics will be vastly different than they are today.

Now I am sure that many scientist/activists are just trying to buy some time (e.g., by offering a wager on cooling, as RC has done), waiting for a strong warming trend to resume. And it very well might, since this is the central prediction of the IPCC. Blogger/activist/scientist Joe Romm gushed with mock enthusiasm when the March temperatures showed a much higher rate of warming than the previous three months. We'll see what sort of announcement he or others put up for the much cooler April temperatures. But all such celebrations, on any side of the debate, do is set the stage for the acceptance of articles like that by Pat Michaels, who points out the opposite when it occurs. One way to buy time is to protest, call others names, and muddy the waters. This strategy can work really well when questions of inconsistency take place over a few months and the real world assumes the pattern of behavior found in the central tendency of the IPCC predictions, but if potential inconsistency goes on any longer than this then you start looking like you are protesting too much.

So what is the alternative for those of us who seek action on climate change? I see two options, both predicated on rejecting the linkage between IPCC predictions and current political actions.

1) Recognize that any successful climate policies must be politically robust. This means that they have to make sense to many constituencies for many reasons. Increasing carbon dioxide in the atmosphere will have effects, and these effects are largely judged to be negative over the long term. Whether or not scientists can exactly predict these effects over decades is an open question. But the failure to offer accurate decadal predictions would say nothing about the judgment that continued increasing carbon dioxide is not a good idea. Further, for any climate policies to succeed they must make sense for a lot of reasons -- the economy, trade, development, pork, image, etc. etc. -- science is pretty much lost in the noise. So step one is to reject the premise of claims like that made by Pat Michaels. The tendency among activist climate scientists is instead to accept those claims.

2) The climate community should openly engage the issue of falsification of its predictions. By conveying that fallibility is not only acceptable but expected as part of learning, it would go a long way toward backing off of the overselling of climate science that seems to have taken place. If the IPCC does not have things exactly correct, and the world has been led to believe that it does, then an inevitable loss of credibility might ensue. Those who believe that the IPCC is infallible will of course reject this idea.

Who knows? Maybe warming will resume in May, 2008 at a rapid rate, and continue for years or decades. Then this discussion will be moot. But what if it doesn't?

May 15, 2008

Comparing Distributions of Observations and Predictions: A Response to James Annan

James Annan, a climate modeler, has written a post at his blog trying to explain why it is inconceivable that recent observations of global average temperature trends can be considered to be inconsistent with predictions from the models of the IPCC. James has an increasingly snarky, angry tone to his comments, which I will ignore in favor of the math (and I'd ask those offering comments on our blog to also be respectful, even if that respect is not returned). In this post I will explain that, even using his approach, there remains a quantitative justification for arguing that recent trends are inconsistent with IPCC projections.

James asks:

Are the models consistent with the observations over the last 8 years?

He answers this question using a standard approach to comparing means from two distributions, a test whose appropriateness in this context I have openly questioned. But let's grant James this methodological point for this discussion.

James defines the past 8 years as the past 8 calendar years, 2000-2007, which we will see is a significant decision. As reported to us by his fellow modelers at Real Climate, James presents the distribution of models as having a mean 8-year trend of 0.19 degrees per decade, with a standard deviation of 0.21. So let's also accept this starting point.

In a post on 8-year trends in observational data Real Climate reported the standard deviation of these trends to be 0.19. (Note this is based on NASA data, and I would be happy to use a different value if a good argument can be made to do so.) I calculated the least-squares best-fit line for the monthly data 2000-2007 from the UKMET dataset that James pointed to and arrived at 0.10 degrees C per decade (James gets 0.11).
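For readers who want to reproduce this kind of trend estimate, the recipe is an ordinary least-squares fit to the monthly anomalies, sketched here with synthetic data rather than the actual UKMET series (the trend, noise level, and seed are arbitrary):

```python
# Least-squares trend from monthly anomalies, expressed per decade.
# The data here are synthetic: a 0.10 C/decade trend plus noise.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(96)                 # 96 months = 8 years (2000-2007)
anoms = 0.10 * months / 120 + rng.normal(0.0, 0.1, months.size)

slope_per_month = np.polyfit(months, anoms, 1)[0]
trend_per_decade = slope_per_month * 120    # 120 months per decade
print(round(trend_per_decade, 3))           # recovers roughly the 0.10 put in
```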

So let's take a look at how the distribution of 8-year trends in the models [N(0.19, 0.21)] compares to the analogous 8-year trend in the observations [N(0.10, 0.19)]. This is shown in the following graph with the model distribution in dark blue, and the observations in red.


Guess what? Using this approach James is absolutely correct when he says that it would be incorrect to claim that the temperatures observed from 2000-2007 are inconsistent with the IPCC AR4 model predictions. In more direct language, any reasonable analysis would conclude that the observed and modeled temperature trends are consistent.
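For what it's worth, this can be checked numerically. Treating the model trends as N(0.19, 0.21) and the observed trend as N(0.10, 0.19), a difference-of-means comparison (which I take to be the spirit of James's approach; the helper names are mine) gives a small z-score and a large p-value:

```python
# Difference-of-means comparison of two independent normal estimates:
# models N(0.19, 0.21) vs. observations N(0.10, 0.19), in C/decade.
import math

def diff_of_means_z(mean1, sd1, mean2, sd2):
    """z-score for the difference between two independent normal estimates."""
    return abs(mean1 - mean2) / math.hypot(sd1, sd2)

z = diff_of_means_z(0.19, 0.21, 0.10, 0.19)
p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))  # two-sided p-value
print(round(z, 2), round(p, 2))  # z well under 1: no evidence of inconsistency
```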

But now let's take a look at two different periods, first the past eight years of available data, so April 2000 to March 2008 (I understand that April 2008 values are just out and the anomaly is something like half the value of April 2000, so making this update would make a small difference).


You can clearly see that the amount of overlap between the distributions is smaller than in the first figure above. If one wanted to claim that this amount of overlap demonstrates consistency between models and observations I would not disagree. But at the same time, there is also a case to be made that the distributions are inconsistent, as the amount of overlap is not large. There would be an even stronger case to be made for inconsistency using the satellite data, which shows a smaller trend over this same period.

But now let's take a look at the period January 2001 to present, shown below.


Clearly, there is a strong argument to be made that these distributions are inconsistent with one another (and again, even stronger with the satellite data).

So let's summarize. I have engaged in these exercises to approach the question: "What observations of the climate system would be inconsistent with predictions of IPCC AR4?"

1. Using the example of global average temperatures to illustrate how this answer might be approached, I have concluded that it is not "bogus" or "denialist" (as some prominent climate modelers have suggested) to either ask the question or to suggest that there is some valid evidence indicating inconsistency between observations and model predictions.

2. The proper way to approach this question is not clear. With climate models we are not dealing with balls and urns, as in idealized situations of hypothesis testing. Consider that greater uncertainty in climate models -- which results from any research that expands the realization space -- will increase the consistency between observations and models, if consistency is simply defined as some part of the distribution of observations overlapping with the distribution of forecasts. Thus, defining a distribution of model predictions simply as being equivalent to the distribution of realizations is problematic, especially if model predictions are expected to have practical value.
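The point in (2) is easy to demonstrate. Under an overlap definition of consistency, a fixed observation can flip from "inconsistent" to "consistent" purely because the model spread widens (all the numbers below are hypothetical):

```python
# Widening the model ensemble's spread makes a fixed observation
# "consistent" under an overlap definition. All numbers are hypothetical.

def inside_95(obs, model_mean, model_sd):
    """True if obs lies within the central 95% of N(model_mean, model_sd)."""
    return abs(obs - model_mean) <= 1.96 * model_sd

obs_trend = -0.3  # a hypothetical cooling trend, C/decade
for sd in (0.1, 0.2, 0.3):
    print(sd, inside_95(obs_trend, 0.19, sd))
# same observation, same model mean: the verdict changes with the spread alone
```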

3. Some people get very angry when these issues are raised. Readers should see the reactions to my posts as an obvious example of how the politics of climate change are reflected in pressures not to ask these sort of questions.

One solution to this situation would be to ask those who issue climate predictions for the purposes of informing decision makers -- on any time scale -- to clearly explain at the time the prediction is issued what data are being predicted and what values of those data would falsify the prediction. Otherwise, we will find ourselves in a situation where the instinctive response of those issuing the predictions will be to defend their forecasts as being consistent with the observations, no matter what is observed.

May 14, 2008

Lucia Liljegren on Real Climate's Approach to Falsification of IPCC Predictions


Lucia Liljegren has a wonderfully clear post up which explains issues of consistency and inconsistency between models and observations using a simple analogy based on predicting the heights of Swedes.

She writes:

I think a simple example using heights helps me explain the answer to these questions:

1. Is the mean trend in surface temperature over time predicted by the IPCC consistent with the temperature trends we have been experiencing? (That is: is 2C/century consistent with the trend we’ve seen?)
2. Is the lowest uncertainty bound the IPCC shows the public consistent with the trend in GMST (global mean surface temperature) we have seen since 2001?

I think these questions are important to the public and policy makers. They are the questions people at many climate blogs are asking and they are the questions many voters and likely policy makers would like answered.

I think the answer to both questions is "No, the IPCC predictions are inconsistent with recent data."

Please go to her site and read the entire post.

She concludes her discussion as follows:

The IPCC projections remain falsified. Comparison to data suggests they are biased. The statistical tests account for the actual weather noise in data on earth.

The argument that this falsification is somehow inapplicable because the earth data falls inside the full range of possibilities for models is flawed. We know why the full range of climate models is huge: It contains a large amount of "climate model noise" due to models that are individually biased relative to the system of interest: the earth.

I will continue to admit what I have always admitted: When applying hypothesis tests to a confidence limit of 5%, one does expect to be wrong 5% of the time. It is entirely possible that the current falsification falls in the category of 5% incorrect falsifications. If this is so, the “falsified” diagnosis will reverse, and we won’t see another one anytime soon.

However, for now, the IPCC projections remain falsified, and will do so until the temperatures pick up. Given the current statistical state (a period when a large “type 2” error is expected) it is quite likely we will soon see “fail to falsify” even if the current falsification is a true one. But if the falsification is a “true” falsification, as is most likely, we will see “falsifications” resume. In that case, the falsification will ultimately stick.

For now, all we can do is watch the temperature trends of the real earth.

May 12, 2008

How to Make Two Decades of Cooling Consistent with Warming

The folks at Real Climate have produced a very interesting analysis that provides some useful information for the task of framing a falsification exercise on IPCC predictions of global surface temperature changes. The exercise also provides some insight into how this branch of the climate science community defines the concept of consistency between models and observations, and why it is that every observation seems to be, in their eyes, "consistent with" model predictions. This post explains why Real Climate is wrong in their conclusions on falsification, and why it is that two decades of cooling can be defined as "consistent with" predictions of warming.

In their post, RealClimate concludes:

Claims that a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported. Similar claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation are equally bogus.

Real Climate defines observations to be "consistent with" the models to mean that an observation, with its corresponding uncertainty range, overlaps with the spread of the entire ensemble of model realizations. This is the exact same definition of "consistent with" that I have criticized here on many occasions. Why? Because it means that the greater the uncertainty in modeling -- that is, the greater the spread in outcomes across model realizations -- the more likely that observations will be “consistent with” the models. More models, more outcomes, greater consistency – but less certainty. It is in this way that pretty much any observation becomes "consistent with" the models.

As we will see below, the assertion by Real Climate that "a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported" is simply wrong. Real Climate is more on the mark when they write:

Over a twenty year period, you would be on stronger ground in arguing that a negative trend would be outside the 95% confidence limits of the expected trend (the one model run in the above ensemble suggests that would only happen ~2% of the time).

Most people seeking to examine the consistency between models and observations would use some sort of probabilistic threshold, like a 95% confidence interval, which would in this case be calculated as a joint probability of observations and models.

So let’s go through the exercise of comparing modeled and observed trends to illustrate why Real Climate is wrong, or more generously, has adopted a definition of "consistent with" that is so broad as to be meaningless in practice.

First the observations. Thanks to Lucia Liljegren we have the observed trends in global surface temperature 2001-present (which is slightly less than 8 years), with 95% confidence intervals, for five groups that keep such records. Here is that information as she has presented it, in degrees Celsius per decade:

UKMET -1.3 +/- 1.8
NOAA 0.0 +/- 1.6
RSS -1.5 +/- 2.2
UAH -0.9 +/- 2.8
GISS 0.2 +/- 2.1

Real Climate very usefully presents 8-year trends for 55 model realizations in a figure that is reproduced below. I have annotated the graph by showing the 95% range for the model realizations, which corresponds to excluding the most extreme 3 model realizations on either end of the distribution (2.75 to be exact). (I have emailed Gavin Schmidt asking for the data, which would enable a bit more precision.) The blue horizontal line at the bottom labeled "95% spread across model realizations" shows the 95% range of 8-year trends present across the IPCC model realizations.

I have also annotated the figure to show in purple the 8+ year trends from the five groups that track global surface temperatures, with the 95% range as calculated by Lucia Liljegren. I have presented each of the individual ranges for the 5 groups, and then with a single purple horizontal line the range across the five observational groups.


Quite clearly there is a large portion of the spread in the observations that is not encompassed by the spread in the models. This part of the observations is cooler than the range provided by the models. And this then leads us to the question of how to interpret the lack of complete overlap.

One interpretation, and the one that makes the most sense to me, is that because there is not an overlap between modeled and observed trends at the 95% level (which is fairly obvious from the figure, but could be easily calculated with the original data) then one could properly claim that the surface temperature observations 2001-present fail to demonstrate consistency with the models of IPCC AR4 at the 95% level. They do however show consistency at some lower level of confidence. Taking each observational dataset independently, one would conclude that UKMET, RSS, and UAH are inconsistent with the models, whereas NASA and NOAA are consistent with them, again at a 95% threshold.

Another interpretation, apparently favored by the guys at Real Climate, is that because there is some overlap between the 95% ranges (i.e., overlap between the blue and purple lines), the models and observations are in fact consistent with one another. [UPDATE: Gavin Schmidt at RC confirms this interpretation when he writes in response to a question about the possibility of falsifying IPCC predictions: "Sure. Data that falls unambiguously outside it [i.e., the model range]."] But this type of test for consistency is extremely weak. The Figure below takes the 95% spread in the observations and illustrates how far above and below the 95% spread in the models some overlap would allow. If the test of “consistent with” is defined as any overlap between models and observations, then any rate of cooling or warming between -10 deg C/decade and +13.0 deg C/decade could be said to be “consistent with” the model predictions of the IPCC. This is clearly so absurd as to be meaningless.


So when Real Climate concludes that . . .

Claims that a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported

. . . they are simply incorrect by any reasonable definition of consistency based on probabilistic reasoning. Such claims do in fact have ample support.

If they wish to assert that any overlap between uncertainties in observed temperature trends and the spread of model realizations over an 8-year period implies consistency, then they are arguing that any 8-year trend between -10 and +13 deg C/decade would be consistent with the models. This sort of reasoning turns climate model falsification into a rather meaningless exercise. [UPDATE: In the comments, climate modeler James Annan makes exactly this argument, but goes even further: "even if the model and obs ranges didn't overlap at all, they might (just) be consistent".]

Of course in practice the tactical response to claims that observations falsify model predictions will be to argue for expanding the range of realizations in the models, and arguing for reducing the range of uncertainties in the observations. This is one reason why debates over the predictions of climate models devolve into philosophical discussions about how to treat uncertainties.

Finally, how then should we interpret Keenlyside et al.? It is, as Real Climate admits, outside the 95% range of the IPCC AR4 models for its prediction of trends to 2015. But wait: Keenlyside et al. in fact use one of the models of the IPCC AR4 runs, and this fact could thus be used to argue that the range of possible 20-year trends is actually larger than that presented by the IPCC. Interpreted this way, we are back to the interesting conclusion that more models, initialized in different ways, actually work to expand the range of possible futures. Thus we should not be surprised to see Real Climate conclude:

Similar claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation [of "a negative observed trend"] are equally bogus.

And this, gentle readers, is exactly why I explained in a recent post that Keenlyside et al. now means that a two-decade cooling trend (in RC parlance, a "negative observed trend over 20 years") is defined as consistent with predictions of warming.

May 01, 2008

Blinded By Assumptions

My latest column for Bridges is out. It is titled "Blinded by Assumptions," and you can read it here.

January 31, 2008

Climate Experts Debating the Role of Experts in Policy

In the spring of 1997, a group called Ozone Action issued a statement signed by six prominent scientists calling for action on climate change. The letter prompted an interesting public exchange among leading scientists about who has the authority and credentials to call for political action on issues involving science, and whether or not the IPCC is the sole legitimate voice. The exchange is worth reviewing and considering, and I've reproduced parts of it below.

The Six Scientists letter was criticized by a leading climate scientist, Tom Wigley, who wrote:

I thought I should tell you that, for a number of reasons, I am not willing to sign the "6 scientists" statement you distributed. To the contrary, I strongly oppose it.

While I hold the individuals in high regard, I do not consider them authorities on the climate change issue. From memory, none were lead authors of the recent IPCC reports. While this may be an advantage from some points of view, it is not sufficient to overcome the criticism implied by my first sentence. Their endorsement of IPCC is useful, but their statement goes beyond what IPCC says. This can only be damaging to the IPCC process.

Phrases like (my emphasis) "climate DISRUPTION is under way" have no scientific basis, and the claimed need for "greenhouse gas emissions (reductions) beginning immediately" is contrary to the careful assessment of this issue that is given in the IPCC reports.

No matter how well meaning they may be, inexpert views and opinions will not help. In this issue, given that a comprehensive EXPERT document exists, it is exceedingly unwise for highly regarded scientists to step outside their areas of expertise. This is not good scientific practice.

I urge the authors of the statement to endorse IPCC, but go no further. I further recommend that any other scientist considering endorsement of the present statement think very carefully before so doing. In my view, endorsing any statement that goes beyond IPCC, or which is in any way inconsistent with IPCC publications, will potentially label the individual as an advocate and reduce their credibility as an informed and dispassionate scientist.

John Holdren, an energy policy expert now at Harvard, responded strongly to these comments:

Dr. Wigley's critique of the "6 scientists' statement" on global climatic disruption is surprising and, in all of its principal contentions, completely unconvincing.

Consider first his apparent contention that, the IPCC having rendered its authoritative judgment on the causes, consequences, and implications of climate change, no other scientist or group of scientists now has any business offering a supplemental opinion on any part of the matter. Or perhaps he is saying that no scientists other than _climatologists_ should be offering such opinions. (More about that below.) Either way, it is a disturbing proposition, not least for being so contrary to soundly based and solidly established traditions of both scientific and policy discourse.

Assessments of complex science-and-society problems by interdisciplinary panels can make valuable contributions to consensus-building in the scientific community, to shaping research agendas, and to illuminating policy options (among other benefits), as the IPCC admirably has done. I myself have put a good deal of my professional life, over the last quarter of a century, into participating in and leading such assessments on a wide range of topics in the energy, environment, and international-security fields. But I would never have asserted that the product of any of them was sacrosanct -- not to be commented or expanded upon, never mind criticized, by any group other than the original authors -- as Dr. Wigley appears to be asserting for the product of the IPCC. Does he really think that truth, wisdom, and insight are now to be regarded as the exclusive franchise of giant international panels, and anybody not so empaneled (or even those who are but might wish to speak through another channel) must be quiet?

Dr. Wigley has written that he does not consider the signers of the "6 scientists' statement" to be "authorities on the climate change issue" and that "Inexpert opinions do not help". Since he is a climatologist, one supposes that he would have been at least somewhat less distressed if a statement of this sort had been issued by members of that profession. Do they hold the only relevant "expertise"? What part of "the climate change issue" is he talking about here?

The IPCC process engaged not only climatologists but also atmospheric chemists, soil scientists, foresters, ecologists, energy technologists, economists, statisticians, and a good many other kinds of specialists as well -- and for good reason. Even the relatively narrow question of how much climate change has taken place so far is not the province of climatologists alone (since, for example, the insights of atmospheric chemists, geochemists, glaciologists, geographers, and more are needed to help understand what the climate was like before humans started messing with it).

Understanding how the climate may change in the future, of course, depends on insights not only from climatologists but also from soil scientists, oceanographers, and biologists who study the carbon cycle; from energy analysts who study how much fossil fuel is likely to be burned in the future and with what technologies; from foresters and geographers who study the race between deforestation and reforestation; and so on. Understanding the likely and possible responses of terrestrial and marine ecosystems to climate change -- and the consequences for agriculture, forestry, fisheries, biodiversity, and the distribution and abundance of human-disease vectors and pathogens -- is the province of another whole panoply of types of biologists, as well as agronomists, foresters, epidemiologists, and more.

Understanding what technical and policy options are available for reducing greenhouse-gas emissions, and how fast and with what costs these options might be implemented, is the province of energy technologists, economists, and policy analysts, among others. And the decision about what measures governments should take to prepare for and/or implement some suitable subset of these options is necessarily a political choice, inasmuch as it entails a value-laden set of trade-offs among costs, risks, and benefits of incommensurable types. Of course people's thinking about these trade-offs ought to be informed by as complete a portrayal as possible of what is known and not known about the climatological, geochemical, biological, techno- logical, economic, and other characteristics of the problem. But to believe that this portrayal will be understood in exactly the same way by any two individuals -- or that, if it were, its ingredients would be weighed by those individuals in exactly the same way, so as to lead them to identical policy preferences -- would be naive in the extreme.

Luckily, society has worked out a way to reach conclusions about what to do in the face of multifaceted, uncertainty-laden choices about problems affecting the common good, and it involves not only science and policy analysis but also, ultimately and appropriately, politics. Neither the science part of this mix nor the policy-analysis part -- not to speak of the political part -- works by designating a single individual or group (no matter how distinguished) as the single arbiter of what is right, what is reasonable, or what is helpful in public discourse. . .

Thanks to the folks at Carnegie Mellon University, the full exchange is preserved here.

January 26, 2008

Updated IPCC Forecasts vs. Observations

IPCC Verification w-RSS correction.png

Carl Mears from Remote Sensing Systems, Inc. was kind enough to email me to point out that the RSS data that I had shared with our readers a few weeks ago contained an error that RSS has since corrected. The summary figure above is re-plotted with the corrected data (RSS is the red curve). At the time I wrote:

Something fishy is going on. The IPCC and CCSP recently argued that the surface and satellite records are reconciled. This might be the case from the standpoint of long-term linear trends. But the data here suggest that there is some work left to do. The UAH and NASA curves are remarkably consistent. But RSS dramatically contradicts both. UKMET shows 2007 as the coolest year since 2001, whereas NASA has 2007 as the second warmest. In particular estimates for 2007 seem to diverge in unique ways. It'd be nice to see the scientific community explain all of this.

For those interested in the specifics, Carl explained in his email:

The error was simple -- I made a small change in the code ~ 1 year ago that resulted in a ~0.1K decrease in the absolute value of AMSU TLTs, but neglected to reprocess data from 1998-2006, instead only using it for the new (Jan 2007 onward) data. Since the AMSU TLTs are forced to match the MSU TLTs (on average) during the overlap period, this resulted in an apparent drop in TLT for 2007. Reprocessing the earlier AMSU data, thus lowering AMSU TLT by 0.1 from 1998-2006, resulted in small changes in the parameters that are added to the AMSU temperatures to make them match MSU temperatures, and thus the 2007 data is increased by ~0.1K. My colleagues at UAH (Christy and Spencer) were both very helpful in diagnosing the problem.
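Carl's description of the merge can be illustrated with a toy calculation. The temperatures below are invented numbers, and the real RSS processing is far more involved; this only sketches how a constant offset, computed over an overlap period, propagates an unreprocessed change into the newest data:

```python
# Toy illustration (invented numbers) of the merge step Carl describes:
# AMSU anomalies receive a constant offset so their mean over the
# MSU/AMSU overlap period matches the MSU mean.

def merge_offset(msu_overlap, amsu_overlap):
    """Constant added to AMSU so its overlap-period mean matches MSU's."""
    return sum(msu_overlap) / len(msu_overlap) - sum(amsu_overlap) / len(amsu_overlap)

msu_overlap = [0.30, 0.40, 0.35]        # MSU during the overlap years
amsu_old_overlap = [0.32, 0.42, 0.37]   # old-code AMSU, same years
amsu_new_2007 = 0.45 - 0.1              # 2007 run with new code (~0.1K lower)

# The bug: offset computed against OLD overlap data, applied to NEW
# 2007 data, so 2007 comes out spuriously ~0.1K too cool
bad_offset = merge_offset(msu_overlap, amsu_old_overlap)
bad_2007 = amsu_new_2007 + bad_offset

# The fix: reprocess the overlap years with the new code (all ~0.1K
# lower), which raises the offset by ~0.1K and restores the 2007 value
amsu_new_overlap = [x - 0.1 for x in amsu_old_overlap]
good_offset = merge_offset(msu_overlap, amsu_new_overlap)
good_2007 = amsu_new_2007 + good_offset

print(round(good_2007 - bad_2007, 3))  # the ~0.1K correction
```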

It is important to note that the RSS correction does not alter my earlier analysis of the IPCC predictions (made in 1990, 1995, 2001, 2007) and various observations. Thanks again to Carl for alerting me to the error and giving me a chance to update the figures with the new information!

January 23, 2008

New Measures for Innovation

I posted before about the Advisory Committee on Measuring Innovation in the 21st Century, an effort of the Department of Commerce to adjust its economic statistics to better reflect the nature of current innovation. The challenges of effective measurement (and the corresponding analyses) are demonstrated by the lack of effective data on services in the economy and by the limits of patents as an indicator of innovation.

As an unfortunate indicator of the perceived value of the project, the committee's report was released this past Friday (a time guaranteed to get limited attention from the media). You can read the press release, as well as key quotes and facts, from the Committee's homepage.

The project will continue through a series of workshops on the drivers of innovation. The Commerce Secretary committed to developing measures for innovation through the Bureau of Economic Analysis, with help from the Bureau of Labor Statistics and relevant Department of Commerce agencies. The plan is to develop an innovation account that would include measures of intellectual property and human capital that would help measure the impact of investments in innovation on productivity. They will encourage the National Science Foundation to continue its efforts to improve R&D measures connected to innovation. The first new account (the official parlance for a collection of measures in the economic indicators) is expected by January of next year. In the meantime, interested parties should check back with the website to learn about the workshops and other efforts of the committee.

Posted on January 23, 2008 11:15 PM View this article | Comments (0)
Posted to Author: Bruggeman, D. | Scientific Assessments

January 18, 2008

Temperature Trends 1990-2007: Hansen, IPCC, Obs

The figure below shows linear trends in temperature for Jim Hansen's three 1988 scenarios (in shades of blue), for the IPCC predictions issued in 1990, 1995, 2001, and 2007 (in shades of green), and for four sets of observations (in shades of brown). I chose the period 1990-2007 because this is the period of overlap for all of the predictions (except IPCC 2007, which starts in 2000).

temp trends.png

Looking just at these measures of central tendency (i.e., no formal consideration of uncertainties) it seems clear that:

1. Trends in all of Hansen's scenarios are above IPCC 1995, 2001, and 2007, as well as three of the four surface observations.

2. The outlier among the surface observations, and the only one consistent with Hansen's Scenarios A and B, is the NASA dataset overseen by Jim Hansen. Whatever the explanation, good scientific practice would have forecasting, and the collection of data used to verify those forecasts, conducted by completely separate groups.

3. Hansen's Scenario A is very similar to IPCC 1990, which makes sense given their closeness in time, and assumptions of forcings at the time (i.e., thoughts on business-as-usual did not change much over that time).

The data for the Hansen scenarios were obtained from the ongoing discussion at Climate Audit, and the IPCC and observational data are as described on this site over the past week or so in the forecast verification exercise that I have conducted. This is an ongoing exercise, part of a conversation across the web, so if you have questions or comments, please share them here, or, if our comment interface is driving you nuts (as it is with me), comment over at Climate Audit, where I'll participate in the discussions.
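For readers who want to replicate this kind of comparison, the trends in the figure are ordinary least-squares slopes fit to annual anomalies. Here is a minimal sketch; the anomaly values are invented for illustration, not the actual NASA, UKMET, UAH, or RSS data:

```python
# Minimal sketch of the trend calculation: an ordinary least-squares
# slope fit to annual temperature anomalies over 1990-2007. The anomaly
# values are invented for illustration, not any actual dataset.
import numpy as np

years = np.arange(1990, 2008)                    # 1990-2007 inclusive
anoms = 0.20 * (years - 1990) / 10 + np.array([  # ~0.2 deg C/decade + noise
    0.05, -0.02, 0.01, 0.03, -0.04, 0.06, 0.00, 0.10,
    0.12, -0.03, 0.02, 0.08, 0.09, 0.11, 0.07, 0.13, 0.06, 0.04])

slope_per_year = np.polyfit(years, anoms, 1)[0]  # deg C per year
trend_per_decade = 10 * slope_per_year
print(round(trend_per_decade, 2))                # close to the built-in 0.2
```

Running the same fit over each observational series and each prediction series puts them all on a common deg-C-per-decade footing, which is what the figure compares.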

January 16, 2008

UKMET Short Term Global Temperature Forecast

UKMET Short Term Forecast.png

This figure shows a short-term forecast of global average temperature issued by the UK Meteorological Service, with some annotations that I've added and describe below. The forecast is discussed in this PDF, where you can find the original figure. This sort of forecast is to be applauded, because it allows for learning based on experience: such forecasts, whether eventually shown to be right or wrong, serve as powerful tests of knowledge and predictive skill. Now on to the figure itself.

The figure is accompanied by this caption:

Observations of global average temperature (black line) compared with decadal ‘hindcasts’ (10-year model simulations of the past, white lines and red shading), plus the first decadal prediction for the 10 years from 2005. Temperatures are plotted as anomalies (relative to 1979–2001). As with short-term weather forecasts there remains some uncertainty in our predictions of temperature over a decade. The red shading shows our confidence in predictions of temperature in any given year. If there are no volcanic eruptions during the forecast period, there is a 90% likelihood of the temperature being within the shaded area.

The figure shows both hindcasts and a forecast. I've shaded the hindcasts in grey. I've added the green curve which is my replication of the global temperature anomalies from the UKMET HADCRUT3 dataset extended to 2007. I've also plotted as a blue dot the prediction issued by UKMET for 2008, which is expected to be indistinguishable from the temperature of years 2001 to 2007 (which were indistinguishable from each other). The magnitude of the UKMET forecast over the next decade is almost exactly identical to the IPCC AR4 prediction over the same time period, which I discussed last week.

I have added the pink star at 1995 to highlight the advantages offered by hindcasting. Imagine if the model realization begun in 1985 had been continued beyond 1995, rather than being re-run after 1995. Clearly, all subsequent observed temperatures would have been well below that 1985 curve. One important reason for this is of course the eruption of Mt. Pinatubo, which was not predicted. And that is precisely the point: prediction is really hard, especially when conducted in the context of open systems, and, as is often said, especially about the future. Our ability to explain why a prediction was wrong does not make that prediction right, and this point is often lost in debates about climate change.
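As the caption notes, the curves are plotted as anomalies relative to a 1979-2001 baseline. For readers unfamiliar with the convention, here is a minimal sketch of how such anomalies are formed; the temperature values are invented for illustration:

```python
# Sketch of converting absolute annual temperatures into anomalies
# relative to a 1979-2001 baseline, as in the UKMET figure's caption.
# The temperature values below are invented for illustration.
temps = {year: 14.0 + 0.015 * (year - 1979) for year in range(1979, 2008)}

baseline_years = range(1979, 2002)  # 1979-2001 inclusive
baseline = sum(temps[y] for y in baseline_years) / len(baseline_years)

# An anomaly is just the departure from the baseline-period mean,
# so anomalies over the baseline years average to zero by construction
anomalies = {y: t - baseline for y, t in temps.items()}
print(round(anomalies[2007], 3))
```

One practical consequence: two datasets with different baseline periods will show systematically offset anomalies even when they agree on the underlying temperatures, which matters when overlaying curves as I have done here.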

Again, kudos to the UK Met Service. They've had the fortitude to issue a short term prediction related to climate change. Other scientific bodies should follow this lead. It is good for science, and good for the use of science in decision making.

January 15, 2008

Verification of IPCC Sea Level Rise Forecasts 1990, 1995, 2001

Here is a graph showing IPCC sea level rise forecasts from the FAR (1990), SAR (1995), and TAR (2001).

IPCC Sea Level.png

And here are the sources:

IPCC Sea Level Sources.png

Observational data can be found here. Thanks to my colleague Steve Nerem.

Unlike its temperature forecasts, the IPCC's sea level rise forecasts give no indication that scientists have a handle on the issue. As with temperature, the IPCC dramatically decreased its predictions of sea level rise between its first (1990) and second (1995) assessment reports. It then nudged its prediction down a very small amount in its 2001 report. The observational data fall between the 1990 and the 1995/2001 assessments.

Last year Rahmstorf et al. published a short paper in Science comparing observations of temperature with IPCC 2001 predictions (Aside: it is remarkable that Science allowed them to ignore IPCC 1990 and 1995). Their analysis is completely consistent with the temperature and sea level rise verifications that I have shown. On sea level rise they concluded:

Previous projections, as summarized by IPCC, have not exaggerated but may in some respects even have underestimated the change, in particular for sea level.

This statement is only true if one ignores the 1990 IPCC report, which overestimated both sea level rise and temperature. Rahmstorf et al.'s interpretation of the results is little more than spin; it would have been equally valid to conclude, based on the 1990 report:

Previous projections, as summarized by IPCC, have not underestimated but may in some respects even have exaggerated the change, both for sea level and temperature.

Rather than spin the results, I conclude that the ongoing debate about future sea level rise is entirely appropriate. The fact that the IPCC has been unsuccessful in predicting sea level rise does not mean that things are worse or better, but simply that scientists clearly do not have a handle on this issue and are unable to predict sea level changes on a decadal scale. The lack of predictive accuracy does not lend optimism about prospects for accuracy on the multi-decadal scale. Consider that the 2007 IPCC took a pass on predicting near-term sea level rise, choosing instead to focus 90 years out (as far as I am aware; anyone who knows differently, please let me know).

This state of affairs should give no comfort to anyone: over the 21st century sea level is expected to rise anywhere from an unnoticeable amount to the catastrophic, and scientists have essentially no ability to predict this rise, much less the effects of various climate policies on that rise. As we've said here before, this is a cherrypicker's delight and a policy maker's nightmare. It'd be nice to see the scientific community engage in a bit less spin and a bit more comprehensive analysis.

January 14, 2008

James Hansen on One Year's Temperature

NASA's James Hansen just sent around a commentary (in PDF here) on the significance of the 2007 global temperature in the context of the long-term temperature record that he compiles for NASA. After Real Climate went nuts over how misguided it is to engage in a discussion of eight years' worth of temperature records, I can't wait to see them lay into Jim Hansen for asserting that one year's data is of particular significance (and also for not graphing uncertainty ranges):

The Southern Oscillation and the solar cycle have significant effects on year-to-year global temperature change. Because both of these natural effects were in their cool phases in 2007, the unusual warmth of 2007 is all the more notable.

But maybe data that confirm previously held beliefs are acceptable no matter how short the record, while data that do not are unacceptable, no matter how long the record. That would be confirmation bias, wouldn't it?

Anyway, Dr. Hansen does not explain why the 2007 NASA data runs counter to that of UKMET, UAH or RSS, but does manage to note the "incorrect" 2007 UKMET prediction of a record warm year. Dr. Hansen issues his own prediction:

. . . it is unlikely that 2008 will be a year with an unusual global temperature change, i.e., it is likely to remain close to the range of (high) values exhibited in 2002-2007. On the other hand, when the next El Nino occurs it is likely to carry global temperature to a significantly higher level than has occurred in recent centuries, probably higher than any year in recent millennia. Thus we suggest that, barring the unlikely event of a large volcanic eruption, a record global temperature clearly exceeding that of 2005 can be expected within the next 2-3 years.

I wonder if this holds just for the NASA dataset put together by Dr. Hansen or for all of the temperature datasets.

Updated Chart: IPCC Temperature Verification

I've received some email comments suggesting that my use of the 1992 IPCC Supplement as the basis for IPCC 1990 temperature predictions was "too fair" to the IPCC because the IPCC actually reduced its temperature projections from 1990 to 1992. In addition, Gavin Schmidt and a commenter over at Climate Audit also did not like my use of the 1992 report. So I am going to take full advantage of the rapid feedback of the web to provide an updated figure, based on IPCC 1990, specifically, Figure A.9, p. 336. In other words, I no longer rely on the 1992 supplement, and have simply gone back to the original IPCC 1990 FAR. Here then is that updated Figure:

IPCC Verification 90-95-01-07 vs Obs.png

Thanks all for the feedback!

Pachauri on Recent Climate Trends

Last week scientists at the Real Climate blog gave their confirmation bias synapses a workout by explaining that eight years of climate data is meaningless, and that people who pay any attention to recent climate trends are "misguided." I certainly agree that we should exercise caution in interpreting short-duration observations; nonetheless, we should always be trying to explain (rather than simply discount) observational evidence, to avoid the trap of confirmation bias.

So it was interesting to see IPCC Chairman Rajendra Pachauri exhibit "misguided" behavior when he expressed some surprise about recent climate trends in The Guardian:

Rajendra Pachauri, the head of the U.N. Panel that shared the 2007 Nobel Peace Prize with former U.S. Vice President Al Gore, said he would look into the apparent temperature plateau so far this century.

"One would really have to see on the basis of some analysis what this really represents," he told Reuters, adding "are there natural factors compensating?" for increases in greenhouse gases from human activities.

He added that sceptics about a human role in climate change delighted in hints that temperatures might not be rising. "There are some people who would want to find every single excuse to say that this is all hogwash," he said.

Ironically, by suggesting that there might be some significance to recent climate trends, Dr. Pachauri has provided ammunition to those very same skeptics that he disparages. Perhaps Real Climate will explain how misguided he is, but somehow I doubt it.

For the record, I accept the conclusions of IPCC Working Group I. I don't know how to interpret the climate observations of the early 21st century, but I believe that there are currently multiple valid hypotheses. I also think that we can best avoid confirmation bias, and other cognitive traps, by making explicit predictions of the future and testing them against experience. The climate community, or at least its activist wing, studiously avoids forecast verification. It just goes to show: confirmation bias is a more comfortable state than dissonance, and that goes for people on all sides of the climate debate.

Verification of IPCC Temperature Forecasts 1990, 1995, 2001, and 2007

Last week I began an exercise in which I sought to compare global average temperature predictions with the actual observed temperature record. With this post I'll share my complete results.

Last week I showed a comparison of the 2007 IPCC temperature forecasts (which actually began in 2000, so they were really forecasts of data that had already been observed). Here is that figure.

surf-sat vs. IPCC.png

Then I showed a figure with a comparison of the 1990 predictions made by the IPCC in 1992 with actual temperature data. Some folks misinterpreted the three curves that I showed from the IPCC as an uncertainty bound. They were not. Instead, they were forecasts conditional on different assumptions about climate sensitivity, with the middle curve showing the prediction for a 2.5 degree climate sensitivity, which is lower than what scientists currently believe to be the most likely value. So I have reproduced that graph below without the 1.5 and 4.5 degree climate sensitivity curves.

IPCC 1990 verification.png

Now here is a similar figure for the 1995 forecast. The IPCC in 1995 dramatically lowered its global temperature predictions, primarily due to the inclusion of consideration of atmospheric aerosols, which have a cooling effect. You can see the 1995 IPCC predictions on pp. 322-323 of its Second Assessment Report. Figure 6.20 shows the dramatic reduction of temperature predictions through the inclusion of aerosols. The predictions themselves can be found in Figure 6.22, and are the values that I use in the figure below, which also use a 2.5 degree climate sensitivity, and are also based on the IS92e or IS92f scenarios.

IPCC 1995 Verification.png

In contrast to the 1990 prediction, the 1995 prediction looks spot on. It is worth noting that the 1995 prediction began in 1990, and so includes observations that were known at the time of the prediction.

In 2001, the IPCC nudged its predictions up a small amount. The prediction is also based on a 1990 start, and can be found in the Third Assessment Report here. The most relevant scenario is A1FI, and the average climate sensitivity of the models used to generate these predictions is 2.8 degrees, which may be large enough to account for the difference between the 1995 and 2001 predictions. Here is a figure showing the 2001 forecast verification.

IPCC 2001 Verification.png

Like 1995, the 2001 figure looks quite good in comparison to the actual data.

Now we can compare all four predictions with the data, but first here are all four IPCC temperature predictions (1990, 1995, 2001, 2007) on one graph.

IPCC Predictions 90-95-01-07.png

IPCC issued its first temperature prediction in 1990 (I actually use the prediction from the supplement to the 1990 report, issued in 1992). Its 1995 report dramatically lowered this prediction. The 2001 report nudged it up a bit, and the 2007 report elevated the entire curve another small increment, keeping the slope the same. My hypothesis is that these successive changes to the IPCC predictions reflect incrementally improved fits to observed temperature data, as more observations have come in since 1990.

In other words, the early 1990s showed how important aerosols were, in the form of dramatically lowered temperatures after the eruption of Mt. Pinatubo, which immediately put the 1990 predictions well off track. So the IPCC recognized the importance of aerosols and lowered its predictions, putting the 1995 IPCC back on track with what had happened with the real climate since its earlier report. With the higher observed temperatures of the late 1990s and early 2000s, the slightly increased temperature predictions of 2001 and 2007 represented better fits with observations since 1995 (for the 2001 report) and 2001 (for the 2007 report).

Imagine if you were asked to issue a prediction for the temperature trend over the next week, and you were allowed to update that prediction every second day. Regardless of where you think things will eventually end up, you'd be foolish not to include what you've observed in producing your mid-week updates. Was this behavior by the IPCC intentional, or simply the inevitable result of using a prediction start-date years before the forecast was issued? I have no idea. But the lesson for the IPCC should be quite clear: all predictions (and projections) that it issues should begin no earlier than the year in which the prediction is made.

And now the graph that you have all been waiting for. Here is a figure showing all four IPCC predictions with the surface (NASA, UKMET) and satellite (UAH, RSS) temperature record.

IPCC Verification 90-95-01-07 vs Obs.png

You can see on this graph that the 1990 prediction was obviously much higher than the other three, and you can also clearly see how the IPCC temperature predictions have crept up as observations showed increasing temperatures from 1995-2005. A simple test of my hypothesis is as follows: if temperatures from 2005 to the next report fall below the 2007 IPCC prediction, then the next IPCC will lower its predictions. Similarly, if values fall above that level, then the IPCC will increase its predictions.

What to take from this exercise?

1. The IPCC does not make forecast verification an easy task. The IPCC does not clearly identify what exactly it is predicting nor the variables that can be used to verify those predictions. Like so much else in climate science this leaves evaluations of predictions subject to much ambiguity, cherrypicking, and seeing what one wants to see.

2. The IPCC actually has a pretty good track record in its predictions, especially after it dramatically reduced its 1990 prediction. This record is clouded by an appearance of post-hoc curve fitting. In each of 1995, 2001, and 2007 the changes to the IPCC predictions had the net result of improving predictive performance with observations that had already been made. This is a bit like predicting today's weather at 6PM.

3. Because the IPCC clears the slate every 5-7 years with a new assessment report, it guarantees that its most recent predictions can never be rigorously verified, because, as climate scientists will tell you, 5-7 years is far too short to say anything about climate predictions. Consequently, the IPCC should not simply predict and then move on; it should pay close attention to its past predictions and examine why they succeed or fail. As new reports are issued the IPCC should go to great lengths to place its new predictions on an apples-to-apples basis with earlier predictions. The SAR did a nice job of this; more recent reports have not. A good example of how not to update predictions is the treatment of sea level rise between the TAR and AR4, which is not at all apples-to-apples.

4. Finally, and I repeat myself, the IPCC should issue predictions for the future, not the recent past.

Appendix: Checking My Work

The IPCC AR4 Technical Summary includes a figure (Figure TS.26) that shows a verification of sorts. I use that figure as a comparison to what I've done. Here is that figure, with a number of my annotations superimposed, and explained below.

IPCC Check.png

Let me first say that the IPCC probably could not have produced a more difficult-to-interpret figure (I see Gavin Schmidt at Real Climate has put out a call for help in understanding it). I have annotated it with letters and some lines and I explain them below.

A. I added this thick horizontal blue line to indicate the 1990 baseline. This line crosses a thin blue line that I placed to represent 2007.

B. This thin blue line crosses the vertical axis where my 1995 verification value lies, represented by the large purple dot.

C. This thin blue line crosses the vertical axis where my 1990 verification value lies, represented by the large green dot. (My 2001 verification is represented by the large light blue dot.)

D. You can see that my 1990 verification value falls exactly on a line extended from the upper bound of the IPCC curve. I have also extended the IPCC mid-range curve as well (note that my superimposed extension falls a tiny bit higher than it should). Why is this? I'm not sure, but one answer is that the uncertainty range presented by the IPCC represents the scenario range, but of course in the past there is no scenario uncertainty. Since emissions have been at the high end of the scenario space, if my interpretation is correct, then my verification is consistent with that of the IPCC.

E. For the 1995 verification, you can see that similarly my value falls exactly on a line extended from the upper end of the IPCC range. This would also be consistent with the IPCC presenting the uncertainty range as representing alternative scenarios. The light blue dot is similarly at the upper end of the blue range. What should not be missed is that the relative difference between my verifications and those of the IPCC is just about identical.

A few commenters over at Real Climate, including Gavin Schmidt, have suggested that such figures need uncertainty bounds on them. In general, I agree, but I'd note that none of the model predictions presented by the IPCC (B1, A1B, A2, Commitment -- note that all of these understate reality since emissions are following A1FI, the highest, most closely) show any model uncertainty whatsoever (nor any observational uncertainty, nor multiple measures of temperature). Surely with the vast resources available to the IPCC, they could have done a much more rigorous job of verification.

In closing, I guess I'd suggest to the IPCC that this sort of exercise should be taken up as a formal part of its work. There are many, many other variables (and relationships between variables) that might be examined in this way. And they should be.

January 11, 2008

Real Climate's Two Voices on Short-Term Climate Fluctuations

Real Climate has been speaking with two voices on how to compare observations of climate with models. Last August they asserted that one year's sea ice extent could be compared with models:

A few people have already remarked on some pretty surprising numbers in Arctic sea ice extent this year (the New York Times has also noticed). The minimum extent is usually in early to mid September, but this year, conditions by Aug 9 had already beaten all previous record minima. Given that there is at least a few more weeks of melting to go, it looks like the record set in 2005 will be unequivocally surpassed. It could be interesting to follow especially in light of model predictions discussed previously.

Today, they say that looking at 8 years of temperature records is misguided:

John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year's numbers imply that 'Global Warming has stopped' or that it is 'taking a break' (Uli Kulke, Die Welt). However, as most of our readers will realise, these comparisons are flawed since they basically compare long term climate change to short term weather variability.

So according to Real Climate one year's ice extent data can be compared to climate models, but 8 years of temperature data cannot.

Right. This is why I believe that whatever one's position on climate change, everyone should agree that rigorous forecast verification is needed.

Post Script. I see at Real Climate commenters are already calling me a "skeptic" for even discussing forecast verification. For the record, I accept the consensus of the IPCC WGI. If asking questions about forecast verification is to be taboo, then climate science is in worse shape than I thought.

January 10, 2008

Verification of 1990 IPCC Temperature Predictions

1990 IPCC verification.png

I continue to receive good suggestions and positive feedback on the verification exercise that I have been playing around with this week. Several readers have suggested that a longer view might be more appropriate. So I took a look at the IPCC's First Assessment Report, which had been sitting on my shelf, and tried to find its temperature prediction starting in 1990. I actually found what I was looking for in a follow-up document: Climate Change 1992: The Supplementary Report to the IPCC Scientific Assessment (not online, as far as I am aware).

In conducting this type of forecast verification, one of the first things to do is to specify which emissions scenario most closely approximated what has actually happened since 1990. As we have discussed here before, emissions have been occurring at the high end of the various scenarios used by the IPCC. So in this case I have used IS92e or IS92f (the differences are too small to be relevant to this analysis), which are discussed beginning on p. 69.

With the relevant emissions scenario identified, I then went to the section projecting future temperatures, and found what I needed in Figure Ax.3 on p. 174. From that graph I took the 100-year temperature change and converted it into an annual rate. At the time the IPCC presented estimates for climate sensitivities of 1.5 degrees, 2.5 degrees, and 4.5 degrees, with 2.5 degrees identified as a "best estimate." In the figure above I have estimated the 1.5 and 4.5 degree values based on ratios taken from graph Ax.2, but I make no claim that they are precise. My understanding is that climate scientists today think that climate sensitivity is around 3.0 degrees, so if one were to re-do the 1990 prediction with a climate sensitivity of 3.0, the resulting curve would be a bit above the 2.5 degree curve shown above.
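The arithmetic described above can be sketched in a few lines. The 2.8 degC total change below is a hypothetical read-off for the 2.5-degree-sensitivity curve, not the report's exact figure, and the simple ratio rescaling to a 3.0-degree sensitivity is my own rough assumption.

```python
# Sketch of converting a 100-year projected change to an annual rate.
# The 2.8 degC input is a hypothetical read-off, not the report's figure.
def annual_rate(total_change_degC, years=100):
    """Convert a projected total temperature change into a linear annual rate."""
    return total_change_degC / years

best_estimate = annual_rate(2.8)          # degC/yr for the 2.5-degree curve
rescaled = best_estimate * (3.0 / 2.5)    # crude ratio rescaling to 3.0 degrees
print(best_estimate, rescaled)
```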

On the graph you will also see the now familiar temperature records from two satellite and two surface analyses. It seems pretty clear that the IPCC in 1990 over-forecast temperature increases, and this is confirmed by the most recent IPCC report (Figure TS.26), so it is not surprising.

I'll move on to the predictions of the Second Assessment Report in a follow up.

January 09, 2008

Forecast Verification for Climate Science, Part 3

By popular demand, here is a graph showing the two main analyses of global temperatures from satellite, from RSS and UAH, as well as the two main analyses of global temperatures from the surface record, UKMET and NASA, plotted with the temperature predictions reported in IPCC AR4, as described in Part 1 of this series.

surf-sat vs. IPCC.png

Some things to note:

1) I have not graphed observational uncertainties, but I'd guess that they are about +/-0.05 degrees (and someone please correct me if this is wildly off), and their inclusion would not alter the discussion here.

2) A feast for cherrypickers. One can arrive at whatever conclusion one wants with respect to the IPCC predictions. Want the temperature record to be consistent with IPCC? OK, then you like NASA. How about inconsistent? Well, then you are a fan of RSS. On the fence? Well, UAH and UKMET serve that purpose pretty well.

3) Something fishy is going on. The IPCC and CCSP recently argued that the surface and satellite records are reconciled. This might be the case from the standpoint of long-term linear trends. But the data here suggest that there is some work left to do. The UAH and NASA curves are remarkably consistent, but RSS dramatically contradicts both. UKMET shows 2007 as the coolest year since 2001, whereas NASA has 2007 as the second warmest. In particular, estimates for 2007 seem to diverge in unique ways. It'd be nice to see the scientific community explain all of this.

4) All show continued warming since 2000!

5) From the standpoint of forecast verification, which is where all of this began, the climate community really needs to construct a verification dataset for global temperature and other variables that will be (a) the focus of predictions, and (b) the ground truth against which those predictions will be verified.

Absent an ability to rigorously evaluate forecasts, in the presence of multiple valid approaches to observational data we run the risk of falling into all sorts of cognitive traps -- such as availability bias and confirmation bias. So here is a plea to the climate community: when you say that you are predicting something like global temperature or sea ice extent or hurricanes -- tell us in specific detail what those variables are, who is measuring them, and where to look in the future to verify the predictions. If weather forecasters, stock brokers, and gamblers can do it, then you can too.
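A verification of the kind urged here could be as simple as the following sketch: fit a least-squares trend to the agreed observational series and compare the slope with the predicted rate. Both the "observed" anomalies and the predicted rate below are hypothetical numbers, not any real dataset or IPCC value.

```python
import numpy as np

# Minimal verification sketch; all numbers are hypothetical.
years = np.arange(2000, 2008)
observed = np.array([0.08, 0.11, 0.19, 0.17, 0.13, 0.21, 0.15, 0.10])
predicted_rate = 0.02  # degC/yr, hypothetical prediction to verify against

obs_rate = np.polyfit(years, observed, 1)[0]  # slope of the least-squares fit
print(f"observed trend {obs_rate:+.4f} degC/yr vs predicted {predicted_rate:+.4f} degC/yr")
```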

January 08, 2008

Forecast Verification for Climate Science, Part 2

Yesterday I posted a figure showing how surface temperatures compare with IPCC model predictions. I chose to use the RSS satellite record under the assumption that the recent IPCC and CCSP reports were both correct in their conclusions that the surface and satellite records have been reconciled. It turns out that my reliance on the IPCC and CCSP may have been mistaken.

I received a few comments from people suggesting that I had selectively used the RSS data because it showed different results than other global temperature datasets. My first reaction to this was to wonder how the different datasets could show different results if the IPCC was correct when it stated (PDF):

New analyses of balloon-borne and satellite measurements of lower- and mid-tropospheric temperature show warming rates that are similar to those of the surface temperature record and are consistent within their respective uncertainties, largely reconciling a discrepancy noted in the TAR.

But I decided to check for myself. I went to NASA GISS, downloaded its temperature data, and scaled it to a 1980-1999 mean. I then plotted it on the same scale as the RSS data that I shared yesterday. Here is what the curves look like on the same scale.

RSS v. GISS.png

Well, I'm no climate scientist, but they sure don't look reconciled to me, especially 2007. (Any suggestions on the marked divergence in 2007?)
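For readers who want to reproduce the rescaling step described above: putting two anomaly series on a common baseline just means subtracting each series' mean over the chosen reference period. The data below are a made-up trend, not the actual GISS or RSS series.

```python
import numpy as np

# Rescale an anomaly series to a common reference period (here 1980-1999)
# by subtracting its mean over that period. Data are hypothetical.
def rebaseline(years, anomalies, ref_start=1980, ref_end=1999):
    years = np.asarray(years)
    anomalies = np.asarray(anomalies, dtype=float)
    ref = (years >= ref_start) & (years <= ref_end)
    return anomalies - anomalies[ref].mean()

yrs = np.arange(1979, 2008)
series = 0.015 * (yrs - 1979)       # hypothetical warming trend in degC
adjusted = rebaseline(yrs, series)  # anomalies relative to the 1980-1999 mean
```

Once both series have been run through the same rebaselining, their curves can be compared directly on one plot.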

What does this mean for the comparison with IPCC predictions? I have overlaid the GISS data on the graph I prepared yesterday.

AR4 Verificantion Surf Sat.png

So using the NASA GISS global temperature data for 2000-2007 results in observations that are consistent with the IPCC predictions, but contradict the IPCC's conclusion that the surface and satellite temperature records are reconciled. Using the RSS data results in observations that are (apparently) inconsistent with the IPCC predictions.

I am sure that in conducting such a verification some will indeed favor the dataset that best confirms their desired conclusions. But, it would be ironic indeed to see scientists now abandon RSS after championing it in the CCSP and IPCC reports. So, I'm not sure what to think.

Is it really the case that the surface and satellite records are again at odds? What dataset should be used to verify climate forecasts of the IPCC?

Answers welcomed.

January 07, 2008

Forecast Verification for Climate Science

Last week I asked a question:

What behavior of the climate system could hypothetically be observed over the next 1, 5, 10 years that would be inconsistent with the current consensus on climate change?

We didn’t have much discussion on our blog, perhaps in part due to our ongoing technical difficulties (which I am assured will be cleared up soon). But John Tierney at the New York Times sure received an avalanche of responses, many of which seemed to excoriate him simply for asking the question, and none that really engaged the question.

I did receive a few interesting replies by email from climate scientists. Here is one of the most interesting:

The IPCC reports, both AR4 (see Chapter 10) and TAR, are full of predictions made starting in 2000 for the evolution of surface temperature, precipitation, precipitation intensity, sea ice extent, and on and on. It would be a relatively easy task for someone to begin tracking the evolution of these variables and compare them to the IPCC’s forecasts. I am not aware of anyone actually engaged in this kind of climate forecast verification with respect to the IPCC, but it is worth doing.

So I have decided to take him up on this and present an example of what such a verification might look like. I have heard some claims lately that global warming has stopped, based on temperature trends over the past decade. So global average temperature seems like as good a place as any to provide an example.

I begin with the temperature trends. I have decided to use the satellite record provided by Remote Sensing Systems, mainly because of the easy access to its data. But the choice of satellite versus surface global temperature dataset should not matter, since these have been reconciled according to the IPCC AR4. Here is a look at the satellite data from 1998 through 2007.

RSS TLT 1998-2007 Monthly.png

This dataset starts with the record 1997/1998 ENSO event, which boosted temperatures a good deal. It is interesting to look at, but probably not the best place to start this analysis. A better starting point is 2000, not because of what the climate has done, but because this is the baseline year used for many of the IPCC AR4 predictions.

Before proceeding, a clarification must be made between a prediction and a projection. Some have claimed that the IPCC doesn’t make predictions, it only makes projections across a wide range of emissions scenarios. This is just a fancy way of saying that the IPCC doesn’t predict future emissions. But make no mistake, it does make conditional predictions for each scenario. Enough years have passed for us to be able to say that global emissions have been increasing at the very high end of the family of scenarios used by the IPCC (closest to A1FI for those scoring at home). This means that we can zero in on what the IPCC predicted (yes, predicted) for the A1FI scenario, which has best matched actual emissions.

So how has global temperature changed since 2000? Here is a figure showing the monthly values, indicating that while there has been a decrease in average global temperature of late, the linear trend since 2000 is still positive.

RSS TLT 2000-2007 Monthly.png

But monthly values are noisy, and not comparable with anything produced by the IPCC, so let’s take a look at annual values.

RSS 2000-2007 Annual.png

The annual values result in a curve that looks a bit like an upwards sloping letter M.
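The reduction from noisy monthly values to annual means, and the trend fit behind the "still positive" claim above, can be sketched as follows. The synthetic series (a small positive trend plus a sinusoidal wiggle) is a hypothetical stand-in for the RSS data, not the real record.

```python
import numpy as np

# Reduce monthly anomalies to annual means and fit a linear trend.
# Synthetic stand-in data: small trend plus deterministic pseudo-noise.
months = np.arange(96)                             # 2000-2007, 96 months
monthly = 0.0015 * months + 0.05 * np.sin(months)  # hypothetical anomalies

annual = monthly.reshape(8, 12).mean(axis=1)       # eight annual means
trend_per_month = np.polyfit(months, monthly, 1)[0]
print(f"linear trend: {12 * trend_per_month:+.4f} degC/yr")
```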

The model results produced by the IPCC are not readily available, so I will work from their figures. In the IPCC AR4 report, Figure 10.26 on p. 803 of Chapter 10 of the Working Group I report (here in PDF) provides predictions of future temperature as a function of emissions scenario. The one relevant for my purposes can be found in the bottom row (degrees C above 1980-2000 mean) and second column (A1FI).

I have zoomed in on that figure, and overlaid the RSS temperature trends 2000-2007 which you can see below.

AR4 Verification Example.png

Now a few things to note:

1. The IPCC temperature increase is relative to a 1980 to 2000 mean, whereas the RSS anomalies are off of a 1979 to 1998 mean. I don’t expect the differences to be that important in this analysis, particularly given the blunt approach to the graph, but if someone wants to show otherwise, I’m all ears.

2. It should be expected that the curves are not equal in 2000. The anomaly for 2000 according to RSS is 0.08, hence the red curve begins at that value. Figure 10.26 on p. 803 of Chapter 10 of the Working Group I report actually shows observed temperatures for a few years beyond 2000, and by zooming in on the graph in the lower left hand corner of the figure one can see that 2000 was in fact below the A1B curve.

So it appears that temperature trends since 2000 are not closely following the most relevant prediction of the IPCC. Does this make recent temperature trends inconsistent with the IPCC? I have no idea, and that is not the point of this post. I'll leave it to climate scientists to tell us the significance. I assume that many climate scientists will say that there is no significance to what has happened since 2000, and perhaps emphasize that predictions of global temperature are more certain in the longer term than shorter term. But that is not what the IPCC figure indicates. In any case, 2000-2007 may not be sufficient time for climate scientists to become concerned that their predictions are off, but I’d guess that at some point, if observations don’t match predictions they might be of some concern. Alternatively, if observations square with predictions, then this would add confidence.

Before one dismisses this exercise as an exercise in randomness, it should be observed that in other contexts scientists have associated short-term trends with longer-term predictions. In fact, one need look no further than the record 2007 summer melt in the Arctic, which was way beyond anything predicted by the IPCC, reaching close to 3 million square miles less than the 1978-2000 mean. The summer anomaly was much greater than any of the IPCC predictions on this time scale (which can be seen in IPCC AR4 Chapter 10, Figure 10.13 on p. 771). This led many scientists to claim that, because the observations were inconsistent with the models, there should be heightened concern about climate change. Maybe so. But if one variable can be examined for its significance with respect to long-term projections, then surely others can as well.

What I’d love to see is a place where the IPCC predictions for a whole range of relevant variables are provided in quantitative fashion, and as corresponding observations come in, they can be compared with the predictions. This would allow for rigorous evaluations of both the predictions and the actual uncertainties associated with those predictions. Noted atmospheric scientist Roger Pielke, Sr. (my father, of course) has suggested that three variables be looked at: lower tropospheric warming, atmospheric water vapor content, and oceanic heat content. And I am sure there are many other variables worth looking at.

Forecast evaluations also confer another advantage: they would help to move beyond the incessant arguing about this or that latest research paper and focus on true tests of the fidelity of our ability to forecast future states of the climate system. Making predictions and then comparing them to actual events is central to the scientific method. So everyone in the climate debate, whether skeptical or certain, should welcome a focus on verification of climate forecasts. If the IPCC is indeed settled science, then forecast verifications will do nothing but reinforce that conclusion.

For further reading:

Pielke, Jr., R.A., 2003: The role of models in prediction for decision, Chapter 7, pp. 113-137 in C. Canham and W. Lauenroth (eds.), Understanding Ecosystems: The Role of Quantitative Models in Observations, Synthesis, and Prediction, Princeton University Press, Princeton, N.J. (PDF)

Sarewitz, D., R.A. Pielke, Jr., and R. Byerly, Jr., (eds.) 2000: Prediction: Science, decision making and the future of nature, Island Press, Washington, DC. (link) and final chapter (PDF).

January 01, 2008

Is there any weather inconsistent with the scientific consensus on climate?

Two years ago I asked a question of climate scientists that never received a good answer. Over at the TierneyLab at the New York Times, John Tierney raises the question again:

What behavior of the climate system could hypothetically be observed over the next 1, 5, 10 years that would be inconsistent with the current consensus on climate change? My focus is on extreme events like floods and hurricanes, so please consider those, but consider any other climate metric or phenomena you think important as well for answering this question. Ideally, a response would focus on more than just sea level rise and global average temperature, but if these are the only metrics that are relevant here that too would be very interesting to know.

The answer, it seems, is "nothing would be inconsistent," but I am open to being educated. Climate scientists are especially invited to weigh in in the comments or via email, here or at the TierneyLab.

And a Happy 2008 to all our readers!

December 21, 2007

On the Political Relevance of Scientific Consensus

Senator James Inhofe (R-OK) has released a report in which he has identified some hundreds of scientists who disagree with the IPCC consensus. Yawn. In the comments on Andy Revkin's blog post about the report you can get a sense of why I often claim that arguing about the science of climate change is endlessly entertaining but hardly productive, confirming Andy's assertion that "A lot of us live in intellectual silos."

In 2005 I had an exchange with Naomi Oreskes in Science on the significance of a scientific consensus in climate politics. Here is what I said then (PDF):

IN HER ESSAY "THE SCIENTIFIC CONSENSUS on climate change" (3 Dec. 2004, p. 1686), N. Oreskes asserts that the consensus reflected in the Intergovernmental Panel on Climate Change (IPCC) appears to reflect, well, a consensus. Although Oreskes found unanimity in the 928 articles with key words "global climate change," we should not be surprised if a broader review were to find conclusions at odds with the IPCC consensus, as "consensus" does not mean uniformity of perspective. In the discussion motivated by Oreskes’ Essay, I have seen one claim made that there are more than 11,000 articles on "climate change" in the ISI database and suggestions that about 10% somehow contradict the IPCC consensus position.

But so what? If that number is 1% or 40%, it does not make any difference whatsoever from the standpoint of policy action. Of course, one has to be careful, because people tend to read into the phrase "policy action" a particular course of action that they themselves advocate. But in the IPCC, one can find statements to use in arguing for or against support of the Kyoto Protocol. The same is true for any other specific course of policy action on climate change. The IPCC maintains that its assessments do not advocate any single course of action.

So in addition to arguing about the science of climate change as a proxy for political debate on climate policy, we now can add arguments about the notion of consensus itself. These proxy debates are both a distraction from progress on climate change and a reflection of the tendency of all involved to politicize climate science.

The actions that we take on climate change should be robust to (i) the diversity of scientific perspectives, and thus also to (ii) the diversity of perspectives of the nature of the consensus. A consensus is a measure of a central tendency and, as such, it necessarily has a distribution of perspectives around that central measure (1). On climate change, almost all of this distribution is well within the bounds of legitimate scientific debate and reflected within the full text of the IPCC reports. Our policies should not be optimized to reflect a single measure of the central tendency or, worse yet, caricatures of that measure, but instead they should be robust enough to accommodate the distribution of perspectives around that central measure, thus providing a buffer against the possibility that we might learn more in the future (2).

Center for Science and Technology Policy Research,
University of Colorado, UCB 488, Boulder, CO
80309–0488, USA.

1. D. Bray, H. von Storch, Bull. Am. Meteorol. Soc. 80, 439 (1999).
2. R. Lempert, M. Schlesinger, Clim. Change 45, 387 (2000).

December 19, 2007

Rajendra Pachauri, IPCC, Science and Politics

The current issue of Nature has a lengthy profile of Rajendra Pachauri, its "Newsmaker of the Year." In the profile Dr. Pachauri discusses his personal views on the politics of climate change and his responsibilities as IPCC chair. Here is how he characterizes his own efforts, as quoted in the Nature profile:

We have been so drunk with this desire to produce and consume more and more whatever the cost to the environment that we're on a totally unsustainable path. I am not going to rest easy until I have articulated in every possible forum the need to bring about major structural changes in economic growth and development.

AP Pachauri Gore.jpg

In recent weeks and months, Dr. Pachauri, and other representatives of the IPCC, have certainly not been shy in advocating specific actions on climate change, using their role as IPCC leaders as a pulpit to advance those agendas. For instance, in a recent interview with CNN on the occasion of representing the IPCC at the Nobel Prize ceremony, Dr. Pachauri downplayed the role of geoengineering as a possible response to climate change, suggested that people eat less meat, called for lifestyle changes, suggested that all the needed technologies to deal with climate change are in the marketplace or soon to be commercialized, endorsed the Kyoto Protocol approach, criticized via allusion U.S. non-participation, and defended the right of developing countries to be exempt from limits on future emissions.

Dr. Pachauri has every right to these personal opinions, but each of the actions called for above is contested by some thoughtful people who believe that climate change is a problem requiring action, and who accept the science as reported by the IPCC. These policies are not advocated by the IPCC because the formal mandate of the IPCC is to be "policy neutral." But with its recent higher profile, it seems that the IPCC leadership believes that it can flout this stance with impunity. The Nature profile discusses this issue:

The IPCC's mandate is to be 'neutral with respect to policy' — to set out the options and let policy-makers decide how to act. The reports themselves reflect this. Every word is checked and double-checked by scientists, reviewers and then government representatives — "sanitized", as Pachauri puts it. But Pachauri is the face of the IPCC, and he often can't resist speaking out, despite a few "raps on the knuckles" for his comments. He insists that he always makes it clear he is speaking on his own behalf and not for the IPCC. "It's one thing to make sure that our reports are sanitized. It's another for me as an individual to talk about policies that might work. I feel I have responsibility far beyond being a spokesman for the IPCC. If I feel there are certain actions that can help us meet this challenge, I feel I should articulate them."

"I think Patchy needs to be careful," says Bert Metz, a senior researcher at the Netherlands Environmental Assessment Agency in Bilthoven, who is one of the co-chairs of the IPCC's working group on greenhouse-gas mitigation. "One of the things about the IPCC is that it lays down the facts. If you start mixing [that] with your own views that's not very wise. But he gets away with it because of his charm." Steve Rayner, director of the James Martin Institute at the University of Oxford, UK, and a senior author with the same working group, feels that Pachauri's personal statements place too much stress on lifestyles and not enough on technologies. But he also concedes that a certain amount of outspokenness is an essential part of the job. "I don't think you can provide inspirational leadership in an enterprise like this unless you are passionate. That's something Bob [Watson] and Patchy have in common. They are both very passionate about the issue and I think that's appropriate."

In general, those who agree with the political agenda advanced by Dr. Pachauri will see no problem with his advocacy, and those opposed will find it to be problematic. And this is precisely the problem. By using his platform as a scientific advisor to advance a political agenda, Dr. Pachauri risks politicizing the IPCC and turning it (or perceptions of it) into simply another advocacy group on climate change, threatening its legitimacy and ultimately, its ability to serve as a trusted arbiter of science.

On this point reasonable people will disagree. However, before you decide how you feel about this subject, consider how you would feel if the head of the International Atomic Energy Agency responsible for evaluating nuclear weapons programs were an outspoken advocate for bombing the very country he was assessing, or if the head of the CIA, with responsibility for bringing intelligence to policy makers, were at the same time waging a public campaign on certain foreign policies directly related to his intelligence responsibilities. For many people the conflation of providing advice and seeking to achieve political ends would seem to be a dangerous mix for both the quality of advice and the quality of decision making.

The IPCC is riding high these days, but as Bert Metz says, they need to be very careful. Saying that your organization is "policy neutral" while behaving quite differently does not seem to be a sustainable practice. Policy makers will need science advice on climate change for a long time. The IPCC politicizes its efforts at some risk.

December 17, 2007

A Second Response from RMS

A few weeks ago I provided a midterm evaluation of the RMS 2006-2010 US hurricane damage prediction. RMS (and specifically Steve Jewson) responded and has subsequently (and graciously) sent in a further response to a question that I posed:

Does RMS stand by its spring 2006 forecast that the period 2006-2010 would see total insured losses 40% above the historical average?

The RMS response appears below, and I'll respond in the comments:

Yes, we do stand by that forecast, although I should point out that we update the forecast every year, so the 2005 forecast (for 2006-2010) is now 2 years out of date. Apart from questions of forecast accuracy, there's no particular reason for any of our users to use the 2005 forecast at this point (that would be like using a weather forecast from last week). It is, of course, important to understand the correct mathematical interpretation of the forecast. In your original post you interpreted the forecast incorrectly in a couple of ways. Over the last 2-3 years we've issued this forecast to hundreds of insurance companies, and discussed it with dozens of scientists around the world, and none of them have misinterpreted it, so I don't think our communication of the intended meaning of the forecast is unclear. However, some explanation is required and I realise that you probably haven't had the benefit of hearing one of the many presentations we've given on this subject. The two things that need clarifying are: 1) This forecast is a best estimate of the mean of a very wide distribution of possible losses. Because of this no-one should expect to be able to verify or falsify the forecast in a short period of time.

This is a typical property of forecasts in situations with high levels of uncertainty. I think it's pretty well understood by the users of the forecast.

One curious property of the loss distribution is that it is very skewed. As a result the real losses would be expected to fall below the mean in most years. This is compensated for in the average by occasional years with very high losses.

In fact the forecast that we give to the insurance industry is a completely probabilistic forecast, that estimates the entire distribution of possible losses, but it's a bit difficult to put that kind of information into a press release, or on a blog.

2) Your conditional interpretation of the forecast is not mathematically correct. Neither RMS, nor our clients, expect the losses to increase in 2008-2010 in the way you suggest just because they were low in 2006-2007. I can't think of any reason why that would be the case. To get the (roughly) correct interpretation for 2008-2010 you have to multiply the original 5 year mean values by 0.6. That's what the users of our forecast do when they want that number.

I hope that clarifies the issues a bit.
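The two clarifications in the RMS reply are easy to illustrate numerically: a highly skewed loss distribution puts most years below the mean, and a five-year mean forecast rescales to the remaining three years by a factor of 3/5 = 0.6. The sketch below is my own toy model, assuming a lognormal loss distribution with an arbitrary shape parameter; only the 40% uplift over a roughly $5.2 billion annual baseline comes from the surrounding posts, and nothing here reflects RMS's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model only, not RMS's actual forecast distribution. Assumed:
# annual insured losses are lognormal with mean 1.4 * $5.2B = $7.3B
# (the 40%-above-baseline forecast) and shape parameter sigma = 1.5.
mean_loss = 1.4 * 5.2                       # forecast mean, $B per year
sigma = 1.5                                 # assumed skewness parameter
mu = np.log(mean_loss) - sigma**2 / 2       # sets E[X] = mean_loss exactly

losses = rng.lognormal(mu, sigma, size=100_000)   # simulated "years"

print(f"mean loss:   {losses.mean():.1f}")        # ~7.3, by construction
print(f"median loss: {np.median(losses):.1f}")    # far below the mean
print(f"years below the mean: {(losses < mean_loss).mean():.0%}")

# Point 2: the (rough) reinterpretation RMS describes. A 5-year mean
# forecast, applied to the remaining 3 years, scales by 0.6.
five_year_mean_total = 5 * mean_loss
remaining_3yr = 0.6 * five_year_mean_total        # = 3 * mean_loss
print(f"remaining 3-year expectation: {remaining_3yr:.1f}")
```

Under these assumptions roughly three quarters of simulated years come in below the mean, which is the sense in which two low-loss years do not by themselves falsify a mean forecast.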

December 07, 2007

RMS Response to Forecast Evaluation

Robert Muir-Wood of RMS has graciously provided for posting a response to the thoughts on forecast verification that I posted earlier this week. Here are his comments:

Scientifically it is of course not possible to draw any conclusion from the occurrence of two years without hurricane losses in the US, in particular following two years with the highest level of hurricane losses ever recorded and the highest ever number of severe hurricanes making landfall in a two year period. Even including 2006 and 2007, average annualized losses for the past five years are significantly higher than the long term historical average (and maybe you should also show this five year average on your plot?)

The basis for catastrophe loss modeling is that one can separate out the question of activity rate from the question as to the magnitude of losses that will be generated by the occurrence of hurricane events. In generating average annualized losses we need to explore the full 'virtual spectrum' of all the possible events that can occur. The question about current activity rates is a difficult one, which is why we continue to involve some of the leading hurricane climatologists, and a very wide range of forecasting methodologies, in our annual hurricane activity rate update procedure. In October 2007 an independent expert panel concluded that activity rates are forecasted to remain elevated for the next five years. While this perspective was announced and articulated by RMS, we did not originate it. Each year we undertake this exercise, we ensure that the forecasting models used to estimate activity over the next five years also reflect any additional learning from the forecasting of previous years, including the low activity experienced in 2006 and 2007. We don't 'declare success' that the activity rate estimate that has emerged from this procedure over the past three years (using different forecast models and different climatologists) has scarcely changed, but the consistency in the three 5 year projections is interesting nonetheless.

You may also be surprised to learn that our five-year forward-looking perspective on hurricane risk does not inevitably produce higher losses than all other models, which use the extrapolation of the simple long-term average to estimate future activity. This is as shown in a comparison published in a report prepared by the Florida Commission on Hurricane Loss Projection Methodology for the Florida House of Representatives (see the Table 1 on page 25 of the report, which can be downloaded from here:

Robert Muir-Wood

December 06, 2007

Revisiting The 2006-2010 RMS Hurricane Damage Prediction

In the spring of 2006, a company called Risk Management Solutions (RMS) issued a five year forecast of hurricane activity (for 2006-2010) predicting U.S. insured losses to be 40% higher than average. RMS is an important company because their loss models are used by insurance companies to set the rates charged to homeowners, by reinsurance companies to set the rates they charge to insurers, by ratings agencies to evaluate risks, and by others.

We are now two years into the RMS forecast period and can thus say something preliminary about their forecast based on actual hurricane damage from 2006 and 2007, which was minimal. In short, the forecast doesn't look too good. For 2006 and 2007, the following figure shows average annual insured historical losses (for 2005 and earlier) in blue (based on Pielke et al. 2008, adjusted up by 4% from 2006 to 2007 to account for changing exposure), the RMS prediction of losses 40% above that average in pink, and the actual losses in red.

[Figure: RMS Verification.png, comparing the historical average (blue), the RMS prediction (pink), and actual 2006-2007 losses (red)]

The RMS prediction obviously did not improve upon a naive forecast of average losses in either year.

What are the chances for the 5-year forecast yet to verify?

Average U.S. insured losses according to Pielke et al. (2008) are about $5.2 billion per year. Over 5 years this is $26 billion, and 40% higher than this is about $36 billion. A $36 billion insured loss corresponds to about $72 billion in total damage, and $26 billion insured to about $52 billion. For the RMS forecast to do better than the naive baseline of Pielke et al. (2008), total damage in 2008-2010 will have to exceed $62 billion ($31 billion insured). That is, losses higher than $62B are closer to the RMS forecast than to the naive baseline.

The NHC official estimate for Katrina is $81 billion. So for the 2006-2010 RMS forecast to verify, close to another Katrina-like event will have to occur in the next 3 years, or several large events. This is of course possible, but I doubt that there is a hurricane expert out there willing to put forward a combination of event probability and loss magnitude that leads to an expected $62 billion total loss over the next 3 years. Consider that a 50% chance of $124 billion in losses yields an expected $62 billion. Is there any scientific basis to expect a 50% chance of $124 billion in losses? Or perhaps a 100% chance of $62 billion in total losses? Anyone wanting to make claims of this sort, please let us know!

From Pielke et al. (2008), the annual chance of a >$10B event (i.e., $5B insured) during 1900-2005 was about 25%, and the annual chance of a >$50 billion event ($25 billion insured) was just under 5%. There were 7 unique three-year periods with >$62B (>$31B insured) in total losses, or about a 7% chance. So the RMS prediction of 40% higher-than-average losses for 2006-2010 has about a 7% chance of being more accurate than the naive baseline. It could happen, of course, but I wouldn't bet on it without good odds!
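The break-even arithmetic in the paragraphs above can be reproduced in a few lines. All figures are in billions of dollars; the 2:1 ratio of total to insured losses is the post's own rule of thumb, not a precise industry constant.

```python
# Back-of-the-envelope check of the numbers above (all figures in $B).
baseline_annual_insured = 5.2                    # Pielke et al. (2008) average
naive_5yr_insured = 5 * baseline_annual_insured  # 26
rms_5yr_insured = 1.4 * naive_5yr_insured        # 36.4 (40% above baseline)

# The post treats total damage as roughly twice insured losses.
naive_5yr_total = 2 * naive_5yr_insured          # 52
rms_5yr_total = 2 * rms_5yr_insured              # 72.8 (rounded to 72 above)

# Break-even point: totals above this are closer to the RMS forecast
# than to the naive baseline.
breakeven_total = (naive_5yr_total + rms_5yr_total) / 2   # 62.4
print(f"break-even 5-year total damage: {breakeven_total:.0f}")
```

Since 2006-2007 losses were minimal, essentially all of that $62 billion would have to fall in 2008-2010 for the forecast to beat the baseline.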

So what has RMS done in the face of evidence that its first 5-year forecast was not so accurate? Well, they have declared success and issued another 5-year forecast of 40% higher losses, for the period 2008-2012.

Risk Management Solutions (RMS) has confirmed its modeled hurricane activity rates for 2008 to 2012 following an elicitation with a group of the world's leading hurricane researchers. . . . The current activity rates lead to estimates of average annual insured losses that will be 40% higher than those predicted by the long-term mean of hurricane activity for the Gulf Coast, Florida, and the Southeast, and 25-30% higher for the Mid-Atlantic and Northeast coastal regions.

For further reading:

Pielke, R. A., Jr., Gratz, J., Landsea, C. W., Collins, D., Saunders, M. A., and Musulin, R. (2008). "Normalized Hurricane Damages in the United States: 1900-2005." Natural Hazards Review, in press, February. (PDF, prepublication version)

August 17, 2007

New Publication

Pielke, Jr., R. A., 2007. Mistreatment of the economic impacts of extreme events in the Stern Review Report on the Economics of Climate Change, in press, corrected proof.

Full text here in PDF.

May 10, 2007

Reorienting U.S. Climate Science Policies

Last week the House Committee on Science and Technology held an important hearing on the future direction of climate research in the United States (PDF).

The major scientific debate is settled. Climate change is occurring. It is impacting our nation and the rest of the world and will continue to impact us into the future. The USGCRP should move beyond an emphasis on addressing uncertainties and refining climate science. In addition the Program needs to provide information that supports action to reduce vulnerability to climate and other global changes and facilitates the development of adaptation and mitigation strategies that can be applied here in the U.S. and in other vulnerable locations throughout the world.

This refocusing of climate research is timely and worthwhile. Kudos to the S&T Committee.

For a number of years, Congressman Mark Udall (D-CO) has led efforts to make the nation's climate research enterprise more responsive to the needs of decision makers (joined by Bob Inglis (R-SC)). Mr. Udall explained the reasons for rethinking climate science as follows:

The evolution of global science and the global change issue sparked the need to make changes to the 1978 National Climate Program Act, and gave us the Global Change Research Act of 1990. It is now time for another adjustment to alter the focus of the program governed by this law.

The debate, about whether climate change is occurring and about whether human activity has contributed to it, is over. As our population, economy, and infrastructure have grown, we have put more pressure on the natural resources we all depend upon. Each year, fires, droughts, hurricanes, and other natural events remind us of our vulnerability to extreme weather and climate changes. The human and economic cost of these events is very high. With better planning and implementation of adaptation strategies these costs can be reduced.

For all of these reasons, we need the USGCRP to produce more information that is readily useable by decision makers and resource managers in government and in the private sector. People throughout this country and in the rest of the world need information they can use to develop response, adaptation, and mitigation strategies to make our communities, our businesses, and our nation more resilient and less vulnerable to the changes that are inevitable.

We must also move aggressively to reduce greenhouse gas emissions if we are to avoid future increases in surface temperature that will trigger severe impacts that we cannot overcome with adaptation strategies. We need economic and technical information as well as information about system responses and climate responses to different concentrations of greenhouse gases in the atmosphere. The USGCRP should be the vehicle for providing this information.

The hearing charter (PDF) is worth reading in full.

April 23, 2007

What does Consensus Mean for IPCC WGIII?

The IPCC assessment process is widely referred to as reflecting a consensus of the scientific community. An AP news story reports on a leaked copy of the forthcoming Working Group III report on mitigation.

"Governments, businesses and individuals all need to be pulling in the same direction," said British researcher Rachel Warren, one of the report's authors.

For one thing, the governments of such major emitters as the United States, China and India will have to join the Kyoto Protocol countries of Europe and Japan in imposing cutbacks in carbon dioxide and other heat-trapping gases emitted by industry, power plants and other sources.

The Bush administration rejected the protocol's mandatory cuts, contending they would slow U.S. economic growth too much. China and other poorer developing countries were exempted from the 1997 pact, but most expected growth in greenhouse emissions will come from the developing world.

The draft report from the Intergovernmental Panel on Climate Change (IPCC), whose final version is to be issued in Bangkok on May 4, says emissions can be cut below current levels if the world shifts away from carbon-heavy fuels like coal, embraces energy efficiency and significantly reduces deforestation.

"The opportunities, the technology are there and now it's a case of encouraging the increased use of these technologies," said International Energy Agency analyst Ralph Sims, another of the 33 scientists who drafted the report.

As we've often discussed here, human-caused climate change is a serious problem requiring attention to both mitigation and adaptation. While I can make sense of a consensus among Working Group I scientists on causes and consequences of climate change, and even a consensus among Working Group II on impacts, how should we interpret a "consensus" among 33 authors recommending specific political actions? All of the movement toward the "democratization of science" and "stakeholder involvement" and "public participation" that characterizes science and technology issues ranging from GMOs to nanotechnology to nuclear waste disposal seems oddly absent in the climate issue in favor of a far more technocratic model of decision making. Is climate change somehow different?

April 18, 2007

Some Views of IPCC WGII Contributors That You Won't Read About in the News

I was surprised to read in E&E News today a news story on yesterday's hearing held by the House Science Committee suggesting that the take-home message was that adaptation would be difficult, hence mitigation should be preferred (for subscribers here is the full story). My reading of the written testimony suggested a very different message, and not one I've seen in the media. Below are some relevant excerpts from IPCC WG II authors who testified yesterday (emphasis added). I know both witnesses and respect their views.

Roger Pulwarty (PDF)

Climate is one factor among many that produce changes in our environment. Demographic, socio-economic and technological changes may play a more important role in most time horizons and regions. In the 2050s, differences in the population projections of the four scenarios contained in the IPCC Special Report on Emission Scenarios show that population size could have a greater impact on people living in water-stressed river basins (defined as basins with per-capita water resources of less than 1000 m3/year) than differences in emissions scenarios. As the number of people and attendant demands in already stressed river basins increase, even small changes in natural or anthropogenic climate can trigger large impacts on water resources.

Adaptation is unavoidable because climate is always varying even if changes in variability are amplified or dampened by anthropogenic warming. In the near term, adaptation will be necessary to meet the challenge of impacts to which we are already committed. There are significant barriers to implementing adaptation in complex settings. These barriers include both the inability of natural systems to adapt at the rate and magnitude of climate change, as well as technological, financial, cognitive and behavioral, social and cultural constraints. There are also significant knowledge gaps for adaptation, as well as impediments to flows of knowledge and information relevant for decision makers. In addition, the scale at which reliable information is produced (i.e. global) does not always match with what is needed for adaptation decisions (i.e. watershed and local). New planning processes are attempting to overcome these barriers at local, regional and national levels in both developing and developed countries.

Shardul Agrawala (PDF)

The costs of both mitigation and adaptation are predominantly local and near term. Meanwhile, the climate related benefits of mitigation are predominantly global and long-term, but not immediate. Owing to lag times in the climate system, the benefits of current mitigation efforts will hardly be noticeable for several decades. The benefits of adaptation are more immediate, but primarily local, and over the short to medium term.

Given these differences between mitigation and adaptation, climate policy is not about making a choice between adapting to and mitigating climate change. Even the most stringent mitigation efforts cannot avoid further impacts of climate change in the next few decades, which makes adaptation essential, particularly in addressing near term impacts. On the other hand, unmitigated climate change would, in the long term exceed the capacity of natural, managed, and human systems to adapt.

April 17, 2007

Laurens Bouwer on IPCC WG II on Disasters

In the comments, Laurens Bouwer, of the Institute for Environmental Studies, Vrije Universiteit, Amsterdam, who served as an expert reviewer for the IPCC WGII report, provides the following perspective (Thanks Laurens!):

Thanks Roger, for this discussion. It clearly points to the fact that the IPCC has not done enough to make an unambiguous statement on the attribution of disaster losses in their Working Group 2 Summary for Policymakers (SPM). This now leaves room for speculation based on the individual statements and graphs from underlying chapters in the report, in particular Figure TS-15, Chapters 1, 3 and 7, that all have substantial paragraphs on the topic.

As reviewer for WG2 I have repeatedly (3 times) asked to put a clear statement in the SPM that is in line with the general literature, and underlying WG2 chapters. In my view, WG2 has not succeeded in adequately quoting and discussing all relevant recent papers that have come out on this topic -- see above-mentioned chapters.

Initial drafts of the SPM had relatively nuanced statements such as:

Global economic losses from weather-related disasters have risen substantially since the 1970s. During the same period, global temperatures have risen and the magnitude of some extremes, such as the intensity of tropical cyclones, has increased. However, because of increases in exposed values ..., the contribution of these weather-related trends to increased losses is at present not known.

For unknown reasons, this statement (which seems to implicitly acknowledge Roger's and the May 2006 workshop conclusion that societal factors dominate) was dropped from the final SPM. Now the SPM has no statement on the attribution of disaster losses, and we do not know what is the 'consensus' here.

January 06, 2007

Who Said This? No Cheating!

One day, anthropologists, sociologists, and maybe even psychoanalysts will look back on the early twenty-first-century debate on climate change with incredulity and bafflement. Consider the following statement as a weekend pop quiz – no Googling if you want to play along!

I worry a little bit about what you might call the Tyranny of the IPCC. . . That gives me some slight willies. . . Sure, IPCC is confident about the existence and cause of recent warming . . . But there are other areas -- mainly around the effects of climate change -- where the IPCC says, in effect, "we don't know for sure yet." Does that mean all respectable people must stay silent about those effects until the IPCC ratifies a consensus conclusion? Yes, we have to leave science to the scientists. But science is not a priesthood that can or should impose quietude on the rest of us. Our informed gut feelings about how things will turn out are legitimate. People make statements beyond what's strictly supported by the peer-reviewed evidence all the time.

Was this statement made by:

(a) A Republican U.S. Senator from a Midwestern state with a panhandle in a Senate floor speech
(b) A scientist formerly associated with the IPCC responding to being labeled a "skeptic" by his peers
(c) An environmentalist writer defending Al Gore’s scientifically unsupportable statements
(d) A prominent NASA scientist defending his work projecting rapid future sea level rise
(e) A conservative blogger explaining why the notion of scientific consensus is really cover for a political agenda

Click through for the answer (click on Comments now to reply before reading the answer):

The answer is (c). The author is Dave Roberts at Grist magazine. He writes these views over at the Gristmill blog. To best serve the pop quiz we did cut out some of what Dave wrote to preserve ambiguity, so please do go and read the whole thing. We have every respect for Dave as a passionate advocate for causes that he believes in (check him out on TWC, also a music recommendation for Dave;-). But at the same time his willingness to forgive departures from scientific standards in support of causes and people that he believes in makes him no different from his opponents who do the exact same thing.

In the comments on Dave’s post climate scientist Andrew Dessler tries to gently make this exact same point:

I am very leery letting Al Gore or anyone "supplement" the IPCC. If we let Al Gore inject his scientific expertise, then why shouldn't we let James Inhofe also make pronouncements on the science. Gore and Inhofe are both advocates, and their interpretation of the science clearly reflects their preferred policy choices. I would therefore argue that science should be left to the professionals. I know that sounds elitist and I'll probably get flamed for it, but so be it.

This pop quiz should be interpreted as a lesson in the politicization of science. It is very easy to hold different standards for representations of science as a function of different political or policy commitments. Some, like Prof. Dessler will say that the antidote to this is to focus on getting the best scientific assessments possible. Others, like me, recognize that scientists who produce assessments are people with values and political agendas. So I argue that the only way to move beyond this situation is to of course seek the best science but to also discuss policy options explicitly, rather than orienting the climate debate solely with respect to science. Science is comfortable, and allows some a convenient excuse not to discuss policy (or worse a way to smuggle politics under the cover of "science"). But we should remember that we talk, debate, and argue about climate change because it matters, and because the decisions that we take matter as well. The answer to this is not to pretend that science can be discussed in a vacuum, or to suggest that politicians are legitimate voices on where future science is going.

The answer lies in explicitly discussing policy – what should we do, when, at what cost, with what effects, etc.? Everything else is a distraction.

December 18, 2006

Misrepresenting Literature on Hurricanes and Climate Change

Greg Holland and Peter Webster have a new paper accepted on the statistics of Atlantic hurricanes. While there are many interesting questions that might be raised about the data and statistics in the paper, here I comment on the paper’s treatment of the existing literature, some of which involves work I have contributed to. In this instance I find their characterization of the literature to grossly misrepresent what the existing research actually says. I have shared my comments with Drs. Holland and Webster, to which I received the following reaction from Greg Holland: "We shall not be modifying the paper as a result of your comments."

Below I present their original text and my comments. We think that readers can judge for themselves whether a mischaracterization of the literature has occurred. I promised Peter Webster that I wouldn’t speculate on their motivations, and so I’ll stick to the facts in what I present below. I do know that when scientists misrepresent each other's work, it is likely to stymie the advancement of knowledge in the community, and thus should be of general concern. When such misrepresentations are missed in the peer review process this also should raise some concerns. In this case I find the misrepresentations obvious to see and egregious, occurring in just about every sentence in the relevant paragraph.

Do note that the comments below do not get into their statistical analysis, which is worth considering separately on its own merits, but which goes beyond the focus of this post. Both Drs. Holland and Webster are widely published and respected scientists with admirable track records. They are welcome to respond here if they’d like. And I do note that different people can interpret the literature in different ways, so the below is my reading only.

Holland and Webster’s new paper can be found here in PDF and the text I have excerpted below in bold comes from their pp. 5-6. My comments are interleaved within their text.

Questions have been raised over the quality of the NATL data even for such a broad brush accounting. For example, a recent study by Landsea et al (2006) claimed that long-term trends in tropical cyclone numbers and characteristics cannot be determined because of the poor quality of the data base in the NATL even after the incorporation of satellite data into the data base. Landsea et al. also state unequivocally that there is no trend in any tropical storm characteristics (frequency or intensity) after 1960, despite this being established in earlier papers by Emanuel (2005) and Webster et al. (2005), and more recently by Hoyos et al. (2006).
Here is what I read in Landsea et al. (2006) (PDF): "There may indeed be real trends in tropical cyclone intensity . . ." Holland and Webster report the opposite of what Landsea et al. (2006) actually says. Landsea et al. (2006) state that they do not believe that the data record is of sufficient quality to definitively detect trends. They do not say that there are no trends. Holland and Webster ascribe a claim to Landsea et al. that they do not make.

Figure 1 shows a strong statistically significant trend since the 1970s similar to that found by Hoyos et al. (2006) and Curry et al. (2006). The overall Landsea et al. analysis is curious and is based on the premise that the data must be wrong because the models suggest a much smaller change in hurricane characteristics relative to the observed SST warming (e.g., Henderson-Sellers et al 1998).

Here is what Landsea et al. (2006) actually say: "Theoretical considerations based on sea surface temperature increases suggest an increase of ~4% in maximum sustained surface wind per degree Celsius (4, 5). But such trends are very likely to be much smaller (or even negligible) than those found in the recent studies (1-3)." Landsea et al. (2006) are reporting a finding accepted in the community. Indeed, the recent WMO statement (written and signed by Greg Holland) states, "The more relevant question is how large a change: a relatively small one several decades into the future or large changes occurring today? Currently published theory and numerical modeling results suggest the former, which is inconsistent with the observational studies of Emanuel (2005) and Webster et al. (2005) by a factor of 5 to 8 (for the Emanuel study)." Holland and Webster do not cite the WMO statement.

In contrast, Michaels, Knappenberger and Landsea (2005) argue the opposite, that the models must be wrong because they do not agree with the data. We shall show later that there are factors not included in the models that may explain some of the differences between model and observed trends.

Michaels et al. (2005) do not say that "the models must be wrong because they do not agree with the data." They say that if you run the models with different inputs you get different results. They write (PDF), "when [Knutson and Tuleya’s model is] driven by real-world observations rather than unrealistically parameterized and constrained model conditions, the prospects for a detectable increase in hurricane strength in coming decades are reduced to the noise level of the data." Michaels et al. are not comparing data with models, but looking at modeled output using different inputs.

Further, notably absent is any mention by Holland and Webster of relevant work that discusses the relationship of models, theory, and observations and that includes Landsea as an author (which seems to be the focus of this paragraph). In particular, the following paper discusses this subject explicitly:

Pielke, Jr., R. A., C.W. Landsea, M. Mayfield, J. Laver, R. Pasch, 2006. Reply to Hurricanes and Global Warming Potential Linkages and Consequences, Bulletin of the American Meteorological Society, Vol. 87, pp. 628-631. (PDF)

In particular, Pielke et al. (2006) responds to a statement in a related paper (Anthes et al., with Greg Holland among the authors, PDF) that says that there is "broad consistency between observations, models, and theory." This statement is contradicted by Pielke et al. (2006) and WMO (2006), the latter of which Holland actually signed.

Of greater concern is that the conclusions in the Landsea et al. paper are at odds with several previous publications that include the same authors (e.g. Owens and Landsea 2003, Landsea et al. 1999), without introducing any additional evidence. These papers state clearly that the author’s considered that the period of reliable and accurate NATL records commenced in 1944 with the implementation of aircraft reconnaissance.

I coauthored Landsea et al. (1999) (PDF) and in that paper there are indeed statements on concerns about post-1944 hurricane data (e.g., at p. 94). Further, Landsea et al. (2006) cite a range of post-1999 studies acknowledging new uncertainties in data and methodologies, (e.g., C. Velden et al., Bull. Am. Meteorol. Soc., in press.; J. A. Knaff, R. M. Zehr, Weather Forecast., in press). To say that there has been no additional evidence cited by Landsea et al. (2006) (when Holland and Webster’s work is key to that new evidence) is simply misleading and wrong.

The bottom line here is that while this is just one paragraph in one paper, there is perhaps reason to be concerned about the fidelity of the literature, whatever the underlying causes may be. We have documented other shortfalls in the literature on several occasions on this site. To the extent that these data points are representative of broader problems in the climate literature, scientists should redouble their efforts to exert high standards of quality control. For if I can spot these misrepresentations in the literature, then others will as well.

December 15, 2006

Useable Information for Policy

Twenty-two members of Congress have written a letter to the head of the Climate Change Science Program observing that the program is failing to fulfill its mandate under Public Law 101-606 to deliver useable information for policy makers. This is good news.

The letter observes that the Bush Administration has failed to produce an assessment as required by the law, which is supposed to be delivered every four years. This situation is analogous to the behavior of the Clinton Administration, which produced a single assessment in 2000, six years overdue. The assessment produced by the Clinton Administration was prepared within OSTP under the nominal leadership of Al Gore, which – rightly or wrongly – put a partisan tint on the product. Some – on both the right and the left – continue to use the 2000 assessment six years later as a political wedge.

The letter from the members of Congress observes:

. . . the current CCSP [Climate Change Science Program] website acknowledges that the law directs the agencies to "produce information readily useable by policy makers attempting to formulate effective strategies for preventing, mitigating, and adapting to the effects of global change," . . . The failure of the CCSP to produce a National Assessment report within the time frame required by law has made it more difficult for Congress to develop a comprehensive policy response to the challenge of global climate change.

The CCSP is currently producing 20 different assessment reports but according to the program’s previous direction, the CCSP does not engage in discussion of policy options. It is pretty difficult to produce usable information for policy makers without discussing policy options.

Does the Bush Administration want to avoid discussion of policy options on climate change? Yes. Did the Clinton Administration also want to avoid discussion of policy options on climate change? Yes. Has much of the scientific community also wanted to avoid discussion of policy options on climate change? Yes. Sounds like a perfect situation for congressional oversight.

The policy failures of the CCSP have nothing to do with Democrats or Republicans, and everything to do with the structure of scientific advice implemented under the CCSP and its predecessor organization. Why do I say this? Because in 1994 I defended my doctoral dissertation on implementation of the climate science program under Public Law 101-606, and the exact same issues involving "usable information for policy" identified by the current letter from the 22 members of Congress existed at that time as well.

It is good to see Congress finally invoking the language in P.L. 101-606 calling for usable information for policy makers. This is a matter of the effective governance of science in support of decision making, and it should not be dragged into partisan political bickering. The bipartisan letter from 22 members of Congress is a good place to start.

For details see:

Pielke Jr., R. A., 1995: Usable Information for Policy: An Appraisal of the U.S. Global Change Research Program. Policy Sciences, 38, 39-77. (PDF)

Letter from 22 members of Congress to CCSP, courtesy of E&E Daily (PDF)

December 08, 2006

Inside the IPCC's Dead Zone

Climate scientist James Annan has related a tale of angst and suffering as a result of peer reviews that will, in broad terms, sound familiar to most academics. His experience raises a question that I’d like to ask of the folks familiar with the IPCC.

I have no idea what James’ paper is about, except that it argues that very high values of climate sensitivity can be ruled out, which I take it is contrary to the views of some others in the field. This situation leads me to consider several general questions about the IPCC:

How does the IPCC handle information that appears after its deadline for citation of peer-reviewed papers that may contradict literature which appears before that deadline?

Doesn’t this create a potential conflict of interest for contributors to the IPCC who are reviewing papers that appear during the drafting process?

Take hurricanes and climate change, for example. Whatever the IPCC reports next March, it certainly won’t be as current as the recent WMO consensus report, because the IPCC cannot cite literature that appeared after some point early in 2006, while the WMO can. And I'd bet there will be more studies released between now and March. On hurricanes the IPCC may wind up creating confusion by taking the scientific discussion back to early 2006, when in reality much has happened since. Similarly, its discussion of climate sensitivity and other areas could, in principle, suffer from the same lag effects. Now James’ paper was rejected, and for all I know, correctly. But on highly sensitive topics, I find myself agreeing with the AAAS – trust alone is no longer enough.

June 23, 2006

A(nother) Problem with Scientific Assessments

The American Geophysical Union released an assessment report last week titled "Hurricanes and the US Gulf Coast," the result of a "Conference of Experts" held in January 2006. One aspect of the report illustrates why it is so important to have such assessments carefully balanced with participants holding a diversity of legitimate scientific perspectives. When such diversity is not present, it increases the risk of misleading or false science being presented as definitive or settled, which can be particularly problematic for an effort intended to be "a coordinated effort to integrate science into the decision-making processes." In this particular case the AGU has given assessments a black eye. Here are the details:

The AGU Report includes the following bold claim:

There currently is insufficient skill in empirical predictions of the number and intensity of storms in the forthcoming hurricane season. Predictions by statistical methods that are widely distributed also show little skill, being more often wrong than right.

Such seasonal predictions are issued by a number of groups around the world, and are also an official product of the U.S. government’s Climate Prediction Center. If these groups were indeed publishing forecasts with no (or negative) skill, then there would be good reason to ask them to cease immediately and get back to research, lest they mislead the public and decision makers.

As it turns out, the AGU's claim is incorrect or, at a minimum, represents a minority view among the relevant expert community. According to the groups responsible for providing seasonal forecasts of hurricane activity, their products do indeed have skill. [Note: Skill refers to the relative improvement of a forecast over some naïve baseline. For example, if your actively managed mutual fund makes money this year but does not perform better than an unmanaged index fund, then your fund’s manager has shown no skill – no added value beyond what could be done naively.] Consider the following:

1. Tropical Storm Risk, led by Mark Saunders finds that their (and other) forecasts of 2004 and 2005 demonstrated excellent skill according to a number of metrics:

Lea, A. S. and M. A. Saunders, How well forecast were the 2004 and 2005 Atlantic and U.S. hurricane Seasons? in Proceedings of the 27th Conference on Hurricanes and Tropical Meteorology, Monterey, USA, April 24-28 2006. (PDF)

For further details see this paper:

Saunders, M. A. and A. S. Lea, Seasonal prediction of hurricane activity reaching the coast of the United States, Nature, 434, 1005-1008, 2005. (PDF)

2. Phil Klotzbach of Colorado State University, now responsible for issuing the forecasts of the William Gray research team, provides a number of spreadsheets with data showing that their forecasts demonstrate skill:

Seasonal skill excel file

August monthly forecast skill excel file

September monthly forecast skill excel file

October monthly forecast skill excel file

Klotzbach writes in an email: "All three of our monthly forecasts have shown skill with respect to the previous five-year monthly mean of NTC using MSE (mean-squared error as our skill metric). Here are our skills (% value is the % improvement over the previous five-year mean):

August Monthly Forecast: 38%
September Monthly Forecast: 2%
October Monthly Forecast: 33%"
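To make the metric in Klotzbach's email concrete, here is a minimal sketch of an MSE-based skill score: the percentage improvement of the forecast's mean-squared error over a naive baseline such as the previous five-year mean. The numbers below are hypothetical, for illustration only, and are not actual forecast or observed values from any of the groups discussed.

```python
def mse(predictions, observations):
    """Mean-squared error between paired predictions and observations."""
    return sum((p - o) ** 2 for p, o in zip(predictions, observations)) / len(observations)

def skill_percent(forecasts, baselines, observations):
    """Percent improvement of forecast MSE over the naive-baseline MSE.
    Positive means the forecast beats the baseline; <= 0 means no skill."""
    return 100.0 * (1.0 - mse(forecasts, observations) / mse(baselines, observations))

# Hypothetical seasonal activity index values, for illustration only:
observed = [120, 95, 210, 150, 170]   # what actually occurred each season
forecast = [110, 100, 190, 160, 165]  # what the group predicted
five_yr  = [130, 130, 140, 145, 150]  # naive baseline: previous five-year mean

print(f"{skill_percent(forecast, five_yr, observed):.0f}% improvement over baseline")
```

A score of zero or below is the situation the AGU report alleges: the forecast adds nothing beyond the naive baseline. The groups cited above report positive values on this kind of metric.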

3. NOAA’s Chris Landsea provides two figures (not reproduced here) showing NOAA’s seasonal forecast performance.

He writes in an email: "You can see that we do okay in May (4 out of 7 seasons correctly forecasting number of hurricanes for example), but better in August (6 of 8 seasons correct)."

He also provides a link to this peer-reviewed paper:

Owens, B. F., and C. W. Landsea, 2003: Assessing the skill of operational Atlantic seasonal tropical cyclone forecasts, Weather and Forecasting, Vol. 18, No. 1, pp.45-54. (PDF)

From this information, it seems clear that there are strong claims, backed by relevant peer-reviewed literature, in support of the skill of seasonal hurricane forecasts. The AGU statement is therefore misleading, and quite possibly just plain wrong. It certainly is not the community consensus perspective that one might expect to find in an assessment report.

What is going on here? Perhaps the AGU's committee was unaware of this information, which if so would make one wonder about their "expert" committee. Given the distinguished people on their committee, I find this an unlikely explanation. Instead, it may be that the issue of seasonal hurricane forecasting has gotten caught up in the "climate wars." William Gray is the originator of seasonal climate forecasts and has rudely dismissed the notion of human-caused global warming, much less a connection to hurricanes. One of the lead authors of the AGU assessment has been in a public feud with Bill Gray and is a strong advocate of a human role in recent hurricane activity. It is not unreasonable to think that the AGU assessment was being used as a vehicle to advance this battle under the guise of community "consensus." It may be the perception among some that if Bill Gray’s or NOAA’s work on seasonal forecasts, which is based on various natural climatic factors, can be shown to be fundamentally flawed, then this would elevate the importance of alternative explanations.

If this hypothesis is correct, then it would represent a serious misuse of the AGU to advance personal views unrepresentative of the actual community perspective. It would also represent a complete failure of the AGU’s assessment process. Given that there is peer-reviewed literature indicating the skill of seasonal forecasts, and none that I am aware of making the case for no skill, the AGU has given consensus assessments a black eye, and in the process provided incorrect or misleading information to decision makers.

The AGU case may be isolated, but it does raise the question posed by my father and others: how can we know whether scientific assessments faithfully represent the relevant community of experts, rather than a subset with an agenda posing under the guise of consensus? I am aware of no systematic approaches to answering this question. It is a question that needs discussion, because as political, personal, and other issues infuse the scientific enterprise, blind trust in disinterested science and science institutions no longer seems to be enough.
