Prometheus: Prediction and Forecasting Archives

The Helpful Undergraduate: Another Response to James Annan
   in Prediction and Forecasting May 16, 2008

The Politicization of Climate Science
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Science + Politics | Scientific Assessments May 16, 2008

Comparing Distributions of Observations and Predictions: A Response to James Annan
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments May 15, 2008

Lucia Liljegren on Real Climate's Approach to Falsification of IPCC Predictions
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments May 14, 2008

How to Make Two Decades of Cooling Consistent with Warming
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Risk & Uncertainty | Scientific Assessments May 12, 2008

Inconsistent With? One Answer
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting May 12, 2008

Real Climate's Bold Bet
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Risk & Uncertainty May 09, 2008

Teats on a Bull
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Risk & Uncertainty | Science + Politics May 08, 2008

The Consistent-With Chronicles
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting May 02, 2008

Global Cooling Consistent With Global Warming
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting April 30, 2008

Peter Webster on Predicting Tropical Cyclones
   in Author: Pielke Jr., R. | Climate Change | Disasters | Prediction and Forecasting April 16, 2008

Lucia Liljegren on Real Climate Spinmeisters
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting April 11, 2008

Real Climate on My Letter to Nature Geosciences
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting April 10, 2008

Letter to Nature Geoscience
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting April 02, 2008

You Can't Make This Stuff Up
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Science + Politics March 18, 2008

Update on Falsifiability of Climate Predictions
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Risk & Uncertainty March 15, 2008

Climate Model Predictions and Adaptation
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting February 18, 2008

Seasonal Forecasts and the Colorado Winter
   in Author: Pielke Jr., R. | Prediction and Forecasting | Water Policy February 14, 2008

The Consistent-With Game: On Climate Models and the Scientific Method
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Science + Politics February 13, 2008

Guest Comment: Sharon Friedman, USDA Forest Service - Change Changes Everything
   in Author: Others | Climate Change | Environment | Prediction and Forecasting | Science + Politics February 01, 2008

Updated IPCC Forecasts vs. Observations
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 26, 2008

Temperature Trends 1990-2007: Hansen, IPCC, Obs
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 18, 2008

UKMET Short Term Global Temperature Forecast
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 16, 2008

Verification of IPCC Sea Level Rise Forecasts 1990, 1995, 2001
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 15, 2008

James Hansen on One Year's Temperature
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 14, 2008

Updated Chart: IPCC Temperature Verification
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 14, 2008

Pachauri on Recent Climate Trends
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 14, 2008

Verification of IPCC Temperature Forecasts 1990, 1995, 2001, and 2007
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 14, 2008

Real Climate's Two Voices on Short-Term Climate Fluctuations
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 11, 2008

Verification of 1990 IPCC Temperature Predictions
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 10, 2008

Forecast Verification for Climate Science, Part 3
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 09, 2008

Forecast Verification for Climate Science, Part 2
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 08, 2008

Forecast Verification for Climate Science
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting | Scientific Assessments January 07, 2008

A Second Response from RMS
   in Author: Pielke Jr., R. | Disasters | Prediction and Forecasting | Scientific Assessments December 17, 2007

RMS Response to Forecast Evaluation
   in Author: Others | Disasters | Prediction and Forecasting | Scientific Assessments December 07, 2007

Revisiting The 2006-2010 RMS Hurricane Damage Prediction
   in Author: Pielke Jr., R. | Disasters | Prediction and Forecasting | Risk & Uncertainty | Scientific Assessments December 06, 2007

State of Florida Rejects RMS Cat Model Approach
   in Author: Pielke Jr., R. | Disasters | Prediction and Forecasting | Risk & Uncertainty May 11, 2007

Review of Useless Arithmetic
   in Author: Pielke Jr., R. | Prediction and Forecasting May 04, 2007

Now I've Seen Everything
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting March 29, 2007

Cashing In
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting March 29, 2007

Prediction in Science and Policy
   in Author: Pielke Jr., R. | Prediction and Forecasting February 20, 2007

Ryan Meyer in Ogmius
   in Author: Pielke Jr., R. | Climate Change | Prediction and Forecasting December 19, 2006

Limits of Models in Decision
   in Author: Pielke Jr., R. | Prediction and Forecasting October 10, 2006

Prediction and Decision
   in Author: Pielke Jr., R. | Prediction and Forecasting October 02, 2006

May 16, 2008

The Helpful Undergraduate: Another Response to James Annan

In his latest essay on my stupidity, climate modeler James Annan made the helpful suggestion that I consult "a numerate undergraduate to explain it to [me]." So I looked outside my office, where things are quiet on the quad this time of year, but as luck would have it, I did find a young lady named Megan, who just happened to be majoring in mathematics and who agreed to help me overcome my considerable ignorance.

The first thing I had to do was explain to Megan the problem we are looking at. I told her that we had 55 estimates of a particular quantity, with a mean of 0.19 and standard deviation of 0.21. At the same time we had 5 different observations of that same quantity, with a mean of –0.07 and standard deviation of 0.07. I wanted to know how similar or different from each other these two sets of data actually were.

I explained to her that James Annan, a modest, constructive, and respectful colleague of mine who happened to be a climate modeler ("Cool," she said), had explained that the best way to compare these datasets was to look at the normal distribution associated with the data (N(0.19, 0.21)) and plot on that distribution the outlying value from the smaller dataset.


Since the outlying value of the observations fell well within the distribution of the estimates, James told us, the two datasets could not be claimed to be different -- case closed, and anyone saying anything different must be an ignorant climate-denying lunatic.

"Professor Pielke," Megan said, "you are funny. James surely didn't react that way; since he is a climate modeler, he must recognize that there are many ways to look at statistical problems. We even learned that just this year in our intro stats class. Besides, I can't imagine a scientific colleague being so rude! You must have misinterpreted him."

Since Megan was being so helpful in my education, I simply replied that we should stick to the stats. Besides, if she really knew that I was a climate-denying moron, she might not continue to help me.

Megan said, "There is another way to approach this problem. Have you heard of an unpaired t-test for two different samples? (PDF)"

I replied, "Of course not, I am just a political scientist."

Megan said, "We learned in stats this year that such a test is appropriate for comparing two distributions with equal variance to see how similar they are. It is really very easy. In fact you can run these tests online using a simple calculator. Here is one such website that will do all of the work for you, just plug in the numbers."

So we plugged our numbers into the magic website as follows:

Sample 1:

Mean = 0.19
SD = 0.21
N = 55

Sample 2:

Mean = -0.07
SD = 0.07
N = 5

And here is what the magic website reported back:

Unpaired t test results

P value and statistical significance:

The two-tailed P value equals 0.0082

By conventional criteria, this difference is considered to be very statistically significant.

Confidence interval:
The mean of Group One minus Group Two equals -0.2600
95% confidence interval of this difference: From -0.4502 to -0.0698

Intermediate values used in calculations:
t = 2.7358
df = 58
standard error of difference = 0.095
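For readers who want to verify the calculator's output, the t statistic can be reproduced by hand from the summary statistics. This is the standard pooled-variance computation, using only the numbers quoted above; it is a sketch for checking the website's arithmetic, not a replacement for a proper statistics package.

```python
# Reproduce the pooled-variance (Student) unpaired t statistic from the
# summary statistics quoted above, using only the standard library.
from math import sqrt

mean1, sd1, n1 = 0.19, 0.21, 55    # the 55 model-based estimates
mean2, sd2, n2 = -0.07, 0.07, 5    # the 5 observations

# The pooled estimate assumes the two samples share a common variance.
df = n1 + n2 - 2
pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
se_diff = sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))

t = (mean1 - mean2) / se_diff
print(df, round(se_diff, 3), round(t, 4))   # 58 0.095 2.7358 -- matching the website
```

The intermediate values (df = 58, standard error ≈ 0.095, t ≈ 2.7358) agree with what the online calculator reported.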

"Wow," I said to Megan, "These are lots of numbers. What do they all mean?"

"Well," Megan helpfully replied, "they mean that there is a really good chance that your two distributions are inconsistent with each other."

"But," I protested, "Climate modeler James Annan came up with a different result! And he said that his method was the one true way!"

"You are kidding me again, Professor Pielke," she calmly replied. "Dr. Annan surely recognizes that there are a lot of interesting nuances in statistical testing and in working with information. There are even issues that can be raised about the appropriateness of the test that we performed. So I wouldn't be too assured that these results are the one true way either. But they do indicate that there are different ways to approach scientific questions. I am sure that Dr. Annan recognizes this; after all, he is a climate scientist. But we'll have to discuss those nuances later. I'm taking philosophy of science in the fall, and would be glad to tutor you in that subject as well. But for now I have to run, I am on summer break after all."

And just like that she was gone. Well, after this experience I am just happy that I was instructed to find a smart undergraduate to help me out.

Posted on May 16, 2008 11:55 AM
Posted to Prediction and Forecasting

The Politicization of Climate Science

Here I'd like to explain why one group of people, which we might call politically active climate scientists and their allies, seeks to shut down a useful discussion with intimidation, bluster, and name-calling. It is, as you might expect, a function of the destructive politics of science in the global warming debate.

We've had a lot of interest of late in our efforts to explore what would seem to be a simple question:

What observations of the global climate system (over what time scale, with what certainty, etc.) would be inconsistent with predictions of the IPCC AR4?

The motivation for asking this question is of course the repeated claims by climate scientists that this or that observation is "consistent with" such predictions. For claims of consistency between observations and predictions to have any practical meaning whatsoever, they must be accompanied by knowledge of what observations would be inconsistent with predictions. This is a straightforward logical claim, and should be uncontroversial.

Yet efforts to explore this question have been met with accusations of "denialism," of believing that human-caused global warming is "not a problem," of being a "conspiracy theorist." More constructive responses have claimed that questions of inconsistency cannot really be addressed for 20-30 years (which again raises the question of why claims of consistency are appropriate on shorter timescales), have focused attention on the various ways to present uncertainty in predictions from a suite of models and also on uncertainties in observation systems, and have focused attention on the proper statistical tests to apply in such situations. In short, there are a lot of interesting subjects to discuss. Some people think that they have all of the answers, which is not at all problematic, as it makes this issue no different than most any other discussion you'll find on blogs (or in academia for that matter).

But why is it that some practicing climate scientists and their allies in the blogosphere appear to be trying to shut down this discussion? After all, isn't asking and debating interesting questions one of the reasons most of us decided to pursue research as a career in the first place? And in the messy and complicated science/politics of climate change wouldn't more understanding be better than less?

The answer to why some people react so strongly to this subject can be gleaned from an op-ed in today's Washington Times by one Patrick Michaels, a well-known activist skeptical of many of the claims made about the science and politics of climate change. Here is what Pat writes:

On May Day, Noah Keenlyside of Germany's Leipzig Institute of Marine Science, published a paper in Nature forecasting no additional global warming "over the next decade."

Al Gore and his minions continue to chant that "the science is settled" on global warming, but the only thing settled is that there has not been any since 1998. Critics of this view (rightfully) argue that 1998 was the warmest year in modern record, due to a huge El Nino event in the Pacific Ocean, and that it is unfair to start any analysis at a high (or a low) point in a longer history. But starting in 2001 or 1998 yields the same result: no warming.

Michaels is correct in his assertion of no warming starting from these dates, but one would reach a different conclusion starting in 1999 or 2000. He continues,

The Keenlyside team found that natural variability in the Earth's oceans will "temporarily offset" global warming from carbon dioxide. Seventy percent of the Earth's surface is oceanic; hence, what happens there greatly influences global temperature. It is now known that both Atlantic and Pacific temperatures can get "stuck," for a decade or longer, in relatively warm or cool patterns. The North Atlantic is now forecast to be in a cold stage for a decade, which will help put the damper on global warming. Another Pacific temperature pattern is forecast not to push warming, either.

Science no longer provides justification for any rush to pass drastic global warming legislation. The Climate Security Act, sponsored by Joe Lieberman and John Warner, would cut emissions of carbon dioxide — the main "global warming" gas — by 66 percent over the next 42 years. With expected population growth, this means about a 90 percent drop in emissions per capita, to 19th-century levels.

He has laid out the bait, complete with reference to Al Gore, claiming that recent trends of no warming plus a forecast of continued lack of warming mean that there is no scientific basis for action on climate change.

There are several ways that one could respond to these claims.

One very common response to these sorts of arguments would be to attack Michaels' putative scientific basis for his policy arguments. Some would argue that he has cherrypicked his starting dates for asserting no trend. Others would observe that the recent trends in temperature are in fact consistent with predictions made by the IPCC. This latter strategy is exactly the approach used by the bloggers at Real Climate when I first started comparing 2007 IPCC predictions (from 2000) with temperature observations.

The "consistent with" strategy is a potential double-edged sword because it grants Pat Michaels a large chunk of territory in the debate. Once you attack the scientific basis for political arguments that are justified in those terms, you are accepting Michaels' claim that the political arguments are in fact a function of the science. So in this case, by attacking Michaels' scientific claims, you would be in effect saying

"Yes, while it is true that these policies are justified on scientific conclusions, Pat Michaels has his science wrong. Getting the science right would lead to different political conclusions than Michaels arrives at."

Here at Prometheus we have long observed how this dynamic shifts political debates onto scientific debates. And I discuss this in detail in my book, The Honest Broker (now on sale;-).

Now, the "consistent with" strategy is a double-edged sword because the future is uncertain. It could very well be the case that there is no additional warming over the next decade or longer, or perhaps a cooling. Given such uncertainty, scientists with an eye on the politics of climate change are quick to define pretty much anything that could be observed in the climate system as "consistent with" IPCC predictions in order to maintain their ability to deflect the sort of claims made by Patrick Michaels. For if everything observed is consistent with IPCC predictions, there is no reason to then call into question the scientific basis used to justify policies.

But this strategy runs a real risk of damaging the credibility of the scientific community. It is certainly possible to claim, as some of our commenters and the folks at RC have, that 20 years of cooling is "consistent with" IPCC predictions, but I can pretty much guarantee that if the world experiences cooling for 20 years from the late 1990s into the 2010s, the political dynamics of climate change and the standing of skeptics will be vastly different than they are today.

Now I am sure that many scientist/activists are just trying to buy some time (e.g., by offering a wager on cooling, as RC has done), waiting for a strong warming trend to resume. And it very well might, since this is the central prediction of the IPCC. Blogger/activist/scientist Joe Romm gushed with enthusiasm when the March temperatures showed a much higher rate of warming than the previous three months. We'll see what sort of announcement he puts up for the much cooler April temperatures. But all such celebrations do is set the stage for the acceptance of articles like Pat Michaels', which point out the opposite when it occurs. One way to buy time is to protest, call others names, and muddy the waters. This strategy can work really well when questions of inconsistency play out over a few months and the real world assumes the pattern of behavior found in the central tendency of the IPCC predictions, but if potential inconsistency goes on any longer than this, then you start looking like you are protesting too much.

So what is the alternative for those of us who seek action on climate change? I see two options, both predicated on rejecting the linkage between IPCC predictions and current political actions.

1) Recognize that any successful climate policies must be politically robust. This means that they have to make sense to many constituencies for many reasons. Increasing carbon dioxide in the atmosphere will have effects, and these effects are largely judged to be negative over the long term. Whether or not scientists can exactly predict these effects over decades is an open question. But the failure to offer accurate decadal predictions would say nothing about the judgment that continued increases in carbon dioxide are not a good idea. Further, for any climate policies to succeed they must make sense for a lot of reasons -- the economy, trade, development, pork, image, etc. -- and science is pretty much lost in the noise. So step one is to reject the premise of claims like that made by Pat Michaels. The tendency among activist climate scientists is instead to accept those claims.

2) The climate community should openly engage the issue of falsification of its predictions. By conveying that fallibility is not only acceptable but expected as part of learning, it would go a long way toward backing off the overselling of climate science that seems to have taken place. If the IPCC does not have things exactly correct, and the world has been led to believe that it does, then an inevitable loss of credibility might ensue. Those who believe that the IPCC is infallible will of course reject this idea.

Who knows? Maybe warming will resume in May, 2008 at a rapid rate, and continue for years or decades. Then this discussion will be moot. But what if it doesn't?

May 15, 2008

Comparing Distributions of Observations and Predictions: A Response to James Annan

James Annan, a climate modeler, has written a post at his blog trying to explain why it is inconceivable that recent observations of global average temperature trends can be considered inconsistent with predictions from the models of the IPCC. James has an increasingly snarky, angry tone to his comments, which I will ignore in favor of the math (and I'd ask those offering comments on our blog to also be respectful, even if that respect is not returned). In this post I will explain that, even using his approach, there remains a quantitative justification for arguing that recent trends are inconsistent with IPCC projections.

James asks:

Are the models consistent with the observations over the last 8 years?

He answers this question using a standard approach to comparing means from two distributions, a test whose appropriateness in this context I have openly questioned. But let's grant James this methodological point for this discussion.

James defines the past 8 years as the past 8 calendar years, 2000-2007, which we will see is a significant decision. As reported by his fellow modelers at Real Climate, James presents the distribution of models as having a mean 8-year trend of 0.19 degrees per decade, with a standard deviation of 0.21. So let's also accept this starting point.

In a post on 8-year trends in observational data, Real Climate reported the standard deviation of these trends to be 0.19. (Note this is based on NASA data, and I would be happy to use a different value if a good argument can be made to do so.) I calculated the least-squares best-fit line for the monthly data 2000-2007 from the UKMET dataset that James pointed to and arrived at 0.10 degrees C per decade (James gets 0.11).
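As an aside on mechanics, a least-squares trend like this is just the ordinary least-squares slope of the monthly anomalies against time, rescaled to degrees per decade. The sketch below uses a synthetic series (a 0.10 C/decade ramp plus deterministic pseudo-noise) standing in for the actual UKMET data, so the numbers are illustrative only.

```python
# Sketch of how a trend like the 0.10 C/decade figure is computed: the
# OLS slope of monthly anomalies against time. The series here is
# synthetic -- a 0.10 C/decade ramp plus alternating "noise" -- standing
# in for the UKMET monthly data the post actually used.
years = [m / 12.0 for m in range(96)]          # 8 years of monthly steps
trend_per_year = 0.10 / 10.0                   # 0.10 C/decade in C/year
noise = [0.05 * ((-1) ** m) for m in range(96)]  # deterministic, reproducible
anoms = [trend_per_year * t + e for t, e in zip(years, noise)]

# OLS slope: cov(x, y) / var(x), written out explicitly.
n = len(years)
mx = sum(years) / n
my = sum(anoms) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, anoms))
         / sum((x - mx) ** 2 for x in years))
print(round(10.0 * slope, 2))                  # 0.1 -- recovers the input trend
```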

So let's take a look at how the distribution of 8-year trends in the models [N(0.19, 0.21)] compares to the analogous 8-year trend in the observations [N(0.10, 0.19)]. This is shown in the following graph with the model distribution in dark blue, and the observations in red.


Guess what? Using this approach James is absolutely correct when he says that it would be incorrect to claim that the temperatures observed from 2000-2007 are inconsistent with the IPCC AR4 model predictions. In more direct language, any reasonable analysis would conclude that the observed and modeled temperature trends are consistent.
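For readers who want to reproduce this kind of comparison, here is a minimal sketch. It treats the two trends as independent normal variables and computes a two-tailed p-value for their difference being zero; this is an informal illustration of why the 2000-2007 comparison looks consistent, not the exact procedure used by either side of the exchange.

```python
# Illustrative check: treat the modeled and observed 8-year trends as
# N(0.19, 0.21) and N(0.10, 0.19) and ask whether their difference is
# distinguishable from zero. Informal sketch, standard library only.
from math import sqrt, erf

mu_models, sd_models = 0.19, 0.21
mu_obs, sd_obs = 0.10, 0.19

# The difference of two independent normals is normal with these parameters.
diff = mu_models - mu_obs
sd_diff = sqrt(sd_models**2 + sd_obs**2)

z = diff / sd_diff
# Two-tailed probability of a difference at least this large under H0,
# using the normal CDF Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(round(z, 2), round(p, 2))   # 0.32 0.75 -- small z, large p: no inconsistency
```

With a p-value around 0.75, this simple check agrees with the conclusion in the text: for 2000-2007 the two distributions cannot be called inconsistent.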

But now let's take a look at two different periods: first, the past eight years of available data, April 2000 to March 2008 (I understand that April 2008 values are just out and the anomaly is something like half the value of April 2000, so making this update would make a small difference).


You can clearly see that the amount of overlap between the distributions is smaller than in the first figure above. If one wanted to claim that this amount of overlap demonstrates consistency between models and observations I would not disagree. But at the same time, there is also a case to be made that the distributions are inconsistent, as the amount of overlap is small. There would be an even stronger case to be made for inconsistency using the satellite data, which show a smaller trend over this same period.

But now let's take a look at the period January 2001 to present, shown below.


Clearly, there is a strong argument to be made that these distributions are inconsistent with one another (and again, even stronger with the satellite data).

So let's summarize. I have engaged in these exercises to approach the question: "What observations of the climate system would be inconsistent with predictions of IPCC AR4?"

1. Using the example of global average temperatures to illustrate how this question might be approached, I have concluded that it is not "bogus" or "denialist" (as some prominent climate modelers have suggested) to either ask the question or to suggest that there is some valid evidence indicating inconsistency between observations and model predictions.

2. The proper way to approach this question is not clear. With climate models we are not dealing with balls and urns, as in idealized situations of hypothesis testing. Consider that greater uncertainty in climate models -- which results from any research that expands the realization space -- will increase the consistency between observations and models, if consistency is simply defined as some part of the distribution of observations overlapping with the distribution of forecasts. Thus, defining a distribution of model predictions simply as being equivalent to the distribution of realizations is problematic, especially if model predictions are expected to have practical value.

3. Some people get very angry when these issues are raised. Readers should see the reactions to my posts as an obvious example of how the politics of climate change are reflected in pressures not to ask these sorts of questions.

One solution to this situation would be to ask those who issue climate predictions for the purposes of informing decision makers -- on any time scale -- to clearly explain at the time the prediction is issued what data are being predicted and what values of those data would falsify the prediction. Otherwise, we will find ourselves in a situation where the instinctive response of those issuing the predictions will be to defend their forecasts as being consistent with the observations, no matter what is observed.

May 14, 2008

Lucia Liljegren on Real Climate's Approach to Falsification of IPCC Predictions


Lucia Liljegren has a wonderfully clear post up which explains issues of consistency and inconsistency between models and observations using a simple analogy based on predicting the heights of Swedes.

She writes:

I think a simple example using heights helps me explain the answer to these questions:

1. Is the mean trend in surface temperature over time predicted by the IPCC consistent with the temperature trends we have been experiencing? (That is: is 2C/century consistent with the trend we’ve seen? )
2. Is the lowest uncertainty bound the IPCC shows the public consistent with the trend in GMST (global mean surface temperature) we have seen since 2001?

I think these questions are important to the public and policy makers. They are the questions people at many climate blogs are asking and they are the questions many voters and likely policy makers would like answered.

I think the answer to both questions is "No, the IPCC predictions are inconsistent with recent data."

Please go to her site and read the entire post.

She concludes her discussion as follows:

The IPCC projections remain falsified. Comparison to data suggests they are biased. The statistical tests account for the actual weather noise in data on earth.

The argument that this falsification is somehow inapplicable because the earth data falls inside the full range of possibilities for models is flawed. We know why the full range of climate models is huge: It contains a large amount of "climate model noise" due to models that are individually biased relative to the system of interest: the earth.

I will continue to admit what I have always admitted: when applying hypothesis tests at a confidence limit of 5%, one does expect to be wrong 5% of the time. It is entirely possible that the current falsification falls in the category of 5% incorrect falsifications. If this is so, the "falsified" diagnosis will reverse, and we won't see another one anytime soon.

However, for now, the IPCC projections remain falsified, and will remain so until the temperatures pick up. Given the current statistical state (a period when large "type 2" error is expected) it is quite likely we will soon see "fail to falsify" even if the current falsification is a true one. But if the falsification is a "true" falsification, as is most likely, we will see "falsifications" resume. In that case, the falsification will ultimately stick.

For now, all we can do is watch the temperature trends of the real earth.

May 12, 2008

How to Make Two Decades of Cooling Consistent with Warming

The folks at Real Climate have produced a very interesting analysis that provides some useful information for the task of framing a falsification exercise on IPCC predictions of global surface temperature changes. The exercise also provides some insight into how this branch of the climate science community defines the concept of consistency between models and observations, and why it is that every observation seems to be, in their eyes, "consistent with" model predictions. This post explains why Real Climate is wrong in their conclusions on falsification and why it is that two decades of cooling can be defined as "consistent with" predictions of warming.

In their post, RealClimate concludes:

Claims that a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported. Similar claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation are equally bogus.

Real Climate defines observations to be "consistent with" the models to mean that an observation, with its corresponding uncertainty range, overlaps with the spread of the entire ensemble of model realizations. This is the exact same definition of "consistent with" that I have criticized here on many occasions. Why? Because it means that the greater the uncertainty in modeling -- that is, the greater the spread in outcomes across model realizations -- the more likely that observations will be "consistent with" the models. More models, more outcomes, greater consistency -- but less certainty. It is in this way that pretty much any observation becomes "consistent with" the models.
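That definition can be made concrete with a few lines of code. The sketch below uses the ensemble-mean trend of 0.19 from the discussion; the observed trend and the alternative spreads are hypothetical values chosen purely to illustrate how a wider ensemble range sweeps more observations into "consistency."

```python
# Sketch of the post's point: under an "overlaps the ensemble spread"
# definition of consistency, widening the model spread makes a fixed
# observation consistent. The observed trend and the spread values
# other than 0.21 are illustrative, not from the IPCC.
def consistent(obs, ens_mean, ens_sd, k=1.96):
    """True if obs falls inside the ensemble's mean +/- k*sd range."""
    return abs(obs - ens_mean) <= k * ens_sd

obs_trend = -0.30          # a hypothetical strongly negative decadal trend
ens_mean = 0.19            # ensemble-mean trend from the discussion
for ens_sd in (0.10, 0.21, 0.30):
    print(ens_sd, consistent(obs_trend, ens_mean, ens_sd))
```

Running this shows the same cooling observation flipping from "inconsistent" to "consistent" as the assumed ensemble spread grows, with nothing else changing.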

As we will see below, the assertion by Real Climate that "a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported" is simply wrong. Real Climate is more on the mark when they write:

Over a twenty year period, you would be on stronger ground in arguing that a negative trend would be outside the 95% confidence limits of the expected trend (the one model run in the above ensemble suggests that would only happen ~2% of the time).

Most people seeking to examine the consistency between models and observations would use some sort of probabilistic threshold, like a 95% confidence interval, which would in this case be calculated as a joint probability of observations and models.

So let’s go through the exercise of comparing modeled and observed trends to illustrate why Real Climate is wrong, or more generously, has adopted a definition of "consistent with" that is so broad as to be meaningless in practice.

First, the observations. Thanks to Lucia Liljegren, we have the observed trends in global surface temperature from 2001 to the present (which is slightly longer than 8 years), with 95% confidence intervals, for the five groups that keep such records. Here is that information as she presented it, in degrees Celsius per decade:

UKMET -1.3 +/- 1.8
NOAA 0.0 +/- 1.6
RSS -1.5 +/- 2.2
UAH -0.9 +/- 2.8
GISS 0.2 +/- 2.1

Real Climate very usefully presents 8-year trends for 55 model realizations in a figure that is reproduced below. I have annotated the graph to show the 95% range for the model realizations, which corresponds to excluding the most extreme 3 model realizations on either end of the distribution (2.75, to be exact). (I have emailed Gavin Schmidt asking for the data, which would enable a bit more precision.) The blue horizontal line at the bottom labeled "95% spread across model realizations" shows the 95% range of 8-year trends present across the IPCC model realizations.
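Trimming the most extreme ~2.75 realizations from each end of 55 is just the empirical 2.5th-97.5th percentile range; with the data in hand it is a one-line calculation. Here is a sketch using random stand-in values, since the actual ensemble trends were not released:

```python
import random
import statistics

random.seed(0)
# Stand-ins for the 55 eight-year model trends; the actual values
# were requested from Gavin Schmidt but are not in hand.
model_trends = [random.gauss(2.0, 1.2) for _ in range(55)]

# statistics.quantiles with n=40 yields cut points every 2.5%;
# the first and last bracket the central 95% of realizations --
# the "95% spread across model realizations" drawn on the figure.
q = statistics.quantiles(model_trends, n=40)
lo, hi = q[0], q[-1]
print(f"95% spread: {lo:.2f} to {hi:.2f}")
```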

I have also annotated the figure to show in purple the 8+ year trends from the five groups that track global surface temperatures, with the 95% range as calculated by Lucia Liljegren. I have presented each of the individual ranges for the 5 groups, and then with a single purple horizontal line the range across the five observational groups.


Quite clearly there is a large portion of the spread in the observations that is not encompassed by the spread in the models. This part of the observations is cooler than the range provided by the models. And this then leads us to the question of how to interpret the lack of complete overlap.

One interpretation, and the one that makes the most sense to me, is that because there is not overlap between modeled and observed trends at the 95% level (fairly obvious from the figure, and easily calculated with the original data), one can properly claim that the surface temperature observations from 2001 to the present fail to demonstrate consistency with the models of IPCC AR4 at the 95% level. They do, however, show consistency at some lower level of confidence. Taking each observational dataset independently, one would conclude that UKMET, RSS, and UAH are inconsistent with the models, whereas NASA and NOAA are consistent with them, again at the 95% threshold.

Another interpretation, apparently favored by the guys at Real Climate, is that because there is some overlap between the 95% ranges (i.e., overlap between the blue and purple lines), the models and observations are in fact consistent with one another. [UPDATE: Gavin Schmidt at RC confirms this interpretation when he writes, in response to a question about the possibility of falsifying IPCC predictions: "Sure. Data that falls unambiguously outside it [i.e., the model range]."] But this type of test for consistency is extremely weak. The figure below takes the 95% spread in the observations and illustrates how far above and below the 95% spread in the models some overlap would allow. If the test of "consistent with" is defined as any overlap between models and observations, then any rate of cooling or warming between -10 deg C/decade and +13.0 deg C/decade could be said to be "consistent with" the model predictions of the IPCC. This is clearly so absurd as to be meaningless.
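The width of that anything-goes window follows directly from the "any overlap" definition: the acceptable range of observed trends is simply the model range padded by the observational half-width on each side. A sketch, with illustrative round numbers chosen only to reproduce the -10 to +13 window described above:

```python
def overlap_window(model_lo, model_hi, obs_halfwidth):
    """Range of observed central trends whose 95% interval would still
    touch the model range under the 'any overlap' definition."""
    return model_lo - obs_halfwidth, model_hi + obs_halfwidth

# Illustrative values only: a model range of -8 to +11 padded by an
# observational half-width of 2 gives the -10 to +13 window.
print(overlap_window(-8, 11, 2))  # (-10, 13)
```

Note that the window widens with either larger model spread or larger observational uncertainty, which is exactly the perverse property criticized above.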


So when Real Climate concludes that . . .

Claims that a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported

. . . they are simply incorrect by any reasonable definition of consistency based on probabilistic reasoning. Such claims do in fact have ample support.

If they wish to assert that any overlap between uncertainties in observed temperature trends and the spread of model realizations over an 8-year period implies consistency, then they are arguing that any 8-year trend between -10 and +13 deg C (per century) would be consistent with the models. This sort of reasoning turns climate model falsification into a rather meaningless exercise. [UPDATE: In the comments, climate modeler James Annan makes exactly this argument, but goes even further: "even if the model and obs ranges didn't overlap at all, they might (just) be consistent".]

Of course, in practice the tactical response to claims that observations falsify model predictions will be to argue for expanding the range of realizations in the models and for reducing the range of uncertainties in the observations. This is one reason why debates over the predictions of climate models devolve into philosophical discussions about how to treat uncertainties.

Finally, how then should we interpret Keenlyside et al.? It is, as Real Climate admits, outside the 95% range of the IPCC AR4 models for its prediction of trends to 2015. But Keenlyside et al. in fact use one of the models from the IPCC AR4 runs, which could be used to argue that the range of possible 20-year trends is actually larger than that presented by the IPCC. Interpreted this way, we arrive back at the interesting conclusion that more models, initialized in different ways, actually work to expand the range of possible futures. Thus we should not be surprised to see Real Climate conclude:

Similar claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation [of "a negative observed trend"] are equally bogus.

And this, gentle readers, is exactly why I explained in a recent post that Keenlyside et al. means that a two-decade cooling trend (in RC parlance, a "negative observed trend over 20 years") is now defined as consistent with predictions of warming.

Inconsistent With? One Answer

UPDATE: Real Climate has already dismissed the paper linked below as a failed effort.

Climate Audit provides a pointer to this paper (PDF) by Koutsoyiannis et al. which has the following abstract:

As falsifiability is an essential element of science (Karl Popper), many have disputed the scientific basis of climatic predictions on the grounds that they are not falsifiable or verifiable at present. This critique arises from the argument that we need to wait several decades before we may know how reliable the predictions will be. However, elements of falsifiability already exist, given that many of the climatic model outputs contain time series for past periods. In particular, the models of the IPCC Third Assessment Report have projected future climate starting from 1990; thus, there is an 18‐year period for which comparison of model outputs and reality is possible. In practice, the climatic model outputs are downscaled to finer spatial scales, and conclusions are drawn for the evolution of regional climates and hydrological regimes; thus, it is essential to make such comparisons on regional scales and point basis rather than on global or hemispheric scales. In this study, we have retrieved temperature and precipitation records, at least 100‐year long, from a number of stations worldwide. We have also retrieved a number of climatic model outputs, extracted the time series for the grid points closest to each examined station, and produced a time series for the station location based on best linear estimation. Finally, to assess the reliability of model predictions, we have compared the historical with the model time series using several statistical indicators including long‐term variability, from monthly to overyear (climatic) time scales. Based on these analyses, we discuss the usefulness of climatic model future projections (with emphasis on precipitation) from a hydrological perspective, in relationship to a long‐term uncertainty framework.

The paper provides the following conclusions:

*All examined long records demonstrate large overyear variability (long‐term fluctuations) with no systematic signatures across the different locations/climates.

*GCMs generally reproduce the broad climatic behaviours at different geographical locations and the sequence of wet/dry or warm/cold periods on a mean monthly scale.

*However, model outputs at annual and climatic (30‐year) scales are irrelevant with reality; also, they do not reproduce the natural overyear fluctuation and, generally, underestimate the variance and the Hurst coefficient of the observed series; none of the models proves to be systematically better than the others.

*The huge negative values of coefficients of efficiency at those scales show that model predictions are much poorer than an elementary prediction based on the time average.

*This makes future climate projections not credible.

*The GCM outputs of AR4, as compared to those of TAR, are a regression in terms of the elements of falsifiability they provide, because most of the AR4 scenarios refer only to the future, whereas TAR scenarios also included historical periods.
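For readers unfamiliar with the term, the "coefficient of efficiency" in these conclusions is the Nash-Sutcliffe measure used in hydrology; negative values mean the model predicts worse than simply using the observed time average. A small sketch with toy numbers:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe coefficient of efficiency: 1 is a perfect match,
    0 matches the skill of predicting the observed mean, and negative
    values are worse than the mean -- the situation the paper reports."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# A toy 'model' that is biased and out of phase scores far below zero.
obs = [10.0, 12.0, 11.0, 13.0, 12.0]
sim = [14.0, 9.0, 15.0, 8.0, 16.0]
print(nash_sutcliffe(obs, sim))  # a large negative value
```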

May 09, 2008

Real Climate's Bold Bet

The Real Climate guys have offered odds on future temperature changes, which is great because it gives us a sense of their confidence in predictions of future global average temperatures. Unfortunately, RC's foray into laying odds is not as useful as it might be.

The motivation for this bet is the recent Keenlyside et al. paper that has prompted mixed reactions among commenters in the blogosphere. Some commenters here have stridently argued that the predictions in the Keenlyside et al. paper are perfectly consistent with predictions of climate models in the IPCC. However, when one such commenter was asked to show a single IPCC climate model run showing no temperature increase for the two decades following the late 1990s, he submitted an irrelevant link and disappeared. Others have argued that the Keenlyside et al. projections (and this includes Keenlyside) are inconsistent with the IPCC predictions. Real Climate apparently falls into this latter camp.

The Real Climate bet (and there is also one for a later period) is that the period 1994-2004 will have a higher average temperature than the period 2000-2010. Since the periods have 2000-2004 in common, we can throw those years out as irrelevant. Thus, the bet is really about whether the period 1994-1999 will be warmer than the period 2005-2010. And since we know the temperatures from 2005 to the present, the bet is really about what will happen in 2009 and 2010. (Using UKMET temps here.)
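The arithmetic of throwing out the shared years can be checked directly: both 11-year means contain 2000-2004, so their difference depends only on the non-overlapping years. A quick demonstration with made-up anomaly values (the real comparison would use the UKMET series):

```python
# Toy anomaly series keyed by year -- stand-ins, not real data.
temps = {y: 0.02 * (y - 1994) for y in range(1994, 2011)}

mean_a = sum(temps[y] for y in range(1994, 2005)) / 11  # 1994-2004
mean_b = sum(temps[y] for y in range(2000, 2011)) / 11  # 2000-2010

# The shared years 2000-2004 cancel: the difference of the means equals
# the difference of the two non-overlapping six-year sums, divided by 11.
diff_non_overlap = (sum(temps[y] for y in range(1994, 2000))
                    - sum(temps[y] for y in range(2005, 2011))) / 11
assert abs((mean_a - mean_b) - diff_non_overlap) < 1e-12
```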

It is strange to see the Real Climate guys wagering on 2-year climate trends when they have already taught us that 8 years is far too short for trends to be meaningful. But perhaps there is some other reason for offering this bet: they are playing with a stacked deck, which is what you do when looking for suckers. The following figure shows why.


For the Real Climate guys to lose the bet, global average temperatures for 2009 and 2010 would have to fall by about 0.30 deg C from the 2005-present average (I've used Jan-Mar as the 2008 value; 2008 could obviously wind up higher or lower). Real Climate has boldly offered 50-50 odds of that happening. This is a bit like giving 50-50 odds that Wigan will come back from a 3-0 halftime deficit against Manchester United. Who would take that bet?

Another interpretation of the odds provided by RC is that they actually believe there is a 50% chance that global temperatures will decrease by more than 0.30 deg C over the next few years. Since I don't think they actually believe that, it is safe to conclude that they've offered a sucker's bet. Too bad. When Real Climate offers a 50-50 bet in which the bettor gets to pick which side to take (that is, after all, the definition of 50-50), then we'll know they are serious.

May 08, 2008

Teats on a Bull

Here is a very thoughtful comment sent in by email on the "consistent with" chronicles. I haven't identified the author, since he didn't ask me to post it. But it is worth a read for how climate science is received by one rancher in West Tennessee. I appreciate the feedback.

I am neither an academic nor a scientist. I raise cattle in West Tennessee. I came across your ruminations on the uses and meaning (or lack thereof) of the expression "consistent with" in environmental debates. I enjoyed it very much. You make some very valid, interesting, and to your critics irritating points.

You hear "consistent with" employed in other circumstances as well, as for example when a prosecuting attorney says certain evidence is "consistent with" his or her theory of who committed a crime. However a good defense attorney will almost surely point out that the evidence in question is "consistent with" other explanations as well. Thus, at least in legal dealings, the "consistent with" argument doesn't get one very far.

Which brings me to my point. You're absolutely right to ask what kinds of evidence would be inconsistent with environmental theories, for just the reasons you outline, but of equal or perhaps greater importance is the question "With what other theories or explanations is the same evidence consistent?" It is not so terribly unusual for facts or evidence to be consistent with multiple theories, even ones that contradict one another. I'm clearly not qualified to judge, but could the cited evidence also be "consistent with" environmental theories involving sunspot activity, the Gulf Stream, el Nino, or lord knows what else?

There's another problem I see with the "consistent with" construction: it never addresses the issue of probability. One sees this frequently with the use of "possible." For many folks the claim that something, no matter how implausible, is "possible" is enough to end a discussion. The mere theoretical possibility of something is to their minds proof of its reality. And the truth is it's virtually impossible to prove, especially to such people, that something is impossible. The best one can do is assess probabilities. However, to the true believer, even the highest statistical improbability carries little if any weight. The same, I think, is true for those who offer the "consistent with" argument. Although to their minds they may be equivalent, "consistent with" is not the same thing as "equal to," just as "possible" is not the same thing as "actual."

My personal feeling is that "consistent with" is a hedge term that has about as much meaning, and carries about as much weight, as what we here in West Tennessee call a WAG, or wild ass guess. The number of things a thing can be "consistent with" is so large as to rob the expression of meaning, or communicative value. If my veterinarian looked at one of my cows and informed me that her swollen belly was "consistent with" her being pregnant, I'm not sure I'd find that of much value, as it's also "consistent with" a number of other things, some benign, some fatal.

"Consistent with" doesn't help me make decisions on the farm. With regards to the much more vast and consequential issue of global weather predictions, "consistent with" is to me about as useful as teats on a bull.

Again, you produced a fine article and I enjoyed reading it. Keep pushing environmentalists towards honesty and clarity. A very great deal is at stake, as I'm sure you know.

May 02, 2008

The Consistent-With Chronicles

Scientists are fond of explaining that recent observations of the climate are "consistent with" predictions from climate models. With this construction, scientists are explicitly claiming that models can accurately predict the evolution of those climate variables. Here are just a few recent examples:

"What we are seeing [in recent hurricane trends] is consistent with what the global warming models are predicting," Thomas Knutson, a research meteorologist at a National Oceanic and Atmospheric Administration laboratory in Princeton, N.J., said Friday. link
In a change that is consistent with global warming computer models, the jet streams that govern weather patterns around the world are shifting their course, according to a new analysis by the Carnegie Institution published in Geophysical Research Letters. link
Francis Zwiers, the director of the climate research division of Environment Canada, said research consistently showed the addition of sulfate aerosols and greenhouse gases such as carbon dioxide into the atmosphere has changed rainfall patterns in the Arctic. Zwiers and his colleagues made their findings using 22 climate models that looked at precipitation conditions from the second half of the 20th century. Writing in the journal Science, Zwiers said these findings are consistent with observed increases in Arctic river discharge and the freshening of Arctic water masses during the same time period. link
The fact that we are seeing an expansion of the ocean’s least productive areas as the subtropical gyres warm is consistent with our understanding of the impact of global warming. But with a nine-year time series, it is difficult to rule out decadal variation. link

All of this talk of observations being "consistent with" the predictions from climate models led me to wonder -- What observations would be inconsistent with those same models?

Logically, for a claim of observations being "consistent with" model predictions to have any meaning then there also must be some class of observations that are "inconsistent with" model predictions. For if any observation is "consistent with" model predictions then you are saying absolutely nothing, while at the same time suggesting that you are saying something meaningful. In other contexts this sort of talk is called spin.

So I have occasionally used this blog to ask the question -- what observations would be inconsistent with model predictions?

The answer that keeps coming up is "no observations" -- though a few commenters have suggested that a temperature change of 10 degrees C over a decade would be inconsistent, as would be the glaciation of NYC over the next few years. These responses are certainly responsive, but I think they help to make my point.

Others, such as climate modeler James Annan, suggest that my goal is to falsify global warming theory (whatever that is): "no-one is going to "falsify" the fact that CO2 absorbs LW radiation". No. James is perhaps trying to change the subject, as I am interested in exactly what I say I am interested in -- understanding what observations might be inconsistent with predictions from "global warming models," in the words of climate modeler Tom Knutson, cited above.

Others suggest that by asking these questions I am providing skeptics with "talking points." The implication, I suppose, is that I should not be looking behind the curtain, lest I find a little wizard at the controls and reveal that we are all actually in Oz. How silly is this complaint? If the political agenda of those wanting action on climate change is so sensitive to someone asking questions of climate models that it risks collapsing, then it is a pretty frail agenda to begin with. I actually do not think it is so frail; in fact, my view is that the science, and policies justified on scientific claims, will be stronger for openly discussing these issues.

A final set of reactions has been that climate models only predict trends over the long-term, such as 30 years, and that anyone looking to examine short-term climate behavior is either stupid or willfully disingenuous. It is funny how this same complaint is not levied at those scientists making claims of "consistent with," such as in those examples listed above. Of course, any time period can be used to compare model predictions with observations -- uncertainties will simply need to be presented as a function of the time period selected. When scientists (and others) argue against rigorously testing predictions against observations, then you know that the science is in an unhealthy state.
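The dependence of uncertainty on the time period is easy to illustrate: the spread of fitted trends through noisy data shrinks quickly as the window lengthens. A toy Monte Carlo sketch (white noise plus a fixed 0.02/yr trend; real temperature noise is autocorrelated, which would widen these spreads further):

```python
import random
import statistics

def trend_se(n_years, noise_sd=0.1, trials=2000):
    """Spread (std. dev.) of fitted OLS trends across noise realizations."""
    xs = list(range(n_years))
    x_mean = statistics.mean(xs)
    sxx = sum((x - x_mean) ** 2 for x in xs)
    slopes = []
    for _ in range(trials):
        # Underlying 0.02/yr trend plus white noise.
        ys = [0.02 * x + random.gauss(0, noise_sd) for x in xs]
        y_mean = statistics.mean(ys)
        slopes.append(sum((x - x_mean) * (y - y_mean)
                          for x, y in zip(xs, ys)) / sxx)
    return statistics.stdev(slopes)

random.seed(1)
print(trend_se(8), trend_se(20))  # the 20-year spread is roughly 4x tighter
```

This is the sense in which any comparison period is usable: the confidence interval on the trend simply has to be reported as a function of the window chosen.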

So, to conclude, so long as climate scientists make public claims that recent observations of aspects of the climate are "consistent with" the results of "global warming models," then it is perfectly appropriate to ask what observations would be "inconsistent with" those very same models. Until this follow up question is answered in a clear, rigorous manner, the incoherent, abusive, and misdirected responses to the question will have to serve as answer enough.

April 30, 2008

Global Cooling Consistent With Global Warming

For a while now I've been asking climate scientists to tell me what could be observed in the real world that would be inconsistent with forecasts (predictions, projections, etc.) of climate models, such as those that are used by the IPCC. I've long suspected that the answer is "nothing," and the public silence from those in the outspoken climate science community would seem to back this up. Now a paper in Nature today (PDF) suggests that the world may cool over the next few decades. According to Richard Wood, who comments on the paper in the same issue, such cooling would "temporarily offset the longer-term warming trend from increasing levels of greenhouse gases in the atmosphere", and this would not be inconsistent with predictions of longer-term global warming.

I am sure that this is an excellent paper by world class scientists. But when I look at the broader significance of the paper what I see is that there is in fact nothing that can be observed in the climate system that would be inconsistent with climate model predictions. If global cooling over the next few decades is consistent with model predictions, then so too is pretty much anything and everything under the sun.

This means that, from a practical standpoint, climate models are of no use beyond providing some intellectual authority in the promotional battle over global climate policy. I am sure that some model somewhere has foretold how the next 20 years will evolve (and please ask me in 20 years which one!). And if none gets it right, it won't mean that any were actually wrong. If there is no future over the next few decades that the models rule out, then anything is possible. And of course, no one needed a model to know that.

Don't get me wrong, models are great tools for probing our understanding and exploring various assumptions about how nature works. But scientists think they know with certainty that carbon dioxide leads to bad outcomes for the planet, so future modeling will only refine that fact. I am focused on the predictive value of the models, which appears to be nil. So models have plenty of scientific value left in them, but tools to use in planning or policy? Forget about it.

Those who might object to my assertion that models are of no practical use beyond political promotion can start by returning to my original question: What can be observed in the climate over the next few decades that would be inconsistent with climate model projections? If you have no answer to this question, then I'll stick with my views.

April 16, 2008

Peter Webster on Predicting Tropical Cyclones

Some wise words from Georgia Tech's Peter Webster on our ability to predict the future incidence of tropical cyclones (or TCs, which includes hurricanes):

Unless we can explain physically the history of the number and intensity of TCs in the recent past, then determining the number and intensity of TCs in the future will be either an extrapolation of very poor data sets or a belief in incomplete and inexact models.

April 11, 2008

Lucia Liljegren on Real Climate Spinmeisters

Lucia Liljegren has a considered post up on Real Climate's odd post on my recent letter to Nature Geoscience. I apologize for our comment problems on that thread, but perhaps this one will work better, and you can always comment at Lucia's site, or try to get through the screeners at Real Climate. Is it just me or has the Real Climate discussion board become completely empty of anything resembling scientific discussion?

April 10, 2008

Real Climate on My Letter to Nature Geoscience

The folks at the Real Climate blog have offered up some comments on my letter to Nature Geoscience (PDF), which appeared last week. In the condescending tone we have come to expect from Real Climate, they helpfully frame their comments as lessons for me. I encourage you to read the whole post, but here is my response (submitted for their posting approval) to their three main points, which I've highlighted in bold:

Thanks for this discussion. Full text of the letter can be found here:

1. IPCC already showed a very similar comparison as Pielke does, but including uncertainty ranges.

RESPONSE: Indeed, and including the uncertainty ranges would not change my conclusion that:

"Temperature observations fall at
the low end of the 1990 IPCC forecast range
and the high end of the 2001 range. Similarly,
the 1990 best estimate sea level rise projection
overstated the resulting increase, whereas the
2001 projection understated that rise."

2. If a model-data comparison is done, it has to account for the uncertainty ranges - both in the data (that was Lesson 1 re noisy data) and in the model (that's Lesson 2).

RESPONSE: I did not do a "model-data comparison". One should be done, though, I agree.

3. One should not mix up a scenario with a forecast - I cannot easily compare a scenario for the effects of greenhouse gases alone with observed data, because I cannot easily isolate the effect of the greenhouse gases in these data, given that other forcings are also at play in the real world.

RESPONSE: Indeed. However, I made no claims about attribution, so this is not really relevant to my letter.

Thanks again, and I'll be happy to follow the discussion.

April 02, 2008

Letter to Nature Geoscience

Nature's Climate Feedback blog provides a nice summary of a correspondence I authored, published today in Nature Geoscience:

Today in a letter to Nature Geoscience (subscription required), Roger Pielke, Jr, questions whether models from that 2001 generation improve on the predictive power of their forbears.

Pielke checks predictions from all four IPCC reports, dating back to 1990, against reality. Each report made a series of 'if-then' statements about the likely results of various emissions scenarios; in hindsight, Pielke can pick out which of these possible greenhouse experiments has actually been running on Earth since 1990 and compare the results to the IPCC's shifting hypotheses.

Whereas the 2001 projections undershot the observed temperatures and sea levels, the 1990 projections overshot them, he concludes. Projections of temperature and sea level fell substantially between the 1990 and 1995 IPCC reports, when aerosols were added to models and carbon-cycle simulations were tweaked. But because they dropped too far, the adjusted post-1995 projections "are not obviously superior in capturing climate evolution", says Pielke.

March 18, 2008

You Can't Make This Stuff Up

Now, according to Grist Magazine's Joe Romm, I am a "delayer/denier" because I've asked what data would be inconsistent with IPCC predictions. Revealed truths are not to be questioned, lest we take you to the gallows. And people wonder why some see the more enthusiastic climate advocates as akin to religious zealots.

I am happy to report that it is quite possible to believe in strong action on mitigation and adaptation while at the same time asking probing questions of our scientific understandings.

March 15, 2008

Update on Falsifiability of Climate Predictions


UPDATE 2:40PM 3-15-08: Within a few hours of this post, as we might have expected, rather than contributing to the substantive discussion, a climate scientist chooses instead to tell us how stupid we are for even discussing such subjects. We are told that "until the temperature obviously and unambiguously turns up again, this kind of stuff is going to continue." Isn't that what this post says? For the "stuff" read on below.

Regular readers will recall that not long ago I asked the climate research community to suggest what observations on decadal time scales would be inconsistent with predictions from models. While Real Climate has decided to take a pass on this question, other scientists and interested observers have taken up the challenge, no doubt with interest added by the recent cooling in the primary datasets of global temperature.

A very interesting perspective is provided by Lucia Liljegren, who has several posts up on observations versus predictions. The figure above is from her analysis; her complete analysis can be found here. She has several follow-up posts in which she discusses other aspects of the analysis and links to a few other, similar explorations of this issue. She writes:

No matter which major temperature measuring group we examine, or which reasonable criteria for limiting our choices we select, it appears possible that something not anticipated by the IPCC WG1 happened soon after they published their predictions for this century. That something may be the shift in the Pacific Decadal Oscillation; it may be something else. Statistics cannot tell us.

It may turn out that this something is a relatively infrequent, but climatologically important, feature that results in unusually cold weather. Events that happen at a rate of 1% do happen -- at a rate of 1%. So, if the recent flat trend is the 1% event, then the 30-year trend in temperatures will resume.

For what it’s worth: I believe AGW is real, based on physical arguments and longer term trends, I suspect we will discover that GCM’s are currently unable to predict shifts in the PDO. The result is the uncertainty intervals on IPCC projections for the short term trend were much too small.

Of course, the reason for the poor short term predictions may turn out to be something else entirely. It remains to those who make these predictions to try to identify what, if anything, resulted in this mismatch between projections and short term data. Or to stand steadfast and wait for La Nina to break and the weather to begin to warm.

Those wanting to quibble with her analysis would no doubt observe that the uncertainty around IPCC predictions for the short term is undoubtedly larger than the IPCC itself presented. Lucia in fact suggests this in her analysis, which makes one wonder: if uncertainties are indeed larger than presented, why didn't the IPCC say so?

In 2006 my father and I wrote about the possible effects on the climate debate of short-term predictions that do not square with observations:

predictions represent a huge gamble with public and policymaker opinion. If more-or-less steady global warming does not occur as forecast by these models, not only will professional reputations be at risk, but the need to reduce threats to the wide spectrum of serious and legitimate environmental concerns (including the human release of greenhouse gases) will be questioned by some as having been oversold. For better or worse, a failure to accurately predict the changes in the global average surface temperature, global average tropospheric temperature, ocean average heat content change, or Arctic sea ice coverage would raise questions on the reliance of global climate models for accurate prediction on multi-decadal time scales.

In one of the comments in response to that post, a climate scientist (and Real Climate blogger) took us to task for raising the issue, suggesting that there was no real reason to speculate about such things given that, "I've pointed out that in the obs, there is no sign of > 2 yr decreasing trends."

Another climate scientist commented that climate models were completely on target:

Re the possibility that the Earth is acting in a way that the models hadn’t predicted, I must say I’m pretty relaxed about that. Let’s wait a few more years and see, eh?

I have not yet seen rebuttals to Lucia's analysis, or others like it (she points to a few); these are not peer-reviewed analyses, yet they are certainly of some merit and worth considering. There continue to be good reasons for climate scientists to begin more openly discussing the limitations of short-term climate predictions and the implications for understanding uncertainties. They have these discussions among themselves all of the time. For example, with a view quite similar to my own, Real Climate's Gavin Schmidt suggests that if the full context of a prediction from a climate model is not understood, then:

model results have an aura of exactitude that can be misleading. Reporting those results without the appropriate caveats can then provoke a backlash from those who know better, lending the whole field an aura of unreliability.

None of this discussion means that the basic conclusion that greenhouse gases affect the climate system is wrong, or that action to mitigate emissions does not make sense. What it does mean is that we should be concerned about the overselling of climate predictions and the corresponding risks to public credibility and to advocacy built upon these predictions.

February 18, 2008

Climate Model Predictions and Adaptation

At a recent conference on adaptation in London, I co-authored a presented paper (with Suraje Dessai, Mike Hulme, and Rob Lempert) on the role of climate model forecasts in support of adaptation. Our argument is that climate models don't forecast very well on the time and spatial scales of relevance to decision makers facing adaptation choices, and that even if they did, given irreducible uncertainties, robust decision making is a better approach than seeking to optimize.
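The contrast between optimizing against a single forecast and choosing robustly can be illustrated with a toy example (all options, futures, and numbers here are hypothetical, and far simpler than the methods in the actual paper):

```python
# Toy payoff table: cost of each adaptation option under several
# plausible climate futures (all numbers hypothetical).
costs = {
    "build_big_reservoir":   {"wet": 9, "average": 6, "dry": 3},
    "build_small_reservoir": {"wet": 2, "average": 4, "dry": 8},
    "flexible_contracts":    {"wet": 4, "average": 4, "dry": 5},
}

futures = ["wet", "average", "dry"]

# Optimizing: pick the cheapest option under a single "best guess" forecast.
best_guess = "dry"
optimal = min(costs, key=lambda o: costs[o][best_guess])

# Robust choice (minimax regret): for each future, regret = cost minus the
# best achievable cost in that future; pick the option whose worst-case
# regret across futures is smallest.
def max_regret(option):
    return max(
        costs[option][f] - min(costs[o][f] for o in costs)
        for f in futures
    )

robust = min(costs, key=max_regret)

print(optimal)  # cheapest only if the dry forecast verifies
print(robust)   # smallest worst-case regret across all futures
```

The option that is cheapest if the forecast verifies carries the largest regret if it does not; the robust choice gives up some performance in any single future in exchange for acceptable performance across all of them.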

For more evidence of why climate models are of little predictive use in adaptation decision making, consider the recent discussion of cooling in Antarctica and the southern oceans from RealClimate:

The pioneer climate modelers Kirk Bryan and Syukuro Manabe took up the question with a more detailed model that revealed an additional effect. In the Southern Ocean around Antarctica the mixing of water went deeper than in Northern waters, so more volumes of water were brought into play earlier. In their model, around Antarctica "there is no warming at the sea surface, and even a slight cooling over the 50-year duration of the experiment." In the twenty years since, computer models have improved by orders of magnitude, but they continue to show that Antarctica cannot be expected to warm up very significantly until long after the rest of the world’s climate is radically changed.

Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.

Today CSIRO in Australia reports that southern oceans have in fact been warming:

The longest continuous record of temperature changes in the Southern Ocean has found that Antarctic waters are warming and sea levels are rising, an Australian scientist said Monday.

I have no doubt that these observations of warming will also be found, somehow, to be consistent with the predictions of climate models. And that is the problem: climate scientists, especially those involved in political advocacy for action on climate change, steadfastly refuse to describe what observations over the short term (i.e., the time scales on which most adaptation decisions are made) would be inconsistent with model predictions. So all observations are consistent with the predictions of climate models.

The reason for this situation of total ambiguity is a perceived need to maintain the public credibility of climate model predictions over the very long term in support of political action on climate change, in the face of relentless attacks from politically motivated skeptics. So what do we get? Nonsensical and useless pronouncements such as: a cooling southern ocean and a warming southern ocean are both consistent with climate model predictions, thus we can trust the models.

The lesson for decision makers grappling with adaptation to future climate changes? Make sure that your decisions are robust to a wide range of future possibilities, and use caution in seeking to optimize based on this or that prediction of the near-term future.

February 14, 2008

Seasonal Forecasts and the Colorado Winter


The figure above shows the snowpack in the state of Colorado for the past few years. The current level is higher than it's been for a while. This is great news for just about everyone in Colorado -- except seasonal climate forecasters, who had predicted a dry, warm winter and were sticking to that forecast as recently as a month ago.

In today's Denver Post our excellent local science reporter Katy Human takes a look at the forecasts and why forecasters have given in to the reality of massive snowfall totals here in Colorado:

Dry-winter forecasts were flat wrong this year for much of Colorado and the Southwest, and weather experts say they're struggling to understand why the snow just keeps falling.

Some forecasters blame climate change, and others point to the simple vicissitudes of weather. Regardless, almost everyone called for a dry-to-normal winter in Colorado and the Southwest — but today, the state's mountains are piled so thick with snow that state reservoirs could fill and floods could be widespread this spring.

"The polar jet stream has been on steroids. We don't understand this. It's pushing our limits, and it's humbling," said Klaus Wolter, a meteorologist with the National Oceanic and Atmospheric Administration and the University of Colorado at Boulder.

Wolter and NOAA both forecast a drier-than-average winter in most of Colorado. AccuWeather Inc. did the same, citing similar reasons: A La Niña weather system of cool, equatorial Pacific water had set up in the tropics last fall.

I have a lot of respect for Klaus, who is brave enough to issue public forecasts on time scales that allow verification -- and hence newspaper articles on his performance. Forecasting is not for the thin-skinned. But forecasts have other effects as well:

La Niña winters have almost always brought droughtlike conditions to the Southwest, as the jet stream ferries storms farther north.

But Arizona has been hit with record snowfall this winter, said Mark Hubble, a senior hydrologist with the Salt River Project in Arizona, the largest provider of water and power in Phoenix.

Dry forecasts last fall convinced Salt River Project managers to purchase about 20,000 acre-feet of water from the Central Arizona Project as a backup, Hubble said. An acre-foot of water is about enough for a family of four for one year.

"As it turns out, we didn't need it — at all," Hubble said. He could not estimate the financial losses to Salt River, because some payments are made in-kind, with water trades and offsets in the future.

So why did the forecasts bust this year? No one really knows.

Wolter said he's troubled that his and other long-range forecasts have been off two years in a row now.

Last year, experts predicted a wet year from Southern California across to Arizona and southern Colorado, because of an El Niño weather system of warmer Pacific water.

Instead, drought worsened in the Southwest, capped by a huge fire season in Southern California.

"So we have two years in a row here where the atmosphere does not behave as we expect," Wolter said. "Maybe global changes are pulling the rug out from underneath us. We may not know the answer for 10 years, . . . but one pet answer is that you should get more variability with global change."

So I suppose we should add busted seasonal forecasts to our growing list of things consistent with predictions of climate change. Making long-term, unverifiable forecasts is surely a lot safer territory than predicting seasonal snowpack!

For further reading:

Pielke, Jr., R. A., 2000: Policy Responses to the 1997/1998 El Niño: Implications for Forecast Value and the Future of Climate Services. Chapter 7 in S. Changnon (ed.), The 1997/1998 El Niño in the United States. Oxford University Press: New York. 172-196. (PDF)

February 13, 2008

The Consistent-With Game: On Climate Models and the Scientific Method

I have been intrigued by the frequent postings over at Real Climate in defense of the predictive ability of climate models. The subtext of course is political – specifically that criticisms of climate models are an unwarranted basis for criticizing climate policies that are justified or defended in terms of the results of climate models. But this defensive stance risks turning climate modeling from a scientific endeavor to a pseudo-scientific exercise in the politics of climate change.

In a post now up, Real Climate explains that cooling of Antarctica is consistent with the predictions of climate models:

A cold Antarctica and Southern Ocean do not contradict our models of global warming.

And we have learned from Real Climate that all possible temperature trends of 8 years in length are consistent with climate models, so too are just about any possible observed temperature trends in the tropics, so too is a broad range of behavior of mid-latitude storms, as is the behavior of tropical sea surface temperatures, so too is a wide range of behaviors of the tropical climate, including ENSO events, and the list goes on.

In fact, there are an infinite number of things that are not inconsistent with the predictions of climate models (or if you prefer, conditional projections). This is one reason why a central element of the scientific method focuses on the falsifiability of hypotheses. According to Wikipedia (emphasis added):

Falsifiability (or refutability or testability) is the logical possibility that an assertion can be shown false by an observation or a physical experiment. That something is "falsifiable" does not mean it is false; rather, it means that it is capable of being criticized by observational reports. Falsifiability is an important concept in science and the philosophy of science. Some philosophers and scientists, most notably Karl Popper, have asserted that a hypothesis, proposition or theory is scientific only if it is falsifiable.

Are climate models falsifiable?

I am not sure. Over at Real Climate I asked the following question on its current thread:

There are a vast number of behaviors of the climate system that are consistent with climate model predictions, along the lines of your conclusion:
"A cold Antarctica and Southern Ocean do not contradict our models of global warming."

I have asked many times and never received an answer here: What behavior of the climate system would contradict models of global warming? Specifically what behavior of what variables over what time scales? This should be a simple question to answer.
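The kind of answer I have in mind need not be complicated. Here is a sketch of a pre-specified rejection rule, with made-up numbers standing in for a real model ensemble:

```python
import statistics

def contradicts(obs_trend, ensemble_trends, n_sigma=2.0):
    """Pre-registered rejection rule: the observation contradicts the
    models if it lies more than n_sigma ensemble standard deviations
    from the ensemble-mean trend."""
    mu = statistics.mean(ensemble_trends)
    sigma = statistics.stdev(ensemble_trends)
    return abs(obs_trend - mu) > n_sigma * sigma

# Hypothetical 8-year trends (deg C/decade) from a model ensemble:
ensemble = [0.12, 0.25, 0.18, 0.30, 0.21, 0.15, 0.27, 0.20]

print(contradicts(0.19, ensemble))   # near the ensemble mean -> consistent
print(contradicts(-0.25, ensemble))  # far below every member -> contradicts
```

The particular threshold is beside the point; what matters is that the variable, the time scale, and the rejection rule are stated before the observations arrive.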


As is often the case, Real Climate lets its commenters provide the easy answers to difficult questions. Here are a few choice responses that Real Climate viewed as contributing to the scientific discussion:

If Pielke wants to contribute constructively to this area of science, he should become a climate modeler himself and discuss such questions in the scientific literature. Otherwise, unless he can present some strong reason for doubting the competence or objectivity of people who do such work, he should listen to people who do work in the area.

. . .
Roger, your question is rather broad and vague. What aspect of the science are you seeking to falsify? See, that is precisely the problem when you have a theory that draws support from such a broad range of phenomena and studies as does the current theory of climate. It is rather like saying, "How would we falsify the theory of evolution?" When a theory has made many predictions and explained many diverse phenomena, it is quite difficult to falsify as a whole. You may be able to look at pieces of it and add to the understanding. Climate science is quite a mature field; future revolutions are quite unlikely. Changes will come but will likely be incremental. It is very hard to envision a development that would significantly alter our understanding of greenhouse forcing unless our whole understanding of climate is radically wrong, and that seems unlikely.

The good news is that there are a range of serious scholars working on the predictive skill of climate models. And there are some folks, myself included, who think that climate models are largely of exploratory or heuristic value, rather than predictive (or consolidative). (And perhaps a post on why this distinction is of crucial importance may be a good idea here.) But you won’t hear about them at Real Climate.

Once you start playing the "consistent with" or "not inconsistent with" game, you have firmly placed yourself into a Popperian view of models as hypotheses to be falsified. And out of fear that legitimate efforts at falsifiability will be used as ammunition by skeptics (and make no mistake, they will) in the politics of climate change, issues of falsification are simply ignored or avoided. A defensive posture is adopted instead. And as Naomi Oreskes and colleagues have observed, this is a good way to mislead with models.

One of the risks of playing the politics game through science is that you risk turning your science – or at least impressions of it – into pseudo-science. If policy makers and the public begin to believe that climate models are truth machines -- i.e., that nothing that has been, will be, or could be observed could possibly contradict what they say -- then a loss of credibility is sure to follow at some point, because experience will show that they are not. This doesn’t mean that humans don’t affect the climate or that we shouldn’t be taking aggressive action, only that accurate prediction of the future is really difficult. (For the new reader: I am an advocate for strong action on both adaptation and mitigation, despite what you might read in the comments at RC.)

So beware the "consistent with" game being played with climate models by activist scientists; it is every bit as misleading as the worst arguments offered by climate skeptics, and a distraction from the challenge of effective policy making on climate change.

For Further Reading:

Pielke, Jr., R.A., 2003: The role of models in prediction for decision, Chapter 7, pp. 113-137 in C. Canham and W. Lauenroth (eds.), Understanding Ecosystems: The Role of Quantitative Models in Observations, Synthesis, and Prediction, Princeton University Press, Princeton, N.J. (PDF)

Sarewitz, D., R.A. Pielke, Jr., and R. Byerly, Jr., (eds.) 2000: Prediction: Science, decision making and the future of nature, Island Press, Washington, DC.

February 01, 2008

Guest Comment: Sharon Friedman, USDA Forest Service - Change Changes Everything

It is true that the calculus of environmental tradeoffs will be inevitably and irretrievably changed by consideration of climate change. Ideas that were convenient (convenient untruths), like “the world worked fine without humans; if we remove their influence it will go back to what it should be,” have continued to provide the implicit underpinning for much scientific effort. In short, people gravitated to the concept that if we studied how things used to be (pre-European settlement) we would know how they "should" be, with no need for discussions of values or involving non-scientists. This despite excellent work such as Dan Botkin's book Discordant Harmonies, which displayed the scientific flaws in this reasoning (in 1992).

What's interesting to me in the recent article, "The Preservation Predicament," by Cornelia Dean in The New York Times, is the implicit assumption that conservationists and biologists will be the ones who determine whether investing in conservation in the Everglades compared to somewhere else, given climate change, is a good idea - perhaps implying that sciences like decision science or economics have little to contribute to the dialog. Not to speak of communities and their elected officials.

I like to quote the IUCN (The World Conservation Union) governance principles:

Indigenous and local communities are rightful primary partners in the development and implementation of conservation strategies that affect their lands, waters, and other resources, and in particular in the establishment and management of protected areas.

Is it more important for scientists to "devise theoretical frameworks for deciding when, how or whether to act" (sounds like decision science) or for folks in a given community, or interested in a given species, to talk about what they think needs to be done and why? There are implicit assumptions about what sciences are the relevant ones and the relationship between science and democracy, which in my opinion need to be debated in the light of day rather than assumed.

Sharon Friedman
Director, Strategic Planning
Rocky Mountain Region
USDA Forest Service

January 26, 2008

Updated IPCC Forecasts vs. Observations

IPCC Verification w-RSS correction.png

Carl Mears from Remote Sensing Systems, Inc. was kind enough to email me to point out that the RSS data that I had shared with our readers a few weeks ago contained an error that RSS has since corrected. The summary figure above is re-plotted with the corrected data (RSS is the red curve). At the time I wrote:

Something fishy is going on. The IPCC and CCSP recently argued that the surface and satellite records are reconciled. This might be the case from the standpoint of long-term linear trends. But the data here suggest that there is some work left to do. The UAH and NASA curves are remarkably consistent. But RSS dramatically contradicts both. UKMET shows 2007 as the coolest year since 2001, whereas NASA has 2007 as the second warmest. In particular estimates for 2007 seem to diverge in unique ways. It'd be nice to see the scientific community explain all of this.

For those interested in the specifics, Carl explained in his email:

The error was simple -- I made a small change in the code ~ 1 year ago that resulted in a ~0.1K decrease in the absolute value of AMSU TLTs, but neglected to reprocess data from 1998-2006, instead only using it for the new (Jan 2007 onward) data. Since the AMSU TLTs are forced to match the MSU TLTs (on average) during the overlap period, this resulted in an apparent drop in TLT for 2007. Reprocessing the earlier AMSU data, thus lowering AMSU TLT by 0.1 from 1998-2006, resulted in small changes in the parameters that are added to the AMSU temperatures to make them match MSU temperatures, and thus the 2007 data is increased by ~0.1K. My colleagues at UAH (Christy and Spencer) were both very helpful in diagnosing the problem.

It is important to note that the RSS correction does not alter my earlier analysis of the IPCC predictions (made in 1990, 1995, 2001, 2007) and various observations. Thanks again to Carl for alerting me to the error and giving me a chance to update the figures with the new information!
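The merge procedure Carl describes can be sketched schematically (with made-up numbers -- this is not the actual RSS processing code): AMSU temperatures are offset so they match MSU on average over the overlap period, so a correction applied to only part of the overlap record shifts that offset, and hence everything merged afterward.

```python
def merge_offset(msu_overlap, amsu_overlap):
    """Offset added to AMSU temperatures so that they match MSU
    on average over the instruments' overlap period."""
    return sum(msu_overlap) / len(msu_overlap) - sum(amsu_overlap) / len(amsu_overlap)

# Made-up overlap-period anomalies (K):
msu  = [0.30, 0.32, 0.31, 0.33]
amsu = [0.20, 0.22, 0.21, 0.23]            # fully reprocessed AMSU

offset = merge_offset(msu, amsu)           # 0.10 K

# Now suppose a code change lowering AMSU by 0.1 K was applied to only
# part of the overlap record:
amsu_mixed = [0.10, 0.12, 0.21, 0.23]      # first half lowered, second half not

bad_offset = merge_offset(msu, amsu_mixed) # 0.15 K: inflated by the half-applied change

# A post-overlap AMSU value merged with the inflated offset comes out
# ~0.05 K warmer than it should:
new_value = 0.25
print((new_value + bad_offset) - (new_value + offset))
```

The sign and size of the resulting error depend on how much of the overlap record carried the change, which is why reprocessing the whole 1998-2006 span restored consistency.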

January 18, 2008

Temperature Trends 1990-2007: Hansen, IPCC, Obs

The figure below shows linear trends in temperature for Jim Hansen's three 1988 scenarios (in shades of blue), for the IPCC predictions issued in 1990, 1995, 2001, and 2007 (in shades of green), and for four sets of observations (in shades of brown). I chose the period 1990-2007 because it is the period of overlap for all of the predictions (except IPCC 2007, which starts in 2000).

temp trends.png

Looking just at these measures of central tendency (i.e., no formal consideration of uncertainties) it seems clear that:

1. Trends in all of Hansen's scenarios are above IPCC 1995, 2001, and 2007, as well as three of the four surface observations.

2. The outlier among the surface observations, and the one consistent with Hansen's Scenarios A and B, is the NASA dataset overseen by Jim Hansen. Whatever the explanation, good scientific practice would have forecasting, and the collection of data used to verify those forecasts, conducted by completely separate groups.

3. Hansen's Scenario A is very similar to IPCC 1990, which makes sense given their closeness in time and the forcing assumptions of the day (i.e., thoughts on business-as-usual did not change much over that period).

The data for the Hansen scenarios were obtained from the ongoing discussion at Climate Audit, and the IPCC and observational data are as described on this site over the past week or so in the forecast verification exercise that I have conducted. This is an ongoing exercise, part of a conversation across the web, so if you have questions or comments, please share them here -- or, if our comment interface is driving you nuts (as it is me), comment over at Climate Audit, where I'll participate in the discussions.
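For those wishing to replicate the basic arithmetic, the comparison above reduces to fitting a least-squares line to each annual series over 1990-2007 and expressing the slope per decade. A minimal sketch, using illustrative made-up anomaly values rather than the actual datasets:

```python
import numpy as np

# Hypothetical annual global temperature anomalies, 1990-2007 (deg C),
# for one prediction series and one observational series (made-up numbers).
years = np.arange(1990, 2008)
predicted = 0.25 + 0.02 * (years - 1990)   # a steady 0.02 C/yr prediction
observed = np.array([0.25, 0.21, 0.12, 0.14, 0.24, 0.28, 0.18, 0.36,
                     0.52, 0.27, 0.27, 0.40, 0.45, 0.47, 0.45, 0.48,
                     0.42, 0.40])

def decadal_trend(years, anomalies):
    """Least-squares slope, converted from deg C/yr to deg C/decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return 10 * slope

print(f"predicted: {decadal_trend(years, predicted):+.2f} C/decade")
print(f"observed:  {decadal_trend(years, observed):+.2f} C/decade")
```

Running the same function over each prediction and observation series yields the set of trends compared in the figure.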

January 16, 2008

UKMET Short Term Global Temperature Forecast

UKMET Short Term Forecast.png

This figure shows a short-term forecast of global average temperature issued by the UK Meteorological Service, with some annotations that I've added and describe below. The forecast is discussed in this PDF, where you can find the original figure. This sort of forecast should be applauded, because it allows for learning based on experience. Such forecasts, whether eventually shown to be wrong or right, can serve as powerful tests of knowledge and predictive skill. Now on to the figure itself.

The figure is accompanied by this caption:

Observations of global average temperature (black line) compared with decadal ‘hindcasts’ (10-year model simulations of the past, white lines and red shading), plus the first decadal prediction for the 10 years from 2005. Temperatures are plotted as anomalies (relative to 1979–2001). As with short-term weather forecasts there remains some uncertainty in our predictions of temperature over a decade. The red shading shows our confidence in predictions of temperature in any given year. If there are no volcanic eruptions during the forecast period, there is a 90% likelihood of the temperature being within the shaded area.

The figure shows both hindcasts and a forecast. I've shaded the hindcasts in grey. I've added the green curve which is my replication of the global temperature anomalies from the UKMET HADCRUT3 dataset extended to 2007. I've also plotted as a blue dot the prediction issued by UKMET for 2008, which is expected to be indistinguishable from the temperature of years 2001 to 2007 (which were indistinguishable from each other). The magnitude of the UKMET forecast over the next decade is almost exactly identical to the IPCC AR4 prediction over the same time period, which I discussed last week.

I have added the pink star at 1995 to highlight the advantages offered by hindcasting. Imagine if the model realization begun in 1985 had been continued beyond 1995, rather than being re-run after 1995. Clearly, all subsequent observed temperatures would have been well below that 1985 curve. One important reason for this is of course the eruption of Mt. Pinatubo, which was not predicted. And that is precisely the point -- prediction is really hard, especially when conducted in the context of open systems, and, as is often said, especially about the future. Our ability to explain why a prediction was wrong does not make that prediction right, a point often lost in the debate about climate change.

Again, kudos to the UK Met Service. They've had the fortitude to issue a short term prediction related to climate change. Other scientific bodies should follow this lead. It is good for science, and good for the use of science in decision making.

January 15, 2008

Verification of IPCC Sea Level Rise Forecasts 1990, 1995, 2001

Here is a graph showing IPCC sea level rise forecasts from the FAR (1990), SAR (1995), and TAR (2001).

IPCC Sea Level.png

And here are the sources:

IPCC Sea Level Sources.png

Observational data can be found here. Thanks to my colleague Steve Nerem.

Unlike the IPCC's temperature forecasts, its sea level rise forecasts show no indication that scientists have a handle on the issue. As with temperature, the IPCC dramatically decreased its predictions of sea level rise between its first (1990) and second (1995) assessment reports. It then nudged its prediction down a very small amount in its 2001 report. The observational data fall between the 1990 and the 1995/2001 assessments.

Last year Rahmstorf et al. published a short paper in Science comparing observations of temperature with IPCC 2001 predictions (Aside: it is remarkable that Science allowed them to ignore IPCC 1990 and 1995). Their analysis is completely consistent with the temperature and sea level rise verifications that I have shown. On sea level rise they concluded:

Previous projections, as summarized by IPCC, have not exaggerated but may in some respects even have underestimated the change, in particular for sea level.

This statement is only true if one ignores the 1990 IPCC report, which overestimated both sea level rise and temperature. Rahmstorf et al.'s interpretation of the results is little more than spin, as it would have been equally valid to conclude, based on the 1990 report:

Previous projections, as summarized by IPCC, have not underestimated but may in some respects even have exaggerated the change, both for sea level and temperature.

Rather than spin the results, I conclude that the ongoing debate about future sea level rise is entirely appropriate. The fact that the IPCC has been unsuccessful in predicting sea level rise does not mean that things are worse or better, but simply that scientists clearly do not have a handle on this issue and are unable to predict sea level changes on a decadal scale. The lack of predictive accuracy on that scale does not lend optimism about the prospects for accuracy on the multi-decadal scale. Consider that the 2007 IPCC report took a pass on predicting near-term sea level rise, choosing instead to focus 90 years out (as far as I am aware; anyone who knows differently, please let me know).

This state of affairs should give no comfort to anyone: over the 21st century sea level is expected to rise, anywhere from an unnoticeable amount to the catastrophic, and scientists have essentially no ability to predict this rise, much less the effects of various climate policies on it. As we've said here before, this is a cherrypicker's delight and a policy maker's nightmare. It'd be nice to see the scientific community engage in a bit less spin and a bit more comprehensive analysis.

January 14, 2008

James Hansen on One Year's Temperature

NASA's James Hansen just sent around a commentary (in PDF here) on the significance of the 2007 global temperature in the context of the long-term temperature record that he compiles for NASA. After Real Climate went nuts over how misguided it is to engage in a discussion of eight years' worth of temperature records, I can't wait to see them lay into Jim Hansen for asserting that one year's data is of particular significance (and also for not graphing uncertainty ranges):

The Southern Oscillation and the solar cycle have significant effects on year-to-year global temperature change. Because both of these natural effects were in their cool phases in 2007, the unusual warmth of 2007 is all the more notable.

But maybe data that confirms previously held beliefs is acceptable no matter how short the record, while data that does not is unacceptable, no matter how long the record. That would be confirmation bias, wouldn't it?

Anyway, Dr. Hansen does not explain why the 2007 NASA data runs counter to that of UKMET, UAH or RSS, but does manage to note the "incorrect" 2007 UKMET prediction of a record warm year. Dr. Hansen issues his own prediction:

. . . it is unlikely that 2008 will be a year with an unusual global temperature change, i.e., it is likely to remain close to the range of (high) values exhibited in 2002-2007. On the other hand, when the next El Nino occurs it is likely to carry global temperature to a significantly higher level than has occurred in recent centuries, probably higher than any year in recent millennia. Thus we suggest that, barring the unlikely event of a large volcanic eruption, a record global temperature clearly exceeding that of 2005 can be expected within the next 2-3 years.

I wonder if this holds just for the NASA dataset put together by Dr. Hansen or for all of the temperature datasets.

Updated Chart: IPCC Temperature Verification

I've received some email comments suggesting that my use of the 1992 IPCC Supplement as the basis for the IPCC 1990 temperature predictions was "too fair" to the IPCC, because the IPCC actually reduced its temperature projections from 1990 to 1992. In addition, Gavin Schmidt and a commenter over at Climate Audit did not like my use of the 1992 report. So I am going to take full advantage of the rapid feedback of the web to provide an updated figure, based on IPCC 1990 -- specifically, Figure A.9, p. 336. In other words, I no longer rely on the 1992 supplement, and have simply gone back to the original IPCC 1990 FAR. Here then is that updated figure:

IPCC Verification 90-95-01-07 vs Obs.png

Thanks all for the feedback!

Pachauri on Recent Climate Trends

Last week scientists at the Real Climate blog gave their confirmation bias synapses a workout by explaining that eight years of climate data is meaningless, and that people who pay any attention to recent climate trends are "misguided." I certainly agree that we should exercise caution in interpreting short-duration observations; nonetheless, we should always be trying to explain (rather than simply discount) observational evidence, to avoid the trap of confirmation bias.

So it was interesting to see IPCC Chairman Rajendra Pachauri exhibit "misguided" behavior when he expressed some surprise about recent climate trends in The Guardian:

Rajendra Pachauri, the head of the U.N. Panel that shared the 2007 Nobel Peace Prize with former U.S. Vice President Al Gore, said he would look into the apparent temperature plateau so far this century.

"One would really have to see on the basis of some analysis what this really represents," he told Reuters, adding "are there natural factors compensating?" for increases in greenhouse gases from human activities.

He added that sceptics about a human role in climate change delighted in hints that temperatures might not be rising. "There are some people who would want to find every single excuse to say that this is all hogwash," he said.

Ironically, by suggesting that there might be some significance to recent climate trends, Dr. Pachauri has provided ammunition to the very same skeptics he disparages. Perhaps Real Climate will explain how misguided he is, but somehow I doubt it.

For the record, I accept the conclusions of IPCC Working Group I. I don't know how to interpret climate observations of the early 21st century, but believe that there are currently multiple valid hypotheses. I also think that we can best avoid confirmation bias, and other cognitive traps, by making explicit predictions of the future and testing them against experience. The climate community, or at least its activist wing, studiously avoids forecast verification. It just goes to show: confirmation bias is a more comfortable state than dissonance -- and that goes for people on all sides of the climate debate.

Verification of IPCC Temperature Forecasts 1990, 1995, 2001, and 2007

Last week I began an exercise in which I sought to compare global average temperature predictions with the actual observed temperature record. With this post I'll share my complete results.

Last week I showed a comparison of the 2007 IPCC temperature forecasts (which actually began in 2000, so they were really forecasts of data that had already been observed). Here is that figure.

surf-sat vs. IPCC.png

Then I showed a figure with a comparison of the 1990 predictions made by the IPCC in 1992 with actual temperature data. Some folks misinterpreted the three curves that I showed from the IPCC as an uncertainty bound. They were not. Instead, they were forecasts conditional on different assumptions about climate sensitivity, with the middle curve showing the prediction for a 2.5 degree climate sensitivity, which is lower than what scientists currently believe to be the most likely value. So I have reproduced that graph below without the 1.5 and 4.5 degree climate sensitivity curves.

IPCC 1990 verification.png
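The distinction between conditional forecasts and uncertainty bounds is worth belaboring. Under the standard logarithmic approximation for CO2 forcing -- a textbook simplification, not the full scenario calculations behind the IPCC curves -- each assumed climate sensitivity generates its own forecast:

```python
import math

def equilibrium_warming(co2_ppm, sensitivity, co2_preindustrial=280.0):
    """Equilibrium warming (deg C) at a given CO2 concentration, under
    the standard logarithmic approximation: dT = S * log2(C / C0)."""
    return sensitivity * math.log(co2_ppm / co2_preindustrial, 2)

# Same CO2 level, three assumed sensitivities: three conditional
# forecasts, not an uncertainty band around a single forecast.
for s in (1.5, 2.5, 4.5):
    print(f"S = {s}: {equilibrium_warming(560, s):.2f} C at doubled CO2")
```

At doubled CO2 the warming simply equals the assumed sensitivity, which is why the three curves fan out rather than bracketing a central estimate with error bars.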

Now here is a similar figure for the 1995 forecast. The IPCC in 1995 dramatically lowered its global temperature predictions, primarily due to the inclusion of consideration of atmospheric aerosols, which have a cooling effect. You can see the 1995 IPCC predictions on pp. 322-323 of its Second Assessment Report. Figure 6.20 shows the dramatic reduction of temperature predictions through the inclusion of aerosols. The predictions themselves can be found in Figure 6.22, and are the values that I use in the figure below, which also use a 2.5 degree climate sensitivity, and are also based on the IS92e or IS92f scenarios.

IPCC 1995 Verification.png

In contrast to the 1990 prediction, the 1995 prediction looks spot on. It is worth noting that the 1995 prediction began in 1990, and so includes observations that were known at the time of the prediction.

In 2001, the IPCC nudged its predictions up a small amount. The prediction is also based on a 1990 start, and can be found in the Third Assessment Report here. The most relevant scenario is A1FI, and the average climate sensitivity of the models used to generate these predictions is 2.8 degrees, which may be large enough to account for the difference between the 1995 and 2001 predictions. Here is a figure showing the 2001 forecast verification.

IPCC 2001 Verification.png

Like 1995, the 2001 figure looks quite good in comparison to the actual data.

Now we can compare all four predictions with the data, but first here are all four IPCC temperature predictions (1990, 1995, 2001, 2007) on one graph.

IPCC Predictions 90-95-01-07.png

IPCC issued its first temperature prediction in 1990 (I actually use the prediction from the supplement to the 1990 report issued in 1992). Its 1995 report dramatically lowered this prediction. The 2001 report nudged this up a bit, and 2007 elevated the entire curve another small increment, keeping the slope the same. My hypothesis for what is going on here is that the various changes over time to the IPCC predictions reflect incrementally improved fits to observed temperature data, as more observations have come in since 1990.

In other words, the early 1990s showed how important aerosols were in the form of dramatically lowered temperatures (after Mt. Pinatubo), and immediately put the 1990 predictions well off track. So the IPCC recognized the importance of aerosols and lowered its predictions, putting the 1995 IPCC back on track with what had happened with the real climate since its earlier report. With the higher observed temperatures in the late 1990s and early 2000s the slightly increased predictions of temperature in 2001 and 2007 represented better fits with observations since 1995 (for the 2001 report) and 2001 (for the 2007 report).

Imagine you were asked to issue a prediction for the temperature trend over the next week, and were allowed to update that prediction every second day. Regardless of where you think things will eventually end up, you'd be foolish not to include what you've observed in producing your mid-week updates. Was this behavior by the IPCC intentional or simply the inevitable result of using a prediction start-date years before the forecast was being issued? I have no idea. But the lesson for the IPCC should be quite clear: all predictions (and projections) that it issues should begin no earlier than the year that the prediction is being made.
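The mid-week analogy can be made concrete with a toy sketch. All numbers below are invented for illustration; nothing here is IPCC data:

```python
import numpy as np

# Toy version of the mid-week update: a noisy linear "temperature" series
# whose trend is re-fit each time new observations arrive.
rng = np.random.default_rng(0)
days = np.arange(7.0)
obs = 0.2 * days + rng.normal(0.0, 0.3, 7)  # hypothetical observations

slopes = []
for n in (3, 5, 7):  # re-issue the forecast after days 3, 5, and 7
    slope = float(np.polyfit(days[:n], obs[:n], 1)[0])
    slopes.append(slope)
    print(f"after {n} observations, fitted trend = {slope:+.2f} per day")
```

Each refit folds the latest observations into the forecast, so later updates inevitably track the realized data, which is exactly why a prediction whose start-date precedes its issue date looks better than it should.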

And now the graph that you have all been waiting for. Here is a figure showing all four IPCC predictions with the surface (NASA, UKMET) and satellite (UAH, RSS) temperature record.

IPCC Verification 90-95-01-07 vs Obs.png

You can see on this graph that the 1990 prediction was obviously much higher than the other three, and you can also clearly see how the IPCC temperature predictions have crept up as observations showed increasing temperatures from 1995-2005. A simple test of my hypothesis is as follows: in the next IPCC report, if temperatures from 2005 to the next report fall below the 2007 IPCC prediction, then the next IPCC will lower its predictions. Similarly, if values fall above that level, then the IPCC will increase its predictions.

What to take from this exercise?

1. The IPCC does not make forecast verification an easy task. The IPCC does not clearly identify what exactly it is predicting nor the variables that can be used to verify those predictions. Like so much else in climate science this leaves evaluations of predictions subject to much ambiguity, cherrypicking, and seeing what one wants to see.

2. The IPCC actually has a pretty good track record in its predictions, especially after it dramatically reduced its 1990 prediction. This record is clouded by an appearance of post-hoc curve fitting. In each of 1995, 2001, and 2007 the changes to the IPCC predictions had the net result of improving predictive performance with observations that had already been made. This is a bit like predicting today's weather at 6PM.

3. Because the IPCC clears the slate every 5-7 years with a new assessment report, it guarantees that its most recent predictions can never be rigorously verified, because, as climate scientists will tell you, 5-7 years is far too short to say anything about climate predictions. Consequently, the IPCC should not predict and then move on, but pay close attention to its past predictions and examine why they succeed or fail. As new reports are issued the IPCC should go to great lengths to place its new predictions on an apples-to-apples basis with earlier predictions. The SAR did a nice job of this; more recent reports have not. A good example of how not to update predictions is the sea level rise predictions between the TAR and AR4, which are not at all apples-to-apples.

4. Finally, and I repeat myself, the IPCC should issue predictions for the future, not the recent past.

Appendix: Checking My Work

The IPCC AR4 Technical Summary includes a figure (Figure TS.26) that shows a verification of sorts. I use that figure as a comparison to what I've done. Here is that figure, with a number of my annotations superimposed, and explained below.

IPCC Check.png

Let me first say that the IPCC probably could not have produced a more difficult-to-interpret figure (I see Gavin Schmidt at Real Climate has put out a call for help in understanding it). I have annotated it with letters and some lines and I explain them below.

A. I added this thick horizontal blue line to indicate the 1990 baseline. This line crosses a thin blue line that I placed to represent 2007.

B. This thin blue line crosses the vertical axis where my 1995 verification value lies, represented by the large purple dot.

C. This thin blue line crosses the vertical axis where my 1990 verification value lies, represented by the large green dot. (My 2001 verification is represented by the large light blue dot.)

D. You can see that my 1990 verification value falls exactly on a line extended from the upper bound of the IPCC curve. I have also extended the IPCC mid-range curve as well (note that my superimposed extension falls a tiny bit higher than it should). Why is this? I'm not sure, but one answer is that the uncertainty range presented by the IPCC represents the scenario range, but of course in the past there is no scenario uncertainty. Since emissions have fallen at the high end of the scenario space, if my interpretation is correct, then my verification is consistent with that of the IPCC.

E. For the 1995 verification, you can see that similarly my value falls exactly on a line extended from the upper end of the IPCC range. This would also be consistent with the IPCC presenting the uncertainty range as representing alternative scenarios. The light blue dot is similarly at the upper end of the blue range. What should not be missed is that the relative differences between my verifications and those of the IPCC are just about identical.

A few commenters over at Real Climate, including Gavin Schmidt, have suggested that such figures need uncertainty bounds on them. In general, I agree, but I'd note that none of the model predictions presented by the IPCC (B1, A1B, A2, Commitment -- note that all of these understate reality since emissions are following A1FI, the highest, most closely) show any model uncertainty whatsoever (nor any observational uncertainty, nor multiple measures of temperature). Surely with the vast resources available to the IPCC, they could have done a much more rigorous job of verification.

In closing, I guess I'd suggest to the IPCC that this sort of exercise should be taken up as a formal part of its work. There are many, many other variables (and relationships between variables) that might be examined in this way. And they should be.

January 11, 2008

Real Climate's Two Voices on Short-Term Climate Fluctuations

Real Climate has been speaking with two voices on how to compare observations of climate with models. Last August they asserted that one-year's sea ice extent could be compared with models:

A few people have already remarked on some pretty surprising numbers in Arctic sea ice extent this year (the New York Times has also noticed). The minimum extent is usually in early to mid September, but this year, conditions by Aug 9 had already beaten all previous record minima. Given that there is at least a few more weeks of melting to go, it looks like the record set in 2005 will be unequivocally surpassed. It could be interesting to follow especially in light of model predictions discussed previously.

Today, they say that looking at 8 years of temperature records is misguided:

John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year's numbers imply that 'Global Warming has stopped' or that it is 'taking a break' (Uli Kulke, Die Welt)). However, as most of our readers will realise, these comparisons are flawed since they basically compare long term climate change to short term weather variability.

So according to Real Climate one-year's ice extent data can be compared to climate models, but 8 years of temperature data cannot.

Right. This is why I believe that whatever one's position on climate change, everyone should agree that rigorous forecast verification is needed.

Post Script. I see at Real Climate commenters are already calling me a "skeptic" for even discussing forecast verification. For the record I accept the consensus of the IPCC WGI. If asking questions about forecast verification is to be taboo, then climate science is in worse shape than I thought.

January 10, 2008

Verification of 1990 IPCC Temperature Predictions

1990 IPCC verification.png

I continue to receive good suggestions and positive feedback on the verification exercise that I have been playing around with this week. Several readers have suggested that a longer view might be more appropriate. So I took a look at the IPCC's First Assessment Report that had been sitting on my shelf, and tried to find its temperature prediction starting in 1990. I actually found what I was looking for in a follow up document: Climate Change 1992: The Supplementary Report to the IPCC Scientific Assessment (not online that I am aware of).

In conducting this type of forecast verification, one of the first things to do is to specify which emissions scenario most closely approximated what has actually happened since 1990. As we have discussed here before, emissions have been occurring at the high end of the various scenarios used by the IPCC. So in this case I have used IS92e or IS92f (the differences are too small to be relevant to this analysis), which are discussed beginning on p. 69.

With the relevant emissions scenario, I then went to the section that projected future temperatures, and found this in Figure Ax.3 on p. 174. From that graph I took the 100-year temperature change and converted it into an annual rate. At the time the IPCC presented estimates for climate sensitivities of 1.5 degrees, 2.5 degrees, and 4.5 degrees, with 2.5 degrees identified as a "best estimate." In the figure above I have estimated the 1.5 and 4.5 degree values based on the ratios taken from graph Ax.2, but I make no claim that they are precise. My understanding is that climate scientists today think that climate sensitivity is around 3.0 degrees, so if one were to re-do the 1990 prediction with a climate sensitivity of 3.0 the resulting curve would be a bit above the 2.5 degree curve shown above.
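The rate conversion is simple arithmetic; the sketch below uses a placeholder 100-year change rather than the value actually digitized from Figure Ax.3:

```python
# Convert a century-scale projection into an annual warming rate and
# build the corresponding prediction curve from a 1990 start.
# delta_t_100yr is a placeholder, not the value read off Figure Ax.3.
delta_t_100yr = 3.5                      # degrees C over 100 years
annual_rate = delta_t_100yr / 100.0      # degrees C per year

curve = [(year - 1990) * annual_rate for year in range(1990, 2011)]
print(f"annual rate: {annual_rate:.3f} C/yr; 2010 value: {curve[-1]:.2f} C")
```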

On the graph you will also see the now familiar temperature records from two satellite and two surface analyses. It seems pretty clear that the IPCC in 1990 over-forecast temperature increases, and this is confirmed by the most recent IPCC report (Figure TS.26), so it is not surprising.

I'll move on to the predictions of the Second Assessment Report in a follow up.

January 09, 2008

Forecast Verification for Climate Science, Part 3

By popular demand, here is a graph showing the two main analyses of global temperatures from satellite, from RSS and UAH, as well as the two main analyses of global temperatures from the surface record, UKMET and NASA, plotted with the temperature predictions reported in IPCC AR4, as described in Part 1 of this series.

surf-sat vs. IPCC.png

Some things to note:

1) I have not graphed observational uncertainties, but I'd guess that they are about +/-0.05 (and someone please correct me if this is wildly off), and their inclusion would not alter the discussion here.

2) A feast for cherrypickers. One can arrive at whatever conclusion one wants with respect to the IPCC predictions. Want the temperature record to be consistent with IPCC? OK, then you like NASA. How about inconsistent? Well, then you are a fan of RSS. On the fence? Well, UAH and UKMET serve that purpose pretty well.

3) Something fishy is going on. The IPCC and CCSP recently argued that the surface and satellite records are reconciled. This might be the case from the standpoint of long-term linear trends. But the data here suggest that there is some work left to do. The UAH and NASA curves are remarkably consistent. But RSS dramatically contradicts both. UKMET shows 2007 as the coolest year since 2001, whereas NASA has 2007 as the second warmest. In particular, estimates for 2007 seem to diverge in unique ways. It'd be nice to see the scientific community explain all of this.

4) All show continued warming since 2000!

5) From the standpoint of forecast verification, which is where all of this began, the climate community really needs to construct a verification dataset for global temperature and other variables that will be (a) the focus of predictions, and (b) the ground truth against which those predictions will be verified.

Absent an ability to rigorously evaluate forecasts, in the presence of multiple valid approaches to observational data we run the risk of falling into all sorts of cognitive traps -- such as availability bias and confirmation bias. So here is a plea to the climate community: when you say that you are predicting something like global temperature or sea ice extent or hurricanes -- tell us in specific detail what those variables are, who is measuring them, and where to look in the future to verify the predictions. If weather forecasters, stock brokers, and gamblers can do it, then you can too.

January 08, 2008

Forecast Verification for Climate Science, Part 2

Yesterday I posted a figure showing how surface temperatures compare with IPCC model predictions. I chose to use the RSS satellite record under the assumption that the recent IPCC and CCSP reports were both correct in their conclusions that the surface and satellite records have been reconciled. It turns out that my reliance on the IPCC and CCSP may have been mistaken.

I received a few comments from people suggesting that I had selectively used the RSS data because it showed different results than other global temperature datasets. My first reaction to this was to wonder how the different datasets could show different results if the IPCC was correct when it stated (PDF):

New analyses of balloon-borne and satellite measurements of lower- and mid-tropospheric temperature show warming rates that are similar to those of the surface temperature record and are consistent within their respective uncertainties, largely reconciling a discrepancy noted in the TAR.

But I decided to check for myself. I went to the NASA GISS and downloaded its temperature data and scaled to a 1980-1999 mean. I then plotted it on the same scale as the RSS data that I shared yesterday. Here is what the curves look like on the same scale.

RSS v. GISS.png

Well, I'm no climate scientist, but they sure don't look reconciled to me, especially 2007. (Any suggestions on the marked divergence in 2007?)
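For anyone who wants to repeat the comparison, the re-baselining step is just subtracting the reference-period mean. A minimal sketch with made-up numbers (not the actual GISS series):

```python
import numpy as np

# Shift an anomaly series onto a 1980-1999 baseline by subtracting the
# mean over that reference window. The series here is illustrative only.
years = np.arange(1975, 2008)
anoms = 0.02 * (years - 1975) + 0.1     # toy series on its native baseline

ref = (years >= 1980) & (years <= 1999)
rebased = anoms - anoms[ref].mean()     # now averages to zero over 1980-1999

print(f"1980-1999 mean after re-baselining: {rebased[ref].mean():.4f}")
```

Once two series share a baseline, their anomalies are directly comparable year by year, which is all the RSS-versus-GISS overlay above requires.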

What does this mean for the comparison with IPCC predictions? I have overlaid the GISS data on the graph I prepared yesterday.

AR4 Verificantion Surf Sat.png

So using the NASA GISS global temperature data for 2000-2007 results in observations that are consistent with the IPCC predictions, but contradict the IPCC's conclusion that the surface and satellite temperature records are reconciled. Using the RSS data results in observations that are (apparently) inconsistent with the IPCC predictions.

I am sure that in conducting such a verification some will indeed favor the dataset that best confirms their desired conclusions. But, it would be ironic indeed to see scientists now abandon RSS after championing it in the CCSP and IPCC reports. So, I'm not sure what to think.

Is it really the case that the surface and satellite records are again at odds? What dataset should be used to verify climate forecasts of the IPCC?

Answers welcomed.

January 07, 2008

Forecast Verification for Climate Science

Last week I asked a question:

What behavior of the climate system could hypothetically be observed over the next 1, 5, 10 years that would be inconsistent with the current consensus on climate change?

We didn’t have much discussion on our blog, perhaps in part due to our ongoing technical difficulties (which I am assured will be cleared up soon). But John Tierney at the New York Times sure received an avalanche of responses, many of which seemed to excoriate him simply for asking the question, and none that really engaged the question.

I did receive a few interesting replies by email from climate scientists. Here is one of the most interesting:

The IPCC reports, both AR4 (see Chapter 10) and TAR, are full of predictions made starting in 2000 for the evolution of surface temperature, precipitation, precipitation intensity, sea ice extent, and on and on. It would be a relatively easy task for someone to begin tracking the evolution of these variables and compare them to the IPCC’s forecasts. I am not aware of anyone actually engaged in this kind of climate forecast verification with respect to the IPCC, but it is worth doing.

So I have decided to take him up on this and present an example of what such a verification might look like. I have heard some claims lately that global warming has stopped, based on temperature trends over the past decade. So global average temperature seems like as good a place as any to provide an example.

I begin with the temperature trends. I have decided to use the satellite record provided by Remote Sensing Systems, mainly because of the easy access of its data. But the choice of satellite versus surface global temperature dataset should not matter, since these have been reconciled according to the IPCC AR4. Here is a look at the satellite data starting in 1998 through 2007.

RSS TLT 1998-2007 Monthly.png

This dataset starts with the record 1997/1998 ENSO event which boosted temperatures a good deal. It is interesting to look at, but probably not the best place to start for this analysis. A better place to start is with 2000, but not because of what the climate has done, but because this is the baseline used for many of the IPCC AR4 predictions.

Before proceeding, a clarification must be made between a prediction and a projection. Some have claimed that the IPCC doesn’t make predictions, it only makes projections across a wide range of emissions scenarios. This is just a fancy way of saying that the IPCC doesn’t predict future emissions. But make no mistake, it does make conditional predictions for each scenario. Enough years have passed for us to be able to say that global emissions have been increasing at the very high end of the family of scenarios used by the IPCC (closest to A1FI for those scoring at home). This means that we can zero in on what the IPCC predicted (yes, predicted) for the A1FI scenario, which has best matched actual emissions.

So how has global temperature changed since 2000? Here is a figure showing the monthly values, indicating that while there has been a decrease in average global temperature of late, the linear trend since 2000 is still positive.

RSS TLT 2000-2007 Monthly.png

But monthly values are noisy, and not comparable with anything produced by the IPCC, so let’s take a look at annual values.

RSS 2000-2007 Annual.png

The annual values result in a curve that looks a bit like an upwards sloping letter M.
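The "still positive" claim rests on an ordinary least-squares fit. Here is how that check looks with illustrative annual anomalies (placeholders shaped like an upward-sloping M, not the actual RSS values):

```python
import numpy as np

# Fit a linear trend to eight annual anomalies, 2000-2007.
# The values below are invented for illustration, not RSS data.
years = np.arange(2000, 2008)
anoms = np.array([0.08, 0.20, 0.31, 0.33, 0.28, 0.33, 0.26, 0.16])

slope, intercept = np.polyfit(years, anoms, 1)
print(f"linear trend 2000-2007: {slope:+.4f} C/yr")
```

A series can dip at the end and still carry a positive fitted trend, which is the point of looking at the annual values rather than the last year or two.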

The model results produced by the IPCC are not readily available, so I will work from their figures. In the IPCC AR4 report, Figure 10.26 on p. 803 of Chapter 10 of the Working Group I report (here in PDF) provides predictions of future temperature as a function of emissions scenario. The one relevant for my purposes can be found in the bottom row (degrees C above the 1980-2000 mean) and second column (A1FI).

I have zoomed in on that figure, and overlaid the RSS temperature trends 2000-2007 which you can see below.

AR4 Verification Example.png

Now a few things to note:

1. The IPCC temperature increase is relative to a 1980 to 2000 mean, whereas the RSS anomalies are off of a 1979 to 1998 mean. I don’t expect the differences to be that important in this analysis, particularly given the blunt approach to the graph, but if someone wants to show otherwise, I’m all ears.

2. It should be expected that the curves are not equal in 2000. The anomaly for 2000 according to RSS is 0.08, hence the red curve begins at that value. Figure 10.26 on p. 803 of Chapter 10 of the Working Group I report actually shows observed temperatures for a few years beyond 2000, and by zooming in on the graph in the lower left hand corner of the figure one can see that 2000 was in fact below the A1B curve.

So it appears that temperature trends since 2000 are not closely following the most relevant prediction of the IPCC. Does this make recent temperature trends inconsistent with the IPCC? I have no idea, and that is not the point of this post. I'll leave it to climate scientists to tell us the significance. I assume that many climate scientists will say that there is no significance to what has happened since 2000, and perhaps emphasize that predictions of global temperature are more certain in the longer term than shorter term. But that is not what the IPCC figure indicates. In any case, 2000-2007 may not be sufficient time for climate scientists to become concerned that their predictions are off, but I’d guess that at some point, if observations don’t match predictions they might be of some concern. Alternatively, if observations square with predictions, then this would add confidence.

Before one dismisses this exercise as an exercise in randomness, it should be observed that in other contexts scientists have associated short-term trends with longer-term predictions. In fact, one need look no further than the record 2007 summer melt in the Arctic, which was way beyond anything predicted by the IPCC, reaching close to 3 million square miles less than the 1978-2000 mean. The summer anomaly was much greater than any of the IPCC predictions on this time scale (which can be seen in IPCC AR4 Chapter 10 Figure 10.13 on p. 771). This led many scientists to claim that because the observations were inconsistent with the models, there should be heightened concern about climate change. Maybe so. But if one variable can be examined for its significance with respect to long-term projections, then surely others can as well.

What I’d love to see is a place where the IPCC predictions for a whole range of relevant variables are provided in quantitative fashion, and as corresponding observations come in, they can be compared with the predictions. This would allow for rigorous evaluations of both the predictions and the actual uncertainties associated with those predictions. Noted atmospheric scientist Roger Pielke, Sr. (my father, of course) has suggested that three variables be looked at: lower tropospheric warming, atmospheric water vapor content, and oceanic heat content. And I am sure there are many other variables worth looking at.

Forecast evaluations also confer another advantage – they would help to move beyond the incessant arguing about this or that latest research paper and focus on true tests of the fidelity of our ability to forecast future states of the climate system. Making predictions and then comparing them to actual events is central to the scientific method. So everyone in the climate debate, whether skeptical or certain, should welcome a focus on verification of climate forecasts. If the IPCC is indeed settled science, then forecast verifications will do nothing but reinforce that conclusion.

For further reading:

Pielke, Jr., R.A., 2003: The role of models in prediction for decision, Chapter 7, pp. 113-137 in C. Canham and W. Lauenroth (eds.), Understanding Ecosystems: The Role of Quantitative Models in Observations, Synthesis, and Prediction, Princeton University Press, Princeton, N.J. (PDF)

Sarewitz, D., R.A. Pielke, Jr., and R. Byerly, Jr., (eds.) 2000: Prediction: Science, decision making and the future of nature, Island Press, Washington, DC. (link) and final chapter (PDF).

December 17, 2007

A Second Response from RMS

A few weeks ago I provided a midterm evaluation of the RMS 2006-2010 US hurricane damage prediction. RMS (and specifically Steve Jewson) responded and has subsequently (and graciously) sent in a further response to a question that I posed:

Does RMS stand by its spring 2006 forecast that the period 2006-2010 would see total insured losses 40% above the historical average?

The RMS response appears below, and I'll respond in the comments:

Yes, we do stand by that forecast, although I should point out that we update the forecast every year, so the 2005 forecast (for 2006-2010) is now 2 years out of date. Apart from questions of forecast accuracy, there's no particular reason for any of our users to use the 2005 forecast at this point (that would be like using a weather forecast from last week). It is, of course, important to understand the correct mathematical interpretation of the forecast. In your original post you interpreted the forecast incorrectly in a couple of ways. Over the last 2-3 years we've issued this forecast to hundreds of insurance companies, and discussed it with dozens of scientists around the world, and none of them have misinterpreted it, so I don't think our communication of the intended meaning of the forecast is unclear. However, some explanation is required and I realise that you probably haven't had the benefit of hearing one of the many presentations we've given on this subject. The two things that need clarifying are: 1) This forecast is a best estimate of the mean of a very wide distribution of possible losses. Because of this no-one should expect to be able to verify or falsify the forecast in a short period of time.

This is a typical property of forecasts in situations with high levels of uncertainty. I think it's pretty well understood by the users of the forecast.

One curious property of the loss distribution is that it is very skewed. As a result the real losses would be expected to fall below the mean in most years. This is compensated for in the average by occasional years with very high losses.

In fact the forecast that we give to the insurance industry is a completely probabilistic forecast, that estimates the entire distribution of possible losses, but it's a bit difficult to put that kind of information into a press release, or on a blog.

2) Your conditional interpretation of the forecast is not mathematically correct. Neither RMS, nor our clients, expect the losses to increase in 2008-2010 in the way you suggest just because they were low in 2006-2007. I can't think of any reason why that would be the case. To get the (roughly) correct interpretation for 2008-2010 you have to multiply the original 5 year mean values by 0.6. That's what the users of our forecast do when they want that number.

I hope that clarifies the issues a bit.
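Jewson's first point, that a heavily skewed loss distribution puts most years below the mean, is easy to illustrate with a simulation. The lognormal parameters below are arbitrary, chosen only to produce strong skew; they have nothing to do with RMS's actual model:

```python
import random

# Simulate many years of annual losses from a heavily skewed (lognormal)
# distribution and count how often a year falls below the long-run mean.
random.seed(1)
losses = [random.lognormvariate(0.0, 1.5) for _ in range(100_000)]

mean = sum(losses) / len(losses)
below = sum(loss < mean for loss in losses) / len(losses)
print(f"fraction of years below the mean: {below:.2f}")
```

With this much skew, roughly three-quarters of simulated years come in below the mean, the shortfall being made up by rare, very large loss years. That is why two quiet years, by themselves, neither verify nor falsify a forecast of the mean.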

December 07, 2007

RMS Response to Forecast Evaluation

Robert Muir-Woods of RMS has graciously provided for posting a response to the thoughts on forecast verification that I posted earlier this week. Here are his comments:

Scientifically it is of course not possible to draw any conclusion from the occurrence of two years without hurricane losses in the US, in particular following two years with the highest level of hurricane losses ever recorded and the highest ever number of severe hurricanes making landfall in a two year period. Even including 2006 and 2007, average annualized losses for the past five years are significantly higher than the long term historical average (and maybe you should also show this five year average on your plot?)

The basis for catastrophe loss modeling is that one can separate out the question of activity rate from the question as to the magnitude of losses that will be generated by the occurrence of hurricane events. In generating average annualized losses we need to explore the full 'virtual spectrum' of all the possible events that can occur. The question about current activity rates is a difficult one, which is why we continue to involve some of the leading hurricane climatologists, and a very wide range of forecasting methodologies, in our annual hurricane activity rate update procedure. In October 2007 an independent expert panel concluded that activity rates are forecasted to remain elevated for the next five years. While this perspective was announced and articulated by RMS, we did not originate it. Each year we undertake this exercise, we ensure that the forecasting models used to estimate activity over the next five years also reflect any additional learning from the forecasting of previous years, including the low activity experienced in 2006 and 2007. We don't 'declare success' that the activity rate estimate that has emerged from this procedure over the past three years (using different forecast models and different climatologists) has scarcely changed, but the consistency in the three 5 year projections is interesting nonetheless.

You may also be surprised to learn that our five-year forward-looking perspective on hurricane risk does not inevitably produce higher losses than all other models, which use the extrapolation of the simple long-term average to estimate future activity. This is as shown in a comparison published in a report prepared by the Florida Commission on Hurricane Loss Projection Methodology for the Florida House of Representatives (see the Table 1 on page 25 of the report, which can be downloaded from here:

Robert Muir-Wood

December 06, 2007

Revisiting The 2006-2010 RMS Hurricane Damage Prediction

In the spring of 2006, a company called Risk Management Solutions (RMS) issued a five year forecast of hurricane activity (for 2006-2010) predicting U.S. insured losses to be 40% higher than average. RMS is an important company because their loss models are used by insurance companies to set rates charged to homeowners, by reinsurance companies to set rates they charge to insurers, by ratings agencies for evaluating risks, and others.

We are now two years into the RMS forecast period and can thus say something preliminary about their forecast based on actual hurricane damage from 2006 and 2007, which was minimal. In short, the forecast doesn't look too good. For 2006 and 2007, the following figure shows average annual insured historical losses (for 2005 and earlier) in blue (based on Pielke et al. 2008, adjusted up by 4% from 2006 to 2007 to account for changing exposure), the RMS prediction of 40% more losses above the average in pink, and the actual losses in red.

[Figure: RMS Verification — historical average, RMS prediction, and actual insured losses for 2006 and 2007]

The RMS prediction obviously did not improve upon a naive forecast of average losses in either year.

What are the chances for the 5-year forecast yet to verify?

Average U.S. insured losses according to Pielke et al. (2008) are about $5.2 billion per year. Over 5 years this is $26 billion, and 40% higher than this is $36 billion. A $36 billion insured loss is about $72 billion in total damage, and $26 billion insured is about $52 billion. For the RMS forecast to do better than the naive baseline of Pielke et al. (2008), total damage in 2008-2010 will have to be higher than $62 billion ($31 billion insured). That is, losses higher than $62 billion are closer to the RMS forecast than to the naive baseline.
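The arithmetic above can be checked in a few lines (a sketch using the figures quoted in the text: the $5.2 billion annual baseline and a 2:1 ratio of total damage to insured loss):

```python
# Naive baseline vs. RMS forecast, using numbers quoted in the text.
baseline_annual_insured = 5.2                 # $B/year, Pielke et al. (2008)
baseline_5yr = 5 * baseline_annual_insured    # ~$26B insured over 2006-2010
rms_5yr = 1.4 * baseline_5yr                  # 40% higher: ~$36B insured

# With ~zero losses in 2006-2007, the remaining 3 years must exceed the
# midpoint between the two forecasts for RMS to beat the naive baseline.
midpoint_insured = (baseline_5yr + rms_5yr) / 2   # ~$31B insured
midpoint_total = 2 * midpoint_insured             # ~$62B total damage

print(round(baseline_5yr), round(rms_5yr),
      round(midpoint_insured), round(midpoint_total))   # → 26 36 31 62
```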

The NHC official estimate for Katrina is $81 billion. So for the 2006-2010 RMS forecast to verify, close to another Katrina-like event will have to occur in the next 3 years, or several large events. This is of course possible, but I doubt that there is a hurricane expert out there willing to put forward a combination of event probability and loss magnitude that leads to an expected $62 billion total loss over the next 3 years. Consider that a 50% chance of $124 billion in losses results in an expected $62 billion. Is there any scientific basis to expect a 50% chance of $124 billion in losses? Or perhaps a 100% chance of $62 billion in total losses? Anyone wanting to make claims of this sort, please let us know!

From Pielke et al. (2008), the annual chances of a >$10 billion event (i.e., $5 billion insured) during 1900-2005 were about 25%, and the annual chances of a >$50 billion event ($25 billion insured) were just under 5%. There were 7 unique three-year periods with >$62 billion (>$31 billion insured) in total losses, or about a 7% chance. So the RMS prediction of 40% higher-than-average losses for 2006-2010 has about a 7% chance of being more accurate than the naive baseline. It could happen, of course, but I wouldn't bet on it without good odds!
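The 7% figure is an empirical exceedance frequency: count the three-year windows whose summed losses cross the threshold and divide by the number of windows. A sketch with made-up loss data (the actual normalized loss series is in Pielke et al. 2008):

```python
def three_year_exceedance(annual_losses, threshold):
    """Fraction of rolling 3-year windows whose total loss exceeds threshold."""
    windows = [sum(annual_losses[i:i + 3]) for i in range(len(annual_losses) - 2)]
    hits = sum(1 for w in windows if w > threshold)
    return hits / len(windows)

# Illustrative only: a mostly-quiet series with two big-loss years.
losses = [2, 3, 1, 80, 5, 2, 4, 1, 3, 60, 2, 1]   # $B total damage per year
print(three_year_exceedance(losses, 62))          # → 0.6
```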

So what has RMS done in the face of evidence that its first 5-year forecast was not so accurate? Well, they have declared success and issued another 5-year forecast of 40% higher losses, for the period 2008-2012.

Risk Management Solutions (RMS) has confirmed its modeled hurricane activity rates for 2008 to 2012 following an elicitation with a group of the world's leading hurricane researchers. . . . The current activity rates lead to estimates of average annual insured losses that will be 40% higher than those predicted by the long-term mean of hurricane activity for the Gulf Coast, Florida, and the Southeast, and 25-30% higher for the Mid-Atlantic and Northeast coastal regions.

For further reading:

Pielke, R. A., Jr., Gratz, J., Landsea, C. W., Collins, D., Saunders, M. A., and Musulin, R. (2008). "Normalized Hurricane Damages in the United States: 1900-2005." Natural Hazards Review, in press, February. (PDF, prepublication version)

May 11, 2007

State of Florida Rejects RMS Cat Model Approach

According to a press release from RMS, Inc., the state of Florida has rejected the company's risk assessment methodology, which uses an expert elicitation to predict hurricane risk for the next five years. Regular readers may recall that we discussed this issue in depth not long ago. Here is an excerpt from the press release:

During the week of April 23, the Professional Team of the Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) visited the RMS offices to assess the v6.0 RMS U.S. Hurricane Model. The model submitted for review incorporates our standard forward-looking estimates of medium-term hurricane activity over the next five years, which reflect the current prolonged period of increased hurricane frequency in the Atlantic basin. This model, released by RMS in May 2006, is already being used by insurance and reinsurance companies to manage the risk of losses from hurricanes in the United States.

Over the past year, RMS has been in discussions with the FCHLPM regarding use of a new method of estimating future hurricane activity over the next five years, drawing upon the expert opinion of the hurricane research community, rather than relying on a simplistic long-term historical average which does not distinguish between periods of higher and lower hurricane frequency. RMS was optimistic that the certification process would accommodate a more robust approach, so it was disappointed that the Professional Team was "unable to verify" that the company had met certain FCHLPM model standards relating to the use of long-term data for landfalling hurricanes since 1900.

As a result of the Professional Team’s decision, RMS has elected this year to submit a revised version of the model that is based on the long-term average, to satisfy the needs of the FCHLPM.

This is of course the exact same issue that we highlighted over at Climate Feedback, where I wrote, "Effective planning depends on knowing what range of possibilities to expect in the immediate and longer-term future. Use too long a record from the past and you may underestimate trends. Use too short a record and you miss out on longer time-scale variability."

In their press release, RMS complains that the state of Florida is now likely to underestimate risk:

The long-term historical average significantly underestimates the level of hurricane hazard along the U.S. coast, and there is a consensus among expert hurricane researchers that we will continue to experience elevated frequency for at least the next 10 years. The current standards make it more difficult for insurers and their policy-holders to understand, manage, and reduce hurricane risk effectively.

In its complaint, RMS is absolutely correct. However, the presence of increased risk does not justify using an untested, unproven, and problematic methodology for assessing risk, even if it seems to give the "right" answer.

The state of Florida would be wise to err, in its decision making, on the side of recognizing that the long-term record of hurricane landfalls and impacts is likely to dramatically understate its current risk and exposure. From all accounts, the state appears to be gambling with its hurricane future rather than engaging in robust risk management. For their part, RMS, the rest of the cat model industry, and insurance and reinsurance companies should together carefully consider how best to incorporate rapidly evolving and still-uncertain science into scientifically robust and politically legitimate tools for risk management, and this cannot happen quickly enough.

May 04, 2007

Review of Useless Arithmetic

In the current issue of Nature I review Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future by Orrin Pilkey & Linda Pilkey-Jarvis. Here is my review in PDF. The book's home page can be found here.

March 29, 2007

Now I've Seen Everything

NASA's Jim Hansen has discovered STS (science and technology studies, i.e., social scientists who study science), and he is using it to justify why the IPCC is wrong and he, and he alone, is correct on predictions of future sea level rise, as well as on calls for certain political actions, like campaign finance reform.

In a new paper posted online (here in PDF) Dr. Hansen conveniently selects a notable 1961 paper on the sociology of scientific discovery from Science to suggest that scientific reticence can be used to predict where future research results will lead. And he finds, interestingly enough, that they lead exactly to where his views are today.

What evidence does Dr. Hansen provide to indicate that his views on sea level rise are correct and those presented by the IPCC, which he openly disagrees with, are wrong? Well, for one he explains that no glaciologist agrees with his views (as they are apparently reticent), suggesting that in fact his views must be correct (a creative use of STS if I've ever seen one;-). If holding a minority view is a standard for predicting future scientific understandings then we should therefore apparently pay more attention to all those lonely skeptics crying out in the wilderness, no?

I find it simply amazing that Dr. Hansen has the moxie to invoke the STS literature to support his scientific arguments when that literature, had he looked at maybe one more paper, indicates that Bernard Barber's 1961 essay, while provocative, is not widely accepted (see, e.g., this book or this paper). And even if one accepts at face value Barber's argument that scientists resist new discoveries (Thomas Kuhn, hello?), what Dr. Hansen doesn't explain (as he is throwing out the IPCC model of scientific consensus) is why his views are the ones that will prove correct in the future, rather than other scientific perspectives not endorsed by the IPCC. (Dr. Hansen also ignores Barber's argument, in the same paper, that older scientists are more likely to be captured by political or other interests when presenting their science.)

If we can use the sociology of science to foretell where science is headed, we could save a lot of money by not actually having to do the research. The climate issue is full of surprises, and this one just about takes the cake for me. Now I've seen everything!

Cashing In

At least one IPCC lead author appears to be trying to cash in on concern over climate change. With the help of several University of Arizona faculty members, including one prominent IPCC contributor, a company called Climate Appraisal, LLC is selling address specific climate predictions looking out as far as the next 100 years. Call me a skeptic or a cynic but I'm pretty sure that the science of climate change hasn't advanced to the point of providing such place-specific information. In fact, I'd go so far as to suggest that if such information were credible and available, it'd already be in the IPCC. The path from global consensus to snake oil seems pretty short. I wouldn't deny anyone the chance to make a buck, but can this be good for the credibility of the IPCC?

February 20, 2007

Prediction in Science and Policy

In the New York Times today Cornelia Dean has an article about a new book by Orrin Pilkey and Linda Pilkey-Jarvis on the role of predictions in decision making. The book is titled Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future.

Here is an excerpt from the book’s description at Columbia University Press:

Writing for the general, nonmathematician reader and using examples from throughout the environmental sciences, Pilkey and Pilkey-Jarvis show how unquestioned faith in mathematical models can blind us to the hard data and sound judgment of experienced scientific fieldwork. They begin with a riveting account of the extinction of the North Atlantic cod on the Grand Banks of Canada. Next they engage in a general discussion of the limitations of many models across a broad array of crucial environmental subjects.

The book offers fascinating case studies depicting how the seductiveness of quantitative models has led to unmanageable nuclear waste disposal practices, poisoned mining sites, unjustifiable faith in predicted sea level rise rates, bad predictions of future shoreline erosion rates, overoptimistic cost estimates of artificial beaches, and a host of other thorny problems. The authors demonstrate how many modelers have been reckless, employing fudge factors to assure "correct" answers and caring little if their models actually worked.

A timely and urgent book written in an engaging style, Useless Arithmetic evaluates the assumptions behind models, the nature of the field data, and the dialogue between modelers and their "customers."

Naomi Oreskes offers the following praise quote:

Orrin H. Pilkey and Linda Pilkey-Jarvis argue that many models are worse than useless, providing a false sense of security and an unwarranted confidence in our scientific expertise. Regardless of how one responds to their views, they can't be ignored. A must-read for anyone seriously interested in the role of models in contemporary science and policy.

In an interview the authors comment:

The problem is not the math itself, but the blind acceptance and even idolatry we have applied to the quantitative models. These predictive models leave citizens befuddled and unable to defend or criticize model-based decisions. We argue that we should accept the fact that we live in a qualitative world when it comes to natural processes. We must rely on qualitative models that predict only direction, trends, or magnitudes of natural phenomena, and accept the possibility of being imprecise or wrong to some degree. We should demand that when models are used, the assumptions and model simplifications are clearly stated. A better method in many cases will be adaptive management, where a flexible approach is used, where we admit there are uncertainties down the road and we watch and adapt as nature rolls on.

I have not yet read the book, but I will.

Orrin participated in our project on Prediction in the Earth Sciences in the late 1990s, contributing a chapter on beach nourishment. The project resulted in this book:

Sarewitz, D., R.A. Pielke, Jr., and R. Byerly, Jr., (eds.) 2000: Prediction: Science, decision making and the future of nature, Island Press, Washington, DC.

Our last chapter can be found here in PDF.

Posted on February 20, 2007 10:20 AM
Posted to Author: Pielke Jr., R. | Prediction and Forecasting

December 19, 2006

Ryan Meyer in Ogmius

Ryan Meyer, whose letter to Science we highlighted a few days ago, also has the cover story in our Center's latest newsletter, which has just been put online. Ryan's article is titled "Arbitrary Impacts and Unknown Futures: The shortcomings of climate impact models" and can be found here.

The newsletter, called Ogmius, can be found here in html and here in PDF. Have a look!

October 10, 2006

Limits of Models in Decision Making

In today’s Financial Times, columnist John Kay has a very insightful piece on the limits of models in decision making. He discusses the downfall of Amaranth, a hedge fund that lost billions of dollars in part because its investors did not fully understand the full scope of uncertainties associated with their investment strategies. Kay highlights an important distinction between what he calls “in-model” risk and “off-model” risk. In-model risk refers to the uncertainties associated with the design of the model: data inputs, randomness, and so on. Modelers use techniques such as Monte Carlo analysis to get a quantitative sense of these uncertainties. Off-model risk refers to the degree of conformance between a model and the real world. Models by their nature are always simplifications of the real world. As in the case of Amaranth, the hard lessons of experience often remind us that as powerful as models are, they can also reinforce bad decisions. As Kay writes,

When someone does attach a probability to a forecast, they have – implicitly or explicitly – used a model of the problems. The model they have used accounts for in-model risk but ignores off-model risk. Their forecasts are therefore too confident and neither you nor they have much idea how over-confident they are. That is why mathematical modeling of risk can be an aid to sound judgment, but never a complete substitute.
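Kay's distinction can be illustrated with a toy Monte Carlo sketch (not his example; the normal-returns assumption here is purely illustrative). The simulation quantifies in-model risk — the spread the model itself implies — but by construction says nothing about off-model risk, the possibility that the assumed distribution is simply wrong:

```python
import random

# In-model risk: sample outcomes from the model's assumed distribution
# (here, normally distributed annual returns -- an assumption).
random.seed(0)
returns = [random.gauss(0.05, 0.10) for _ in range(100_000)]
loss_prob = sum(1 for r in returns if r < 0) / len(returns)
print(f"model-implied probability of a loss: {loss_prob:.2f}")

# Off-model risk is everything this loop cannot see: the real world may
# have fat tails, regime shifts, or correlations absent from the model,
# so the printed probability is only as good as the gaussian assumption.
```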

Posted on October 10, 2006 03:13 PM
Posted to Author: Pielke Jr., R. | Prediction and Forecasting

October 02, 2006

Prediction and Decision

Across a number of threads comments have arisen about the role of forecasting in decision making. Questions that have come up include:

What is a good forecast?
When should research forecasts transition to operational forecasts?
What sorts of decisions require quantitative probabilities?
In what contexts can good decisions result without accurate predictions?

It was questions like these that motivated Rad Byerly, Dan Sarewitz, and me to work on a project in the late 1990s focused on prediction. The results of this work were published in a book by Island Press in 2000, titled "Prediction."

With this post I'd like to motivate discussion on this subject, and to point to our book's concluding chapter, which may provide a useful point of departure:

Pielke Jr., R. A., D. Sarewitz and R. Byerly Jr., 2000: Decision Making and the Future of Nature: Understanding and Using Predictions. Chapter 18 in Sarewitz, D., R. A. Pielke Jr., and R. Byerly Jr., (eds.), Prediction: Science, Decision Making and the Future of Nature. Island Press: Washington, DC. (PDF)

See in particular Table 18.1 on p. 383 which summarizes the criteria we developed in the form of questions which might be used to "question predictions."

Comments welcomed on any of the questions raised above, and others as appropriate as well.
