McIntyre on Climate Science Policy

February 14th, 2005

Posted by: Roger Pielke, Jr.

Here at Prometheus we don’t do hockey sticks. (Astute readers will find one oblique reference to it in this paper – PDF.) However, the debate over the hockey stick is worth our attention not only for what it says about the state of climate science and politics, but also because it is significant for how we think about climate science policy. Climate science policy refers to those decisions that we make about climate science, including priorities for research and processes of scientific assessment and evaluation.

Steven McIntyre has posted his thoughts on climate science policy arising from his experiences with taking on the hockey stick. He writes,

“IPCC proponents place great emphasis on the merit of articles that have been “peer reviewed” by a journal. However, as a form of due diligence, journal peer review in the multiproxy climate field is remarkably cursory, as compared with the due diligence of business processes. Peer review for climate publications, even by eminent journals like Nature or Science, is typically a quick unpaid read by two (or sometimes three) knowledgeable persons, usually close colleagues of the author. It is unheard of for a peer reviewer to actually check the data and calculations.”

This observation has also been made in the Bulletin of the American Meteorological Society in a 2000 commentary by Ron Errico, who writes,

“Too frequently, published papers contain fundamental errors… How can a piece of work be adequately evaluated or duplicated if what was really done or meant is not adequately stated?… My paramount recommendation is that our community acknowledges that a major problem in fact exists and requires ardent attention. Unless this is acknowledged, the community will likely not even consider significant changes. I suspect that too many scientists, especially those with the authority to demand changes, will prefer the status quo.”

Errico’s paper, titled “On the Lack of Accountability in Meteorological Research,” is well worth reading in full. He makes several recommendations that are completely consistent with McIntyre’s recommendations.

McIntyre also comments on the incestuous structure of the IPCC,

“The inattentiveness of IPCC to verification is exacerbated by the lack of independence between authors with strong vested interests in previously published intellectual positions and IPCC section authors… For someone used to processes where prospectuses require qualifying reports from independent geologists, the lack of independence is simply breathtaking and a recipe for problems, regardless of the reasons initially prompting this strange arrangement.”

McIntyre concludes by observing, “Businesses developed checks and balances because other peoples’ money was involved, not because businessmen are more virtuous than academics. Back when paleoclimate research had little implication outside academic seminar rooms, the lack of any adequate control procedures probably didn’t matter much. However, now that huge public policy decisions are based, at least in part, on such studies, sophisticated procedural controls need to be developed and imposed.”

Of course, some scientists will reply to this in exactly opposite fashion, by saying that academics are more virtuous than business people so such checks are unnecessary. But whatever one thinks about the debate over the hockey stick, McIntyre’s views on climate science policy make good sense and are good for the community as a whole.

5 Responses to “McIntyre on Climate Science Policy”

  1. Tom Rees Says:

Statistical errors in published papers are common in all disciplines, I imagine. Certainly, it’s true for medicine. And medicine is big business, of course. The real check comes when others produce their own studies. In science, nobody gets paid for checking someone else’s numbers. You have to go out and do your own work to get noticed. It’s only if two or more papers disagree that errors start to get picked up.

  2. Jim Kanuth Says:

    Tom Rees writes, “In science, nobody gets paid for checking someone else’s numbers,” which is accurate today but wasn’t always the case.

    30 years ago, a significant part of graduate student education was duplicating published experimental results, both as educational development and as the routine fact checking mechanism in American science practice. These days, any professor who puts his or her grad student to work duplicating already published data would soon be drummed out because they weren’t attracting “extramural funding” or contributing to the grad student’s publication count.

    In the rush to commercialize academic research since the early 80’s, we’ve had the unintended consequence of losing one of our major fact checking mechanisms.

    Even in quantum physics, a Bell Labs researcher got away with falsifying results for years before someone read two particular papers in succession and realized that random noise in the detectors shouldn’t be identical in two different experiments.

    I’m not sure what the answer to the dilemma is. There should be some way of checking anything that is going to make a difference to a particular target audience (whether it is financial, policy, restriction on freedom of action, whatever). Most major publishers have a rule requiring raw data to be provided to anyone who has a legitimate interest in it following publication, under penalty of having the paper withdrawn, but it obviously doesn’t have any teeth, as evidenced by the difficulty that folks have had getting at Mann’s raw data years after first publication.

  3. Garry Culhane Says:

    You do not “do hockey sticks”? Alas you do and rather badly.

    The term, hockey stick, has come to stand for the intense debate between those who are charmed or persuaded by a couple of obvious adventurers, and those who support or at least accept the work of a group of well established climate scientists.

    You can choose one side or the other, or you can say nothing at all, but to act as though there is a middle ground where you can roost in solitary neutrality is, unhappily, a very definite own goal.

    You can make amends by actually reading the “record” that is very much available (except for the original attack by M&M, which seems to have been deleted and replaced by a brand new version) and then telling us laymen what you think, or you can find someone who can work their way through PCA and tell us whether William Connolley is correct. Or you could assist us in some other way that does not include offering a podium for McIntyre to flourish a newly found respectability (imagine a mining man flouncing across the stage in a tutu).

    But if you will do it, and surely you can bend your efforts to help the unlettered rest of us, please do it right.

    Garry Culhane

  4. Kooiti Masuda Says:

    (I am sorry I do not know how to insert line breaks from my browser (Mozilla on Linux). I insert “//” where I want to have a line break.)

    It is regrettable that affirmative references to McIntyre’s words invited responses like Garry Culhane’s. I think that the main issue of this thread is a bigger problem that is discussed by Jim Kanuth.

    Suppose that an analysis (maybe Mann’s) is good. Even then, it does not appear good until independent confirmations are made. Suppose that someone tries to reproduce the same analysis. If the attempt produces, by mistake, a result different from the original, there is a chance for that result to be published as an original research paper, because reviewers do not usually try the analysis themselves. If the attempt produces a result identical to the original, it can hardly be published. Thus we do not usually know how well a given study has been confirmed. This also creates a negative feedback that draws scientists away from such activity in a “publish or perish” culture.

    The question about the quality of the analysis by Mann et al., or that by McIntyre and McKitrick, is another matter that should be discussed elsewhere. I do not have an answer, partly for the reason mentioned above. By the way, my guess (just a guess) is that the analysis by Mann et al. is good for the purpose of estimating the most likely mean temperature at any given time point (thus it seems to be a good scientific paper in paleoclimatology), but that it tends to underestimate temporal variability, as demonstrated by von Storch et al. (2004, Science 306, 679-682) (thus it should not be given too great a weight in such contexts as evaluating the climate of the 20th century in the millennial perspective).

    K. Masuda

    in Yokohama (sometimes in Fujisawa), Japan.

  5. Kooiti Masuda Says:

    Sorry for ugly appearance of my previous post. When finally posted, a blank line in the input form was translated as a line break. My trouble was the line break did not appear in the preview.

    K. Masuda