
January 17, 2006

Myanna Lahsen's Latest Paper on Climate Models

Posted to Author: Pielke Jr., R. | Climate Change

Myanna Lahsen is an anthropologist who spent about seven years embedded within the "tribe" of climate modelers at the National Center for Atmospheric Research. She is presently a research scientist here at our Center, and for the last several years she has been conducting fieldwork in Brazil on the "interplay of science, culture, power and politics in international affairs through a focus on the Large-Scale Biosphere Atmosphere (LBA) experiment." Her project website is here. Her work is rich in detail and strong in weaving together analysis from data collected through participant-observation.

Myanna just had a very interesting paper come out on climate models:

Lahsen, M., 2005. Seductive Simulations? Uncertainty Distribution Around Climate Models, Social Studies of Science, 35:895-922. (PDF)

The paper will be of interest to scholars in STS because it provides an alternative (and much needed) perspective on MacKenzie's somewhat influential notion of the "certainty trough." If you are interested in Myanna's critique and elaboration of MacKenzie's perspective, then have a look at the full paper. For those folks interested in the perspectives of climate modelers on uncertainty with respect to their models, I thought that a few excerpts from Myanna's recent paper might be worth pulling out and highlighting. However, given the richness of the paper and the importance of context for understanding her arguments, I would still encourage you to have a look at the whole paper. Meantime, the excerpts below will give you a sense of her analysis and arguments.

She starts by noting that her purpose is not to criticize models or modelers but to focus on how their creators understand them and their uncertainties. This is a particularly important subject because climate modelers are important contributors to policy debates and discussions on climate change.

"Climate models are impressive scientific accomplishments with importance for science and policy-making. They also have important limitations and involve considerable uncertainties. The present discussion focuses on uncertainties about the realism of climate simulations - rather than the models' significant strengths - in order to highlight features of models that are overlooked when their output is taken at face value." (p. 898)

Lahsen observes that in practice climate scientists routinely confuse their modeled world with the real world.

" During modelers' presentations to fellow atmospheric scientists that I attended during my years at NCAR, I regularly saw confusion arise in the audience because it was unclear whether overhead charts and figures were based on observations or simulations. . . In interviews, modelers indicated that they have to be continually mindful to maintain critical distance from their own models. For example:

Interviewer: Do modelers come to think of their models as reality?

Modeler A: Yes! Yes. You have to constantly be careful about that [laughs].

He described how it happens that modelers can come to forget known and potential errors:

[Modeler A:] You spend a lot of time working on something, and you are really trying to do the best job you can of simulating what happens in the real world. It is easy to get caught up in it; you start to believe that what happens in your model must be what happens in the real world. And often that is not true . . . The danger is that you begin to lose some objectivity on the response of the model [and] begin to believe that the model really works like the real world . . . then you begin to take too seriously how it responds to a change in forcing. Going back to trace gases, CO2 models - or an ozone change in the stratosphere: if you really believe your model is so wonderful, then the danger is that it's very tempting to believe that the way it responds to a change in forcing must be right. [Emphasis added]

She makes a similar point in the following passage:

" Erroneous assumptions and questionable interpretations of model accuracy can, in turn, be sustained by the difficulty of validating the models in the absence of consistent and independent data sets. Critical distance is also difficult to maintain when scientists spend the vast majority of their time producing and studying simulations, rather than less mediated empirical representations. Noting that he and fellow modelers spend 90% of their time studying simulations rather than empirical evidence, a modeler explained the difficulty of distinguishing a model from nature:

Modeler B: Well, just in the words that you use. You start referring to your simulated ocean as 'the ocean' - you know, 'the ocean gets warm', 'the ocean gets salty'. And you don't really mean the ocean, you mean your modeled ocean. Yeah! If you step away from your model you realize 'this is just my model'. But [because we spend 90% of our time studying our models] there is a tendency to forget that just because your model says x, y, or z doesn't mean that that's going to happen in the real world.

This modeler suggests that modelers may talk about their models in ways they don't really mean ('you don't really mean the ocean, you mean your modeled ocean . . . '). However, in the sentence that immediately follows, he implies that modelers sometimes actually come to think about their models as truth-machines (they 'forget to step away from their models to realize that it is just a model'; they have a 'tendency to forget')."

Another modeler interviewed by Lahsen reinforces these perspectives:

" The following modeler suggests that the above tendencies are pervasive in the field of climate modeling:

Modeler D: There are many ways to use models, and some of them I don't approve of. [Pause] It is easy to get a bad name as a modeler, among both theoreticians and observational people, by running experiments and seeing something in the model and publishing the result. And pretending to believe what your model gives - or, even, really believing it! [small laugh] - is the first major mistake. If you don't keep the attitude that it's just a model, and that it's not reality . . . I mean, mostly people that are involved in this field really have that, they have the overtone that it is.

Interviewer: They do tend to think that their model is the reality?

Modeler D: Or even if they don't think that, they tend to oversell it, regardless.

Interviewer: And why do they oversell it?

Modeler D: Because people get wrapped up in what they have done. You know, I spent years building this model and then I ran these experiments, and the tendency is to think: 'there must be something here'. And then they start showing you all the wonderful things they have done . . . And you have to be very careful about that.

Confirming Shackley and Wynne's argument, modeler D suggests that modelers sometimes 'oversell' their models, strategically associating them with more certainty than is warranted. However, echoing others quoted earlier, Modeler D also suggests that modelers sometimes lose critical distance from their own models and come to think of them as reliable representations of reality . . . As GCMs incorporate ever more details - even things such as dust and vegetation - the models increasingly appear like the real world, but the addition of each variable increases the error range (Syukuro Manabe, quoted in Revkin, 2001)."

Lahsen reports similar conclusions from another modeler that she interviewed:

" Modeler E, in the excerpt quoted above [Ed.- not shown here], distinguished some modelers from 'people who are interested in the real world'. He thus implied that modelers sometimes become so involved in their models that they lose sight of, or interest in, the real world, ignoring the importance of knowing how the models diverge from it. Recognition of this tendency may be reflected in modelers' jokes among themselves. For example, one group joked about a 'dream button' allowing them - Star Wars style - to blow up a satellite when its data did not support their model output. They then jokingly discussed a second best option of inserting their model's output straight into the satellite data output."

Lahsen's earlier work documented differing perspectives between theoreticians (modelers) and empiricists (typically meteorologists or old-school climatologists), and her findings here reinforce that divide.

" Modeler E noted that theoreticians and empiricists often criticize modelers for claiming unwarranted levels of accuracy, to the point of conflating their models with reality. My fieldwork revealed that such criticisms circulate widely among atmospheric scientists. Sometimes such criticisms portray modelers as motivated by a need to secure funding for their research, but they also suggest that modelers have genuine difficulty with gaining critical distance from their models' strengths and weaknesses. Moreover, they criticize modelers for lacking empirical understanding of how the atmosphere works ('Modelers don't know anything about the atmosphere'). . . In interviews, empiricists often voice criticisms along the lines of this one expressed by a meteorologist: 'I joke about modelers: they have a charming and naive faith in their models.' Such comments were especially common among empirical meteorologists trained in synoptic weather forecasting techniques, who conduct empirical research on a regional or local scale. They have not been centrally involved in the process of model development and validation, and thus may fall within MacKenzie's category of the 'alienated'. These empiricists trained in synoptic methods are particularly inclined to criticize GCMs. Such criticism may have to do with the fact that there is considerable resentment among various subgroups of atmospheric scientists about the increased use of simulation techniques, and such resentment may be echoed in other sciences in which simulations are ascendant . . . "

Lahsen describes the tensions between theoreticians and empiricists.

"Compared with modelers, such empirical research meteorologists with background in weather forecasting are part of a different social world; these two groups partake in different, albeit overlapping, social networks defined by different scientific orientations and cultural norms. The empiricists are less committed to GCMs or to the theory of human-induced climate change. They manifest skepticism about numerical forecasts in own creations. Simulation of complex, uncertain, and inaccessible phenomena leaves considerable room for emotional involvement to undermine the ability to recognize weaknesses and uncertainties. Empiricists complain that model developers often freeze others out and tend to be resistant to critical input. At least at the time of my fieldwork, close users and potential close users at NCAR (mostly synoptically trained meteorologists who would like to have a chance to validate the models) complained that modelers had a 'fortress mentality'. In the words of one such user I interviewed, the model developers had 'built themselves into a shell into which external ideas do not enter'.

But she also explains that they need each other.

" Generally speaking, atmospheric scientists are better judges than, for example policy-makers, of the accuracy of model output. However, the distribution of certainty about GCM output within the atmospheric sciences reveals complications in the categories of 'knowledge producers' and 'users', and the privileged vantage point from which model accuracies may be gauged proves to be elusive. Model developers' knowledge of their models' inaccuracies is enhanced by their participation in the construction process. However, developers are not deeply knowledgeable about all dimensions of their models because of their complex, coupled nature. Similarly, the empirical training of some atmospheric scientists - scientists who may be described as users - limits their ability to gauge GCM accuracies in some respects while enhancing their ability to do so in other respects; and, generally, they may have better basis than the less empirically oriented modelers for evaluating the accuracy of at least some aspects of the models. Professional and emotional investment adds another layer of complexity. Model developers have a professional stake in the credibility of the models to which they devote a large part of their careers. These scientists are likely to give their models the benefit of doubt when confronted with some areas of uncertainty. By contrast, some of the empirically trained atmospheric scientists, who are less invested in the success of the models, may be less inclined to give them the benefit of the doubt, maintaining more critical understanding of their accuracy."

We are in the process of getting all of her publications online, accessible from her homepage, and will announce them when available. Meantime you can find the paper discussed above (here in PDF).

Posted on January 17, 2006 07:21 AM


Haven't read the full paper yet, but this is not at all surprising, and I think it is an important example of a larger issue both in science and in general perceptions of the world.

There is a real human tendency for people to confuse their "models" and "constructs" of the world with reality. This is true of economics, psychology and the hard sciences.

The quotes you give indicate that the modelers are very aware of the pitfalls, and to some extent the community is partially self-correcting. For example: "It is easy to get a bad name as a modeler, among both theoreticians and observational people, by running experiments and seeing something in the model and publishing the result. And pretending to believe what your model gives - or, even, really believing it! ..."

As an aside... audience confusion during presentations between observations and simulations is an example of how common poor presentations are, not merely of confusion on the part of the modeler. Confusing slides, unlabeled axes, and too many abbreviations and too much jargon are all too common at every conference I've been to. The emergence of PowerPoint has brought its own set of pitfalls.

Posted by: Greg Lewis at January 18, 2006 09:41 AM

This is a discouraging study, particularly because it's the first time I've ever been exposed to this information. I bet the same is true of most policy planners and journalists. If I am not mistaken, the people over at realclimate seem to manifest this mentality, especially when they insist that their models are science, while their critics are not experts and so don't "know" enough to criticize. I hope this study -- expose? -- gets the attention it deserves, especially in magazines like Nature and Science. The stakes are way too high to ignore it.

BTW, to what extent do these model builders support, say, the Kyoto Protocol, while ignoring the economic dimensions of the problem, especially the need for some kind of cost/benefit analysis? Have there been any polls of their political opinions on the subject?

Finally, it is naive for anyone to suppose that these model builders are immune from plain old human nature. When your livelihood, social status, and future funding prospects depend on the outcome, only a saint could maintain objectivity under such circumstances.

(BTW, pardon my typos; I'm old and creaky, unable to see my own typos and spelling gaffes. Feel free to correct.)

Posted by: Luke Lea at January 18, 2006 01:39 PM

To my knowledge, there are no studies of climate modelers' policy preferences related to climate change. On the basis of my observations, I would say that climate modelers, as a whole, are environmentally concerned. However, few of them involve themselves in any active way with policy issues. As a whole, they are much more interested in the science than in the associated policy consequences.

Just curious - why do you think it matters whether they think about the economic dimension of policy action?

I agree with Greg Lewis that it is common for people to confuse their "models" and "constructs" of the world with reality, be that in the social or the natural sciences, with or without computers. In the paper I suggest some reasons why climate models may weigh particularly strongly on the imagination, but that is speculative on my part.

Thanks for the comments.


Posted by: Myanna Lahsen at January 18, 2006 06:42 PM

In all fairness to modelers, my experience is that the attitudes Dr. Lahsen documents are widespread among scientists. In chemistry we are exposed to LCAO-MO theory (linear combination of atomic orbitals-molecular orbitals) at an early age. The heuristics we learn (simplified representations (pictures) of orbitals) are very useful in providing understanding of how the quantum mechanical world is revealed in the macroscopic behavior of collections of molecules. However, it is important to remember that the pictures we draw have no great foundation in physical reality. The great value of models, or perhaps the value of great models, is in how effectively they reveal how inputs affect outputs relative to the reality of the thing being modelled. GCMs may become useful when they overcome their current testability shortcomings. Please excuse the maunderings of an old chemist.

Posted by: shoes at January 19, 2006 12:47 PM

When there is confusion about which is the real data and which is the model simulation, then the models must be quite good. Sure, they contain some errors, but overall the climate models represent a very significant accomplishment.


Posted by: Rasmus at January 19, 2006 11:56 PM

In response to several comments above:

The reason I wonder whether these model builders actively favor the Kyoto protocol is that their area of expertise is divorced from the economic dimensions of the problem they are working on. To support a stabilization of carbon dioxide gases in the atmosphere sounds all fine and good by itself, but maybe not when you begin to consider how many trillions of dollars it would cost to achieve this laudable objective. Yet the editorial staffs of Nature, Science, and Scientific American elide this problem several times a year, the latter having disgraced itself by the hysterical nature of its reaction to Bjorn Lomborg when he raised this very issue.

Likewise it is irresponsible to invoke "the precautionary principle", let alone issue warnings that "we are approaching a point of no return," when even the model builders themselves admit that ten or even twenty years of stable global temperature would not be inconsistent with their models.

A good model, like a good theory, needs to make predictions about things we don't already know. Only then can you begin to have confidence in it. I don't know about chemistry, but physicists at CERN engage in elaborate statistical analyses of their data (beyond "five sigma") to make sure that the evidence for a new particle they are hoping to find isn't just a statistical fluke. Clearly they don't get their theories confused with reality.

Economic self-interest is a very important part of human nature -- as it should be if you want to survive and prosper. Here, then, we see a very small but well-organized "tribe" of experts whose long-term economic well-being quite obviously depends on their models being both valid and important. This creates a frightening conflict of interest, since the future economic well-being of billions of human beings around the world could be adversely affected in a very major way if the models are wrong -- or even if the models are right but it is clear that there is little we can do to mitigate the problem to any significant degree, in which case the models would lose their importance. Under these circumstances it's not good enough for the modellers to say, "trust us, we know what we are doing."

One more thing. The latest climate models are all opaque --"almost as complex as the climate itself" I have heard them described -- and even the IPCC admits that a great deal of "judgment" goes into their construction and interpretation. The truth is nobody really understands them, including the model builders themselves. Add this to the points I made above and you can see why I think the modeling community has a serious amount of explaining to do. Indeed, their task may be impossible -- which is yet another reason why we should begin to turn our attention to adapting to climate change instead of trying to prevent it. Adaptation is one thing homo sapiens with their cultures are especially good at in comparison to all other species that lack culture and so must depend on natural selection to make the necessary changes. It's time to be human.

Luke Lea

Posted by: Luke Lea at January 20, 2006 02:11 AM

Luke Lea,

Perhaps you should not confuse your model of the climate modellers' motivations and beliefs with reality.

As far as I can tell, the people at Real Climate do not believe in a "tipping point" (see the discussion on whether the climate is chaotic), and are not particularly attached to Kyoto.


Posted by: Greg Lewis at January 20, 2006 02:27 PM

Dear Greg,

Mine is at best an attempt at responsible citizenship, not a scientific model of reality.

The fact is I do not know a single climate modeller and have never met one. But I do know large numbers of people both among my friends on the left and among our educated elites, who are profoundly alienated from our corporate, capitalist institutions. Considering the crimes which have been committed in the name of capitalism, I can certainly understand where they are coming from. But our task is not to condemn the origins of our civilization but to try to redeem them. It would be tragic to waste the evil fruits of history when they are what make it possible for us to have freedom, democracy, rights, leisure, affluence, and just about everything else we hold dear including science itself.

BTW, I would like to add one more consideration that might, if I am not mistaken, help us better understand what might be motivating the climate modelling community. I've already mentioned the threat to their individual economic well-being and status in the community. We need also to take into account the influence of a near universal trait in human nature: the phenomenon of peer pressure, or "political correctness" as it is sometimes called today. Organized professional groups are even more prone than private individuals to sacrifice truth and objectivity in the interests of their collective economic self-interest and professional standing in society. To depart from dogmas of the group is to transgress a professional taboo; it is to court exile from one's closest friends and associates -- not an easy thing to do for even the most heroic individuals.

I submit that there is absolutely no reason to suppose that self-styled "scientific model builders" are any more immune to these pressures than the general run of human beings. Intelligence, education, training, professionalism, and moral integrity are all quite different things, for which history affords us many examples.

Here is one from my own experience. I live in a small city in southeastern Tennessee with a third-rate medical system. When people in my community are diagnosed with cancer, as both my wife and I have been in recent years, the local oncologists recommend local surgery at the first possible moment, no questions asked. There exists, or seems to exist, a kind of unwritten code which says that second opinions will be discouraged, and patients will not be referred to doctors in faraway places -- notwithstanding the highly technical nature of cancer pathology, and notwithstanding the relative lack of local experience treating most types of cancer. If pressed hard, and if the cancer has already progressed, a doctor may suggest the name of an oncologist they know in the nearest big city; at most they will steer you to the closest regional cancer center -- provided that it lies either to the south, or to the west, or to the east. Almost never will they send the patient to one of the major cancer centers in the Northeast or upper Midwest, even when the patient is the doctor herself, or one of her children!

Now, why should this be? The reason seems transparent to me: once local doctors start referring each other and their loved ones to medical care elsewhere, they will send an unmistakable signal to the rest of the community that the best care is to be had elsewhere, as indeed it is. This signal would threaten their collective livelihoods and their professional standing in the community. These considerations, when push comes to shove, turn out to be more powerful than even the bonds of love between husbands and wives, and between parents and children.

Let me re-emphasize that these doctors are not bad human beings. They are neither better nor worse than the average professional group of human beings. The proof lies in the fact that similar behavior is seen in similar communities in similar situations all across America. It is human nature.

Posted by: Luke Lea at January 20, 2006 09:44 PM


I'm sorry, but when someone from one political group or point of view attributes motivations to, or psychoanalyses, someone with the opposite point of view, I just can't take it too seriously.

Although I think your generalizations of human nature have some truth in them, I think your assumptions about the beliefs of the modellers are inaccurate. But then again, that may just be my model of reality.

Posted by: Greg Lewis at January 21, 2006 10:45 PM
