Climate Models At Their Limit: Limitless Possibilities

Jun 14th, 2012 | Category: CLIMATE SCIENCE, Global Warming, IPCC, Lessons, News, Opinion

All Models Are Wrong: Mark Maslin and Patrick Austin at University College London have just had a comment published in Nature called “Climate models at their limit?”. This builds on the emerging evidence that the latest, greatest climate predictions, which will be summarised in the next assessment report of the Intergovernmental Panel on Climate Change (IPCC AR5, 2013), are not going to tell us anything too different from the last report (AR4, 2007) and may in fact have larger uncertainty ranges.

I’d like to discuss some of the climate modelling issues they cover. I agree with much of what they say, but not all…

1. Models are always wrong

Why do models have a limited capability to predict the future? First of all, they are not reality….models cannot capture all the factors involved in a natural system, and those that they do capture are often incompletely understood.

A beginning after my own heart! This is the most important starting point for discussing uncertainty about the future.

Climate modellers, like any other modellers, are usually well aware of the limits of their simulators*. The George Box quote after which this blog is named is frequently quoted in climate talks and lectures. But sometimes simulators are implicitly treated as if they were reality: this happens when a climate modeller has made no attempt to quantify how wrong they are, does not know how to, or does not have the computing power to try out different possibilities, and throws their hands up in the air. Or perhaps their scientific interest is really in testing how the simulator behaves, not in making predictions.

For whatever reason, this important distinction might be temporarily set aside. The danger of this is memorably described by Jonty Rougier and Michel Crucifix**:

One hears “assuming that the simulator is correct” quite frequently in verbal presentations, or perceives the presenter sliding into this mindset. This is so obviously a fallacy that he might as well have said “assuming that the currency of the US is the jam doughnut.”

Models are always wrong, but what is more important is to know how wrong they are: to have a good estimate of the uncertainty about the prediction. Mark and Patrick explain that our uncertainties are so large because climate prediction is a chain of very many links. The results of global simulators are fed into regional simulators (for example, covering only Europe), and the results of these are fed into another set of simulators to predict the impacts of climate change on sea level, or crops, or humans. At each stage in the chain the range of possibilities branches out like a tree: there are many global and regional climate simulators, and several different simulators of impacts, and each simulator may be used to make multiple predictions if they have parameters (which can be thought of as “control dials”) for which the best settings are not known. And all of this is repeated for several different “possible futures” of greenhouse gas emissions, in the hope of distinguishing the effect of different actions.
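To make the branching concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the formulas, parameter names and numbers bear no resemblance to a real climate simulator. The point is only that each link in the chain takes the previous link’s output plus an uncertain “control dial”, so the tree of possibilities multiplies at every stage.

```python
# Toy illustration of a climate prediction chain (NOT a real climate model).
# Every function and number below is made up purely for illustration.

from itertools import product

def toy_global(emissions, sensitivity):
    """Global warming for a given emissions level (invented formula)."""
    return sensitivity * emissions

def toy_regional(global_warming, amplification):
    """Regional warming as an amplified version of the global signal."""
    return amplification * global_warming

def toy_impact(regional_warming, vulnerability):
    """An impact index, e.g. crop loss, responding to regional warming."""
    return vulnerability * regional_warming ** 2

emissions_scenarios = [1.0, 2.0, 3.0]   # "possible futures" of emissions
sensitivities       = [0.5, 0.8, 1.1]   # global simulator control dials
amplifications      = [1.0, 1.3]        # regional simulator control dials
vulnerabilities     = [0.2, 0.5]        # impact simulator control dials

results = []
for e, s, a, v in product(emissions_scenarios, sensitivities,
                          amplifications, vulnerabilities):
    results.append(toy_impact(toy_regional(toy_global(e, s), a), v))

# 3 scenarios x 3 x 2 x 2 dial settings = 36 branches from one short chain
print(len(results), "branches; impact range:",
      round(min(results), 2), "to", round(max(results), 2))
```

Even this minimal three-link chain, with a handful of dial settings per link, fans out into 36 branches; real prediction chains fan out far further.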

2. Models are improving

“The climate models…being used in the IPCC’s fifth assessment make fewer assumptions than those from the last assessment…. Many of them contain interactive carbon cycles, better representations of aerosols and atmospheric chemistry and a small improvement in spatial resolution.”

Computers are getting faster. Climate scientists are getting a better understanding of the different physical, chemical and biological processes that govern our climate and the impacts of climate change, like the carbon cycle or the response of ice in Greenland and Antarctica to changes in the atmosphere and oceans. So there has been a fairly steady increase in resolution***, in how many processes are included, and in how well those processes are represented. In many ways this is closing the gap between simulators and reality. This is well illustrated in weather forecasting: with a resolution of 1km instead of 12km, the UK Met Office might have predicted the Boscastle flood in 2004 (page 2 of this presentation).

But the other side of the coin is, of course, the “unknown unknowns” that become “known unknowns”: the things we hadn’t thought of; new understanding that increases our uncertainty because the earlier estimates of it were too narrow.

Climate simulators are slow: it can take a day of computing to simulate two or three model years, and several months for a long simulation. So modellers and their funders must decide where to spend their money: higher resolution, more processes, or more replications (such as different parameter settings). Many of those of us who spend our working hours, and other hours, thinking about uncertainty strongly believe the climate modelling community must not put resolution and processes (to improve the simulator) above generating multiple predictions (to improve our estimates of how wrong the simulator is). Jonty and Michel again make this case**:

Imagine being summoned back in the year 2020, to re-assess your uncertainties in the light of eight years of climate science progress. Would you be saying to yourself, “Yes, what I really need is an ad hoc ensemble of about 30 high-resolution simulator runs, slightly higher than today’s resolution.” Let’s hope so, because right now, that’s what you are going to get.

But we think you’d be saying, “What I need is a designed ensemble, constructed to explore the range of possible climate outcomes, through systematically varying those features of the climate simulator that are currently ill-constrained, such as the simulator parameters, and by trying out alternative modules with qualitatively different characteristics.”

Higher resolution and better processes might close the gap between the simulator and reality, but if it means you can only afford the computing power to run one simulation then you are blind as to how small or large that gap may be. Two examples of projects that do place great importance on multiple replications and uncertainty are the UK Climate Projections and ClimatePrediction.net.
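What might such a designed ensemble look like in practice? One common space-filling choice is a Latin hypercube design, which spreads a fixed budget of runs evenly across the uncertain parameter ranges rather than letting them cluster. The sketch below is a minimal illustration only: the three parameters and their ranges are invented, and a real perturbed-parameter experiment involves far more care in choosing parameters, ranges and diagnostics.

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n_runs, bounds):
    """n_runs points in len(bounds) dimensions: each parameter's range is
    split into n_runs strata and each stratum is sampled exactly once."""
    d = len(bounds)
    # one stratified, independently shuffled column per parameter
    u = (rng.permuted(np.tile(np.arange(n_runs), (d, 1)), axis=1).T
         + rng.random((n_runs, d))) / n_runs
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Hypothetical ill-constrained simulator parameters (made-up ranges):
bounds = [(0.1, 1.0),   # e.g. a cloud entrainment coefficient
          (0.5, 5.0),   # e.g. an ocean mixing parameter
          (0.0, 2.0)]   # e.g. an aerosol scaling factor

design = latin_hypercube(30, bounds)  # 30 runs spanning the parameter cube
print(design.shape)                   # (30, 3): one simulator run per row
```

Each row of the design would then be one simulator run, and the spread of the resulting predictions gives some handle on how wrong any single run might be.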

3. Models agree with each other

None of this means that climate models are useless….Their vision of the future has in some ways been incredibly stable. For example, the predicted rise in global temperature for a doubling of CO2 in the atmosphere hasn’t changed much in more than 20 years.

This is the part of the modelling section I disagree with. Mark and Patrick argue that consistency in predictions through the history of climate science (such as the estimates of climate sensitivity in their figure) is an argument for greater confidence in the models. Of course inconsistency would be a pointer to potential problems. If changing the resolution or adding processes to a GCM wildly changed the results in unexpected ways, we might worry about whether they were reliable.

But consistency is only necessary, not sufficient, to give us confidence. Does agreement imply correctness? I think instinctively most of us would say no. The majority of my friends might have thought the Manic Street Preachers were a good band, but that doesn’t mean they were right.

In my work with Jonty and Mat Collins, we try to quantify how similar a collection of simulators is to reality. This is represented by a number we call ‘kappa’, which we estimate by comparing simulations of past climate with reconstructions based on proxies like pollen. If kappa equals one, then reality is essentially indistinguishable from the simulators. If kappa is greater than one, the simulators are more like each other than they are like reality. And our estimates of kappa so far? All greater than one, sometimes substantially.
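To give a flavour of the idea (and only a flavour: this caricature is not the definition of kappa used in the actual work, which rests on a more careful statistical framework), one can ask how far reality sits from the ensemble, measured in units of the ensemble’s own spread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated stand-ins: ten simulators' values of some past-climate quantity,
# and a proxy-based reconstruction of the same quantity. In this toy setup
# the simulators cluster tightly around 1.0 while "reality" sits at 1.6,
# so they resemble each other more than they resemble reality.
simulators = rng.normal(loc=1.0, scale=0.2, size=10)
reality = 1.6

# Caricature of kappa: distance of reality from the ensemble mean, in units
# of the ensemble spread. A value near 1 means reality looks roughly like
# another ensemble member; well above 1 means the ensemble is overconfident.
kappa = abs(reality - simulators.mean()) / simulators.std(ddof=1)
print(round(kappa, 2))
```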

The authors do make a related point earlier in the article:

Paul Valdes of Bristol University, UK, argues that climate models are too stable, built to ‘not fail’ rather than to simulate abrupt climate change.

Many of the palaeoclimate studies by BRIDGE (one of my research groups) and others show that simulators respond too little to change when compared with reconstructions of the past. They are sluggish and stable, not easily moved from the present-day climate. This could mean that they are underestimating future climate change.

In any case, neither sense of the word ‘stability’, whether the consistency of model predictions or the degree to which a simulator reacts to being prodded, is a good indicator of model reliability.

Apart from all this, the climate sensitivity estimates (as shown in their figure) mostly have large ranges, so I would argue that in this case consistency does not mean much…

Warning: here be opinions

Despite the uncertainty, the weight of scientific evidence is enough to tell us what we need to know. We need governments to go ahead and act…We do not need to demand impossible levels of certainty from models to work towards a better, safer future.

This being a science and not a policy blog, I’m not keen to discuss this last part of the article, and would prefer that the comments below not be dominated by it either. I would only like to point out, to those who have not heard of them, the existence (or concept) of “no-regrets” and “low-regrets” options. Chapter 6 of the IPCC Special Report on ‘Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation’ (SREX) describes them:

Options that are known as ‘no regrets’ and ‘low regrets’ provide benefits under any range of climate change scenarios…and are recommended when uncertainties over future climate change directions and impacts are high.

Many of these low-regrets strategies produce co-benefits; help address other development goals, such as improvements in livelihoods, human well-being, and biodiversity conservation; and help minimize the scope for maladaptation.

No-one could argue against the aim of a better, safer future; only (and endlessly) about the way we get there. Again I ask: please stay on-topic and discuss the science below the line.

 
 

*I try to use ‘simulator’, because it is a more specific word than ‘model’. I will also refer to climate simulators by their most commonly-used name: GCMs, for General Circulation Models.

**”Uncertainty in climate science and climate policy”, chapter contributed to “Conceptual Issues in Climate Modeling”, Chicago University Press, E. Winsberg and L. Lloyd eds, forthcoming 2013.

***Just like the number of pixels of a digital camera, the resolution of a simulator is how much detail it can ‘see’. In the climate simulator I use, HadCM3, the pixels are about 300km across, so the UK is made of just a few. In weather simulators, the pixels are approaching 1km in size.
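As a back-of-envelope check on “just a few”, using a rounded UK area of about 245,000 km² and pretending grid cells are square (real latitude-longitude grids are not):

```python
# Rough count of grid cells covering the UK at different resolutions.
# UK area rounded to ~245,000 km^2; cells treated as square for simplicity.
uk_area_km2 = 245_000
for cell_km in (300, 12, 1):   # HadCM3; 2004 weather model; ~current ambition
    print(f"{cell_km:>3} km grid: ~{uk_area_km2 / cell_km**2:,.0f} cells")
```

At 300km the UK is indeed only about three pixels; at 1km it is a quarter of a million.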

Written by Tamsin Edwards

