
Overselling k-scale? Hmm


Guest commentary from Paul Bates, Peter Bauer, Tim Palmer, Julia Slingo, Graeme Stephens, Bjorn Stevens, Thomas Stocker and Georg Teutsch

Gavin Schmidt claims that the benefits of k-scale climate models (i.e. global climate models with grid spacing on the order of 1 km) have been “potentially somewhat oversold” in two recent Nature Climate Change papers. By this, we suppose Gavin means that the likely benefits of k-scale cannot justify the cost and time their development demands.

The benefit of k-scale — which is laid out in the papers (Slingo et al., 2022 and Hewitt et al., 2022) and only briefly recapitulated here — is that it enables the use of physics, instead of fitting, to represent some of the most important components of the climate system (precipitating systems in the atmosphere, eddies in the ocean, and topographic relief), and hence build more reliable models.

Those who haven’t been following the technology might be surprised to learn how far we’ve come. Later this year the NICAM group will perform global simulations on a 220 m grid. If they had the Fugaku supercomputer to themselves, they could deliver 1500 years of throughput with their 3.5 km global coupled model within a year. Likewise, at the MPI in Hamburg, benchmarking of a 1.25 km version of ICON on pre-exascale platforms suggests that, over the course of a year, DOE’s Frontier machine could deliver 800 years of coupled 1.25 km output.
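To put those throughput figures in the units modellers usually quote, simulated years per wall-clock day (SYPD), here is a minimal back-of-the-envelope sketch. The function name and the conversion are illustrative, not part of either benchmarking study; the only inputs are the two numbers quoted above.

```python
# Convert "N simulated years delivered over one wall-clock year" into
# simulated years per day (SYPD), the conventional throughput metric.

def sypd(simulated_years: float, wallclock_years: float = 1.0) -> float:
    """Simulated years per wall-clock day for a given campaign."""
    return simulated_years / (wallclock_years * 365.25)

# NICAM at 3.5 km on Fugaku: 1500 simulated years in one wall-clock year
print(f"NICAM 3.5 km: {sypd(1500):.1f} SYPD")   # ~4.1 SYPD

# ICON at 1.25 km on Frontier: 800 simulated years in one wall-clock year
print(f"ICON 1.25 km: {sypd(800):.1f} SYPD")    # ~2.2 SYPD
```

In other words, both configurations run at a few simulated years per day, which is what makes multi-decadal k-scale experiments plausible on a realistic project timescale.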

Yes, investments in science and software engineering will be required to get the most out of k-scale models running on exascale machines. This realization motivates new programmes like Destination Earth (EU), as well as national projects in Germany (WarmWorld) and Switzerland (EXCLAIM). However, if we wish to benefit from the fruits of these efforts, and we must, we will need to better align the development of k-scale models with exascale machines in ways that make their developing information content accessible to all. How much would this cost? Given access to the hardware, we estimate that about 100 M€/year of additional funding would be sufficient to support the staff needed to use it to make multi-decadal, multi-model, k-scale climate information systems a reality; not by 2080, but by the end of this decade.

We don’t pretend that k-scale modelling will solve every problem, but it will solve many important ones, and put us in a much better position to tackle those that remain. This conviction is rooted in the experiences of scientists world-wide, who perform their numerical experiments and run their weather models at k-scale if they can. It is also why the European Centre and many leading climate centers are retooling their labs to develop k-scale coupled global models. No-one says that the benefits of high resolution, e.g. in terms of lives saved, have been oversold for weather prediction. Why should the same not be true of climate prediction? If anything is being oversold, it is the idea that business as usual, which favors fitting over physics, will be adequate to address the challenges of a changing climate.

Fig 4.42 from IPCC AR6 WG1 Report

No-one who understands the notion of nonlinearity can be comfortable with the fact that the precipitation biases in CMIP-class models are larger than the signal, and that the oceans — which buffer emissions and export enthalpy to ice-sheets — behave like asphalt. As societies begin to confront the reality of climate change, and the existential threats it poses, our science cannot countenance complacency. To build resilience to the changing nature of weather extremes, communities and countries need to know what is coming — more droughts and heat waves, or more storms and floods, and with what intensity, scale and duration? The brutal truth is that we don’t know enough, a fact testified to by, for example, Fig. 3 and Fig. 8 of the SPM, or Fig 4.42 of the full AR6 report (right).

If six phases of CMIP have taught us anything, it is that a seventh phase based on similar models won’t help. Unless we up the scientific ante, nothing will change.

If people are happy to endorse satellite missions to observe convective cloud systems, surely they should support the development of models which have the capability to assimilate the data from such missions and then exploit them in a physically robust manner? We have no beef with the argument that the current generation of climate models has helped advance climate science; we just happen to be convinced that we can, and must, do better — ideally by working together.

References


  1. J. Slingo, P. Bates, P. Bauer, S. Belcher, T. Palmer, G. Stephens, B. Stevens, T. Stocker, and G. Teutsch, “Ambitious partnership needed for reliable climate prediction”, Nature Climate Change, vol. 12, pp. 499-503, 2022. http://dx.doi.org/10.1038/s41558-022-01384-8


  2. H. Hewitt, B. Fox-Kemper, B. Pearson, M. Roberts, and D. Klocke, “The small scales of the ocean may hold the key to surprises”, Nature Climate Change, vol. 12, pp. 496-499, 2022. http://dx.doi.org/10.1038/s41558-022-01386-6

The post Overselling k-scale? Hmm first appeared on RealClimate.