
Two opinion pieces (Slingo et al. and Hewitt et al.) and a supportive Nature Climate Change editorial were published this week, extolling the prospects for what they call “k-scale” climate modeling. These are models that would have grid boxes around 1 to 2 km in the horizontal – some 50 times smaller than what was used in the CMIP6 models. This would be an enormous technical challenge and, while it undoubtedly would propel the science forward, the proclaimed benefits and timeline are perhaps being somewhat oversold.
The technical wall to climb

Climate model resolution has always lagged the finest-scale weather model resolution which, for instance, at ECMWF is now around 9 km. This follows from the need to run for much longer simulation periods (centuries, as opposed to days; a factor of ~5000 more computation), and to include more components of the climate system (the full ocean, atmospheric chemistry, aerosols, bio-geochemistry, ice sheets etc.; another factor of 2). These additional cost factors (~10,000) for climate models can be expressed in terms of resolution, since each doubling of resolution leads to about a factor of 10 increase in cost (2 horizontal dimensions, half the timestep, and a not-quite proportionate increase in vertical resolution and/or complexity). Thus you would expect climate model resolutions to be around 2⁴ (i.e. 16) times coarser than current weather models (and lo, 16×9 km is ~144 km).
For standard climate models to get to ~1 km resolution then, we need at least a factor of 10⁶ increase in computation. For reference, effective computational capacity is increasing at about a factor of 10 per decade. At face value then, one would not expect k-scale climate models to be standard before around 2080. Of course, special efforts can be made to push the boundaries well before then, and indeed these efforts are underway and referenced in the papers linked above. But, to be clear, many things will need to be sacrificed to get there early (initial condition ensembles, forcing ensembles, parameter ensembles, interactive chemistry, biogeochemical cycles, paleoclimate etc.) which are currently thought of as essential in order to bracket internal and structural uncertainties.
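For anyone who wants to check the back-of-the-envelope numbers, here is a minimal sketch of the scaling argument in Python. The factors are simply the rough figures quoted above, and the ~100 km starting resolution and the cube-law cost scaling used for the final step are illustrative assumptions that reproduce the “at least 10⁶” lower bound, not precise values.

```python
import math

# Rough reproduction of the scaling argument in the text; all numbers are the
# approximate factors quoted above, not precise values.

weather_res_km = 9        # finest operational weather model grid (ECMWF)
length_factor = 5000      # centuries of simulation rather than days
components_factor = 2     # full ocean, chemistry, aerosols, ice sheets, etc.
cost_per_doubling = 10    # ~10x cost for each halving of the grid spacing

# Why climate models run ~16x coarser than weather models:
extra_cost = length_factor * components_factor        # ~10,000
doublings = math.log(extra_cost, cost_per_doubling)   # ~4
climate_res_km = weather_res_km * 2**doublings        # ~144 km

# Lower bound on the cost of going from ~100 km to ~1 km, counting only the
# two horizontal dimensions and the shorter timestep (an assumption here):
resolution_ratio = 100 / 1
lower_bound_cost = resolution_ratio**3                # ~10^6

# At ~10x effective computing capacity per decade, that takes ~6 decades:
decades_needed = math.log10(lower_bound_cost)         # ~6, i.e. circa 2080

print(f"climate resolution ~{climate_res_km:.0f} km, "
      f"cost factor ~{lower_bound_cost:.0e}, "
      f"~{decades_needed:.0f} decades")
```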
Both the Slingo and Hewitt papers suggest some shortcuts that could help, such as machine learning to replace computationally expensive parameterizations. But since part of the attraction of k-scale modeling is that many parameterizations will no longer be needed, the scope for this is limited. Dedicated hardware, rather than general purpose computing, has also been posited, and that has historically helped (temporarily) in other fields of physics. Unfortunately, the market for climate modeling supercomputers is not large enough to really drive the industry forward on its own, and so it’s almost inevitable that general purpose computing will advance faster than application-specific bespoke machines.
The ‘ask’ is therefore considerable.
Potential breakthroughs
Increasing resolution in climate models has, historically, allowed for new emergent behaviour and improved climatology. However, the big question in climate prediction is the sensitivity to changing drivers, and this has not shown much correlation (if any) with resolution. Conceptually, it’s easy to imagine cases where an improved background climate sharpens the predictions of change, e.g. in regions with strong precipitation gradients. Similarly, one can imagine (as the authors of Hewitt et al. do) that the sensitivity of ocean circulation will be radically different when mesoscale eddies are explicitly included. Indeed, the worth of these models will rely almost entirely on these kinds of new rectification effects, where the inclusion of smaller scales makes the larger scale response different. If the k-scale models produce dynamic sensitivities of the North Atlantic overturning circulation or the jet stream position similar to those of current models, that would be interesting and confirming, but I think it would also be slightly disappointing.
There are real and important targets for these models at regional scales. One would be the interaction of the ocean and the ice sheets in the ice shelf cavity regions around Antarctica, but note that we are still ignorant of key boundary conditions (like the shape of the cavities!), and so more observations would also be needed. In the atmosphere, the mesoscale organization of convection would clearly be another target. Indeed, there are multiple other areas where such models could be tapped to provide insight or parameterizations for the more standard models. All of this is interesting and relevant.
Missing context?
But neither of the comments nor the editorial discusses the issue of climate sensitivity in the standard sense (the warming of the planet if CO2 is doubled). The reason is obvious. The sensitivity variations in the existing CMIP6 ensemble are both broader than the observational constraints and mostly due to variations in cloud micro-physics, which would still need to be parameterized in k-scale models. Thus there is no expectation (as far as I can tell) that k-scale models would lead to models converging on the ‘right’ value of the climate sensitivity. Similarly, aerosol-cloud interactions are the most important forcing uncertainty and, again, this is a micro-physical issue that will not be much affected by better dynamics. Since these are the biggest structural uncertainties at present (IMO), we can’t look to k-scale models to help reduce them. Note too that it will be much longer before k-scale models are used in the paleo-climate configurations (e.g. Zhu et al., 2022) that have proved so useful in building credibility (or not) for the shifts predicted by current climate models.
More subtly, I often get the impression from these kinds of articles that progress in climate modeling has otherwise come to a standstill. Indeed, in the Slingo et al. paper, they use Fiedler et al. (2020) (in Box 1) to suggest that rainfall biases in response to El Niño have ‘remained largely unchanged over two decades’. However, Fiedler et al. actually say that “CMIP5 witnessed a marked improvement in the amplitude of the precipitation signal for the composite El Niño events. This improvement is maintained by CMIP6, which additionally shows a slight improvement in the spatial pattern”. This result is consistent with results from just the US models (Orbe et al., 2020), where the correlations related to ENSO and PDO have markedly improved in the latest round of models.

Perhaps this is a glass half-full issue, and while I agree there is still a ways to go, the models have already come far. Based on previous successes, I expect that this progress will continue even just through the normal process of model improvement.
A hybrid way forward?
Some modeling groups will continue to prioritize higher model resolution and that’s fine. We may even see cross-group and cross-nation initiatives to accelerate progress (though it probably won’t be as rapid as implied). But for many decades we are not going to be in a situation where these efforts can be the only approach. Thus, in my view, we should be planning for an integrated effort that supports cutting-edge higher resolution work, but that also improves the pipeline for parameterization development, calibration and tuning for the standard models (which will naturally increase their resolution over time, though at a slower rate). We should also be investing in hybrid efforts where, for instance, the highest resolution operational weather model ensemble (currently at 18 km resolution) could be run with snapshots of ocean temperatures and sea ice from 1.5ºC or 2ºC worlds derived from the coupled climate models. Do these simulations give different statistics for extremes than the originating coupled climate model? Can we show that the hybrid set-up is more skillful in hindcast mode? We could learn an enormous amount using existing technology with only minimal additional investment. Indeed, these simulations could be a key ‘proof of concept’ test that could support some of the more speculative statements being used to justify a full k-scale effort.
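To make the hybrid idea a little more concrete, here is a minimal sketch of what preparing the lower boundary conditions for such an experiment might look like. This is purely illustrative: the file names, the variable names (‘sst’, ‘sea_ice_fraction’), and the use of xarray with simple linear interpolation are all assumptions for the sake of example, not a description of any existing operational pipeline.

```python
# Purely illustrative sketch: take SST and sea-ice snapshots from a coupled
# climate model simulation of a warmer world and interpolate them onto the
# grid of a high-resolution weather model ensemble. File and variable names
# are hypothetical placeholders.
import xarray as xr

# Monthly-mean fields from a coupled model scenario run (hypothetical file,
# assumed to carry 'lat'/'lon' coordinates).
coupled = xr.open_dataset("coupled_model_2C_world_monthly.nc")

# Target horizontal grid of the weather model (hypothetical file holding its
# latitude/longitude coordinates).
target = xr.open_dataset("weather_model_grid.nc")

# Interpolate the ocean surface state onto the weather-model grid.
boundary = coupled[["sst", "sea_ice_fraction"]].interp(
    lat=target["lat"], lon=target["lon"], method="linear"
)

# Write one file per calendar month to serve as lower boundary conditions
# for the weather-model ensemble hindcasts.
for month, snapshot in boundary.groupby("time.month"):
    snapshot.mean("time").to_netcdf(f"hybrid_boundary_month_{month:02d}.nc")
```

The weather-model ensemble would then be run in hindcast mode on top of these fields, and its statistics for extremes compared against both observations and the originating coupled climate model, as described above.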
And so…
The Japanese Earth Simulator project, launched back in 1997 (the machine itself came online in 2002), had similar ambitions to the aims outlined in these papers. Much progress was made with the technology, with the I/O, and with the performance of the climate model. Beautiful visualizations were created. Yet I have the impression that the impact on the wider climate modeling effort has not been that profound. Are there any parameterizations in use that used the ES as the source of the high resolution data? (My knowledge of how the various Japanese climate modeling efforts intersect is scant, so I may be wrong on this. If so, please let me know.)
I worry that these newly proposed efforts will be focused more on exercising flashy new hardware than on providing insight and usable datasets. I worry that implicit claims that higher resolution will improve climate predictions as much as it has improved weather forecasts will backfire. I also worry that the excitement of shiny new models will lead to neglect of the workhorse climate model systems that we will still need for many years (decades!) to come.
Over the years, we have heard frequent claims that paradigm-shattering high resolution climate models are just around the corner and that they will revolutionize climate modeling. At some point this will be true – but perhaps not quite yet.
References
J. Slingo, P. Bates, P. Bauer, S. Belcher, T. Palmer, G. Stephens, B. Stevens, T. Stocker, and G. Teutsch, “Ambitious partnership needed for reliable climate prediction”, Nature Climate Change, vol. 12, pp. 499-503, 2022. http://dx.doi.org/10.1038/s41558-022-01384-8
H. Hewitt, B. Fox-Kemper, B. Pearson, M. Roberts, and D. Klocke, “The small scales of the ocean may hold the key to surprises”, Nature Climate Change, vol. 12, pp. 496-499, 2022. http://dx.doi.org/10.1038/s41558-022-01386-6
“Think big and model small”, Nature Climate Change, vol. 12, pp. 493-493, 2022. http://dx.doi.org/10.1038/s41558-022-01399-1
J. Zhu, B.L. Otto‐Bliesner, E.C. Brady, A. Gettelman, J.T. Bacmeister, R.B. Neale, C.J. Poulsen, J.K. Shaw, Z.S. McGraw, and J.E. Kay, “LGM Paleoclimate Constraints Inform Cloud Parameterizations and Equilibrium Climate Sensitivity in CESM2”, Journal of Advances in Modeling Earth Systems, vol. 14, 2022. http://dx.doi.org/10.1029/2021MS002776
S. Fiedler, T. Crueger, R. D’Agostino, K. Peters, T. Becker, D. Leutwyler, L. Paccini, J. Burdanowitz, S.A. Buehler, A.U. Cortes, T. Dauhut, D. Dommenget, K. Fraedrich, L. Jungandreas, N. Maher, A.K. Naumann, M. Rugenstein, M. Sakradzija, H. Schmidt, F. Sielmann, C. Stephan, C. Timmreck, X. Zhu, and B. Stevens, “Simulated Tropical Precipitation Assessed across Three Major Phases of the Coupled Model Intercomparison Project (CMIP)”, Monthly Weather Review, vol. 148, pp. 3653-3680, 2020. http://dx.doi.org/10.1175/MWR-D-19-0404.1
C. Orbe, L. Van Roekel, Á.F. Adames, A. Dezfuli, J. Fasullo, P.J. Gleckler, J. Lee, W. Li, L. Nazarenko, G.A. Schmidt, K.R. Sperber, and M. Zhao, “Representation of Modes of Variability in Six U.S. Climate Models”, Journal of Climate, vol. 33, pp. 7591-7617, 2020. http://dx.doi.org/10.1175/JCLI-D-19-0956.1