Weather forecast models. Is higher resolution always better?
johnckealy last edited by pavelneuman
After three years of doctoral study in numerical weather prediction, I recently forced myself to ask the question: what have I learned? Or, more to the point, what do I now know that might actually be useful to the average user of a weather forecast? Today, I'm going to discuss model resolution.
I often hear people talk about model resolution, with varying degrees of understanding about what it actually means. To put it simply, think of a giant grid – laid out evenly across the earth.
The resolution of the model is the distance between each node on this grid. One thing you might notice about doing this on a sphere: the "resolution" in the east–west direction is largest at the equator, and becomes vanishingly small near the poles.
Take the American GFS (now a.k.a. the FV3) model. Often it is quoted as having a resolution of 22km, but other times, it seems to be 27km. In actual fact, it has a resolution of 0.25°. Just google for a latitude–longitude distance calculator and you'll see that 0.25° is about 27km at the equator, but it's about 22km over Washington DC.
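You can check these numbers yourself with a couple of lines of code. The sketch below uses the simple spherical-Earth approximation (radius 6371 km); the function name is mine, and different online calculators round the result slightly differently:

```python
import math

def east_west_spacing_km(lat_deg, grid_deg=0.25, earth_radius_km=6371.0):
    """East-west distance spanned by one grid interval at a given latitude.

    On a regular latitude-longitude grid, the east-west spacing shrinks
    with the cosine of latitude, so the same 0.25 degrees covers less
    ground the further you are from the equator.
    """
    return earth_radius_km * math.radians(grid_deg) * math.cos(math.radians(lat_deg))

print(round(east_west_spacing_km(0.0)))    # equator: ~28 km
print(round(east_west_spacing_km(38.9)))   # Washington DC latitude: ~22 km
```

Run it at the latitude of the poles and the spacing goes to zero, which is exactly the problem discussed next.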
This increase in resolution towards the poles is a major problem in NWP, but some really cool solutions are emerging.
Now, if you have more computing power to throw at the model, you can decrease the spacing between the grid boxes. Though this kind of computer power has traditionally been kept in-house by government Met agencies, a shift towards cloud (no pun intended) providers like Amazon's AWS is starting to take hold.
ECMWF's IFS model (the HIRES configuration that Windy uses) runs at 9km (remember that 9km is an approximation). There are some very obvious benefits to doing this. With more resolution, you can resolve more weather features, such as the downwind vortex spinning off the Big Island of Hawaiʻi in the image below – which the GFS (left) cannot see, and the IFS (right) can.
Higher resolution means that orography can be better resolved, and smaller-scale features can appear. This, in general, is a good thing.
In the interest of accuracy, I should mention that the GFS and IFS are actually 'spectral models' – they don't use a grid at all, and their "resolutions" are inferred.
Spectral models are created by summing large numbers of trigonometric functions, and they're developed by people who are smarter than I'll ever be. Luckily, spectral models need to be converted to a grid at some point anyway – so it's pretty harmless to think of them this way.
Before I go on, I should reiterate – high resolution, where possible, is usually a good thing. We push for it. I could show you a thousand examples to support this. But before we go on a supercomputer shopping spree, I'd like to talk about a less popular topic – the problems high resolution has now opened up.
The requirement of conjecture
When talking about resolution, people frequently skip over an incredibly important topic – parametrization (whose spelling, incidentally, nobody seems able to agree on). Without parametrization, weather forecasts would be total and complete garbage.
Within each grid box (behind the scenes, if you will), assumptions get made, and these assumptions are what makes the model work.
For instance, we don't model every photon coming from the sun. We just assume that the sun gives us X amount of Watts per square metre in location Y. We also assume that cloud droplets develop into raindrops, and melt, or condense, etc. – according to certain laws and relations that scientists have derived.
Parametrizations allow a model to work by filling in the gaps; they're also known as the "physics" of the model (while grid-scale processes are known as the "dynamics" of the model).
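To make the "X Watts per square metre in location Y" idea concrete, here is a deliberately toy sketch of the flavour of calculation a radiation parametrization does – a single bulk number standing in for every photon. The function name and the flat transmissivity factor are my own inventions; real radiation schemes are vastly more sophisticated:

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2 arriving at the top of the atmosphere

def toy_surface_shortwave(zenith_deg, transmissivity=0.75):
    """Crude clear-sky estimate of sunlight reaching the surface.

    Instead of modelling individual photons, we scale the incoming solar
    flux by the sun's angle in the sky and a single bulk factor for
    atmospheric absorption and scattering.
    """
    mu = math.cos(math.radians(zenith_deg))
    return SOLAR_CONSTANT * transmissivity * max(mu, 0.0)

print(round(toy_surface_shortwave(0.0)))   # sun overhead: ~1021 W/m^2
print(round(toy_surface_shortwave(60.0)))  # sun at 60 degrees: half that
```

Every grid box gets a number like this every timestep, and the model's dynamics never need to know how it was produced.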
The grey zone
Another assumption that we make is that thunderstorms will form and dissipate where they should; passing rain, lightning, and turbulence back to the grid, where we can see it in the output. In a convective parametrization, the updrafts within the storm cloud are not actually modelled.
The thing is, there's no reason why they can't be. Since storm clouds tend to be something like 10km deep, a model with a resolution of about half of this (known as the 'effective resolution') should, theoretically, be able to represent them explicitly. And the model will indeed try. And it will fail, spectacularly.
The 'grey zone' is a regime in NWP in which the model tries to resolve the updrafts and heat transfer in turbulent eddies and convective showers, but doesn't have the resolution to do so properly. It gets 'stuck' between the two; it can't resolve the feature, but can't parametrize it either. This leads to a grid dependence in the model, which is a very bad thing.
ECMWF will soon roll out their new 5km HIRES configuration. Hopefully, they've developed something to help out with the grey zone issue – 5km is right on the edge of starting to resolve deep convection. (4km is generally accepted as the limit of 'convection permitting' models.)
When the resolution gets too high, the assumptions that make parametrizations work start to break down, and we must either find a solution to this problem, or simply skip over the grey zone entirely and resolve each thunderstorm explicitly – something we can kind-of do in high-resolution nested models (like AROME), but certainly not with global models (yet).
Too much detail?
The next question is, how much detail do we actually want? If you had limitless computational resources, and could run your NWP model at, say, a resolution of 10 metres, would you really want to?
Let's say you need a precipitation accumulation across a twelve-hour period for your racecourse on race day. Do you need to know how many showers happened overnight? Or let's say you're a sailor, wondering whether tomorrow is a good day for a sail. Do you need to know what pattern of eddies the wind will create as it bounces off the yacht club building?
I was once a forecaster for commercial aviation. One day, we received word from HQ that a new model, known as the EURO-4, was being implemented. With the EURO-4, we had now jumped from 12km resolution to 4km. Suddenly, our once accurate display was giving us very odd wind speeds over our airfields.
Why? The new model was able to resolve convective updrafts. This was technically more representative of the real world. But we needed the average speed to make a properly representative forecast, and so had to smooth out the extra detail, just to make the forecast usable.
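The article doesn't say exactly how we smoothed the output, but the idea is a simple running average: blend each grid point with its neighbours so the resolved convective gusts stop dominating the display. A minimal sketch (function name and sample values are mine):

```python
def running_mean(values, window=3):
    """Boxcar smoother: average each point with its neighbours to damp
    grid-scale detail (e.g. resolved convective updrafts) in model output.
    The window shrinks at the edges so the output has the same length."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

gusty = [8, 22, 9, 25, 10]   # raw high-resolution wind speeds, m/s
print(running_mean(gusty))   # smoother, more representative values
```

The smoothed field throws away real detail – that's the whole point: for an airfield forecast, the representative average matters more than each individual updraft.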
As is common in science, we must drive ahead, but with a healthy dose of scepticism. Higher resolution is definitely the future. But NWP is so often misunderstood, especially now that the level of detail available is allowing us to dispense with once-essential guidance – human forecasters who spent their days drawing fronts, and using conceptual cyclone models and empirical techniques.
As with many other areas of industry, computers are rendering humans redundant. But in the rush to do so, we must not lose the knowledge that every meteorologist once possessed, knowledge that we, as direct model users, do not have – the limitations of each individual NWP model.
NWP will not be devoid of problems for many decades to come, and we, as users, must always be willing to ask the hard question: Does today's forecast represent the pinnacle of state-of-the-art NWP? Or, is it total garbage?
I thought I'd have a try at writing an article today, would be great if someone on the Windy team could let me know if it's usable, and if I've formatted it right, etc. Thanks! @pavelneuman
pavelneuman last edited by
@johnckealy Thanks, John. I am on it.
pavelneuman last edited by
@pavelneuman Thanks Pavel!
Andrea Perron last edited by
There is no one weather "model"; the forecasts are ensembles. Different approaches apply somewhat different weighting to possible trajectories. Some seem to work a little better than others sometimes.
Thanks, John. Very good