Why Are Thunderstorms Invisible to Weather Forecasters?

Yesterday The Telegraph led with the headline "Heat wave warning: health experts tell Britons to stay indoors", and last night stormy weather arrived in Britain from France.

It's July, it's hot and it's going to get hotter. And with the weekend upon us - especially if we're planning to visit the Farnborough International Air Show, attend the wedding of the year in our local village, or simply have a barbecue with friends - the sixty-four-thousand-dollar question is: are we in for a deluge?

Thunderstorms are notoriously difficult to forecast with any precision. It is perhaps a little surprising that predicting the timing and location of one of Nature's most dramatic events poses such an enduring problem for weather forecasters. After all, the weathermen did a pretty good job of forecasting the near-ceaseless downpours that inundated much of southern England during January and February, and, earlier still, the St Jude's Day Storm last October was predicted with astonishing accuracy. So why are thunderstorms so capricious?

Part of the answer lies with the "digital resolution" of modern computer simulations. Weather forecasting involves calculating how the basic physical variables - wind speed and direction, temperature, pressure, air density and humidity - change from one moment to the next at millions of data points in our atmosphere. The forecast is generated by solving tens of millions of equations on state-of-the-art supercomputers capable of over 1,000,000,000,000,000 (one thousand million million) calculations per second (a measure called a petaflop). So we have a "big data" problem and ever-bigger computers on which to crunch the numbers. But even with this computational power at our fingertips, the end result is a rather out-of-focus picture of the weather for tomorrow.
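
To get a feel for what those calculations involve, here is a toy sketch of the grid-point idea: a single made-up quantity - a warm "blob" of air - carried along a one-dimensional line of data points by a steady wind. Every number and name in it is illustrative; a real model does this in three dimensions, for all the variables at once, at millions of points.

```python
import numpy as np

# Toy illustration of grid-point forecasting: advect a warm temperature
# "blob" along a 1-D line of data points using the upwind scheme.
# All numbers are illustrative; real models work in 3-D, with millions
# of points and many interacting variables.

nx = 200                # number of data points ("pixels")
dx = 10_000.0           # grid spacing: 10 km per pixel, in metres
dt = 60.0               # time step: one minute, in seconds
wind = 20.0             # a steady 20 m/s westerly wind

x = np.arange(nx) * dx
temperature = 15.0 + 5.0 * np.exp(-((x - 500_000.0) / 50_000.0) ** 2)

# Step the advection equation dT/dt = -wind * dT/dx forward one hour.
for _ in range(60):
    # Upwind finite difference: each point looks upwind of itself.
    temperature[1:] -= wind * dt / dx * (temperature[1:] - temperature[:-1])

print(f"Warm anomaly now peaks near x = {x[temperature.argmax()] / 1000:.0f} km")
```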

If we imagine the data points are pixels - similar to the ones that make up a digital image - then each of our weather pixels represents the average weather over distances of several miles. At this resolution we can simulate quite successfully how hurricanes form and develop, because hundreds of pixels capture the entire storm. But a single thunderstorm, especially in the early stages of its development, will be "smeared out" within a single pixel. And this brings us to the real subtlety of the issue.
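
To see the smearing in action, here is another illustrative sketch, with invented numbers: average a sharp rainfall feature onto 10-kilometre weather pixels, and a broad hurricane rainband survives almost untouched while a young, few-kilometre-wide thunderstorm all but vanishes.

```python
import numpy as np

# The "smearing" problem: a feature hundreds of kilometres across
# survives averaging onto coarse weather pixels, but a storm a few
# kilometres across nearly disappears. All numbers are illustrative.

fine_dx = 1.0    # a hypothetical fine grid: 1 km spacing
coarse = 10      # each weather pixel averages a 10 km stretch

x = np.arange(0, 1000, fine_dx)   # a 1000 km line of atmosphere

def rain_rate(width_km, centre_km):
    """A Gaussian rainfall feature of the given width (mm/hr)."""
    return 20.0 * np.exp(-((x - centre_km) / width_km) ** 2)

for name, width in [("hurricane rainband", 100.0), ("new thunderstorm", 3.0)]:
    fine = rain_rate(width, 500.0)
    pixels = fine.reshape(-1, coarse).mean(axis=1)  # average into pixels
    print(f"{name}: peak {fine.max():.1f} mm/hr on the fine grid, "
          f"{pixels.max():.1f} mm/hr after averaging into 10 km pixels")
```

The hurricane's peak rainfall barely changes; the thunderstorm's is cut to a fraction of its true value - it is, in effect, invisible to the model.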

Most national weather forecasting centres generate forecasts for up to seven days ahead, every twelve hours. Each new forecast uses the previous one as a "first guess": the most recent data, from systems such as satellites and weather radar, are blended with that first guess in a process known as data assimilation. The old forecast, recalculated with the new data, then gives us the starting point for the next forecast.
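
In its simplest textbook form - one quantity, one observation, and made-up error figures - the blending at the heart of data assimilation looks like this:

```python
# A minimal sketch of data assimilation for a single quantity, say the
# temperature at one weather pixel. The variances are invented purely
# for illustration; operational schemes blend millions of values at
# once, but the weighting idea is the same.

def assimilate(first_guess, obs, guess_var, obs_var):
    """Blend a forecast first guess with an observation, weighting each
    by how uncertain it is (smaller variance means more trust)."""
    gain = guess_var / (guess_var + obs_var)  # weight given to the obs
    return first_guess + gain * (obs - first_guess)

# The previous forecast said 24.0 C here, with error variance 4.0;
# a weather station reports 27.0 C, with error variance 1.0.
analysis = assimilate(first_guess=24.0, obs=27.0, guess_var=4.0, obs_var=1.0)
print(f"Starting point for the next forecast: {analysis:.1f} C")  # 26.4 C
```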

One of the fundamental problems in weather forecasting is that we shall never have sufficient data to determine the starting point for the next forecast with anything like perfect accuracy. Instead, we have to devise ways to cope with the inevitable paucity of observations.

Forecasters have grappled with this problem since the earliest days of modern electronic computing. The key to success is to identify the principal physical forces that control the evolution of entire weather systems and to incorporate the relevant physics into the data assimilation algorithms. For example, the motion of a great swirling depression, bringing wind and rain to our shores, is governed largely by pressure differences and the effect of the Earth's rotation (known as the Coriolis force). This enables us to take observational data from one part of a weather system and exploit them, in the next forecast, in regions where observations are scarcer.
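
For readers who like to see the numbers, here is that balance - the textbook "geostrophic wind" - worked through with illustrative values. It is a simplified relation, not any forecast centre's actual code.

```python
import math

# Geostrophic balance: for large weather systems the wind blows so that
# the Coriolis force balances the pressure-gradient force. All numbers
# below are illustrative.

rho = 1.2                      # near-surface air density, kg/m^3
omega = 7.2921e-5              # Earth's rotation rate, rad/s
lat = math.radians(51.0)       # roughly the latitude of Farnborough
f = 2 * omega * math.sin(lat)  # the Coriolis parameter

# Suppose pressure falls by 10 hPa over 500 km as we head north.
dp_dy = -1000.0 / 500_000.0    # pressure gradient, Pa per metre

# Balance condition: f * u = -(1/rho) * dp/dy
u = -dp_dy / (rho * f)
print(f"Implied west-to-east wind: {u:.1f} m/s")   # about 15 m/s
```

A wind measurement in one place thus tells us something about the pressure field for hundreds of miles around - exactly the kind of physical constraint the assimilation algorithms exploit.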

The problem with forecasting the onset of a thunderstorm is that the state of the atmosphere changes quite drastically, relatively quickly, and over a small region - perhaps within a single weather pixel. In these circumstances the assimilation of data - if we're lucky enough to have useful observations at all - is a major challenge, and it remains a problem of contemporary research.

The good news is that the resolution of our computer simulations is sufficient to predict how large pools of warm moist air move across Europe, and this enables forecasters to estimate the likelihood of storms in any given region. But trying to pinpoint just where and when thunderstorms will break out remains beyond the capability of current forecasting techniques.

The air traffic controllers at Farnborough will be watching the forecast closely this weekend, and we should do so too if we're planning the wedding of the year, or simply to dine alfresco!

Ian Roulstone is Professor of Mathematics at the University of Surrey and co-author (with John Norbury, University of Oxford) of Invisible in the Storm: The Role of Mathematics in Understanding Weather (Princeton University Press, 2013).