The UK Was Never Four Weeks Behind Italy. How Did ‘Following The Science’ Go So Wrong?

Having an accurate consensus on R will be crucial in the weeks to come, Kit Yates writes.

“On the curve, we are maybe four weeks or so behind [Italy] in terms of the scale of the outbreak.” This is what Patrick Vallance told the nation at the daily press briefing on March 12. At that point, the UK had a total of 596 reported cases and just 10 deaths, whereas Italy had over 15,000 cases and upwards of 1,000 deaths. There were just two new deaths reported on March 12 in the UK. Even if that daily figure had increased to 30 deaths per day for the next four weeks, we still wouldn’t have reached Italy’s death toll. With such a huge disparity in the numbers and a low number of daily deaths, many people just swallowed this four-week figure without too much thought.

Unfortunately, this is not how either cases or deaths increase at the start of an epidemic. Instead of growing linearly – by the same amount each day – both cases and deaths increase exponentially – in proportion to their current size. The more infected people there are, the more people they will infect and the faster the cases will rise. There is a common misconception that exponential growth means fast growth. It doesn’t. At the early stages of an epidemic exponential growth can be misleadingly slow. When case numbers are low, so is their growth. But things can get out of hand incredibly quickly. This is especially dangerous if, for example, you think you are further behind the curve than you are.
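To see just how deceptive this can be, here is a minimal sketch (the starting value, the linear increment and the three-day doubling time are illustrative numbers, not taken from the UK data) comparing linear growth with exponential growth:

```python
# Illustrative comparison only: start from 10 cases, let linear growth add
# 10 cases per day, and let exponential growth double the total every 3 days.
start = 10
for day in range(0, 29, 7):
    linear = start + 10 * day
    exponential = start * 2 ** (day / 3)
    print(f"day {day:2d}: linear = {linear:4d}, exponential = {exponential:7.0f}")
```

For the first week the exponential curve actually lags behind the linear one, but by day 28 it has climbed past 6,000 while the linear curve is still below 300.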

Patients in the Circolo hospital, in Varese, Italy (Flavio Lo Scalzo / Reuters)

Vallance’s four-week figure is reflected in the SAGE minutes from March 10: “The UK is considered to be 4-5 weeks behind Italy but on a similar curve.” In reality, the UK reached the figure of 1,000 deaths just 16 days later, on March 28. The potential policy ramifications of thinking we had more time than we did, and that the epidemic was slower growing than it was, are huge. This false sense of security might have contributed to the UK’s disastrous, and quickly rescinded, ‘herd immunity’ policy and to the delay in taking measures to suppress the epidemic, which resulted in the avoidable loss of tens of thousands of lives. So why did the government’s chief scientific adviser get it so wrong?

Perhaps the most crucial figure in understanding how fast an epidemic is growing is the doubling time – the time for cases, hospitalisations or deaths to increase by a factor of two. The consistent doubling of these statistics in a fixed period of time is the hallmark of exponential growth in the early stages of an epidemic. On March 16, Boris Johnson told the press that “… without drastic action, cases could double every five or six days”. This figure is reflected in the SAGE minutes from March 18, where a doubling time of “5-7 days” is quoted. The figure likely came directly from SAGE’s modelling subgroup, SPI-M.

This doubling time explains where the 4-5 weeks figure comes from. With a doubling time of six days (in the middle of the SPI-M estimate), the time to get from the UK’s 10 daily deaths reported on March 14 to Italy’s 285 daily deaths (reported the same day) would have been 29 days – just over four weeks. But this 5-7 day doubling time was wrong. Drastically wrong.

A more realistic doubling time has been calculated to be around three days. Although SPI-M’s estimate of the doubling time was only out by a factor of two, which doesn’t sound too bad, the exponential spread of the disease means that this error is compounded – itself doubling every few days. The three-day doubling time predicts that the UK would have reached Italy’s March 14 total of 285 daily deaths around two weeks later. This prediction was borne out in reality, with the UK hitting 260 daily deaths on March 28 – after just 14 days. Of course, it is easy to make these assertions in hindsight. The crucial question is whether a three-day doubling time could and should have been estimated at the time.
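As a rough check on the arithmetic in the two paragraphs above: the number of days needed to grow from 10 daily deaths to 285 is the number of doublings required, log2(285/10) ≈ 4.8, multiplied by the doubling time. A minimal sketch, assuming pure exponential growth throughout:

```python
import math

# Days to grow from 10 daily deaths to 285, assuming pure exponential growth.
start_deaths, target_deaths = 10, 285
doublings = math.log2(target_deaths / start_deaths)   # ≈ 4.8 doublings needed

for doubling_time in (6, 3):   # days: SPI-M's mid-range estimate vs the more realistic value
    days = doublings * doubling_time
    print(f"doubling every {doubling_time} days: about {days:.0f} days to reach {target_deaths}")
```

With a six-day doubling time this gives roughly 29 days; with a three-day doubling time it gives roughly 14 days, matching the two figures above.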

Calibrated modelling using the SEIR (Susceptible-Exposed-Infected-Removed) model suggests that, using only data available at the time, it should have been clear by March 14 that the doubling time was much shorter than five days. A ball-park estimate can be derived without even resorting to sophisticated mathematical modelling. Here is a simple argument which considers only case numbers available in the public domain at the time. On March 14, the UK reported 342 new cases. On March 8, six days earlier, there were 64 new cases. Cases increased by more than a factor of five, which implies more than two doubling periods squeezed into this six-day window. This suggests a doubling time of less than three days. Although estimates of this informal nature will vary depending on the day-to-day figures, even the publicly available data at the time suggested we were much nearer two weeks behind Italy than four.
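The same back-of-the-envelope calculation can be written out explicitly: under exponential growth, the doubling time over a window is the window length multiplied by log 2 and divided by the log of the growth factor. A minimal sketch using only the two publicly reported case counts quoted above:

```python
import math

# Back-of-the-envelope doubling time from two daily case counts six days apart.
cases_mar8, cases_mar14 = 64, 342   # new UK cases reported on March 8 and March 14
window_days = 6

growth_factor = cases_mar14 / cases_mar8                      # ≈ 5.3
doubling_time = window_days * math.log(2) / math.log(growth_factor)
print(f"implied doubling time ≈ {doubling_time:.1f} days")    # ≈ 2.5 days
```

The exact value shifts with the pair of dates chosen, but the calculation itself is a one-liner that needed no specialist modelling.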

The miscalculation leading to the four-week claim can be laid squarely at the door of the mathematical modelling subgroup of SAGE. So why weren’t SPI-M’s world-leading modellers better able to calibrate their models to the early UK data? It turns out that some of the groups who contribute to SPI-M did calculate significantly shorter and more realistic doubling times at an earlier stage in the UK’s epidemic, but their estimates never found consensus within the group. Members of SPI-M have communicated to me their concerns that some modelling groups had more influence over the consensus than others.

On March 16, Neil Ferguson’s Imperial College Covid-19 Response Team published their infamous report, which used an effective doubling-time estimate of over five days – way, way too slow. This figure seems to have dominated proceedings in SPI-M. It was a long time before more accurate doubling-time figures made their way up through SAGE and on to policy-makers.

SPI-M appear to have learned lessons from this error and now have better methods of model averaging, which they are using to reach consensus estimates of R – the reproduction number of the disease. Having an accurate consensus on R will be crucial in the weeks to come, as we seek to understand the impact of easing the lockdown restrictions and attempt to avoid the mistakes of March that allowed the epidemic in the UK to get so drastically out of hand.
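For readers who want a feel for how doubling times and R relate, here is a rough, hedged illustration (this is not SPI-M’s method): if cases grow exponentially with rate r = ln 2 / doubling time, and we assume a fixed generation interval of five days purely for illustration, then R is approximately exp(r × generation interval).

```python
import math

# Rough illustration of how a doubling time maps to a reproduction number R.
# Assumes pure exponential growth and a fixed 5-day generation interval;
# both are simplifying assumptions for illustration, not SPI-M's actual method.
GENERATION_INTERVAL_DAYS = 5.0   # assumed value, for illustration only

def rough_r(doubling_time_days: float) -> float:
    growth_rate = math.log(2) / doubling_time_days   # per-day exponential growth rate
    return math.exp(growth_rate * GENERATION_INTERVAL_DAYS)

for td in (6, 3):
    print(f"doubling every {td} days -> R of roughly {rough_r(td):.1f}")
```

Under these assumptions a six-day doubling time corresponds to an R of roughly 1.8, while a three-day doubling time corresponds to an R of roughly 3.2 – another way of seeing how consequential the gap between the two estimates was.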

Kit Yates is a Senior Lecturer in Mathematical Biology at the University of Bath.
