
NHS Death Rate Statistics Should Not Be Ignored - But Do Not Tell the Whole Picture

28/02/2014 12:05 GMT | Updated 29/04/2014 10:59 BST

The credibility of the hospital standardised mortality ratio (HSMR) as a method to measure the performance of NHS hospitals has come under attack.

Speaking on BBC Radio 4's 'File on 4' programme, Professor Nick Black said that HSMR figures, which compare the number of deaths expected in a hospital, given the patients it treats, with the number of deaths that actually occur there, should be ignored because they can 'give a misleading [indication of] quality of care'. Professor Black is one of the experts conducting a review of the use of death rate statistics commissioned by Professor Sir Bruce Keogh, NHS Medical Director for England; the review is due to be completed at the end of the year.

According to Professor Black, other factors that do not relate to the standard of treatment provided could skew HSMR figures and present a misleading impression of the standard of care that patients might expect at a particular hospital.

Dr Foster, the research group that compiles HSMR data, has contested such opinions, arguing that the figures have been used to identify underperforming hospitals, such as Stafford Hospital.

The HSMR is calculated by comparing observed and expected death rates for individual conditions, which are then combined into an average figure for the whole hospital. There are a number of ways that this figure can be skewed that do not relate to the standard of treatment being provided. For example, where some departments have excellent death rates and others have poor ones, the hospital will show an unremarkable overall rate, 'hiding' the departments that are performing poorly.
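A rough numerical sketch, using invented figures purely for illustration, shows how averaging across departments can mask a poorly performing one:

```python
# HSMR = 100 * observed deaths / expected deaths (values near 100 mean "as expected").
# All numbers below are hypothetical, chosen only to illustrate the averaging effect.

departments = {
    # department: (observed deaths, expected deaths for its patient mix)
    "cardiology": (40, 80),        # far fewer deaths than expected (ratio 50)
    "general surgery": (120, 80),  # far more deaths than expected (ratio 150)
}

def hsmr(observed, expected):
    """Standardised mortality ratio, scaled so 100 means 'as expected'."""
    return 100 * observed / expected

for name, (obs, exp) in departments.items():
    print(f"{name}: {hsmr(obs, exp):.0f}")

# A hospital-wide figure pools all deaths before taking the ratio:
total_obs = sum(obs for obs, _ in departments.values())
total_exp = sum(exp for _, exp in departments.values())
print(f"whole hospital: {hsmr(total_obs, total_exp):.0f}")  # 100 - looks unremarkable
```

Here the hospital-wide figure comes out at exactly 100, even though one department has half the expected deaths and the other fifty per cent more.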

Different areas of treatment also carry different levels of risk, so the mix of patients a hospital treats will affect the number of deaths it is expected to have.

Another important consideration is how patients are classified as suffering from a particular condition. Many patients have multiple conditions, and a decision must be made about which one should be recorded as the diagnosis for the purpose of statistical analysis. A diagnosis code is entered for each patient by staff employed to do this, so there is also the chance of human error at this point.

If a hospital decides to focus its efforts on improving the diagnosis of a specific condition, such as infection, and succeeds, it will record an increased number of patients with that diagnosis code. Considered in isolation, this statistic would suggest that the hospital has more patients with infection. That would clearly be incorrect, and criticism could be directed at a hospital that is actually working hard to improve standards.
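The same point can be made with a toy calculation, again using made-up numbers:

```python
# Hypothetical figures: better recording of infection diagnoses makes it look
# as though infections have risen, even when the true number is unchanged.

true_infection_patients = 200  # actual patients with infection (unchanged throughout)

coding_rate_before = 0.60      # share of infections correctly coded before the drive
coding_rate_after = 0.90       # share correctly coded after the drive

recorded_before = round(true_infection_patients * coding_rate_before)  # 120
recorded_after = round(true_infection_patients * coding_rate_after)    # 180

# Viewed alone, recorded cases have jumped by 50% despite no real change.
increase = (recorded_after - recorded_before) / recorded_before
print(f"apparent rise in infections: {increase:.0%}")  # 50%
```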

As this figure depends on so many variables that do not relate to the standard of care provided, I agree with Professor Black that the HSMR should not be used in isolation as an indicator of poor standards, as this is likely to be misleading. However, it should not be ignored altogether: it should be considered alongside other statistics to assess whether, taken together, the data indicate a problem. Ignoring such information is not helpful. Now more than ever, we should be paying attention to the information provided by hospitals, but we need to be armed with a full explanation of what it actually means and its true significance.

As experts disagree about the validity of the data, the public are likely to feel left in limbo, uncertain about which hospital performance figures they can rely upon.

The HSMR data has been considered in the Care Quality Commission's latest round of hospital inspections. Reviews of NHS hospital performance must assess all the data available to provide the most accurate picture.

The public would understandably like to have a simple way of assessing hospital performance. We all like to have a list of the good and bad to help us to reach a considered opinion. However, this approach is dangerous when looking at statistical evidence.

The risk is that drawing attention to a particular statistic will create alarm, and may even lead to people avoiding hospitals that they consider bad performers. This may delay people receiving medical treatment, either because they travel further to an alternative hospital or because they put off going at all.

Increased openness and transparency in the NHS should be encouraged. However, we must be careful that the information becoming available is explained properly so that people know what weight to give it. Without this, people could be put off from accessing the medical treatment they need.

Suzanne Trask, partner and medical negligence specialist at Bolt Burdon Kemp - www.boltburdonkemp.co.uk