Are mortality rates really an accurate signal for poor quality?

A recent study has concluded that the NHS should not use hospital-wide mortality rates as a ‘smoke alarm’ to identify poor quality hospitals. The research, commissioned by NHS England, looked at whether standardised mortality ratios provide an accurate indicator of the number of avoidable deaths.

Though mortality rates are a more useful measure than the crude number of deaths, numerous factors can influence this number. Often, hospitals want to know whether their mortality rate is higher or lower than average, or how they compare with each other. An important factor to consider is that the chance of dying is very strongly related to age. As such, when calculating summary death statistics, allowances are made for differences between the age structures of populations – a process called standardisation.

As well as standardising for age, most mortality measures also attempt to adjust for differences in patients’ gender, diagnoses, deprivation, comorbidity and palliative care. There are two main standardised mortality measures (SMRs) – the hospital standardised mortality ratio (HSMR) and the summary hospital-level mortality indicator (SHMI) – both of which compare the number of observed deaths with the number expected.
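The core of an indirectly standardised ratio can be illustrated with a short sketch: apply reference death rates to a hospital’s own case mix to get the expected number of deaths, then compare with the observed count. All figures and age bands below are hypothetical; real HSMR and SHMI models adjust for far more than age.

```python
# Illustrative sketch of an indirectly standardised mortality ratio (SMR).
# All numbers here are hypothetical; real SMRs such as HSMR and SHMI also
# adjust for diagnosis, comorbidity, deprivation, palliative care and more.

# Hypothetical reference (e.g. national) death rates per admission, by age band
reference_rates = {"0-64": 0.005, "65-79": 0.03, "80+": 0.09}

# Hypothetical hospital data: admissions and observed deaths per age band
admissions = {"0-64": 20000, "65-79": 8000, "80+": 4000}
observed_deaths = {"0-64": 110, "65-79": 260, "80+": 390}

# Expected deaths: apply the reference rates to this hospital's own case mix
expected = sum(reference_rates[band] * admissions[band] for band in admissions)
observed = sum(observed_deaths.values())

# By convention an SMR of 100 means deaths match expectation;
# above 100 means more deaths than expected, below 100 means fewer.
smr = 100 * observed / expected
print(f"Observed: {observed}, Expected: {expected:.0f}, SMR: {smr:.1f}")
```

In this toy example the hospital records 760 deaths against 700 expected, giving an SMR of about 108.6 – higher than expected, but, as the review below argues, not by itself evidence of poor-quality care.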

The review examined whether there was any relationship between “excess mortality rates” (based on hospital-wide SMRs) and “actual avoidable deaths” (based on retrospective case record reviews carried out by experienced clinicians).

“Actual avoidable deaths” was considered a more meaningful indicator of quality of care, as it is based on clinicians’ review of each death rather than statistical modelling of routine administrative data. A moderately strong association between the two measures would have provided some reassurance of the validity of hospital-wide SMRs as a measure of mortality associated with poor-quality care.

Researchers said that producing “avoidable death rates” via case note reviews was not a good judge of quality, calling into question government plans to band hospitals according to the number of estimated avoidable deaths. The study found “no significant association” between hospital-wide standardised mortality ratios and the proportion of avoidable deaths in a trust.

A difference in SMRs between 105 and 115 would be associated with a difference in the proportion of avoidable deaths of only 0.3 percentage points. While the authors of the report concluded that hospital-wide mortality rates were not a good way of comparing trusts, they said looking at these indicators for individual, high-fatality conditions could still be valuable.

The study has its limitations – such as the small sample of case notes per hospital and the limited reliability of case note reviews in attributing the preventability of deaths (for example, agreement between two reviewers is only moderate).

The Department of Health has said the review would form part of the evidence base for their work on avoidable deaths. Sir Bruce Keogh said the report’s findings provided “additional insight and show we need to look at a whole range of methods to measure healthcare and quality”. Sir Mike Richards, the chief inspector of hospitals at the Care Quality Commission, said he agreed that “looking at mortality rates in isolation will not give a full picture of quality. This is why we look at a range of measures during every inspection, including talking to staff and patients, to assess if a trust is providing safe and effective care”.

Source: Hazel, Will. Hospital mortality rates ‘should not be smoke alarm’ for poor quality (2015, 14 July). Retrieved from:
