Sunday 20 March 2011

Counting the deaths that count

According to the American columnist H L Mencken, ‘there is always a well-known solution to every human problem – neat, plausible, and wrong.’

For example, in assessing efficiency of healthcare, nothing is simpler or more plausible than to measure length of stay. So we’ve had countless studies comparing hospitals on the basis of ‘average’ (i.e. mean) length of stay. A particular hospital may have a mean value of, say, 4.8 days against 4.5 for a peer group. That difference becomes the basis for the conclusion that if there are 80,000 inpatient stays, the hospital could save 24,000 days and close some eye-watering number of beds. All this is advanced without a thought as to whether a mean value is even appropriate for a measure like length of stay, which is usually distributed with a very long tail (small numbers of patients with massively long stays, usually because of the complexity of their condition), or whether a difference of 0.3 days is even statistically significant.
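To see why that matters, here’s a minimal sketch using entirely invented figures (the distribution parameters and the 9,500/500 split between routine and complex patients are assumptions for illustration, not data from any hospital). The long tail drags the mean well away from the typical patient’s experience, and the handling of a handful of extreme stays moves the mean by more than the differences these comparisons turn on.

```python
# A minimal sketch, using entirely invented figures, of why a mean length of
# stay can mislead when the distribution has a long tail.
import numpy as np

rng = np.random.default_rng(0)

# Most patients stay a few days; a small minority stay very much longer.
typical_stays = rng.gamma(shape=2.0, scale=1.5, size=9_500)
complex_stays = rng.gamma(shape=4.0, scale=15.0, size=500)
length_of_stay = np.concatenate([typical_stays, complex_stays])

print(f"mean length of stay:   {length_of_stay.mean():.1f} days")
print(f"median length of stay: {np.median(length_of_stay):.1f} days")

# Excluding just the longest 1% of stays shifts the mean substantially,
# while the median barely moves: a 0.3-day gap between hospitals can say
# more about a handful of extreme stays than about underlying efficiency.
trimmed = length_of_stay[length_of_stay < np.quantile(length_of_stay, 0.99)]
print(f"mean excluding the longest 1% of stays:   {trimmed.mean():.1f} days")
print(f"median excluding the longest 1% of stays: {np.median(trimmed):.1f} days")
```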

In case anyone thinks this is a wild exaggeration, we’ve seen a hospital rebuilding programme that took this kind of analysis as the basis for calculating its required number of beds, and paid the price when it discovered that the new building had far too few. 

In addition, we need to ask whether this kind of length of stay analysis even compares like with like. We’ve seen comparisons with peer groups which confidently predict efficiency savings, only to find on closer examination that the hospital treats a sub-group of complex patients that the peer group doesn't. Careless analysis can lead to bad and costly conclusions.

If length of stay taken in isolation is the simple, plausible and wrong measure of efficiency, the equivalent in the field of care quality is mortality. Now, there’s no denying that a patient’s death is not a desirable outcome. Keeping mortality down is an obvious step in keeping quality up. There are, however, two problems with the measure.

The first is that there are huge areas of hospital care in which mortality is simply too low to be useful as a blunt comparative measure. Mortality in obstetrics has now fallen to such a level, for example, that it would be perfectly possible to find just two deaths in an entire year in one hospital, and one in another. To conclude that the first delivers care that is 100% poorer than the second can only really be described as rash. Or, as Mencken would no doubt have told us, plain wrong.
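A quick simulation makes the point. In the sketch below, the number of deliveries and the underlying maternal mortality rate are invented, and the two hospitals are constructed to be identical in quality; even so, chance alone regularly produces the sort of two-to-one disparity described above.

```python
# A minimal sketch, using invented numbers, of how a "100% difference" in
# deaths can arise by chance between two hospitals of identical quality.
import numpy as np

rng = np.random.default_rng(0)

deliveries = 4_000          # assumed annual deliveries at each hospital
true_rate = 1.5 / 4_000     # assumed (identical) underlying mortality rate

trials = 100_000
deaths_a = rng.binomial(deliveries, true_rate, size=trials)
deaths_b = rng.binomial(deliveries, true_rate, size=trials)

# How often does hospital A record at least twice as many deaths as
# hospital B, despite the two being identical by construction?
at_least_double = np.mean((deaths_b > 0) & (deaths_a >= 2 * deaths_b))
print(f"P(A records at least double B's deaths): {at_least_double:.2f}")
```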

That is not to say that these individual deaths shouldn’t be monitored and investigated: rare events such as a maternal death or, say, a death following a straightforward elective procedure should be thoroughly investigated. It's simply that they cannot in isolation form the basis of an overall assessment of one hospital's care quality compared to another's.

Again, don’t think that this is a wild exaggeration – we know of reports suggesting poor performance by a clinician, based on comparisons as meaningless as these. And we’ve argued before that the use of crude mortality figures in analysing Mid Staffs hospital distorted the debate. We don't, of course, mean that there were no quality problems at Mid Staffs: there were, and it was appropriate to address them.

Interestingly, these problems were highlighted by patients and relatives some time before various organisations began to raise any issues. Unfortunately, once information analysis began to appear, it focused on mortality data and drew conclusions that the figures couldn't properly support. Many of the problems at Mid Staffs concerned wider quality issues that patients identified but that weren't measured, or that, when highlighted, did not appear to be investigated.

The temptation to make mortality a focus is understandable. It's a measure that's easy to obtain because hospitals routinely record their deaths. So it's natural to want to tot them up and convince ourselves that we then have a valid measure of comparison.

Well, do we? Here we come to the second objection to mortality as an indicator. Let's start by taking another look at length of stay. If a hospital keeps patient stays short, might that not reflect a lot of early discharges, including perhaps a number of patients who go home and die there, with the result that they’re not included in the hospital’s death figures?

And what about transfers? If one hospital is transferring a high proportion of particularly ill patients to a tertiary referral centre, won’t its own mortality figures be artificially reduced while the receiving institution’s are inflated?

That’s why if you’re going to use mortality as a measure of quality, you need firstly to ensure that you’re applying it to specialties, conditions or procedures where it makes sense, and secondly that you’re measuring not just in-hospital mortality but also mortality after discharge, choosing a period beyond discharge that is appropriate to the patient's condition.

HES data has been analysed with Office for National Statistics death records linked to it, on an annual basis, for some years now – since about 2002. This means that since then it has been possible to take a look at mortality following hospital treatment in a much more comprehensive and useful way. What’s surprising is how few NHS and commercial providers have taken advantage of this information.
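As an illustration of what that linkage makes possible, here’s a minimal sketch of a post-discharge mortality calculation. The tiny table and its column names (provider, discharge_date, date_of_death) are hypothetical stand-ins rather than real HES or ONS field names, and the 30-day window from discharge is just one possible convention; the point is simply that deaths at home after discharge count alongside deaths in hospital.

```python
# A minimal sketch of computing 30-day mortality from linked episode and
# death records. The data and column names below are hypothetical.
import pandas as pd

episodes = pd.DataFrame({
    "provider": ["A", "A", "B", "B", "B"],
    "discharge_date": pd.to_datetime(
        ["2010-01-05", "2010-02-10", "2010-01-20", "2010-03-01", "2010-03-15"]),
    "date_of_death": pd.to_datetime(
        ["2010-01-25", None, None, "2010-05-01", "2010-03-20"]),
})

# A death counts if it occurs within 30 days of discharge, whether it
# happens in hospital or afterwards at home.
days_to_death = (episodes["date_of_death"] - episodes["discharge_date"]).dt.days
episodes["died_within_30_days"] = days_to_death.between(0, 30)

# 30-day mortality rate by provider.
print(episodes.groupby("provider")["died_within_30_days"].mean())
```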

It’s not as though there haven’t been innovative thinkers who’ve used this kind of data to produce interesting conclusions. For example, we have the National Centre for Health Outcomes Development (NCHOD) studies on 30-day mortality following emergency admissions for stroke. This has all the characteristics you’d want: an area of care – emergency stroke admissions – for which mortality is a useful indicator, and the right measure, one that takes in much more than deaths in hospital.

As it happens, this analysis itself needs to be taken further. Mortality, like length of stay, is only one measure and can still mislead when used in isolation. It really needs to be supplemented by indicators concerning the quality of the care itself. Useful measures have been proposed and are being used by the Royal College of Physicians, including the type of facility that treats the patient, the provision of thrombolysis and the effective monitoring of patients in the first few days after admission, all factors which improve the outcome of care. This is a subject to which we might return in a future post.

Using the Royal College of Physicians' indicators would improve the analysis. However, the figures published by the Department of Health back in 2002/3 at least showed a way forward towards a more rational use of mortality figures themselves. It's disappointing that, eight years on, so few have followed that promising lead.

Perhaps we can start to catch up before the decade is over.