Thursday 19 August 2010

Indicators are only useful if they’re useful indicators

We’ve seen that pulling healthcare data from disparate sources, linking it via the patient and building it into pathways of care are the essential first steps in providing useful information for healthcare management. They allow us to analyse what was done to treat a patient, how it was done and when it was done. Paradoxically, however, we have ignored the most fundamental question of all: why did we do it in the first place?

The goal of healthcare is to leave a patient in better health at the end than at the start of the process. What we really need is an idea of what the outcome of care has been.

The reason why we tend to sidestep this issue is that we have so few good indicators of outcome.

In this post we’re going to look at the difficulties of measuring outcome. In another, we’ll look at the intelligent use that is being made of existing outcome measures despite those difficulties, and at initiatives to collect new indicators.

The first thing to say about most existing indicators is that they are at best proxies for outcomes rather than direct measures of health gain. They're also usually negative, in that they represent things that one would want to avoid, such as mortality or readmissions.

Calculating them is also fraught with problems. A readmission, for example, tends to be defined as an emergency admission within a certain time after discharge from a previous stay. The obvious problem is that a patient who has a perfectly successful hernia repair, say, and is then admitted following a road traffic accident a week later will be counted as a readmission unless someone specifically excludes the case from the count.
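To make that concrete, here is a minimal sketch of a naive readmission count. The spell records, field layout and 28-day window are all hypothetical, chosen only to show how the road traffic accident slips through as a false positive:

```python
from datetime import date, timedelta

# Hypothetical spell records: (patient_id, admission_date, discharge_date, admission_method).
spells = [
    ("p1", date(2010, 3, 1), date(2010, 3, 2), "elective"),    # hernia repair
    ("p1", date(2010, 3, 9), date(2010, 3, 12), "emergency"),  # road traffic accident
]

WINDOW = timedelta(days=28)  # assumed readmission window

def count_readmissions(spells):
    """Count emergency admissions within WINDOW of a previous discharge."""
    by_patient = {}
    for spell in sorted(spells, key=lambda r: r[1]):
        by_patient.setdefault(spell[0], []).append(spell)
    count = 0
    for stays in by_patient.values():
        for prev, curr in zip(stays, stays[1:]):
            if curr[3] == "emergency" and timedelta(0) <= curr[1] - prev[2] <= WINDOW:
                count += 1
    return count

print(count_readmissions(spells))  # 1 -- the accident is counted as a readmission (a false positive)
```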

At first sight, it might seem that we should be able to guard against counting this kind of false positive by insisting that the readmission should be to the same specialty as the original stay, or that it should have the same primary diagnosis. But if the second admission had been the result of a wound infection, the specialty probably wouldn’t have been the same (it might have been General Surgery for the hernia repair and General Medicine for the treatment of the infection). The diagnoses would certainly have been different. Yet this would have been a genuine readmission, and excluding this kind of case would massively understate the total.
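Extending the sketch above, such a filter is easy to write but illustrates the problem: it would indeed drop the road traffic accident, yet it also drops the wound infection. The field names and ICD-10-style codes here are purely illustrative:

```python
# Hypothetical index stay and second stay, each with a specialty and primary diagnosis.
index_stay  = {"specialty": "General Surgery",  "diagnosis": "K40"}  # hernia repair
second_stay = {"specialty": "General Medicine", "diagnosis": "T81"}  # post-operative wound infection

def passes_filter(prev, curr):
    """Keep only readmissions matching the original stay -- too strict in practice."""
    return curr["specialty"] == prev["specialty"] or curr["diagnosis"] == prev["diagnosis"]

print(passes_filter(index_stay, second_stay))  # False: a genuine readmission is filtered out
```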

It’s hard to think of any satisfactory way of excluding false positives by some kind of automatic filter which wouldn’t exclude real readmissions.

Another serious objection to the use of readmission as an indicator is that, in general, hospitals don’t know about readmissions to another hospital. This will depress the readmission count, possibly quite substantially.

Things are just as bad when it comes to mortality. Raw figures can be deeply misleading. The most obvious reason is that clinicians who handle the most difficult cases, precisely because of the quality of their work, may well have a higher mortality rate than others. Some years ago, I worked with a group of clinicians who had an apparently high death rate for balloon angioplasty. As soon as we adjusted for the risk of chronic renal failure (by taking haematocrit values into account), it was clear they were performing well. It was because they were taking a high proportion of patients at serious risk of renal failure that the raw mortality figures were high.

This highlights a point about risk adjustment. Most comparative studies of mortality do adjust for risk, but usually only for age, sex and deprivation. This assumes that mortality is affected by those three factors in the same way everywhere, and there is no strong evidence that it is. More importantly, as the balloon angioplasty case shows, we need to adjust for risk differently depending on the area of healthcare we’re analysing.
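As a rough illustration of what risk adjustment does to the raw numbers, here is a minimal sketch of indirect standardisation. The per-case risks are assumed to come from some risk model appropriate to the procedure, and the figures are made up; the point is only that a crude mortality of 50% can still amount to expected or better performance once case mix is taken into account:

```python
# Minimal sketch of indirect standardisation (hypothetical figures).
# Each case carries an estimated probability of death from an assumed risk model,
# e.g. one that accounts for the patient's risk of chronic renal failure.
cases = [
    {"died": True,  "expected_risk": 0.60},
    {"died": True,  "expected_risk": 0.70},
    {"died": False, "expected_risk": 0.55},
    {"died": False, "expected_risk": 0.25},
]

observed = sum(c["died"] for c in cases)           # 2 observed deaths
expected = sum(c["expected_risk"] for c in cases)  # 2.10 expected deaths

smr = observed / expected  # standardised mortality ratio: ~1.0 means "as expected"
print(f"Crude mortality {observed / len(cases):.0%}, expected deaths {expected:.2f}, SMR {smr:.2f}")
```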

This is clear in Obstetrics, for instance. Thankfully, in the developed world at least, death in maternity services is rare these days, so mortality is no longer a useful indicator. On the other hand, the rates of episiotomy, caesarean section and perineal tears are all relevant indicators. They need to be adjusted for the specific risk factors that are known to matter in Obstetrics, such as the height and weight of the mother, whether or not she smokes, and so on.

Mental Health is another area where mortality is not a helpful indicator. Equally, readmission has to be handled in a different way, since the concept of an emergency admission doesn't apply to Mental Health, and generally we would be interested in a longer gap between discharge and readmission than in Acute care.

Readmission rates and mortality are indicators that can highlight an underlying problem that deserves investigation. They have, however, to be handled with care and they are only really useful in a limited number of areas. If we want a more comprehensive view of outcome quality, we are going to have to come up with new measures, which is what we’ll consider when we look at this subject next.
