Sunday 1 August 2010

Lies, damned lies and misused healthcare statistics


If you live in Stafford, as I do, it comes as a great relief to see that the Care Quality Commission (CQC) has decided to lift five of the six restrictions it had placed on the Mid Staffordshire Trust, our local acute hospital, following the scandal there over care quality. It was particularly gratifying to read that mortality rates have fallen and numbers of nurses are up to more sensible levels. It’s obviously good news that a hospital that had been having real problems with quality seems to be well on the way to solving them.
On the other hand, much of the original scandal had missed the fundamental point. Much was made of the finding that between 400 and 1200 more people had died in the hospital than would have been expected. The implication was that poor quality had led to excess deaths, even though there was no way of linking the deaths to care quality defects. Indeed, the Healthcare Commission, the predecessor of the CQC, had decided to take action over the quality of care at the Trust, but not because of the mortality figures, which it had decided not to publish and was irritated to see leaked.

Now on 28 May, Tim Harford’s Radio 4 programme More or Less examined the use of statistics about Mid-Staffs. David Spiegelhalter, Professor of the Public Understanding of Risk at the University of Cambridge, warned listeners that we need to be careful with the concept of ‘excess deaths’, because it really only means more deaths than the average and ‘half of all hospitals will have excess deaths, half of all hospitals are below average.’

What we need to look out for is exceptionally high values, although even there we have to be careful as there are many reasons why a hospital might be extreme: ‘first of all you’ve just got chance, pure randomness: some years a hospital will be worse than average even if it’s average over a longer period.’
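Spiegelhalter's point about pure randomness is easy to demonstrate with a small simulation (all figures here are assumed purely for illustration, not taken from real hospital data): give a hundred hospitals exactly the same underlying death rate, and in any one year roughly half of them will still record 'excess deaths', with some looking considerably worse than the average by chance alone.

```python
import random

random.seed(42)

N_HOSPITALS = 100   # identical hospitals with the same true risk (assumption)
ADMISSIONS = 5000   # admissions per hospital per year (assumption)
TRUE_RATE = 0.08    # underlying death rate, identical everywhere (assumption)

# Simulate one year of deaths per hospital. Every hospital has exactly
# the same underlying risk, so any differences are pure chance.
deaths = [sum(random.random() < TRUE_RATE for _ in range(ADMISSIONS))
          for _ in range(N_HOSPITALS)]

mean_deaths = sum(deaths) / N_HOSPITALS
above_average = sum(d > mean_deaths for d in deaths)

print(f"Hospitals with 'excess deaths' (above average): {above_average} of {N_HOSPITALS}")
print(f"Worst hospital: {max(deaths)} deaths against a mean of {mean_deaths:.0f}")
```

Run it a few times with different seeds and the 'worst' hospital changes, which is exactly the problem: a league-table position can be an artefact of the dice.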

Spiegelhalter also questions the techniques used to make the statistics more relevant, such as risk adjustment. That kind of adjustment aims to take into consideration the extent to which mortality might be affected by factors external to the hospital, such as race, poverty or age. That should give a better way of comparing hospitals, but in reality the procedure is inadequate because ‘there’s always variability between hospitals that isn’t taken into account by this risk adjustment procedure, not least of which is that we assume that factors such as age, ethnicity and deprivation have exactly the same effect in every hospital in the country’, an assumption that we’re not justified in making.
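The way that assumption bites can be sketched numerically (every figure below is invented for the sketch). A standardised mortality ratio compares observed deaths with the deaths 'expected' from national rates per risk band. If, say, age has a stronger effect in one hospital than the national adjustment assumes — perhaps because its elderly patients are frailer than the age band alone captures — the hospital shows an SMR above 1 with no quality problem at all:

```python
# National risk-adjustment rates per age band (illustrative figures only).
national_rate = {"under_65": 0.02, "over_65": 0.10}

# A hypothetical hospital where the over-65 band carries more risk than
# the national rate assumes -- the uniform-effect assumption fails here.
admissions = {"under_65": 3000, "over_65": 2000}
local_rate = {"under_65": 0.02, "over_65": 0.14}

# Observed deaths follow the local rates; 'expected' deaths follow the
# national rates, as a standard risk-adjustment procedure would compute.
observed = sum(admissions[band] * local_rate[band] for band in admissions)
expected = sum(admissions[band] * national_rate[band] for band in admissions)

smr = observed / expected  # standardised mortality ratio
print(f"Observed {observed:.0f}, expected {expected:.0f}, SMR = {smr:.2f}")
```

An SMR of around 1.3 here would read as 30% 'excess' mortality, yet by construction the hospital's care is no worse than anywhere else — the adjustment model is simply wrong for its population.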

Spiegelhalter’s conclusion? Mortality is ‘a nice piece of statistics and it’s great as a performance indicator, something which might suggest that something needs looking at, but you can’t claim that excess mortality is due to poor quality care.’ Not that such considerations stopped many newspapers making exactly that claim.

Of course, Spiegelhalter could have added that a lot depends on which mortality statistics you measure anyway. It’s fascinating to see that the body that produced the original mortality figures for Mid Staffs, Dr Foster Intelligence, was later asked to look at a different range of performance indicators, including some more narrowly defined mortality values, and ranked Mid Staffordshire as the ninth best-performing hospital in the country – less than a year after the original scandal broke.

Tim Harford also interviewed Richard Lilford, who is Professor of Clinical Epidemiology at the University of Birmingham. Lilford suggested a different approach to assessing hospitals: ‘I’ve always felt that we should go for more process-based measurements. What we should look for is whether hospitals are giving the correct treatment.’ Professor Lilford felt this approach had two advantages. The first is that if differences in quality of care can be traced to the processes used, it’s difficult to write them off as a result of statistical bias. Most important of all, though, if we really want to improve the care provided, ‘we need to improve the hospitals that are in the middle of the range not just those that are at the extreme of the range.’ In fact, he finds that there is more to gain from improving the middle-range of hospitals than from improving the extremes.

In any case, I don’t think I’ve ever come across a good or bad hospital. Some hospitals are strong in certain specialties and weak in others, or have stronger and weaker clinicians, or even clinicians who are good at certain times or at certain things and bad at others. Lilford makes much the same point: ‘the fact of the matter is that hospitals don’t tend to fail uniformly, they’re seldom bad at everything or good at everything. If you go for process you can be specific. You can improve process wherever you find it to be sub-optimal.’

That’s the key. When we understand processes, we can see where problems are arising. There clearly were problems at Mid Staffs. What was needed was careful analysis of what was being done wrong so that it could be fixed, so that processes could be improved. This is the reason for my enthusiasm for analysing healthcare in terms of pathways, spanning whole processes, rather than isolated events.

It’s really good news that the CQC feels that the work at Mid Staffs has produced results.

How much better things might have been if this work of improvement hadn’t had to start in the atmosphere of scandal and panic set off by the wild use of mortality figures.

Last word to Professor Lilford.

‘Using mortality as an indication of overall hospital performance is what we would call, in clinical medicine, a very poor diagnostic test. What we’re really interested in when we measure mortality isn’t mortality, it’s not the overall mortality, for this reason: we all will die some day and most of us will do so in hospital. So what we’re really interested in is preventable or avoidable mortality and, because avoidable mortality is a very small proportion of overall mortality, it’s quixotic to look for the preventable mortality in the overall mortality.’
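Lilford's 'poor diagnostic test' point comes down to signal and noise, and a back-of-envelope calculation makes it concrete (the figures are assumptions chosen only to illustrate the arithmetic). Suppose a hospital records about 1,000 deaths a year, of which some 5% are avoidable. Total deaths fluctuate from year to year roughly like Poisson noise, so even doubling the avoidable deaths barely lifts the total above one year's ordinary chance variation:

```python
import math

# Illustrative, assumed figures -- not data about any real hospital.
total_deaths = 1000        # deaths per year at this hypothetical hospital
avoidable_fraction = 0.05  # assumed share of deaths that are avoidable
avoidable = total_deaths * avoidable_fraction

# Year-to-year chance variation in total deaths, Poisson-like: sd ~ sqrt(mean).
noise_sd = math.sqrt(total_deaths)

# Even DOUBLING avoidable deaths adds a signal of only 'avoidable' extra
# deaths, which is not much larger than a single year's random noise.
signal = avoidable
print(f"Signal: {signal:.0f} extra deaths; chance noise sd: {noise_sd:.0f}")
print(f"Signal-to-noise: {signal / noise_sd:.1f} standard deviations")
```

A signal of around one and a half standard deviations is the sort of blip that routinely happens by chance — which is why fishing for preventable deaths inside overall mortality is, as Lilford puts it, quixotic.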

Time we stopped tilting at windmills and took hospital performance a little more seriously.
