Sunday, 25 December 2011

Frailty, thy name is more and more of us


Daniel Dreuil, a geriatrician, and Dominique Boury, a Medical Ethics specialist, both from Lille in Northern France, gave a paper to the Fourth International Congress on Ethics in Strasbourg in March 2011. They started by talking of the case of Mrs B., an 87-year-old recently admitted to an Accident and Emergency department:

She had spent the night on the bathroom floor following a fall. She had suffered three other falls in the previous six months, two of which had led to hospital admissions. Her poor eyesight and arthritis meant that she was at risk of falling again. On arrival at A&E, Mrs B was suffering from confusion: she no longer knew where she was, was unaware of the date or time, dozed during the day and tossed and turned at night, was suffering from anxiety and cried out when she was not depressed, complaining that money was being stolen from her and that hospital staff were trying to harm her. Though she had been widowed two years earlier, she claimed that her husband was going to fetch her and take her away. Her clinical, lab and radiological examination suggested early stage dehydration.

She was referred to Care of the Elderly where the dehydration was treated; the confusion increased and lasted a week, before receding rapidly. She had fortunately avoided a fracture on this occasion, but had to be referred for rehabilitation since the fall had affected her ability to walk: she was displaying symptoms of ‘post-fall syndrome’, specifically retropulsion when walking – she would lean backwards – which threw her off balance and made her very apprehensive as soon as she stood up. It would take her several weeks of rehabilitation to become a little more sure of herself.

During her six-week stay, a memory assessment revealed incipient Alzheimer’s disease. On her return home, a new treatment regime was put in place for Mrs B., an intensified programme of domiciliary care and home-based rehabilitation to master walking again. She is being monitored by a home care network coordinated by her GP and is due to see a neurologist. In retrospect, Mrs B talks of her fall and her admission to hospital as a traumatic event, as a ‘collapse’.

Dreuil and Boury gave this case study as a striking example of a condition known as frailty. It was an eye-opener to me, as I hadn't previously come across it, though it has been known about for decades and has been attracting increasing attention in recent years. Intermediate between good health and incapacity, it is a state in which a person is coping reasonably well with life but can be plunged through a relatively insignificant event into a state, to use Mrs B.’s own word, of collapse, characterised by multiple simultaneous pathologies: Mrs B had multiple physical conditions, some related to her fall, some to other diseases such as arthritis or the incipient Alzheimer’s, but was also suffering from mental difficulties, specifically confusion and depression.

The event that had precipitated her difficulties was a fall, from her own height. In a completely healthy individual, that is unlikely to have any serious consequences – bruising or simply a little pain, at worst a sprain – perhaps the most serious consequence would be the hurt pride caused by the laughter and mockery of our so-called loved ones. But in a frail individual, the effect can be devastating.

From being able to cope, Mrs B. was plunged into a condition where she could no longer manage her life at home. As well as healthcare, she needed social services far more intensively: domiciliary visits for now, but with the prospect of residential care clearly on the horizon and increasingly imminent.

This is a French example, but precisely the same type of case is common, and indeed increasingly common, in Britain and other nations. And frailty is certainly a condition that is being met throughout the wealthier nations more and more frequently – and for the very best of reasons: although it can affect people of any age, it is much more likely to afflict the old, as in the case of Mrs B., and more and more of us are living to ever greater ages. That great success of nutrition, of social care and above all of healthcare is creating new healthcare challenges – and frailty is one of the most significant.

Now let us look back at my previous post in this series. It compared two women of 61 and 62, both of whom had suffered strokes, but one of whom had been discharged from hospital very quickly. I focused on the other, and by looking at her earlier record of treatment in both healthcare and social care, saw that she was suffering from multiple conditions that had caused her to be treated repeatedly in hospital and to require significant levels of domiciliary and residential care.

Doesn’t that sound similar to Mrs B.’s case? Though the conditions were different and the 62-year old stroke patient was far younger, don’t we again see many of the symptoms of frailty? Ill in multiple ways, undergoing repeated treatment of many different kinds. This feels like a woman who had been in a frail condition and has now been precipitated into a state of collapse.

Understanding her case, as we saw, meant bringing information together from many different sources: admitted patient care, outpatient attendances, inpatient stays, community treatment in or out of hospital, domiciliary care provided by social services or residential care.

The concept of frailty has been a bit of an eye-opener to me. But the message I take from it is one that I’ve stressed again and again in these occasional posts: we need to monitor patients over the long term and we need to do it across care settings, so that we understand what is happening to the patient in the many different areas of care he or she encounters.

A frail person suffering a collapse will need care provided by many different specialties and professions. If the patient is to get the most out of it, and society is to deliver care in the most cost-effective way, we need to understand what they are all doing and to ensure that their efforts are coordinated as fully as possible.

Frailty: in information terms it just means that it is more urgent than ever to break down data silos.

Friday, 26 August 2011

A tale of two stroke patients

It’s been a while since I last put up a blog here. My only excuse is that I’ve been so heavily involved in doing healthcare information that I’ve not had enough time to talk about it.

In particular I’ve been working on what remains as much as ever my hobby horse, pathways. So I thought it might be interesting to give an example of one. Or rather two. 

Two women, one aged 61 and the other 62, both had emergency hospital admissions for strokes. The first woman’s hospital stay only lasted a night, which meant it incurred just a short stay emergency charge of about £1400, but the second stayed four nights and cost £4400.


So the obvious issue is – why was there such a difference between them?

The first place to look is among the secondary diagnoses recorded for both women.

The short-stay case, primary and secondary diagnoses

Code    Diagnosis
I639    Cerebral infarction, unspecified
I251    Atherosclerotic heart disease
I248    Other forms of acute ischaemic heart disease
G409    Epilepsy, unspecified
Z867    Personal history of diseases of the circulatory system

Diagnoses for the four-day case

Code    Diagnosis
I639    Cerebral infarction, unspecified
I678    Other specified cerebrovascular diseases
F329    Depressive episode, unspecified
Z870    Personal history of diseases of the respiratory system

To a non-clinician like me at least, nothing springs out from this to explain the differences between the two cases. And that’s the problem with focusing exclusively on a single event in this way, in this case on the hospital spell: it gives much too limited a view of the patients’ real experience.

The picture changes fundamentally if we take a longer view. We don’t have information about the GP care of these two women, but we do know about all their treatment in acute hospitals, in community hospitals, in community health services, even in social care. So let’s take a look at what happened to them both in the period leading up to their strokes.

For the patient who was in for a day after the stroke, the only care we know about over the previous eighteen months was two outpatient attendances in Cardiology. It seems that she must have shown symptoms of a developing heart problem, but nothing serious enough to justify further hospital treatment. Ten months after the second outpatient clinic, she attended A&E following her stroke and was admitted for emergency treatment.

With the other patient, on the other hand, the picture could hardly be more different. Below is the pathway of just six months before her stroke (drawn to the scale of the lengths of each event):


The poor woman has been through a real catalogue of misfortunes:
  1. She was admitted for an acute myocardial infarction five months before the stroke
  2. A month later she was in for a pulmonary embolism
  3. She had a great deal of care in the community, including physio, occupational health as well as district nursing
  4. She was taken into residential care
  5. Despite the care she was receiving, she had four more emergency admissions for respiratory or suspected cardiac symptoms over a period of about a month, beginning some three months before the stroke.
  6. She then had her stroke
All we need is to move away from our focus on a single acute event and look instead at the whole pathway of her care, to understand that we are talking about two profoundly different cases. This woman is simply far more ill, in a state similar to what is referred to as ‘frailty’ in older patients: any problem, even a small one, can lead to a string of others, some far more serious.

So there’s absolutely nothing surprising about the fact that she needed a longer stay in hospital after the stroke. In fact, it’s now clear that while the stroke was a major event in the history of the other woman, for this one it was just the latest in a series of severe problems. If we wanted to take a look at ways of making her care more effective, or more cost-effective, it might not even be with the stroke event that we’d start (after all, she was in hospital for 25 days after the myocardial infarction).

All it takes to get this much richer and, I’m sure you’ll agree, much more valuable view of the patient’s healthcare is to take a pathway view. And all that needs is to get hold of the data and string it together...
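
As a rough illustration of what ‘getting hold of the data and stringing it together’ can mean in practice, here is a minimal sketch in Python. The extracts, column names and events are all invented for the example – they are not the fields of any real dataset – and the only point being made is that, once the different care settings share a patient identifier and an event date, building a pathway is little more than stacking and sorting.

```python
import pandas as pd

# Hypothetical extracts from different care settings, each with a shared
# patient identifier, an event date and a short description of the event.
acute = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "event_date": ["2010-07-02", "2010-12-14", "2010-12-10"],
    "setting": "acute admission",
    "detail": ["myocardial infarction", "stroke", "stroke"],
})
community = pd.DataFrame({
    "patient_id": [1, 1],
    "event_date": ["2010-08-03", "2010-09-15"],
    "setting": "community care",
    "detail": ["district nursing", "physiotherapy"],
})
social = pd.DataFrame({
    "patient_id": [1],
    "event_date": ["2010-10-01"],
    "setting": "social care",
    "detail": ["admitted to residential care"],
})

# 'Stringing it together' is little more than stacking the extracts and
# sorting by patient and date to produce one pathway per patient.
events = pd.concat([acute, community, social], ignore_index=True)
events["event_date"] = pd.to_datetime(events["event_date"])
pathways = events.sort_values(["patient_id", "event_date"])

for patient, pathway in pathways.groupby("patient_id"):
    print(f"Pathway for patient {patient}:")
    for _, row in pathway.iterrows():
        print(f"  {row.event_date.date()}  {row.setting}: {row.detail}")
```

The hard part in real life is, of course, getting the extracts and agreeing the identifiers in the first place; once that is done, the pathway view follows quite naturally.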

Sunday, 20 March 2011

Counting the deaths that count

According to the American columnist H L Mencken, ‘there is always a well-known solution to every human problem – neat, plausible, and wrong.’

For example, in assessing efficiency of healthcare, nothing is simpler or more plausible than to measure length of stay. So we’ve had countless studies comparing hospitals on the basis of ‘average’ (i.e. mean) length of stay. A particular hospital may have a mean value of, say, 4.8 against 4.5 for a peer group. That difference becomes the basis for the conclusion that if there are 80,000 inpatient stays, the hospital could save 24,000 days and close some eye-watering number of beds. All this is advanced without a thought as to whether a mean value is even appropriate for a measure like length of stay, which is usually distributed with a very long tail (small numbers of patients with massively long stays, usually due to the complexity of their condition), or whether a difference of 0.3 days is even significant.
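
To make the statistical point concrete, here is a small, self-contained simulation – not real hospital data – showing how a long-tailed length-of-stay distribution lets a small group of very long stays pull the mean well away from the experience of a typical patient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated lengths of stay for 80,000 inpatient stays: most patients stay
# a few days, but a small complex group stays for weeks or months.
typical = rng.exponential(scale=3.5, size=78_000)
complex_cases = rng.exponential(scale=40.0, size=2_000)
los = np.concatenate([typical, complex_cases])

print(f"mean length of stay:   {los.mean():.1f} days")
print(f"median length of stay: {np.median(los):.1f} days")
print(f"share of bed days used by the longest 2.5% of stays: "
      f"{np.sort(los)[-2000:].sum() / los.sum():.0%}")
```

On a distribution like this, a gap of 0.3 days between two mean values tells us very little on its own: it could come almost entirely from a handful of unusually complex patients.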

In case anyone thinks this is a wild exaggeration, we’ve seen a hospital rebuilding programme that took this kind of analysis as the basis for calculating its required number of beds, and paid the price when it discovered that the new building had far too few. 

In addition, we need to ask whether this kind of length-of-stay analysis even compares like with like. We’ve seen comparisons with peer groups which confidently predict efficiency savings, only to find on closer examination that the hospital treats a sub-group of complex patients that the peer group doesn't. Careless analysis can lead to bad and costly conclusions.

If length of stay taken in isolation is the simple, plausible and wrong measure of efficiency, the equivalent in the field of care quality is mortality. Now, there’s no denying that the patient’s death is not a desirable outcome. Keeping mortality down is an obvious step in keeping quality up. There are, however, two problems with the measure.

The first is that there are huge areas of hospital care in which mortality is simply too low to be useful as a blunt comparative measure. Mortality in obstetrics has now fallen to such a level, for example, that it would be perfectly possible to find just two deaths in an entire year in one hospital, and one in another. To conclude that the first delivers care that is 100% poorer than the second can only really be described as rash. Or, as Mencken would no doubt have told us, plain wrong.
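
A toy simulation – illustrative only, with made-up numbers – shows just how unstable such tiny counts are: give two hospitals exactly the same underlying risk and, in a large share of simulated years, one of them will still record at least twice as many deaths as the other.

```python
import numpy as np

rng = np.random.default_rng(1)
years = 100_000

# Suppose both hospitals genuinely have the same expected number of deaths
# per year, say 1.5, and draw one year's counts for each, many times over.
hospital_a = rng.poisson(1.5, years)
hospital_b = rng.poisson(1.5, years)

# How often, purely by chance, does one hospital record at least twice as
# many deaths as the other (counting only years where both saw at least one)?
both_nonzero = (hospital_a > 0) & (hospital_b > 0)
ratio = np.maximum(hospital_a, hospital_b) / np.maximum(
    np.minimum(hospital_a, hospital_b), 1)
doubled = np.mean((ratio >= 2) & both_nonzero)
print(f"'one hospital looks twice as bad' in {doubled:.0%} of simulated years")
```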

That is not to say that these individual deaths shouldn’t be monitored and investigated: rare events such as a maternal mortality or, say, death following a straightforward elective procedure should be thoroughly investigated. It's simply that they cannot in isolation form the basis of an overall assessment of one hospital's care quality compared to another.

Again, don’t think that this is a wild exaggeration – we know of reports suggesting poor performance by a clinician, based on comparisons as meaningless as these. And we’ve argued before that the use of crude mortality figures in analysing Mid Staffs hospital distorted the debate. We don't of course mean that there were no quality problems at Mid Staffs: there were and it was appropriate to address them.

Interestingly, these problems were highlighted by patients and relatives some time before the various organisations involved began to raise any issues. Unfortunately, once information analysis began to appear, it focused on mortality data and drew conclusions that the figures couldn't properly support. Many of the problems at Mid Staffs concerned wider quality issues which patients identified, but which weren't measured or, when they were highlighted, did not appear to be investigated.

The temptation to make mortality a focus is understandable. It's a measure that's easy to obtain because hospitals routinely record their deaths. So it's natural to want to tot them up and convince ourselves that we then have a valid measure of comparison.

Well, do we? Here we come to the second objection to mortality as an indicator. Let's start by taking another look at length of stay. If a hospital keeps patient stays short, might that not reflect a lot of early discharges, including perhaps a number of patients who go home and die there, with the result that they’re not included in the hospital’s death figures?

And what about transfers? If one hospital is transferring a high proportion of particularly ill patients to a tertiary referral centre, won’t its own mortality figures be artificially reduced while the receiving institution’s are inflated?

That’s why if you’re going to use mortality as a measure of quality, you need firstly to ensure that you’re applying it to specialties, conditions or procedures where it makes sense, and secondly that you’re measuring not just in-hospital mortality but also mortality after discharge, choosing a period beyond discharge that is appropriate to the patient's condition.

Now HES data has been analysed with Office of National Statistics death records linked to it, on an annual basis, for some years now – since about 2002. This means that since then it has been possible to take a look at mortality following hospital treatment in a much more comprehensive and useful way. What’s surprising is how few NHS and commercial providers have taken advantage of this information.
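
To give a feel for what the linked data makes possible, here is a minimal sketch of computing mortality within 30 days of discharge for one condition. The field names and records are invented for illustration; they are not the real HES or ONS column names.

```python
import pandas as pd

# Hypothetical admissions extract: one row per finished spell, with the
# primary diagnosis and the discharge date. Field names are invented.
admissions = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "primary_diagnosis": ["I639", "I639", "I251"],
    "discharge_date": ["2010-03-01", "2010-05-20", "2010-06-11"],
})

# Hypothetical linked deaths extract: one row per registered death.
deaths = pd.DataFrame({
    "patient_id": [101],
    "date_of_death": ["2010-03-18"],
})

linked = admissions.merge(deaths, on="patient_id", how="left")
for col in ("discharge_date", "date_of_death"):
    linked[col] = pd.to_datetime(linked[col])

# Mortality within 30 days of discharge, not just deaths in hospital.
strokes = linked[linked.primary_diagnosis == "I639"]
days = (strokes.date_of_death - strokes.discharge_date).dt.days
died_within_30 = (days >= 0) & (days <= 30)
print(f"30-day mortality after discharge: {died_within_30.mean():.0%}")
```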

It’s not as though there haven’t been innovative thinkers who’ve used this kind of data to produce interesting conclusions. For example, we have the National Clinical and Health Outcomes Base (NCHOD) studies on 30-day mortality following emergency admissions for stroke. This has all the characteristics you’d want: an area of care – emergency strokes – for which mortality is a useful indicator, and the right measure, taking in much more than deaths in hospital.

As it happens, this analysis itself needs to be taken further. Mortality, like length of stay, is only one measure and can still mislead when used in isolation. It really needs to be supplemented by looking at indicators concerning the quality of the care itself. Useful measures have been proposed and are being used by the Royal College of Physicians, including the type of facility that treats the patients, the provision of thrombolysis and the effective monitoring of patients in the first few days of admission, all factors which improve the outcome of care. This is a subject to which we might return in a future post.

Using the Royal College of Physicians’ indicators would improve the analysis. But the figures published by the Department of Health back in 2002/3 already showed a way forward towards a more rational use of mortality figures themselves. It's disappointing that eight years on so few have followed that promising lead.

Perhaps we can start to catch up before the decade is over.

Thursday, 7 October 2010

Sometimes we get it right – but we don’t make it easy

At the cost of sounding like a health information geek, I have to say it’s been fascinating to get to know the Sixth National Adult Cardiac Surgical Database Report 2008.

What makes it so interesting is that it’s clearly designed to deliver to clinicians exactly the information they need to be able to compare their own performance in cardiac surgery against national benchmarks. The report, one of several produced by e-Dendrite Clinical Systems Ltd on behalf of different clinical associations and based on one of the many data registries they hold, shows indicators defined by clinicians and calculated to their specifications.

This is diametrically opposed to the approach adopted by the national programmes that have dominated health informatics in England over the last ten years, apparently about to vanish without trace and without mourners. They based themselves on datasets that at one time used to be called ‘minimum’. The word has been dropped but it still applies: these datasets represent the least amount of data that a hospital can sensibly be expected to collect without putting itself to any particular trouble. Essentially, this means an extract from a hospital PAS alone, with none of the work on linkage between different data sources that I’ve discussed before.

The indicators that can be produced from such minimal information are necessarily limited. Usually we can get little more than length of stay, readmissions and in-hospital mortality. We’ve already seen how misleading the latter can be when I talked about the lurid headlines generated over Mid Staffordshire Trust.

The contrast with the Cardiac Surgery Database could hardly be more striking.

Clinicians have defined what data they need, and have made sure that they see just that. If it’s not contained in an existing hospital system, they collect it specifically. The base data collection form shown in the report covers six pages. There are several other multi-page forms for specific procedures.

An automatic feed from the Patient Administration System can provide some of the patient demographic data, but apparently in most contributing hospitals the feed is minimal. All the other data has to be entered by hand. It must be massively labour-intensive, but clinicians ensure it’s carried out because they know the results are going to be useful to them.

An example of the kind of analysis they get is provided by the graph below, showing survival rates after combined aortic and mitral valve surgery, up to five years (or, more precisely, 1825 days) following the operation. What’s most striking about this indicator is that it requires just the kind of data linkage that we ought to be carrying out routinely, and in this case with records from outside the hospital: patient details are being linked to mortality figures from the Office of National Statistics, meaning that we’re looking at deaths long after discharge and not just the highly limited values for in-hospital deaths that were used in the press coverage about Mid Staffordshire.
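
For readers who like to see the mechanics, here is a minimal hand-rolled Kaplan-Meier sketch of the kind of survival calculation involved – not the registry's actual method. It assumes the linkage has already produced, for each operation, the number of days to death or to the end of follow-up; the figures below are invented.

```python
import numpy as np

def kaplan_meier(days, died):
    """Minimal Kaplan-Meier estimator: returns (time, survival) steps."""
    days = np.asarray(days, dtype=float)
    died = np.asarray(died, dtype=bool)
    survival, steps = 1.0, []
    for t in np.unique(days):
        at_risk = np.sum(days >= t)           # still followed up at day t
        deaths = np.sum((days == t) & died)   # deaths recorded on day t
        if deaths:
            survival *= (at_risk - deaths) / at_risk
            steps.append((t, survival))
    return steps

# Invented follow-up data: days from operation to death (died=True) or to
# the end of the linked mortality extract (died=False, i.e. censored).
days = [40, 200, 750, 900, 1200, 1500, 1825, 1825, 1825, 1825]
died = [True, True, False, True, False, False, False, False, False, False]

for t, s in kaplan_meier(days, died):
    print(f"day {t:>6.0f}: survival {s:.2f}")
```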



Less obvious but at least as significant is the fact that the figures have been adjusted for risk – and not using some general rule for all patients, but on a series of risk factors relevant to cardiac surgery: smoking, history of diabetes, history of hypertension, history of renal disease, to mention just a few.
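
Risk adjustment of this kind usually means modelling each patient's expected risk from the recorded factors and then comparing observed outcomes with expected ones. The sketch below uses a logistic regression purely to illustrate the principle; the variables, coefficients and data are all invented, and the registry's actual risk model will certainly differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000

# Invented risk factors loosely echoing those in the report: smoking,
# diabetes, hypertension, renal disease, plus age.
X = np.column_stack([
    rng.integers(0, 2, n),          # smoker
    rng.integers(0, 2, n),          # history of diabetes
    rng.integers(0, 2, n),          # history of hypertension
    rng.integers(0, 2, n),          # history of renal disease
    rng.normal(68, 10, n),          # age
])
# Invented outcome: death within follow-up, more likely with more risk factors.
logit = (-6 + 0.5 * X[:, 0] + 0.7 * X[:, 1] + 0.4 * X[:, 2]
         + 0.9 * X[:, 3] + 0.04 * X[:, 4])
died = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, died)
expected = model.predict_proba(X)[:, 1]

# A risk-adjusted comparison for one unit: observed deaths divided by the
# deaths expected given that unit's case mix.
unit = slice(0, 500)                # pretend the first 500 cases are one unit
ratio = died[unit].sum() / expected[unit].sum()
print(f"observed/expected mortality ratio for the unit: {ratio:.2f}")
```

The observed/expected ratio is the sort of figure that lets a unit with a complex case mix be compared fairly with one treating mostly low-risk patients.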

Looking at the list of data collected, it’s clear that more automatic support could be provided. For instance, it should be possible to provide information about previous cardiac interventions or investigations, at least for work carried out in the hospital. Obviously, this would depend on the hospital collecting the data correctly, but a failure in data collection is surely something to be fixed rather than an excuse for not providing the information needed.

It is unlikely that the hospital could provide the information if the intervention took place at another Trust, so cardiac surgery staff would still have to ask the question and might have to input some of the data themselves. Automatically providing whatever data is available would, however, still represent a significant saving of effort.

The converse would also be invaluable: if cardiac surgery staff are adding further data, surely it should be uploaded into the central hospital database or data warehouse? That would make it available for other local reporting. It seems wasteful to collect the data for one purpose and not make it available for others.

Of course, all this depends on linkage between data records. It’s becoming a recurring refrain in these posts: linkage is something that should be a key task for all Trust information departments. What we have here is another powerful reason why it needs to be done systematically.

And while we’re thinking about data linkages, let’s keep reminding ourselves that this Report uses links to ONS mortality data. Doing that for hospital records generally would provide far more useful mortality indicators. So what’s stopping us doing it?


Wednesday, 29 September 2010

What's the White Paper going to mean for Healthcare Information?

The need for better information is a theme at the core of the reforms announced in the health White Paper.

The preamble to the document states some of the principles on which the government is basing its approach. For instance, ‘patients will have access to the information they want, to make choices about their care. They will have increased control over their own care records.’ Empowering patients, a core value of the White Paper, will require far greater access to information than in the past.

Another view of the same principle appears in the declaration that ‘a culture of open information, active responsibility and challenge will ensure that patient safety is put above all else, and that failings such as those in Mid-Staffordshire cannot go undetected.’

Poor, much-maligned Mid-Staffs seems destined to be the icon for poor hospital performance for a while longer. This is the case even though it’s not obvious that it suffered from much more than resource starvation as a result of a headlong rush for Foundation Trust status. Indeed, it’s not even clear that it performed substantially less well than other Trusts: even Dr Foster, whose figures precipitated the original scandal, classified it ninth in the country within the year, using broadly similar indicators.

Still, the point here is that we’re again talking about information openness with patient considerations at the centre. For our purposes, all we need to take out of the Mid-Staffs experience is the lesson that getting information right is at least as important as making it accessible.

We also read that ‘the NHS will be held to account against clinically credible and evidence-based outcome measures, not process targets.’ The focus on outcomes will be a major theme for anyone involved in healthcare information in the coming years.

All this needs to be set against the background of the increasing shift towards GP-led commissioning and the unravelling of the National Programme for IT.

It may be a little premature to assume that the Commissioning Consortia will get off the ground any time soon. It feels to me as though a lot more money will have to be found to cover management costs. The Tory Andrew Lansley is unlikely to like the idea that he’s following in the footsteps of Nye Bevan, Socialist founder of the NHS, but he may find himself having to deal with doctors as Bevan did, by ‘stuffing their mouths with gold.’

Two aspects are going to dominate a reformed commissioning process. The first is that GPs are going to be interested in buying packages of care – treat this diabetic, remove that cataract, manage this depression – rather than just elements of a package – carry out this test, administer that medication, provide this therapy. So it’s no good saying ‘we provided another outpatient attendance’ if the protocol for the condition doesn’t allow for another attendance: the consortium will challenge the need to pay for the additional care. An approach based on care packages means analysis of pathways, not events.

The second is that outcomes are going to be crucial. So pathways have to be taken to their conclusion, which means they can’t be limited to what happened before discharge from hospital.

It seems to me that this is going to mean moving beyond current measures – readmissions or in-hospital mortality – to look at outcome indicators that tell us far more: patient reported outcome measures, through questionnaires about health status after treatment, and mortality beyond discharge, through linking patient records with Office of National Statistics data.
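
As a purely illustrative sketch of the first of these, here is how patient-reported outcome scores might be summarised once pre- and post-treatment questionnaires have been linked to the treatment record. The scores, scale and field names are invented; real PROMs collections use their own instruments and casemix adjustment.

```python
import pandas as pd

# Invented linked PROMs records: one health-status score before treatment
# and one some months afterwards, on a 0-1 scale.
proms = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "provider": ["A", "A", "A", "B", "B"],
    "score_before": [0.52, 0.61, 0.48, 0.55, 0.40],
    "score_after": [0.78, 0.70, 0.66, 0.58, 0.45],
})

# The outcome of interest is the gain in reported health status, which can
# then be compared between providers (with casemix adjustment in real life).
proms["gain"] = proms.score_after - proms.score_before
print(proms.groupby("provider")["gain"].mean())
```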

Only with these measures will it be possible to see if care is delivering real benefit and, therefore, real value for money.

What about the demise of the National Programme? It always struck me as odd that a drive for efficiency and cost control should have set out to create local monopolies for the supply of software. How could that maintain the kind of choice that we wanted to offer patients, and therefore exercise the downward pressure on price and upward pressure on quality that competition is supposed to generate?

There also seemed to be a fundamental misconception in the idea that it was crucial to get all your software from a single source. One aim, of course, was to ensure that all systems were fully integrated. But if we’re building pathways, we do it with data from different sources linked in some intelligent way. That can be done by imposing on a range of suppliers the obligation to produce the necessary data in the appropriate form. That doesn’t require monopoly but accreditation.
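
In practice, ‘producing the necessary data in the appropriate form’ comes down to agreeing a data standard and checking each supplier’s extract against it. Below is a deliberately trivial sketch of such a check; the required fields and the date format are invented examples, not any official standard.

```python
import re

# A toy accreditation-style check: does a supplier's extract contain the
# agreed fields, with dates in the agreed format? Field names are examples only.
REQUIRED_FIELDS = {"patient_id", "event_date", "event_type", "provider_code"}
DATE_FORMAT = re.compile(r"^\d{4}-\d{2}-\d{2}$")   # agreed format: YYYY-MM-DD

def check_record(record):
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    date = record.get("event_date", "")
    if date and not DATE_FORMAT.match(date):
        problems.append(f"bad date format: {date!r}")
    return problems

print(check_record({"patient_id": "X1", "event_date": "22061963",
                    "event_type": "OPA"}))
```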

The end of the National Programme, at least as far as local software supply is concerned, has to be welcomed. But, taken with the new pressures for different kinds of data, it will lead to significant pressure on healthcare information professionals, above all to learn to link data across sources and even beyond the limits of a hospital.

A challenge. But, I would say, an exciting one.

Thursday, 23 September 2010

Showing the pathway forward

When it comes to building pathways out of healthcare event data, it’s crucial not to be put off by the apparent scale of the task. In fact, it's important to realise that a great deal can be done even with relatively limited data. Equally, we have to bear in mind that a key to achieving success is making sure that the results are well presented, so that users can understand them and get real benefit from them. 

Since that means good reporting, and therefore a breach with the practice I described in my last piece, this post is going to be rich in illustrations. They were most kindly supplied by Ardentia Ltd (and in the interests of transparency, let me say that I worked at the company myself for four years). The examples I've included are details of screenshots from Ardentia's Pathway Analytics application, based on sample data from a real hospital. At the time, the hospital wasn't yet in a position to link in departmental data, for areas such as laboratory, radiology and pharmacy, so the examples are based on Patient Administration System (PAS) data only. My point is that even working with so apparently little can provide some strikingly useful results.  

The screenshots are concerned with Caesarean sections for women aged 19 or over. The hospital has defined a protocol specifying that cases should be managed through an ante-partum examination during an outpatient visit, followed by a single inpatient stay. Drawing on Map of Medicine guidelines, it suggests that a Caesarean should only be carried out for patients who have one of the following conditions: 
  • Gestational diabetes
  • Complications from high blood pressure
  • An exceptionally large baby
  • Baby in breech presentation
  • Placenta praevia
The protocol can be shown diagrammatically as two linked boxes with details of the associated conditions or procedures. For example, the second box in the diagram shows the single live birth associated with the section, and then five conditions one of which should be present to justify the procedure. 
Detail of an Ardentia Pathway Analytics screen with a protocol for Caesarean Sections
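
Purely to make the idea concrete, here is one way a protocol like this could be represented and compared against an actual sequence of events. This is my own sketch, not how Pathway Analytics is implemented; OPA stands for an outpatient attendance, APC for an admitted patient care (inpatient) event, and the ICD-10 codes are illustrative choices for the five justifying conditions.

```python
# A toy representation of the Caesarean protocol: an outpatient attendance
# (OPA) for the ante-partum examination, followed by a single inpatient
# stay (APC) whose coding should include one of the justifying conditions.
PROTOCOL_SHAPE = ["OPA", "APC"]
JUSTIFYING_CODES = {
    "O244",  # gestational diabetes            (illustrative ICD-10 codes)
    "O14",   # hypertensive complications
    "O366",  # exceptionally large baby
    "O321",  # breech presentation
    "O440",  # placenta praevia
}

def check_pathway(events):
    """events: list of (event_type, set_of_diagnosis_codes) in date order."""
    shape = [event_type for event_type, _ in events]
    matches_shape = shape == PROTOCOL_SHAPE
    coded = set().union(*(codes for _, codes in events)) if events else set()
    justified = any(code.startswith(tuple(JUSTIFYING_CODES)) for code in coded)
    return matches_shape, justified

# A pathway that matches the protocol and records placenta praevia:
print(check_pathway([("OPA", set()), ("APC", {"O82", "O440"})]))
# A pathway with a single admission and no justifying diagnosis recorded:
print(check_pathway([("APC", {"O82"})]))
```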
Next we compare real pathways to the protocol. At top level, we look only at the PAS events (OPA is an outpatient attendance and APC is an inpatient stay):
Part of the screen comparing actual pathways to the protocol (the full screen contains several more lines).
Note the low-lighted line, second from the top, that corresponds to the protocol shape.
The first striking feature of the comparison is that only a minority of the cases (14%) correspond to the protocol at all. 65% of the cases have a single admitted patient care event without an outpatient attendance. This should lead to a discussion of whether the protocol is appropriate and whether this kind of case could legitimately be handled with a single inpatient stay and no prior outpatient attendance (perhaps as an alternative protocol structure).
Another feature is the number of cases involving a second or subsequent inpatient stay. Now there’s a health warning to be issued here: these screens are from a prototype product and the analysis is based on episodes, not spells, so we can’t be certain the second admitted patient care event is an actual second stay – it might be a second episode in the same stay. If, however, an enhanced version of the product showed there really were subsequent stays, we’d have to ask whether what we are seeing here are readmissions. In which case, is something failing during the first stay?
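
For anyone unfamiliar with the distinction: a spell is a single continuous stay in hospital, which may be made up of several consultant episodes, so episode-level data has to be grouped before we can talk about readmissions. A rough sketch of that grouping is below; the field names and dates are invented, and real spell construction would normally rely on the provider spell identifier where it is reliable.

```python
import pandas as pd

# Invented episode-level records for one patient: episode start and end dates.
episodes = pd.DataFrame({
    "patient_id": [9, 9, 9],
    "episode_start": ["2010-04-01", "2010-04-04", "2010-05-10"],
    "episode_end": ["2010-04-04", "2010-04-09", "2010-05-12"],
})
for col in ("episode_start", "episode_end"):
    episodes[col] = pd.to_datetime(episodes[col])
episodes = episodes.sort_values(["patient_id", "episode_start"])

# Treat an episode as part of the same spell if it starts on or before the
# day the previous episode ended; otherwise it opens a new spell.
previous_end = episodes.groupby("patient_id")["episode_end"].shift()
new_spell = previous_end.isna() | (episodes["episode_start"] > previous_end)
episodes["spell_number"] = new_spell.cumsum()

print(episodes[["episode_start", "episode_end", "spell_number"]])
```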
We can drill further into the information behind these first views. For instance, we could look more closely at the eight cases which apparently involved an outpatient attendance followed by two inpatient stays:
Three pathway shapes or types followed by cases that apparently involved an additional inpatient stay
The first two lines show instances where the delivery took place in the first inpatient stay (the box for the first stay is associated with a circle containing the value '1', corresponding to the entry in the protocol for a single live birth). In seven of these cases, the patient needed further inpatient treatment after the Caesarean. The last line shows something rather different: the patient was admitted but not delivered, and then apparently had to be brought back in for the delivery.
Note that the middle line shows that just two cases out of the total of eight are associated with a diagnosis specified by the protocol as a justification for a Caesarean: they are linked with condition 5, breech presentation. The fact that no such information is recorded for the other six suggests either that Caesarean sections have been carried out for cases not justified by the protocol, or that key data is not being recorded. Either way, further investigation seems necessary.
We can also look in more detail still at individual cases.

Clinical details for a specific event
The example shows a case that has followed the protocol: an ante-partum examination was carried out on 5 May and the patient was admitted on the same day, with the Caesarean section taking place on 8 May. The ticked box in the greyed-out diagram shows that a condition justifying the caesarean has been recorded (placenta praevia). This is confirmed by the highlighted box of detailed information (note that the consultant's code has been removed for confidentiality reasons).

The simple examples here show that pathway analysis provides a real narrative of what happens during the delivery of healthcare to a patient. On the one hand, it can answer certain questions, such as why a Caesarean was carried out at all, and on the other it can suggest further questions that need investigation: what went wrong with this case? why was the procedure carried out in the first place? was there a significant deviation from the protocol?

If we can do that much with nothing more than simple PAS data, imagine how powerful we could make this kind of analysis if we included information from other sources too...

Monday, 20 September 2010

Caution: software development under way

Software development work has admirably high professional standards. Unfortunately, developers find it a lot easier to state them than to stick to them.

If your development team is one of those that lives up to the most exacting standards, this blog post isn’t for you. If, on the other hand, an IT department or software supplier near you is falling short of the levels of professionalism you expect, I hope this overview may give you some pointers as to why.

Estimates, guesstimates

One of the extraordinary characteristics of developers is their ability to estimate the likely duration of a job. It’s astonishing how they can listen for twenty minutes to your description of what you want a piece of software to do, and then tell you exactly how long it’ll take to develop.

Afterwards you may want to get an independent and possibly more reliable estimate, say by slaughtering a chicken and consulting its entrails.

At any rate, multiply the estimate by at least three. This is because the developer will have left a number of factors out of account.
  • He – and it usually is a ‘he’ – already has three projects on the go. He’s told you he’s fiendishly busy, but that won’t stop him estimating for the new job as though he had nothing else on. Why? Because a new project is always more interesting than one that’s already under way.
  • He’ll make no allowance for interruptions, though he knows that his last three releases were so bug-ridden that he’s spending half of every day dealing with support calls.
  • He’ll make no allowance for anything going wrong. I suffered from this problem in spades. I remember very clearly the one project I worked on which came in less than 10% over schedule. It’s the only one I remember when I’m estimating. All the others, which came in with horrendous overruns, are expunged from my mind.
  • He only thinks of his own time. For instance, he hasn’t allowed for documentation. Developers are part of a superior breed that lives by mystical communication with each other. Writing things down is like preparing a pre-nuptial agreement: it destroys all the romance. As for QA, well, yes, of course there should be some, but only to confirm that the software is bug-free. A few days at the end of the project. You object that there may be some feedback, as QA finds errors that need to be fixed. Well, yes, add a few days for that too, but believe me, it really isn’t going to hold up the project. The Red Queen in Lewis Carroll’s Through the Looking Glass claimed ‘sometimes I've believed as many as six impossible things before breakfast.’ If you want to emulate her, start with ‘you can get good software without extensive QA.’
  • He hasn’t actually seen a specification yet. That one deserves a section to itself.
Specifications – who needs one?

Once you’ve told the developer what you need from the new system, he just wants to get on with it. He doesn’t want to waste time writing requirements down. He wants to get on with cutting code.

Perhaps I should have written ‘Code’, with a capital ‘C’, since it has the status of near-sacred text. For those who don’t know it, it’s made up of large blocks of completely opaque programming language. Some unorthodox developers believe in including the occasional line of natural English, so-called comment lines, with the aim of explaining to someone who might come along later just what the code was intended to do. To purists, these lines just interrupt the natural flow of the Code. The next guy will be another member of the sacred band and will work out what the Code was intended to do, just by reading it and applying his mystic intuition. Comment lines are just boring, like specifications.

It’s true that specifications can be deadly. I recently saw a 35-page text covering work that we actually carried out in less than five days. The spec contained at least one page marked ‘This page intentionally left blank.’ Whenever I see one of those I want to scribble across it ‘why wasn’t this blank page intentionally left out?’

That kind of thing gives specifications a bad name, but something tighter and more to the point can be really useful. It can at least ensure that we understand the same thing by the words we use.

Once I came across a system which assumed that ‘Non-Elective’ meant the same as ‘Emergency’. Terribly embarrassing when you have to explain to a Trust why its emergency work has shot up while its maternity work has fallen to zero along with the tertiary referrals it used to receive.

Of course, if you’re working with people who just never need anything like that made clear to them, you may well be able to get away without a proper spec. One word of warning though: it’s really difficult to see how you can test software to see if it’s behaving properly, if you haven’t previously defined what it should be doing in the first place.

It’s about the reporting, dummy.

Ever since Neanderthal times man has been expressing himself by means of graphics. So why in the twenty-first century are IT professionals finding it so difficult to understand the need to do the same?

Most users of healthcare information think its aim should be to help take better decisions about care delivery. So they might want to see a range of indicators all included in a single dashboard-type report. They may want to be able to drill up and down, say from a whole functional area within a hospital down to individual clinical teams or to groups of patients with similar diagnoses. They may want to see values for the whole year or for a single month or, indeed, for trends across several months. They may ultimately want to get down to the level of the individual patient records behind the general indicator values.

Now many IT people will simply yawn at all this. Have you ever come across the term ‘ETL’? It means extract, transform and load. The most exciting part for an IT person is ‘Extract’. This means he’s looking at someone else’s system. This is a challenge, because the probability is that the teams who built that system didn’t bother with any comment lines or documentation – they’re kindred spirits, in fact. So it’s a battle of wits. Our man is hunting among tables with names like ‘37A’ or ‘PAT4201’, trying to identify the different bits of information to build up the records he’s been asked to load. And it’s a long-term source of innocent fun: system suppliers are quite likely to change the structure of their databases without warning, so that our developer can go through the whole process again every few months.

Next comes Transform. Well, that’s a bit less absorbing, though it can still be amusing. Your tables, for instance, might hold dates of birth in the form ‘19630622’ for 22 June 1963. The system you’re reading from might hold them in the form ‘22061963’. You can fill some idle hours quite productively writing the transformation routines mapping one format to the other.
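
For what it’s worth, the date-of-birth example comes down to a one-line transformation once the two layouts are named explicitly – a minimal sketch, using the two formats quoted above:

```python
from datetime import datetime

def convert_dob(dob_ddmmyyyy: str) -> str:
    """Map a date held as DDMMYYYY (e.g. '22061963') to YYYYMMDD ('19630622')."""
    return datetime.strptime(dob_ddmmyyyy, "%d%m%Y").strftime("%Y%m%d")

print(convert_dob("22061963"))  # -> 19630622
```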

Finally, there’s Load. Well, OK. Yes, it has to be done, but it’s not half as exciting. Get the data in, make sure that it’s more or less error-free. A bit fiddly, but there you go. Once it’s finished, your work is done.

Have you noticed the omission? We’ve done E, T and L. There’s been no mention of ‘R’ – Retrieval. What matters for IT is getting the data in and storing it securely. Making it available for someone else to use? Come on, that’s child’s play once the data’s been loaded into a database. It’s someone else’s department altogether.

Sadly, that’s not a department that is always as well staffed as it might be. That’s why hospitals are awash with data but short on information. That’s why we’ve spent billions on information systems with so little to show for it. That’s why clinical departments that want to assess their work build their own separate systems, creating new silos of data.

Unfortunately, investment isn’t as easy any more, and we certainly can’t keep making it without the real promise of a return. It’s great to see developers having fun, of course, but wouldn’t it be even more fun to deliver systems that really worked and that healthcare managers really wanted to use? Systems that genuinely delivered information for management?

Sound utopian? Maybe it is.

But since it’s pretty much the minimal demand any user should be making of an information system worthy of the name, maybe it’s time we started insisting on it a bit more forcefully.