Evidence for evidence

At a recent patient safety meeting the subject of the ‘theatre cap challenge’ came up. This is an initiative that encourages all operating room (OR) staff to write their name in pen on the front of their disposable theatre cap so their identity is clear, which is especially important in an emergency. There have been incidents where not knowing who was who in a tense situation caused patient harm. Seems sensible. But it has not been adopted by all. It is a change. Its proponents do not understand how anyone could argue with it, while opponents explain that it’s not necessary, things have been fine without it, and who are you to tell us what to do anyway? A lively argument has ensued.

One of the arguments against the theatre cap challenge is: what’s the evidence that it improves patient safety? This is a question that can be applied to any change in process. In the discussion I sat in, it was suggested that this question – ‘where’s the evidence?’ – is often used to block innovation. Modern medicine is founded on evidence, and we call it EBM – evidence based medicine. Without evidence, a new treatment or process is just a hunch.

Modern doctors were brought up to have absolute respect for evidence, and to base their clinical decisions on it. Most of the time, national or local guidelines and protocols do the hard work for us, having been written by experts who have read and understood the medical literature. Often, in areas that are less well studied, guidance comprises expert opinion alone. There is not always an evidence base to look to. That is not EBM’s only weakness. Trisha Greenhalgh’s 2014 BMJ article ‘Evidence based medicine: a movement in crisis?’ brilliantly describes its weaknesses, which include generalizing ‘average’ outcomes to the non-average individual, excluding the patient’s agenda or voice in decision-making, and the vested interests (i.e. drug and medical device companies) of those who sponsor and develop trials. Because EBM cannot provide every answer, in real life we also depend on ‘XBM’ – experience based medicine. Who is to say which is more powerful, and which is more often right? A combination, surely. For now though, EBM rules.

Figure: from the Centre for Evidence Based Dentistry, showing the interaction of evidence with experience and the patient’s preferences.

So what exactly is evidence? Classically, evidence is the result of a well-conducted study – preferably a randomised controlled trial in which two or more groups of patients are treated in different ways. In a blinded trial the patients don’t know what treatment they are getting, thus negating the placebo effect. In a double-blinded trial the doctors and nurses don’t know either, so they can’t influence the results with unconscious bias. The patients’ survival, response or well-being is measured over time, and the treatment associated with the group that does best is deemed to have won. Over time, several trials looking at the same intervention or drug can be grouped together for a meta-analysis, the most powerful of studies. Sounds easy. In reality, individual trials take years to plan, develop and analyse. They are well suited to medicines, and have been the mainstay of improvements in survival for patients with leukaemia and other cancers, for instance.

Could such evidence be gathered for the theatre cap challenge? Possibly. You could have three hospitals that use it and three that don’t, and measure the number of patient safety incidents in the OR. Or you could analyse one hospital for a year, introduce the cap naming policy, then analyse it for another year. Or… you could all agree that it’s just plain common sense, we don’t need a trial, this is silly, just do it. The latter is tempting. This ‘common sense’ approach seems well suited to certain areas of improvement in medicine. You have an idea in response to a problem, you develop it, try it out locally, then seek to influence others so that they roll it out. Perhaps you persuade a Royal College or specialist society to make a policy announcement. In a few years, it’s embedded.

The trouble is, where do you draw the line in deciding which innovations require good quality evidence? A new medicine – of course; we would all agree that it should go through phases 1 to 3 of the trial process so that it can be shown to be non-toxic, basically efficacious, and then genuinely safe and beneficial over the long term. Phase 4 studies can then analyse the effects on large groups of patients after the drug has been signed off. The volunteers who suffered terrible injuries at a trial centre in Northwick Park in 2006 were part of a phase 1 trial into a new molecule, TGN1412 – its first use in humans. Following this failure, the drug never reached phase 2. Without that phase 1 trial, many more could have been harmed.

What about new surgical techniques? Should, say, an ‘improved’ drill for making holes in bones be tested against its predecessor in a proper trial? A drill is a drill, right? If the surgeons like it, why not just let them use it? An argument for absolute stringency and long-term data collection for surgical kit is the suffering associated with vaginal mesh repairs. This material went through the required tests, but began to be used in a wider range of indications (stress urinary incontinence and pelvic organ prolapse). Now, after numerous reports of injury and reduced quality of life, it is accepted that it should not be used in those two situations. As a healthcare system, we got it wrong. And it took quite some time to recognise it.

The threshold for requiring evidence can change depending on your situation. Some patients who appear likely to die from advanced malignancy are keen to try anything, even if it is experimental. Two years ago this led to a major controversy when Lord Saatchi proposed the ‘Medical Innovation Bill’, which would have allowed terminally ill people (such as his wife, Josephine Hart, who had died of primary peritoneal cancer) to access unproven therapies. The bill was defeated, in part because of the persuasive counter-argument that bypassing the rigorous trial process would undermine the evidence base for future patients. Hundreds of people might receive treatment X, but without trials we would never be the wiser as to whether it worked or not. Lord Robert Winston, opposing the Bill, recalled how his own father had died in his 40s from an ‘innovation’ in the administration of antibiotics.

My rational, evidence-based brain also opposed the Bill – apart from the undermining of the evidence base, patients might suffer. Their vulnerability, through illness or desperation, could make them targets for the purveyors of these novel therapies. Yet the other side of my brain thought: if it were me, and if there were no new drugs to try through established trials, I might well say yes to anything. One chance in a hundred of extra life is better than zero.

In this instance the requirement for good quality evidence was in opposition to human nature. This instinct was reflected in the media, where support for the Bill was strong in some quarters. Letters were written by groups of doctors. It was emotive. Across the Atlantic, Donald Trump signed off on a US version of this Bill in 2018 – the Right To Try Bill. Typically, emotion was heightened by the presence of a boy who might benefit from it. A populist Bill? A libertarian Bill? For many, a common sense Bill.

Walking through a ward, one sees many practices that are not evidence based. You can take this to extremes. Is there evidence that antibiotics help treat urine infections? Probably not, but we do it. Everybody knows they work, don’t they? Is there evidence that assiduously keeping a patient’s oxygen saturation over 93 or 94% is beneficial? No. Is it really necessary to slavishly maintain their potassium in the normal range? Who really knows? There hasn’t been a study. [Please do correct me if I’m wrong.]

How about this one: is there evidence that consultants doing ward rounds to see their patients every day is better than their trainees leading those rounds? No. But wait, what am I saying? Am I suggesting consultants shouldn’t see their patients every day? Well, it’s a question. Jeremy Hunt wholeheartedly supported the drive for a 7-day, consultant-delivered ward service, and invoked evidence to prove that more people were dying without it than would with it. He got himself into trouble, as his interpretation and use of the available evidence was felt to be flawed. Yes, it seems sensible, eminently sensible to patients and relatives who have sat in the ghost town of a hospital on a Sunday afternoon… but staffing 7 days will take doctors away from other duties during the week. Before making such a major change, surely we need good evidence. Or are those who demand evidence just naysayers and obstructionists who value their weekends too much? Are they vexatiously applying the dogma of EBM to an area that should not require evidence?

Some things are so deeply embedded in our medical education and culture that when evidence is at last gathered, we are surprised. For instance, during cardiac arrests we have always given adrenaline injections. Now, years after this treatment was introduced, there is pretty good quality evidence that it doesn’t help the brain, and might actually damage it when given during out-of-hospital arrests. The advanced cardiac life support algorithm goes so far as to say there is no evidence that it helps patients long term.

During heart attacks, I always gave patients high flow oxygen; now there is evidence that it doesn’t help. In septic shock, I gave litres of Gelofusine – a once commonly used plasma substitute. Now, it’s basically regarded as poison and can’t be found in the hospital. All those ‘sensible’ ideas, proved wrong over time.

The need for evidence is subjective. The EBM purist may inhibit or at least slow down innovation, while the confident ones who say ‘just get on and do it’ may risk future harm. The current controversy over Babylon’s health app is a good example. It seems like a good idea, but without rigorous testing it may end up convincing people who are having heart attacks that they are having a panic attack and just need to relax. On the one hand it appears potentially ‘transformative’, on the other it is formally untested. Is it a new treatment? No. Is it a new process? Most definitely yes! In the light of the mistakes that have come before, it seems sensible to test it, in case we find that a glitch results in deaths or harm. As it stands, this mysterious ‘black box’ of technology is an example, in Ben Goldacre’s words, of ‘closed science’.


Once they have it, experts can debate endlessly the merits of the evidence base; an organisation such as NICE exists to do just this. But prior to this, it seems we need to agree on what kind of innovations require evidence, and what kinds justify a ‘suck it and see’ approach.

 

 


 
