Patient safety

Checklist mentality

 

The case for checklists has been made so well – see this fantastic article by Atul Gawande – yet those responsible for embedding them struggle. They are an effort, an obstacle, an apparently petty imposition. I know it’s the right patient! I know they’re not on Warfarin! I know what equipment we need to do the procedure. It’s all so obvious. Yet, now and again, something goes wrong. Never events happen. (See the recent Healthcare Safety Investigation Branch findings.) The wrong patient is operated on, the wrong tooth is removed, or an allergy is missed and a drug that is dangerous to them is injected. Checklists reduce these events, so what’s the problem?

My experience with checklists has been interesting. I am a proponent, a kind of champion, yet often I huff and puff as, on the brink of putting an endoscope down, the wretched piece of paper is waved at me. Grrrr!

The process of completing the particular checklist that we have developed takes a minute at most. It requires standing still (a problem when you’re in a hurry), focusing the team’s attention on the responses (for what is the point of the patient telling me they are allergic to something if no one else in the room is aware?) and communicating with the patient (a problem for some, especially when in a hurry).

Perhaps it’s this need to pause and be still that frustrates doctors and surgeons. We, they, like to keep moving, to flow through the tasks, to get to the nitty gritty (the technique, the findings, the pathology, the treatment) as soon as possible. It is this habit that is so difficult to break. It is a mind-set. And it reveals something about our approach to our surgical lists. They are our lists. They have our names on them. Their character – relaxed, rushed, efficient, friendly, spiky, miserable – stems from our own behaviour and that of the clinical leads in the room. The checklist is an obstacle to our progress through the day and to a successful outcome. With this mind-set, the fact that it is the patient’s procedure can be forgotten; forgotten also the fact that around the surgeon buzzes a team of highly skilled assistants without whom the procedure could not take place. The checklist is the best, probably the only, way to ensure that for a moment everyone is focussed on that patient, and that the last opportunity to identify possible harm is heeded.

Gradually, slowly, the checklist should become natural and, depending on your psychology, you come to feel that something is missing without it. We didn’t use to clean our hands before and after every patient contact; now, if I haven’t, I feel kind of tainted, as though there is something on my skin that hasn’t been taken off. They are clean of course, but the habit has become so ingrained that my mind insists on the slap of antiseptic foam before moving on. In the same way, the checklist should become a door through which your mind insists on passing before embarking on the procedure.

I’m not sure I’m at that stage yet. If there are distractions, or if I am running very late, the checklist can be overlooked, until a colleague holds it up and pulls me back. I know that if harm does occur in relation to a surgical procedure, the absence of a checklist looks bad. Completed correctly, it serves to protect you, as the surgeon. It demonstrates that care was taken, and thought given to the patient as an individual, not as a ‘procedure’.

Most doctors and nurses are already converted. But the checklist mentality remains a change, and a challenge. As Gawande says, comparing the flair and fluidity with which surgeons like to move through their lists with the early astronauts from The Right Stuff,

‘…the prospect [of checklists] pushes against the traditional culture of medicine, with its central belief that in situations of high risk and complexity what you want is a kind of expert audacity—the right stuff, again. Checklists and standard operating procedures feel like exactly the opposite, and that’s what rankles many people.’

Expert audacity vs regimentation, again in Gawande’s words. This points to the same psychology I explored above. The audacity, the flair, the speed, all relate to the surgeon.

But it’s the patient’s procedure.

 


‘Stuff happens’: patient safety incidents and 2nd victims

Bad things happen in medicine. Sometimes, as doctors or nurses, the things we do, or the things we didn’t think of doing, cause harm. How we respond to those incidents determines the direction our careers follow. If the response is catastrophic, and the puncture in our confidence or self-esteem proves irreparable, we may drop out entirely. This article explores the idea of the ‘2nd victim’, that is, the health care worker (HCW) involved in events that result in harm to patients. It is based on a Grand Round lecture I delivered at Frimley Park Hospital.

***


‘Stuff happens,’ as Donald Rumsfeld sanguinely commented after being questioned about the looting that took place in the fall of Baghdad. ‘Stuff happens, and it’s untidy.’ He certainly felt no personal responsibility for the adverse consequences of a military decision that he had been involved in. The looting was a kind of ‘complication’.

That is not the typical response among healthcare workers. Dan Walter, in his book Collateral Damage, describes a terrible complication suffered by his wife, and the panic that he perceived in the young doctor who was involved. A novel cardiac ablation catheter was incorrectly deployed by a trainee, resulting in its spiral end becoming entwined in the chordae tendineae of the mitral valve. When the catheter was eventually removed, bits of heart valve tissue could be seen hanging off it. She developed cardiogenic shock, and had to have an emergency mitral valve repair. Dan Walter approached his wife’s cubicle:

A vivid picture there of the 2nd victim – although many would say that the author – the spouse – is the true 2nd victim here, and the doctor the 3rd. For now though, I’ll stick with ‘2nd’.

The impact on HCWs has been studied. Scott et al described common symptoms, both early and late, in the table below: first the physical, then the psychological. In some, there is avoidance of particular patients, and chronic uncertainty.

The authors then identified several phases in the natural history of psychological response:

At the end, the HCW ‘thrives’: that is, they learn, improve, and possibly use their experiences to help others in similar situations. Others carry on, still feeling the harm and perhaps avoiding certain situations, while a third group drops out. The injury to their confidence is too deep.

As a student and trainee I saw how those around me reacted in these situations. I vividly remember the bloodless expression of the house officer whose patient became comatose after receiving a duplicated prescription of insulin; the SHO who gave Tazocin to someone who was penicillin allergic; the registrar who accidentally inserted a central line into the carotid artery. Having convinced myself that I had prescribed IV salbutamol at 10x the usual dose at three in the morning, I made up the term ‘Gut Thump’. This equates to the adrenaline-driven, panic-soaked reaction that comes minutes after the event.

Much later, after a complication, I charted my own psychological journey, starting from the moment I received the CT scan report showing the damage, and drew it on a graph. Many may recognise this line. The time it takes to reach equilibrium will vary, depending on the sense of culpability and the outcome for the patient. In this case the patient was absolutely fine, but there was a period during which this was not guaranteed. The road to equilibrium involves communicating, receiving reassurance, and doing stuff to make it better. The patient’s own understanding helps, too.

Van Gerven et al, surveying 913 healthcare workers who had been involved in a patient safety incident, and using an Impact of Event Scale, found,

 

‘…higher psychological impact is related with the use of a more active coping and planning coping strategy, and is unrelated to support seeking coping strategies. Rendered support and a support culture reduce psychological impact, whereas a blame culture increases psychological impact.’

This appears to correlate the intensity of response with the pro-activity shown by the HCW in dealing with things, which is interesting. I would have assumed that the HCW who just let things be might feel less of an impact. This might indicate a link between conscientiousness and psychological injury.

A qualitative analysis of 21 staff by Ullstrom et al, ‘Suffering in silence: a qualitative study of second victims of adverse events’, found that non-judgmental support from peers was vital. One interviewee said,

I really want to highlight how important that support is (…) without it, I don’t know where I would have been now (…), if I would have ever dared to come back and work as a nurse again. (Interviewee No 14, Profession: Nurse, Type of adverse event: Wrong medication dose)

While another spoke about the reluctance of doctors to seek external help if they are not recovering,

I think there is an inner resistance towards getting external help. At least, among doctors [the idea is] “I can handle this” (…) but I think that really we should have much more general support. In difficult situations overall. Not only after adverse events. (Interviewee No 18, Profession: Doctor, Type of adverse event: Operation went wrong)

 

*

 

We are encouraged to be open about our mistakes, and our leaders in the profession have led by example. The booklet ‘Medical Error’ (published by the National Patient Safety Agency) contained vignettes from the careers of, among others, the then GMC President and the then President of the Royal College of Physicians.

 

Error, then, happens to the best of us.

Yet, we cannot accept our role in these errors with equanimity. It takes something out of us. This is normal human behaviour surely – regret, guilt. We are now required to express these feelings to those who have been harmed, as per the Duty of Candour, which became law in March 2015. Following a series of healthcare scandals, the Francis report described a culture of obfuscation, and this was followed by A Promise to Learn… by Don Berwick which enlarged on the idea of transparency, and finally came specific recommendations from the Royal College of Surgeons which preceded Regulation 20.

So now, while handling our response as 2nd victims, we must take ourselves to the person we have harmed and apologise. This might compound the emotional challenge of the situation, or it may in fact accelerate resolution. It is amazing how a patient’s forgiveness can set an anxious doctor back on track.

It is worthwhile dwelling on how to handle Duty of Candour conversations. I have heard and used various verbal formulations, which to the outside observer might seem surprising or evasive… for how hard can it be to say sorry? But… what are you sorry for? Are you sorry you did it? Are you sorry ‘we’ did it, i.e. the team, the department, the hospital? Or are you sorry it happened, in an impersonal way, the same way you felt sorry when you heard on the news that someone got run over last weekend? Which sorry? And while finding your way through the post-incident psychological reaction, do you have the emotional strength to handle the expression of sorrow, whatever form it takes? It is quite possible that a natural feeling of vulnerability and defensiveness will influence the words that are chosen, and make the conversation less candid than intended. On the other hand, perhaps, as in the figure below, those who accept a degree of personal culpability and are affected by that are more likely to demonstrate candour than the flint-skinned individual who regards adverse outcomes as inevitable complications over which only fate can exert influence.

On the subject of defensiveness, it is impossible to discuss medical error or patient safety incidents without referring to the legal situation. We know, following the recent trials of both Dr Hadiza Bawa-Garba (which occurred after I gave this lecture) and Mr David Sellu, that doctors are not immune to prosecution following ‘omission’ harm events. These names are likely to weigh heavily in the minds of doctors who become involved in safety incidents, and are likely to exaggerate the feelings of panic and ‘chaos’ that were described in the Scott paper.

 

This article has focussed on the healthcare workers. The response of the true 2nd victims – sons, daughters, mothers, partners – has been overlooked, but that subject would require an article of its own, and I am probably not best placed to write it. However, the Duty of Candour has, in my opinion, brought the two spheres of psychological response closer together. The (primarily physically) injured or suffering patient is now more likely to meet the (psychologically) traumatised doctor. The shared experience, and insights into the stresses experienced, may actually improve understanding. But the resources required of doctors and nurses to deal with their own regret and self-criticism, while simultaneously approaching patients or relatives, should not be underestimated.

 

 

 

###

 


Accountability, blame and medical error after Bawa-Garba

 

The reaction to the Dr Bawa-Garba case has shown that the medical community finds it hard to accept that individuals can be held personally accountable for underperformance (once we exclude malice, drunkenness or other gross examples). Rather, deficiencies in the healthcare system surrounding the individual should be identified and corrected. Don Berwick was very clear in his report ‘A promise to learn – a commitment to act: Improving the Safety of Patients in England’,

 

NHS staff are not to blame – in the vast majority of cases it is the systems, procedures, conditions, environment and constraints they face that lead to patient safety problems.

 

Anyone involved in investigating patient safety incidents will recognise that preceding each one there is usually a system-based issue to be found. For instance: under-staffing or surges in demand, confusing protocols, similarly packaged drugs, or allowance of distraction. At the extreme of de-individualisation – if that is even a word – the system can also be held responsible for placing an underperforming or inexperienced doctor in front of patients in the first place. It can also be blamed for failing to identify and support an individual when they enter a situation that allows their deficiency in knowledge, pattern recognition, or prioritisation to manifest as harm. Etc., etc., ad absurdum. So, does personal accountability for under-performance (in good faith) exist at all?

 

Smoking gun

It is well established in the patient safety literature, and in the modern philosophy of healthcare, that personal accountability, AKA ‘blame’, inhibits system-wide improvement in safety. Fear of blame dissuades healthcare staff from reporting errors, thus allowing the same mistake to be made again in the future. Fear exists: the Kirkup review into failings in Liverpool Community Health NHS Trust describes how those involved in clinical incidents were brought in for questioning:

 

‘In practice they were “an interrogation and a frightening experience”. Staff reported feeling physically sick beforehand and approached them with trepidation. Across the organisation shouting and finger-pointing became the norm.’ (Richard Vize, Guardian)

 

An article by Bell et al in the journal Chest describes how a missed lung cancer diagnosis can be attributed to multiple failures in the system, but the ‘smoking gun’ is to be found in the hand of the pulmonologist who last had contact with the patient. A classic case. The authors conclude that the pulmonologist, who did carry some responsibility, is absolved by his or her active engagement in fixing the system such that the same error cannot happen twice. This is a message we all must take away: our accountability lies in the duty to work constantly on improving the safety of our systems for patients yet to enter our hospitals – through audit, reporting, and being open.

Speaking at the Global Patient Safety Summit in 2016, the Secretary of State for Health, Jeremy Hunt, aligned himself closely with this philosophy:

‘…to blame failures in care on doctors and nurses trying to do their best is to miss the point that bad mistakes can be made by good people. What is often overlooked is proper study of the environment and systems in which mistakes happen and to understand what went wrong and encouragement to spread any lessons learned. Accountability to future patients as well as to the person sitting in front of you.’

Yet Dr Bawa-Garba’s fate has shown that despite all these words and aspirations, personal accountability for particularly poor performance still exists. Is this justified?

 

Individual accountability within a Just Culture

Philip Boysen, an American anaesthesiologist, wrote about how to develop a ‘just culture’ in healthcare, drawing from various industries and organisations, some historical. He acknowledged that blame may still have a role within a just culture:

‘While encouraging personnel to report mistakes, identify the potential for error, and even stop work in acute situations, a just culture cannot be a blame-free enterprise.’

Boysen refers to a paper ‘The path to safe and reliable healthcare’ by Leonard and Frankel, which presents a spectrum of behaviours associated with safety incidents, including ‘reckless’, ‘risky’ and purely ‘unintentional’ error. These result in ‘discipline’, participation in teaching others, ‘retraining’ or at the very least involvement in the investigation.

The UK’s Sign up to Safety campaign, which promotes difficult but necessary conversations as a way of exploring safety issues, breaks down personal accountability along just these lines:

The Boysen paper also refers to an (older) NHS algorithm that poses a ‘substitution’ test after medical error: ‘Would another provider, put in the same circumstances in the same systems environment, make the same error?’

These are all efforts to unpick and define the place of personal accountability. It seems clear that it does exist, but that censure or ‘discipline’ should come late, and only if you make a mistake while not adhering to policies, or worse, are reckless.

 

What is safe anyway? 

How do we define a safe environment? Addressing factors that permit greater potential for error, such as poor staffing, fatigue and unreliable IT, is clearly vital, but we are not yet agreed on what ‘safe’ looks like. Staffing ratios are a start, but do not necessarily take into account fluctuations in demand, or the effect that one highly complex patient might have on a service. However safe we make the environment, however rigorously we modify the ergonomics to take into account the variables arising from human factors, patients still rely on individual doctors to make the right decisions at the right time. The environment will not protect patients from mis-diagnosis or knowledge gaps; or, in the case of Dr Bawa-Garba, what has been called by some, ‘cognitive failure’.

In recent weeks many NHS workers have been reassured by their trusts that unsafe environments should be called out, and that they are encouraged to speak up. The GMC published a flow chart to help people decide how to raise concerns. Yet we all know that in the immediate term, on a Saturday night when you are two colleagues down because the planned locum fell through and the ward F2 has rung in with the ’flu, extra resources are unlikely to arrive. How do we apportion individual accountability here? Is it true that whatever happens on this night, the doctors should not be blamed? Will their errors, should they make them, and however odd they might appear from the outside, be overlooked because they were too pressed? Does personal accountability for under-performance completely evaporate in sub-optimal conditions?

 

Intrinsic accountability: the map of experience

Although the backlash against Dr Bawa-Garba’s Gross Negligence Manslaughter judgment (clearly excessive in most people’s minds) has suggested that there is no place for blame when things go wrong in substandard systems, we should remember that even in well-provided Trusts with working computers, risk lurks, ready to strike, and those of us who are standing by when it does so will be asked to explain what happened. Being asked to explain feels like blame. That is because we, as doctors, naturally feel responsible. That is our baseline moral state: responsible, slightly fearful (especially in the early years), anxious to make the correct decision. We feel guilty when things turn out badly. We generate our own sense of accountability, and subsequently we may experience weeks of self-examination. Sometimes, we need to be reassured by older hands that it is not our fault. Otherwise we will burn out in the slow flame of self-doubt and fearfulness.

 

As I have observed before, there is a place for this sense of blame. It sharpens the senses and opens the psyche to deeper lessons. The mistakes for which we accept a degree of responsibility leave indelible marks, which over a career coalesce to form a map of hard-won experience, the better to help us navigate the tricky situations to come. A well-known consultant in my field said, on a training day, ‘An expert is someone who has made every silly mistake possible.’ Yes, but none of them twice. The same probably goes for Dr Peter Wilmshurst, a cardiologist and well-known whistle-blower who has referred himself to the GMC for a career of errors. This act makes the point: medical careers teem with error. We become good through error. But if we blame our errors wholly on the systems around us, we will not lay down the ink that makes that map. It may be an unpopular view, but I think part of being a doctor is learning how to receive those stinging tattoos.

 


Justice and safety: a dialogue on the case of Dr Bawa-Garba

 

Everyone must have a view. Thousands have expressed theirs. Many have committed to funding an independent legal review. None were there. None heard what the jury heard. Most have read the essentials of the case, and we are worried that if we commit a serious clinical error, we may be ‘hounded’, ‘scapegoated’ or ‘persecuted’, first by the criminal justice system, and then by the GMC. But the GMC says this was no ordinary error. The court found her performance to be ‘truly exceptionally bad’. Yet the system in which she worked was limping, and unable to provide the support a doctor should expect. What would have been a proportionate punishment, if indeed punishment was required?

I present a dialogue between two doctors of differing views. This allows me to present both sides of the case, and also to explore my own ambivalence within a creative framework. Because my response to this sad case is not straightforward, and it is still consuming my thoughts.

If you are unfamiliar with the case, it will help to read this BMJ article. Also useful is the MPTS (Medical Practitioners Tribunal Service) report [link subsequently taken down] and the transcript of the recent High Court judgment. [On 13.8.18 the Court of Appeal overturned the High Court’s judgement]

Dr A, you will soon realise, is hawkish and unsympathetic to her plight.

 

*

 

Dr A. You know, my first reaction when reading about the errors made that night was – What? Lactate 11, pH 7.0, that’s clearly a sign of extreme physiological stress, actually of imminent dying… there can have been no sicker patient in the hospital… how could a doctor go off and do something else for several hours before checking up on the child?

Dr B. At the start she treated the child correctly, that has been accepted. But she had no choice but to ‘go off’. She was running the entire service, carrying the crash bleep, and struggling against a failed IT system. If she’d stayed with one child the other patients would have been neglected.

Dr A. It was busy. We’ve all been there. So when the pressure is on you have to prioritise, and if that results in two equally deserving cases needing simultaneous attention, and you can’t give that attention, you escalate.

Dr B. To the consultant you mean?

Dr A. Yes. He was there, there was a meeting in the afternoon. The blood gas results were read out. He could have been asked to help.

Dr B. But he didn’t offer to see the patient, did he, despite having heard the result?

Dr A. So what? A registrar of that seniority would be expected to ask, and assert themselves if they didn’t get the answer they needed. No consultant would refuse.

Dr B. We don’t know what was said. What does your consultant do – offer proactively to see anyone who sounds sick, or wait to be directed by you?

Dr A. A mixture, it depends who it is, keen, passive… they vary.

Dr B. But you insist she was the prime coordinator, the clinical leader in that situation, the one who should have coped. It was all on her?

Dr A. She was the one with the first-hand knowledge of the patient. So yes. I am critical. The enalapril – again, it sounds like a lack of asserting her impression on the plan, i.e. she should have said, don’t give that drug, whatever happens. And the DNACPR error, that seems to betray a mind sinking in the tide of events…

Dr B. So you accept that events, the environment, the circumstances, were also a factor.

Dr A. Yes, of course. We all work in similar circumstances, we always have done. And we cope, or recognise that we are sinking and ask for help.

Dr B. You really are a hawk on this. Do you feel sorry for her?

Dr A. Yes, but this is beyond emotion. This is about safety. And, based on what I have read, there was justification in the gross negligence manslaughter judgement. Moreover, I don’t see how the GMC had any choice but to press the point by overturning the MPTS who, the High Court judge feels, over-reached themselves in downgrading her culpability. You can’t have doctors guilty of gross negligence running acute paediatric services… surely. The GMC are, if you like, accommodating a decision made by a higher power in the land, a jury. It doesn’t matter if a tribunal panel feels it was over-harsh, given the extenuating circumstances, to take away her career and livelihood forever. The GMC have to cut the regulatory cloth to fit the ‘criminal’ form, i.e. strike her off.

Dr B. But the MPTS saw evidence of remediation. She was employed for two years after the incident, seeing children every single day. Clearly, she was not unsafe. She had learned, improved. Isn’t our training all about learning from the mistakes we have made to become better doctors?

Dr A. There is a limit. And by year 6 of specialty training, most of the basic lessons should have been learned. Look at it through the prism of public confidence, which I suppose is what the GMC must do. If she goes back to work, even under supervision, will a parent be told that the doctor on call who is coming to see their child was, in the last few years, found guilty of gross negligence? Wouldn’t you want to know, if it was your child? Or do you have sufficient faith that remediation, and training, are good enough to ensure that those traits that led to a guilty verdict have been abolished for good? The High Court said it couldn’t be sure that she wouldn’t suffer another ‘collapse’ in performance one day. I agree. It happened once…

Dr B. But look at any hospital. There is a spectrum of competence. There has to be, because there is human variability. And I do not expect to be made aware of the competence level of each doctor I see. I must have faith, in the training system, in the deaneries and in the Trusts – actually, in the GMC, that each of them is safe. If the MPTS felt that she was safe, and had remediated, why not believe them? Why look simplistically at the jury’s verdict and use that as a permanent, inerasable measure of performance, one that was made without some pertinent facts?

Dr A. So you wish to re-try the case, in your own head. You would overturn the jury’s decision?

Dr B. Yes. I believe it was unjust.

Dr A. You know better?

Dr B. Perhaps.

Dr A. Naïve. That is not how justice works in this country. The jury has the final word. I’m sorry. You can’t second guess it.

Dr B. Juries have been wrong.

Dr A. Yes, when miscarriages of justice have occurred. But that is not the case here. The High Court examined the question of what the jury were told, and found no problem with it. There has been no miscarriage of justice. No-one is saying that.

Dr B. Yet… it is unjust.

Dr A. Once the ball of justice began to roll, once it became a police matter, there was no going back.

Dr B. So perhaps the thing that should have been done differently would be for her not to have been arrested and tried. Perhaps the very concept of gross negligence manslaughter is wrong. Where there is no will to cause harm, only failure to do well (whatever the circumstances), perhaps we should not involve the courts.

Dr A. But a child died, possibly needlessly, definitely earlier than he should have. How can that not arrive at the door of Justice?

Dr B. Avoidable deaths are all around us. We see them, we discuss them, we learn from them, every week and month. Avoidable deaths are grist to the mill of patient safety. I saw an estimate that there are 9000 per year attributable to poor care in hospitals. We must accept that avoidable deaths will occur, not pounce on them and send each to Law. This is the problem, don’t you see? This is the harm. By raising the fear of recrimination and sanction in the minds of doctors, those weaknesses in our systems, all those near-misses or harms that could signal a fatal accident to come, will go unexamined. Who, having been involved in a clinical incident that caused any meaningful harm, or even death, will now put up their hands to attract attention and bring on a good investigation? Fewer, now. Because if the patient or the family decide to pursue the individual, and by degrees the incident moves into the view of the Crown Prosecution Service, then they could end up losing everything. That is the harm here. The future of patient safety.

Dr A. You ask too much of the GMC and the courts. I would rather base decisions on the definite past than the possible future. It happened. The worst thing that can happen to a patient, neglect, incompetence, happened. On that day she was ‘truly exceptionally bad’ – did you read the judgement? There are very few people who disagree with that assessment. The MPTS also accepted that there was gross underperformance, as far as I understand. A boy died, despite having signs and clinical features that anyone, paediatrician or not, would have recognised as deserving of the closest attention, and escalation, and absolute prioritisation. There is more to this than her career, and her ability to improve. There is a wrong, of such magnitude that time cannot just be allowed to roll on, allowing her to resume her career.

Dr B. I am surprised. You really have no sympathy, no sense of professional camaraderie?

Dr A. It’s irrelevant. And dangerous. Camaraderie is also called ‘closing ranks’. Just because we belong to the same professional group does not mean that I should automatically support her in this. I know there are bad doctors out there, I’ve worked with them. A line has to be drawn. Look… her qualities have been examined to the utmost, by intelligent people from all walks of life, and mitigating circumstances have been examined, and despite this, her fitness to be a doctor has been found lacking in the High Court. What more can you ask for?

Dr B. Perhaps, one day, you also will find yourself sinking in events, off your A-game, unable to make good decisions, unsupported by a passive consultant… wouldn’t you expect sympathy from your colleagues?

Dr A. I would expect a fair process.

Dr B. And you think the process has been fair here?

Dr A. Harsh, yes… but fair.

 

*

Note: today (30.1.18) the GMC has undertaken to examine the role of Gross Negligence Manslaughter cases ‘in situations where the risk of death is a constant and in the context of systemic pressure. That work will include a renewed focus on reflection and provision of support for doctors in raising concerns’.

 

 

A few excerpts:

The MPTS, quoting a previous tribunal in which a doctor found guilty of gross negligence manslaughter was NOT struck off – “The Committee was rightly concerned with public confidence in the profession and its procedures for dealing with doctors who lapse from professional standards. But this should not be carried to the extent of feeling it necessary to sacrifice the career of an otherwise competent and useful doctor who presents no danger to the public in order to satisfy a demand for blame and punishment.”

MR JUSTICE OUSELEY, in the High Court – ‘However […] the Tribunal (MPTS) did not respect the verdict of the jury as it should have. In fact, it reached its own and less severe view of the degree of Dr. Bawa-Garba’s personal culpability. It did so as a result of considering the systemic failings or failings of others and personal mitigation which had already been considered by the jury; and then came to its own, albeit unstated, view that she was less culpable than the verdict of the jury established.’

MR JUSTICE OUSELEY, on systemic failings that were not shown to the jury in the original GNM hearing – ‘There were two “systemic” failings not explored at trial which Mr Hare acknowledged, but we accept his submission that Dr. Bawa-Garba was convicted notwithstanding the difficulties to which they gave rise, and that they could not have affected the verdict.’

MR JUSTICE OUSELEY – ‘Dr. Bawa-Garba, before and after the tragic events, was a competent, above average doctor. The day brought its unexpected workload, and strains and stresses caused by IT failings, consultant absences and her return from maternity leave. But there was no suggestion that her training in diagnosis of sepsis, or in testing potential diagnoses had been deficient, or that she was unaware of her obligations to assess for herself shortcomings or rustiness in her skills, and to seek assistance. There was no suggestion, unwelcome and stressful though the failings around her were, and with the workload she had that this was something she had not been trained to cope with or was something wholly out of the ordinary for a Year 6 trainee, not far off consultancy, to have to cope with, without making such serious errors. It was her failings which were truly exceptionally bad.’

LORD JUSTICE GROSS (sitting with Ouseley in the High Court) – ‘Like Ouseley J, I reach this conclusion with sadness but no real hesitation.’

Systems and sense

 

 

The controversy surrounding paediatrician Dr Hadiza Bawa-Garba has got me thinking about the relationship between individuals and systems in healthcare. In this case, it has been suggested that system failures, including under-staffing, contributed to a young patient’s death. So important do those factors appear that many feel she should be allowed to continue practising despite a prior manslaughter judgment against her. How do we decide how much blame resides with an individual doctor, and how much can be attributed to the sub-optimal system? I do not know the answer, but it is a question worth exploring.

I cannot recall a single avoidable death where the ‘system’ (i.e. processes, ways of working in the hospital) was not at some level criticised. This is because I have yet to work in a hospital where safety-netting systems were perfect. From slow or inconsistent IT, to lost correspondence, inadequate hand-over arrangements or over-stretched teams… there is always something in the background that appears to diminish an individual doctor’s ability to make the right diagnosis, or initiate the right treatment, in an acceptable timeframe. That’s why it is rare for investigations into avoidable deaths to conclude that a single person’s act of commission or omission was to blame. Blame, of course, is a word we avoid, though as I explored in a previous article, a sense of personal culpability may be important as a driver of self-improvement.

Thinking back to formative errors I made in my own training, I recall an incident of gentamicin-induced renal failure. I prescribed it on a Friday, handed over the job of checking the levels (it is toxic if it builds up in the bloodstream), and went off for the weekend. The patient was given it as prescribed (this was back in the day when dosing was written up regularly, but with the caveat ‘check trough levels first’). But no levels were checked. Her renal function deteriorated, and she ended up on dialysis for a while. Disaster.

The system did not help me. There was no gentamicin prescribing protocol; no system of flagging abnormal kidney results to doctors on call; the handover book was a scrawl – so many ways in which better systems could have helped prevent harm. Yet that was the environment in which I worked. It was my handwriting on the chart that damaged her kidneys. I learned that if a result is important, you need to chase it, and if something needs doing when you’ve gone home, you have to find out who is supposed to do it and make sure they are completely aware. You can’t be passive; you can’t leave it to the system.

Take this example, from a Human Factors in Healthcare document. I have underlined the areas where it was felt the system let the patient down, and put into bold those where an individual made an error.

_

‘A child with a known penicillin allergy was prescribed and administered an intravenous dose of an antibiotic of the penicillin class’

A child was due to have a pacemaker fitted. On pre-admission an allergy to penicillin was recorded. This was noted on both the nursing admission assessment form and the anaesthetic record chart. Prior to operation, the allergy was discussed with the specialist paediatric cardiology registrar, the consultant paediatric anaesthetist, anaesthetic specialist registrar and the cardiology consultant. However, following the procedure the patient’s plan included intravenous and oral penicillin.

How did this happen?

  • Intravenous penicillin is the usual antibiotic used following a pacemaker being fitted. There was no up-to-date protocol on what other antibiotics should be used if a paediatric cardiac patient has a penicillin allergy, which initially caused confusion;
  • There was no clear record of the allergy in the medical notes when the Consultant Cardiologist advised treatment;
  • No system was in place to prevent penicillin prescription when a known allergy was recorded;
  • A number of appropriate checks were not followed prior to administration of the antibiotics;
  • During independent checks, neither nurse checked allergy status, and both were under pressure to complete tasks. The patient’s allergy band was on the same side as their identity band, both of which were covered with a bandage for an intravenous drip.

_

Imagine the child had received penicillin and died from anaphylaxis. Would it seem reasonable for any of the individuals involved in the actions highlighted in bold to have been blamed, censured, or worse, accused of manslaughter?

The cardiologist put penicillin in the post-op plan, despite having been told about the allergy. Neither nurse checked for allergy, not thinking to peel back the obscuring bandage. Somebody put the bandage on without moving the allergy band. All were at fault. But an electronic prescribing package that automatically pulled the allergy from the patient’s record and blocked any doctor from writing up a penicillin-related compound would have rescued the situation. Can the absence of such a system be blamed for an error that results in death? Would its lack be used, in court let us say, to extenuate the error of the medical staff? Or should staff be judged in the context of the environment in which they find themselves?

At what point does responsibility for errors cease to be attributable to systems, and start to accumulate around individuals? There is no visible line or threshold. Regulators and courts must determine what was reasonable in the circumstances, and whether a doctor met the minimum acceptable standards set out in Good Medical Practice. For the trainee, it is important to understand that all systems are imperfect, and to develop a sense for when to drive management forward, well before the backstops provided by the ‘system’ throw up a red flag.

***

Omissions: reading the Kennedy report on Ian Paterson

 

This imagined reflection by a doctor who worked with Ian Paterson is, of course, ill-informed. I was not there. But I have read Sir Ian Kennedy’s brilliantly written report (2013), and think that the messages it contains should be seen by the wider medical community. The report is 166 pages long, but perhaps this ‘story’ will help introduce people to it.

In the excerpts from the report that follow the reflection, I have removed the names of clinicians. However, it is all in the public domain. The Kennedy report focuses on Mr Paterson’s unacceptable surgical technique, and the NHS Trust’s slow recognition and response. It does not examine the unjustified operations and investigations in the private sector, for which he was recently convicted.

This article sits with two other posts, ‘Why Michael didn’t blow the whistle: pub scene’ and ‘The eyes and the ears: why Adam blew the whistle’. Like those, it explores a doctor’s internal battle of the conscience, insecurities and the concept of moral bravery in the workplace.

 

***

 

“I wasn’t directly involved, but I was in a position to observe. When he was suspended I wasn’t surprised; it was high time. The criminal stuff, that did come as a surprise. I had no idea he was doing operations unnecessarily. But this is less about him than us, as a group. About me.

“We knew he was no good. His reputation preceded him, and as time went on a few people discovered firm evidence that he was an outlier. So your question is valid – why didn’t we act sooner? Why didn’t I?


“When the weight of complaint was sufficient, action was taken. But before that, for years, we did what Kennedy said we did in his report, we worked around him. That’s what you do with difficult personalities. A jagged rock in the stream, which will not be eroded. The water goes around it. Decisions were made without him. He was excluded from the panel when the second surgeon was appointed. They couldn’t risk having him anywhere near the process.

“I watched him in the MDTs. He led from the front, made decisions quickly, and helped to ensure that the huge list of patients was dealt with. Snappy assessments and decisions were necessary. The referrals never let up. From time to time the oncologists pushed back, about the type of surgery, the need for revisions when you’d have expected a cure… but their searching tones changed to resignation after a while. They had done an audit on the resection margins, had proven he was an outlier, but nothing changed. What could they do? And anyway, they, the ones who were at the receiving end, who knew the outcomes were not right, didn’t actually work in the same Trust. You could see their faces, a bit fuzzy on the video link during the MDT… and they just looked neutral.

“The signal had been raised, the data had been forwarded… they say we are all managers, but we aren’t.  We are clinicians who rely on senior managers to tackle the problems while we get on with our jobs, which is seeing patients. That’s what they are for, to review the whole picture and make a judgment call.

“Okay, you say, what about your responsibility as a doctor to keep the pressure on, in the face of managerial inertia and an ongoing threat to patient safety? Well, look at it… there was an external peer review around this time, and it concluded that apart from needing a few tweaks, the service was sound. In fact it was congratulatory. Once I heard that, I began to wonder if we, the doubters, were the ones who had got it wrong.

“To keep the pressure on in this kind of situation you need to have absolute confidence in yourself. It’s got to be more than a suspicion or a sense of unease. So, if you hear that a review or an audit has been conducted, and that the people upstairs see no indication for urgent or fundamental change, you back off.

“Yes, even if you know, in your heart of hearts, that he’s probably doing harm. Because the risk in keeping your head above the parapet is substantial – not that it will be blown away, the NHS is not like that nowadays – but that your everyday professional life will become deeply unpleasant. There is enough sadness in cancer medicine, in the illness and grief we meet daily. If your interpersonal relationships break down, if you can’t look your colleague in the eye or have a conversation, then coming to work becomes miserable. You might say that a little bit of discomfiture is nothing compared to protecting patients, but it’s all a balance. We go through our careers observing colleagues who may well be under average, but we can’t act to remove all of them. Half of us are below average by definition, aren’t we? Quality lies on a spectrum. Who am I to say, not being a surgeon, where one should lie on that spectrum?

“I did think about raising hell, once. This was when I met a patient who had a recurrence in breast tissue that should have been removed first time. She was living proof that his surgical method was wrong. There in front of me was the embodiment of disappointment and suffering, and also of dishonesty… because when she consented to her mastectomy she did not know that his particular method, to leave some fatty tissue behind, put her at a greater risk of recurrence. She, and her husband, assumed that the person in front of them knew best, that the expert was an expert, and would only suggest a treatment that was effective.

“When I saw the situation from the perspective of the patient, I shook myself out of my comfort zone, and I went to speak to someone. I won’t say who. And that conversation cooled my anger. Another perspective was provided. It was explained to me… that he carried the service, that he was industrious, not lazy… which you can’t say for everyone… that the patients trusted him and that didn’t happen accidentally, that there was actually an infrastructure in place for monitoring people like him, called appraisal, which he flew through each year. I walked away from the meeting with a new understanding. I didn’t have to sacrifice my professional quality of life, I didn’t have to go on a mission to get this guy out. Others were aware of the ‘problems’, and they were generally happy that although he was an outlier, he did not lie far enough outside the norm to be stopped.

“And of course, they were wrong. Perhaps they were all looking at each other, talking to each other, and hearing the same thing. Echo chamber. There is no real problem here… so many patients treated… targets met. Targets met… the echo.

“When, as a non-surgeon, you look at a surgeon, there is a certain awe. It sounds childish perhaps, and I’m no worshipper, but I know – we all know – that the job they do, cutting into others, is different. It takes confidence and skill to get through the training. There are technical factors that the non-surgeon cannot hope to understand. The interaction between tissue and metal is a mystery to people like me, I can’t judge it with confidence. The outcomes yes, but not the technique. That requires others to come in and make a judgement. The Trust did that… and we did not see the conclusions, not for years.

“These are not excuses. I am not proud of my inaction. I accept I played a part in the acquiescence. If I had made more of a fuss, perhaps fewer patients would have undergone bad operations. But for all of us to watch for 8 whole years between 2003, when the first concerns arose, and his exclusion from the Trust in 2011, it must have been something more than individual weakness… it must have been a permissive environment that prioritised surface efficiency over quality. Kennedy’s report focuses on the role of the non-executive directors, who incuriously accepted what they were fed by the executive, who had a rose-tinted view… on the secrecy of HR processes, on reports and audits being unsupported… organisational. Cultural. He does not put the blame on individuals like me, even though we were the ones who knew…

“And next time? That’s the problem you see. Although I can recognise my omissions in this case, I’m not sure I’ll act differently next time. Because you don’t know, until you’ve seen the proof, that the doctor you are worried about is doing real harm, or is actually malign. You might have your suspicions, but the proof – which in this field is, ultimately, death – does not present itself.

“Unless we all agree that a certain degree of suspicion, a certain number of reports or complaints, should result in suspension, we are not going to put these people on gardening leave just in case. Our clinical services couldn’t sustain it. There isn’t enough slack in the system. There wasn’t enough slack to give the two guys who were asked to write reports the time off from clinical duties to produce something quickly. It took one of them three months. We need the time and the space to work on these issues. We need to act on risk, not proven harm. In doing that, we might have to suspend five surgeons to confirm one case of unacceptable practice. ‘NNS, the number needed to suspend’ – do we buy into that? Perhaps we should, because when that risk is proven to be real, the time elapsed will have seen more patients come to harm while we vacillated.”

 

***

 

Excerpts from the Kennedy report on which this fictional reflection is based:

 

‘He came with something of a reputation as being a difficult person to work with. When he applied for the appointment, Dr _______, a senior manager at Good Hope Hospital, telephoned one of the Medical Directors at the Trust, Dr _______, to alert him to the fact that Mr Paterson had been the subject of an investigation and suspended in 1996 following an incident in which an operation on a patient had exposed the patient to a significant risk of harm. A review had been commissioned by the Royal College of Surgeons.’

 

‘That said, there was a level of informal knowledge. As one of the senior radiologists, told me, “To be honest, when we heard he was coming … it was, you know, ‘What’s gone on then?’ His reputation was well-known as being difficult and having open rows with a colleague at Good Hope. … it’s always a surprise to us why they took him on when they knew he was trouble”.’

 

‘Mr Paterson was described as high-handed to the point of being dismissive of colleagues. Forewarnings of this pattern of behaviour were already evident when Mr Paterson worked in the vascular unit. This unit was run in a very collaborative way, but Mr Paterson did not participate and rarely attended the MDT. When Mr Paterson moved to breast surgery, he behaved in a similarly challenging way. The hope was, it appears, that the managerial and governance arrangements in place would deal with whatever had to be dealt with. It was a forlorn hope.’

 

‘He had been the subject of an investigation and suspension two years previously by his then employer, Good Hope Hospital and had been required to undergo a period of supervised practice before recommencing laparoscopic surgery. The Trust was advised of this prior to his appointment.’

 

‘He is described as charismatic and charming and was much-liked by his patients. He was not, however, a team-player in an area of care which is absolutely dependent on clinicians working efficiently and effectively as a team.’

 

‘They [his colleagues] were faced by an awful ethical dilemma: what to do about the patients whom they were seeing who were supposed to have had a mastectomy but had not, in fact, had one…’

 

‘The Report overlooked a crucial issue: the issue of consent. Women were giving their consent to a mastectomy. But, on occasions, a variation of a mastectomy was being carried out; what became known later as a “cleavage sparing mastectomy”. This was not a recognised procedure. Women did not consent to it in any properly informed way.’

 

‘Senior managers saw Mr Paterson at the time as a highly effective surgeon performing efficiently, enabling the Trust to meet its targets.’

 

‘The concerns over Mr Paterson’s clinical competence went unaddressed. Mr Paterson continued to operate as before for nearly four years. The oncologists who were based in another Trust felt ignored. They had expressed their concerns and supplied evidence. They felt that no-one at Mr Paterson’s Trust was listening.’

 

‘They were told the good news from the Report of the Peer Review in 2005. They were not told of Mr _____’s Report, nor the less favourable views expressed by the initial and follow-up QA Visits in 2004, and the recommendations which followed. Good news was preferred to true news.’

 

“…we did raise that we had some concerns and we were told not to worry about it, so for the next few years we didn’t say anything”

 

‘They took the view that because they were not surgeons, they were defined out of competence. As Dr _______ put it, “I had taken the trouble to go through 100 cases, two thirds of my case-load for a year basically, and anything other than the most rudimentary examination of that would have shown substantial problems and the Trust took not a blind bit of notice of it and, not only that, they swept it … under the carpet”.’

 

‘When the Trust decided to make a new appointment in 2007, Mr Paterson was excluded from the process of selection, despite his being the leading surgeon, for fear that he would again put off any applicant. This is just one example of how senior managers behaved, towards Mr Paterson. Rather than confront him, they preferred to work around him.’

 

‘The new surgeon appointed in 2007 soon began to raise concerns about Mr Paterson’s surgery after seeing some of Mr Paterson’s patients, under the newly introduced system of cross-cover. The senior managers decided to launch an investigation.’

 

‘… if the issue of consent had been identified, as it should have been, a reason to require Mr Paterson to cease operating had existed for several years earlier.’

 

‘He [a colleague] talked of “raising his head above the parapet”. This speaks volumes about the perception of the way that the Trust then worked: that raising concerns was to be characterised as putting your head above a parapet, with the implication that the head would be shot at rather than welcomed and invited over the battlements to talk further.’

 

‘He realised that what he lacked was proof that women were being put at risk. The only way that he would obtain that proof was if women presented with recurrences of their cancer. And given that it might be several years before recurrences occurred, there was nothing he could do in the meantime.’

 

‘Evidence of actual harm, except in the most obvious cases, is usually hard to come by. It takes careful documentation, proper sampling and statistical analysis. Without all these, the concerns will be at risk of being dismissed. Dr ______ provided evidence but it did not show harm. It showed a deviation from accepted practice and a risk of harm.’

 

‘They told me that by the time their own concerns were coming to the fore, “everybody was aware of this”. One replied, “… it’s like stating the bleedin’ obvious, they already knew. … the senior management had been informed by the rest of the team, the consultants, and I can see that us adding our voice to that may have had – well, I don’t believe it would have had any effect but I can see that there is an argument that you could say, well, you know, you didn’t raise concerns as well but they’d already been raised…”.’

 

‘…once the HR procedures were invoked, everything was covered by a blanket of confidentiality. Like others, they were kept in the dark.’

 

‘Organisations can tend to become closed, to exclude others and become disinclined to listen to the voice of “outsiders”. This is usually a bad sign in terms of the performance of the organisation… The “outsider” may see himself in such terms, feel he has done his bit and retreat to familiar territory.’

 

‘It is impossible to overstate the emotional burden that he and others shouldered for years. As Mr _______, who carried out an investigation in 2007, put it to me, while he did not want to emphasise the element of emotion in what he heard as he gathered evidence for his Report, “to see someone virtually in tears was an eye opener”.’

 

‘A concern about the practice of a clinician is raised. It is perceived as a criticism of the clinician rather than a concern about patients. The perspective is that of the clinician. The response of managers to the person expressing concerns is to demand evidence: to “put up or shut up”’.

 

‘The call for proof, in a situation such as the one under review, was based on two flaws. First, it proceeded on the basis that the issues at stake were scientific and technical and could and should only be addressed scientifically and technically. This is the way that clinicians tend to think. It is their comfort zone. And, it allows arguments about data and its interpretation to go on for years. The flaw is that, while there may be technical issues to address, the primary issue is that concerns are being expressed about the care of patients [   ] the proper response is to stop and look.’

 

‘Peer Review Visits do not have sufficient rigour to be regarded as a reliable guide to performance. They should either acquire the necessary rigour or be regarded as a useful exercise in bringing people together but not a serious examination. Currently, organisations may present the results of a Peer Review Visit in self-congratulatory terms, even though, on occasions, self-congratulation, on a more careful analysis, may be unwarranted. Patients and the public, therefore, should be alert to this when forming a view on the performance of a service or unit.’

 

‘Further light is cast on the failure to grasp the importance of consent by the practice, which I still encountered in 2013, of clinicians talking of “consenting” patients. The objections to this awful phrase are not merely linguistic. They go to the heart of a proper understanding of the relationship between patients and clinicians.’

 

The corrections

 


I can count on the fingers of one hand the times I have been explicitly corrected during my medical career. There was the time I treated a patient all night for septic shock when in fact he had cardiogenic shock – the fluid nearly drowned him. There was the time I performed a lumbar puncture on an obese patient, and put the needle three inches away from the correct inter-spinous space. There was the time I failed to check a gentamicin level before the weekend, and came back on Monday to find the patient in renal failure. And, during higher training, there was the course where my endoscopic technique was picked apart against a list of errors that the assessor held in his hand; he described three issues, but I glimpsed the piece of paper and the list was at least ten items long. There are others, some of which are littered across this blog.

Each time I felt embarrassed and defensive. I reacted by rationalizing. The reasons, or excuses, were various, and included the way I had been trained, the pressure of time, the load of patients, the need to balance speed and vigilance, and plain bad luck. But each time the fact that I had been criticised ate away at me. I was not used to it. Few of us are.

Medical students tend to come from the highest strata of achievement in secondary education, where their performance requires very little correction. Most float through training in the middle of the pack, periodically struggling to stay above the flood of knowledge, neither excelling nor failing. They require little in the way of feedback, just the odd nudge back on track. They become competent in the early years on the job and deliver medicine safely. Errors occur, many due to weaknesses in the system rather than personal fallibility. Corrections happen, but they are infrequent. And then, before you know it… they are practising more or less independently. They are part of a team, but they are essentially ‘complete’. They habituate to many forms of stress, but one that they are not accustomed to is ‘constructive’ feedback. When it comes, if it comes, it hurts.

How else do we improve once we have arrived at our natural ceiling of seniority? Continuing professional education is mandatory; we do it, and our knowledge is augmented, but weaknesses are not identified by passive absorption. Appraisal? Somewhat routine, and focused more on our perception of ourselves than on feedback from others. Revalidation – mmmm… we’ll see. So how do our weaknesses get identified? The answer is, by our peers – those whom we work with day in and day out. The difficulty with this is that they are the last people who wish to engage us on our deficiencies. They are colleagues and friends.

Most error is self-detected and self-corrected. Although I listed only a handful of occasions where mistakes were fed back to me, there are hundreds (well, let’s say ‘tens’) of similarly significant mistakes which I identified myself, and reflected on. Nobody came to tell me that such and such happened because I missed a clinical clue or performed a procedure incorrectly – there is no ubiquitous or all-seeing observer to perform this function. The continuous feedback loop of self-improvement requires attention to consequence, and the ability to accept that something bad has happened because of what you did or omitted to do. Without a willingness to seek the consequences of our decisions we will blunder on regardless. However, a safe culture cannot be expected to rely solely on such a subjective system.

Receiving feedback as a junior doctor in training is hard enough, but it is standard and expected. You rely on it. The discomfort that comes with receiving negative feedback from a colleague of equal seniority, at consultant or GP level, is even more acute. The same rationalisations occur, recourse to the same ‘excuses’ – the pressures, the pace, the resentment at diffuse responsibility that appears to have landed, unjustly, on your head just because your name was over the patient’s bed. So much for receiving; how about giving? It’s even worse. But becoming comfortable with discomfort seems an absolute requirement for a safe medical culture. It is easily described, but not so easily undertaken.

Soon after becoming a consultant I took on the task of reviewing the notes of patients who had suffered ‘hospital acquired’ venous thromboembolism (ie. DVT or PE). It sounded easy, and quite interesting. I flicked through the charts, identified possible lapses in prescription of anti-coagulant, and marked them as avoidable or unavoidable. The catch was… I had to interview the consultant in charge of patients deemed to have suffered avoidable events. I sent out emails, arranged convenient times to meet, and found myself addressing equally experienced or more established consultants. It was not easy. The key to converting it from a repeatedly painful and nerve-wracking exercise into something bearable was this – I too had been called up to justify a similar lapse, months earlier. The discomfort, the access of humility, the acceptance that yes, it could have been done better, we (I) should have been more vigilant, served as a brief lesson in correction. That was the angle when it came to phrasing my own feedback:

‘It happens to all of us at some point, it’s bound to. Happened to me last year. And it worked. If I hadn’t been asked to attend the meeting I wouldn’t have known that so and so had a big PE three weeks after they went home from my ward. It worked. It made me think twice about checking it on ward rounds, brought it home. It’s not about criticism, it’s about focusing minds on the things that are easy to let slip through your fingers…’

You get the gist. Correction shouldn’t be exceptional; it’s inevitable.

If hospitals and surgeries are to witness more of those ‘difficult conversations’ we keep hearing about, in order to promote a safe culture, each of us has to find a way to get comfortable with starting those conversations. The best way – in my limited experience – is to bracket them in the context of our own fallibility, for none of us are perfect, and we are all bouncing from error to error as we move forward in our careers. That’s medicine.

___

There do not seem to be many articles on how to deal with the discomfort of giving negative feedback, but I did find these two in the BMJ and the Hospitalist (US).

 

The place of blame

The importance of a ‘no blame’ culture in the NHS has become axiomatic. It is accepted that the chain of learning that connects adverse healthcare events to improvements in safety is fatally interrupted when incidents are not reported. If people fear blame and censure, they will not report. Berwick focussed on this in his 2013 post-Francis report, ‘A promise to learn – a commitment to act: Improving the Safety of Patients in England’. It says,

Patient safety problems exist throughout the NHS as with every other health care system in the world.

NHS staff are not to blame – in the vast majority of cases it is the systems, procedures, conditions, environment and constraints they face that lead to patient safety problems.

Fear is toxic to both safety and improvement.

 and recommends,

To address these issues the system must: 

– Recognise with clarity and courage the need for wide systemic change.

– Abandon blame as a tool and trust the goodwill and good intentions of the staff.

As a doctor who is involved in mortality and morbidity meetings, I often think about the role of blame in learning lessons. At my level medicine consists of numerous interactions between just two or three kinds of people: patients and doctors/nurses. If a mistake is reported an assessment is undertaken as to whether that mistake was due to a fault in the system (eg. poor process, unclear guidelines, bad IT, poor labelling) or an error restricted to the healthcare worker. The latter might include mistakes due to a lack of knowledge, or due to a lack of concentration. But what if the lack of concentration is the result of unintelligent rota design, or distraction due to an over-burdened system? The spectrum of potential accountability is wide, but whenever an error is identified it is necessary to see at what point on that spectrum the underlying cause lies. Or the blame.

Example: a junior doctor gives a patient an antibiotic that she is allergic to, the doctor having failed to recognise that the trade name (let’s say Tazocin) disguises the fact that it contains penicillin. The patient has an anaphylactic reaction and spends a week on intensive care.

The focus of accountability could reasonably fall on one of several points. It could fall on the doctor for not being aware that the antibiotic contained penicillin, or for not checking that the patient was allergic. For not bloody thinking, his exasperated supervising consultant might say to herself, immediately succumbing to the emotional retort that is ‘blame’. Or could it be the doctor’s educators, who have not emphasised that fact in his training? Or the drug firm, for releasing a medication which does not make its crucial ingredient plain? Or the Trust, for not being responsive to the fact that this doctor is routinely over-pressurised at night and making decisions in a hurried way? Or the Department of Health for capping central funding, or David Cameron for supporting a policy of austerity… or mortgage lenders in America for contributing to the 2007 financial crisis.

This begins to sound absurd, but the point I’m trying to make is that the chain of blame could be a long one. And when you make a mistake, it is natural to look up and around for mitigating circumstances.

Now imagine that the junior doctor is brought into his educational supervisor’s office. It is explained that the patient came to harm because the doctor prescribed the drug to which the patient was known to be allergic. It’s my fault, is how the doctor will feel. But the educational supervisor will be quick to soften the criticism by explaining that there will be a review of systems, more nurse education so that injections are not actually given when a drug chart says ‘penicillin allergy’, and the Trust will arrange some extra pharmacology teaching for House Officers. Use of the misleading trade name will be banned. The system has learned. It’s not your fault.

Immediately the sense of blame lifts from the shoulders of the junior doctor and it becomes clear to him that it is not just his problem. Should that doctor walk out of the room with no sense of blame? Well, I can recall most of the mistakes that I have made in my career, and the intense sense of blame and guilt that accompanied them – whether it was mismanaging gentamicin and causing renal failure, missing a cord compression, or making a late diagnosis on data that I should have interpreted correctly. It is blame, the sense of personal responsibility, that nagged at my mind and made sure I never made the same mistake again. For this reason I think individual blame does have a role. And I am not alone. A National Patient Safety Agency presentation from 2004 includes these slides:

[Slides from the NPSA presentation: a ‘just culture’ model, and a graph of the distribution of error types]

 

At least 90% of error can be attributed to system problems or ‘honest’ errors, while only a small percentage are deemed ‘culpable’. This data is largely derived from the aviation industry, where many parallels with healthcare have been identified. But even this presents problems. The ‘honest error’ is still an error. Even though it is honest (ie. not intentional or negligent, an error that would quite probably have been committed by a peer in the same combination of circumstances), there is still something to be learned by the individual.

The psychologist James Reason describes person and system approaches. The former attributes unsafe acts to,

aberrant mental processes such as forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness.

 Whereas a system approach accepts that,

 humans are fallible and errors are to be expected, even in the best organisations. Errors are seen as consequences rather than causes, having their origins not so much in the perversity of human nature as in “upstream” systemic factors.

 

For those of us dealing with error on a day to day basis, an approach that tackles individual blame while paying heed to system-wide lessons must be taken. For reporting to be encouraged, blame must not be apportioned in public… for that is where shame develops, and its lethal consequence – inhibition. But in private, as we look at near misses and significant errors, we will sometimes accept that it really was a silly thing to do. And we will not hide our concern, nor conceal our disappointment if the error appears to indicate a worrying gap in knowledge, method or attitude. If this is the case a way must be found to lighten the sense of personal blame by looking up at systemic factors (if present), but without allaying personal discomfort entirely. In this way we (for it will happen to all of us at some point) will remain alert, and perhaps even a little paranoid, when we enter a similar clinical scenario in the future.

 

oOo
