February 2018

‘Stuff happens’: patient safety incidents and 2nd victims

Bad things happen in medicine. Sometimes, as doctors or nurses, the things we do, or the things we do not think of doing, cause harm. How we respond to those incidents determines the direction our careers follow. If the response is catastrophic, and the puncture in our confidence or self-esteem proves irreparable, we may drop out entirely. This article explores the idea of the ‘2nd victim’, that is, the health care worker (HCW) involved in events that result in harm to patients. It is based on a Grand Round lecture I delivered at Frimley Park Hospital.

***


‘Stuff happens,’ as Donald Rumsfeld sanguinely commented when questioned about the looting that took place in the fall of Baghdad. ‘Stuff happens, and it’s untidy.’ He certainly felt no personal responsibility for the adverse consequences of a military decision that he had been involved in. The looting was a kind of ‘complication’.

That is not the typical response among healthcare workers. Dan Walter, in his book Collateral Damage, describes a terrible complication suffered by his wife, and the panic that he perceived in the young doctor who was involved. A novel cardiac ablation catheter was incorrectly deployed by a trainee, resulting in its spiral end becoming entwined in the chordae tendineae of the mitral valve. When the catheter was eventually removed, bits of heart valve tissue could be seen hanging off it. She developed cardiogenic shock, and had to have an emergency mitral valve repair. Dan Walter approached his wife’s cubicle:

A vivid picture there of the 2nd victim – although many would say that the author, the spouse, is the true 2nd victim here, and the doctor the 3rd. For now though, I’ll stick with ‘2nd’.

The impact on HCWs has been studied. Scott et al described common symptoms, both early and late, in the table below: first the physical, then the psychological. In some there is avoidance of particular patients, and chronic uncertainty.

The authors then identified several phases in the natural history of psychological response:

At the end, the HCW ‘thrives’: that is, they learn, improve, and possibly use their experiences to help others in similar situations. Others carry on, still feeling the harm and perhaps avoiding certain situations, while a third group drops out. The injury to their confidence is too deep.

As a student and trainee I saw how those around me reacted in these situations. I vividly remember the bloodless expression of the house officer whose patient became comatose after receiving a duplicated prescription of insulin; the SHO who gave Tazocin to someone who was penicillin-allergic; the registrar who accidentally inserted a central line into the carotid artery. Having convinced myself that I had prescribed IV salbutamol at 10x the usual dose at three in the morning, I made up the term ‘Gut Thump’. This describes the adrenaline-driven, panic-soaked reaction that comes minutes after the event.

Much later, after a complication, I charted my own psychological journey starting from the moment I received the CT scan report showing the damage, and I drew it on a graph. Many may recognise this line. The time it takes to reach equilibrium will vary, depending on the sense of culpability and on the patient’s outcome. In this case the patient was absolutely fine, but there was a period during which this was not guaranteed. The road to equilibrium involves communicating, receiving reassurance, and doing stuff to make it better. The patient’s understanding also helps.

Van Gerven et al, surveying 913 healthcare workers who had been involved in a patient safety incident, and using the Impact of Event Scale, found,

‘…higher psychological impact is related with the use of a more active coping and planning coping strategy, and is unrelated to support seeking coping strategies. Rendered support and a support culture reduce psychological impact, whereas a blame culture increases psychological impact.’

This appears to correlate the intensity of the response with the pro-activity shown by the HCW in dealing with the incident, which is interesting. I would have assumed that the HCW who just let things be might feel less of an impact. This might indicate a link between conscientiousness and psychological injury.

A qualitative analysis of interviews with 21 staff by Ullström et al, ‘Suffering in silence: a qualitative study of second victims of adverse events’, found that non-judgmental support from peers was vital. One interviewee said,

I really want to highlight how important that support is (…) without it, I don’t know where I would have been now (…), if I would have ever dared to come back and work as a nurse again. (Interviewee No 14, Profession: Nurse, Type of adverse event: Wrong medication dose)

While another spoke about the reluctance of doctors to seek external help when they are not recovering,

I think there is an inner resistance towards getting external help. At least, among doctors [the idea is] “I can handle this” (…) but I think that really we should have much more general support. In difficult situations overall. Not only after adverse events. (Interviewee No 18, Profession: Doctor, Type of adverse event: Operation went wrong)


*


We are encouraged to be open about our mistakes, and our leaders in the profession have set us an example. The booklet ‘Medical Error’ (published by the National Patient Safety Agency) contained vignettes from the careers of, among others, the then GMC President and the then President of the Royal College of Physicians.

Error, then, happens to the best of us.

Yet we cannot accept our role in these errors with equanimity. It takes something out of us. This is normal human behaviour, surely – regret, guilt. We are now required to express these feelings to those who have been harmed, under the Duty of Candour, which became law in March 2015. Following a series of healthcare scandals, the Francis report described a culture of obfuscation; this was followed by Don Berwick’s A Promise to Learn…, which enlarged on the idea of transparency, and finally by specific recommendations from the Royal College of Surgeons that preceded Regulation 20.

So now, while handling our response as 2nd victims, we must take ourselves to the person we have harmed and apologise. This might compound the emotional challenge of the situation, or it may in fact accelerate resolution. It is amazing how a patient’s forgiveness can set an anxious doctor back on track.

It is worth dwelling on how to handle Duty of Candour conversations. I have heard and used various verbal formulations, which to the outside observer might seem surprising or evasive… for how hard can it be to say sorry? But… what are you sorry for? Are you sorry you did it? Are you sorry ‘we’ did it, i.e. the team, the department, the hospital? Or are you sorry it happened, in an impersonal way, the same way you felt sorry when you heard on the news that someone got run over last weekend? Which sorry? And while finding your way through the post-incident psychological reaction, do you have the emotional strength to handle the expression of sorrow, whatever form it takes? It is quite possible that a natural feeling of vulnerability and defensiveness will influence the words that are chosen, and make the conversation less candid than intended. On the other hand, perhaps, as in the figure below, those who accept a degree of personal culpability and are affected by it are more likely to demonstrate candour than the flint-skinned individual who regards adverse outcomes as inevitable complications over which only fate can exert influence.

On the subject of defensiveness, it is impossible to discuss medical error or patient safety incidents without referring to the legal situation. We know, following the recent trials of both Dr Hadiza Bawa-Garba (which occurred after I gave this lecture) and Mr David Sellu, that doctors are not immune to prosecution following ‘omission’ harm events. These names are likely to weigh heavily in the minds of doctors who become involved in safety incidents, and are likely to amplify the feelings of panic and ‘chaos’ described in the Scott paper.

This article has focussed on the healthcare workers. The response of the true 2nd victims – sons, daughters, mothers, partners – has been overlooked, but that subject would require an article of its own, and I am probably not best placed to write it. However, the Duty of Candour has, in my opinion, brought the two spheres of psychological response closer together. The (primarily physically) injured or suffering patient is now more likely to meet the (psychologically) traumatised doctor. The shared experience, and insights into the stresses experienced, may actually improve understanding. But the resources required of doctors and nurses to deal with their own regret and self-criticism, while simultaneously approaching patients or relatives, should not be underestimated.

###


Blog compilation books, on sale:

Motives, emotions and memory: exploring how doctors think

Spoken / unspoken: hidden mechanics of the doctor-patient relationship

A face to meet the faces

A hand in the river

Why did that man receive CPR? An inquiry



Accountability, blame and medical error after Bawa-Garba


The reaction to the Dr Bawa-Garba case has shown that the medical community finds it hard to accept that individuals can be held personally accountable for underperformance (once we exclude malice, drunkenness or other gross examples). Rather, the argument goes, deficiencies in the healthcare system surrounding the individual should be identified and corrected. Don Berwick was very clear in his report ‘A promise to learn – a commitment to act: Improving the Safety of Patients in England’,

NHS staff are not to blame – in the vast majority of cases it is the systems, procedures, conditions, environment and constraints they face that lead to patient safety problems.


Anyone involved in investigating patient safety incidents will recognise that preceding each one there is usually a system-based issue to be found. For instance: under-staffing or surges in demand, confusing protocols, similarly packaged drugs, or the allowance of distraction. At the extreme of de-individualisation – if that is even a word – the system can also be held responsible for placing an underperforming or inexperienced doctor in front of patients in the first place. It can also be blamed for failing to identify and support an individual when they enter a situation that allows their deficiency in knowledge, pattern recognition, or prioritisation to manifest as harm. Etc., etc., ad absurdum. So, does personal accountability for under-performance (in good faith) exist at all?

Smoking gun

It is well established in the patient safety literature, and in the modern philosophy of healthcare, that personal accountability, AKA ‘blame’, inhibits system-wide improvement in safety. Fear of blame dissuades healthcare staff from reporting errors, thus allowing the same mistake to be made again in the future. Fear exists: the Kirkup review into failings in Liverpool Community Health NHS Trust describes how those involved in clinical incidents were brought in for questioning:


‘In practice they were “an interrogation and a frightening experience”. Staff reported feeling physically sick beforehand and approached them with trepidation. Across the organisation shouting and finger-pointing became the norm.’ (Richard Vize, Guardian)

An article by Bell et al in the journal Chest describes how a missed lung cancer diagnosis can be attributed to multiple failures in the system, but the ‘smoking gun’ is to be found in the hand of the pulmonologist who last had contact with the patient. A classic case. The authors conclude that the pulmonologist, who did carry some responsibility, is absolved by his or her active engagement in fixing the system such that the same error cannot happen twice. This is a message we all must take away: our accountability lies in the duty to work constantly on improving the safety of our systems for patients yet to enter our hospitals – through audit, reporting, and being open.

Speaking at the Global Patient Safety Summit in 2016, the Secretary of State for Health, Jeremy Hunt, aligned himself closely with this philosophy:

‘…to blame failures in care on doctors and nurses trying to do their best is to miss the point that bad mistakes can be made by good people. What is often overlooked is proper study of the environment and systems in which mistakes happen and to understand what went wrong and encouragement to spread any lessons learned. Accountability to future patients as well as to the person sitting in front of you.’

Yet Dr Bawa-Garba’s fate has shown that despite all these words and aspirations, personal accountability for particularly poor performance still exists. Is this justified?


Individual accountability within a Just Culture

Philip Boysen, an American anesthesiologist, wrote about how to develop a ‘just culture’ in healthcare, drawing from various industries and organisations, some historical. He acknowledged that blame may still have a role within a just culture:

‘While encouraging personnel to report mistakes, identify the potential for error, and even stop work in acute situations, a just culture cannot be a blame-free enterprise.’

Boysen refers to a paper, ‘The path to safe and reliable healthcare’ by Leonard and Frankel, which presents a spectrum of behaviours associated with safety incidents, including ‘reckless’, ‘risky’ and purely ‘unintentional’ error. These result in responses ranging from ‘discipline’, through ‘retraining’ and participation in teaching others, to, at the very least, involvement in the investigation.

The UK’s Sign up to Safety campaign, which promotes difficult but necessary conversations as a way of exploring safety issues, breaks down personal accountability along just these lines:

The Boysen paper also refers to an (older) NHS algorithm that poses a ‘substitution’ test after medical error: ‘Would another provider put in the same circumstances in the same systems environment make the same error?’

These are all efforts to unpick and define the place of personal accountability. It seems clear that it does exist, but that censure or ‘discipline’ should come late, and only if a mistake is made while failing to adhere to policies or, worse, while being reckless.

What is safe anyway? 

How do we define a safe environment? Addressing factors that permit greater potential for error, such as poor staffing, fatigue and unreliable IT, is clearly vital, but we are not yet agreed on what ‘safe’ looks like. Staffing ratios are a start, but do not necessarily take into account fluctuations in demand, or the effect that one highly complex patient might have on a service. However safe we make the environment, however rigorously we modify the ergonomics to take into account the variables arising from human factors, patients still rely on individual doctors to make the right decisions at the right time. The environment will not protect patients from misdiagnosis or knowledge gaps; or, in the case of Dr Bawa-Garba, what some have called ‘cognitive failure’.

In recent weeks many NHS workers have been reassured by their trusts that unsafe environments should be called out, and that they are encouraged to speak up. The GMC published a flow chart to help people decide how to raise concerns. Yet we all know that in the immediate term, on a Saturday night when you are two colleagues down because the planned locum fell through and the ward F2 has rung in with the ’flu, extra resources are unlikely to arrive. How do we apportion individual accountability here? Is it true that whatever happens on this night, the doctors should not be blamed? Will their errors, should they make them, and however odd they might appear from the outside, be overlooked because the doctors were too pressed? Does personal accountability for under-performance completely evaporate in sub-optimal conditions?

Intrinsic accountability: the map of experience

Although the backlash against Dr Bawa-Garba’s Gross Negligence Manslaughter judgment (clearly excessive, in most people’s minds) has suggested that there is no place for blame when things go wrong in substandard systems, we should remember that even in well-provided Trusts with working computers, risk lurks, ready to strike, and those of us who are standing by when it does so will be asked to explain what happened. Being asked to explain feels like blame. That is because we, as doctors, naturally feel responsible. That is our baseline moral state: responsible, slightly fearful (especially in the early years), anxious to make the correct decision. We feel guilty when things turn out badly. We generate our own sense of accountability, and subsequently we may experience weeks of self-examination. Sometimes we need to be reassured by older hands that it is not our fault. Otherwise we will burn out in the slow flame of self-doubt and fearfulness.

As I have observed before, there is a place for this sense of blame. It sharpens the senses and opens the psyche to deeper lessons. The mistakes for which we accept a degree of responsibility leave indelible marks, which over a career coalesce to form a map of hard-won experience, the better to help us navigate the tricky situations to come. A well-known consultant in my field said, on a training day, ‘An expert is someone who has made every silly mistake possible.’ Yes, but none of them twice. The same probably goes for Dr Peter Wilmshurst, a cardiologist and well-known whistle-blower who has referred himself to the GMC for a career of errors. This act makes the point: medical careers teem with error. We become good through error. But if we blame our errors wholly on the systems around us, we will not lay down the ink that makes that map. It may be an unpopular view, but I think part of being a doctor is learning how to receive those stinging tattoos.

Explore more books on my author page here