The reaction to the Dr Bawa-Garba case has shown that the medical community finds it hard to accept that individuals can be held personally accountable for underperformance (once we exclude malice, drunkenness or other gross examples). Rather, deficiencies in the healthcare system surrounding the individual should be identified and corrected. Don Berwick was very clear in his report ‘A promise to learn – a commitment to act: Improving the Safety of Patients in England’:
NHS staff are not to blame – in the vast majority of cases it is the systems, procedures, conditions, environment and constraints they face that lead to patient safety problems.
Anyone involved in investigating patient safety incidents will recognise that preceding each one there is usually a system-based issue to be found. For instance: under-staffing or surges in demand, confusing protocols, similarly packaged drugs, or allowance of distraction. At the extreme of de-individualisation – if that is even a word – the system can also be held responsible for placing an underperforming or inexperienced doctor in front of patients in the first place. It can also be blamed for failing to identify and support an individual when they enter a situation that allows their deficiency in knowledge, pattern recognition, or prioritisation to manifest as harm. Etc., etc., ad absurdum. So, does personal accountability for underperformance (in good faith) exist at all?
It is well established in the patient safety literature, and in the modern philosophy of healthcare, that personal accountability, AKA ‘blame’, inhibits system-wide improvement in safety. Fear of blame dissuades healthcare staff from reporting errors, thus allowing the same mistake to be made again in the future. Fear exists: the Kirkup review into failings in Liverpool Community Health NHS Trust describes how those involved in clinical incidents were brought in for questioning:
‘In practice they were “an interrogation and a frightening experience”. Staff reported feeling physically sick beforehand and approached them with trepidation. Across the organisation shouting and finger-pointing became the norm.’ (Richard Vise, Guardian)
An article by Bell et al in the journal Chest describes how a missed lung cancer diagnosis can be attributed to multiple failures in the system, but the ‘smoking gun’ is to be found in the hand of the pulmonologist who last had contact with the patient. A classic case. The authors conclude that the pulmonologist, who did carry some responsibility, is absolved by his or her active engagement in fixing the system such that the same error cannot happen twice. This is a message we all must take away: our accountability lies in the duty to work constantly on improving the safety of our systems for patients yet to enter our hospitals – through audit, reporting, and being open.
Speaking at the Global Patient Safety Summit in 2016, the Secretary of State for Health, Jeremy Hunt, aligned himself closely with this philosophy:
‘…to blame failures in care on doctors and nurses trying to do their best is to miss the point that bad mistakes can be made by good people. What is often overlooked is proper study of the environment and systems in which mistakes happen and to understand what went wrong and encouragement to spread any lessons learned. Accountability to future patients as well as to the person sitting in front of you.’
Yet Dr Bawa-Garba’s fate has shown that despite all these words and aspirations, personal accountability for particularly poor performance still exists. Is this justified?
Individual accountability within a Just Culture
Philip Boysen, an American anesthesiologist, wrote about how to develop a ‘just culture’ in healthcare, drawing on various industries and organisations, some historical. He acknowledged that blame may still have a role within a just culture:
‘While encouraging personnel to report mistakes, identify the potential for error, and even stop work in acute situations, a just culture cannot be a blame-free enterprise.’
Boysen refers to a paper, ‘The path to safe and reliable healthcare’ by Leonard and Frankel, which presents a spectrum of behaviours associated with safety incidents, ranging from ‘reckless’ through ‘risky’ to purely ‘unintentional’ error. Depending on where the behaviour falls, the consequences range from ‘discipline’, through ‘retraining’ or participation in teaching others, to, at the very least, involvement in the investigation.
The UK’s Sign up to Safety campaign, which promotes difficult but necessary conversations as a way of exploring safety issues, breaks down personal accountability along just these lines.
The Boysen paper also refers to an (older) NHS algorithm that poses a ‘substitution’ test after medical error: ‘Would another provider, put in the same circumstances in the same systems environment, make the same error?’
These are all efforts to unpick and define the place of personal accountability. It seems clear that it does exist, but that censure or ‘discipline’ should come late, and only if you make a mistake while failing to adhere to policies or, worse, while being reckless.
What is safe anyway?
How do we define a safe environment? Addressing factors that permit greater potential for error, such as poor staffing, fatigue and unreliable IT, is clearly vital, but we are not agreed, yet, on what ‘safe’ looks like. Staffing ratios are a start, but do not necessarily take into account fluctuations in demand, or the effect that one highly complex patient might have on a service. However safe we make the environment, however rigorously we modify the ergonomics to take into account the variables arising from human factors, patients still rely on individual doctors to make the right decisions at the right time. The environment will not protect patients from misdiagnosis or knowledge gaps; or, in the case of Dr Bawa-Garba, what has been called by some, ‘cognitive failure’.
In recent weeks many NHS workers have been reassured by their trusts that unsafe environments should be called out, and that they are encouraged to speak up. The GMC published a flow chart to help people decide how to raise concerns. Yet we all know that in the immediate term, on a Saturday night when you are two colleagues down because the planned locum fell through and the ward F2 has rung in with the ‘flu, extra resources are unlikely to arrive. How do we apportion individual accountability here? Is it true that whatever happens on this night, the doctors should not be blamed? Will their errors, should they make them, and however odd they might appear from the outside, be overlooked because they were too pressed? Does personal accountability for underperformance completely evaporate in sub-optimal conditions?
Intrinsic accountability: the map of experience
Although the backlash against Dr Bawa-Garba’s Gross Negligence Manslaughter conviction (clearly excessive in most people’s minds) has suggested that there is no place for blame when things go wrong in substandard systems, we should remember that even in well-provided Trusts with working computers, risk lurks, ready to strike, and those of us who are standing by when it does so will be asked to explain what happened. Being asked to explain feels like blame. That is because we, as doctors, naturally feel responsible. That is our baseline moral state: responsible, slightly fearful (especially in the early years), anxious to make the correct decision. We feel guilty when things turn out badly. We generate our own sense of accountability, and subsequently we may experience weeks of self-examination. Sometimes, we need to be reassured by older hands that it is not our fault. Otherwise we will burn out in the slow flame of self-doubt and fearfulness.
As I have observed before, there is a place for this sense of blame. It sharpens the senses and opens the psyche to deeper lessons. The mistakes for which we accept a degree of responsibility leave indelible marks, which over a career coalesce to form a map of hard-won experience, the better to help us navigate the tricky situations to come. A well-known consultant in my field said, on a training day, ‘An expert is someone who has made every silly mistake possible.’ Yes, but none of them twice. The same probably goes for Dr Peter Wilmshurst, a cardiologist and well-known whistle-blower who has referred himself to the GMC for a career of errors. This act makes the point: medical careers teem with error. We become good through error. But if we blame our errors wholly on the systems around us, we will not lay down the ink that makes that map. It may be an unpopular view, but I think part of being a doctor is learning how to receive those stinging tattoos.