What does the term ‘patient experience’ mean to the average doctor? Why does this matter? It matters because patients’ subjective assessments of how they were treated are a pillar of overall quality in healthcare, alongside safety and clinical effectiveness. You could argue that the patient experience pillar is stouter than the other two, for at the end of the day how patients feel and function after their journey through the NHS is the ultimate marker of safety and effectiveness. A surgeon who values effectiveness and safety over experience might, for example, perform the most amazing cancer operation without any complication, and feel very proud of herself, while the debilitated patient returns home never to set foot outside again or enjoy what mattered most to him. (A caveat: doctors who prioritised short-term patient experience above all else probably contributed to the US opioid crisis – see this article in Patient Engagement Hit.)
Patient experience is also an untapped source of clues and signals that may point towards important organisational and cultural ills. If the unsatisfactory experience of patients and families in Mid Staffs, Gosport or Morecambe Bay had been gathered, made visible and acted upon, changes might have been made much earlier. In his letter to the Secretary of State for Health on 5th January 2013, Francis put the following at number 3 in his list of causes: ‘Standards and methods of measuring compliance which did not focus on the effect of a service on patients.’ – i.e. they weren’t looking at patient experience.
There is now a well-established industry around the acquisition and presentation of patient feedback. We are all aware of the Friends and Family Test. Healthwatch actively canvasses local populations on their priorities and concerns. The National Cancer Patient Experience Survey has produced comprehensive reports at Trust level since 2010. Patient Reported Outcome Measures (PROMs) are routinely collected in relation to orthopaedic procedures. Within Trusts, in-patient and out-patient NHS services have paper-based or electronic feedback arrangements, so that departments can review dashboards on a regular basis. At meetings, the under-performing areas show up in red, leading to conversations about how to make them better.
Then there are the patient comments – the rich data. This is where the value lies – in people who take the trouble to articulate their concerns (or to record how impressed they were). As much time is spent looking at individual comments as at a dashboard’s headlines, despite the far higher numbers involved in generating the dashboards. This is interesting, because doctors hate anecdotes as a driver of change (while secretly enjoying them as vignettes).
Here lies the contradiction. As humans and occasional patients ourselves, we know that the potential learning to be derived from the story of one dissatisfied patient may be far more important than marks out of ten given by the many in a questionnaire on an iPad. One person’s difficult journey through an unsympathetic bureaucracy or un-listening clinical service may be much more deserving of scrutiny than the other 500 who whizzed through without a hitch. This sounds right – but to the doctor wedded to a respect for evidence and statistical significance, there may be some doubts.
Imagine a doctor who is named in a complaint (it has happened to all of us). He or she is mentioned by a family who are upset that their loved one appeared to have been neglected or mis-diagnosed during their final months. The doctor sits back and thinks… what does this say about me and the service that I represent?
After the initial sense of disquiet, he begins to rationalise. No… I’m not having this. I do a good job. My service does a good job. For every one of these cases, we treat another 500 without any problems, without any complaints. This is a one off. This is not representative. I’m not going to change what we do here in response to this one case. Oh, I remember that family now… they were pretty demanding at the time, yes, I remember, on the ward, the long conversations… hard work…
Thus, the rich feedback is consigned to a skewed anecdote. There is no change.
The tendency to diminish the significance of an individual’s feedback is reflected in the findings of a study referenced in a BMJ Quality & Safety article, A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. Although the main review finds positive associations between the gathering of patient experience reports and the overall quality of healthcare, the cited article (Can patient safety be measured by surveys of patient experiences?) says,
‘…only 2% of patient-reported errors were classified by medical reviewers as ‘real clinical medical errors’ with most ‘reclassified’ by clinicians as ‘misunderstandings’ or ‘behaviour or communication problems’’
Was this what happened in the Trusts now caught up in the aftermath of many years of inadequate care? Was each complaint or piece of feedback felt not to indicate ‘real’ medical issues, just ‘misunderstandings’?
The Nuffield Trust’s interesting publication, Francis report: One Year On, explores the issue of combining ‘hard’ and ‘soft’ data – soft data being anecdote and opinion, arising from staff or patients. One interviewee says,
[T]hat to me was one of the important things, to be sure that we really were picking up issues, triangulating wherever possible, three soft issues equals probably a hard issue. But if you only get one soft and you don’t hear about the other two then you don’t necessarily do something about it.
The metrics will tell you so much but what you’re picking up on the ground or people are telling you in the pub, where is the opportunity to bring that into the open…
Imagine a chronically failing department with safety issues. There is a trickle of complaints relating to one particular aspect of the service. The odd comment is made on public forums (such as Care Opinion). What would it take for the department to accept that there is something fundamentally wrong? The clinicians’, maybe the managers’, first instinct might be to reach for nationally accepted markers of efficacy and safety. They may look at the SHMI (summary hospital-level mortality indicator), at the number of SUIs (serious untoward incidents) and RCAs (root cause analyses). They’re all fine. Come on, we’re well below the thresholds that cause concern. What’s the problem? Unrealistic patient expectation, probably.
If we agree that patient experience is valuable, and that important themes may coalesce within the morass of ‘anecdote’, we must also agree on how to notice and extract those warning signs. We don’t seem to be there yet. I spent a bit of time perusing The Patient Experience Library recently, my attention having been drawn by a BMJ article by Miles Sibley. It appears to be a start, as a repository of sources, but it does not attempt to lift out trends. That job must go to local governance groups, and maybe Healthwatch. The Care Quality Commission (CQC) will surely have a role here, as they invite members of the public to contact them with concerns directly, and have initiated inspections based on these. A feedback loop involving patient experience and the inspectorate appears to exist; but at the level below complaints, that is, in the mixed swirl of general opinion, good and bad, the mechanisms by which true issues can rise to the surface are less clear.
As a clinician, I have been reminded by this brief journey into patient experience that only those with the strongest opinions will record them. I am not convinced that if ten separate families express the same opinion about a specific area in the NHS, that area will be scrutinised. Most people are too busy living, or maybe grieving, to take the trouble to engage and emphasise those opinions. Those who do must have something important to say. For now, clinicians and managers need to look around regularly and ask themselves – could it happen here?