Looking out for patient safety is surprisingly new. It wasn’t until the end of the 1990s that we fully appreciated how important it is to scrutinise how we work and make improvements when shortfalls are identified. After some horrific failings were identified in the NHS, the landscape was changed and we now all consider clinical governance to be vital.
Sir Liam Donaldson was the Chief Medical Officer for the NHS at the time of the Bristol Royal Infirmary inquiry. It became apparent that a frightening number of paediatric deaths in that hospital's cardiac surgery department were the result of institutional failings. The conclusions of the inquiry were devastating, and they highlighted real changes that needed to be made across many aspects of healthcare. Donaldson used the term 'clinical governance' in 1998, and it has since been adapted and used in many health systems. The Australian Department of Health describes it as:
‘…a systematic approach to maintaining and improving the quality of patient care within a clinical care setting, health program or health system.’
This is a way of saying that we need to keep making sure we are doing right by our patients.
There are seven 'pillars' of clinical governance, and they interlink to deliver good patient outcomes.
Together, they contribute to three main ideas.
That seems quite sensible, but it’s terrifying that the idea of systems being in place to ensure good quality care has only arisen in the last 25 years.
Risk management in healthcare has historically relied on a 'safety 1' approach. This is a method adapted from the airline and nuclear power industries, whereby negative incidents (radiation leaks, plane crashes and so on) are analysed to find system problems that can be fixed. It relies on two main ideas: that the system is decomposable (it can be broken down into its parts) and that it is bimodal (right or wrong, working or not). Together these form the 'causality credo' and allow you to find and fix a problem. Largely because of this approach, the airline industry has become incredibly safe and we are all happy to fly.
In health, the safety 1 approach relies on incident reporting and, for significant events such as an unexpected death, a root cause analysis (RCA). In this process senior clinicians interview everyone involved, review the care, and try to find a system issue that, once fixed, will prevent the bad outcome from happening again. But this relies on the assumptions that you can decompose the event and that the outcome was simply right or wrong; it presumes that there was a root cause. In a complex socio-technical system like a hospital this is often very difficult. Interactions are numerous, often brief, and their influence on an outcome is hard to quantify. Anyone who has been involved in an RCA knows that it is impossible not to feel some blame, or not to worry that you were at fault. This can be horrible.
In 2015, Hollnagel, Wears and Braithwaite wrote a white paper, taken up by the NHS, entitled 'From Safety-I to Safety-II: A White Paper'. It flipped risk management in health on its head, changing the attitude from 'ensure that as few things as possible go wrong' to 'ensure that as many things as possible go right'. The basis for this is that, by far, the biggest proportion of what we do goes well, with good patient outcomes. Not only is there a lot we can learn from what goes right, it is also an opportunity to remind each other that we are nearly always successful and excellent.
A safety 2 mentality also questions the retrospective analysis of incidents, because it is fraught with issues: imperfect recollection of events, the complexity of the systems involved, and the limited usefulness of any recommendations that arise from the report.
If we accept that most of what we do is excellent, then we need to reinforce that. Put simply, we should identify what goes well and make sure it goes well again. On a day-to-day basis, that is as easy as appreciating our colleagues and letting them know that their work is good.
But we can also analyse good outcomes and see where things went right despite the challenges involved. That ability to be flexible in the face of variability is a cornerstone of providing good healthcare, and we can analyse how it was achieved. The diagram below demonstrates that there is a lot to be learned from the challenging but successful cases. This large group is a great source of data for systemic improvements.
Systems can also be changed or established to ensure good outcomes next time. This is a different method from changing systems that we think don't work and cause bad outcomes (the safety 1 approach). We can find these data all around us: patient compliments, team interactions (resuscitations, clinics, operative lists, etc.), achieved key performance indicators (KPIs), and observed interactions. The list is as long as the great stuff we achieve on a daily basis.
From this great stuff, we can also look at how often we achieve perfect or good outcomes easily, and then understand why we do. This can be analysed and shared. For example, an audit can show that we're doing really well, and demonstrating this to the team (along with what makes the care for that condition good) is very empowering and reinforcing. Pride in the positives is very powerful.
One of the most significant side effects of a safety 2 culture is kindness. By recognising each other as providers of excellent healthcare, we reinforce the good and make sure that we want to come back for more. Safety 1 has its place, and without the analysis of the negative outcomes at Bristol in 1995 we wouldn't have a clinical governance culture at all. But we are missing a huge opportunity to improve patient outcomes by neglecting safety 2 (and it's very rewarding to see the smiles).
Programs for recognising positives are being introduced across the world. The Learning from Excellence group in the UK is one initiative that many are learning from, and its website is a great source of ideas: https://learningfromexcellence.com/.
So when you're asked to take part in an RCA, have a complaint made against you to the HCCC (Health Care Complaints Commission), or are the subject of an IIMS (Incident Information Management System) report, etc. –