This was first posted on Safety Differently.
My grandfather was a Military Policeman in the Royal Air Force during the Second World War. My father was a Royal Marine. I remember as a child seeing a clipping of an old article in a national newspaper with a large photograph of him accompanying a story about the Royal Marines' deployment in the Middle East. This made me very proud, but it never made me want to join up. My brother thought differently and he went on to serve in the Royal Navy for almost 20 years.
In thinking about why the military life was not for me, one thing has always stood out – I don’t really like being told what to do. This, I’m sure, would have been generally seen as a career handicap in that environment, so I believe I made the right choice. I understand the need for obedience in a live conflict, where questioning orders may be life-threatening, particularly as one rarely has all the facts to hand to make an informed choice. Yet there are examples of commanding officers who gave wrong orders with disastrous results – General Custer, for example, or pretty much everyone involved with trench warfare in WWI. On balance, the military (at least the better-trained ones) probably get it right more often than not and, as Taleb points out in Antifragile, those approaches that have proven to be well-founded over many years of stress-testing should be respected, even if open to challenge.
Many organisations also use rules and compliance as their approach to safety management. But in the workplace where risks are, generally, less clear and obvious and where a loss of life is never an acceptable outcome, is an environment of rules and compliance appropriate or necessary? In an era where businesses are more open than ever; where employee engagement requires buy-in and understanding of corporate strategy; and where teams with a greater degree of control over their work are shown to perform better, why does safety largely remain in a hierarchical rules-based world?
Fundamentally, rules are about control. If we can control what is happening, we can determine outcomes. This is usually well-meaning, but just as no battle plan survives its first encounter with the enemy, no plan can fully anticipate every potential outcome in an ever-changing work environment, particularly once we throw human variability into the mix.
Rules attempt to remove this variability, but in doing so can also remove innovation, entrepreneurialism and responsiveness. The effect is to lobotomise the organisation. When an unusual situation occurs, we no longer have the capacity to respond unilaterally. Many major accident investigations point out opportunities to have prevented the situation from escalating, had the people involved had either the risk awareness to identify it (Deepwater Horizon) or felt confident enough to act without someone else’s authority (Piper Alpha). Both mindsets are quashed by compliance cultures.
Chief among the compliance requirements are the so-called golden rules, life-saving rules or some similar variant. Again, well-meaning, these are typically based on those activities most likely to have caused fatal accidents. However, it is overly simplistic to believe people will stop doing something life-threatening because there is a rule in place. If the threat to life wasn’t enough to prevent it, is a threat of dismissal going to?
Like many bureaucratic systems over time, rules become more important than that which they were intended to protect. I have seen someone punished for non-compliance with a seatbelt rule while reversing a vehicle at very slow speed. Conversely, I have seen use of a live ignition source in a flammable atmosphere deemed not to be an offence because the rule in place related to smoking in designated areas, not to use of the lighter. If the outcome is completely at odds with the risk, something is broken.
People respond better to being cared for than being told what to do. A fatal risk programme that identifies those major risks and provides information to help workers make better risk-informed decisions will contain essentially the same information as a set of rules. But much more buy-in is achieved from a message that says, “We care about your wellbeing – please remember these principles, it could save your life” rather than “Don’t do this or we’ll fire you.”
The wording, tone and symbols used dramatically change the impact of the message. An example where this is used to great effect can be found in the safety principles and safety habits posters here [Note – this page has now gone so I’ve removed the link. The company took standard life saving rules and essentially re-cast them as critical risk reminders, offering support to help people, providing a much more caring perspective].
So do we need rules at all? Clearly it is unreasonable to expect workers to make every single decision based on a detailed risk assessment of the circumstances at hand. In these instances rules can be helpful to provide a rapid solution. Yet there are other occasions where existence of the rule implies safety right up to the point where the rule is breached, when this may not be the case if there are additional risk factors involved.
As in the military example, rules are most beneficial when not all the information is known (or even knowable) and people cannot make risk-informed decisions. This is typically the case in complex systems with high hazard potential where a quality decision can only be reached by careful consideration of all factors by a multi-disciplined team pooling their knowledge. In a nuclear waste facility dealing with plutonium contaminated material, for example, there is a safe upper limit for the surface density of an array of stored material. It is not possible for a process operator to make risk-informed real-time decisions about the structure of the array, but (in combination with other controls elsewhere) a simple rule can be established about the number of containers allowed in a stack.
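The stacking rule in the example above can be thought of as a complex safety analysis distilled into one checkable number. A minimal sketch of that idea, in Python – the limit value, names and container logic here are entirely hypothetical, invented for illustration rather than drawn from any real facility's safety case:

```python
# A hard rule encodes, as a single limit, the outcome of an offline
# analysis (surface density, criticality margins, other controls) that
# no operator could perform in real time.
# NOTE: the limit below is an assumed, illustrative figure.
MAX_CONTAINERS_PER_STACK = 4

def may_add_container(current_stack_height: int) -> bool:
    """Return True if adding one more container keeps the stack within the rule."""
    return current_stack_height + 1 <= MAX_CONTAINERS_PER_STACK

# The operator never evaluates the underlying physics; the rule is the
# interface between the multi-disciplined analysis and the shop floor.
print(may_add_container(3))  # within the limit
print(may_add_container(4))  # would exceed the limit
```

The design point the paragraph makes is visible in the sketch: all the expertise lives in choosing the limit, while compliance at the point of work is deliberately trivial.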
Limiting rules to certain circumstances where risk is higher has the benefit of emphasising their importance and so making compliance more likely. They must also be quite specific. The broader the rule – always wear a seatbelt – the more likely it is to be seen as inappropriate in some circumstances and therefore optional. After all, most of us have broken the speed limit because we know it isn’t realistic in all situations and there are times when it can be broken without high likelihood of accident (with apologies to all traffic police).
Professor Andrew Hopkins has stated (Working Paper 72 – National Research Centre for OSH Regulation) that the control pendulum has swung too far towards risk management and needs to swing back to more rule-compliance, arguing that operationally workers need the simplicity provided by rules. He does, however, recognise the need for balance between the two. As ever, there is no black and white, right or wrong in safety. How do we find the balance in the grey zone?
- Be careful of phrasing. Couch requirements in terms of supporting safe action, not in the language of absolutes and threats;
- Impose rules only where risk is high, to emphasise their importance;
- Impose rules for specific, usually complex, situations where local decision-making is difficult;
- Use the rules to build a framework within which workers are given the licence to use their core skills to change, adapt and improve;
- When the framework becomes challenged or changed, involve the workers in consideration of the implications.
Rules are the power tools of safety. They are labour saving devices that do most of the work, but have their own significant risks if mishandled and fail when it comes to the precision needed for a fine finish. For that we need to overlay the hand tools of carefully applied risk management. It’s slower, it takes more focus and more expertise but it can achieve that final few percent of improvement.