Behavioural safety – Tales of the unexpected

30 May 2012
Mei-Li Lin offers an overview of what organisations should consider in terms of managing unexpected events that have the potential to lead to catastrophic outcomes, with a particular emphasis on the importance of the human and organisational-culture elements in risk identification and control.
In recent years, health and safety practitioners have increasingly been questioning the validity of the Heinrich Triangle,1 which claims that reducing unsafe acts and unsafe conditions will eventually lead to a reduction in severe injuries. The fundamental question is: if the Triangle is correct, why hasn’t the decline in injury rates been reflected in the fatality rate?
(According to HSE statistics, over the last five years the major-injury incidence rate has gone down from 113.5 per 100,000 workers to 99, while the fatality rate has seen a much smaller decrease, from 0.8 to 0.5.)2
The value of the Heinrich Triangle lies in its role in advocating a proactive approach to injury prevention. It brings our attention to unsafe conditions and behaviours (at the base of the Triangle) and urges us to learn from them before they lead to injuries.
These learning opportunities remain valid but a fundamental problem with the Triangle is that it does not discern risk potential, i.e. it doesn’t differentiate between hazards that have higher risk potential (those that are more likely to result in severe injuries) and those that do not. Anderson and Denkl3 illustrated that hazards that are more likely to cause a fatality, such as fire and explosion, produce fewer minor incidents.
This finding sheds some light on why a reduction in minor incidents has a proportionately smaller impact on fatalities or catastrophic events. Conversely, since those major hazards cause fewer minor incidents, there are relatively fewer cases for investigation and learning. As a result, limited systemic prevention efforts have been developed to address them.
Anderson and Denkl’s finding also highlights the importance of expanding our thinking to risk-based loss prevention. In dealing with concerns about low-frequency/high-severity events, instead of challenging the Heinrich Triangle a more constructive question to ask is: “How will a risk-based approach require us to make fundamental changes to the ways we identify risks, the way we train our workforce, and the way we design, operate and manage?”
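The contrast between counting incidents and weighing risk potential can be sketched as a toy prioritisation model. All hazard names, incident frequencies and severity weights below are invented purely for illustration; they are not drawn from this article or from any real dataset:

```python
# Hypothetical hazard register. A Heinrich-style view ranks hazards by how
# many minor incidents they generate; a risk-based view weights frequency
# by severity, so a rare but catastrophic hazard can rise to the top.
hazards = {
    "slips and trips":    {"minor_incidents_per_year": 120, "severity_weight": 1},
    "manual handling":    {"minor_incidents_per_year": 60,  "severity_weight": 2},
    "fire and explosion": {"minor_incidents_per_year": 2,   "severity_weight": 500},
}

def rank_by_frequency(data):
    """Frequency-based view: prioritise whatever produces the most minor incidents."""
    return sorted(data, key=lambda h: data[h]["minor_incidents_per_year"], reverse=True)

def rank_by_risk(data):
    """Risk-based view: prioritise by frequency multiplied by severity."""
    return sorted(
        data,
        key=lambda h: data[h]["minor_incidents_per_year"] * data[h]["severity_weight"],
        reverse=True,
    )
```

Under the frequency ranking, slips and trips come first; under the risk ranking, fire and explosion do, even though they produce the fewest minor incidents. This is the shift in emphasis a risk-based approach demands.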
A complex web
An incident often involves a complex web of causal factors; arguably, the more severe the incident, the bigger and more complex the web. For decades, we have heard at conferences and read in articles that a majority of industrial accidents are caused by human errors. This may explain why some incident-investigation efforts stop when human errors – the outermost layer of the complex causal web – are found.4 This thinking may also have guided most of the injury-prevention interventions to focus more on observable human behaviours and less on factors that lie underneath and hidden within business decisions, operating systems and processes.
Yet, lessons from major incidents urge us to re-examine this assumption. We need not look far. The Deepwater Horizon accident investigation panel reported: “[The investigation] team did not identify any single action or inaction that caused this incident. Rather, a complex and interlinked series of mechanical failures, human judgements, engineering design, operational implementation and team interfaces came together to allow the initiation and escalation of the accident.”5
The investigation into the Challenger space-shuttle tragedy in 1986 also revealed a complex causal web that “shifts our attention from individual causal explanations to the structure of power – factors that are difficult to identify and untangle yet have great impact on decision-making in organisations”.6
So, it is time to take another look at the fundamental concept of the Triangle and expand our thinking from an “accident” triangle to a “risk” triangle (see Figure 1) and explore the three types of risks hidden at the bottom of the triangle, i.e. behaviour risks, design and process risks, and culture and decision-making risks.
Behaviour risks are relatively easy to observe: lack of attention during training, tolerance of occasional shortcuts, inconsistency in housekeeping, rush to act, etc. Behaviour risks provide good references to the deeper design and process risks, as well as culture and decision risks. However, safety audits and behaviour-based observation programmes alone do not address the root causes of behaviour risks. A more holistic approach, which provides inner drive to behave safely, must also be applied if behaviour change is to be sustained.7
Design and process risks are less observable, and most exist long before an incident occurs.8 They involve interactions with equipment designs and processes, such as confusing control signals or displays, awkward automation, outdated maintenance specifications, or a design configuration that hinders proper use of personal protective equipment.
These types of risks may stem from inappropriate machinery design and poor change management.9 They frequently result in unsafe conditions, such as improper isolation of energy sources, ignored warning signals, or undermined machinery integrity – all of which have the potential to result in severe injuries.
Critical and complex operations often merit more sophisticated and tailored risk identification strategies that engage the operators, as well as maintenance crews and contractors. A survey of BP’s Texas City refinery revealed that 42 per cent of maintenance/craft technicians and 39 per cent of contractors felt that the training they had received did not provide them with a clear understanding of the process risk.10
Culture and decision risks are hidden deep inside the management system and organisational culture. They are systemic risks and typically related to social processes. They involve interactions with the people element – the biggest variable in safety and performance equations. They include risks such as delayed decisions, productivity-driven maintenance schedules, informal chains of command, fear of retribution, misalignment between sub-cultures, conflicting organisational and social expectations, and compromised norms of risk tolerance.
Compared with design and process risks, culture and decision risks are even less predictable and observable. Although these risks are often “felt”, they are unlikely to be identified through a structured risk assessment method. Instead, they often require gauging workers’ perceptions and understanding through surveys, focus groups, and learning from grass-roots leaders. Unfortunately, since this approach is less controllable and may cause embarrassment to management, only the most committed and responsible leaders are open to exploring culture and decision risks.
Understanding and identifying the different types of risk is just one part of the equation; controlling them is obviously the other. Leaving aside – for the purpose of brevity in this article – design and process risks, risk-reduction efforts benefit hugely from effective integration of the ‘people’ element, as does the creation of a learning and ‘mindful’ culture.11
Consider the well-publicised case of US Airways Flight 1549, in which Captain Sullenberger safely landed the failing plane on the Hudson River in New York on 15 January 2009. In the defining 208 seconds from engine failure to the time the plane made a water landing, what saved the 155 lives on board was the strong operating discipline among the flight crew, the captain’s risk-sensing and decision-making ability, and an enabling culture that allowed a life-saving decision to be executed amid high risk, competing commands, and the complexity of the situation.
Arguably, given a different set of crew, or a captain with lesser decision-making capacity, the outcome could have been dramatically different. Consider, for example, what might have happened in the following scenarios:
- A flight attendant who, instead of trusting the captain and calmly helping the passengers, was panicking and causing disturbance, thus further fuelling the fear;
- A co-pilot who, instead of being aligned with the captain’s decision and fully cooperating and focusing on assisting him, entered into a debate and resisted working in tandem with the captain;
- A captain who, instead of being able to focus on sensing the situation in its entirety and analysing and anticipating outcomes, was indecisive and had to rely on the command from the control tower, which lacked full awareness of the conditions;
- An aviation culture which, instead of respecting expertise and giving full authority to the ‘front-line’ decision-maker who has access to the immediate situation, demanded a centralised control mechanism.
Operating dexterity (see Figure 2) characterises people’s – and, in turn, organisations’ – speed and agility in responding to changes or unexpected events. As operating systems become more complex and the pace of change rapidly increases, there is less tolerance for errors and inefficiency. Operating dexterity is therefore a critical attribute of sustainable operation. Studies have shown that most severe injuries occur when changes are involved, non-routine work is performed, or when normal operation becomes abnormal.4,9 These are the very situations in which pre-defined rules and procedures may fall short.
Built on the foundation of sound leadership, management systems, and processes, operating dexterity can be achieved by investing in people, so they are:
- sensitive to weak signals and mindful of potential risks;12
- able to make timely and accurate decisions;
- supported by an enabling and empowering culture; and
- equipped with the highest level of operating discipline.
Sharpening risk sensitivity
Risk sensitivity is particularly critical in dealing with hidden risks, i.e. those that are more likely to lead to fatalities and catastrophic events. Risk sensitivity consists of two basic elements: the ability to sense weak signals and the ability to discern risk potential. It takes persistent effort, great technical expertise, and a strong inner sense of responsibility to anticipate risks. It is human nature to become habituated to our surroundings and experiences. As a result, we are not adapted to detect latent errors, or creeping risks.8 Active mental practices are therefore needed to help us break out of a confined mode of ‘filtering’.
To anticipate risks, an organisation may convene a multi-disciplinary team to conduct a structured, in-depth, scenario-based risk assessment to discover risks hidden deep inside design and processes and to examine how those risks interact with possible changes to the system. The organisation may also consider inviting a third party to conduct focus groups and interviews to identify culture and decision risks. But risk sensing requires constant effort. Regardless of which approach is taken, the fundamental layer of defence is the individual’s ability and determination to engage their mind to detect weak signals.
The shaded area in Figure 1c illustrates the hidden risks that have higher potential to severely impact human lives, our environment, and organisations themselves. Gradual deviations from proper procedures, overdue inspections, and cumbersome management-of-change practices, for example, may appear to be ‘ordinary’ system or operational inefficiency. But, if major hazards, such as high-energy sources or toxic chemicals, are involved, individuals with the ability to discern risk potential will deal with those signals with a much greater sense of urgency.
An individual’s risk sensitivity can be profiled in several dimensions:
- mindfulness of system complexity;
- sense of vulnerability and urgency;
- sensitivity to operations;12 and
- process ownership and responsibility.
It is a worthy investment for any organisation, especially one in a high-risk industry, to keep a handle on the risk-sensitivity level of its people.
Enhancing decision-making capability
As technologies and systems become more complex, an individual’s role as a decision-maker becomes even more significant. Traditional training tends to be instructive in nature. It focuses on building knowledge and skills on the ‘right way to do things’. Building decision-making capability requires a dynamic and blended-learning strategy that also discusses ‘why’ things are done in certain ways, ‘what’ are the alternatives and the possible outcomes, and the pros and cons of different actions.
For example, in addition to knowledge-based training, an organisation may consider adopting an experiential learning methodology that includes interactive scenario-based simulations, as well as on-the-job coaching. Many engineering-oriented organisations capture lessons learned by revising their operating policies and standards. For the purpose of learning, though, decision-driven case studies are more effective in engaging minds, triggering creative thinking, and building learners’ decision-making capability.
Building an enabling culture
An enabling culture offers blame-free, tangible support and:
- embraces transparency;
- encourages innovation;
- challenges assumptions;
- invests in learning;
- recognises good practices;
- seeks to optimise performance; and
- instils pride and a sense of belonging.
These characteristics are interdependent. They collectively make up an enabling culture that fuels the inner drive for an individual to stay vigilant. It takes away fear, clears organisational noise, and allows an individual to focus on decision-making and execution.
Conversely, a lack of any of these characteristics undermines the organisation’s ability to protect its people and to compete. In assessing culture, it is important to examine cultural strengths and weaknesses along all these dimensions. Results of such an assessment help establish the need for change, envision and communicate the preferred end state, and develop a strategy for change.13
Building operating disciplines
Strong operating discipline is another important ingredient in combating low-probability/high-consequence events. It reflects the collective effect of leadership, management system, technology and processes, as well as organisational culture and individual attributes. Operating discipline is like the axle that transfers the combined energy of those elements to produce performance outcomes (see Figure 3). Poor operating discipline is a risk in itself and causes inefficiency, waste, leaks, deviations, stress, or serious incidents in the long run.
Strong operating discipline requires personal mastery of processes and procedures. It allows an individual to perform even under stressful conditions. One of the attributes of strong operating discipline is the desire to learn and continuously improve. To prevent low-probability/high-consequence events, an organisation with strong operating discipline sees failures as learning opportunities and uses investigations as processes to discover hidden risks. Strong operating discipline stops the deviations from turning into an acceptable norm.
It is important to keep in mind that operating discipline is not an exercise to find faults but a mindset and approach to perfect system ‘performance’. Indicators of strong operating discipline include:
- visible and ‘felt’ leadership;
- aligned beliefs and goals;
- an engaged workforce;
- open and active lines of communication;
- a just and fair organisation;
- a systemic continuous-improvement mindset and practice;
- adequate resources and competency;
- accessible and up-to-date documentation;
- practices consistent with procedures;
- seamless and effective teamwork; and
- excellent housekeeping and personal safety practices.
In the increasingly complex operating environment, to sustain business vitality and performance excellence, an organisation must be able to manage unexpected events and lessen the threat from those low-frequency/high-consequence events. The four elements discussed above – risk sensitivity, decision-making capability, enabling culture, and operating discipline – are interdependent components that collectively build an organisation’s operating dexterity and allow it to navigate through uncharted territories with great speed and accuracy.
1 Writing in the 1930s, American industrial safety pioneer Herbert William Heinrich held that, in a workplace, for every accident that causes a major injury, there are 29 accidents that cause minor injuries and 300 accidents that cause no injuries. Because many accidents share common root causes, he said, addressing more commonplace accidents that don’t cause injuries can prevent accidents that do.
2 HSE (2011): Annual Statistics Report 2010/2011 – www.hse.gov.uk/statistics/overall/hssh1011.pdf
3 Anderson, M and Denkl, M (2010): ‘The Heinrich Accident Triangle – too simplistic a model for HSE management in the 21st century?’ presentation at the SPE International Conference on Health, Safety, and Environment in Oil and Gas Exploration and Production, Rio de Janeiro, April 2010
4 Dekker, S (2006): The field guide to understanding human error, Ashgate Publishing Company, Burlington, VT
5 BP (2010): Deepwater Horizon incident investigation report (internal)
6 Vaughan, D (1996): The Challenger launch decision – risky technology, culture, and deviance at NASA, The University of Chicago Press Ltd, London
7 DeJoy, DM (2005): ‘Behavior change versus culture change: Divergent approaches to managing workplace safety’, in Safety Science, 43, 105-129
8 Reason, J (1997): Managing the risks of organizational accidents, Ashgate Publishing Company, Burlington, VT
9 Manuele, FA (2008): ‘Serious injuries and fatalities’, in Professional Safety, 12/08, 32-39
10 Baker III, J (2007): The report of the BP U.S. Refineries Independent Safety Review Panel – www.bp.com/bakerpanelreport
11 Marsh, Dr T (2012): ‘Cast no shadow’, in SHP January 2012, Vol.30 No.1 – www.shponline.co.uk/features-content/full/safety-leadership-cast-no-shadow
12 Weick, KE and Sutcliffe, KM (2007): Managing the unexpected – resilient performance in an age of uncertainty, John Wiley & Sons, Inc. San Francisco, CA
13 Fernandez, S and Rainey, H (2006): ‘Managing Successful Organizational Change in the Public Sector’, in Public Administration Review, March-April, 168-176
Mei-Li Lin is director, research and development at DuPont Sustainable Solutions in the US.