March 5, 2013

Performance indicators – The numbers game

Writing in SHP in 1996 Neil Budworth discussed the most commonly used indicators of health and safety performance, in an article that became his most referenced work ever.1 Seventeen years on, in the first of a series of three articles, he evaluates the strengths and weaknesses of lagging indicators and considers how they should be used and presented to give the most accurate picture.

Indicators of performance are a critical element of safety management. The effective measurement of performance can help target precious resources, and a single performance graph can make or break health and safety interventions.

As a starting point, consider a simple question: if I rolled two sixes with some dice in a single throw, what total would I be likely to achieve if I threw the same dice ten times?

Of course, it is impossible to say, because you simply don’t have enough information to make a reliable prediction. For example, you do not know how many dice I threw, how many sides each die had, whether the dice were loaded, or what numbers were actually written on them. Sometimes, in life, when thinking about a problem we consider the outcome and then make certain assumptions; we don’t necessarily consider fully the process that leads to the outcome.

To some extent this is the position we have been in when we have measured safety performance. For years we have been guilty of looking at outcomes of safety management and have not always paid sufficient attention to the underlying processes.

To mix metaphors further, there are many organisations that have been trying to move forward with their eyes fixed firmly on the rear-view mirror!

These days, with the increased emphasis on systematic management of safety, the use of safety management systems, and the focus on process-safety risks, the need for reliable performance indicators is greater than ever.

Many of the indicators used in the past were very effective when looking at a particular dimension of safety but, with more formalised systems demanding better data, process safety demanding more specific indicators, and occupational health demanding indicators of its own, the traditional indicators can fall short.

Firstly, to be clear, there is no single, unambiguous measure of safety and health performance. An organisation’s safety performance can be measured in myriad ways – for example, Zohar argues that it can be correlated with the status of the organisation’s safety advisor, as evidenced by the size of the company car provided.2

This may seem trivial, but it is based on fact: the size of the company car is a direct indicator of where that person sits in the managerial hierarchy and hence gives an indication of the level of importance that the senior management gives to safety. However, this would not be an indicator on which it is useful to report regularly, nor one that helps direct the attention of the organisation to where effort is needed.

Performance indicators are now often categorised as either ‘leading’ or ‘lagging’ indicators. The former measure activity or processes and do not require an incident to have occurred – for example, safety audit scores; the latter measure the volume or frequency of incidents and essentially count events – for example, accident statistics.

Accident data

The single most common indicator used in health and safety today is accident data. Safety programmes are made or broken by subsequent accident figures. But the question is: exactly how good are they as an indicator of performance? 

The case for

The arguments for the use of accident data are, on the face of it, relatively strong. Such data are relatively easy to collect – reportable accidents, in particular, are difficult to suppress. The nature of accidents is such that the reporting rate can be verified by cross-checking the accident reports with the first-aid box stocks, or through anonymised surveys.

Accident data are also widely published in the standardised form of frequency and incidence rates, and can be used to benchmark the performance of the organisation against that of similar organisations within the same sector. This benchmark data can prove very useful when trying to convince senior management to invest in safety. Accident data are so obviously linked with safety performance that people can easily grasp the concept.
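
For illustration only, the sketch below shows how raw accident counts are commonly standardised into frequency and incidence rates for benchmarking. The figures and the multipliers (per million hours worked, per 100,000 employees) are assumptions used for the example; individual reporting schemes define their own conventions.

    # Illustrative sketch only: common conventions for standardising accident data
    # so that organisations of different sizes can be benchmarked. The multipliers
    # are widely used but vary between schemes, so treat them as assumptions.

    def frequency_rate(injuries, hours_worked, per_hours=1_000_000):
        """Injuries per standard number of hours worked."""
        return injuries / hours_worked * per_hours

    def incidence_rate(injuries, average_employees, per_employees=100_000):
        """Injuries per standard number of employees."""
        return injuries / average_employees * per_employees

    # Hypothetical site: 4 reportable injuries, 250 employees working ~1,800 hours each
    print(frequency_rate(4, 250 * 1_800))   # ~8.9 injuries per million hours worked
    print(incidence_rate(4, 250))           # 1,600 injuries per 100,000 employees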

Accident reports and statistics also play an important role in communicating and managing safety because they can be used to drive actions. They can, for example, be used by the safety committee on site to focus their efforts, or to communicate with the site team about a particular issue, or solicit suggestions for improvement. Accident reports can be used as a focus for discussion and to gain safety representatives’ commitment by encouraging them to investigate the accidents.

Trends in the data, or the details of particular accidents, can form the basis of a communication campaign.

Most importantly, accident statistics can be used to identify trends. Before the Clapham rail accident in 1988, for example, there had been several wrong-side signal failures that indicated the potential for a crash3 (a wrong-side failure is one in which the signalling fails in an unsafe state, i.e. the signal displays green when it should have been red). If these failures had been acted upon then the crash, in which 35 people died, might not have occurred. Unfortunately, reports of major accidents are littered with unheeded warning signs.

The case against

Let’s be clear about one thing: accident statistics are a measure of failure, not success. They tell us what went wrong on this particular occasion and not what we have been doing right. At the very least this can be demoralising, as the individuals concerned could be criticised and not praised. Where else in industry is a negative measure used so frequently? 

In most organisations there are, thankfully, relatively few accidents, but this means the figures are subject to random fluctuations and thus are not reliable. In quality management, variations in product quality are analysed and control limits are set to distinguish ‘common-cause’ from ‘special-cause’ variation. Common-cause variations are the random fluctuations inherent in any process, and special-cause variations are those where the chance of the variation occurring randomly is minute.

As there are relatively few accidents there is often no way of distinguishing between normal random fluctuations and special causes, so an effective safety programme could be scuppered by random fluctuations that have little, or nothing to do with the original programme.
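
As a rough illustration of the quality-management analogy, the sketch below applies a simple Poisson (‘c-chart’) control-limit calculation to invented monthly accident counts. The data and the choice of chart are assumptions for the example, not the author’s, but they show how wide the limits become when accidents are rare.

    # Illustrative sketch only: a simple c-chart (Poisson) control-limit calculation
    # applied to hypothetical monthly accident counts, as one way of separating
    # common-cause from special-cause variation.
    import math

    monthly_counts = [2, 0, 3, 1, 4, 1, 0, 2, 3, 1, 2, 1]    # invented site data
    mean = sum(monthly_counts) / len(monthly_counts)          # centre line
    ucl = mean + 3 * math.sqrt(mean)                          # upper control limit
    lcl = max(0.0, mean - 3 * math.sqrt(mean))                # lower control limit

    print(f"mean {mean:.2f}, UCL {ucl:.2f}, LCL {lcl:.2f}")
    # With a mean of ~1.7 accidents a month the upper limit is ~5.5, so a jump
    # from 1 accident to 4 sits well within common-cause variation and says
    # nothing, on its own, about the safety programme.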

Accident statistics also tend to show the results of decisions made some time ago. A poor decision can be made and have no immediate effect on safety. Some time down the line, however, a piece of equipment, or management system, is subjected to a more demanding test and the equipment, or system fails.

A real-life example illustrates this point well. Some time ago, a chemical plant, after much discussion, decided to install a particular design of isolator in its electrical supply. This system operated well for five years until, without warning, the isolators started to explode. In this case, it was found to be as a result of faulty manufacture. So, today’s accident figures were affected by a decision that was made five years previously.

There is also a time delay in judging the effectiveness of new measures. Taken together with the random fluctuations described above, this means a perfectly good safety initiative could be abandoned because of chance variation and poor decisions made some time ago.

Accident data are also difficult to use to motivate or incentivise managers, because the latter feel they have no control over the accidents that happen in their area, or to their staff, and resent having accident targets in their appraisal. The earlier example highlights why this is the case; a manager will feel it unfair if they are marked down in their appraisal as a result of a poor decision several years before by a previous manager.

A major failing with accident statistics is that they do not include cases of gradually developing occupational diseases. Accident figures concentrate on acute events, so only part of the health and safety performance of a company is being measured. Any improvements from an occupational-health initiative will not be measured if accident statistics are the only measure used.

This can be a particular problem for companies that have had a well-established system for a number of years. Further investment in traditional safety measures may bring only limited benefit, and it may be that the most cost-effective way of improving health and safety performance is to concentrate on the longer-term health aspects.

Also problematic is the use of accident data to assess the future risk of high-consequence, low-probability events, because an accident rate based on data from lost-time accidents is not a good predictor of these events. The report into the Texas City refinery explosion,4 among others, clearly makes this point. (See also David Towlson and Hasan Alardi’s article on pp44-46.)

Recently, commentators have been suggesting that rather than one accident triangle there are several overlapping triangles with different accident precursors at their base. If this is the case – and most experienced practitioners now believe that it is – then to avoid serious, fatal and process-safety accidents you must identify those accidents and near hits that could have resulted in a serious outcome and investigate them as though they were a serious or fatal accident.

Simply put, it is unlikely that focusing on the root cause of a slip or trip will eliminate a major-process accident (although, in very rare cases, it could). For each significant type of accident its precursors need to be identified and monitored vigorously. More attention needs to be paid to the potential rather than actual severity of the accident, and a high degree of focus should be directed at those precursors with a high accident potential.

These days, accident statistics are often used to measure the injury severity, not necessarily the potential seriousness of the accident. Strictly speaking, accident statistics do not even do this; Booth and Amis state that time off work does not correlate well with true injury severity.5 The lost-time accident rate may vary according to geography, the industrial-relations climate, and morale of the employees, as all of these affect how much time employees are willing to take off following an accident.

Nor will the reporting rate of accidents on any two sites be the same. Injuries may be under or over-reported, and the reported accident rate may vary as a result of subtle differences in reporting criteria. With programmes based on accident rates, bear in mind that prizes for accident-free periods, or significant senior-manager bonuses related to accident rate can, if not carefully managed, result in the suppression of accident data.

Finally, accident statistics are a poor basis on which to assess the effectiveness of awareness-raising exercises. Staff become more aware of safety and so tend to report more accidents. Hence, the awareness-raising campaign may be entirely successful and the accident rate may rise.

As an indicator, accident data are a vital tool, but the practitioner would do well to remember the failings as well as the strengths of this indicator and to use other indicators in addition.

Near-miss data

The case for

Near-miss data count accidents that haven’t happened. Bird and Germain6 (and Heinrich before them) have shown that near-misses are far more frequent than accidents. They are also upstream of accidents and, as such, do not require someone to be hurt before preventative action is taken.

These are two very positive points in favour of the use of near-miss data. Near-misses are also good predictors of future accidents. Consider the Clapham train crash again: the accident report noted that a total of 15 wrong-side failures, which were a result of inadequate testing, had occurred in the Southern region from 1985 to 1987. Such a failure was the cause of the Clapham crash. If the causes of near-misses are identified and rectified then, effectively, a cause of a potential accident is removed.

Although accidents are largely preventable the outcome of an accident is a matter of chance – for example, if a brick falls from a scaffold then there are four possible outcomes:

  • The brick falls – there is no one, or no equipment around and the incident causes no alarm;
  • The brick falls and narrowly misses an employee, or a piece of equipment (a near-miss);
  • The brick falls and causes a minor injury to an employee, or damages property (a minor accident);
  • The brick falls and severely injures an employee, or causes substantial property damage (a major accident).

So, the same accident cause can give rise to four different outcomes. Eliminating the root cause of the near-miss (which will occur more frequently than the minor or major accident) will, in turn, eliminate the root cause of the minor and major accidents, without the resulting losses being incurred.

Safety committees can also use near-miss data to focus their discussions. This could be in the form of investigations, or analysis, with the ultimate aim of getting the committee more involved in the management of safety.

The case against

Near-miss data are difficult to collect. Staff are reluctant enough to report accidents, so getting them to report accidents that didn’t actually happen is very difficult; people feel foolish, and there is often a blame culture within the organisation (i.e. the person is solely blamed for their accident or near-accident). Experience has shown that near-miss reporting works well for some time and then falls into disuse because of these factors.

A common problem with near-miss data is that even if all near-misses were reported, it is unlikely that there would be enough data to produce statistically valid control limits. And, as with accidents, the data can be extremely variable. This is simply a matter of ratios: there may be more near-misses than accidents, but unless the site is very large it is unlikely to produce enough near-miss reports to produce statistically valid data.
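
To put the ratio argument in rough numbers, the sketch below assumes a small site with two lost-time accidents a year and a 10:1 near-miss ratio; both figures are hypothetical, not taken from the article, but they illustrate why even complete near-miss reporting rarely yields tight control limits.

    # Illustrative sketch only: even with an assumed 10:1 ratio of near-misses to
    # accidents, a small site generates too few reports for narrow control limits.
    import math

    accidents_per_year = 2                  # hypothetical small site
    near_miss_ratio = 10                    # assumed ratio, for illustration only
    near_misses_per_month = accidents_per_year * near_miss_ratio / 12

    ucl = near_misses_per_month + 3 * math.sqrt(near_misses_per_month)
    print(f"expected {near_misses_per_month:.1f} near-misses/month, UCL {ucl:.1f}")
    # Roughly 1.7 expected reports a month still gives an upper limit of ~5.5,
    # so month-to-month swings of two or three times the average are pure chance.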

Furthermore, because no accident has actually happened, getting site management to appreciate the importance of the event can be difficult.

As with programmes based on accident statistics, those based on number of near-misses reported, if not carefully handled, can mean large volumes of irrelevant or trivial near-misses are raised, which then hide more significant issues. If large volumes are being reported, an assessment of the maximum potential severity is vital to sort the wheat from the chaff.

Accident costs

The case for

Both accidents and near-misses are important indicators and can underpin a good health and safety management system. In themselves, however, they do not always motivate companies to action – although they should.

But the effect on the bottom line for some organisations can be motivational. For this reason, accident costs have, from time to time, been used as a way of monitoring safety performance. In the UK, the Health and Safety Executive has undertaken in-depth studies on the costs of accidents and provided detailed guidance on how to determine meaningful accident costs.7 IOSH, too, provides free information on the costs of accidents and the business case for health and safety.8

Accident costs can be a very powerful persuader and can help translate safety into a language that senior management can understand. They are especially powerful if they are related to the number of extra sales, or turnover that must be generated to cover the costs.
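
As a simple illustration of that framing, the short sketch below converts a hypothetical annual accident cost into the extra turnover needed to recover it at an assumed profit margin; neither figure comes from the article.

    # Illustrative sketch only: translating accident costs into the extra turnover
    # needed to recover them. All figures are hypothetical assumptions.

    accident_costs = 50_000        # assumed annual uninsured accident costs (GBP)
    profit_margin = 0.05           # assumed 5% net profit margin

    extra_turnover = accident_costs / profit_margin
    print(f"Extra sales needed to cover losses: £{extra_turnover:,.0f}")
    # At a 5% margin, £50,000 of accident losses must be offset by £1,000,000
    # of additional sales – the framing the article describes as persuasive.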

Expressing safety in terms of accident costs can also facilitate the integration of safety into a company’s everyday management systems and, presented periodically, such figures can emphasise the ongoing cost of not being safe.

The case against

Unfortunately, accident costs are difficult and time-consuming to collect accurately. Many of the costs involved are extremely difficult to quantify – for example, the loss of morale, or the impact of adverse publicity, are almost impossible to measure. Even the costs of investigation and first aid can sometimes be difficult to assess. Average costs have been used in the past but, because they aren’t specific, managers often do not perceive them as relevant to their individual situation.

In many ways, collecting data on accident costs is similar to collecting near-miss data. The most common type of accident and therefore the greatest area of financial loss involves property damage. It is extremely difficult to get property-damage accidents reported on a consistent basis owing to the fear of reprimand. Even if the company operates a no-blame policy, if the same individual has caused a similar type of incident several times, even the most lenient manager will start to express their concerns.

Accident costs are also a negative indicator, i.e. they directly indicate the current level of failure. And they are similar to accident statistics in that they are difficult to use in appraisals; the manager concerned will see them as outside of their control in the same way they see accidents.

A potentially very worrying point is that accident costs do not indicate how much the current accident-prevention programme has saved, i.e. the costs that would have been incurred had safety programmes not been in place. The danger is that senior management, not seeing the whole picture, compare the current accident costs with the costs of the safety department and conclude that they are overfunding safety.

A continual focus on costs alone will start to devalue the moral argument for health and safety management, making it a pure risk-management exercise, balancing the potential costs of litigation against the costs of a safety improvement package, with no regard to the moral duties involved.

Finally, the difficulty in obtaining detailed accident costs makes them a very difficult measure to sustain. This, coupled with the potential for random variation in accident numbers and the long timescales involved in accident causation, can produce misleading figures, making accident costs a double-edged sword.

Summary

Lagging indicators are a vital tool in the armoury of the safety and health practitioner, but two things are clear: firstly, all lagging indicators have strengths and significant weaknesses; and secondly, lagging indicators in themselves are not enough. They need to be coupled with leading indicators to help steer the business effectively. So, are lagging indicators still relevant? Absolutely, but they must be used as part of a balanced suite of indicators.

References
1 Budworth, N (1996): ‘Indicators of performance in safety management’, in SHP Vol.14 No.11, pp23-29
2 Zohar, D (1980): ‘Safety Climate in Industrial Organizations: Theoretical and Applied Implications’, in J App Psych, 65, 96-102
3 Hidden, A (1989): Investigation into the Clapham Junction Railway Accident, Department of Transport, HMSO, London
4 CSB (2007): Investigation Report: Refinery Explosion and Fire, BP Texas City, Texas, March 23, 2005 – US Chemical Safety and Hazard Investigation Board Report 2005-04-1-TX, pp184-186 – www.csb.gov/assets/document/CSBFinalReportBP.pdf
5 Amis, RH & Booth, RT (1992): ‘Monitoring health and safety management’, in SHP Vol.10 No.2, pp43-46
6 Bird, FE & Germain, GL (1986): Practical Loss Control Leadership, International Loss Control Institute, Loganville, Georgia: Institute Publishing
7 Health and Safety Executive (1993): The Costs of Accidents at Work, HS(G)96, HMSO, London
8 IOSH Li£e Savings Campaign – www.iosh.co.uk/about_us/what_were_up_to/campaigns/life_savings.aspx

Neil Budworth is European EHS director for Houghton Global.

1 Comment
Bob
11 years ago

Good article, reminded me of an incident we had.

In 2006, we knocked a whole wall panel (2 x 3 x 450mm) into a street from a railway viaduct, a drop of 4.5m.

No one hit, no vehicle hit, no equipment damage, but regardless, a serious near miss.

And the resultant investigation raised serious issues of failure across the board.

Shifting an obstruction (bridge protection beam) that was not ours, with plant that was borrowed, with no SSoW determined, all in haste to progress the work?
