Last month, in the first of a three-part series on measuring health and safety performance, Neil Budworth examined the use and misuse of lagging indicators. Here, he focuses on leading indicators, i.e. measures of activity, process, or outcome that do not rely on an incident occurring.
In terms of health and safety management – and much else in life – it is much better to measure and take appropriate action before an incident occurs. Sometimes, however, the correlation between leading indicators and actual on-the-ground performance can be poor. For example, Booth and Amis quote a study on South African mines, where safety-audit scores were good but the accident record was not.1 To be effective, a leading indicator must be closely linked to the actual performance of the business.
As the safety culture of an organisation develops, more and more use should be made of specific leading indicators. Setting specific goals and monitoring the progress towards them has proved itself in general management over many years. The goals chosen need to be clearly linked to the level of safety culture of the organisation, and are limited only by individual ingenuity.
Fortunately, there is very good guidance available on the selection and use of leading indicators.2 Leading indicators that support the development of health and safety:
- are objective and easy to measure and collect;
- are relevant to the organisation, or workgroup whose performance is being measured;
- provide immediate and reliable indication of the level of performance;
- are cost-efficient in terms of equipment, personnel and additional technology required to gather the information; and
- are understood and owned by the organisation, or workgroup whose performance is being measured.
The reason why the indicator was chosen must be clear and it must drive desirable actions. For example, the safety department could choose to count the number of interventions the site manager has made on safety matters in the year, and then set a target of increasing it by 50 per cent in the next year. The important thing is that those goals are relevant and that they will, in the longer term, improve safety performance. The more specific the measures are to the key risks and activities of the area concerned, the better. Some of the more common measures are discussed below.
The Health and Safety Executive states that “all control systems tend to deteriorate over time, or become obsolete as a result of change”.3 The tool for detecting and remedying that deterioration is the safety audit. The audit evaluates whether or not the elements of the system are still appropriate, and if they are still working correctly.
Safety auditing is supported by serious accident investigations – for example, following the King’s Cross London Underground fire in 1987, Fennell said: “[I]f the internal audit has become the yardstick by which financial performance is measured, then the safety audit should become the yardstick by which safety is measured.”4
The safety audit measures safety performance against a pre-set standard before any event has occurred, and allows action to be taken before anyone is injured, or property is damaged. This predictive element is obviously important, as properly conducted safety audits should prevent accidents and ill health from occurring, and measuring against a pre-set standard means any improvement or deterioration can be quantified.
Many proprietary safety audit systems have scoring systems built in so that the amount of deterioration or improvement can be shown numerically. Thus, the scores of a particular department could be integrated into the appraisal system. Many issues raised as a result of an audit are directly under the control of the site management and, more importantly, they are perceived to be within their control, so using safety audit scores in an appraisal is more acceptable than using accident targets, as these can be viewed by managers as outside of their control.
Good audits should check the theory of the management system against the reality of what is happening on the ground. The audit checks to see if the actuality of site operations matches up with the documented procedure.
Audits are a commonly used management tool in other areas of a company’s operation. For example, all companies go through financial audits, and those with quality systems undergo quality audits. Consequently, most companies are very familiar with the audit as a management tool. This can be helpful because the company leadership can see that safety, too, can be managed and that it is not just a matter of luck.
Audits are not perfect, however. If they are performed too frequently, for example, they lose their effectiveness: action items cannot be completed between audits, which can lead to site management becoming resigned to the continuation of problems. Too many audits would result in them being seen as less of a help and more of a hindrance. The fact that the safety audit is only one of a string of audits to which management is exposed compounds this problem.
Because auditors are focused on finding areas that deviate from the ideal, an audit inevitably results in a list of actions for the site to undertake – either procedures that need to be modified, or physical conditions that need to be addressed. These situations can often take a long time to correct and usually result in a negative-sounding final report, which can adversely affect the morale of the on-site team.
Benchmarking is also a problem. While safety audits can show improvement or deterioration at a particular site, they are of limited use for comparing that site's standard with that of other sites. The variety of auditing systems available, and differences in approach between auditors, add to this difficulty.
Furthermore, most proprietary audits tend to focus on the site’s management system, with questions about attitudes only being asked in relation to the commitment of management on the site. This can mean that the most significant cause of accidents – human factors – is not always thoroughly investigated.
Finally, the way in which audit findings are received can have a huge impact on the effectiveness of audit as a tool. If the findings are treated as a genuine learning opportunity and embraced, they can be incredibly useful. If, however, an adverse finding is seen as visible evidence of a corporate failure, which must be challenged or suppressed, then a ‘good news only’ culture will develop and potentially fatal faults will remain unchallenged, effectively rendering the audit useless.
Measuring behaviour is a more recent – and controversial – development in safety management. Broadly speaking, the process seeks to encourage safe behaviour through the use of ‘operant conditioning’. This is where behaviour is strengthened by following it with reinforcement, or by removing a potentially unpleasant effect, or it is diminished by dealing with the behaviour via punishment.
Positive reinforcement is much more effective than punishment in safety management; it shows people the right way to do things, whereas punishment only underlines what they have been doing wrong. The behavioural approach to safety management seeks to involve workers in setting their own safety targets (based on behavioural measurement) and publicly displays their progress towards those targets.
Several studies have shown very encouraging results with this technique, though the details would be too lengthy to discuss in this article.5, 6, 7, 8
There have been (and still are) strident challenges in using behavioural safety techniques, with some opponents branding them as schemes that seek to blame the workforce for failures in safety. To be completely clear, a well-constructed full behavioural safety programme should support, encourage and involve the workforce and should use proven scientific management techniques to identify and eliminate the causes of accidents and ill health. Any behavioural safety programme should focus on identifying the root cause of unsafe behaviour, not on blaming the workforce.
The most obvious factor in favour of the use of behavioural measurement and feedback is that it is a leading indicator (of accidents) and it appears to correlate well with accident performance. The number of ‘at risk’ behaviours that can be observed is large, hence statistically valid limits can be determined and significant trends can be separated from random variations. For example, the way in which a job is performed could be observed every time that job is performed. This would give a very large number of measurements on which to base the statistics of safe and unsafe actions.
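As an illustration of how statistically valid limits can separate significant trends from random variation, the following sketch applies a standard p-chart (attribute control chart) calculation to weekly observation data. The weekly counts and the three-sigma limits are purely hypothetical, and the p-chart is just one common statistical choice, not a method prescribed here.

```python
# Sketch: p-chart control limits for the proportion of safe behaviours
# observed each week. All figures are illustrative.

def p_chart_limits(safe_counts, totals):
    """Return (centre line, per-week 3-sigma lower/upper limits)."""
    p_bar = sum(safe_counts) / sum(totals)  # overall proportion safe
    limits = []
    for n in totals:
        sigma = (p_bar * (1 - p_bar) / n) ** 0.5
        limits.append((max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

# Hypothetical weekly data: safe behaviours observed out of total observations
safe = [180, 175, 168, 150, 182]
total = [200, 200, 200, 200, 200]

centre, limits = p_chart_limits(safe, total)
for week, (s, n, (lo, hi)) in enumerate(zip(safe, total, limits), start=1):
    p = s / n
    flag = "signal" if not (lo <= p <= hi) else "ok"
    print(f"week {week}: p={p:.2f} limits=({lo:.2f}, {hi:.2f}) {flag}")
```

A week whose proportion falls outside its limits (week 4 in this made-up data) marks a change worth investigating, rather than routine random variation.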
The information that is fed back to the employee is, generally speaking, positive because it is a measurement of success and not failure. Positive information can be fed back to employees fairly soon after the event, which, in turn, helps emphasise that the results are under their control.
Positive information can be fed back regularly to stimulate interdepartmental rivalry, without that rivalry creating pressure to suppress accident data. This can sometimes have the additional effect of reversing normal peer pressure, so that the safe way of doing things becomes the accepted social norm.
A behavioural safety programme need not share the weaknesses of accident and near-miss data. If safe and unsafe behaviours are defined carefully, observations can be focused on managerial behaviour, or on behaviour related to critical tasks in process safety.
But there are disadvantages to this approach, also. Firstly, the safe ways of doing things need to be identified and disseminated to staff – a very big exercise, as it means all jobs must be analysed and the safe way of performing them agreed. This requires a significant amount of time and resources, but it should result in an improvement in safety performance, even without the behavioural measurement and feedback, as staff will be and feel more involved in the safety effort.
Observers need to be trained and allowed a fair amount of time to perform enough observations to get good reliable results. This also has obvious cost implications.
Probably the biggest concern with behavioural measurement, however, is the time and commitment it takes from management. Everything hinges on their commitment as, without it, not enough observations can be made and corrective actions will not be taken.
In 1985, a report by the HSE’s Accident Prevention Advisory Unit found that senior management failures resulted in 61 per cent of all accidents, and 47 per cent of those accidents had software (human) causes.9
Consequently, the Executive has long recognised the use of attitude measures in safety management and, together with the Health and Safety Laboratory (HSL), it has developed tools to help organisations measure the safety climate of their organisation.10
Attitude measurements are a leading indicator and should be a good predictor of safety performance (Zohar,11 as well as Cooper and Phillips12 have confirmed this via correlation with traditional safety measures). A safety-attitude questionnaire, if used correctly, should be a useful diagnostic tool that can be applied at site and departmental level.
Attitude measurements can be taken before and after an intervention to determine its effect. Unlike accident data, the climate measure should reveal the effects the intervention has had on attitudes themselves. Attitude measurements also differ from accident data in that they are not as random. In addition, accident data often reflect the situation some time ago and, because reporting levels are often influenced by the level of awareness of the individual, the intervention itself will affect the number of accidents reported. This, in turn, means accident statistics cannot easily be used for measuring the success or failure of a programme aimed at raising the awareness of safety.
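To illustrate the before-and-after comparison, the sketch below computes the mean shift in climate-survey scores and a paired-samples t statistic. The respondents, their 1–5 scores and the choice of a paired t test are all illustrative assumptions, not part of any published climate tool.

```python
# Sketch: comparing mean safety-climate scores before and after an
# intervention using a paired-samples t statistic. Data are hypothetical.
import math

def paired_t(before, after):
    """Return (mean difference, t statistic) for paired scores."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d, mean_d / math.sqrt(var_d / n)

# Hypothetical 1-5 climate scores for ten respondents, before and after
before = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3, 2.7, 3.0, 3.1]
after  = [3.6, 3.1, 3.5, 3.4, 3.3, 3.4, 3.8, 3.0, 3.2, 3.5]

mean_d, t = paired_t(before, after)
print(f"mean improvement: {mean_d:.2f}, t = {t:.2f}")
```

A t value beyond the critical value for the sample size would suggest the shift in stated attitudes is unlikely to be random variation; in practice a validated climate questionnaire and a proper sample would be needed.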
The underlying aim behind most safety interventions is to change behaviour. That change can be measured directly but, if it is to become permanent, the underlying attitude must also change. By measuring stated attitudes directly, it is therefore possible to assess the effectiveness of an intervention across all departments and management layers in a single exercise.
Most importantly, attitude surveys measure the ‘soft’ areas of safety, such as communication and perceived level of management commitment, which are difficult to quantify in any other way. Some of the essential features that predict a successful safety culture, as determined by the CBI,13 are as follows:
- Leadership and commitment of top management, which is demonstrated in a genuine and visible way;
- Acceptance that the safety programme is a long-term strategy that requires sustained effort and interest;
- A policy statement of high expectations that conveys a sense of optimism about what is possible, and which is supported by adequate codes of practice and safety standards;
- Health and safety should be treated as seriously as other corporate aims and be properly resourced;
- Health and safety must be a line-management responsibility;
- ‘Ownership’ of health and safety must permeate all levels of the workforce. Ownership requires employee involvement, training and communication;
- Realistic and achievable targets should be set and performance measured against them;
- Incidents should be fully investigated;
- Consistency of behaviour against agreed standards should be achieved by auditing, and good safety behaviour should be a condition of employment;
- Deficiencies revealed by investigation of an audit should be remedied promptly; and
- Management must receive adequate up-to-date information to be able to assess performance.
Many of these are ‘soft’ issues – revolving around commitment, ownership or communication – all of which are virtually impossible to measure using traditional techniques, so this is where a safety climate measure can have real value.
The major disadvantage with attitude surveys is that attitudes and actions do not always coincide. Consider the following quote from the report into the Clapham train crash, relating to the safety concerns expressed time and time again to the inquiry by the staff involved.14 It was noted that “the remainder of the evidence demonstrated beyond dispute two things: there was total sincerity on the part of all who spoke of safety in this way but, nevertheless, there was a failure to carry those beliefs through from thought to deed.”
If attitudes and behaviour are not linked, this suggests that attitude measures are useless as predictors of safety performance. As outlined above, however, the scores on a properly constructed safety-climate measure have been shown to correlate well with safety performance, where that performance was measured using more traditional techniques. More of a problem, perhaps, is that attitude surveys cannot be performed too frequently, otherwise they lose their effectiveness.
Safety inspections and safety sampling
A safety inspection is a scheduled inspection of premises, or part of same, by personnel within that organisation, possibly accompanied by an external specialist.15 Safety inspections have been a mainstay of safety management for many years and are considered so important that the right to perform them has been enshrined in legislation.16
Safety sampling is a more specific safety inspection technique, where particular items are examined in a relatively short timescale. It has the advantage of being more specific and focused than a normal safety inspection, but shares many of the strengths and weaknesses of safety inspections.
Safety inspections may involve shop-floor staff, and this is extremely important, because it helps give ownership of safety to those who work with the risk. In some organisations, the inspections may involve line managers and/or safety representatives. Safety inspections are also an extremely visible part of safety management; they provide a picture of the actual conditions on the shop floor, and normally result in a list of items requiring attention.
Some organisations have quantified their safety inspections and these scores have then been used in appraisals, or to encourage interdepartmental competitions. Quantification is normally done by splitting the inspection into various subject areas and then rating each area on a scale, where, for example, 1 is poor and 10 is excellent.
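A minimal sketch of such quantification, assuming hypothetical subject areas, departments and ratings, might look like this:

```python
# Sketch: quantifying a safety inspection by rating each subject area
# on a 1-10 scale and aggregating to a department percentage score.
# The areas, departments and ratings below are illustrative.

def inspection_score(ratings):
    """Average the 1-10 area ratings into a single percentage score."""
    if any(not 1 <= r <= 10 for r in ratings.values()):
        raise ValueError("each area must be rated 1-10")
    return 100 * sum(ratings.values()) / (10 * len(ratings))

packing = {"housekeeping": 7, "guarding": 9, "PPE use": 6, "storage": 8}
despatch = {"housekeeping": 5, "guarding": 8, "PPE use": 7, "storage": 6}

for name, ratings in (("packing", packing), ("despatch", despatch)):
    print(f"{name}: {inspection_score(ratings):.0f}%")
```

Scores like these can then be tracked over successive inspections, or compared across departments, in the way the text describes.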
Another strength is that safety inspections can be carried out on a regular basis and feedback given quickly on the state of the workplace. This timely feedback helps underline the fact that safety is under the control of the workforce and that their actions can make a difference.
Properly training the workforce in interpersonal, inspection and questioning techniques can dramatically improve the impact of inspections and surveys. Building rapport and framing questions effectively can transform a programme, as Marsh describes in his book on developing safety conversations.17
But many of these strengths are also weaknesses. As previously noted, the frequency of inspections and the ensuing list of action points may mean that some of those actions have not been fully dealt with by the time the next inspection is due. This can have a very demoralising effect on the staff performing the inspections, particularly if the majority of the actions are fairly trivial but nevertheless overwhelm the maintenance department.
The tendency of some inspectors to examine certain issues more thoroughly than others highlights another point: if the inspections are being scored and used in interdepartmental competitions, people do take the results fairly seriously. Unfortunately, the scores of the departments may vary not only according to the actual conditions in that department but also according to the skill, experience and commitment of the inspector.
On an even more cynical note, some inspectors have been known to use the inspections as a way of scoring political points. Involving site politics in safety is very damaging for the inspections, and for safety in general, and may ultimately result in some areas becoming unwilling to cooperate with the inspections.
Safety inspections do tend to produce some trivial points because, generally, the staff conducting the inspections have not had much training in the techniques required and, quite simply, it is much easier to spot something that is out of place or broken than it is to spot something that is missing, or a procedure that is not working. Seldom are items of strategic importance discovered, and the potential of many of the issues raised to cause an accident is dubious.
Because of the inspectors’ position in the organisation and their training, the attitudes of staff, especially management, are rarely addressed, again meaning one of the major underlying causes of accidents is overlooked.
Finally, to conduct safety inspections properly requires time and, in a busy department, this can be resented.
Two final thoughts, on management time and safety training: not surprisingly, there is an inverse correlation between the amount of management time spent on safety and the frequency of serious accidents. Monitoring management time spent on safety therefore makes sense, as it is effectively measuring the level of management commitment to safety. Unfortunately, this data is quite difficult to collect with any level of accuracy.
However, if a surrogate measure around the individual activity of managers can be found, this can have the same impact as measuring the amount of management time.
The amount of safety training can be used as a leading key performance measure for safety. Competence is absolutely fundamental to good safety performance, and research has shown the impact that good-quality health and safety advice from a well-trained safety professional can have.18
The delivery of training against an agreed plan can be a good leading indicator. However, measuring the amount of training only does not guarantee the quality or the impact of the training and, as such, is only a partial solution to performance monitoring.
This article touches on some of the more common leading indicators in safety management. All have their strengths and weaknesses, but choosing the right measure, or set of measures, for your organisation at the right time can be a crucial step in driving a strong safety culture. Combining these with lagging indicators can make them even more effective. The last in this series of articles, in next month’s issue, will look at how to present indicators of performance for maximum impact.
1 Amis, RH & Booth, RT (1992): ‘Monitoring Health and Safety Management’, in SHP, February 1992, pp43-46
2 Step Change in Safety: Leading Performance Indicators – Guidance for Effective Use – www.stepchangeinsafety.net/knowledgecentre/publications/publication.cfm/publicationid/26
3 Health and Safety Executive (1991): Successful Health and Safety Management HS(G)65, HMSO, London
4 Fennell, D (1988): Investigation into the King’s Cross Underground Fire, Department of Transport, HMSO, London
5 Krause, TR (1991): ‘The Behaviour Based Approach to Safety’, in SHP August 1991, pp 30-32
6 Krause, TR (1993): ‘Safety and Quality, Two Sides of the Same Coin’, in Occupational Hazards, April 1993
7 Courtaulds Film Cellophane (1995): ‘Changing Behaviour, Improving Safety’, in Health and Safety Information Bulletin 229, pp11-14
8 Cooper, MD (1994): ‘Reducing Accidents Using Goal Setting and feedback: A Field Study’, J Occup and Org Psych, 67, pp219-240
9 Health and Safety Executive, Accident Prevention Advisory Unit (1985): An Outline Report on Occupational Safety and Health, Occasional Paper Op9, HMSO, London
10 Health and Safety Executive (1991): Successful Health and Safety Management HS(G)65, HMSO, London
11 Zohar, D (1980): ‘Safety Climate in Industrial Organizations: Theoretical and Applied Implications’, in J App Psych, 65, 96-102
12 Cooper, MD & Phillips, RA (1994): ‘Validation of a Safety Climate Measure’ – paper presented to the Annual Occupational Psychology Conference, Birmingham 3-5 January 1994
13 Confederation of British Industry (1990): Developing a Safety Culture, CBI, London
14 Hidden, A (1989): Investigation into the Clapham Junction Railway Accident, Department of Transport, HMSO, London, p163
15 Stranks, J & Dewis, M (1986): RoSPA Health and Safety Practice, Pitman, London, p79
16 Health and Safety Executive (1977): The Safety Representatives and Safety Committees Regulations 1977, HMSO, London
17 Marsh, T (2011): Talking Safety, Ryder Marsh Safety Ltd
18 Rimmington, J (1994): ‘Developing safer attitudes’, in Safety Management, March 1994
Neil Budworth is European EHS director for Houghton Global.