December 29, 2021


Ethical impact of AI on health, safety and wellbeing

Across industry, the use of artificial intelligence (AI) has become a key feature of the technological revolution in workplace safety, health and wellbeing.

David Sharp


Safety and health practitioners are only too aware of the emergence of new forms of monitoring and managing workers based on the collection of large amounts of real-time data. These methods may already be providing you with opportunities to enhance safety and health surveillance, cut exposure to various risk factors, and provide early warnings of stress, health problems and fatigue. They do, however, present other types of risk, not least of the ethical variety, and I want to know more about how you are managing these. For a new research project, I’m investigating how practitioners are managing risks such as privacy breaches, lack of transparency, discrimination and in-built biases.

In safety and health, AI solutions are crunching data from a huge range of sources to generate predictive risk assessments, from the structured information in method statements and sickness records through to unstructured big data. Drones that rely on sophisticated AI for commercial operations such as window cleaning, mitigating the risk of humans working at height, are now commonplace. AI is being used in surveillance cameras for access control, or to monitor worker behaviour for compliance purposes. In the food industry, for example, the use of AI to check that workers are wearing their PPE is common.

And perhaps less obviously, but just as pervasively, you may well find it is being used in your organisation’s workflow management, journey planning or accounting systems to automate processes that were once done by a human being.

This last point is important, because technology is now embedded so deeply in our lives that we barely notice its influence, or its cost. According to Eurostat,[1] almost one-fifth of people aged 16-74 in the EU used smartwatches, fitness bands, connected goggles or headsets, safety trackers and similar devices in 2020.

AI has become very much a part of our lives at work and at home, and its benefits are manifold. I believe, however, that there are gaps in our knowledge of the impact that the ethical hazards of AI are having on our safety, health and wellbeing.

Take the use of machine learning algorithms to monitor worker performance. According to legislators in the UK, this is damaging people’s physical and mental health. And it may see people’s jobs and livelihoods either displaced by lower-skilled, lower-paid work or replaced entirely.

Consider AI’s use in video surveillance: not everyone is necessarily aware that they are being filmed, what the purpose of the surveillance is, or what happens to the data once it has been processed. Surveillance cameras used in a food manufacturing plant to monitor PPE use will process a mass of surplus data that is not relevant to that purpose, making their use invasive. And when used for access control, such systems can be discriminatory, as AI trained on unrepresentative datasets can struggle to recognise people of colour.

The EU’s proposed AI Act (2021/0106(COD)) has much to say about the ethical use of AI and its governance, including the use of video surveillance in public places for law enforcement purposes. It is ambitious in scope, and perhaps that is why progress with legislation in this area has been slow.

Finally, there is the environmental cost. Vendors talk of ‘cloud’ solutions, but as renowned academic Kate Crawford explains, they are not fluffy, and they are not harmless: “In reality, it takes a gargantuan amount of energy to run the computational infrastructures of Amazon Web Services or Microsoft’s Azure, and the carbon footprint of the AI systems that run on those platforms is growing.”[2]

I am currently studying for a Master’s in AI Ethics and Society and would like to talk with safety practitioners for a research project about the AI solutions they employ, whether these relate to software programs or to hardware such as the occupational use of cobots and robots.

Please get in touch with me if you’d like to find out more.

David Sharp FCIM FIWFM TechIOSH is CEO of International Workplace, a digital learning provider specialising in health and safety training. He is a student on the Master’s programme in AI Ethics and Society at the University of Cambridge, England.

[1] https://ec.europa.eu/eurostat/web/products-eurostat-news/-/ddn-20210225-1
[2] Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021)



1 Comment
Nigel Dupree
2 years ago

If it’s not something measured and recorded, the data ain’t there in the first place, other than some commissioned meta-analysis by the HSE (Better Display Screen, RR 561) going back to 2007, when they had no idea or incentive to manage “Work Exposure Limits” and were not about to go back to having mid-morning and afternoon tea breaks or an hour for lunch to provide a reasonable cognitive break, nor about to introduce a “Right to Disconnect” from the 24/7 digital work, rest-not and play-not work/lifestyles of the 21st Century. So, all very well using RIDDOR… Read more »