November 10, 2016

How AI could benefit the world of work and impact on OSH

Previous articles in this series by HSE’s Foresight Centre have described artificial intelligence (AI) as machines having traits such as reasoning, learning, communication and perception. It has been acknowledged that AI includes a number of sub-fields, each worthy of detailed consideration in its own right. For these articles, the focus has been on the field of machine learning, which is advancing rapidly. In this article, Helen explores the benefits that machine learning could bring to the world of work.

So what benefits could machine learning bring to the world of work? Machine learning presents an opportunity to obtain insights from data that human analysts may not see on their own. AI systems using machine learning have steadily improved, for example in identifying images, which they can now do with greater accuracy than humans.

If machines are used to ‘double check’ human analysis and help with processing large amounts of data, mistakes may become less likely, and risks and errors in some decision making may be reduced. In fact, an algorithm (a mathematical recipe) can be designed to apply itself consistently to the range of tasks it is given, eliminating human bias from the output.
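
As an illustration of this ‘double check’ idea, here is a minimal sketch, using entirely hypothetical data and features (none of this comes from the article): a model is trained on past analyst decisions, and any new case where its prediction disagrees with the human analyst is flagged for a second look rather than overridden automatically.

```python
# Minimal sketch (hypothetical data): a model trained on past analyst decisions
# is used to 'double check' new human classifications, flagging disagreements
# for review rather than overriding the analyst.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical historical data: numeric features extracted from past incident
# reports, with the analyst's classification (0 = low risk, 1 = high risk).
X_history = rng.random((500, 4))
y_history = (X_history[:, 0] + X_history[:, 1] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

# New cases that a human analyst has already classified.
X_new = rng.random((20, 4))
human_labels = (X_new[:, 0] + X_new[:, 1] > 1.0).astype(int)
machine_labels = model.predict(X_new)

# Flag disagreements for a second look.
for i, (h, m) in enumerate(zip(human_labels, machine_labels)):
    if h != m:
        print(f"Case {i}: analyst said {h}, model said {m} - flag for review")
```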

However, the downside of this is that a machine will lack human motives and may be so fixed on the task in hand that there are unintended negative consequences for humans. Researchers are already mindful of this, and a new centre for human-compatible AI was launched in August 2016.

This centre will focus on ensuring AI systems are beneficial to humans, and aims to incorporate human values into the design of AI. In addition, in October 2016, the report of an inquiry into robotics and AI by the House of Commons Science and Technology Committee called for a standing commission on AI to identify principles for governing the development and application of AI (including examining the social, ethical and legal implications of AI developments).

AI also has the potential to enable a better understanding of risk-taking behaviour, for example by taking advantage of big data such as that from wearable health monitoring devices. It could make a big difference to medical and scientific research by processing vast amounts of data and providing direction for future research. It can also transform the way people communicate at work (e.g. offering real-time translation) and how they communicate with their workplaces, homes, cars etc. (offering the potential to support work/life balance).
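
As a hedged sketch of the wearable-data point (simulated readings and hypothetical measures, not the article’s method), unsupervised anomaly detection could be used to surface unusual worker-shifts from monitoring data for follow-up conversation rather than automatic action.

```python
# Minimal sketch (simulated data): unsupervised anomaly detection over
# wearable-style readings, flagging unusual worker-shifts for follow-up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Simulated readings per worker-shift: e.g. mean heart rate and movement intensity.
routine = rng.normal(loc=[75.0, 1.0], scale=[5.0, 0.2], size=(300, 2))
unusual = rng.normal(loc=[110.0, 3.0], scale=[5.0, 0.3], size=(5, 2))
readings = np.vstack([routine, unusual])

# Shifts scored as -1 are unusual; they prompt a conversation, not a sanction.
detector = IsolationForest(contamination=0.02, random_state=1)
flags = detector.fit_predict(readings)
print(f"{(flags == -1).sum()} of {len(readings)} shifts flagged as unusual")
```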

AI has the potential to free up workers’ time, allowing them to focus on creative and strategic roles, such as leadership and engagement (which may support a more positive health and safety culture).

However, AI is like any technology: if used irresponsibly, it could cause harm. To ensure AI is beneficial, leading scientists (including Stephen Hawking) are already arguing for more research, along with ethical and regulatory frameworks. As AI is at an early stage of development, we do not yet know what should be regulated. Some commentators feel that ethical and regulatory concerns are being overlooked, and that the technology needs to be enabled within an agreed framework.

The assessment and mitigation of potential risks associated with AI are also being called for. Risks may include machines and robots that outperform humans and pursue interests that do not align with those of humans. There is some evidence that high levels of automation can have a negative impact on situational awareness (a critical factor in preventing accidents).

There are ethical and moral considerations surrounding AI. For example, it could make surveillance much more powerful and widespread, with implications such as invasion of privacy and loss of control (facial recognition systems are raising concerns over privacy, as people will now always leave a record of their attendance in any public place, unless of course they wear a mask!).

What is clear is that AI could have unintended consequences, although it is too early to fully appreciate what these may be. New types of jobs and new forms of worker/machine interaction may present different types of risk (such as an increase in work intensification, combined with a mismatch between training/skills and jobs).

One consequence may be that, if machine learning algorithms are scaled up to very high levels of intelligence, they would not reliably preserve the things that humans value, such as emotions and feelings (sentience) and having worthwhile jobs. It has been suggested that we therefore face a ‘control problem’: how to create advanced AI systems that can be deployed without risk of unacceptable side-effects.

If a machine trains on awful data, it will make awful responses. So the impact of AI on the world of work will depend (to a large extent) on who controls it, and the goals that are set for it.
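
To make the ‘awful data in, awful responses out’ point concrete, here is a minimal sketch with entirely hypothetical data: the same model is trained once on clean labels and once on heavily corrupted labels, and the quality of its predictions follows the quality of what it was trained on.

```python
# Minimal sketch (hypothetical data): the same model trained on clean labels
# and on corrupted labels, showing that output quality tracks data quality.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

X = rng.random((1000, 5))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# Train on clean labels.
clean_model = LogisticRegression().fit(X_train, y_train)
print("accuracy with clean training data:", clean_model.score(X_test, y_test))

# Corrupt 40% of the training labels to simulate poor-quality data, then retrain.
y_noisy = y_train.copy()
flip = rng.random(len(y_noisy)) < 0.4
y_noisy[flip] = 1 - y_noisy[flip]
noisy_model = LogisticRegression().fit(X_train, y_noisy)
print("accuracy with corrupted training data:", noisy_model.score(X_test, y_test))
```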

©Crown Copyright 2016, Health and Safety Executive

Comments
  • John Kersey

    Another masterful review by Dr Beers with a great focus on human issues. HSL have a great basis for assessing human factors based on their operational experience with the Safety Climate Tool application.

    The last point is a key one. In addition to the old computing saw of GIGO (garbage in, garbage out), there is the further element of data that is not validated (perhaps through manual intervention), leading not only to faulty conclusions but also to a faulty premise through machine learning. Users of these systems therefore need to be very sure of the quality assurance of the data used, as well as of its ongoing monitoring.

    But early indications from experience in the States show great promise, and the application of AI and predictive analysis systems in other areas (such as anti-poaching operations in Africa) has shown predictive accuracy in excess of 90%.

    We have to accept that OHS applications are way behind those of other business functions, such as marketing and finance, but we can also learn from their experience.

    All in all exciting times ahead for the OHS profession.

    Robot safety rocks!
