In the latest article from IOSH’s new research department, Saeed Ahmadi says trust is the key to a harmonious relationship with technology.
Advances in Artificial Intelligence (AI), Machine Learning (ML) and Information and Communication Technology (ICT) are transforming how citizens are governed and how organisations are run. More than ever, algorithms are making decisions that human managers used to make, and doing so in seconds. Yet for individuals to accept this technology, one critical element is needed: trust.
Trust is a central component of the interaction between people and AI, as incorrect levels of trust can lead to misuse, abuse or disuse of the technology. Trust is also a prerequisite for ensuring a human-centric approach to technologies such as AI.
AI is not an end in itself, but a tool that has to serve people, with the ultimate aim of supporting, rather than detracting from, human wellbeing. For all its benefits, AI also brings new challenges: it enables machines to “learn” and to take and implement decisions without human intervention, and to exert tighter control over human work activities.
Trust in technology refers to the confidence users have in the reliability, security and fairness of technological systems. It encompasses the belief that technology will function as intended, protect personal data, and operate without bias or harmful consequences. Distrust, on the other hand, can manifest as resistance to change, reluctance to accept and engage with technology, and low compliance with processes that depend on users’ data input. This resistance may stem from fears of job loss, changing job roles, the need for upskilling or inadequate training, all of which can erode confidence in the technology itself. In addition, a lack of transparency in how technology is implemented and managed can cultivate suspicion, reducing collaboration and engagement among technology users.
The black box phenomenon of AI: a source of distrust
Societal biases, prejudices and stereotypes can find their way into an AI system, and such systems can be flawed by numerous factors. Predictive analytics relies heavily on the quality of its data: poor data quality, such as incomplete or outdated data, can result in unreliable predictions. Another issue is overfitting, which occurs when a model is trained too closely on specific data, so that it performs well on known data but fails in new scenarios. Conversely, underfitting happens when a model is too simple or insufficiently trained, leading to poor performance on real-world data. These issues can produce inaccurate predictions and decisions, reducing overall trust in the AI-enabled system.
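For readers who want to see the difference concretely, the minimal sketch below (an illustrative addition, not drawn from the article) uses the open-source scikit-learn library to fit polynomial models of increasing complexity to noisy data; the degrees chosen here (1, 4 and 15) are arbitrary examples. An underfitted model scores poorly everywhere, while an overfitted one scores well on the data it was trained on but fails on unseen test data, which is exactly the kind of silent failure that can erode trust in a predictive system.

```python
# Illustrative sketch: how underfitting and overfitting show up as a gap
# between training error and test error. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Generate noisy samples of a simple nonlinear relationship.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(60, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Degree 1 underfits, degree 4 is a reasonable fit, degree 15 overfits.
for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```

Running this typically shows the degree-15 model achieving the lowest training error but the highest test error: it has memorised the noise in the known data rather than learning the underlying pattern.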
While the inputs and outputs of AI systems are easily observable, the intermediate steps are often opaque and difficult for non-experts to understand. This is known as the ‘black box’ phenomenon: a lack of transparency in the decision-making process of AI systems, which can pose obstacles for organisations and workers, as it is difficult to fully comprehend how the system functions.
Tech failures and trust erosion: lessons learnt from the 2024 Global IT Crash
Today’s increasingly complex and digital environments call for businesses to strengthen their resilience strategies to withstand not only unforeseen disruptions, but also the constant evolution of cyber threats. One significant factor that can erode trust among technology users is a sense of vulnerability to potential system failures and the absence of robust safeguards to mitigate their impact.
When technology fails, especially on a large scale, it can impact not just individuals but entire communities and industries. The global IT crash in July 2024 serves as a stark example. The incident, caused by a faulty update from a leading cybersecurity firm, led to one of the largest IT outages in history, affecting 8.5 million systems worldwide. Not only did this event disrupt essential services across various sectors, including healthcare, finance, transportation and government operations, but it also raised questions about how secure and fail-safe tech systems are.
Among the lessons learnt from this incident is the need for workplaces to enhance technological resilience. Companies can take several measures to prevent future disruptions, such as implementing redundant systems, regularly updating and testing backup protocols, conducting cybersecurity audits, and training staff to respond to ICT failure emergencies. Ultimately, cultivating trust in technology hinges on continuous engagement and education, ensuring that individuals not only adapt to new technologies but also feel empowered and valued in a rapidly evolving digital landscape.
Building trust in human-centric AI
The European Commission has established guidelines for Trustworthy AI, designed to ensure that AI systems are developed and deployed in a manner that is lawful, ethical and robust. With the enforcement of the EU AI Act, these principles will gain legal backing, further reinforcing the need for AI to be a tool that enhances human wellbeing while being reliable and trustworthy.
Enhance, not replace
To foster trust among technology users, these guidelines emphasise the importance of effective oversight and robust technical safety to prevent unintended outcomes. They stress the need for privacy and proper data management, alongside ensuring that AI operations are transparent and comprehensible to users. Inclusivity is a cornerstone, requiring AI to be fair and non-discriminatory, with diversity integrated throughout its development. AI should enhance human capabilities rather than replace them, and contribute positively to society and the environment. The guidelines also call for accountability, setting up mechanisms for assigning responsibility and conducting regular audits to ensure the technology is trustworthy and ethical.
Read more:
SHP is collaborating with IOSH’s new research department, Advice and Practice (A&P), to bring you a series of articles focused on thought leadership. You can read the first article here, introducing the department, and the second article here, which analyses the Government’s recently announced return to work policy.