In Part 2 of this article series, I considered a number of challenges associated with the growing use of industrial AI systems in workplaces, namely: 1) the emergence of new health and safety risks stemming from the use of the technologies; 2) the challenge of judging whether any risks to health and safety arising from their use are as low as is reasonably practicable; 3) the big data and data security challenges of working with the technologies, particularly those associated with extremely large flows of data and sensitive data; and 4) the challenge of dealing with cyber security risks when technologies are connected to the internet.
In Part 3, I consider a number of further challenges, namely: 1) the challenge of undertaking accident investigations where failures in the complex AI algorithms underpinning systems are implicated; 2) the challenge of understanding who is legally liable when complex AI systems go seriously wrong; and 3) the challenge of striking a balance between capitalising on the benefits that AI can bring in this context and ensuring its use is ethical.
Investigating accidents
Legal liability issues
The emergence of any new technology can also present significant challenges to existing legal and regulatory frameworks. A 2018 report by the UK House of Lords Select Committee on Artificial Intelligence considered such challenges in the context of the growing use of artificial intelligence across UK society. The consultation underpinning the report elicited a wide range of views on the adequacy of existing legal arrangements in the UK. One serious issue the report considered was who should be held accountable for decisions made or informed by artificial intelligence, including decisions made by systems deployed in workplaces with potential health and safety implications. As one contributor put it:
“AI definitely raises all sorts of new questions to do with accountability. Is it the person or people who provided the data who are accountable, the person who built the AI, the person who validated it, the company which operates it? I am sure much time will be taken up in courts deciding on a case-by-case basis until legal precedent is established. It is not clear. In this area this is definitely a new world, and we are going to have to come up with some new answers regarding accountability”.
The report also considered whether, in the event that an AI system malfunctions, underperforms or otherwise makes erroneous decisions that cause individuals harm, new mechanisms for legal liability and redress were needed. The current UK legal system establishes liability by reference to standards of behaviour that could reasonably be expected, and determines the scope of liability by reference to the foreseeability of an outcome from an event. However, the report suggested that the use of AI is likely to challenge both of these concepts in a fundamental way, particularly in cases where it is difficult to understand how an AI system arrived at a decision.
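To make the explainability problem concrete, the following is a minimal sketch in Python (using scikit-learn on entirely synthetic data; the sensor names and labels are hypothetical, not drawn from any real system) of how an AI system can produce a confident operational decision whose rationale is dispersed across hundreds of internal components, leaving post-hoc tools to offer only an aggregate, statistical account rather than the causal chain a court or accident investigator would ideally want.

```python
# Illustrative sketch only: synthetic data, hypothetical feature names.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical sensor readings from an industrial system:
# temperature, vibration and pressure (all synthetic).
X = rng.normal(size=(1000, 3))
# Hypothetical "intervene / do not intervene" labels.
y = (0.8 * X[:, 0] - 1.2 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The model readily produces a decision for a new reading...
reading = np.array([[1.5, -0.3, 0.7]])
print("decision:", model.predict(reading))

# ...but the "reasoning" is spread across 200 separate trees.
print("trees consulted:", len(model.estimators_))

# Post-hoc tools such as permutation importance recover only an
# aggregate view of which inputs mattered on average across the
# dataset, not the causal chain behind this particular decision.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["temperature", "vibration", "pressure"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Even in this toy setting, answering a foreseeability question after the fact ("should the operator have anticipated this decision?") requires reasoning about hundreds of interacting components; with large-scale industrial systems the gap between the decision and any human-readable explanation only widens.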
A recent white paper prepared by the UK Government’s Department for Business, Energy and Industrial Strategy set out the UK’s plan for maximising the country’s potential to exploit technologies associated with the fourth industrial revolution, such as AI, whilst ensuring that its regulatory system remains fit for purpose and that regulatory burden is minimised. The white paper advocated action on five main fronts: 1) ensuring that the regulatory system is sufficiently flexible to enable innovation to thrive, enabling greater experimentation; 2) promoting the testing and trialling of innovations under regulatory supervision; 3) supporting innovators to navigate the regulatory landscape and comply with regulation; 4) building dialogue between society and industry as to how technological innovation should be regulated; and 5) working with international partners to reduce regulatory barriers to trade in innovative products and services.
Ethical and philosophical issues
We also need to ask ourselves to what extent it is desirable or ethical to relinquish control of key operational decisions in workplaces to AI. The potential longer-term future of work raises numerous ethical questions, for example: how far do we want the world of work to change, and how desirable is that change? What will the impacts be on worker health and wellbeing, particularly in those parts of the job market projected to be most affected? If we become able to predict future injuries and ill health attributable to work with increasing accuracy, how should such insight be used, and what will it mean for future recruitment to jobs? And if we become able to predict future wrongdoing with increasing accuracy, what will it mean for regulation? The sketch below makes the prediction scenario concrete.
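The following is a hypothetical sketch (again in Python with scikit-learn, on invented data, with all feature names and figures made up for illustration) of the kind of predictive model these questions anticipate: a per-worker injury-risk score. The point is how little code now separates such a score from existence, while the ethical questions about its use remain entirely open.

```python
# Hypothetical sketch: synthetic worker records, invented feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic records: hours of manual handling per week, years in role,
# and prior near-miss count (all fabricated for illustration).
X = rng.normal(loc=[20, 5, 1], scale=[5, 3, 1], size=(500, 3))
# Synthetic "injured within 12 months" outcomes.
y = (0.05 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=1.0, size=500) > 2).astype(int)

model = LogisticRegression().fit(X, y)

# A per-worker risk score is now a single line to produce...
new_worker = np.array([[30.0, 1.0, 2.0]])
risk = model.predict_proba(new_worker)[0, 1]
print(f"predicted 12-month injury risk: {risk:.0%}")

# ...which is exactly what makes the ethical questions pressing:
# should such a score shape job design, insurance, or recruitment?
# The code is silent on that; governance has to answer it.
```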
The challenges that the rise of so-called artificial narrow intelligence poses for the world of work are tangible and real now. Should such technologies advance further over the coming decades and the emergence of artificial general intelligence systems become a reality, these challenges are only likely to grow. Given prevailing views on the chances of this happening, it makes sense for the world of work to plan ahead. For example, if we are already struggling to predict the likely courses of action of systems built around artificial narrow intelligence, the difficulty will only grow with the emergence of systems built around artificial general intelligence, where the range of possible decisions is likely to be significantly greater and the systems even more unpredictable. The emergence of artificial general intelligence will also raise the bar with respect to the sorts of questions society needs to ask; questions of a more philosophical nature come into focus, such as: can AI systems be moral agents? If so, how should we hold them accountable? And how do we prevent them from acquiring morally objectionable biases and discriminating?
A Royal Society policy project on machine learning sought to investigate the potential of machine learning for the UK over the next 5 to 10 years, and the barriers to realising that potential. The work identified key areas where action was thought to be needed, including: 1) the creation of a data environment that draws on open standards and open data principles; 2) the building of a skills base and research environment that can provide the human and technical capital both to apply and to further develop machine learning; and 3) the creation of governance systems to address key social and ethical challenges.
All of the challenges identified in this series of blogs will need to be given due consideration over the coming decades.
This is the end of this series of blogs on Industry 4.0 and AI. Look out for a new series of blogs in the coming weeks.