The increasing pace of automation in industry and the workplace can only be a good thing, especially in terms of business and safety. Machines and computers are more efficient, after all, and don’t make the mistakes humans do. But, as the Industrial Revolution of the late eighteenth and early nineteenth centuries showed, when industrial processes replaced hand production methods, such radical change in the workplace produces unintended and often unforeseen consequences, which must be anticipated if we are not to repeat them.
As well as changing people’s lives both socioeconomically and politically, industrialisation exacted a huge price in accidents. Of course, we are far from those dark days now, and the consequences of modern change will be different, but the assumption that automation will itself prevent incidents, or will not dramatically change the landscape of working life, is one to be made with caution.
Many feel that Artificial Intelligence, for example, is taking over people’s jobs, while others see it as the answer to reducing human error. But what are its implications? Will too much reliance on automation make us less careful? Will we essentially deskill ourselves as the capabilities required for current jobs become redundant? Will new opportunities for error arise? Safeguards need to be in place to ensure that sophisticated tools developed to decrease errors do not have the opposite effect of increasing them.
Sidney Dekker believes we cannot automate human error out of the system, calling this idea a ‘substitution myth’ based on Tayloristic assumptions. Work cannot simply be broken down and divided between human and machine, with one gradually replacing the other according to relative strengths. Instead, changing systems in favour of technology creates new complexities, introducing the potential for more latent error. This will not go away, because there will always be a necessary interface between humans and technology, whatever the balance of involvement may be.
Machines don’t design themselves, nor do they operate in a vacuum. Humans must build them to fit their intended purpose, and operators must be trained in a way that matches that design. As is so often the case, many incidents appear to be ‘operator error’, but training will not improve the reliability of a technology or machine when the root cause is bad design or mismanagement.
By learning from past mistakes, we can and must plan with these factors in mind, rather than favouring money and progress over people and safety. Although the consequences of automation are hard to predict, it is important to consider a future where humans and machines evolve together if we are to reduce human error.
Further reading: Safety Differently: Human Factors for a New Era, Sidney Dekker, 2015