In an automated world, human error is a key source of risk

By Siddharth Singh, 3rd April, 2015

Humans make mistakes*. That should hardly come as a surprise.

With the International Federation of Robotics forecasting a massive uptake of industrial robots for production activities in the next few years, and with automation increasing in several other spheres as well, institutions will seek to minimise human involvement in order to reduce the risk arising from human error. (Of course, the primary motivation will continue to be the perpetual hunt for cost-cutting avenues.) Here’s what Goldman Sachs has to say on this issue:

“Notwithstanding the proliferation of technology and technology-based risk and control systems, our businesses ultimately rely on human beings as our greatest resource, and from time-to-time, they make mistakes that are not always caught immediately by our technological processes or by our other procedures which are intended to prevent and detect such errors. These can include calculation errors, mistakes in addressing emails, errors in software development or implementation, or simple errors in judgment. We strive to eliminate such human errors through training, supervision, technology and by redundant processes and controls. Human errors, even if promptly discovered and remediated, can result in material losses and liabilities for the firm.”

While human error by GS employees may not have devastating consequences (to the outside world, at least), human error has long been recognised as the key source of risk in several industries: 53% of all fatal air accidents since 1950 have been attributed to human error, as have 52% of all cyber security breaches. However, industries have not previously had the option to automate activities and take humans out of these roles in the way they can today, or will be able to in the near future.

How this will play out depends on the nature of the industry, the availability of technology and the consequences of the risks involved. In the case of automobile transport, Elon Musk claims that once self-driving technology matures and proliferates, states may even ban human drivers in order to cut down on accident-related fatalities. He said,

“(…) when self-driving cars become safer than human-driven cars, the public may outlaw the latter. Hopefully not.”

This isn’t to say technologies don’t come with their own set of risks. Here is an interesting long read on how technology led a hospital to give a patient 38 times the dosage he needed.

Follow Siddharth on Twitter @siddharth3

* If that book does not interest you, perhaps these memes will.
