The Human Cost of Automation

In a previous post I talked about how important it is, when using any sort of automation, that you understand the internal workings of the code. Today I wish to investigate another often-forgotten consideration when implementing automation: the impact on humans.

I believe that attention in automation too often focuses on the technical, and too infrequently addresses the human impact (both negative and positive). The currently accepted paradigm often appears to be that any automation is a force for good, that its impact on people and projects can only be positive, and that any deviation from this is down to a lack of skill in those implementing it.

One interesting paper is "A Model for Types and Levels of Human Interaction with Automation" (Raja Parasuraman, Thomas B. Sheridan, and Christopher D. Wickens, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 30, No. 3, May 2000). In it the authors propose a framework and model for automation, but before moving on to the human cost I want to focus on a really important point regarding what I believe is the current paradigm in many organisations. It could be characterised as 'automate everything that can be automated', or, at the very least, as basing the decision to automate on a technical assessment of feasibility alone. If the decision about what to automate is left to the automation tester, it should be no surprise that the outcome favours the automation tester: I will automate what is easy or simple; what is hard or complex is left to the tester to cope with. I remember, years ago, attending a QTP course where the instructors told us not to automate anything too complex. Indeed, even today it's still seen as wise to automate the simple stuff. Obviously this is a generalisation, as the volume of technical questions on the web suggests that automation testers are trying to extend their reach into more technically challenging areas.

This situation only gets worse when lots of subsystems are automated using the same approach, because what is left is a partially automated whole system. Human nature suggests that the parts that aren't automated will generally be the hardest for the tester to deal with, and since the tester no longer deals with the whole system, the end result can be a more complex 'system' to test.

If my first thought as an automation tester is 'what's easiest for me, or the tool, to automate', then that becomes my primary goal, and it can easily lead to misaligned goals. My primary goal in automation is to help the testers, not to automate whatever is technically feasible, nor to hit a mythical 30% target.

Two things spring out from this: (1) the manual effort required to make the automated tests work is often, if not always, ignored or vastly underestimated in any ROI calculation; (2) if the decision about what to automate is based on what is easiest, it's no surprise that the tester's role quickly transforms from 'testing the system' to 'keeping the automation running', or, even worse, that the testers start to spend more and more time trying to get automated tests to pass rather than trying to test the application.
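To make the first point concrete, here's a minimal ROI sketch. The figures and variable names are entirely invented; the only claim is structural: once the ongoing human 'repair' effort is put on the cost side, an apparently positive saving can turn negative.

```python
# A hypothetical ROI comparison for an automated test suite.
# All numbers are invented for illustration.

manual_run_cost = 2.0        # hours per regression cycle, done by hand
automated_run_cost = 0.1     # hours of babysitting time per automated cycle
cycles_per_year = 50

build_cost = 80.0            # one-off hours to build the automated suite
maintenance_per_cycle = 1.5  # hours per cycle repairing scripts, triaging
                             # false failures, massaging data -- the effort
                             # that naive ROI calculations leave out

naive_saving = (manual_run_cost - automated_run_cost) * cycles_per_year - build_cost
honest_saving = naive_saving - maintenance_per_cycle * cycles_per_year

print(f"Naive annual saving:  {naive_saving:+.1f} hours")   # +15.0 hours
print(f"Honest annual saving: {honest_saving:+.1f} hours")  # -60.0 hours
```

With these (made-up) numbers the automation looks like a modest win until the per-cycle repair effort is counted, at which point it costs the team sixty hours a year.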

Harry Collins, in the book 'The Shape of Actions' (written with Martin Kusch), also talks about the lack of appreciation for the human effort in getting any 'system' to work; he terms this "Repair, Attribution, and all that". He explains that when a human interacts with an automated system, the user often has to do a great deal of work to ensure the system works as expected. The best systems attempt to hide this by being so user-friendly that the user doesn't notice the work they need to do. If an automation approach doesn't take into account the work the human operators must do to support the scripts, it's no surprise that the effort required far exceeds what was expected. This quickly leads to systems where the cost to the human operator either a) erodes any ROI or speed benefits, or b) degrades the operator's skills and breeds frustration. Say hello to shelfware…

So how could we do this differently?

'A Model for Types and Levels of Human Interaction with Automation' discusses a number of human impacts that can be caused by automation. If you're planning on building automated tests, please take a moment to consider how your approach tackles these problems…

Mental Workload

A well-designed automated tool that reduces the workload of simple repetitive tasks can greatly improve my well-being (a happy, non-stressed tester is a good tester). Does your automation approach benefit your testers this way, or has it just changed their interaction with the software? Does the tester now spend a great deal of time and effort trying to get the tests to run, or analysing the scripts? Did you appreciate how this might impact them negatively? One of the best tools we've built simply helped the tester extract data from a log file and convert it into a readable format, something like the sketch below. A few hours to build, but it really reduced the workload for the testers.
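Here's a minimal sketch of that kind of tool, assuming a hypothetical plain-text log format (the file names, regex, and fields are all invented): pull the interesting lines out of the log and write them somewhere readable.

```python
# Extract ERROR/WARN lines from a plain-text log into a CSV the tester
# can open in a spreadsheet. Log format is hypothetical.

import re
import csv

LINE = re.compile(r"^(?P<ts>\S+) (?P<level>ERROR|WARN) (?P<msg>.*)$")

def extract(log_path: str, out_path: str) -> None:
    """Copy matching log lines into a readable CSV."""
    with open(log_path) as log, open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["timestamp", "level", "message"])
        for line in log:
            match = LINE.match(line)
            if match:
                writer.writerow([match["ts"], match["level"], match["msg"].strip()])

if __name__ == "__main__":
    extract("app.log", "app_errors.csv")
```

Nothing clever, and that's the point: the goal was to lift a repetitive chore off the tester, not to replace them.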

Situational Awareness

If you create automated tests or automated reports, it's amazing how quickly testers can pass all responsibility to this system. If I create scripts that report pass/fail, it's amazing how infrequently testers want to see the results, or to discuss how the pass/fail was decided, what oracle was used, what wasn't tested or checked, and so on. If testing is an attempt to reduce risk, and you are knowingly introducing a new risk, you should attempt to mitigate it somehow.
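One way to mitigate it is to make the report itself carry the missing context. The sketch below is my own illustration (not something from the paper): each check result records the oracle used and its known blind spots, so anyone reading the report can't avoid seeing them.

```python
# A check result that preserves situational awareness: every pass/fail
# states how it was decided and what it deliberately did not look at.

from dataclasses import dataclass, field

@dataclass
class CheckResult:
    name: str
    passed: bool
    oracle: str                                           # how pass/fail was decided
    not_checked: list[str] = field(default_factory=list)  # known blind spots

result = CheckResult(
    name="invoice_total",
    passed=True,
    oracle="total matches sum of line items to 2 decimal places",
    not_checked=["currency rounding rules", "layout of the rendered PDF"],
)

status = "PASS" if result.passed else "FAIL"
print(f"{status} {result.name} (oracle: {result.oracle})")
for gap in result.not_checked:
    print(f"  not checked: {gap}")
```

A green tick with a visible oracle and a visible list of gaps invites discussion; a bare green tick invites abdication.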

Complacency

You have a bunch of automated tests that pass pretty much 100% of the time: they execute and find no issues. One day they don't pass. There was no planned change in that area, so what's your first course of action? To try and 'fix' the script to make the test pass. You've become complacent: you no longer consider that the failure might be caused by the app. I've seen this happen, and it's an easy trap to fall into. The situation is made worse if the testers' skills have degraded (see next topic).

Complacency can also creep into any form of automated reporting. The reporting can run without issue for 99% of your projects, but on one project the setup is slightly different and the data doesn't come out quite right. Is the user still checking this data before they send the reports out, or do they just assume it's OK? If you're a tester who relies on automated reporting, make sure you understand these numbers and how they're generated; you've handed over responsibility for your credibility to this automated system, so be sure you're comfortable with that.
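A cheap defence is a sanity check that runs before any report goes out. This sketch is hypothetical (the field names and thresholds are invented); the idea is simply to give the human an explicit prompt when the numbers look implausible, instead of relying on them to notice.

```python
# Sanity-check a results summary before it is sent anywhere.
# Field names and thresholds are illustrative only.

def sanity_check(report: dict) -> list[str]:
    """Return reasons the report looks suspicious (empty list = plausible)."""
    problems = []
    total = report["passed"] + report["failed"] + report["skipped"]
    if total == 0:
        problems.append("no tests were recorded at all")
    if report["executed"] != total:
        problems.append("executed count doesn't match pass/fail/skip totals")
    if report["skipped"] > 0.5 * total:
        problems.append("more than half the tests were skipped")
    return problems

report = {"executed": 120, "passed": 100, "failed": 2, "skipped": 18}
issues = sanity_check(report)
if issues:
    print("Hold the report:", "; ".join(issues))
else:
    print("Report looks plausible; still worth a human glance.")
```

Note the last line: even a clean sanity check is a prompt for a human look, not a substitute for one.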

Skill Degradation

My automated tests have been running happily for months, the test SMEs have moved on to something else, and then one day the tests fail. Now, I've been clever and made the tests so smart that anyone can run them, so we didn't need to keep the knowledgeable testers around. So what happens now? The automation testers, who have little or no tacit knowledge built up around the system, are expected to work out what happened: is this a real bug or not? How confident am I that this is going to end well? The great paradox of automation is that if it's used to remove staff from the equation, the staff who remain become even more important and valuable in ensuring a good outcome.

Think of automation as a tool

We should change the focus to using automation as a tool: to assist and extend the tester's ability. Again, when making this assessment, don't make it tool-based (what can the tool do?). As Harry Collins says, "The crucial partition is not to be located by dividing up what humans can actually do, but by considering what they might be able to do by calculation if only they were as good at processing as are computers or potential computers". At this point I've extended my ability, not replaced it. Make the human, sapient tester the centrepiece, the focus, and allow them to use tools in such a way that their abilities and skills are amplified. When building or designing automation approaches, remember that your automation isn't going to replace the manual effort; it often alters it in a manner that can be quite unexpected. You can use this power for good, or you can use this power for evil (and no, ignorance is not an excuse).

(If any sentences closely resemble those from 'The Shape of Actions' or 'A Model for Types and Levels of Human Interaction with Automation', please assume those works are the source.)