Asking Neil Postman's 6 questions of AI testing

Artificial intelligence is all the rage now, and folks have been calling it the fourth industrial revolution. I am reading on both sides of the spectrum. On the technical side: learning about AI and its application to testing, and attending https://aisummitguild.com/. On the critical side: Technopoly and Automation and Us.

AI is unlike all previous technologies because of the implications of AGI and the potential effect of superintelligence on life on Earth itself. I am frankly getting scared of our mad rush to adopt any new technology in the name of efficiency.

Let me first define this technology:

A software program that can create important tests, execute them, keep running them continuously, and report back any failures for an arbitrary website or mobile app, with only minimal initial setup.
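To make "minimal initial setup" concrete, here is a hypothetical sketch of how such a tool might be driven. Every name here (`AITester`, its methods and parameters) is invented purely for illustration and does not refer to any real product; the bodies are stubs standing in for capabilities a real tool would have to provide.

```python
# Hypothetical API for the AI testing tool defined above.
# All names are invented for illustration; no real product is implied.

class AITester:
    def __init__(self, base_url: str, schedule: str = "hourly"):
        self.base_url = base_url  # the website or app under test
        self.schedule = schedule  # how often the tests re-run
        self.tests = []           # tests the tool would "discover"

    def discover_tests(self):
        # A real tool would crawl the app and generate meaningful tests;
        # here we stub in a single placeholder test name.
        self.tests = [f"smoke-test for {self.base_url}"]
        return self.tests

    def run(self):
        # A real tool would execute the tests and report failures;
        # this stub reports everything as passing.
        return {test: "pass" for test in self.tests}

# The entire "setup" is two lines: point it at a URL, let it discover tests.
tester = AITester("https://example.com", schedule="on-deploy")
tester.discover_tests()
results = tester.run()
```

The point of the sketch is how little the user supplies: a URL and a schedule, with test design, execution, and reporting all delegated to the tool.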

In this post I will pose Postman's six questions to the technology defined above, broadly called AI testing.

1. “What is the problem to which this technology is the solution?”
  • Software testing is unable to keep up with the rapid deployments that have become the norm in the CI/CD world of software delivery
  • Software testing is a complex field that requires specialized skills, which are becoming rare
  • Software test automation development is hard and complex, with little ROI to show for it
2. “Whose problem is it?”
  • Software development managers: who often feel that testing is rote and a bottleneck in the delivery pipeline
  • Software test automation engineers: who are under the gun to keep test frameworks flexible, fast, reliable, and extensible
  • Software development executives: who are always trying to reduce costs and show more profit
  • Manual software testers: who can't keep up with executing an increasing number of tests, faster and continuously
3. “Which people and what institutions might be most seriously harmed by a technological solution?”
  • Manual testers whose skills can quickly become outdated
  • Software test automation engineers, who may now be considered replaceable by AI tools
4. “What new problems might be created because we have solved this problem?”
  • Displacement of skilled workers (testers and automation engineers)
  • Blind spots baked into AI algorithms could wreak havoc, e.g. security flaws could surface all of a sudden, and the only people able to diagnose and prevent them would be the vendors
  • The inner workings of AI/ML models are often opaque even to their makers, so even a benevolent vendor might not be able to help
  • A lack of skilled people for the exceptional cases where AI solutions aren't (yet) working
5. “What sort of people and institutions might acquire special economic and political power because of technological change?”
  • Vendors of these tools would gain significant revenue, akin to the original software companies that held monopolies over a subset of software, like Oracle (databases), IBM (mainframes), and Apple (hardware)
  • These vendors and their algorithms would essentially be black boxes, and if they scaled to a significant portion of software companies, they could hold those companies hostage for lack of cheaper alternatives
6. “What changes in language are being enforced by new technologies, and what is being gained and lost by such changes?”
  • The rich jargon of software testing and automation (e.g. locators, flakiness, waiters) would eventually fade away
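To illustrate two of those terms: a locator is a query (such as a CSS selector) used to find an element on a page, and a waiter polls until a condition holds or a timeout expires. Below is a minimal, framework-free sketch of the waiter pattern; the helper and the toy "page" are invented for illustration, though real frameworks such as Selenium offer the equivalent as explicit waits.

```python
import time

def wait_until(condition, timeout=5.0, poll_interval=0.1):
    """A minimal 'waiter': poll `condition` until it returns a truthy
    value or `timeout` seconds elapse. This mirrors the explicit-wait
    idea found in test-automation frameworks."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# A toy "page" standing in for a browser DOM; here a locator is just
# a selector-style key into the dict.
page = {}
page["#submit-button"] = "ready"  # simulate the element appearing

# Use the waiter with a locator-style lookup.
element = wait_until(lambda: page.get("#submit-button"))
```

If AI tools generate and heal tests on their own, this vocabulary (and the craft behind it) stops being something practitioners need to know.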

To summarize: if the technology defined above comes to fruition, the biggest losers would be testers and the software companies held hostage by the vendors, and the biggest winners would be the vendors themselves.

Also, AI is a broad topic, and I am applying these questions to software test automation, which itself has a lot of ambiguity and judgment baked into it. So if a hypothetical AI system were really to solve testing, it would be more along the lines of Artificial General Intelligence (AGI) than Artificial Narrow Intelligence (ANI). And that would be an even bigger problem than just this field.

I am open to thoughts and criticism; in particular, I am not satisfied with my answers to 5) and 6).