conTest 2019 and the future of testing

Adam Mardula
Inside Business Insider
5 min read · Dec 11, 2019

This year I attended conTest 2019, an exciting yearly conference that explores the role of software quality and testing in the context of the modern world.

A major theme at this year’s conTest was the fast-changing testing landscape. One presenter made the particularly interesting point that testing is becoming more and more difficult because of all the new platforms that are emerging. As he put it: “How do you test a smart mirror system or various smart devices that are connected together?”

Jason Huggins, the creator of Selenium, showed how he is now building complex robots to test systems that require physical interaction from the user rather than a browser. It was interesting to see the “Father of Browser Automation” completely switch focus to robotics in order to adapt to the changing technological landscape. The shift makes sense, though, because user technology no longer consists simply of browser-based applications. IoT, embedded sensor technology, and adaptive multi-platform systems, to name a few, make robot automation a necessary part of a tester’s skillset.

More specifically, with advancements in Artificial Intelligence, there are systems that will need to be tested that are completely different from anything we have been used to testing in the past.

For example, up to this point, testing has always been deterministic: there has always been an expected behavior that could be tested for. Generally speaking, we had an expected result, or set of results, to compare against when the test completed. Given the same initial conditions, we could reliably reproduce the same outcomes in the same manner, and could therefore write tests with statically defined outputs.
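
To make this concrete with a trivial, hypothetical example (calculate_shipping_cost below is invented purely for illustration), a deterministic test asserts a statically defined output:

```python
# A deterministic test: the same inputs always produce the same output.
# calculate_shipping_cost is a hypothetical function used for illustration.

def calculate_shipping_cost(weight_kg: float, rate_per_kg: float) -> float:
    return round(weight_kg * rate_per_kg, 2)

def test_shipping_cost_has_a_statically_defined_output():
    # Given the same initial conditions, we can assert an exact,
    # predefined result, and the test behaves the same on every run.
    assert calculate_shipping_cost(2.5, 4.0) == 10.0
```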

With AI, there isn’t an expected behavior in the familiar sense, because these systems are, by definition, constantly evolving. Not only does the system change how it responds to the same inputs, but how long each test takes to execute, and the results it produces, can vary from run to run. Those results may differ from the final system conditions we originally asserted to be true, and yet, once the process completes, be closer to optimal. How do you test something with no predefined expected behavior?
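
One possible answer, and this is only a sketch of my own rather than anything prescribed at the conference, is to stop asserting exact outputs and instead assert invariants and statistical thresholds over many runs. Everything below (the recommender, catalog, and scoring metric) is a hypothetical stand-in for a real learning system:

```python
import random
import statistics

# Hypothetical stand-ins; in practice these would wrap the real
# (non-deterministic) model under test.
CATALOG = [f"item-{i}" for i in range(50)]

def recommend(user_id: int) -> list[str]:
    # Non-deterministic by design: output varies from run to run.
    rng = random.Random()
    return rng.sample(CATALOG, k=rng.randint(1, 10))

def relevance_score(results: list[str]) -> float:
    # Placeholder metric; a real test would score results against
    # user feedback or a held-out dataset.
    return 0.9

def test_recommendations_satisfy_invariants():
    # Assert properties and statistical thresholds rather than a
    # statically defined expected output.
    scores = []
    for _ in range(100):
        results = recommend(user_id=42)
        assert 1 <= len(results) <= 10            # bounded output size
        assert len(results) == len(set(results))  # no duplicate items
        scores.append(relevance_score(results))
    # Exact results drift as the model evolves, but aggregate
    # quality should stay above a floor.
    assert statistics.mean(scores) >= 0.8
```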

Testers will have to learn how these systems learn in order to test them effectively. We will have to understand the mechanics of behavioral adaptation, and the process by which an AI system uses, and even builds, its own heuristics to solve a problem over repeated iterations.

During the conference I participated in a workshop where we tried to come up with ways to approach a subset of these systems. You can see some of my contributions here: https://medium.com/@idavar/testers-storytellers-and-cultural-ai-acd8deae74a1

As technology advances, our testing approaches will have to advance with it. Testers need to know how to code, in the same way software engineers do, in order to traverse the complex datasets and data pipelines that machine learning systems generate and consume. Some of us may even have to become aware of, and functionally proficient in, the algorithmic underpinnings of decision theory, statistics, and system optimization, so that we can at least better speak the language of machine learning and data mining. Going even further, our tests and testing frameworks themselves will likely have to incorporate machine learning in their own mechanics, to anticipate how the “system under test” actually makes its decisions.
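
As a small illustration of what “traversing a data pipeline” can look like inside a test (the schema, sample rows, and invariants here are invented for the example), even basic output checks require some comfort with code:

```python
import csv
from io import StringIO

# Hypothetical pipeline output: rows produced by a feature-extraction step.
SAMPLE_OUTPUT = """user_id,feature_a,feature_b
1,0.25,0.75
2,0.10,0.90
3,0.60,0.40
"""

def test_pipeline_output_invariants():
    rows = list(csv.DictReader(StringIO(SAMPLE_OUTPUT)))
    assert rows, "pipeline produced no rows"
    seen_ids = set()
    for row in rows:
        # Primary key must be unique across the output.
        assert row["user_id"] not in seen_ids
        seen_ids.add(row["user_id"])
        # Features must parse as floats and be normalized to [0, 1].
        a, b = float(row["feature_a"]), float(row["feature_b"])
        assert 0.0 <= a <= 1.0 and 0.0 <= b <= 1.0
```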

As testing teams become smaller, developers will also need to take on a bigger role in testing (we’re seeing this now in our organization). It has always been true that developers should keep ease of testing in mind when building the products they are tasked with, but now they are also having to become familiar with the considerations and choices behind the test frameworks we use. This is especially so because the rising complexity of the platforms being built requires increasingly frequent modifications to keep up with the requirements of testing, which is in turn necessary to build a stable product.

Testers’ communication skills will be more important than ever, since they will have to work with and coach others in testing. We can go as far as to say that the lines between testing and DevOps will continue to blur, as the systems we develop require us to understand the complexities of orchestration in order to properly integrate our new, more advanced testing frameworks into our test pipelines. In many cases, common sense will continue to be more valuable than writing complicated automated tests, especially when it comes to testing AI.

Without a doubt, Mind Mapping will be essential to developing test strategies, now that we’ve wandered into the realm of constant, cutting-edge adaptation. This is especially true because the ever-changing nature of a non-deterministic computing environment requires us not to be too dogmatic about how we approach testing. We’ll have to think the way “a machine that thinks like a person” thinks. Much of this boils down to using our visual imagination to rapidly organize our ideas, so that we can capture system execution paths that may be very strong in one iteration and then, due to learning and optimization, completely irrelevant the next, and thus easily pruned from our thought hierarchy.

In addition to using mind maps to identify the important parts of a system to test, Shifting Left continues to be an important way to promote efficiency in the development lifecycle. Are Shift Left meetings a possible way to reduce development time?

An SME, a developer, and a tester would hold a 15-minute meeting, before any development starts, where the main question discussed is: how will this component be tested? The deliverable would be a set of testing notes/instructions.

Thinking back to the AI testing scenario, it feels imperative that we work in concert with developers, system architects, and product early on to figure out how to build these more advanced ML/data-driven systems. This seems like the only reasonable way we’ll be able to adapt to the complexity curve I’ve been describing throughout this post.

Overall, conTest showed that there are still endless possibilities in the testing field. With the rapid changes happening around us, there is no telling what unique technologies we’ll be exposed to in the coming years. As a tester, this excites me enormously, because there is now no end to the opportunities to use what I think is the most important trait in this field: curiosity.
