First Step in AI-Driven Testing: Handmade AI Tools for Software Testers
Software test automation still requires significant manual handling. A great deal of human intervention goes into what is considered an ‘automated’ software testing process: automation testers perform tool configuration, script creation, script review, data preparation, build integration, test execution, script maintenance, and more. Fortunately, a new kind of test automation powered by artificial intelligence has surfaced. It is available now, working its magic to test software and opening the door to a future of never-before-seen speed, accuracy, and efficiency. That future is almost here, so what have we cooked up in our test kitchen? Let me spill the beans.
Previously, we carried out an assessment of the automation test tools used in our company. Katalon, for one, offers a TrueTest feature that promises to immediately create regression tests that really matter for coverage and to eliminate blind spots with maintenance-free, AI-generated tests. We were interested in this, as it would mean virtually no human intervention in the test automation process. But we found several limitations in this version of TrueTest. It can only run tests in a cloud test environment, which is very difficult for us because we use an on-premise environment. It also currently supports only web applications, not mobile applications, which is another problem for us because almost all of our super apps are mobile apps.
As the best alternative solution available to us at the moment, we developed an integrated handmade automation test tool that adopts AI and can operate in our own environment. Because relying on add-on features of existing tools comes with many limitations, we built a “robot” and named it MIKA (Multifunctional Intelligent Knowledge Assistant). MIKA provides a User Interface (UI) built with Streamlit (a web framework) to make it easier for testers, not only automation testers but also manual testers, to interact with the AI tools: users enter prompts and receive the generated response output. One challenge is ensuring there is no garbage in, garbage out in user interactions with the AI, so we feed MIKA quality information and training data. On the backend, MIKA uses the LangChain Large Language Model (LLM) framework for input control and for parsing the response output from the connected AI.
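To illustrate the “input control and response output parser” idea, here is a minimal sketch in plain Python of the kind of guardrails such a layer could apply before and after the LLM call. This is not the actual MIKA implementation (which sits behind a Streamlit UI and uses LangChain); the function names, thresholds, and the numbered-step output format are assumptions chosen for the example.

```python
import re

def validate_prompt(prompt: str, min_words: int = 3) -> bool:
    """Input control: reject empty or low-information prompts
    before they are sent to the LLM (guarding against garbage in)."""
    return len(prompt.strip().split()) >= min_words

def parse_test_steps(response: str) -> list[str]:
    """Output parsing: extract numbered test steps ("1. ...") from the
    raw model output, discarding any surrounding chatter
    (guarding against garbage out)."""
    steps = re.findall(r"^\s*\d+\.\s*(.+)$", response, flags=re.MULTILINE)
    return [s.strip() for s in steps]

# Example interaction: a hypothetical model response for a login test.
raw_response = """Here is a regression test for you:
1. Open the login page
2. Enter valid credentials
3. Verify the dashboard loads"""

if validate_prompt("generate a login regression test"):
    steps = parse_test_steps(raw_response)
```

In a LangChain-based backend, the same two roles would typically be played by a prompt template on the input side and an output parser on the response side, so the tester always gets structured test steps back instead of free-form text.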
By utilizing AI in the application testing process, time-to-market becomes faster, testing staff are used optimally, and testing quality can be maximized. This approach can be applied across all sectors, from banking, telecommunications, manufacturing, insurance, transportation, and healthcare to government, all of which continue to dynamically develop applications to support their businesses. Ensuring the quality of functional application testing throughout the software development life cycle is critical for sustainable business growth.