Enhancing QA Testing

Leveraging Generative AI — PaLM APIs at Vertex AI

Anjali Kulkarni
Technology Hits
Dec 3, 2023


Generative AI is transforming the landscape of QA testing by addressing long-standing challenges with new solutions. Because time-to-market is one of the most important concerns in software development and testing, reducing time spent in QA processes through automation, wherever possible, is key.

Generative AI can be leveraged in automating various aspects of QA:

  • Requirements Analysis: Extracting key points from requirements documents, summarizing them, and identifying ambiguities or inconsistencies at the initial stage of the software development lifecycle.
  • Regression Testing: In large systems, changes to one component can affect other components. AI solutions can help with impact analysis of such changes and can also identify the test cases needed to verify them.
  • CI/CD: Creating test environments on the fly, e.g. Docker, Kubernetes, or Jenkins setups.
  • Automation: Converting manual test steps into UI or API automation test scripts and maintaining them using self-healing techniques.
  • Test data: Testing requires large volumes of diverse test data to cover every scenario, and generating such data can be challenging. AI can auto-generate test data and save QA engineers a great deal of time.

This blog delves into the transformative influence of Generative AI on QA methodologies, with a particular focus on Google’s PaLM 2 (Pathways Language Model 2) APIs on Vertex AI. Here we will focus on the automation of manual test cases (step-by-step instructions that a tester follows to verify the behaviour of a software application) and on test data generation (the input values to be used during execution of test cases).

It is possible to build a framework that leverages foundation models like PaLM 2 to achieve this goal. We will focus on the TextGenerationModel and CodeGenerationModel classes from the Vertex AI SDK. Google Cloud’s Vertex AI is designed to be a no-lock-in platform, allowing users to use their preferred tools and frameworks while taking advantage of Google Cloud’s machine learning services.

So let’s get started!

  • Initiate your journey by setting up a Google Cloud Platform account.
  • Generate authentication keys (a service account key) for secure access.
  • Save the key location as an environment variable, enhancing security.
  • Install the client library: google-cloud-aiplatform (the Vertex AI SDK). A minimal setup sketch follows this list.
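
Here is a minimal setup sketch, assuming a service account key and the Vertex AI SDK; the project ID, region, and key path below are placeholders:

# pip install google-cloud-aiplatform
import os

import vertexai
from vertexai.language_models import CodeGenerationModel, TextGenerationModel

# Point Application Default Credentials at the service account key
# (placeholder path; keep the key outside your repository).
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"

# Placeholder project ID and region.
vertexai.init(project="my-gcp-project", location="us-central1")

# Load the PaLM 2 foundation models used throughout this post.
text_model = TextGenerationModel.from_pretrained("text-bison")
code_model = CodeGenerationModel.from_pretrained("code-bison")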

Generating automated Selenium test scripts from manual test cases

Here is one way to build a framework that converts manual test cases into automation test scripts. The same approach can be extended to generate API test scripts as well, leveraging the REST Assured library.

High-level framework design to generate automated test scripts from manual test cases (generated by author)

In the above diagram:

  • Step 1: All manual steps are collated in a test case repository, which can take any of several forms: Excel, CSV, XML, or even JSON. Test cases can also be imported directly from a tool such as Jira.
  • Step 2: These manual actions are concatenated to form a prompt for the Vertex AI foundation model:
prompt = \
"Generate TestNG Selenium code for a login screen. " \
"Click on field Username. " \
"Click on field Password. " \
"Click Submit button."
  • Step 3: The response is the automated Selenium test script that we desire. After a thorough review, it can be incorporated into our automation framework (see the sketch after this list).
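
Putting steps 2 and 3 together, here is a minimal sketch that sends the prompt to the code model loaded during setup; the temperature, token limit, and output file name are illustrative assumptions:

# code-bison takes the prompt as a prefix; a low temperature keeps the
# generated script close to the instructions.
response = code_model.predict(
    prefix=prompt,
    temperature=0.2,
    max_output_tokens=1024,
)

# Save the generated script for review before adding it to the framework.
with open("LoginScreenTest.java", "w") as f:
    f.write(response.text)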

Generating Test Data

Incorporating business logic into the prompt while generating test data:

A challenge in test data generation is that business logic differs from system to system, and in such scenarios relying on pretrained models alone will not suffice. This is where prompting techniques come in handy.

Here are a few prompting techniques which we can use for test data generation:

  • Zero-shot: No sample examples are provided; the response relies entirely on the pretrained model.
  • Few-shot: A few examples are provided, which set the context for the desired responses.
  • Chain-of-Thought (CoT): A strategy where a series of prompts walks the model through intermediate steps towards a coherent response. It is normally used along with the few-shot technique to simulate a natural flow of conversation.
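
For illustration, here is the same request written zero-shot and few-shot; the field and example values are assumptions for this sketch:

# Zero-shot: the model falls back entirely on its pretraining.
zero_shot_prompt = "Generate 5 designations for employees of a software company."

# Few-shot: examples constrain the model to values our business logic allows.
few_shot_prompt = """
Generate 5 designations for employees of a software company.
Examples of valid designations: Senior Analyst, QA Manager
"""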

We will use the few-shot prompting technique while creating the prompt to achieve the desired behaviour from the foundation model. More sophisticated prompts can be created by combining various prompting techniques.

High-level framework design to generate test data from requirements and business logic (generated by author)

In the above diagram:

  • Step 1: The fields for which test data needs to be generated, along with the business logic, are specified in an XML or JSON file. This file can contain other details, such as the number of records to generate or relationships between fields.
  • Step 2: A prompt (using the few-shot prompting technique) is created from it. As can be observed in the example below, the business logic dictates that the ‘Designation’ field can take only 4 values.

Example of few-shots for the above scenario:

prompt = """
Generate testdata to test Employee details screen with
Title, Employee Name, Employee Address and Designation as fields.
Generate data for 10 Employees.

Few-shots for Designation:
Senior Analyst
QA Manager
Team Lead
Test Engineer
"""

Notice that in the above example we specify, as few-shot examples, the values that the field ‘Designation’ can take. This way the model generates data using only these values.

  • Step 3: The response contains test data with the values that we desire and exactly the number of records specified in the input (prompt). This response can then be processed and saved as a CSV (or any desired format) to be used as input data for UI / API test cases during automation runs, as in the sketch below.
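
A minimal sketch of steps 2 and 3, assuming the text model loaded during setup and the prompt shown above; asking for CSV output and the file name are assumptions made for illustration:

# Ask explicitly for CSV so the response is easy to post-process
# (the model's output may still need light cleanup).
response = text_model.predict(
    prompt + "\nReturn the result as CSV with a header row.",
    temperature=0.7,
    max_output_tokens=1024,
)

# Save the generated records as input data for automation runs.
with open("employees.csv", "w") as f:
    f.write(response.text)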

Advantages

  • Increased efficiency: Time and effort are saved because a near-perfect test script or large volumes of diverse test data can be created quickly.
  • Increased accuracy: Human errors associated with test case creation are reduced, and more time is left for review, thereby increasing accuracy.

Disadvantages

  • Generated test cases cannot be used as-is. They need to be reviewed thoroughly and manually to eliminate errors resulting from an incorrect understanding of the business logic.
  • Sometimes, test steps need to be tweaked to incorporate dynamic or external test data.
  • In addition, changes may be needed just so that the script fits into the automation framework. All these considerations make these methods assistive aids rather than a complete end-to-end solution.

Conclusion

As QA processes continue to evolve, integrating Generative AI and language models into testing frameworks empowers teams to overcome challenges with agility and innovation. Vertex AI and the PaLM APIs from Google provide a robust foundation, enabling QA professionals to create efficient, automated, and context-aware testing processes.

More information about the author at LinkedIn

Anjali Kulkarni

QA Architect. Passionate about learning new technologies and sharing knowledge with the community through talks and blogs.