Trendyol Tech

Trendyol Tech Team

Balancing Manual and Automated Frontend Testing


Do we really need manual QA testing in Frontend?

Manual Testing vs Automated Testing

A recent conversation with my team lead significantly shifted my perspective on frontend testing. For a long time, I had considered frontend tests pointless, too effort-intensive, and, when not written well, a redundant burden. This conversation gave me a fresh outlook on the subject: how much manual QA testing do we really need in frontend projects? More importantly, can we eliminate the need entirely?

With the departure of our team’s tester, we collectively took over the testing processes for a while, testing each other’s tasks. While the process flowed smoothly when acceptance criteria were well-defined, a backlog started to accumulate in the frontend testing steps over time.

This situation led to a critical question raised by our team lead, Uğur Atçı, in a meeting: “Do we really need manual QA testing in Frontend?”

Although I had previously viewed test writing as cumbersome and time-consuming, this question changed my viewpoint. The goal would be not only to make the project more robust, but also to streamline our testing processes, reduce QA effort, and make sprints more efficient.

My Perspective on Manual Testing

Up until now, I believed that manual tests should be conducted by someone with a strong grasp of the business side and extensive edge case knowledge. This belief stemmed from the following reasons:

Visual Memory and Experience-Based Awareness

Someone who has worked on a project for a long time can instantly spot a misaligned button, incorrect text, or missing warning message on the frontend, much like remembering details of a photograph.

Business Flow and Edge Case Knowledge

This person, knowing the entire business flow and edge cases of the project, can not only test the task at hand but also verify if the business logic contains any missing or incorrect flows.

Reliability

If the tester does their job well, a task they approve can be confidently deployed to the production environment.

However, there’s a significant problem: a team usually has only a few testers, and each of them often has to test multiple tasks simultaneously. This means a tester is constantly context-switching, which makes focusing difficult, so expecting maximum performance from a tester at all times is unrealistic. When testers are scarce, the number of tasks each one has to handle increases, leading both to lost time and to potentially superficial testing. This reduces the effectiveness of the tests and creates opportunities for errors to slip through.

In this context, the advantages of automation become apparent. Automation eliminates the need for context switching and ensures tests are always performed consistently. Automation also saves time by quickly completing repetitive tests and allows for the early detection of errors.

Types of Frontend Tasks

In frontend projects, some tasks require manual testing, while others can be automated:

  • New screens and business scenarios: since the work is new, tests cannot exist in advance, so it has to be tested manually.
  • Bug fixes: these usually need to be verified manually, because if the existing tests had been adequate, the bug would have been caught in the first place.
  • Mapping backend fields and responses: can be covered by well-written automated tests (see the sketch after this list).
  • Conditionally hidden/changed fields: if the tests were written correctly from the start, these can be checked by automation.
  • Replatforming/refactoring: pre-written tests make this verifiable through automation alone.
  • User experience improvements: UI/UX checks may remain manual, but the functional side can be automated.
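
To make the mapping item above concrete, here is a minimal sketch of the kind of automated test that can replace a manual check. The mapOrderResponse function and both data shapes are hypothetical, not code from our project:

```typescript
// orderMapper.test.ts — a hypothetical mapper test (Jest-style);
// mapOrderResponse and both data shapes are illustrative.
import { mapOrderResponse } from './orderMapper';

test('maps a backend order payload to the frontend view model', () => {
  const backendPayload = { order_id: 42, order_status: 'SHIPPED', total_price: '19.90' };

  // Field names are snake_case on the backend and camelCase in the UI —
  // exactly the kind of translation a test can pin down once.
  expect(mapOrderResponse(backendPayload)).toEqual({
    id: 42,
    status: 'Shipped',
    totalPrice: 19.9,
  });
});
```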

Disadvantages of Frontend Developers Writing Tests

There are several key reasons why frontend developers avoid writing tests:

Lack of Motivation

Most developers see test writing as a tedious, unenjoyable chore.

Effort Requirement

Writing and continuously maintaining tests can take up a significant portion of a developer’s time.

Increased Maintenance Burden

Adding a new task requires reviewing and updating existing tests. This becomes a significant burden over time.

My Previous Test Writing Experiences

In the projects I’ve worked on so far, we haven’t really needed to write tests in the true sense because we were usually dealing with tasks that didn’t require much business knowledge. Therefore, the test writing process had become a less important step in our eyes. The reasons for this were as follows:

Stable and Widely Used In-House UI Library

Since our UI library had been in use across many projects for a long time and had become stable, and a large part of our UI was assembled from it, writing component tests was often unnecessary.

Simple and Predictable Business Flows

Business scenarios were generally not complex, so the edge cases of most tasks could be predicted and carried little risk. We could therefore get away without writing E2E or integration tests.

Back-office Applications and Quick Problem Solving

Since these were back-office applications, incidents didn’t cause major chaos. When a problem was noticed, a fix could usually be produced quickly, and we could move past the issue rapidly.

Transitioning to a New Team and Taking Over a Project

Often, when joining a team or taking over a project, we encounter unnecessary or overly complex pieces of code. When we try to clean up that code, we frequently cause incidents, and however little we understand how the code works, we eventually realize it was necessary after all and revert the change. Situations like these gradually erode our confidence in refactoring and make us hesitant to touch code we dislike.

When the two of us frontend developers took over the project, there was neither documentation nor tests; we were only given some information on how to run the project. We arrived very ambitious, decided to tidy everything up, and made many mistakes with great confidence.

The project I work on involves complex processes such as changing the address of an order, defining coupons, managing return and cancellation flows, querying order status, and handling missing-product and cargo issues. Each of these processes varies depending on the country the order ships to and the customer profile. For example, the required fields and validations for an address change are completely different in the Gulf region, the requirements for an order in Germany differ again, and the rules for the Netherlands and Azerbaijan are different still. We need to ensure that the changes we make in a task apply only to the correct country and profile, which makes writing tests mandatory if we want to reduce tester effort and human error.
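
A table-driven unit test is one way to pin such country-specific rules down. The sketch below is purely illustrative: getRequiredAddressFields and the field lists per country are hypothetical, not our actual rules:

```typescript
// addressRules.test.ts — a hypothetical, table-driven sketch (Jest-style);
// the module and the required fields per country are illustrative.
import { getRequiredAddressFields } from './addressRules';

const cases: Array<[string, string[]]> = [
  ['DE', ['street', 'houseNumber', 'postalCode', 'city']],
  ['NL', ['street', 'houseNumber', 'postalCode', 'city']],
  ['AE', ['area', 'landmark', 'phone']],
  ['AZ', ['district', 'street', 'phone']],
];

it.each(cases)('requires the correct address fields for %s', (country, expectedFields) => {
  expect(getRequiredAddressFields(country)).toEqual(expectedFields);
});
```

Once each rule is a table row, adding a new country or profile is a one-line change, and a regression in any single market fails loudly.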

Starting to Write Tests with Artificial Intelligence

After a tiring refactoring period, recent months brought a calmer backlog that left room for test writing. With the pilot use of artificial intelligence starting on our team, we could finally make the test-writing process efficient; it was the right time to start writing the tests we had been postponing for so long. After a few different trials, I settled on Cursor with the Claude 3.5 model, which proved very successful both at generating test cases and at writing the code for them.

I must say this: the slower the AI’s responses, the more time we lose. After having it write the code, fixing its errors or chatting our way to the correct version can take a while, and in complicated cases we lose a lot of time getting the AI to do the right thing. Because responses take so long to arrive, I often switch into “I can do this faster myself” mode and proceed manually. As faster models shorten these loops, however, having AI write everything will become more efficient.

In the end, the work became much faster and more reliable when I left the input and the groundwork to the AI and did the refinement and corrections manually. So, heading into 2025, my strategy is to have AI lay the foundation for a large task and then continue with iterations and fixes myself. This way I both gain speed and increase the reliability of the result.

Unit Testing Insights With AI

While writing unit tests in our React project, the AI was quite successful with simple components but struggled with more complex ones, especially components built on the Canvas API and a Tooltip rendered through a React portal. It was extremely inadequate at analyzing these kinds of special mocks and scenarios and writing correct tests.
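
For context, this is the kind of hand-written setup such components need. A minimal sketch, assuming Jest with jsdom, where the mocked methods are only the ones a hypothetical chart component might call:

```typescript
// jest.setup.ts — a minimal sketch assuming Jest + jsdom. jsdom does not
// implement canvas, so getContext must be mocked by hand before
// canvas-based components can render in tests.
Object.defineProperty(HTMLCanvasElement.prototype, 'getContext', {
  writable: true,
  value: jest.fn().mockReturnValue({
    // Only the methods our hypothetical chart component calls:
    beginPath: jest.fn(),
    stroke: jest.fn(),
    fillRect: jest.fn(),
    clearRect: jest.fn(),
    measureText: jest.fn().mockReturnValue({ width: 0 }),
  }),
});
```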

Nevertheless, these problems were usually due to a lack of training data. I would face similar difficulties writing those tests myself, because I would need to piece together many different sources to arrive at the correct approach. In summary, AI is very fast and successful with standard test scenarios, but complex situations may still require manual intervention. Through this process, with AI’s help, we raised our test coverage from 17% to 94.9% by writing approximately 500 unit tests.
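
The portal-based Tooltip is a good example of such a case. The subtlety is that a portal renders its content outside the component’s DOM tree, but Testing Library’s screen queries search the whole document, so that content is still reachable. A minimal sketch, assuming React Testing Library with jest-dom and a hypothetical Tooltip component:

```tsx
// Tooltip.test.tsx — a minimal sketch assuming React Testing Library,
// @testing-library/jest-dom, and a hypothetical Tooltip component.
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { Tooltip } from './Tooltip';

test('finds tooltip content rendered through a React portal', async () => {
  const user = userEvent.setup();
  render(
    <Tooltip content="Order can be cancelled">
      <button>Cancel</button>
    </Tooltip>
  );

  // screen queries run against document.body, so content the portal
  // renders outside the component tree is still found.
  await user.hover(screen.getByRole('button', { name: 'Cancel' }));
  expect(await screen.findByText('Order can be cancelled')).toBeVisible();
});
```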

Playwright Testing Insights With AI

For business-oriented changes, new screens, and design updates, we decided to use Playwright. By mocking backend requests with Playwright, we focused on testing the most frequently used pages, the core business processes, and the most error-prone areas: for example, order search, order viewing, and the actions customer representatives can take. When we asked AI to write these tests, it created smart scenarios but could not write the tests correctly. The biggest reason was that the in-house libraries and components we use were not in the AI’s training data, so it could not fully understand how to use them and therefore could not produce correct tests.
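
For reference, the kind of mocked test we were aiming for looks roughly like the sketch below, where the /api/orders endpoint, the selectors, and the payload are all illustrative rather than taken from our real screens:

```typescript
// order-search.spec.ts — a hedged sketch of the mocking approach;
// the endpoint, payload, and UI selectors are hypothetical.
import { test, expect } from '@playwright/test';

test('a representative can search for and view an order', async ({ page }) => {
  // Intercept the backend call and return a canned payload,
  // so the test exercises only the frontend.
  await page.route('**/api/orders*', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ orders: [{ id: '123', status: 'Shipped' }] }),
    })
  );

  await page.goto('/orders');
  await page.getByPlaceholder('Order number').fill('123');
  await page.getByRole('button', { name: 'Search' }).click();

  await expect(page.getByText('Shipped')).toBeVisible();
});
```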

Still, at each step the AI was doing fundamentally correct things; all we had to do was make the tests it wrote work with our libraries. So I first wrote the common methods to be used across all tests myself and collected them in a single file, then wrote the first tests with those methods. Next, using the same methods, I built all the tests for one reference screen and told the AI: “Look, I wrote this file correctly. Don’t write it the way you know; write it the way I did.” The AI took the reference file and started writing the remaining tests successfully.

An example of a helper method we force the AI to use when selecting an item from a dropdown.
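
As a stand-in for that example, here is a hypothetical sketch of what such a shared helper might look like; the function name, the test-id convention, and the role queries are illustrative, not our in-house API:

```typescript
// helpers/dropdown.ts — a hypothetical reconstruction of a shared helper;
// the naming and selectors are illustrative, not our actual in-house API.
import { Page, expect } from '@playwright/test';

export async function selectDropdownItem(page: Page, dropdownTestId: string, itemText: string) {
  // Open the dropdown through its trigger element.
  await page.getByTestId(dropdownTestId).click();

  // The UI library renders options into an overlay outside the trigger,
  // so the option has to be queried from the whole page.
  await page.getByRole('option', { name: itemText }).click();

  // Confirm the selection is reflected back on the trigger.
  await expect(page.getByTestId(dropdownTestId)).toContainText(itemText);
}
```

The point of the reference file was to make the AI call helpers like this instead of improvising its own locators for every screen.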

Although it initially made some mistakes, they were usually the same kinds of mistakes. By improving my prompt with extra warnings such as “You keep making mistakes like X, Y, and Z; you fix them when I warn you, so now do it right in one go,” I got more accurate results. At the end of this process we were producing pretty good tests for everything except complex scenarios. Even though I wasn’t fully focused on this work, I managed to have 140 tests written over a total of 3–4 weeks.

Conclusion

Accelerating test writing with artificial intelligence makes the process itself more efficient. Test scenarios a developer would otherwise write by hand are generated automatically, so the test-writing phase is significantly shortened and carried out more accurately. Every new condition is quickly analyzed by the AI and appropriate tests are generated for it. This greatly speeds up development, because the manual test-writing phase, including much of the thinking about possible scenarios, is largely eliminated.

How necessary manual tests are in a frontend project varies with the nature of the project and the complexity of its business requirements. However, accelerating the testing process with automation brings significant efficiency to software development. Artificial intelligence greatly reduces the burden of the test-writing phase, freeing test experts to focus on more strategic and valuable work. Project teams can thus build higher-quality, more reliable software while spending less time.

Tests are the insurance of a project, and good insurance means resolving a large portion of potential risks in advance. The AI-supported test writing process not only ensures that tests are written faster but also makes the software more robust and error-free. This ensures that the software becomes safer and of higher quality before reaching the end-user.

In the coming period, we will continue to have our non-complex tasks verified by the tests we write rather than by our QA colleagues. By doing this with artificial intelligence, we aim both to reduce developer effort and to lighten the QA testers’ workload. We have already started to see the benefits of this practice, although it is too early to extract statistical data; perhaps in the future we will publish an article sharing the results.

Thank you for reading. If you have any questions, you can reach out to me or anyone on the Customer Services team.

