How we test at PromoFarma by DocMorris

Tech at PromoFarma by DocMorris
7 min read · Apr 4, 2023

At PromoFarma, we value technology as a driver of ongoing learning: it enables rapid iteration, effective communication, and innovative products, and it boosts collaboration and productivity in the workplace. This article highlights the importance of embracing failure as a crucial part of the learning process in technology development and offers guidance for tech teams.

We have improved our testing a lot in recent years. At the beginning, for example, no shortcuts were applied and every flow was executed through the UI, so login was the most tested feature in both our web and mobile technologies. The QA Team used Cypress and TestCafe for the web; for mobile, we wrote Python and Appium scripts and ran them on a real-device cloud provider. Our development teams, of course, write their own unit and integration tests, and even functional tests using Behave. We have also moved from monolithic applications to (micro)services, with GraphQL as an orchestrator.

We even introduced a new kind of testing: contract testing with Pact, to verify the pacts between our APIs and GraphQL. As a QA Team, we had hardly ever worked on automated backend testing; certainly, we tested our APIs with Postman or similar tools. Everything seemed to be working as expected, but there were aspects that could be improved. In this article, you will find what we have changed.

Every project has different requirements for testing. I will refrain from going into detail on every single project but rather focus on what we changed in general and what we have learnt.

We are definitely still improving our tests and our test strategy, and a long way remains ahead of us. As mentioned, ours is a culture of learning and moving forward; the bumps in the road are not enough to deter us from making strides in our work.

Here we go!

Web Testing

At the very beginning we mostly used Cypress and, in some cases, TestCafe. Regarding the latter, I have to admit we used it only for what could not be done with Cypress at that time, for example, multi-tab support (https://docs.cypress.io/guides/references/trade-offs#Inside-the-browser). All the automated flows were executed through the UI, and login was the most tested feature in most of the projects, as I mentioned. We changed the way we were writing our tests. Here are the points:

  1. As mentioned above, we used to drive all flows through the UI: if we needed to be logged into the application to test the checkout feature, we would log in via the UI. This is the first thing we changed. The login feature is still tested through the UI, but only in its own feature; for every other feature that needs a logged-in user, we log in via the API. We started to apply shortcuts.
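The login shortcut can be sketched as a small helper. The endpoint, payload, and token field below are illustrative, not our real API; in Cypress this logic typically lives in a custom command built on cy.request. The transport is injected here so the flow can be exercised without a live backend.

```javascript
// Hypothetical sketch of an API login shortcut (endpoint and payload are
// illustrative). The `request` function is injected so the flow can be
// tested without a real server; in Cypress this would be cy.request.
async function loginViaApi(request, { email, password }) {
  const res = await request('/api/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, password }),
  });
  if (!res.ok) throw new Error(`Login failed with status ${res.status}`);
  const { token } = await res.json();
  // The token would then be stored as a cookie or header so the UI test
  // starts already logged in, skipping the login screen entirely.
  return token;
}
```

Each UI test for checkout, cart, and so on then starts from an authenticated state in a single API round trip instead of replaying the login form.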
  2. Pipeline optimization. This was a big challenge and we took several steps to get to where we are today. All members of the team were constantly complaining about how long the tests took to execute, and we could not parallelize as much as we wanted at the time. With the changes described below, we were able to reduce pipeline execution time by about 20 minutes:

a. We had retries in our Cypress configuration file. This is controversial, but we decided to eliminate them. Retries only add time to the pipeline when a test fails and, in our opinion, mask flaky tests. We would rather have a failure in our pipeline and invest time in solving it, although sometimes it is very hard to find the cause of the flakiness. You can find information about flakiness and remedies in these two posts by Google: https://testing.googleblog.com/2020/12/test-flakiness-one-of-main-challenges.html and https://testing.googleblog.com/2021/03/test-flakiness-one-of-main-challenges.html
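Disabling retries is a small configuration change. A minimal sketch of a cypress.config.js (Cypress 10+ syntax) with retries turned off in both modes:

```javascript
// cypress.config.js — retries disabled so flaky tests fail loudly
// instead of silently adding time to the pipeline.
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  retries: {
    runMode: 0,  // headless runs in CI
    openMode: 0, // interactive runs with `cypress open`
  },
});
```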

b. Test data management. To keep reducing pipeline execution time and to parallelize jobs, we need to generate dynamic synthetic data so our tests always have “clean data” to use. After a test is executed, all data it created is deleted from the database. How? Simple: we use our API endpoints for that purpose. For tests that don’t use dynamic data, we have fixed synthetic data created with help from the whole engineering team. Thank you, team! This fixed data is inserted into a clean database that contains only the minimum data needed for the tests (typologies, for example, and our fixtures). The synthetic fixed data can be users, pharmacies, products, orders… everything we need to run our tests in our test environments, which are created on demand. We also have another environment called staging, a production-like environment with its own database. That database is restored every weekend from an encrypted and anonymized copy of the production database, and after the anonymized data is loaded, a task inserts all our fixtures.

3. Depending on our customers’ usage statistics and on the department, we consider changing the automation tool. In some projects we are now using Playwright instead of Cypress. The main reason is that most of our customers use Safari more than any other browser, and Cypress’s support for Safari (WebKit) is still experimental, whereas Playwright supports WebKit out of the box.
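Covering a Safari-like engine in Playwright is a matter of declaring a WebKit project. A minimal playwright.config.js sketch (the device descriptors are illustrative choices):

```javascript
// playwright.config.js — a WebKit project gives coverage of a
// Safari-like engine alongside Chromium.
const { defineConfig, devices } = require('@playwright/test');

module.exports = defineConfig({
  projects: [
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
  ],
});
```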

Mobile Testing

Some time ago, all automated tests were launched on the same device, although manual testing was done on different devices, including the physical devices we had. We don’t have a device farm in our office, so we use a real-device cloud provider. Back then, all of our mobile automation tests were written in Python + Appium. We have changed a lot since then. Here is a list of what we changed and learnt:

  1. We have built our mobile test coverage from usage statistics, as we did for the web. At the moment, we have chosen 3 mobile devices per platform, each with its most commonly used operating system version, and we run the automation suite on these selected devices. Increasing mobile coverage is something we will keep improving in the coming months.
  2. Test scripts were migrated from Python to WebdriverIO. The reason is basically that we write our functional tests for Cypress and Playwright in JavaScript and TypeScript, and we have more people with in-depth knowledge of JS/TS at the company than of Python. If we had continued using Python, only a few people would have been able to help us when we got stuck on a problem.
  3. We have included Cucumber as a BDD tool. This made our tests more readable for our business colleagues.
  4. We have modified the pipeline so that there is now a separate job for every single feature. This allows us to relaunch a job if necessary (of course, if a test fails, we still need to investigate why and where the problem is). With this change, we can parallelize all the feature executions, so execution time is reduced considerably.
  5. As a first step, our pipelines connected to Nexus to retrieve the app under test automatically and upload it to our real-device cloud provider. Now Nexus is out of the equation and we get the app directly from the pipeline.
  6. Our apps are now hybrid, so we do not need to test everything on real devices. We have decided to run automated tests with WebdriverIO and Appium only on the native parts and on the transitions from native to webview. Everything that should be tested inside the webview is covered with Cypress in our web functional test repository, adapting viewports to the most common resolutions our customers use. Screen resolution is something we can retrieve from the usage statistics, as you can imagine.
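The device selection from point 1 translates into the capabilities WebdriverIO sends to the cloud provider. The device names, OS versions, and fields below are illustrative, not our actual matrix:

```javascript
// wdio.conf.js (fragment) — one Appium capability per selected device;
// the cloud provider runs them, and device/OS choices come from usage stats.
exports.config = {
  // ...provider credentials, specs, framework: 'cucumber', etc.
  capabilities: [
    { platformName: 'iOS', 'appium:deviceName': 'iPhone 13', 'appium:platformVersion': '16' },
    { platformName: 'iOS', 'appium:deviceName': 'iPhone 11', 'appium:platformVersion': '15' },
    { platformName: 'Android', 'appium:deviceName': 'Samsung Galaxy S22', 'appium:platformVersion': '13' },
  ],
  maxInstances: 3, // run the selected devices in parallel
};
```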

Contract Testing

Like many other companies, we have services that need to be tested. This kind of testing is something we implemented in our SDLC only recently; currently, we have 164 pacts in place. We have faced some challenges since we started. Here are the main points:

  1. At the beginning it was very hard to change our mindset and move from functional testing to contract testing. As QA engineers, we had hardly ever worked at the backend level, so we needed a new approach and had to learn how to do it.
  2. Not all of our contract tests are integrated into the pipeline yet, but that is something we will do in the coming weeks.
  3. This kind of testing helps us understand how a consumer and a provider work together, and it has given us a different perspective on our backends.
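Pact does the heavy lifting for us (matchers, a broker, provider verification), but the core idea can be illustrated in a few lines: the consumer states the response shape it relies on, and the provider's actual response is verified against it. This is a simplified stand-in for what Pact does, not one of our real pacts, and the product fields are hypothetical.

```javascript
// Minimal illustration of a contract check: verify that a provider
// response carries the fields, with the types, the consumer depends on.
// Pact implements this idea properly with matchers and a broker.
function verifyAgainstContract(contract, response) {
  return Object.entries(contract).every(
    ([field, expectedType]) => typeof response[field] === expectedType
  );
}

// Consumer-side expectation for a hypothetical product endpoint.
const productContract = { id: 'string', name: 'string', price: 'number' };
```

If the provider drops or retypes a field the consumer needs, the check fails before any end-to-end test runs, which is exactly the feedback contract testing gives us earlier and cheaper.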

And that’s all! We will continue improving our test scripts and strategy and most probably include new types of testing or, at least, new tools.

Conclusion

We have evolved a lot since we started with test automation: we have introduced new kinds of testing such as contract testing, adopted new tools such as Cucumber, Playwright, and WebdriverIO, and improved our pipelines… But a long way still remains.

Therefore, we value your input and opinions on this matter. Please let us know if there are any types of tests you believe should be added or removed, or if you have any suggestions for improving our testing process. Your feedback is paramount in helping us create the best possible product for our customers.

By Sara Martín.
