Testing at Badoo: in broad strokes

Badoo Tech · 26 min read · Aug 17, 2017

In the past, we have talked extensively about how we write autotests, what technologies we use, how we help developers to boost the performance of unit tests, and so on. But we have never before written about the overall strategy of the entire testing process, including manual testing. It’s time to fill this gap.

There is a variety of testing strategies available. They depend on many factors: the selected technologies, the business focus, the application logic, the company’s culture, and much more. What works well for embedded systems may not be suitable for mobile applications, and what works well in accounting cannot always be easily adapted to the production of airplane software.

Some take a very thorough approach to documenting anything and everything, while others feel that the code should be easy to read — and that’s plenty. I would maintain that most of them are right: if the methodologies and practices adopted by the company prove to work, then that is exactly what the company needs.

The same goes for Badoo: many of the approaches that we use work well precisely in the context of our company, within our culture, and in our post-start-up world, where, due to the company’s explosive growth, we have stepped on a whole load of different rakes and got thwacked in the face many times. I’m incredibly pleased that much of what we adopted as the foundation – the fundamental values – at the very beginning is still working well and has proved perfectly scalable.

I will tell you about the process using the example of one of our teams — the Mobile Web Team. This platform lies at the junction of web and mobile: in a mobile browser, we download a complete HTML5 application, which communicates with the server using a special protocol. By the way, all of Badoo’s other client applications, including Desktop Web, interact with the server in a similar fashion.

Firstly, this process is more or less the same for all the teams that create a product (with only a few exceptions). Secondly, using a concrete example will make it easier for you to follow my description.

Process

As with many other Badoo teams, a Mobile Web task begins with a PRD (Product Requirements Document). This is a document put together by the product manager in which s/he describes how the requested change should look. Whether it’s new functionality or a change in the behaviour of existing functionality, we use the term “feature” to denote all of this. The PRD contains interface designs from the designers, the business logic, analytics requirements for after the launch of the feature, and much more. It forms the basis for the interaction between the product manager and the developer.

Next, the team’s technical lead parses the incoming requirements and gives the document to the developers in full or in parts (if the feature is very large). From this moment on, the feature acquires an owner — the micro project manager, who is responsible not only for the implementation of the functionality, but also for adhering to a set deadline, and who at the same time interacts with other teams in the process of its implementation, if necessary (coordinating the particulars of the PRD, design, etc.).

If there are several people working on the project, then one of them, usually the most experienced person on the team, acts as this micro project manager on the development front. The aim is to make sure the project does not suffer from having too many cooks in the kitchen. Basically, this is how we try to avoid a situation of collective responsibility. After all, if there are things for which everyone answers, then, de facto, no one answers for them.

Before development begins, we come up with a plan: a general scenario of how the feature will be developed, tested and released; what business metrics may be required for analysis after the launch; what experiments may be needed before the final launch; and so on. This plan is approved by the product manager at a special meeting called a KickOff: the general outline is evaluated; various nuances are clarified and corrected, if necessary; and the go-ahead is given for implementation according to the plan.

After that, the developer prepares a technical plan (either independently or with the help of colleagues and the manager). This is, in fact, the same plan that was approved earlier, but elaborated with the technical implementation of each stage: how the required functionality can best be integrated with what already exists, what technologies and mechanisms should be used, in what order all of this will be released, and so on. It is at this stage that a reasonably predictable completion deadline for the implementation takes shape. The developer determines the deadline, gets the manager to approve it, and then strives to adhere to it without fail.

Obviously, the deadline in this case is understood as the date when the new functionality will become available to the user: this is not “I’ll need three hours to program this”, but rather “the completed task will be posted on August 3 in the morning release”. Naturally, to determine such a deadline, one should take into account a wide range of nuances and communicate with all those who will take part in the process, think over the dependencies (especially external ones), coordinate deadlines and resources with other departments, and, of course, factor in the testing time as specified by the testers.

At this stage, the technical lead in QA estimates the testing time: the time required to test one iteration (without taking into account any re-openings), in other words, literally, how many working hours are needed to assess the quality of the current task. Why one iteration? It’s simple: because we can’t predict how many bugs there will be and how many times we will have to fix them.

Understandably, it’s difficult to control a deadline set in the distant future. Therefore, we use the Situation field in tasks to track the current situation and adjust deadlines with different frequency for different teams. It’s important to remember that when changing or specifying the deadline, one should record the reason for this, so as to be subsequently (in a retrospective review, for instance) able to carry out an analysis of the project and give a more accurate forecast the next time around.

Only after this stage can the developer start programming.

After the developer has completed the implementation of the feature, or part thereof, and believes that it is ready, s/he organizes Visual QA. This is a special meeting with the product manager, during which the developer demonstrates what s/he has done. The product manager can accept the feature or specify some requirements, if necessary (in which case, the feature goes into revision, and all the steps are repeated). At this stage, we also guarantee that the developer has him/herself tested at least the positive scenario of using the application and fixed the bugs, if any. Otherwise, what would s/he show the product manager?

It’s only after a successful Visual QA with the product manager that the task is sent on to Code Review. Why not before then? Because if the product manager has additional requirements, we would have wasted the time of the other participants in the process: the reviewer, testers and so on.

Code Review is a very important stage in the quality assurance process. Ideally, at this stage, the reviewing developer not only analyses the code for design and adherence to general conventions, but literally “tests” it with his/her eyes and head, walking through the scenario programmed by the other developer. An additional “fresh” look helps to avoid a great number of basic errors.

The next step in the process is QA. Our testing consists of several stages in different environments and includes manual and automated testing of different levels and elements of the system (below, I’ll discuss how we conduct testing in more detail).

And, finally, the release of the feature. For many tasks, this is not the final step, as there may be further modifications, A/B tests, user behaviour analysis and feature optimisation, a retrospective review and an analysis of the feature’s utility for the business and the application as a whole. Some features “don’t take off”. We modify them or remove them from the applications. This is normal practice. And those features that successfully pass all the previous stages become the core functionality of our applications.

This is the comprehensive scheme of the described process.

Testing

From the description of the process, it is clear what the feature’s life cycle looks like and which stages it includes. And, in my experience, most of these stages are understood (more or less) correctly by all participants in the process. What the PRD looks like, how tasks are distributed, how the code review is carried out, and so on — it’s all clear, and many use it in their teams.

But when it comes to QA, chaos ensues. Various people at various levels often have absolutely outlandish ideas about what “those weird guys from QA” actually do. When their work is understood as “they are doing something there, they poke and click, and then they bring us bugs”, that’s still okay. Sometimes, it happens that the developer himself/herself carefully checks the results of his/her work and declares, “I don’t need testers — I am confident in the quality of my product”. This is a rare case.

I’ve also run into situations where developers thought that the testers “find too many bugs and don’t let us release the product”. Other times, the developer says, “Find me absolutely all the bugs, we fix them, and we’re done” or “You check it, since you know better than me how our product works”. The question immediately arises: how did you write the code then if you don’t know how it works?

In general, very few people actually understand how the testing process works. Let’s try to gain clarity on this process.

What is quality?

First of all, we all need to agree that finding all the bugs is simply impossible. Even the most stubborn individuals agree with this axiom. It’s common sense.

If you imagine a “bug diagram” over an interval of time, you will get something like this:

At first, the number of bugs found (B) is small — this is while we’re getting acquainted with the system or prepping the environment. Then it can even grow per unit of time (t) once we’ve come across a “problem” section of the application. But at some point, whatever mechanisms and methods we use, we find fewer and fewer bugs. Even as time goes to infinity, not all the bugs in the system will have been found.
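To make the shape of this curve concrete, here is one simple illustrative model (my own assumption for the sake of the argument, not a formula we actually use): the cumulative number of bugs found saturates towards an unknowable total.

$$B(t) = B_{\infty}\bigl(1 - e^{-\lambda t}\bigr), \qquad \frac{dB}{dt} = \lambda B_{\infty} e^{-\lambda t} \to 0 \text{ as } t \to \infty$$

Here B∞ stands for the total number of bugs in the system and λ for the rate at which testing uncovers them: the number of bugs found per unit of time keeps falling, and B(t) never actually reaches B∞, which is exactly the picture described above.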

One can imagine a situation where we are not limited by time and have unlimited resources, but it’s already clear from this formulation that this situation is exceedingly artificial: there are too many assumptions and no connection to the harsh reality. In the real world of start-ups and stiff competition, most seek to make a profit in the shortest amount of time, and the task is more likely to be this: find as many bugs as possible in as little time as possible.

A tremendously important concept here is the speed of finding bugs: S = B/t. It’s somewhat nominal, but many strive to optimise it straight away, presumably because it’s intuitively clear. This gives rise to things such as smoke testing and automated testing (yes, not only for regression testing); tools and methodologies are being developed to more accurately identify potentially “high-risk” aspects of products (equivalence classes, for example: http://istqbexamcertification.com/what-is-equivalence-partitioning-in-software-testing/) and, most importantly, to give the fullest possible evaluation of the quality of your product as soon as possible.
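As an illustration of how equivalence partitioning raises S, here is a minimal sketch (the age limits and function name are hypothetical, not Badoo code): instead of trying hundreds of input values, we test one representative per equivalence class plus the boundaries.

```python
import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical validator: the service accepts users aged 18 to 80 inclusive."""
    return 18 <= age <= 80

# One representative per equivalence class (plus boundaries) instead of
# exhaustively checking every possible age: far fewer checks, same coverage of behaviour.
@pytest.mark.parametrize("age, expected", [
    (17, False),   # class: below the minimum (boundary - 1)
    (18, True),    # boundary: minimum accepted
    (45, True),    # class: any valid age in the middle
    (80, True),    # boundary: maximum accepted
    (81, False),   # class: above the maximum (boundary + 1)
    (-5, False),   # class: nonsensical negative input
])
def test_age_equivalence_classes(age, expected):
    assert is_valid_age(age) == expected
```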

And since we have agreed from the start that all bugs can’t be found, and our time is limited, it’s obvious that somewhere on the diagram there should be a point of intersection between B and t, which would show the current state of the product’s quality in order to answer the question: have we done enough testing or do we need to do more?

So what is it, this ideal value β = ƒ(B,t)?

As it turns out, there is no such thing as an ideal value — it’s different for every project. Moreover, it varies from task to task even within a single, well-coordinated team. It depends on a great number of external conditions, from the implementation technologies and the culture within each specific team, to marketing activities, deadlines, and the customer making a call and saying “That’s good enough, let’s call it done”.

It would be sadder still if there were no minimal “universal” value of β at all. But there is one, and it has a clear and simple definition: “The product quality is good if the user is ready to buy it.” I don’t necessarily mean purchasing with money, but something more general: if the user is ready to use your product, if s/he is ready to open it again after it crashes and continue using it, then a good β has been achieved.

After that, however, many additional conditions come into play. Does your product have competitors in the market? What category of users will use your product? Are you ready to invest additional funds in perfectionism? Will there be a press conference where you’ll unveil your new super-idea? When will the load increase? And so on.

Who “generates” the quality?

You may have noticed that from the beginning of the previous section, I have never once mentioned the testers. I did it on purpose, because any necessary level of β is quite attainable, even when the company has no such structure as QA. Not to mention the minimum level of quality — the developer alone can provide this.

In the Mobile Web team, the achievement of this minimum β is additionally controlled by a clever move: Visual QA. Before handing the task over to the next participants in the process, the developer must come to the product manager in person and demonstrate the result of his/her work. So the first meticulous user of the product is the customer him/herself, the one who wrote the PRD.

An additional bonus from communicating with the product manager at this stage is that whatever is nonessential may be trimmed off. For example, for the first launch of a new idea, the product manager may well be willing to test the concept — in other words, not a beautiful interface polished to perfection down to the last pixel, but an acceptably functioning “semi-finished product” that sufficiently demonstrates the idea’s capabilities. In the Visual QA process, the readiness criteria can be detailed and adjusted. One just has to make sure to then reflect this in the PRD, so that the other participants in the process are not unduly caught by surprise.

When I came to Badoo, we immediately agreed that in our company the developers are the ones responsible for quality. This is an excellent principle, still regularly reiterated to old employees and relayed afresh to new ones. And in many discussions, this argument helps me convince people at various levels that we need to do things this way, and not any other.

But why developers? Why do we then need testers at all? Let’s get to the bottom of this.

First of all, the testers don’t make bugs: either there are bugs in the product or there aren’t. You can try to reduce their number by ameliorating the process: you can improve the engineering culture, use unified rules and recommendations (code formatting is a vivid example, redundant as it may seem at times (tabs or spaces?)). But the initial competent planning and architecture of the future project affect the quality of the final product in a colossal way.

At any rate, developers are the ones who directly work on the code, and thus it depends on them whether there will be bugs or not. Subsequent testers can only spot them. Or they may overlook them, even if they employ all the most fashionable approaches and the latest versions of the tools.

The tester is not unlike a spotter for an acrobat in the circus. The acrobat does all the hard work, twirling around on the trapeze, while the spotter just stands there and “does nothing” (just like a tester). The acrobat can perform his number perfectly well without the spotter, but it’s much safer with the spotter, for he knows that in case of an error, the spotter won’t let him fall. This is what they mean when they say, “We are one team, working together on the same thing,” etc. The team is indeed one, but all the responsibility and, most importantly, the decision on whether the safety spotting is required lies on the shoulders of the acrobat. In our case — on the shoulders of the developer.

In addition, by placing the responsibility for quality on the developer, we also avoid a situation that can at times get too cushy and easy to exploit: blaming others. “Whose fault is it?” — “Vasya’s. Because he didn’t find my bug.” In fact, placing blame is pointless. It won’t solve the problem. We need a constructive approach. And the constructive approach is this: what must we do next time to make sure this doesn’t happen again? The answer to this question should be given by the developer him/herself as the primary source of the problem. What interfered with his/her performance this time, and can we make sure that it doesn’t interfere again in the future? It’s crucial to find a dependable solution. The solution of “asking Vasya to pay more attention next time” is no good. It provides no guarantee: we are all human, and next time Vasya might make mistakes just the same way as this time. On the other hand, a solution à la “cover this area with an autotest” or “rewrite the method so that it accepts parameters of only a certain type” can be very effective.

Thus, testing should be perceived as an indicator, as an additional tool in the developer’s extensive toolkit, which can help him/her answer the question: is the code ready for production or not yet? Blaming the protractor for the fact that you have incorrectly measured the angle is at the very least not very constructive.

How does the testing process work?

And so, the task has successfully passed all the previous stages of the process and got to testing. What’s next? This question often arises not only among people who don’t directly participate in testing, but also among the specialists themselves. Especially after taking some class that gave a very colourful presentation of the types of testing, the methodologies, black/grey/white boxes, unit, integration and system testing, etc. How does one organise the inspection? Where does one begin?

Another important nuance is to decide when to return the task for revision. After the first bug? Or after the tenth? Or maybe after completing the testing of all the scenarios? Obviously, from the business point of view, you also want to keep the value of β at an optimal level (finding the maximum number of bugs in the minimum amount of time).

Some companies use a fixed set of test scenarios. Some even have test analysts who write these scripts and then check them, either themselves or with the help of other (often less qualified) testers.

Such scenarios look like a sequence of steps and a list of the results they should lead to. Scripts are often written in the Given-When-Then form (https://martinfowler.com/bliki/GivenWhenThen.html), but that isn’t required. Go into a particular section of the menu as a user with admin privileges, click the green button and get to a certain screen — check that it displays “Hello, world”. A sketch of such a scripted check is shown below.
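Here is roughly what that scripted scenario might look like written down as an automated check (a sketch only: the page object and its methods are hypothetical stand-ins, not our real framework):

```python
# A scripted scenario in Given-When-Then form, expressed as a test.
# FakeMenuPage is a hypothetical stand-in for a real UI driver / page object.

class FakeScreen:
    def __init__(self, text):
        self.text = text

class FakeMenuPage:
    """Stand-in for a real page object backed by a browser driver."""
    def login_as(self, role):
        self.role = role
    def open_section(self, name):
        self.section = name
    def click_green_button(self):
        return FakeScreen("Hello, world")

def test_admin_sees_hello_world():
    page = FakeMenuPage()
    # Given: a user with admin privileges is in the required menu section
    page.login_as(role="admin")
    page.open_section("settings")
    # When: the user clicks the green button
    screen = page.click_green_button()
    # Then: the resulting screen displays the expected text
    assert screen.text == "Hello, world"
```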

This approach can be justified if you opt to save money on staff. You can recruit people with very little experience of working with computers; and for mobile apps you can even hire “off the street”, since almost everyone has a smartphone.

At the same time, this approach has a number of shortcomings. Obviously, as far as ensuring an optimal β goes, it is counterproductive. The time needed to run through the scripts is constant for a given test session, and as new scripts appear on the list, it only increases. In addition, the approach has flaws rooted in human psychology: on the one hand, it narrows the angle of view to what has already been described, so even elementary things are missed simply because they are not stated in the form of scripts. On the other hand, people have a tendency, when checking one script, to mark similar ones as also verified (“I just checked authorisation by e-mail, and it worked. Why do I need to check authorisation by phone number? We’ve never had a problem with that over the past hundred launches.”)

In one of my previous companies, the approach to re-opening tasks was this: found a bug — reopen the task. We even had a formal regulatory document drawn up by my boss, containing the list of things after which the task had to be reopened:

1. The code in the task isn’t up to the coding standards? Reopen!

2. There are no unit tests? Reopen!

3. The unit tests don’t work? Reopen!

4. The text on the screen is phrased differently from the one in the PRD? Reopen!

5. The product interface doesn’t match the mock-up? Reopen!

6. And so on.

This approach is also less than optimal from the point of view of the parameter β. We increase the total testing time for the task because, after each flaw, we add extra developer work time. Time is squandered on diverting the developer’s focus from the task s/he is currently working on, on waiting for the task to move up the queue of all the other tasks, on one more round of Code Review, and on many other, not always justified, interactions. Furthermore, when the task is transferred back for testing, it has to be tested all over again, including repeating all the verifications that have already been done, which means additional working hours for the tester. Therefore, the time “t” required for the entire testing process is doubled, tripled, and so on, with each reopening of the task. And if predefined test scenarios are used as well, it adds up to a serious detriment.
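A quick back-of-the-envelope illustration (the numbers are invented purely for the sake of the example): suppose one full testing iteration takes 4 hours, each reopening costs the developer 2 hours of fixes and context switching, and each fix adds another hour of re-review.

```python
# Invented numbers, purely illustrative: how quickly reopenings inflate total time.
TEST_ITERATION_H = 4   # one full manual test pass
FIX_AND_SWITCH_H = 2   # developer fix + context switching per reopening
RE_REVIEW_H = 1        # extra Code Review round per reopening

def total_time(reopenings: int) -> int:
    """Total hours from first handover to QA until the task finally passes."""
    # The task is tested (reopenings + 1) times in full, and each reopening
    # adds the fix and re-review overhead on top.
    return (reopenings + 1) * TEST_ITERATION_H + reopenings * (FIX_AND_SWITCH_H + RE_REVIEW_H)

for n in range(4):
    print(f"{n} reopenings -> {total_time(n)} hours")
# 0 reopenings -> 4 hours, 1 -> 11, 2 -> 18, 3 -> 25: the cost grows with every
# reopening and quickly dwarfs the original 4-hour estimate.
```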

Therefore, in Badoo, we keep a close watch on the task reopening counter and try to have it continuously decrease. Reopening a task is expensive in terms of time spent by all participants in the process (although if you assess the situation only in terms of the testers’ convenience, this approach may look very tempting).

At the same time, the leader who demands the minimum number of reopenings from his subordinates without explaining the reason “why” is asking for trouble. In this case, a substitution of objectives may take place, where instead of eliminating the cause of the ailment we will treat the symptoms. Instead of improving the engineering culture and working out ways to not step on the rake next time, we may instead wind up in a situation where the numbers are becoming an end in themselves. It’s obvious how this would affect the quality of work: the developer is now trying to deceive the system. Instead of transferring the task to the tester, s/he comes to the tester with a request à la “Here, poke at it a bit, because if you reopen it, I will be punished”. In this case, indicator β also suffers: not only is the testing carried out in a non-transparent way, but it also becomes hard to keep track of. The time “t” can’t be determined. Basically, be careful with this.

Another large and well-known company found a clever way of reducing its time “t” to a minimum. There is a budget for “errors”, but there is no testing whatsoever. Any developer, from the first day of work, can push code to production and, after checking everything himself, crossing himself and praying to his gods, at a certain moment simply ships his product directly to end-users. After that, he, of course, keeps track of what is happening, and if something is broken, rolls back his changes, figures out the problems, and repeats the process. I have even heard that if the budget is not spent completely over a given period, the management reminds its employees that they need to “take more risks”.

I don’t presume to pass value judgments, since I have never seen the process first-hand from the inside. But I’m sure that for some types of business, with tolerant and thick-skinned management, this approach is quite sustainable. Moreover, this company’s vibrant market presence and overall standing of an industry leader would suggest that they are quite satisfied with where they are at.

In our company, we extensively use Exploratory testing as well as Ad hoc testing. This is when the tester studies a product or a feature during the testing process and uses previously accumulated experience and basic knowledge to determine which nooks of the tested product to look at and what actions to perform on them. As a result, professionals with that distinct tester’s sensibility and talent feel very much at home on our team; on the other hand, this rules out hiring people “off the street”. Our testers are seasoned professionals, prized on the market. This would probably be a disadvantage for companies that try to save costs on quality.

There is no predefined list of test scenarios. Instead, we use two approaches. Firstly, we automate everything as much as possible. Secondly, we use checklists. Autotests help us minimise the time “t” needed to test the functionality, especially regression, and checklists enable us to bear in mind the important parts of the product during exploratory testing. It’s important that checklists are not written in the format of “Go there, click on such-and-such button and check that a yellow dice pops up”, but rather serve as more of a reminder: “To check the 80-year-old user from Zimbabwe when searching for red cars”, “A girl sees comments hidden for boys” and “The authorisation form has changed — check it and clean the cookies”. These are all good memos that broadly and concisely point out the vulnerable parts of the application, allowing one to fully apply one’s imagination.

I recommend using the following pyramid to determine the correct time for reopening the task:

First, we check all positive scenarios. Does the application do what it’s supposed to do? Does it do this exactly as stated? After all, if the developer programmed a new shopping cart for an online store, and you can’t put anything in it, that means that the functionality doesn’t work, and the value of his/her efforts amounts to 0 pounds 0 pence, even if s/he worked all weekend late into the night to the point of exhaustion. The user is not interested in this kind of “product”, and in our highly competitive environment, s/he will leave and never come back.

Moreover, as you remember, in our company, the developer is held responsible for quality. Plus, we strive to keep the number of reopened tasks down to a minimum. Therefore, we require developers to independently verify positive scenarios. And that’s where Visual QA helps us yet again. Before transferring the task on to the next participants in the process (for code review, testing, etc.), the developer must come to the product manager and show him/her the result of his/her work. Obviously, the developer is interested in everything working as set forth by the product manager in the PRD.

Accordingly, if in the testing process we find shortcomings in positive usage scenarios, the task is reopened. This is the first reason to reopen the task.

Automated tests — if none are available for the task, or if they don’t pass — also constitute a stage of verification that can lead to the reopening of the task. Of course, the comprehensive list of tests includes a variety of scenarios, including negative ones. But automated tests take little time (and we are constantly working on optimising them) and can be run in parallel with the manual checks, which means they serve the overarching goal of optimising β very well even at this early stage. Therefore, it’s also up to the developer to ensure that the autotests pass on his/her task.

After testing the positive scenarios, we proceed to the negative ones. They occupy a larger area of the pyramid, so testing these scenarios may, likewise, require more time. At this stage, we check non-standard user behaviour. The user could have made a mistake somewhere in the intended purchase scenario, and the system let it go through. Or s/he may have accidentally clicked the wrong thing and was logged out. Perhaps the user entered her name in the phone number field, and the application crashed. Basically, here we try in every possible way to “break the system” and probe how “fool-proof” it really is. By the way, information security checks also fall under this category.

The approach to reopening the task here is as follows: we check the negative scenarios, collect everything we find in the ticket, and only then reopen the task.

When testing negative scenarios, we try to use common sense and check all the most likely scenarios. As for the next part of the pyramid — Corner Cases, as we call them — the line between these and negative scenarios is not so obvious. This category should include very specific instances from the user’s point of view, sometimes even debatable. For example, checking if the option Exclusive touch for iOS is set. Or rotating the screen several times and returning it to a screen orientation different from the one in which the screen originally opened — for Android. This type of testing is very time-consuming and for some companies prohibitively expensive.

Nevertheless, in a number of teams we regularly check Corner Cases. Moreover, in some cases we have moved checks for errors that recur from task to task, each time forcing us (developers, testers, our users) to suffer and postpone the release, to earlier stages of testing. For example, before submitting the task to Code Review, the developer must verify that the mobile application reliably returns from background mode or after the screen is locked. For this purpose, we have established a special developer’s checklist, and until a task passes this checklist, it simply can’t be advanced to the subsequent status in the bug tracker. By the way, this checklist also includes rotating the mobile device to switch between landscape and portrait mode. Thus, five to ten minutes of the developer’s time spent testing his/her own creation saves us hours and days that could otherwise be wasted during the subsequent stages of the process.

Automation

In most of our product teams, we check the quality in several stages. In individual cases, there may be five stages.

If we were not striving to constantly optimise the processes to maintain a satisfactory value of β, to continuously increase the speed of finding bugs S, we would spend a lot of time checking our applications. However, the Mobile Web team, for example, releases items daily (and sometimes more frequently, if business requires it).

We achieve this pace by parallelising many stages of verification and by moving problem detection to the earliest possible stage.

For example, if some things can be checked automatically and it doesn’t take much time, why not move such verifications out to the hooks of the version control system and block bad code from entering the shared repository at the earliest stage? In these hooks, we check the code with a linter (https://en.wikipedia.org/wiki/Lint_(software)), along with the general conventions for formatting and storing code, organising tickets in the bug tracker, and so on. Below is a rough sketch of what such a hook might look like.
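A minimal sketch of a pre-commit hook in this spirit (the linter command and the branch naming convention are assumptions for illustration, not our actual hooks): git runs the executable at .git/hooks/pre-commit and aborts the commit if it exits with a non-zero status.

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/pre-commit hook: reject the commit if the linter fails
# or if the branch name does not reference a ticket. The specific linter (flake8)
# and the branch convention below are illustrative assumptions.
import re
import subprocess
import sys

def staged_python_files():
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def current_branch():
    return subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def main() -> int:
    # 1. The branch must mention a bug tracker ticket, e.g. MW-1234_new_feature.
    if not re.match(r"^[A-Z]+-\d+", current_branch()):
        print("Branch name must start with a ticket ID (e.g. MW-1234). Commit rejected.")
        return 1

    # 2. Staged Python files must pass the linter.
    files = staged_python_files()
    if files and subprocess.run(["flake8", *files]).returncode != 0:
        print("Linter found problems. Commit rejected.")
        return 1

    return 0

if __name__ == "__main__":
    sys.exit(main())
```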

By the way, in many cases, process automation is the key to parallelisation and acceleration. We run autotests on the task branch as soon as the developer passes it on to the next stage. AIDA (https://techblog.badoo.com/blog/2013/10/16/aida-badoos-journey-into-continuous-integration/) runs the tests and writes a report on the results into the task. Thus, when starting to check the task, the tester gets a first impression of the completed work right from the bug tracker ticket. Some teams have gone further and asked that AIDA reopen the task if the tests don’t pass or if the test coverage of the code has dropped.
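As a rough illustration of what such an integration step might do (this is not AIDA’s actual code; the tracker endpoint, fields and token are hypothetical), a CI job can run the suite for the task branch and post a summary comment to the ticket:

```python
# Hypothetical CI step: run the test suite for the task branch and report the
# result into the bug tracker ticket. The tracker URL, auth token and JSON
# fields are invented for illustration only.
import os
import subprocess
import requests

TRACKER_URL = "https://bugtracker.example.com/api/issues/{ticket}/comments"

def run_suite() -> tuple[bool, str]:
    result = subprocess.run(
        ["pytest", "--maxfail=50", "-q"], capture_output=True, text=True
    )
    return result.returncode == 0, result.stdout[-2000:]  # keep the tail of the report

def report(ticket: str, passed: bool, summary: str) -> None:
    requests.post(
        TRACKER_URL.format(ticket=ticket),
        headers={"Authorization": f"Bearer {os.environ['TRACKER_TOKEN']}"},
        json={"body": f"Autotests {'PASSED' if passed else 'FAILED'}:\n{summary}"},
        timeout=30,
    )

if __name__ == "__main__":
    ticket_id = os.environ["TICKET_ID"]   # e.g. extracted from the branch name
    ok, text = run_suite()
    report(ticket_id, ok, text)
    raise SystemExit(0 if ok else 1)      # a failing suite fails the CI step
```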

We can’t, however, regard automation as the only possible development in the evolution of our company processes. Automation is an extremely important thing, which has a staggering effect on the speed S, but excluding manual testing from the process would be a bad idea. Remember we looked at the situation of testing on predefined scenarios? Automation allows you to exclude the human factor from the tests and ensures against oversights, but the factor of “narrowing the angle of view” is still not going anywhere.

But this same advantage easily turns into a disadvantage if we don’t give the highest priority to the bugs found by autotests. And such bugs can be quite legitimate: for example, we may decide to live with them for a while and fix them in one of the later releases. But the tests are uncompromising — they will fail every time you run them. Thus, you need to either fix the bugs or suppress the failing tests, thereby increasing the likelihood of forgetting about such bugs in the future. Another option is to reconcile oneself to failing tests, whose number will grow over time until no one trusts the tests anymore — they just fail anyway.

In addition, integration and system-level automated tests are very expensive to write and maintain. These are high-level tests that check, “under the hood”, the entire chain of interaction: the application, its backend, the services that support fast processing and storage of data for the backend, and so on. In such an interconnected system, unstable test failures for various reasons are very likely, and, most importantly, it’s very difficult to find the root of the problems. To understand what doesn’t work, you need to spend a lot of time studying the whole chain of interactions.

The situation is further aggravated by the fact that the very nature of high-level tests makes them slow and resource-consuming. This pushes the architecture of such tests towards the situation of “we check as much as possible in one session”. For example, to check something on behalf of a particular kind of user, you need to log in each time under an account with the appropriate privileges. As a result, many people do this: log in once (checking the authorisation mechanism along the way) and then immediately move on to the various verifications as an authorised user. I hardly need to tell you that if something is wrong with the authorisation page, all further checks go to the dogs. If the manual tester Vasya can simply be told to “for now, ignore the fact that the ‘Enter’ button is labelled ‘Exit’”, the autotest would actually need to be corrected. Either that, or the product code would have to be fixed as quickly as possible. This, of course, is a pretty forced example — many build fast-login mechanisms precisely so as not to go through the authorisation page every time, but I resorted to it to make things as clear as possible. A sketch of such a fast-login approach is shown below.
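Here is roughly what a fast-login shortcut can look like in a test suite (a sketch under assumptions: the API endpoint, the session cookie name, the credentials and the browser driver interface are all hypothetical). The authorisation UI gets one dedicated test of its own, while every other test obtains a session directly via an API call instead of driving the login form.

```python
# Sketch: keep one explicit UI test for the login page, and let all other
# high-level tests obtain a session via a direct API call ("fast login"),
# so a broken login form doesn't take every other check down with it.
# Endpoint, cookie name, credentials and driver interface are assumptions.
import pytest
import requests

API_LOGIN_URL = "https://example.test/api/v1/login"   # hypothetical endpoint

@pytest.fixture
def browser():
    """Placeholder: a real suite would yield a Selenium/Playwright driver here."""
    pytest.skip("illustrative sketch only - no real browser driver wired up")

@pytest.fixture
def admin_session(browser):
    """Create an authorised session without touching the login UI."""
    resp = requests.post(
        API_LOGIN_URL,
        json={"login": "admin@example.test", "password": "secret"},
        timeout=10,
    )
    resp.raise_for_status()
    # Inject the session cookie straight into the browser under test.
    browser.add_cookie({"name": "session_id", "value": resp.json()["session_id"]})
    return browser

def test_login_form_works(browser):
    # The only test that exercises the authorisation page itself.
    browser.open("/login")
    browser.fill("login", "admin@example.test")
    browser.fill("password", "secret")
    browser.click("Enter")
    assert browser.current_path == "/dashboard"

def test_admin_sees_user_report(admin_session):
    # Starts already authorised: independent of the login form's state.
    admin_session.open("/admin/reports/users")
    assert "Registered users" in admin_session.page_text()
```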

There are a lot of tricks and approaches to writing and organising proper automated (and not only automated) testing: the Automation Pyramid, Page Object, data-driven testing, model-based testing, etc. By listing the problems like this, I just wish to draw your attention to the fact that automated testing is not the obvious and simple process that people mistakenly think it is. And, in reality, it is not a cheaper and more convenient alternative to manual testing. Manual and automated testing should always go hand in hand — only then can we talk about an accurate and comprehensive evaluation of the product quality.
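As a small illustration of one of these, the Page Object pattern (again a sketch with hypothetical selectors and a placeholder driver, not our production framework): the test talks to a page through an object that hides the selectors, so UI changes are absorbed in one place rather than across every test.

```python
# Page Object sketch: the test expresses intent ("log in as..."), while the
# page objects own the selectors and low-level driver calls. The selectors
# and the driver interface are hypothetical.
import pytest

@pytest.fixture
def driver():
    """Placeholder: a real suite would yield a Selenium/Playwright driver here."""
    pytest.skip("illustrative Page Object sketch - no real driver wired up")

class LoginPage:
    LOGIN_FIELD = "#login"
    PASSWORD_FIELD = "#password"
    SUBMIT_BUTTON = "button.green"

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("/login")
        return self

    def log_in(self, login, password):
        self.driver.type(self.LOGIN_FIELD, login)
        self.driver.type(self.PASSWORD_FIELD, password)
        self.driver.click(self.SUBMIT_BUTTON)
        return DashboardPage(self.driver)

class DashboardPage:
    GREETING = ".greeting"

    def __init__(self, driver):
        self.driver = driver

    def greeting_text(self):
        return self.driver.text_of(self.GREETING)

def test_user_sees_greeting(driver):
    dashboard = LoginPage(driver).open().log_in("user@example.test", "secret")
    assert dashboard.greeting_text().startswith("Hello")
```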

Conclusion

I hope that after reading this post, you have a somewhat clearer understanding of what software testing is, and how testers conduct testing on various tasks. There is a lot of online information on testing, but it’s often presented in a disjointed and isolated way, without offering a strategic perspective on why it’s needed at all, and why the developer is required to comply with certain rules. I hope that my perspective on it may help you evaluate the testing process “from a bird’s eye view” and allow you to approach advice and recommendations from various online sources more judiciously.

Also, bear in mind that any changes, innovations and improvements should primarily work towards improving the parameter β — finding the maximum number of bugs in the minimum interval of time so as to produce the resulting quality assessment. And it helps when the speed of testing S works towards the same purpose. However, this approach is also not devoid of potential pitfalls, the misuse of automation being one obvious example.

Moreover, if we take this train of thought beyond the testing process, it becomes clear that a universal parameter like the quality-assessment β exists for other stages of development, as well as for the project as a whole. It gauges the readiness of the project and must answer the question: “How can we create the maximum number of features in the shortest possible time?” It doesn’t matter what we call it; what matters is that such a parameter exists for any business (unless you pursue some strange goals other than earning money, that is).

And it’s for this purpose that the role of micro project manager is assigned to the developer, as the key participant in the whole process. S/he knows best what exactly, at what stage, and how it affects the speed of delivery of his/her feature to the user. And only s/he can determine how and why to use particular mechanisms to improve the basic rate of delivery. S/he must adhere to the promised delivery deadline and constantly strive to reduce the delivery timeframe, even if this involves making mistakes along the way. This is a long but necessary process.

Thank you for your time!
Ilya Ageev, Head of QA.
