eMAG Warsaw Hub’s team at Testing Cup 2016
On May 25th, a team of brave testers from eMAG Warsaw Hub participated in an event called “Testing Cup 2016”. It was a unique opportunity to evaluate our testing skills in real action, competing against other specialists from across Poland, and to promote eMAG as a company that focuses on the quality of the software it delivers.
What is Testing Cup?
The craft of QA in Poland is growing really fast. There are many dedicated software test specialists, so it is inevitable that the number of events for all those people will also increase. The first edition of the Testing Cup took place four years ago in Warsaw. The event has two parts: a competition and a conference. People can share their experience and take their chance at becoming “the best” in the Polish championship of software testing.
The contest is based on an application prepared especially for the event. The goal is to collect points by reporting defects and by writing a report describing the test run and the application’s overall quality.
For three years in a row, contestants had no idea what they would be working with until the start of the event. This year the rules were different: the first version of the application was published one month ahead, with a note that the final version would include new functions and would require regression testing, retests, and functional testing of the new features.
Rules of the contest
There is a really long regulations document with the “dos and don’ts” of the contest, so I will only mention the most important ones.
There are two kinds of participants: teams of up to 3 people and individual contestants. Both work on the same application, but their results are evaluated separately. Everyone sits in the same space, but talking to members of other teams is prohibited; only people within the same team may communicate. It is always possible to talk to a member of the jury after submitting a request. The rules forbid using any tools to decompile the application or its database, and any communication with the outside world.
The whole contest takes 3 hours and is divided into two rounds: two hours of reporting defects and around one hour of preparing a test report. This year’s defects and reports were submitted through a desktop application connected to the Testing Cup’s server.
The jury evaluates each defect report against the following criteria: is the description of the bug clear and understandable, is the priority set properly, are the steps to reproduce included, is it actually a defect, and is the defect within the testing scope defined by the documentation.
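The jury’s criteria can be sketched as a simple checklist. This is only an illustration; the field names, priority values, and in-scope areas below are hypothetical, not the actual Testing Cup submission format:

```python
from dataclasses import dataclass

# Hypothetical defect-report structure; the fields are illustrative,
# chosen to mirror the jury's evaluation criteria.
@dataclass
class DefectReport:
    title: str
    description: str            # clear, understandable description
    priority: str               # e.g. "low", "medium", "high"
    steps_to_reproduce: list    # the "how to reproduce" section
    area: str                   # application area the defect belongs to

# Example scope only; the real contest defines its own areas.
IN_SCOPE_AREAS = {"report-export", "statistics"}

def jury_checklist(report: DefectReport) -> list:
    """Return the criteria the report fails: description present,
    priority set properly, reproduction steps included, and the
    defect within the documented testing scope."""
    failures = []
    if not report.description.strip():
        failures.append("missing description")
    if report.priority not in {"low", "medium", "high"}:
        failures.append("priority not set properly")
    if not report.steps_to_reproduce:
        failures.append("no steps to reproduce")
    if report.area not in IN_SCOPE_AREAS:
        failures.append("outside testing scope")
    return failures
```

A report that passes every check returns an empty list; each failed criterion costs points, and out-of-scope reports can even score negative points, which is why reading the documentation twice before reporting pays off.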
The test report is supposed to be very thorough, but without defect descriptions. It should contain: an overall quality rating of the application based on the executed tests, which functionalities were tested, how they were tested, and how much time is needed to finish the tests. The report should also include a “go live” release recommendation, after only two hours of testing by a single participant or a three-person team.
Every year more and more people come to the Testing Cup, so the competition is really tough. Many of our opponents came prepared with report templates and knowledge of the first version of the application. Some of them had participated in previous editions; some were winners from previous years. Big international companies like Motorola, Volvo, Nokia and Canon sent their Polish teams of testers to the event. We were competing with great specialists who came there to win.
In eMAG Warsaw Hub there are three QA engineers, so it was the perfect number to form a team.
The official name of the application tested at the Testing Cup is “Mr. Buggy”. It is always designed especially for the contest, with bugs planted all over it. In the first edition it was a simple desktop application whose main purpose was to… report bugs. Testing was based on a functional specification. The idea behind this was probably the desire to give testers something they are all familiar with. Working in QA gives an opportunity to test many types of applications, but we all have one thing in common: you have to report defects through some tool.
The next year’s idea was very similar, but this time it was a web application and testing was based on user stories.
The third year was something completely different: a web application divided into 24 views, each containing QA tasks with defects to find. Tasks were worth different numbers of points depending on their difficulty level.
This year Mr. Buggy was based on the real tool “AutoMagicTest” (link: http://automagictest.com/), whose main feature is checking the structure of a website and generating a report with the results of the analysis. The first version was presented a month earlier, and the second version, with new functions, was delivered on the day of the contest. This time there was a hidden part: “Mr. Buggy The Game”, which could be unlocked by performing certain actions within the application. The game also contained defects which could be reported for additional points.
Our overall strategy
Before the event we met a few times to discuss and test the previous Mr. Buggies. Each testing session lasted 3 hours straight, which taught us how much we can do as a team in such a short time. We became aware of what skills we have, so we were able to prepare a strategy based on that experience. We also agreed on a few rules to follow:
- Don’t panic at any moment
- Split tasks into three groups, so one person can cover one-third of the job
- Focus on the tasks that are assigned to you
- Read the documentation, rules and instructions TWICE before you report anything
- Treat every task as a normal job at work — you know how to test so just do what needs to be done
- Jarek, as our captain, will report all the defects and send the report.
The start of the contest
When the event started we had to find our designated table. We sat down with our work laptops and waited for the instructions to be handed out on pen drives. After reading all the documents, we learned the most important facts:
- There are two new functions to test: generating reports in new formats and statistics for reports
- There is a list of fixes ready for retesting
- There is a list of known bugs, which will not be fixed
- Defects must reference the specified application areas; bugs reported for other areas, or reports classified as non-bug issues, earn negative points
- We can report suggestions, but only within the scope of the new functions; each accepted suggestion earns one point
- All defects have to be reproduced by the captain on his laptop, and only he can report bugs and suggestions or send the report.
After we had some time to get familiar with all the instructions, the organizers gave us a password to unlock the application and the testing began.
We split the tasks to avoid duplicating work:
- Jarek — making all retests according to instructions, reporting all the defects
- Gabriel — testing new function: exporting data to file
- Marysia — testing new function: statistics
We decided that after finishing our assigned tasks we would do exploratory testing of the remaining areas.
Before coming to the Testing Cup I thought there would be too much noise during testing to concentrate on work. I was really surprised that 400 people can work in silence. Everybody was focused on their tasks, so no noise-cancelling headphones were needed.
During the first round we found 7 defects in the new functions. We also performed all the retests, but some of the fixes could not be reproduced on the application version we were given. We stayed focused and worked on our tasks the whole time. We did our best, but without hurrying, taking great care not to report anything outside the testing scope. I am pretty sure all of us felt that the worst thing would be to report an invalid defect and get negative points for it.
At the end of the first round, defect reporting was closed; anything sent after that time was not evaluated at all.
Unfortunately we didn’t find the hidden feature, Mr. Buggy the Game.
After a short break, during which we had time to get something to drink, the second round began and we wrote the test report. Before the event we had established a structure for such a document, so it was easier to focus on the application’s level of quality. We prepared the report together and our captain sent it.
The report contained the scope of testing, what we tested and what we did not test, the types of tests we performed, the tools we used, how many defects we found, an overall evaluation of the new functions, the retest results, and our recommendation for releasing the application. Writing the report took us about half an hour.
After the contest was over we felt pretty good. We were proud of our work, even though we knew the application contained more than 20 bugs and we had reported only 7. We had a strong feeling that those defects were valid and would be evaluated positively. Of course we discussed what we had done wrong or well, and what we could do better next time. We learned that we should improve our communication so we could spend more time on testing instead of paperwork.
We could have found more defects if Jarek had had more time for exploratory testing.
We also didn’t report any suggestions for improvement, because we weren’t looking for them.
The whole next day was a testing conference with many great speakers. The results of the contest were announced at the end of the day. Just before catching the train back home, we found out that although we didn’t win, we took 15th place! It was our first time there, we had little time to prepare, and the competition was at a very high level, so we were very happy and proud of ourselves for achieving such a position.
I wrote to the organizers asking for their feedback on the committee’s evaluation of our work. In response I got the following:
„Very good reporting of the defects. Practically all of the highest valued defects were found. No negative points. No suggestions that could cut points. Highly pointed report for the tests with small shortages: for example no testing metrics like time spent on the tests.”
We immediately started wondering whether, having placed so high, we could have won with just one more reported bug, since the point differences between the top contestants were very small. Anyway, we came home extremely happy!
Later we found out that our work scored 33 points and the winner had 43 points. As time passed, the organizers released more and more statistics.
Many of the bugs reported by participants were classified as non-defect issues, because they were outside the testing area or were not bugs at all.
We found 7 valid bugs, while the average was 10.53 per team; however, many participants also reported invalid bugs. We did not get any negative points. We scored better than many people who had attended previous editions. Our approach of “stay calm, just do your job” really paid off.
Every Testing Cup is different; the event evolves, and next year it will probably turn into something completely new. Hopefully we will be able to come back!
We are truly grateful for that experience; we learned a lot and boosted our confidence in our skills. We also met great people with huge QA knowledge and were able to share our experiences in software testing. To be honest, I still feel like I won something with this 15th place in the Polish Championship of Software Testing.