We love testing across the entirety of ASOS Tech. Yet we use such a variety of technologies that a single ‘Testing @ ASOS’ blog post wouldn’t make sense: what works in one area may not work in another. As a QA Lead in Customer Experience, it is my responsibility to promote the testing mindset. This helps ensure that the software we release is of a high quality.
What do we mean by ‘quality’?
This is an interesting question. It is one that I often ask when interviewing, and I’ve heard many definitions over the years. One definition that I particularly like is:
‘Quality is value to somebody whose opinion matters’
Because we all have different definitions of quality, we work with our key stakeholders to ensure they have as much information about the product as they need. They can then make use of that information and make a decision based on the level of quality they know would make our customers happy.
Whose opinion matters?
All our stakeholders have opinions, but some carry more weight depending on their role and the topic. We have product owners, platform leads and business analysts, whose inputs and contributions are invaluable. We also have our customers — they can be end-users or consumers of services, and their opinions matter first and foremost.
What do our testers do?
We perform our testing activities so that we can provide information. This information helps people who can make decisions about the quality aspect of a product. We are not the gatekeepers, nor are we the people who make decisions on the quality of a product, or ‘sign off’ a release.
We perform a vast number of testing activities in Customer Experience. Every activity that a tester performs aims to provide information about the product.
One initiative that we have recently kicked off in Customer Experience involves building relationships with our Customer Care team. We have held many visits to our Customer Care offices to meet the team and see how they work. Customers who experience a problem contact them first to raise the issue. This information gets fed down to the teams over time, but, by having strong relationships with Customer Care, we gain access to it much more quickly. It also helps us understand how we can improve our customer experience, which we can then use to guide our testing.
We track social feeds (only Twitter for now). We have dashboards that monitor tweets for keywords and use AI to gauge the sentiment of each tweet, giving us insight into what our customers are thinking.
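To make the idea concrete, here is a minimal sketch of keyword-driven sentiment flagging. It is purely illustrative: the keyword lists, scoring, and function names are hypothetical, and our real dashboards use AI-based sentiment analysis rather than simple word matching.

```python
# Illustrative sketch only: a toy keyword-based sentiment check for tweets.
# The keyword sets and scoring below are hypothetical, not our real tooling.

NEGATIVE = {"broken", "slow", "error", "refund", "crash"}
POSITIVE = {"love", "great", "fast", "easy"}

def sentiment_score(tweet: str) -> int:
    """Return a crude score: positive keyword hits minus negative ones."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def flag_for_review(tweet: str) -> bool:
    """Flag net-negative tweets so the team can take a closer look."""
    return sentiment_score(tweet) < 0

print(flag_for_review("Checkout page is broken and slow"))  # True
```

A real pipeline would replace the keyword sets with a trained sentiment model, but the shape is the same: score each tweet, then surface the ones that suggest an unhappy customer.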
We also have tools like Application Insights that provide information on how the application is performing, in both our test and production environments. This all requires testing, too!
We do exploratory testing as well, using personas and charters to guide our sessions. Exploratory testing to us isn’t just about clicking around hoping that something might break. It involves understanding areas of risk and using our knowledge of the system to test it and uncover new information.
Exploratory testing doesn’t just happen on the product itself — we can explore designs and requirements. Any activity that is performed to uncover information is a form of exploratory testing.
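A session guided by a persona and a charter can be captured in a very simple structure. The sketch below is a hypothetical example of how such a charter might be recorded; the field names and the sample session are illustrative, not a real ASOS template.

```python
# Hypothetical record of an exploratory testing session; field names
# and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Charter:
    persona: str                 # who we are testing as
    mission: str                 # what we explore, and why
    areas_of_risk: list = field(default_factory=list)
    findings: list = field(default_factory=list)  # information uncovered

session = Charter(
    persona="First-time shopper on a slow mobile connection",
    mission="Explore checkout to discover payment-failure handling issues",
    areas_of_risk=["timeouts", "double-charging", "unclear error messages"],
)
session.findings.append("No retry prompt shown after a payment timeout")
```

The point of writing the charter down is the findings list: every session should end with new information about the product, even if that information is ‘no issues found in this area’.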
We perform testing in production. There is still very much a stigma around this, as there may be fears that we could be impacting the customer experience in some way, especially when this is being performed in an uncontrolled manner. There’s also a fear about what happens if you find an issue, or if you are testing something and it doesn’t quite work as expected. We are trying our hardest across ASOS Tech to break this down.
We have many releases go into production every day in Customer Experience. This wouldn’t be possible without a robust pipeline. How has it become so robust? The answer is through testing and refining it to suit our needs. We use TeamCity and Octopus Deploy, which are powerful tools. But…with this great power comes great responsibility. So, we make sure that these pipelines undergo testing to give us confidence when releasing.
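One simple way a pipeline earns that confidence is a post-deployment smoke check that gates the release. The sketch below is an assumption-laden illustration: the `/health` endpoint, the JSON shape, and the function names are hypothetical, not ASOS’s real endpoints or pipeline steps.

```python
# Illustrative post-deployment smoke check. The /health URL and the
# {"status": "healthy"} response shape are hypothetical examples of a
# gate a deployment tool (e.g. Octopus Deploy) could run after a release.
import json
from urllib.error import URLError
from urllib.request import urlopen

def is_healthy(status_code: int, body: dict) -> bool:
    """Decide whether a service is healthy from its status code and JSON body."""
    return status_code == 200 and body.get("status") == "healthy"

def check_health(base_url: str) -> bool:
    """Call a (hypothetical) /health endpoint and apply the check above."""
    try:
        with urlopen(f"{base_url}/health", timeout=5) as resp:
            return is_healthy(resp.status, json.load(resp))
    except (URLError, ValueError):
        return False
```

Failing this check would stop the deployment from being promoted, which is exactly the kind of pipeline behaviour that itself deserves testing.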
We work and pair with developers; we do mob programming and testing as a team. This helps remove any boundaries that may arise between developers and testers. Some teams use ATDD to help with this collaboration on stories while others don’t; there is no right or wrong way to deliver software. We understand that teams are groups of individuals, each with different strengths and weaknesses, and what works well for one team in a given context might not work for another.
Pairing is great, but we do encourage our testers to maintain a critical distance from the product. We’ve all been too involved in a product and missed some obvious issues. We’ve had testers move to different teams and discover issues that the previous testers had missed. They weren’t missed due to negligence or laziness; they were missed because the testers were too close to the product.
We question everything… Everything? Yes… everything! We don’t make assumptions about the software. We don’t accept it when developers say it will work, we want to prove it can work (or not). We spot missing requirements and let people know as early as possible. We ask questions of the requirements, of the software, of the designs. Learning how to ask the right questions at the right time is a powerful skill to have.
We do a lot of planning, but not necessarily in terms of writing individual test cases and executing them. Our testers do more than that. We help plan releases, advise on when it’s safe to release, and plan our testing. We also plan the test strategy for our platform. We engage with multiple people to ensure that everything is understood and that we are all working towards the same goal.
Testing is more than automation. It undoubtedly feeds into the strategy when deciding what to automate and how, but it is also a process of uncovering information about a product that might impact the perceived quality. It is about investigating potential product risks at a user level, not just code level.
Ultimately, our testers do a great deal. They do all of the above and more, and everything they do helps uncover information about the product, which we then use to guide our testing and checking approach and to reduce and highlight risk in the appropriate areas.
Look out for future posts about our Engineering Quality Values — a list of values that we all live and breathe to guide our approach and the work that we do in our agile teams!
My name is Gareth Waterhouse and I’ve been at ASOS for over eight years. It’s been amazing seeing the company grow and I love helping the people that I work with achieve their goals.
I’m a keen gamer, enjoy spending time with my family, and love running and sports. I’m a season ticket holder at Sunderland, but the less said about that the better.