How Service Virtualisation is helping to modernise Kingfisher’s approach to testing and delivery

Colin Chapman
Kingfisher-Technology
8 min read · Nov 2, 2023
Service Virtualisation

Introduction

In this blog post, I will look at how Kingfisher has started to leverage the capabilities of Service Virtualisation and the impact it is having on testing and delivery.

Kingfisher has a sizeable engineering department with 1,000+ engineers working in various domains across multiple banners (other Kingfisher blogs can be found at https://medium.com/kingfisher-technology). We work using agile and waterfall methodologies and a broad range of technologies, and our ways of working can vary vastly from team to team.

We introduced Service Virtualisation to Kingfisher to reduce costs and to improve delivery velocity and quality.

Service Virtualisation is a technology that allows us to simulate the functional behaviour and the performance characteristics of dependent components in our testing stack. “Dependent components” is the key phrase. See the diagram below.

Diagram showing what a dependent component is in the context of testing

As soon as you introduce dependent components that are not within the control of a team, delivery velocity drops. The more dependent components there are, the more potential impediments there are to slow a team down. Creating simulations within a team’s control to replace these dependent components helps remove those impediments, reducing costs and improving velocity and quality.

The Service Virtualisation tool we use is a low-code tool, meaning you do not need to know how to write code to use it out of the box; however, it can be customised using JavaScript extensions. Virtual services are easily and rapidly created and can be developed by anyone in the engineering team. The GUI-based rules engine helps to quickly build intelligence into the service/simulation, and services can be run and tested locally or deployed to a remote server where they can be shared with other teams if required. There are three primary ways of creating simulations:

· Recording existing traffic and playing it back

· Importing request-response pairs

· Importing a service definition and hand-crafting the messages according to that definition

Data can be handled externally to the simulation, making manipulation simple and independent of the simulation’s functionality. The simulation configuration is stored in source control and can be deployed as part of pipelines on demand. The use of simulations leads to more stable test runs and allows earlier development against components that are not yet ready to be tested with. Simulations can also be created in advance of an API being available, so the team is no longer dependent on another team’s development timelines. A wide variety of protocols is supported, from SOAP and REST to MQ, SAP IDoc and FTP, so the tooling is very flexible out of the box.
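
To make that concrete, here is a minimal sketch of a hand-crafted request-response simulation. Our own tool is low code rather than code driven, so the open-source WireMock library is used here purely as a stand-in, and the endpoint, port and payload are invented for illustration:

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class StockSimulation {
    public static void main(String[] args) {
        // Start a virtual service on a local port that the component
        // under test can be pointed at instead of the real dependency.
        WireMockServer simulator = new WireMockServer(8089);
        simulator.start();

        // Hand-crafted request/response pair: a stock lookup returns a
        // canned JSON body that is entirely under the team's control.
        simulator.stubFor(get(urlEqualTo("/stores/123/stock/SKU-001"))
            .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"sku\":\"SKU-001\",\"quantity\":42}")));
    }
}
```

WireMock can also record real traffic for playback and load stub mappings from JSON files held in source control, which mirrors the record/playback and pipeline-deployment patterns described above.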

Test Stability

Test efforts are often hampered by having to work with dependent components outside a team’s control. Consider the diagram below, which illustrates how these dependent components can affect testing:

Diagram showing dependent components in different problem contexts

· Working with an unstable service, where test data and configuration items are constantly changing, impacts the stability of any tests being executed. Test failures not related to defects still need to be investigated by the team, and additional time is spent fixing whatever caused the test to fail, often while waiting on a 3rd party, before testing cycles can continue. This adds unnecessary cost and negatively impacts velocity.

· A 3rd-party API where you pay per transaction. The more testing you do, the more expensive it gets, creating unpredictable costs that grow with the amount of testing.

· A test system that is unavailable to development environments, so the team has to wait until code is deployed into an integrated environment further down the path to production. An issue found there is more expensive to fix and forces engineers to context switch between new and old features.

· Rate-limited APIs, where your tests start failing because you have used up your limit for the day and must stop testing until tomorrow.

· Waiting to verify your application changes because the system you need to test against does not yet have the required functionality.

These are all common scenarios that affect testing, which in turn impacts the velocity and cost of delivering new features. Service virtualisation can help remove those impediments.

Dependent components replaced with simulations

Having simulations within the team’s control means test data and functionality are a known quantity. This leads to more stable test runs and a higher likelihood that a test failure is due to an actual defect. All of this reduces waste, cuts costs, and improves velocity.

Software Quality

Rather than just automating the happy paths that flow through our component under test, simulations help us to automate the difficult negative paths. It can be difficult or impossible to force error cases in dependent components, yet it is important to test that our component handles these error states gracefully. Using simulations, you can force error states to be returned under set conditions to ensure they are trapped and processed correctly in your component under test.
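
Continuing the hedged WireMock sketch from earlier (endpoint and payload invented for illustration), forcing an error state is just another stub, matched on a condition the test controls:

```java
// Return a failure only for a specific, test-controlled SKU so the
// component under test's error handling can be exercised on demand.
simulator.stubFor(get(urlEqualTo("/stores/123/stock/SKU-FORCE-ERROR"))
    .willReturn(aResponse()
        .withStatus(503)
        .withHeader("Content-Type", "application/json")
        .withBody("{\"error\":\"stock service unavailable\"}")));
```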

With service virtualisation, you can also simulate the performance of a dependent system. By using simulations to alter those performance characteristics, you can start to ask questions such as: how does my component handle requests under load when the dependency responds within its expected performance profile? How does it handle the same load when responses start arriving far more slowly than expected? Being able to test your component thoroughly in abnormal circumstances and check that it responds appropriately can greatly improve the stability of your production estate. 88% of online customers are less likely to return to a site after a bad experience.
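
As another hedged sketch in the same style, degrading a dependency’s performance profile is a matter of adding a delay to the stubbed response, so you can observe how the component under test copes with slow downstream calls:

```java
// Simulate a dependency that has started responding slowly (3 seconds
// per call) so timeouts, retries and thread-pool behaviour can be tested.
simulator.stubFor(get(urlPathMatching("/stores/.*/stock/.*"))
    .willReturn(aResponse()
        .withStatus(200)
        .withHeader("Content-Type", "application/json")
        .withBody("{\"sku\":\"SKU-001\",\"quantity\":42}")
        .withFixedDelay(3000)));
```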

Shift Left — Shift Right

From a testing perspective, a large proportion of teams follow a traditional path-to-production methodology. They do some development work and create some tests. Once they are happy these quality gates have passed, the code moves on to other environments, such as a performance testing environment and a system integration testing environment, where further testing and quality gates apply. After this, the code may be deployed into production.

This is typical for a lot of organisations, but there are limitations.

· Teams end up queuing to get into these environments, slowing velocity. If team A has a problem that slows them down, the impact compounds, delaying every team queued behind them.

· These environments tend to be full stack, meaning they are expensive to run and maintain.

· Automated testing is brittle. Stale or out-of-date data, data changed by other teams’ test executions, components in the wrong configuration or currently unavailable, and updates to 3rd-party systems, to name just a few, all affect the stability of tests. When a test fails, time must be taken to find out why, and that time is wasted whenever the cause is one of the above rather than a defect.

· When actual defects are found, they cost more to fix, as they are found further down the pipeline and need to be sent back to engineering teams who have already moved on to the next feature.

· Investigating problems is often more difficult due to the lack of expensive monitoring solutions in non-production. Access is often restricted so that engineers cannot make ad-hoc changes that might affect other teams’ test runs, so they are often reliant on other people to access logs and data.

All of the above impacts the cost and velocity of delivering new features in one way or another, and that is before considering the infrastructure and running costs to the organisation of these full-stack integrated environments. This is why we are looking at how we can transition away from this model.

Using the functional and performance capabilities of service virtualisation enables us not only to end our reliance on these full-stack integrated environments but also to push both integration testing and performance testing further left in our pipelines. This means issues are found earlier and cost less to fix. Assuming the same quality gates are in place, there is no reason why we cannot release code straight into production. When we have the right production monitoring in place for engineering teams to use, and are using techniques like canary releasing, where we release code to a small subset of users and monitor the traffic before rolling it out further, we massively improve our time to market for new features.
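
As a sketch of what this shift left can look like in practice, an integration-style test can spin up the simulation inside the pipeline job itself rather than waiting for a shared environment. WireMock again stands in for our unnamed tool, and StockClient is a hypothetical component under test:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;
import static org.junit.jupiter.api.Assertions.assertEquals;

class StockClientIntegrationTest {

    static WireMockServer simulator;

    @BeforeAll
    static void startSimulator() {
        // The virtual service runs inside the CI job on a random free port,
        // so no shared full-stack environment is needed.
        simulator = new WireMockServer(options().dynamicPort());
        simulator.start();
        simulator.stubFor(get(urlEqualTo("/stores/123/stock/SKU-001"))
            .willReturn(okJson("{\"sku\":\"SKU-001\",\"quantity\":42}")));
    }

    @AfterAll
    static void stopSimulator() {
        simulator.stop();
    }

    @Test
    void returnsStockLevelFromTheSimulatedDependency() {
        // StockClient is hypothetical: the component under test, pointed
        // at the simulator instead of the real stock service.
        StockClient client = new StockClient("http://localhost:" + simulator.port());
        assertEquals(42, client.quantityFor("123", "SKU-001"));
    }
}
```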

Bounded Contexts and Product Teams

Kingfisher is transforming its ways of working, transitioning from project-focused delivery to product-focused delivery and empowering the product teams. Teams will be more vertically focused and will work within a bounded context for their domain or subdomain. At the edges of these bounded contexts, to help enable frictionless delivery, we will see more and more use of simulations in the future. Removing dependencies on other teams’ components through simulation is key to allowing product teams to work independently, without impediment, at speed. The more control a team has over its own destiny, the faster it will be able to deliver new features.

Conclusion

We still have a long way to go with service virtualisation within Kingfisher. We started with a centralised team learning the tooling’s capabilities, establishing best practices, and creating patterns and playbooks while building simulations for the teams. We are now starting to federate this out to the teams so they can become self-sufficient in creating their own simulations. Service virtualisation is starting to have a big impact on costs, quality, and speed of delivery in several areas of Kingfisher, and so far we have really only scratched the surface.

If you are interested in joining us on our journey, please check out our careers page.

Thanks for reading!
