Queue traffic shadowing in service of refactoring

Andrzej Deryło
Published in nexocode · 6 min read · Jan 25, 2021



One of our customers recently came up with updated requirements for one of the major flows in a system we were developing. The requirements involved letting tenants switch parts of it on and off and parameterize others. The remaining requirements were:

  • limit bugs as much as possible, with the very optimistic (yet unrealistic) assumption of “no bugs at all” with default settings
  • the second version of the flow must give results as similar to the first version as possible
  • limit system downtime to a minimum
  • keep the development pace of another team intact

The technical requirements entailed a lot of changes in various areas of the code, such as:

  • retrieving data for the process
  • initial selection and preparation of the data
  • actual processing of the data

Considering all the above, this seemingly simple task of adding a couple of “ifs” here and there grew into a pretty big refactoring task, complete with analysis of the refactoring results. All that with no downtime, without slowing down other teams, and with all the other constraints mentioned above. A tough case indeed.

Traffic shadowing #

Traffic shadowing is a technique which allows you to test new features (or even entire applications) using production traffic before actually releasing them to production. It is achieved by copying part (or all) of the traffic from the usual production path to the application or feature under test.

On the face of it, the idea may sound simple, but implementing it takes a lot of careful planning. Here are the most important things to consider:

  • Getting traffic to test clusters without impacting critical path
  • Annotating traffic as shadowed traffic
  • Comparing live service traffic with the test cluster’s traffic after shadowing
  • Stubbing out collaborating services for certain test profiles
  • Synthetic transactions
  • Virtualizing the test-cluster’s database
  • Materializing the test-cluster’s database
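One of the simpler items on that list, annotating shadowed traffic, can be sketched in a few lines of Python. This is a minimal illustration, not the article’s actual implementation; the `shadowed` flag name is an assumption:

```python
import copy

def shadow_copy(message):
    """Duplicate a production message for the shadow path, annotated so
    downstream components can tell it apart from live traffic."""
    duplicate = copy.deepcopy(message)  # never mutate the live message
    duplicate["shadowed"] = True
    return duplicate
```

The deep copy matters: the live message continues down the critical path untouched, while every shadow consumer can check the flag before, say, writing to a production table.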

The system we developed consisted of a few microservices. The purpose of the refactoring was to change the logic of one microservice triggered by queue messages. All messages are produced by other containers and are aggregated in a single queue. After that, messages are distributed across priority queues. From the priority queues, messages are directed to the actual working queue according to their priority, and the final result of a single message is a row in the SQL database.
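That topology can be sketched in Python. The priority names and in-memory deques below are illustrative stand-ins for the real message broker:

```python
from collections import deque

# Priority queues fed from the single aggregation queue.
PRIORITY_QUEUES = {p: deque() for p in ("high", "normal", "low")}

def distribute(message):
    """Route an aggregated message to a priority queue by its priority field."""
    PRIORITY_QUEUES[message.get("priority", "normal")].append(message)

def next_for_worker():
    """Feed the working queue: drain the priority queues, highest first."""
    for priority in ("high", "normal", "low"):
        if PRIORITY_QUEUES[priority]:
            return PRIORITY_QUEUES[priority].popleft()
    return None  # nothing left to process
```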

To redirect traffic to the alternative flow, we created a separate queue, added consumers to it, and added a switch, expressed as a percentage, that let us control how many messages were redirected to the alternative flow. This allowed us to turn the alternative flow on or off and to manage the load on it.
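The percentage switch can be sketched as follows. The queue names and the random-sampling approach are illustrative assumptions, not the article’s actual routing code:

```python
import random

def should_shadow(shadow_percentage):
    """Per-message decision: shadow_percentage is the runtime switch (0-100).
    At 0 nothing is shadowed; at 100 every message is."""
    return random.random() * 100 < shadow_percentage

def route(message, shadow_percentage):
    """Always route to the production queue; additionally route a copy to
    the shadow queue for the configured fraction of messages."""
    targets = ["work-queue"]
    if should_shadow(shadow_percentage):
        targets.append("work-queue-shadow")
    return targets
```

Because the critical path always gets the message first, a crash or slowdown in the shadow consumers cannot hold up production processing.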

To store the results of the alternative flow we decided to add a new table with the same layout as the production one, in the same database. This allowed us to easily compare the results of production processing with the altered flow without setting up and mocking up data in a separate database — saving us a lot of time and headaches.
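With the two tables sharing one layout in one database, comparing flows reduces to a join. A minimal sketch with SQLite; the table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE results_v1 (message_id INTEGER PRIMARY KEY, value TEXT);
    CREATE TABLE results_v2 (message_id INTEGER PRIMARY KEY, value TEXT);
    INSERT INTO results_v1 VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO results_v2 VALUES (1, 'a'), (2, 'B'), (3, 'c');
""")

def find_discrepancies(conn):
    """Rows processed by both flows where the V1 and V2 outputs differ."""
    return conn.execute("""
        SELECT v1.message_id, v1.value, v2.value
        FROM results_v1 v1
        JOIN results_v2 v2 USING (message_id)
        WHERE v1.value <> v2.value
    """).fetchall()
```

A single query like this is exactly the kind of comparison that a separate, mocked-up database would have made far more laborious.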

At the code level we added a separate project to contain the altered logic, necessary abstractions and tests. We kept input parameters the same between V1 and V2, as there was no point in changing them. Adding a separate project and keeping the input the same let us easily switch from the old flow to the new one: all it took was deleting the project containing the old logic (and references to it) and pointing the altered consumer to the production queue.

At that point, we had three major aspects covered. What about deployment?

The system was deployed on three different environments:

  • test — where we had regression & integration tests of the major flows on real resources and anonymized data
  • dev — where we deployed current work for the customer to check
  • prod — where actual production traffic happened.

All of these environments were isolated from each other, and there was no communication between them. Real data was kept only on production. Dev was an anonymized backup of production data, used purely as an environment where the customer could go and check whether the developed features worked as expected. The test environment was completely automated and operated on a one-day snapshot of anonymized production data to check whether the major flow worked the same as it used to.

In our case there was no separate container requiring traffic redirection, just an alternative flow inside the application, so the problems of deploying a test cluster and virtualizing its database did not apply: everything was deployed without any modification.

As we had addressed the major problems behind traffic shadowing, we created tasks, implemented them and checked for the most common bugs and mistakes — which took us about a week. At the end of the sprint we deployed the refactored flow to production, turned on redirection to the alternative flow, and resumed our usual work.

After a couple of days we went back to investigate the results: the data had been redirected and processed as expected. On further investigation, however, we found discrepancies between the V1 and V2 results. We tracked down and fixed the bugs and deployed the fixes to production. After removing the results of the previous processing, we turned redirection on again.

We were repeating these steps until the customer was satisfied with the outcome of the refactored flow.

All of the above actions were taken in parallel with normal development of the system and in the production environment. We encountered literally zero downtime, and were able to compare and test the whole feature without disrupting the actual production processing.

When the customer gave us the green light to replace the V1 process with V2, it took us two business days to remove the V1-related code and start storing V2 results in the usual production table. During these two days we also cleaned up the code and removed all the temporary code and database structures that were required only for traffic shadowing itself. We also adjusted our regression and integration tests to conform to the results of the new process, which were slightly different yet accepted by the customer.

Conclusion #

Refactoring with traffic shadowing yielded great results for us. It allowed us to fully meet our customer’s requirements related to downtime. Beyond that, we were able to regularly review the outcome with the customer and, based on their feedback, improve the code to their complete satisfaction.

From the developer’s perspective, we were able to quickly implement requested changes, safely deploy them to the production environment and observe the outcome. With the ability to control the load on the alternative flow, we could investigate the performance of the new solution. At the same time, we were not slowing down the other teams’ development processes — even if we made a mistake causing bad results, we could quietly discuss what went wrong and, having resolved the problem, deploy appropriate fixes independently.

In our opinion, traffic shadowing is a very useful and powerful technique which can give your customers and the developers working with you a new level of confidence. It can be challenging to implement, but done properly, it is worth the struggle.


Originally published at https://nexocode.com on January 25, 2021.
