👨🏼‍💻Concurrency Management of Different Background Jobs Depending on Data to be Processed with Laravel

Emincan Özcan · Published in Huawei Developers · Dec 1, 2022



Introduction

Background jobs are a common requirement in many web projects, and Laravel gives developers a great experience in this regard. If your business rules have no special requirements, turning them into background queue jobs takes just a couple of minutes.

However, our business rules sometimes do have special requirements, and meeting them takes a bit more detailed work. A scenario I encountered recently required that three different job classes never run simultaneously on the same model. In this article, I will share the steps I followed to implement this scenario.

The Logic Behind the Solution

When implementing such a business rule, we need to keep a record of the jobs that are currently running. We can use Redis or a similar tool for this.

For example, we can use a key like “process-product:$productId”. When any job starts to run, it should first check whether this key exists in the tool and take the related actions as follows:

  • If it exists: the job should release itself back onto the queue with the same parameters.
  • If it does not exist: the job must first create the key, then do its work, and finally delete the key.

This basic logic does not depend on the programming language or the tool used. Still, some details should not be neglected: the lock mechanism must be reliable, and the key should be invalidated automatically after a certain period so that subsequent jobs are not blocked forever if a job fails while holding the lock.
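As an illustration of this logic only (not the final approach used below), a hand-rolled check inside a job's handle method might look roughly like the following sketch. It uses Laravel's atomic cache locks, which require a driver such as Redis; the product relation, key format, and timings are assumptions, not values from the original scenario.

```php
use Illuminate\Support\Facades\Cache;

public function handle(): void
{
    // Acquire an atomic lock for this product. The 300-second TTL invalidates
    // the lock automatically if the job crashes while holding it.
    $lock = Cache::lock("process-product:{$this->product->id}", 300);

    if (! $lock->get()) {
        // Another job is already working on this product; retry later.
        $this->release(10);

        return;
    }

    try {
        // ... perform the actual work on the product ...
    } finally {
        // Free the lock so the next job for this product can run.
        $lock->release();
    }
}
```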

“A Little Dirty” Implementation

The Redis facade provided by Laravel has a static method called throttle. Behind the scenes, this method creates an instance of the DurationLimiter class and performs the necessary operations on Redis using Lua scripts. Since it also covers details such as expiry times that you may need elsewhere, I recommend taking a look at the implementation here, especially if you are also interested in Redis. The code is available here.
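For illustration, a throttle call inside a job's handle method might look roughly like this; the key name, limits, and release delay are assumptions:

```php
use Illuminate\Support\Facades\Redis;

public function handle(): void
{
    Redis::throttle("process-product:{$this->product->id}")
        ->block(0)    // do not wait for the lock
        ->allow(1)    // allow a single job
        ->every(300)  // within a 300-second window
        ->then(function () {
            // ... perform the actual work on the product ...
        }, function () {
            // Could not obtain the lock; send the job back to the queue.
            $this->release(10);
        });
}
```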

However, using the throttle method inside the job class's handle method makes the code a little dirty and harder to test. So, let's look for a cleaner way.

Cleaner Implementation

Laravel has a concept called “job middleware”. Job middleware works just like the middleware we use for HTTP requests: it runs before the job’s actual logic and gives us a chance to take various actions. For more detailed information about job middleware, you can refer to this address.

Using job middleware allows us to put this logic in a separate middleware class and reuse it across our job classes. This makes the code both cleaner and easier to test.

Also, a middleware that prevents job overlapping already ships with Laravel. Documentation on it is available here.

By default, this middleware builds a cache key from the job class name plus a key value and performs its checks on that key. The problem is that, because the job class name is part of the key, different job classes will still run concurrently. That does not solve our case, since we want to manage the concurrency of three different job classes operating on the same model. Depending on the Laravel version, there are two ways to achieve our goal.

Laravel 9.32 and later versions:

We can use this commit from September 11, 2022.

All we have to do is call the shared method while using the WithoutOverlapping middleware:
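A job’s middleware method could then look roughly like this; the product key and the timings are assumptions:

```php
use Illuminate\Queue\Middleware\WithoutOverlapping;

public function middleware(): array
{
    return [
        (new WithoutOverlapping($this->product->id))
            ->shared()          // drop the job class name from the lock key
            ->releaseAfter(60)  // re-queue overlapping jobs after 60 seconds
            ->expireAfter(180), // auto-expire the lock if the job crashes
    ];
}
```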

Before Laravel 9.32:

Previous versions also have the WithoutOverlapping middleware, but the shared method is only available in Laravel 9.32 and later, so a different solution is required. Creating a new middleware class that extends WithoutOverlapping and overriding the getLockKey method as follows gets us where we want to be.

New middleware class:
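A sketch of such a class; the class name, namespace, and key prefix are our own choices, the only important part is that the job class name is left out of the lock key:

```php
<?php

namespace App\Jobs\Middleware;

use Illuminate\Queue\Middleware\WithoutOverlapping;

class WithoutOverlappingSharedAcrossJobs extends WithoutOverlapping
{
    /**
     * Build the lock key from the given key only, leaving out the job
     * class name that the parent implementation adds, so that different
     * job classes operating on the same model share one lock.
     */
    public function getLockKey($job)
    {
        return 'shared-job-overlap:'.$this->key;
    }
}
```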

Middleware method to add to job class:
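The job class could then reference it like this (using the hypothetical class name and the same timings as above):

```php
use App\Jobs\Middleware\WithoutOverlappingSharedAcrossJobs;

public function middleware(): array
{
    return [
        (new WithoutOverlappingSharedAcrossJobs($this->product->id))
            ->releaseAfter(60)
            ->expireAfter(180),
    ];
}
```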

Conclusion

In this article, I showed how the concurrency of multiple queue jobs can be managed based on the data they operate on.
