Laravel and Murphy’s Law
“Whatever can go wrong, will go wrong,” states the well-known Murphy’s Law. Many people can relate to this popular adage, but in software especially it is a law to take into account: hope for the best, but prepare for the worst.
Murphy’s Law and Software Design
When designing software, it’s always good to think about how to recover from a situation that goes wrong. Especially when dealing with external APIs, the fault may not even be caused by your code, but, as we learned from Murphy’s Law, we should prepare for the worst. An external API could time out, return a slowdown response (e.g. a 503 Slow Down when a rate limit is exceeded), introduce a breaking change, and so on. A lot of things can go wrong.
Let’s have a look at how we could tackle these issues in Laravel.
Encapsulating in Queued Jobs
When dealing with external API calls, it might be wise to separate those calls into their own processes. Laravel has a great way of encapsulating them: queued jobs. Jobs have a couple of benefits:
- The process is asynchronous (it happens in the background).
- It can automatically attempt the job multiple times.
- The job can be manually retried.
- It gives insight into why a specific job failed.
The basics of using Queued Jobs in Laravel
A job in Laravel is nothing more than a simple PHP class with a handle() method. We indicate that it should be queued by adding the ShouldQueue interface. Whatever happens within the handle() method will be executed when the job is popped off the queue.
class CallExternalApi implements ShouldQueue
{
    public function handle(ExternalApi $externalApi)
    {
        $externalApi->call();
    }
}
We can dispatch the job onto the queue by using the dispatch helper:
dispatch(new CallExternalApi);
The queue worker can be run using the queue:work artisan command:
php artisan queue:work
The command line will display that the job has been processed:
[2019-06-26 14:03:26][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:26][12] Processed: App\Jobs\CallExternalApi
When things go wrong
As Murphy’s Law states: things will go wrong. When the job fails, we see that by default Laravel keeps retrying it without any delay.
[2019-06-26 14:03:26][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:27][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:28][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:29][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:30][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:31][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:32][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:33][12] Processing: App\Jobs\CallExternalApi
...
Limiting retries
One solution to this problem is to limit the number of retries; the quickest approach is to specify it on the artisan command:
php artisan queue:work --tries=3
Now, after the queue has tried the job 3 times, it will mark the job as failed.
[2019-06-26 14:03:26][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:27][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:28][12] Processing: App\Jobs\CallExternalApi
[2019-06-26 14:03:29][12] Failed: App\Jobs\CallExternalApi
Notifying the user of the failure
Laravel provides a hook into failing jobs via the failed() method on the job class. You could use that hook to notify the user of the failure, send yourself a text message that things are going wrong, and so on. The hook only runs after the last retry.
class CallExternalApi implements ShouldQueue
{
    public $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }

    public function handle(ExternalApi $externalApi)
    {
        $externalApi->call();
    }

    public function failed()
    {
        $this->user->notify(new ExternalApiCallFailedNotification);
    }
}
Retrying jobs
Laravel can save all failed jobs into the database. First, you have to create the failed_jobs table by running:
php artisan queue:failed-table
php artisan migrate
The failed_jobs table contains information about the connection, the queue, the payload and the exception that was thrown.
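As a sketch, the migration generated by queue:failed-table creates a table along these lines (exact column definitions may vary slightly between Laravel versions):

```php
Schema::create('failed_jobs', function (Blueprint $table) {
    $table->bigIncrements('id');
    $table->text('connection');   // queue connection the job ran on
    $table->text('queue');        // queue the job was pushed to
    $table->longText('payload');  // the serialized job
    $table->longText('exception'); // exception message and stack trace
    $table->timestamp('failed_at')->useCurrent();
});
```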
We can view all failed jobs by running queue:failed:
php artisan queue:failed
This command shows a table with the ID, connection, queue, class name and failure timestamp of each failed job. The database table holds more information, such as the payload and the thrown exception. This can also be displayed using Laravel Horizon.
By keeping this information, Laravel allows us to retry the job at a later time. This might be handy when there was a software bug in the job code. After deploying the bugfix, we can retry the job by running the queue:retry command.
php artisan queue:retry 1
The job (with id 1) will now be pushed back onto the queue and ran by the queue worker.
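When many jobs have failed for the same reason, retrying them one by one is tedious; Laravel also ships with commands to retry or clean up failed jobs in bulk:

```shell
# retry every failed job at once
php artisan queue:retry all

# delete a single failed job by id, or clear the whole table
php artisan queue:forget 1
php artisan queue:flush
```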
Automating the retry process
It might not always be necessary to retry a job manually. An external API can give a timeout or a slowdown response (503); in those situations we want to retry automatically, multiple times, but with a delay between the attempts.
Delaying attempts
Laravel allows us to define a global delay for all jobs handled by the same worker by specifying the --delay option:
php artisan queue:work --tries=3 --delay=3
Now the queue worker will wait 3 seconds before retrying a failed job.
Situation specific retries and delays
By using the command line options, we specify the tries and the delay for all jobs. In some situations, this might not be enough. Some jobs may be allowed more tries or need a longer delay. Laravel allows us to specify those settings on the job class:
class CallExternalApi implements ShouldQueue
{
    use InteractsWithQueue;

    /**
     * The number of times the job may be attempted.
     *
     * @var int
     */
    public $tries = 5;

    /**
     * The number of seconds to wait before retrying the job.
     *
     * @var int
     */
    public $retryAfter = 5;
}
If these settings are specified on the job, they take precedence over the values provided on the command line.
Exponential backoff strategy
In situations like a slowdown response (503) from the external API, it might be necessary to increase the delay after each attempt. Laravel allows this by specifying a retryAfter() method on the job class. Via the InteractsWithQueue trait, we can retrieve the number of attempts.
class CallExternalApi implements ShouldQueue
{
    use InteractsWithQueue;

    /**
     * The number of times the job may be attempted.
     *
     * @var int
     */
    public $tries = 10;

    /**
     * @return Carbon
     */
    public function retryAfter()
    {
        return now()->addSeconds(
            $this->attempts() * 2
        );
    }
}
The above example increases the delay linearly. If we want to implement an exponential approach, we can use the exponential backoff formula: delay = (2^attempts - 1) / 2 seconds.
In Laravel we would implement it as follows:
public function retryAfter()
{
    return now()->addSeconds(
        (int) round(((2 ** $this->attempts()) - 1) / 2)
    );
}
Now the retry delay will grow exponentially until the maximum number of attempts is reached.
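To illustrate the growth, here is a small standalone sketch (plain PHP, outside of any job class) that evaluates the formula for each attempt:

```php
foreach (range(1, 10) as $attempt) {
    // same formula as in retryAfter(): (2^attempt - 1) / 2, rounded
    $delay = (int) round(((2 ** $attempt) - 1) / 2);
    echo "attempt {$attempt}: {$delay}s\n";
}
// attempt 1: 1s, attempt 2: 2s, attempt 3: 4s, attempt 4: 8s, ...
// attempt 10: 512s
```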
Note: if you want a lot of retries with exponential backoff, the delays quickly grow to hours. In those cases you might want to look into using a logarithmic function instead.
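As one possible sketch (our own variation, not a Laravel API), you could also simply cap the exponential delay at a maximum instead of switching formulas:

```php
public function retryAfter()
{
    // cap the exponential delay at 15 minutes (900s is an assumed limit)
    return now()->addSeconds(min(
        900,
        (int) round(((2 ** $this->attempts()) - 1) / 2)
    ));
}
```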
Conclusion
When designing software, don’t think only about the happy path. Write down, preferably as (unit) tests, all the things that could go wrong. Then design your solution to recover from those situations, whether automatically or not. There isn’t a single solution to rule them all; some processes need specific failure handling while others are fine with the default approach.
Do you need help implementing retry strategies in your project, or do you need help building a Laravel application? At Maatwebsite, we are there to help you on a commercial basis. Contact us via info@maatwebsite.nl or by phone at +31 (0)10 744 9312 to discuss the possibilities.