Kennedy Gitahi

PHP is Single-threaded, So How Does Laravel Handle Asynchronous Queue Jobs?

As I dig deeper into Laravel, one thing I have always wanted to understand better is how it handles asynchronous queue jobs. As we know, PHP is a single-threaded language. This means that when a request arrives at a server, PHP processes it sequentially from start to finish before moving to the next one.

This synchronous execution model works well for many scenarios, but it does not always serve the user well. For example, it can fall short when a server needs to perform resource-intensive tasks without making the user wait.

Laravel has a neat answer: its queue system, an elegant mechanism that brings asynchronous processing capabilities to PHP applications.

But how does Laravel accomplish seemingly asynchronous operations in a single-threaded environment?

Understanding PHP's Single-Threaded Nature

Before diving into Laravel's queue implementation, it's important to understand what "single-threaded" really means in a PHP context.

When your application receives a request, PHP allocates a single execution thread to process it, as mentioned above. This thread can only execute one instruction at a time in a set sequence until the entire script completes.

Unlike languages with native multi-threading capabilities, PHP cannot "spawn" additional threads within the same process to handle multiple tasks simultaneously.

This limitation becomes particularly problematic and obvious when your application needs to:

  • Send emails (especially many emails at the same time)
  • Process large files (like processing a podcast episode)
  • Generate reports (e.g. invoices or Excel sheets)
  • Call external APIs with unpredictable response times
  • Perform resource-intensive calculations

Forcing users to wait while these operations complete would create a poor user experience.

This is precisely where Laravel's queue system shines.

How Laravel Achieves Asynchronous Processing

Queue Drivers: The Foundation

The foundation of Laravel's queue system is its queue drivers. These are storage mechanisms that hold your jobs until they're processed.

Laravel supports several of these drivers out of the box:

  • Database: Uses your application's database to store jobs (simple to set up but less performant at scale due to polling overhead)
  • Redis: High-performance, in-memory data store (excellent for high-volume queues)
  • Amazon SQS: Cloud-based queue service (great for distributed systems)
  • Beanstalkd: Simple, fast work queue

Each driver has different performance characteristics and setup requirements.

For most applications, however, Redis provides an excellent balance of performance and simplicity, which is why many experienced developers recommend it.
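
Switching drivers is mostly a configuration concern. As a minimal sketch, assuming a standard Laravel install with the default config/queue.php, you point the QUEUE_CONNECTION environment variable at the connection you want:

# .env
QUEUE_CONNECTION=redis

// config/queue.php (excerpt of the default Redis connection)
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,
    'block_for' => null,
],

The retry_after value tells the worker how long to wait before assuming a reserved job has stalled, so keep it longer than your slowest job.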

The Multi-Process Approach

What Laravel does brilliantly is circumvent PHP's single-threaded limitation by leveraging multiple processes instead of threads.

The fundamental concept works like this:

  1. Your application dispatches jobs to a queue (stored in a database, Redis, etc.)
  2. Separate PHP processes (queue workers) pick up and process these jobs
  3. Multiple workers can run simultaneously, creating true parallelism

This approach allows your main application to continue processing requests while background jobs run independently in their own PHP processes.
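
To make that concrete, here is a hedged sketch of a controller that hands the heavy work to the queue and responds immediately. The PodcastController and Podcast model are hypothetical; the ProcessPodcast job is the one defined later in this article:

<?php

namespace App\Http\Controllers;

use App\Jobs\ProcessPodcast;
use App\Models\Podcast; // hypothetical Eloquent model
use Illuminate\Http\Request;

class PodcastController extends Controller
{
    public function store(Request $request)
    {
        // Validation omitted for brevity.
        $podcast = Podcast::create($request->all());

        // Push the heavy work onto the queue; a worker process will pick it up.
        ProcessPodcast::dispatch($podcast);

        // The web process responds immediately instead of waiting for processing.
        return response()->json(['status' => 'processing'], 202);
    }
}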

Queue Workers: The Workhorses

At the heart of Laravel's queue system are queue workers. These are long-running PHP processes that continuously monitor your queues for new jobs.

Starting a queue worker is as simple as running:

php artisan queue:work

Once you run this command, it initiates a PHP process behind the scenes that:

  1. Connects to your configured queue driver (Redis, database, etc.).

  2. Polls for available jobs.

  3. Retrieves and processes them one by one.

  4. Continues this loop until it is manually stopped or hits a configured timeout.
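
Under the hood, this is just a loop. The following is not Laravel's actual implementation, only a simplified PHP sketch of what a worker conceptually does:

<?php

// Conceptual sketch only: Laravel's real worker lives in Illuminate\Queue\Worker
// and also handles signals, timeouts, memory limits, and retry bookkeeping.
// $queue here stands for the configured queue connection (Redis, database, ...).
while (true) {
    $job = $queue->pop('default'); // ask the driver for the next available job

    if ($job === null) {
        sleep(3); // nothing to do; wait before polling again
        continue;
    }

    try {
        $job->fire();      // run the job's handle() method
        $job->delete();    // remove it from the queue on success
    } catch (Throwable $e) {
        $job->release(10); // put it back on the queue to be retried later
    }
}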

Creating and Dispatching Queue Jobs

Laravel makes it remarkably easy to create and dispatch jobs.

Here's a simple example adapted from the one Laravel gives in its docs:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;
use Throwable;

class ProcessPodcast implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $podcast;

    public function __construct($podcast)
    {
        $this->podcast = $podcast;
    }

    /**
     * Execute the job.
     */
    public function handle()
    {
        // Process the podcast...
        // This code runs in a separate PHP process
    }

    /**
     * Handle a job failure.
     */
    public function failed(?Throwable $exception)
    {
        //send notification of failure by logging or any other preferred way
        Log::error("Podcast processing failed: ". $exception->getMessage());
    }
}

Dispatching this job to the queue is as simple as:

ProcessPodcast::dispatch($podcast);

This single call pushes the job onto the queue so that a worker can pick it up. How many times a failing job is retried, and how long Laravel waits between attempts, is configured separately: either on the job class itself (via the $tries property and the backoff method) or on the worker (via the --tries and --backoff options). Both approaches are covered later in this article.

By default, when a job throws an exception and has not yet exhausted its allowed attempts, it is released back onto the queue and retried almost immediately. Specifying a backoff spaces those retries out by a number of seconds, which prevents a repeatedly failing job from hammering your server or the external service it depends on.

With the single dispatch command above, Laravel serializes the job and its dependencies, stores it in your configured queue, and allows your application to continue processing the current request immediately.
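
You can also control when a dispatched job becomes visible to workers by using Laravel's standard delay method:

// Hold the job back so workers only see it after ten minutes.
ProcessPodcast::dispatch($podcast)->delay(now()->addMinutes(10));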

You can also send a job to a specific queue by using the onQueue method:

ProcessPodcast::dispatch($podcast)->onQueue('high');

In this instance, the job will be sent to the high queue, which you can configure in your queue configuration file. Note that a single job lives on exactly one queue: calling onQueue more than once simply overwrites the earlier value rather than pushing the job onto several queues.

Segmenting jobs across named queues is especially beneficial when you want to prioritize how work is processed, because the queue worker lets you specify which queues it should watch and in what order.

For example, pushing jobs to a high queue can allow you to run a worker that gives them higher processing priority:

php artisan queue:work --queue=high

Scaling Beyond Single Workers

Horizontal Scaling with Multiple Workers

One of the most powerful aspects of Laravel's queue system is how easily it scales horizontally: you simply add more workers. Running multiple queue workers simultaneously lets each one pull and process jobs independently, and each worker can be pointed at the queues it should service, in priority order:

php artisan queue:work --queue=high,low

Because each worker is an independent PHP process, adding workers is all it takes to process more jobs concurrently.

To run multiple queue:work processes, Laravel recommends starting each one in a separate terminal tab during local development, or letting your process manager run them for you in production.

If you are worried about resource usage in long-lived workers, you can also tell each worker when to shut itself down, for example after a given number of jobs or a given number of seconds, using the --max-jobs and --max-time options:

php artisan queue:work --max-jobs=1000 --max-time=3600

The number of workers running at the same time is not controlled by queue:work itself; that is handled by whatever starts the processes, such as Supervisor's numprocs setting shown in the next section.

Process Management with Supervisor

Production environments are different from development environments, and they require a different approach to managing queue workers. Since you do not always have terminal access in production and cannot watch your processes indefinitely, you need a way to keep your queue:work processes running.

Remember that a queue:work process can stop running for a variety of reasons, the most common being an exceeded worker timeout or an executed queue:restart command.

In these environments, Laravel recommends using Supervisor. This is a process control system that ensures your queue workers stay running. A typical Supervisor configuration for Laravel looks like this:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/your/project/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/path/to/your/project/worker.log
stopwaitsecs=3600
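
Once this file is saved (typically somewhere like /etc/supervisor/conf.d/laravel-worker.conf, though the exact path depends on your setup), you reload Supervisor and start the workers with standard supervisorctl commands:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start "laravel-worker:*"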

Failed Jobs and Handling Retries

Jobs fail in the real world for a variety of reasons. If one of your queued jobs is encountering an error, you likely do not want it to keep retrying indefinitely because this will either exhaust your resources or lead to massive server bills.

Laravel provides a failed job system that allows its queue system to handle this elegantly and in a structured way using built-in retry logic.

When a job fails, Laravel will automatically store the job and the exception that was thrown in the failed_jobs table. You can then use the job's failed() method, or listen for the queue's JobFailed event, to react to the failure.
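
Laravel also ships with Artisan commands for inspecting and replaying failed jobs from the terminal:

# List all failed jobs with their IDs
php artisan queue:failed

# Retry one failed job (use an ID from queue:failed) or retry them all
php artisan queue:retry <job-id>
php artisan queue:retry all

# Remove failed jobs you no longer care about
php artisan queue:forget <job-id>
php artisan queue:flush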

Laravel allows you to specify how many times or for how long a job may be attempted in several neat ways.

In The Terminal

The first is in the terminal. You can add the --tries switch to the following Artisan command:

php artisan queue:work --tries=3

This tells Laravel the maximum number of times a job may be attempted. If a job exceeds its maximum number of attempts, it will be considered a "failed" job (see below for more details).

Supplying --tries=0 to the queue:work command will instruct your worker to retry the job indefinitely, which you might or might not want depending on your use case.

In The Job Class

Another option is setting the number of max retries in the job class:

<?php

namespace App\Jobs;

//Other code removed for brevity

class ProcessPodcast implements ShouldQueue
{
    //Other code removed for brevity

    // Set max attempts in the job class
    public $tries = 3;
}

Laravel also allows a backoff method inside the job class:

<?php

namespace App\Jobs;

//Other code removed for brevity

class ProcessPodcast implements ShouldQueue
{
    //Other code removed for brevity

    /**
    * Calculate the number of seconds to wait before retrying the job.
    */
    public function backoff(): int
    {
        return 3;
    }
}

As mentioned earlier, the backoff method allows you to specify the number of seconds Laravel should wait before retrying a job that has encountered an exception.

For more complex retry logic, you can return an array of intervals:

<?php

namespace App\Jobs;

//Other code removed for brevity

class ProcessPodcast implements ShouldQueue
{
    //Other code removed for brevity

    /**
    * Calculate the number of seconds to wait before retrying the job.
    */
    public function backoff(): array
    {
        return [3, 30, 300]; // Retry after increasing intervals
    }
}

In this example, the retry delay is 3 seconds after the first failure, 30 seconds after the second, and 5 minutes (300 seconds) after the third and every subsequent failure.

Finally, you can also set the retry until time:

<?php

namespace App\Jobs;

//Other code removed for brevity

class ProcessPodcast implements ShouldQueue
{
    /**
     * Determine the time at which the job should no longer be attempted.
     */
    public function retryUntil()
    {
        return now()->addHours(12);
    }
}

With this option, the job will be retried as many times as necessary until the specified time, rather than for a fixed number of attempts. In this example, attempts stop once 12 hours have passed.

Job Batching

Laravel also offers a job batching feature (available since Laravel 8) that allows you to group multiple jobs into a single batch and then act on the batch as a whole. This is especially useful when you need to process a large number of related jobs and react once they have all finished.

<?php

use App\Jobs\ProcessPodcast;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

$batch = Bus::batch([
    new ProcessPodcast($podcast1),
    new ProcessPodcast($podcast2),
    new ProcessPodcast($podcast3),
])->before(function (Batch $batch) {
    // At this point, we have created the batch but no jobs have been added...
})->progress(function (Batch $batch) {
    // At this point, a single job has completed successfully...
})->then(function (Batch $batch) {
    // At this point, all jobs have completed successfully...
})->catch(function (Batch $batch, Throwable $e) {
    // First batch job failure detected...
})->finally(function (Batch $batch) {
    // At this point, the batch has finished executing...
})->dispatch();

return $batch->id;

The batch's ID, a unique identifier available through $batch->id, may be used to query for information about the batch once it has been dispatched.

You can then perform some action or run a callback when all the jobs in the batch have completed executing.
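
For example, you could expose a small progress endpoint (the route below is hypothetical) that looks the batch up again with Bus::findBatch:

use Illuminate\Support\Facades\Bus;
use Illuminate\Support\Facades\Route;

// Hypothetical endpoint the front end can poll for batch progress.
Route::get('/batches/{batchId}', function (string $batchId) {
    $batch = Bus::findBatch($batchId);

    abort_if($batch === null, 404); // unknown batch ID

    return [
        'total_jobs' => $batch->totalJobs,
        'pending_jobs' => $batch->pendingJobs,
        'failed_jobs' => $batch->failedJobs,
        'progress' => $batch->progress(), // completion percentage (0-100)
        'finished' => $batch->finished(),
    ];
});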

Job Chaining

Laravel also offers a job chaining feature that allows you to run a list of jobs in sequence.

Using the example provided by Laravel:

use App\Jobs\OptimizePodcast;
use App\Jobs\ProcessPodcast;
use App\Jobs\ReleasePodcast;
use Illuminate\Support\Facades\Bus;

Bus::chain([
    new ProcessPodcast,
    new OptimizePodcast,
    new ReleasePodcast,
])->dispatch();

Since this is a chain of jobs, subsequent jobs will not run if one job in the sequence fails. For example, OptimizePodcast will not run if ProcessPodcast fails.
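
If you need to react when the chain stops, Bus::chain also accepts a catch callback:

use Illuminate\Support\Facades\Bus;
use Throwable;

Bus::chain([
    new ProcessPodcast,
    new OptimizePodcast,
    new ReleasePodcast,
])->catch(function (Throwable $e) {
    // A job within the chain failed; the remaining jobs will not run.
})->dispatch();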

Advanced Queue Management with Laravel Horizon

For completeness, we should mention Laravel Horizon, which is a beautiful dashboard and configuration system for your queue workers. It is an excellent option for developers running Redis queues.

# Install Laravel Horizon
composer require laravel/horizon

# Publish the Horizon configuration
php artisan horizon:install

# Start Horizon
php artisan horizon

Horizon provides:

  • Real-time queue monitoring
  • Job failure tracking and retry management
  • Worker balancing across multiple queues
  • Process scaling based on queue load
  • Tags for organizing jobs
  • Metrics about queue performance
  • Recent job details
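
Instead of passing flags to queue:work, Horizon workers are defined in config/horizon.php. A minimal sketch of a production supervisor definition (the values here are illustrative, not prescriptive) looks like this:

// config/horizon.php (excerpt)
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['high', 'default', 'low'],
            'balance' => 'auto',  // let Horizon shift workers between queues
            'maxProcesses' => 10, // upper bound on worker processes
            'tries' => 3,
        ],
    ],
],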

Performance Considerations and Best Practices

Error Handling

Error handling is a critical aspect of queue management because jobs are executed in the background, and unhandled exceptions can lead to silent failures. For this reason, you must implement robust error handling within jobs using try-catch blocks and logging.

Implementing try-catch blocks within the handle() method allows you to gracefully manage potential errors.

To follow the best practices, you must:

  • Log detailed error messages, including the exception trace, to make debugging easier for you and other developers who come after you.
  • Use the job's failed() method to perform specific actions upon failure, such as sending notifications or rolling back changes.
  • Avoid generic catch blocks and always catch specific exceptions whenever possible.

An example:

<?php

namespace App\Jobs;

use Illuminate\Support\Facades\Log;
use Throwable;

//Other code removed for brevity

class ProcessPodcast implements ShouldQueue
{
    public function handle()
    {
        try {
            $this->performSensitiveOperation();
        } catch (\Exception $e) {
            Log::error('Job failed: ' . $e->getMessage(), ['exception' => $e]);

            // Rethrow the exception so Laravel knows the job has failed.
            throw $e;
        }
    }

    protected function performSensitiveOperation()
    {
        // Hypothetical failure condition, used only for illustration.
        if (! file_exists($this->podcast->file_path)) {
            throw new \Exception('Sensitive operation failed');
        }

        // Perform your job logic here...
    }

    public function failed(Throwable $exception)
    {
        Log::critical('Job failed critically: ' . $exception->getMessage(), ['exception' => $exception]);

        // Send a notification of failure to developers here.
    }
}

Job Size

Another great practice is keeping jobs focused and relatively small. Job size matters because large, monolithic jobs can tie up queue workers and server resources for extended periods, which can delay other jobs and potentially lead to timeouts.

To ensure the best performance, you should:

  • Break down complex tasks into smaller, more manageable jobs, then use batching or chaining to run them as you wish (see the sketch after this list).
  • Ensure each job performs a single, well-defined task.
  • Avoid performing resource-intensive operations directly within the handle() method; delegate them to separate services or classes and batch or chain them.
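
As a sketch of that first point (the Podcast model, the processed flag, and the chunk size are assumptions for illustration), you might replace one giant "process everything" job with many small jobs grouped into a batch:

use App\Jobs\ProcessPodcast;
use App\Models\Podcast; // hypothetical model
use Illuminate\Support\Facades\Bus;

// Walk the table in chunks and dispatch one small, focused job per podcast.
Podcast::where('processed', false)
    ->chunkById(100, function ($podcasts) {
        Bus::batch(
            $podcasts->map(fn ($podcast) => new ProcessPodcast($podcast))->all()
        )->dispatch();
    });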

Dependency Injection

This is a general OOP practice, but it applies to jobs too: let Laravel inject your dependencies rather than constructing them by hand. Dependency injection promotes code reusability, testability, and maintainability.

The best practices here include:

  • Pass the data the job needs (such as models or IDs) through the constructor, and type-hint services or repositories on the handle() method so Laravel's container resolves them when the job runs.
  • Avoid instantiating dependencies directly (with new) inside the job.

An example:

use App\Services\PodcastProcessingService;

//Other code removed for brevity

class ProcessPodcast implements ShouldQueue
{
    protected $podcast;

    // Only the data the job needs travels with it when it is serialized...
    public function __construct($podcast)
    {
        $this->podcast = $podcast;
    }

    // ...while the service is resolved from the container when the job runs.
    public function handle(PodcastProcessingService $podcastService)
    {
        $podcastService->process($this->podcast);
    }
}

Worker Memory Limits

Jobs that process large datasets or perform memory-intensive operations can exhaust worker memory, which can lead to unpredictable behavior, including crashes. To prevent this, you should configure appropriate memory limits for your workers.

You can do this by:

  • Adjusting the PHP memory_limit setting in your php.ini file or within your Laravel configuration to accommodate the memory requirements of your jobs.
  • Ensuring your Octane workers have enough memory if you are using Octane.
  • Monitoring worker memory usage and adjusting the limit as needed.
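
The queue worker itself also accepts a --memory option (in megabytes); once the limit is exceeded, the worker exits after finishing its current job so your process manager can restart it with a clean slate:

php artisan queue:work --memory=128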

Queue Prioritization

You should use queue names (e.g., high, default, low) to prioritize jobs. Doing this will help ensure critical tasks are processed before less important ones.

To follow this best practice:

  • Create multiple queues with different priorities.
  • Dispatch critical jobs to high-priority queues.
  • Use the --queue option with queue:work (or configure the queue list in Horizon) to specify which queues to process and in which order.

For example, dispatching to a high-priority queue:

ProcessPodcast::dispatch($podcast)->onQueue('high');

Connection Pooling

You should also ensure proper connection pooling for database queues. Database-backed queues can open a large number of connections, potentially exhausting what your database server allows.

To follow this best practice:

  • Configure database connection pooling to reuse existing connections.
  • Use a database driver that supports connection pooling.
  • Optimize database queries within jobs to minimize connection usage.
  • Ensure your database server is configured to handle the expected number of connections.
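
One practical step, sketched here using the standard layout of config/queue.php (the DB_QUEUE_CONNECTION variable is an assumption you would define yourself), is pointing the database queue driver at a dedicated connection so queue polling does not compete with your application's primary connection:

// config/queue.php (excerpt)
'database' => [
    'driver' => 'database',
    'connection' => env('DB_QUEUE_CONNECTION'), // dedicated connection; null falls back to the default
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 90,
],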

Conclusion

Wow! That was much more information than I intended, but I think it was worth it.

Understanding Laravel's queue system is crucial for building robust and scalable applications. This is because the system allows developers and their code to elegantly overcome PHP's single-threaded limitation by using multiple processes rather than threads.

This approach allows Laravel applications to process resource-intensive tasks asynchronously without blocking the main request cycle. The effect is significantly improved application responsiveness and user experience.

By leveraging queues efficiently, which includes choosing the right driver, using Supervisor for stability as Laravel suggests, and adopting Horizon for monitoring, you can build applications that handle heavy workloads gracefully, regardless of what those workloads are.

Remember that whether you're sending emails, processing uploads, or generating reports, Laravel queues give you the flexibility and power you need to get the job done efficiently.

Feel free to leave comments, corrections, or questions below. A share would also be highly appreciated!
