
The Lock Component

Warning: You are browsing the documentation for Symfony 4.x, which is no longer maintained.

Read the updated version of this page for Symfony 7.2 (the current stable version).

The Lock Component creates and manages locks, a mechanism to provide exclusive access to a shared resource.

If you're using the Symfony Framework, read the Symfony Framework Lock documentation.

Installation

$ composer require symfony/lock

Note

If you install this component outside of a Symfony application, you must require the vendor/autoload.php file in your code to enable the class autoloading mechanism provided by Composer. Read this article for more details.

Usage

Locks are used to guarantee exclusive access to some shared resource. In Symfony applications, for example, you can use locks to ensure that a command is not executed more than once at the same time (on the same or different servers).

Locks are created using a LockFactory class, which in turn requires another class to manage the storage of locks:

use Symfony\Component\Lock\LockFactory;
use Symfony\Component\Lock\Store\SemaphoreStore;

$store = new SemaphoreStore();
$factory = new LockFactory($store);

4.4

The Symfony\Component\Lock\LockFactory class was introduced in Symfony 4.4. In previous versions it was called Symfony\Component\Lock\Factory.

The lock is created by calling the createLock() method. Its first argument is an arbitrary string that represents the locked resource. Then, a call to the acquire() method will try to acquire the lock:

// ...
$lock = $factory->createLock('pdf-invoice-generation');

if ($lock->acquire()) {
    // The resource "pdf-invoice-generation" is locked.
    // You can compute and generate the invoice safely here.

    $lock->release();
}

If the lock cannot be acquired, the method returns false. The acquire() method can be safely called repeatedly, even if the lock is already acquired.

Note

Unlike other implementations, the Lock Component distinguishes lock instances even when they are created for the same resource. This means that, for a given scope and resource, a lock instance can be acquired multiple times. If a lock has to be used by several services, they should share the same Lock instance returned by the LockFactory::createLock() method.

Tip

If you don't release the lock explicitly, it will be released automatically upon instance destruction. In some cases, it can be useful to lock a resource across several requests. To disable the automatic release behavior, set the third argument of the createLock() method to false.
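As a minimal sketch of that tip (reusing the $factory from above; the resource name is just illustrative), a lock that lives for 300 seconds and is not released when the object is destroyed could look like this:

```php
// ...
// second argument: TTL in seconds; third argument: autoRelease
$lock = $factory->createLock('pdf-invoice-generation', 300.0, false);

if ($lock->acquire()) {
    // the lock now survives the end of this request; it must be
    // released explicitly later, or it expires when the TTL elapses
}
```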

Serializing Locks

The Key contains the state of the Lock and can be serialized. This allows the user to begin a long job in a process by acquiring the lock, and continue the job in another process using the same lock:

use Symfony\Component\Lock\Key;
use Symfony\Component\Lock\Lock;

$key = new Key('article.'.$article->getId());
$lock = new Lock($key, $this->store, 300, false);
$lock->acquire(true);

$this->bus->dispatch(new RefreshTaxonomy($article, $key));

Note

Don't forget to set the autoRelease argument to false in the Lock constructor, to avoid releasing the lock when the destructor is called.

Not all stores are compatible with serialization and cross-process locking: for example, the kernel will automatically release semaphores acquired by the SemaphoreStore store. If you use an incompatible store, an exception will be thrown when the application tries to serialize the key.
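On the receiving side, a hedged sketch (the handler function name is hypothetical, and $store must be the same serialization-compatible store the producer used) of how the transported Key can be turned back into a Lock and released when the job finishes:

```php
use Symfony\Component\Lock\Key;
use Symfony\Component\Lock\Lock;
use Symfony\Component\Lock\PersistingStoreInterface;

// $key travels inside the message (e.g. the RefreshTaxonomy message above)
function handleRefreshTaxonomy(Key $key, PersistingStoreInterface $store): void
{
    // re-create the Lock from the shared Key; autoRelease disabled again
    $lock = new Lock($key, $store, 300, false);

    // ... finish the long job ...

    $lock->release();
}
```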

Blocking Locks

By default, when a lock cannot be acquired, the acquire method returns false immediately. To wait (indefinitely) until the lock can be created, pass true as the argument of the acquire() method. This is called a blocking lock because the execution of your application stops until the lock is acquired.

Some of the built-in Store classes support this feature. When they don't, they can be decorated with the RetryTillSaveStore class:

use Symfony\Component\Lock\LockFactory;
use Symfony\Component\Lock\Store\RedisStore;
use Symfony\Component\Lock\Store\RetryTillSaveStore;

$store = new RedisStore(new \Predis\Client('tcp://localhost:6379'));
$store = new RetryTillSaveStore($store);
$factory = new LockFactory($store);

$lock = $factory->createLock('notification-flush');
$lock->acquire(true);

Expiring Locks

Locks created remotely are difficult to manage because there is no way for the remote Store to know if the locker process is still alive. Due to bugs, fatal errors or segmentation faults, it cannot be guaranteed that the release() method will be called, which would cause the resource to be locked infinitely.

The best solution in those cases is to create expiring locks, which are released automatically after some amount of time has passed (called TTL for Time To Live). This time, in seconds, is configured as the second argument of the createLock() method. If needed, these locks can also be released early with the release() method.

The trickiest part when working with expiring locks is choosing the right TTL. If it's too short, other processes could acquire the lock before finishing the job; if it's too long and the process crashes before calling the release() method, the resource will stay locked until the timeout:

// ...
// create an expiring lock that lasts 30 seconds (default is 300.0)
$lock = $factory->createLock('charts-generation', 30);

if (!$lock->acquire()) {
    return;
}
try {
    // perform a job that takes less than 30 seconds
} finally {
    $lock->release();
}

Tip

To avoid leaving the lock in a locked state, it's recommended to wrap the job in a try/catch/finally block to always try to release the expiring lock.

For long-running tasks, it's better to start with a relatively short TTL and then use the refresh() method to reset the TTL to its original value:

// ...
$lock = $factory->createLock('charts-generation', 30);

if (!$lock->acquire()) {
    return;
}
try {
    while (!$finished) {
        // perform a small part of the job.

        // renew the lock for 30 more seconds.
        $lock->refresh();
    }
} finally {
    $lock->release();
}

Tip

Another useful technique for long-running tasks is to pass a custom TTL as an argument of the refresh() method to change the default lock TTL:

$lock = $factory->createLock('charts-generation', 30);
// ...
// refresh the lock for 30 seconds
$lock->refresh();
// ...
// refresh the lock for 600 seconds (next refresh() call will be 30 seconds again)
$lock->refresh(600);

This component also provides two useful methods related to expiring locks: getRemainingLifetime() (which returns null or a float as seconds) and isExpired() (which returns a boolean).
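As a minimal sketch of those two methods (continuing the charts-generation example from above):

```php
// ...
$lock = $factory->createLock('charts-generation', 30);
$lock->acquire();

// getRemainingLifetime() returns the time to live in seconds (or null)
if (!$lock->isExpired() && $lock->getRemainingLifetime() > 10) {
    // more than 10 seconds left: safe to perform a short task
}
```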

Automatically Releasing The Lock

Locks are automatically released when their Lock objects are destroyed. This is an implementation detail that becomes important when sharing Locks between processes. In the example below, pcntl_fork() creates two processes and the Lock will be released automatically as soon as one process finishes:

// ...
$lock = $factory->createLock('report-generation', 3600);
if (!$lock->acquire()) {
    return;
}

$pid = pcntl_fork();
if (-1 === $pid) {
    // Could not fork
    exit(1);
} elseif ($pid) {
    // Parent process
    sleep(30);
} else {
    // Child process
    echo 'The lock will be released now.';
    exit(0);
}
// ...

To disable this behavior, set the third argument of LockFactory::createLock() to false. The lock will then stay acquired for 3600 seconds, or until Lock::release() is called.

The Owner of The Lock

Locks that are acquired for the first time are owned by the Lock instance that acquired them. If you need to check whether the current Lock instance is (still) the owner of a lock, you can use the isAcquired() method:

if ($lock->isAcquired()) {
    // We (still) own the lock
}

Because some lock stores have expiring locks (as explained above), it is possible for an instance to automatically lose the lock it acquired:

// If we cannot acquire ourselves, it means some other process is already working on it
if (!$lock->acquire()) {
    return;
}

$this->beginTransaction();

// Perform a very long process that might exceed TTL of the lock

if ($lock->isAcquired()) {
    // Still all good, no other instance has acquired the lock in the meantime, we're safe
    $this->commit();
} else {
    // Bummer! Our lock has apparently exceeded TTL and another process has started in
    // the meantime so it's not safe for us to commit.
    $this->rollback();
    throw new \Exception('Process failed');
}

Caution

A common pitfall is to use the isAcquired() method to check whether a lock has already been acquired by any process. As this example shows, you have to use acquire() for that. The isAcquired() method only checks whether the lock has been acquired by the current process!

Available Stores

Locks are created and managed in Stores, which are classes that implement PersistingStoreInterface and, optionally, BlockingStoreInterface.

The component includes the following built-in store types:

Store           Scope    Blocking   Expiring
--------------  -------  ---------  ---------
FlockStore      local    yes        no
MemcachedStore  remote   no         yes
PdoStore        remote   no         yes
RedisStore      remote   no         yes
SemaphoreStore  local    yes        no
ZookeeperStore  remote   no         no

4.4

The PersistingStoreInterface and BlockingStoreInterface interfaces were introduced in Symfony 4.4. In previous versions there was only one interface called Symfony\Component\Lock\StoreInterface.

FlockStore

The FlockStore uses the file system on the local computer to create the locks. It does not support expiration, but the lock is automatically released when the lock object goes out of scope and is freed by the garbage collector (for example when the PHP process ends):

use Symfony\Component\Lock\Store\FlockStore;

// the argument is the path of the directory where the locks are created
// if none is given, sys_get_temp_dir() is used internally.
$store = new FlockStore('/var/stores');

Caution

Beware that some file systems (such as some types of NFS) do not support locking. In those cases, it's better to use a directory on a local disk drive or a remote store based on PDO, Redis or Memcached.

MemcachedStore

The MemcachedStore saves locks on a Memcached server. It requires a Memcached connection implementing the \Memcached class. This store does not support blocking, and expects a TTL to avoid stalled locks:

use Symfony\Component\Lock\Store\MemcachedStore;

$memcached = new \Memcached();
$memcached->addServer('localhost', 11211);

$store = new MemcachedStore($memcached);

Note

Memcached does not support TTL lower than 1 second.

PdoStore

The PdoStore saves locks in an SQL database. It requires a PDO connection, a Doctrine DBAL Connection, or a Data Source Name (DSN). This store does not support blocking, and expects a TTL to avoid stalled locks:

use Symfony\Component\Lock\Store\PdoStore;

// a PDO, a Doctrine DBAL connection or DSN for lazy connecting through PDO
$databaseConnectionOrDSN = 'mysql:host=127.0.0.1;dbname=lock';
$store = new PdoStore($databaseConnectionOrDSN, ['db_username' => 'myuser', 'db_password' => 'mypassword']);

Note

This store does not support TTL lower than 1 second.

Before storing locks in the database, you must create the table that stores the information. The store provides a method called createTable() to set up this table for you according to the database engine used:

try {
    $store->createTable();
} catch (\PDOException $exception) {
    // the table could not be created for some reason
}

A great way to set up the table in production is to call the createTable() method on your local computer and then generate a database migration:

$ php bin/console doctrine:migrations:diff
$ php bin/console doctrine:migrations:migrate

RedisStore

The RedisStore saves locks on a Redis server. It requires a Redis connection implementing the \Redis, \RedisArray, \RedisCluster or \Predis classes. This store does not support blocking, and expects a TTL to avoid stalled locks:

use Symfony\Component\Lock\Store\RedisStore;

$redis = new \Redis();
$redis->connect('localhost');

$store = new RedisStore($redis);

SemaphoreStore

The SemaphoreStore uses the PHP semaphore functions to create the locks:

use Symfony\Component\Lock\Store\SemaphoreStore;

$store = new SemaphoreStore();

CombinedStore

The CombinedStore is designed for High Availability applications because it manages several stores in sync (for example, several Redis servers). When a lock is being acquired, it forwards the call to all the managed stores, and it collects their responses. If a simple majority of stores have acquired the lock, then the lock is considered as acquired; otherwise as not acquired:

use Symfony\Component\Lock\Store\CombinedStore;
use Symfony\Component\Lock\Store\RedisStore;
use Symfony\Component\Lock\Strategy\ConsensusStrategy;

$stores = [];
foreach (['server1', 'server2', 'server3'] as $server) {
    $redis = new \Redis();
    $redis->connect($server);

    $stores[] = new RedisStore($redis);
}

$store = new CombinedStore($stores, new ConsensusStrategy());

Instead of the simple majority strategy (ConsensusStrategy) an UnanimousStrategy can be used to require the lock to be acquired in all the stores.
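Switching strategies is a one-line change to the previous example (reusing the same $stores array built above):

```php
use Symfony\Component\Lock\Store\CombinedStore;
use Symfony\Component\Lock\Strategy\UnanimousStrategy;

// the lock is considered acquired only if EVERY managed store acquires it
$store = new CombinedStore($stores, new UnanimousStrategy());
```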

Caution

In order to get high availability when using the ConsensusStrategy, the minimum cluster size must be three servers. This allows the cluster to keep working when a single server fails (because this strategy requires that the lock is acquired in more than half of the servers).

ZookeeperStore

The ZookeeperStore saves locks on a ZooKeeper server. It requires a ZooKeeper connection implementing the \Zookeeper class. This store does not support blocking and expiration, but the lock is automatically released when the PHP process terminates:

use Symfony\Component\Lock\Store\ZookeeperStore;

$zookeeper = new \Zookeeper('localhost:2181');
// use the following to define a high-availability cluster:
// $zookeeper = new \Zookeeper('localhost1:2181,localhost2:2181,localhost3:2181');

$store = new ZookeeperStore($zookeeper);

Note

Zookeeper does not require a TTL as the nodes used for locking are ephemeral and die when the PHP process is terminated.

Reliability

The component guarantees that the same resource can't be locked twice, as long as the component is used in the following way.

Remote Stores

Remote stores (MemcachedStore, PdoStore, RedisStore and ZookeeperStore) use a unique token to recognize the true owner of the lock. This token is stored in the Key object and is used internally by the Lock.

Every concurrent process must store the Lock on the same server. Otherwise, two different machines may allow two different processes to acquire the same Lock.

Caution

To guarantee that the same server is always used, do not put Memcached behind a load balancer, a cluster or round-robin DNS. Even if the main server is down, calls must not be forwarded to a backup or failover server.

Expiring Stores

Expiring stores (MemcachedStore, PdoStore and RedisStore) guarantee that the lock is acquired only for the defined duration of time. If the task takes longer to be accomplished, then the lock can be released by the store and acquired by someone else.

The Lock provides several methods to check its health. The isExpired() method checks whether or not its lifetime is over and the getRemainingLifetime() method returns its time to live in seconds.

Using the above methods, a more robust code would be:

// ...
$lock = $factory->createLock('invoice-publication', 30);

if (!$lock->acquire()) {
    return;
}
while (!$finished) {
    if ($lock->getRemainingLifetime() <= 5) {
        if ($lock->isExpired()) {
            // lock was lost, perform a rollback or send a notification
            throw new \RuntimeException('Lock lost during the overall process');
        }

        $lock->refresh();
    }

    // Perform the task, whose duration MUST be less than 5 seconds
}

Caution

Choose the lifetime of the Lock wisely, and check whether its remaining time to live is enough to perform the task.

Caution

Storing a Lock usually takes a few milliseconds, but network conditions may increase that time a lot (up to a few seconds). Take that into account when choosing the right TTL.

By design, locks are stored in servers with a defined lifetime. If the date or time of the machine changes, a lock could be released sooner than expected.

Caution

To guarantee that the date won't change, the NTP service should be disabled and the date should only be updated while the service is stopped.

FlockStore

By using the file system, this Store is reliable as long as concurrent processes use the same physical directory to store locks.

Processes must run on the same machine, virtual machine or container. Be careful when updating a Kubernetes or Swarm service because, for a short period of time, there can be two containers running in parallel.

The absolute path to the directory must remain the same. Be careful of symlinks that could change at any time: Capistrano and blue/green deployments often use that trick. Be careful when the path to that directory changes between two deployments.

Some file systems (such as some types of NFS) do not support locking.

Caution

All concurrent processes must use the same physical file system by running on the same machine and using the same absolute path to the lock directory.

By definition, using the FlockStore in an HTTP context is incompatible with multiple front-end servers, unless you ensure that the same resource is always locked on the same machine, or use a properly configured shared file system.

Files on the file system can be removed during maintenance operations, for instance when cleaning up the /tmp directory or after a machine reboot when the directory uses tmpfs. This is not an issue if the lock is released when the process ends, but it is if the Lock is reused between requests.

Caution

Do not store locks on a volatile file system if they have to be reused in several requests.

MemcachedStore

Memcached works by storing items in memory. That means that when using the MemcachedStore, locks are not persisted and may disappear by mistake at any time.

If the Memcached service or the machine hosting it restarts, every lock would be lost without notifying the running processes.

Caution

To prevent someone else from acquiring a lock after a restart, it's recommended to delay the service start and wait at least as long as the longest lock TTL.

By default, Memcached uses an LRU mechanism to remove old entries when the service needs space to add new items.

Caution

The number of items stored in Memcached must be kept under control. If that's not possible, LRU should be disabled and locks should be stored in a dedicated Memcached service, away from the cache.

When the Memcached service is shared and used for multiple purposes, locks could be removed by mistake. For instance, some implementations of the PSR-6 clear() method use Memcached's flush() method, which purges and removes everything.

Caution

The flush() method must not be called, or locks should be stored in a dedicated Memcached service, away from the cache.

PdoStore

The PdoStore relies on the ACID properties of the SQL engine.

Caution

In a cluster configured with multiple primaries, ensure writes are synchronously propagated to every node, or always use the same node.

Caution

Some SQL engines like MySQL allow disabling the unique constraint check. Ensure that this is not the case: SET unique_checks=1;.

In order to purge old locks, this store uses the current datetime to define an expiration date reference. This mechanism relies on all server nodes having synchronized clocks.

Caution

To ensure locks don't expire prematurely, the TTLs should be set with enough extra time to account for any clock drift between nodes.

RedisStore

Redis works by storing items in memory. That means that when using the RedisStore, locks are not persisted and may disappear by mistake at any time.

If the Redis service or the machine hosting it restarts, every lock would be lost without notifying the running processes.

Caution

To prevent someone else from acquiring a lock after a restart, it's recommended to delay the service start and wait at least as long as the longest lock TTL.

Tip

Redis can be configured to persist items on disk, but this option slows down writes on the service and could interfere with other uses of the server.

When the Redis service is shared and used for multiple purposes, locks could be removed by mistake.

Caution

The FLUSHDB command must not be called, or locks should be stored in a dedicated Redis service, away from the cache.

CombinedStore

Combined stores allow the storage of locks across several backends. It's a common mistake to think that the lock mechanism will be more reliable. This is wrong. The CombinedStore will be, at best, as reliable as the least reliable of all managed stores. As soon as one managed store returns erroneous information, the CombinedStore won't be reliable.

Caution

All concurrent processes must use the same configuration, with the same number of managed stores and the same endpoints.

Tip

Instead of using a cluster of Redis or Memcached servers, it's better to use a CombinedStore with a single server per managed store.

SemaphoreStore

Semaphores are handled at the kernel level. In order to be reliable, processes must run on the same machine, virtual machine or container. Be careful when updating a Kubernetes or Swarm service because, for a short period of time, two containers can run in parallel.

Caution

All concurrent processes must use the same machine. Before starting a concurrent process on a new machine, check that other processes are stopped on the old one.

Caution

When running on systemd with a non-system user and the option RemoveIPC=yes (the default value), locks are deleted by systemd when that user logs out. Check that the process runs as a system user (UID <= SYS_UID_MAX, with SYS_UID_MAX defined in /etc/login.defs), or set RemoveIPC=off in /etc/systemd/logind.conf.

ZookeeperStore

The ZookeeperStore works by maintaining locks as ephemeral nodes on the server. That means that when using the ZookeeperStore, locks are automatically released at the end of the session in case the client cannot unlock for any reason.

If the ZooKeeper service or the machine hosting it restarts, every lock would be lost without notifying the running processes.

Tip

To use ZooKeeper's high-availability feature, you can set up a cluster of multiple servers so that, in case one of the servers goes down, the majority is still up and serving requests. All available servers in the cluster see the same state.

Note

This store does not support multi-level node locks, since cleaning up intermediate nodes would become an overhead, so all locks are maintained at the root level.

Overall

Changing the configuration of stores should be done very carefully, for instance during the deployment of a new version. Processes with the new configuration must not be started while old processes with the old configuration are still running.

This work, including the code samples, is licensed under a Creative Commons BY-SA 3.0 license.