The Lock Component
The Lock Component creates and manages locks, a mechanism to provide exclusive access to a shared resource.
If you're using the Symfony Framework, read the Symfony Framework Lock documentation.
Installation
$ composer require symfony/lock
Note
If you install this component outside of a Symfony application, you must require the vendor/autoload.php file in your code to enable the class autoloading mechanism provided by Composer. Read this article for more details.
Usage
Locks are used to guarantee exclusive access to some shared resource. In Symfony applications, you can use locks for example to ensure that a command is not executed more than once at the same time (on the same or different servers).
Locks are created using a LockFactory class, which in turn requires another class to manage the storage of locks:
use Symfony\Component\Lock\LockFactory;
use Symfony\Component\Lock\Store\SemaphoreStore;
$store = new SemaphoreStore();
$factory = new LockFactory($store);
The lock is created by calling the createLock() method. Its first argument is an arbitrary string that represents the locked resource. Then, a call to the acquire() method will try to acquire the lock:
// ...
$lock = $factory->createLock('pdf-creation');
if ($lock->acquire()) {
    // The resource "pdf-creation" is locked.
    // You can compute and generate the invoice safely here.

    $lock->release();
}
If the lock cannot be acquired, the method returns false. The acquire() method can be safely called repeatedly, even if the lock is already acquired.
Note
Unlike other implementations, the Lock Component distinguishes lock instances even when they are created for the same resource. It means that for a given scope and resource one lock instance can be acquired multiple times. If a lock has to be used by several services, they should share the same Lock instance returned by the LockFactory::createLock method.
Tip
If you don't release the lock explicitly, it will be released automatically upon instance destruction. In some cases, it can be useful to lock a resource across several requests. To disable the automatic release behavior, set the third argument of the createLock() method to false.
Serializing Locks
The Key contains the state of the Lock and can be serialized. This allows the user to begin a long job in a process by acquiring the lock, and continue the job in another process using the same lock.
First, you may create a serializable class containing the resource and the key of the lock:
// src/Lock/RefreshTaxonomy.php
namespace App\Lock;

use Symfony\Component\Lock\Key;

class RefreshTaxonomy
{
    public function __construct(
        private object $article,
        private Key $key,
    ) {
    }

    public function getArticle(): object
    {
        return $this->article;
    }

    public function getKey(): Key
    {
        return $this->key;
    }
}
Then, you can use this class to dispatch all that's needed for another process to handle the rest of the job:
use App\Lock\RefreshTaxonomy;
use Symfony\Component\Lock\Key;

$key = new Key('article.'.$article->getId());
$lock = $factory->createLockFromKey(
    $key,
    300, // ttl
    false // autoRelease
);
$lock->acquire(true);

$this->bus->dispatch(new RefreshTaxonomy($article, $key));
Note
Don't forget to set the autoRelease argument to false in the Lock instantiation to avoid releasing the lock when the destructor is called.
Not all stores are compatible with serialization and cross-process locking: for example, the kernel will automatically release semaphores acquired by the SemaphoreStore store. If you use an incompatible store (see lock stores for supported stores), an exception will be thrown when the application tries to serialize the key.
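For illustration, the process that receives this message could recreate the lock from the serialized key and release it once the job is done. This is a minimal sketch: the handler class, its location and the way the LockFactory is injected are assumptions; only createLockFromKey() and release() come from the component:

// src/MessageHandler/RefreshTaxonomyHandler.php (hypothetical handler)
namespace App\MessageHandler;

use App\Lock\RefreshTaxonomy;
use Symfony\Component\Lock\LockFactory;

class RefreshTaxonomyHandler
{
    public function __construct(
        private LockFactory $factory,
    ) {
    }

    public function __invoke(RefreshTaxonomy $message): void
    {
        // recreate the lock from the key serialized with the message;
        // the key already carries the acquired state
        $lock = $this->factory->createLockFromKey($message->getKey());

        try {
            // ... finish the long job on $message->getArticle()
        } finally {
            // release the lock acquired by the dispatching process
            $lock->release();
        }
    }
}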
Blocking Locks
By default, when a lock cannot be acquired, the acquire() method returns false immediately. To wait (indefinitely) until the lock can be created, pass true as the argument of the acquire() method. This is called a blocking lock because the execution of your application stops until the lock is acquired:
use Symfony\Component\Lock\LockFactory;
use Symfony\Component\Lock\Store\FlockStore;
$store = new FlockStore('/var/stores');
$factory = new LockFactory($store);
$lock = $factory->createLock('pdf-creation');
$lock->acquire(true);
When the store does not support blocking locks by implementing the BlockingStoreInterface interface (see lock stores for supported stores), the Lock class will retry to acquire the lock in a non-blocking way until the lock is acquired.
Expiring Locks
Locks created remotely are difficult to manage because there is no way for the remote Store to know if the locker process is still alive. Due to bugs, fatal errors or segmentation faults, it cannot be guaranteed that the release() method will be called, which would cause the resource to be locked infinitely.
The best solution in those cases is to create expiring locks, which are released automatically after some amount of time has passed (called TTL, for Time To Live). This time, in seconds, is configured as the second argument of the createLock() method. If needed, these locks can also be released early with the release() method.
The trickiest part when working with expiring locks is choosing the right TTL. If it's too short, other processes could acquire the lock before finishing the job; if it's too long and the process crashes before calling the release() method, the resource will stay locked until the timeout:
// ...
// create an expiring lock that lasts 30 seconds (default is 300.0)
$lock = $factory->createLock('pdf-creation', ttl: 30);
if (!$lock->acquire()) {
    return;
}

try {
    // perform a job during less than 30 seconds
} finally {
    $lock->release();
}
Tip
To avoid leaving the lock in a locked state, it's recommended to wrap the job in a try/catch/finally block to always try to release the expiring lock.
In case of long-running tasks, it's better to start with a not too long TTL and then use the refresh() method to reset the TTL to its original value:
// ...
$lock = $factory->createLock('pdf-creation', ttl: 30);
if (!$lock->acquire()) {
    return;
}

try {
    while (!$finished) {
        // perform a small part of the job.

        // renew the lock for 30 more seconds.
        $lock->refresh();
    }
} finally {
    $lock->release();
}
Tip
Another useful technique for long-running tasks is to pass a custom TTL as an argument of the refresh() method to change the default lock TTL:
$lock = $factory->createLock('pdf-creation', ttl: 30);
// ...
// refresh the lock for 30 seconds
$lock->refresh();
// ...
// refresh the lock for 600 seconds (next refresh() call will be 30 seconds again)
$lock->refresh(600);
This component also provides two useful methods related to expiring locks: getRemainingLifetime() (which returns null or a float as seconds) and isExpired() (which returns a boolean).
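A quick sketch of how these methods could be combined (assuming the lock was created with a TTL, as in the previous examples):

// ...
$lock = $factory->createLock('pdf-creation', ttl: 30);

if ($lock->acquire()) {
    // null when the store has no expiration, otherwise the seconds left
    $remainingLifetime = $lock->getRemainingLifetime();

    if ($lock->isExpired()) {
        // the lock was lost; abort or retry instead of continuing the job
    }
}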
Automatically Releasing The Lock
Locks are automatically released when their Lock objects are destroyed. This is an implementation detail that is important when sharing Locks between processes. In the example below, pcntl_fork() creates two processes and the Lock will be released automatically as soon as one process finishes:
// ...
$lock = $factory->createLock('pdf-creation');
if (!$lock->acquire()) {
    return;
}

$pid = pcntl_fork();
if (-1 === $pid) {
    // Could not fork
    exit(1);
} elseif ($pid) {
    // Parent process
    sleep(30);
} else {
    // Child process
    echo 'The lock will be released now.';
    exit(0);
}
// ...
Note
In order for the above example to work, the PCNTL extension must be installed.
To disable this behavior, set the autoRelease argument of LockFactory::createLock() to false. That will make the lock acquired for 3600 seconds or until Lock::release() is called:
$lock = $factory->createLock(
    'pdf-creation',
    3600, // ttl
    false // autoRelease
);
Shared Locks
A shared or readers-writer lock is a synchronization primitive that allows concurrent access for read-only operations, while write operations require exclusive access. This means that multiple threads can read the data in parallel but an exclusive lock is needed for writing or modifying data. They are used for example for data structures that cannot be updated atomically and are invalid until the update is complete.
Use the acquireRead() method to acquire a read-only lock, and acquire() method to acquire a write lock:
$lock = $factory->createLock('user-'.$user->id);
if (!$lock->acquireRead()) {
    return;
}
Similar to the acquire() method, pass true as the argument of acquireRead() to acquire the lock in a blocking mode:
$lock = $factory->createLock('user-'.$user->id);
$lock->acquireRead(true);
Note
The priority policy of Symfony's shared locks depends on the underlying store (e.g. Redis store prioritizes readers vs writers).
When a read-only lock is acquired with the acquireRead() method, it's possible to promote the lock, and change it to a write lock, by calling the acquire() method:
$lock = $factory->createLock('user-'.$userId);
$lock->acquireRead(true);

if (!$this->shouldUpdate($userId)) {
    return;
}

$lock->acquire(true); // Promote the lock to a write lock
$this->update($userId);
In the same way, it's possible to demote a write lock, and change it to a read-only lock, by calling the acquireRead() method, as sketched below.
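For example, a minimal sketch of demoting the lock once the write is done ($userId and the update() helper are the same hypothetical names used in the promotion example):

$lock = $factory->createLock('user-'.$userId);
$lock->acquire(true); // write lock

$this->update($userId);

$lock->acquireRead(true); // Demote the lock to a read-only lock
// other processes may read the resource again while this one keeps reading it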
When the provided store does not implement the SharedLockStoreInterface interface (see lock stores for supported stores), the Lock class will fall back to a write lock by calling the acquire() method.
The Owner of The Lock
Locks that are acquired for the first time are owned by the Lock instance that acquired it. If you need to check whether the current Lock instance is (still) the owner of a lock, you can use the isAcquired() method:
if ($lock->isAcquired()) {
    // We (still) own the lock
}
Because some lock stores have expiring locks, it is possible for an instance to lose the lock it acquired automatically:
// If we cannot acquire ourselves, it means some other process is already working on it
if (!$lock->acquire()) {
    return;
}

$this->beginTransaction();

// Perform a very long process that might exceed TTL of the lock

if ($lock->isAcquired()) {
    // Still all good, no other instance has acquired the lock in the meantime, we're safe
    $this->commit();
} else {
    // Bummer! Our lock has apparently exceeded TTL and another process has started in
    // the meantime so it's not safe for us to commit.
    $this->rollback();

    throw new \Exception('Process failed');
}
Caution
A common pitfall might be to use the isAcquired() method to check if a lock has already been acquired by any process. As you can see in this example, you have to use acquire() for this. The isAcquired() method is used to check if the lock has been acquired by the current process only.
Note
Technically, the true owners of the lock are the ones that share the same instance of Key, not Lock. But from a user perspective, Key is internal and you will likely only be working with the Lock instance, so it's easier to think of the Lock instance as being the one that is the owner of the lock.
Available Stores
Locks are created and managed in Stores, which are classes that implement PersistingStoreInterface and, optionally, BlockingStoreInterface.
The component includes the following built-in store types:
Store | Scope | Blocking | Expiring | Sharing | Serialization |
---|---|---|---|---|---|
FlockStore | local | yes | no | yes | no |
MemcachedStore | remote | no | yes | no | yes |
MongoDbStore | remote | no | yes | no | yes |
PdoStore | remote | no | yes | no | yes |
DoctrineDbalStore | remote | no | yes | no | yes |
PostgreSqlStore | remote | yes | no | yes | no |
DoctrineDbalPostgreSqlStore | remote | yes | no | yes | no |
RedisStore | remote | no | yes | yes | yes |
SemaphoreStore | local | yes | no | no | no |
ZookeeperStore | remote | no | no | no | no |
Tip
Symfony includes two other special stores that are mostly useful for testing: InMemoryStore, which saves locks in memory during a process, and NullStore, which doesn't persist anything.
7.2: The NullStore was introduced in Symfony 7.2.
FlockStore
The FlockStore uses the file system on the local computer to create the locks. It does not support expiration, but the lock is automatically released when the lock object goes out of scope and is freed by the garbage collector (for example when the PHP process ends):
use Symfony\Component\Lock\Store\FlockStore;
// the argument is the path of the directory where the locks are created
// if none is given, sys_get_temp_dir() is used internally.
$store = new FlockStore('/var/stores');
Caution
Beware that some file systems (such as some types of NFS) do not support locking. In those cases, it's better to use a directory on a local disk drive or a remote store.
MemcachedStore
The MemcachedStore saves locks on a Memcached server. It requires a Memcached connection implementing the \Memcached class. This store does not support blocking, and expects a TTL to avoid stalled locks:
use Symfony\Component\Lock\Store\MemcachedStore;
$memcached = new \Memcached();
$memcached->addServer('localhost', 11211);
$store = new MemcachedStore($memcached);
Note
Memcached does not support TTL lower than 1 second.
MongoDbStore
The MongoDbStore saves locks on a MongoDB server >=2.2. It requires a \MongoDB\Collection or \MongoDB\Client from mongodb/mongodb, or a MongoDB Connection String. This store does not support blocking and expects a TTL to avoid stalled locks:
use Symfony\Component\Lock\Store\MongoDbStore;
$mongo = 'mongodb://localhost/database?collection=lock';
$options = [
    'gcProbability' => 0.001,
    'database' => 'myapp',
    'collection' => 'lock',
    'uriOptions' => [],
    'driverOptions' => [],
];
$store = new MongoDbStore($mongo, $options);
The MongoDbStore takes the following $options (depending on the first parameter type):
Option | Description |
---|---|
gcProbability | Whether to create a TTL index, expressed as a probability from 0.0 to 1.0 (defaults to 0.001) |
database | The name of the database |
collection | The name of the collection |
uriOptions | Array of URI options for MongoDB\Client::__construct |
driverOptions | Array of driver options for MongoDB\Client::__construct |
When the first parameter is a:

MongoDB\Collection:
- $options['database'] is ignored
- $options['collection'] is ignored

MongoDB\Client:
- $options['database'] is mandatory
- $options['collection'] is mandatory

MongoDB Connection String:
- $options['database'] is used, otherwise the /path from the DSN; at least one is mandatory
- $options['collection'] is used, otherwise ?collection= from the DSN; at least one is mandatory
Note
The collection querystring parameter is not part of the MongoDB Connection String definition. It is used to allow constructing a MongoDbStore using a Data Source Name (DSN) without $options.
PdoStore
The PdoStore saves locks in an SQL database. It requires a PDO connection or a Data Source Name (DSN). This store does not support blocking, and expects a TTL to avoid stalled locks:
use Symfony\Component\Lock\Store\PdoStore;
// a PDO instance or DSN for lazy connecting through PDO
$databaseConnectionOrDSN = 'mysql:host=127.0.0.1;dbname=app';
$store = new PdoStore($databaseConnectionOrDSN, ['db_username' => 'myuser', 'db_password' => 'mypassword']);
Note
This store does not support TTL lower than 1 second.
The table where values are stored is created automatically on the first call to the save() method. You can also create this table explicitly by calling the createTable() method in your code.
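For instance, the table could be created explicitly during deployment instead of on the first save() call. This is a sketch; catching \PDOException here is an assumption about how a pre-existing table would surface:

use Symfony\Component\Lock\Store\PdoStore;

$store = new PdoStore('mysql:host=127.0.0.1;dbname=app', ['db_username' => 'myuser', 'db_password' => 'mypassword']);

try {
    // creates the table used to store the locks
    $store->createTable();
} catch (\PDOException $exception) {
    // the table could not be created (for example, it already exists)
}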
DoctrineDbalStore
The DoctrineDbalStore saves locks in an SQL database. It is identical to PdoStore but requires a Doctrine DBAL Connection, or a Doctrine DBAL URL. This store does not support blocking, and expects a TTL to avoid stalled locks:
1 2 3 4 5
use Symfony\Component\Lock\Store\DoctrineDbalStore;
// a Doctrine DBAL connection or DSN
$connectionOrURL = 'mysql://myuser:mypassword@127.0.0.1/app';
$store = new DoctrineDbalStore($connectionOrURL);
Note
This store does not support TTL lower than 1 second.
The table where values are stored will be automatically generated when you run the command:
$ php bin/console make:migration
If you prefer to create the table yourself and it has not already been created, you can create this table explicitly by calling the createTable() method. You can also add this table to your schema by calling the configureSchema() method in your code.
If the table has not been created upstream, it will be created automatically on the first call to the save() method.
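As a sketch of the explicit approach with Doctrine DBAL (the exact exception thrown when the table already exists is an assumption):

use Symfony\Component\Lock\Store\DoctrineDbalStore;

$store = new DoctrineDbalStore('mysql://myuser:mypassword@127.0.0.1/app');

try {
    // creates the table used to store the locks
    $store->createTable();
} catch (\Doctrine\DBAL\Exception $exception) {
    // the table could not be created (for example, it already exists)
}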
PostgreSqlStore
The PostgreSqlStore uses Advisory Locks provided by PostgreSQL. It requires a PDO connection or a Data Source Name (DSN). It supports native blocking, as well as sharing locks:
use Symfony\Component\Lock\Store\PostgreSqlStore;
// a PDO instance or DSN for lazy connecting through PDO
$databaseConnectionOrDSN = 'pgsql:host=localhost;port=5634;dbname=app';
$store = new PostgreSqlStore($databaseConnectionOrDSN, ['db_username' => 'myuser', 'db_password' => 'mypassword']);
Unlike the PdoStore, the PostgreSqlStore does not need a table to store locks, and it does not expire.
DoctrineDbalPostgreSqlStore
The DoctrineDbalPostgreSqlStore uses Advisory Locks provided by PostgreSQL. It is identical to PostgreSqlStore but requires a Doctrine DBAL Connection or a Doctrine DBAL URL. It supports native blocking, as well as sharing locks:
use Symfony\Component\Lock\Store\DoctrineDbalPostgreSqlStore;
// a Doctrine Connection or DSN
$databaseConnectionOrDSN = 'postgresql+advisory://myuser:mypassword@127.0.0.1:5634/lock';
$store = new DoctrineDbalPostgreSqlStore($databaseConnectionOrDSN);
Unlike the DoctrineDbalStore, the DoctrineDbalPostgreSqlStore does not need a table to store locks, and it does not expire.
RedisStore
The RedisStore saves locks on a Redis server. It requires a Redis connection implementing the \Redis, \RedisArray, \RedisCluster, \Relay\Relay or \Predis classes. This store does not support blocking, and expects a TTL to avoid stalled locks:
use Symfony\Component\Lock\Store\RedisStore;
$redis = new \Redis();
$redis->connect('localhost');
$store = new RedisStore($redis);
SemaphoreStore
The SemaphoreStore uses the PHP semaphore functions to create the locks:
use Symfony\Component\Lock\Store\SemaphoreStore;
$store = new SemaphoreStore();
CombinedStore
The CombinedStore is designed for High Availability applications because it manages several stores in sync (for example, several Redis servers). When a lock is acquired, it forwards the call to all the managed stores, and it collects their responses. If a simple majority of stores have acquired the lock, then the lock is considered acquired:
use Symfony\Component\Lock\Store\CombinedStore;
use Symfony\Component\Lock\Store\RedisStore;
use Symfony\Component\Lock\Strategy\ConsensusStrategy;
$stores = [];
foreach (['server1', 'server2', 'server3'] as $server) {
    $redis = new \Redis();
    $redis->connect($server);

    $stores[] = new RedisStore($redis);
}
$store = new CombinedStore($stores, new ConsensusStrategy());
Instead of the simple majority strategy (ConsensusStrategy), an UnanimousStrategy can be used to require the lock to be acquired in all the stores:
use Symfony\Component\Lock\Store\CombinedStore;
use Symfony\Component\Lock\Strategy\UnanimousStrategy;
$store = new CombinedStore($stores, new UnanimousStrategy());
Caution
In order to get high availability when using the ConsensusStrategy, the minimum cluster size must be three servers. This allows the cluster to keep working when a single server fails (because this strategy requires that the lock is acquired for more than half of the servers).
ZookeeperStore
The ZookeeperStore saves locks on a ZooKeeper server. It requires a ZooKeeper connection implementing the \Zookeeper class. This store does not support blocking and expiration, but the lock is automatically released when the PHP process is terminated:
use Symfony\Component\Lock\Store\ZookeeperStore;
$zookeeper = new \Zookeeper('localhost:2181');
// use the following to define a high-availability cluster:
// $zookeeper = new \Zookeeper('localhost1:2181,localhost2:2181,localhost3:2181');
$store = new ZookeeperStore($zookeeper);
Note
Zookeeper does not require a TTL as the nodes used for locking are ephemeral and die when the PHP process is terminated.
Reliability
The component guarantees that the same resource can't be locked twice as long as the component is used in the following way.
Remote Stores
Remote stores (MemcachedStore, MongoDbStore, PdoStore, PostgreSqlStore, RedisStore and ZookeeperStore) use a unique token to recognize the true owner of the lock. This token is stored in the Key object and is used internally by the Lock.
Every concurrent process must store the Lock on the same server. Otherwise two different machines may allow two different processes to acquire the same Lock.
Caution
To guarantee that the same server will always be safe, do not use Memcached behind a LoadBalancer, a cluster or round-robin DNS. Even if the main server is down, the calls must not be forwarded to a backup or failover server.
Expiring Stores
Expiring stores (MemcachedStore, MongoDbStore, PdoStore and RedisStore) guarantee that the lock is acquired only for the defined duration of time. If the task takes longer to be accomplished, then the lock can be released by the store and acquired by someone else.
The Lock provides several methods to check its health. The isExpired() method checks whether or not its lifetime is over and the getRemainingLifetime() method returns its time to live in seconds.
Using the above methods, a robust code would be:
// ...
$lock = $factory->createLock('pdf-creation', 30);
if (!$lock->acquire()) {
    return;
}

while (!$finished) {
    if ($lock->getRemainingLifetime() <= 5) {
        if ($lock->isExpired()) {
            // lock was lost, perform a rollback or send a notification
            throw new \RuntimeException('Lock lost during the overall process');
        }

        $lock->refresh();
    }

    // Perform the task whose duration MUST be less than 5 seconds
}
Caution
Choose wisely the lifetime of the Lock and check whether its remaining time to live is enough to perform the task.
Caution
Storing a Lock usually takes a few milliseconds, but network conditions may increase that time a lot (up to a few seconds). Take that into account when choosing the right TTL.
By design, locks are stored on servers with a defined lifetime. If the date or time of the machine changes, a lock could be released sooner than expected.
Caution
To guarantee that the date won't change, the NTP service should be disabled and the date should be updated only while the service is stopped.
FlockStore
By using the file system, this Store is reliable as long as concurrent processes use the same physical directory to store locks.
Processes must run on the same machine, virtual machine or container. Be careful when updating a Kubernetes or Swarm service because, for a short period of time, there can be two containers running in parallel.
The absolute path to the directory must remain the same. Be careful of symlinks that could change at anytime: Capistrano and blue/green deployment often use that trick. Be careful when the path to that directory changes between two deployments.
Some file systems (such as some types of NFS) do not support locking.
Caution
All concurrent processes must use the same physical file system by running on the same machine and using the same absolute path to the lock directory.
Using a FlockStore in an HTTP context is incompatible with multiple front servers, unless you ensure that the same resource will always be locked on the same machine or use a well-configured shared file system.
Files on the file system can be removed during a maintenance operation, for instance to clean up the /tmp directory or after a reboot of the machine when a directory uses tmpfs. It's not an issue if the lock is released when the process ends, but it is an issue if the Lock is reused between requests.
Danger
Do not store locks on a volatile file system if they have to be reused in several requests.
MemcachedStore
The way Memcached works is to store items in memory. That means that by using the MemcachedStore the locks are not persisted and may disappear by mistake at any time.
If the Memcached service or the machine hosting it restarts, every lock would be lost without notifying the running processes.
Caution
To avoid that someone else acquires a lock after a restart, it's recommended to delay service start and wait at least as long as the longest lock TTL.
By default, Memcached uses an LRU mechanism to remove old entries when the service needs space to add new items.
Caution
The number of items stored in Memcached must be under control. If it's not possible, LRU should be disabled and Lock should be stored in a dedicated Memcached service away from Cache.
When the Memcached service is shared and used for multiple purposes, locks could be removed by mistake. For instance, some implementations of the PSR-6 clear() method use Memcached's flush() method, which purges and removes everything.
Danger
The method flush() must not be called, or locks should be stored in a dedicated Memcached service away from Cache.
MongoDbStore
Caution
The locked resource name is indexed in the _id field of the lock collection. Beware that an indexed field's value in MongoDB can be a maximum of 1024 bytes in length, including the structural overhead.
A TTL index must be used to automatically clean up expired locks. Such an index can be created manually:
db.lock.createIndex(
{ "expires_at": 1 },
{ "expireAfterSeconds": 0 }
)
Alternatively, the method MongoDbStore::createTtlIndex(int $expireAfterSeconds = 0) can be called once to create the TTL index during database setup. Read more about Expire Data from Collections by Setting TTL in MongoDB.
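For example, a sketch of calling it from a one-off setup script (reusing the $mongo and $options values shown earlier):

use Symfony\Component\Lock\Store\MongoDbStore;

$store = new MongoDbStore($mongo, $options);

// create the TTL index once, e.g. during database setup or deployment
$store->createTtlIndex();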
Tip
MongoDbStore will attempt to automatically create a TTL index. It's recommended to set the constructor option gcProbability to 0.0 to disable this behavior if you have manually dealt with TTL index creation.
Caution
This store relies on all PHP application and database nodes having synchronized clocks for lock expiry to occur at the correct time. To ensure locks don't expire prematurely, the lock TTL should be set with enough extra time in expireAfterSeconds to account for any clock drift between nodes.
writeConcern and readConcern are not specified by MongoDbStore, meaning the collection's settings will take effect. readPreference is primary for all queries. Read more about Replica Set Read and Write Semantics in MongoDB.
PdoStore
The PdoStore relies on the ACID properties of the SQL engine.
Caution
In a cluster configured with multiple primaries, ensure writes are synchronously propagated to every node, or always use the same node.
Caution
Some SQL engines like MySQL allow disabling the unique constraint check. Ensure that this is not the case (SET unique_checks=1;).
In order to purge old locks, this store uses a current datetime to define an expiration date reference. This mechanism relies on all server nodes to have synchronized clocks.
Caution
To ensure locks don't expire prematurely, the TTLs should be set with enough extra time to account for any clock drift between nodes.
PostgreSqlStore
The PostgreSqlStore relies on the Advisory Locks properties of the PostgreSQL database. That means that by using PostgreSqlStore the locks will be automatically released at the end of the session in case the client cannot unlock for any reason.
If the PostgreSQL service or the machine hosting it restarts, every lock would be lost without notifying the running processes.
If the TCP connection is lost, PostgreSQL may release locks without notifying the application.
RedisStore
The way Redis works is to store items in memory. That means that by using the RedisStore the locks are not persisted and may disappear by mistake at any time.
If the Redis service or the machine hosting it restarts, every lock would be lost without notifying the running processes.
Caution
To avoid that someone else acquires a lock after a restart, it's recommended to delay service start and wait at least as long as the longest lock TTL.
Tip
Redis can be configured to persist items on disk, but this option would slow down writes on the service. This could go against other uses of the server.
When the Redis service is shared and used for multiple purposes, locks could be removed by mistake.
Danger
The command FLUSHDB must not be called, or locks should be stored in a dedicated Redis service away from Cache.
CombinedStore
Combined stores allow the storage of locks across several backends. It's a common mistake to think that the lock mechanism will be more reliable. This is wrong. The CombinedStore will be, at best, as reliable as the least reliable of all managed stores. As soon as one managed store returns erroneous information, the CombinedStore won't be reliable.
Caution
All concurrent processes must use the same configuration, with the same number of managed stores and the same endpoints.
Tip
Instead of using a cluster of Redis or Memcached servers, it's better to use a CombinedStore with a single server per managed store.
SemaphoreStore
Semaphores are handled at the kernel level. In order to be reliable, processes must run on the same machine, virtual machine or container. Be careful when updating a Kubernetes or Swarm service because, for a short period of time, there can be two containers running in parallel.
Caution
All concurrent processes must use the same machine. Before starting a concurrent process on a new machine, check that other processes are stopped on the old one.
Caution
When running on systemd with a non-system user and the option RemoveIPC=yes (the default value), locks are deleted by systemd when that user logs out. Check that the process is run with a system user (UID <= SYS_UID_MAX), with SYS_UID_MAX defined in /etc/login.defs, or set the option RemoveIPC=off in /etc/systemd/logind.conf.
ZookeeperStore
The way ZookeeperStore works is by maintaining locks as ephemeral nodes on the server. That means that by using ZookeeperStore the locks will be automatically released at the end of the session in case the client cannot unlock for any reason.
If the ZooKeeper service or the machine hosting it restarts, every lock would be lost without notifying the running processes.
Tip
To use ZooKeeper's high-availability feature, you can set up a cluster of multiple servers so that if one of the servers goes down, the majority will still be up and serving requests. All the available servers in the cluster will see the same state.
Note
This store does not support multi-level node locks, since the clean-up of intermediate nodes would add overhead; all locks are maintained at the root level.
Overall
Changing the configuration of stores should be done very carefully, for instance during the deployment of a new version. Processes with the new configuration must not be started while old processes with the old configuration are still running.