Ryan Zhi

In-depth Interview Questions on Distributed Locks and Thread Pools

Distributed Locks

Distributed Lock Principles

Distributed locks are typically implemented on Redis with a Lua script so that the check-and-set is atomic. The script first checks whether the lock key exists; if not, it sets the lock. The data structure is a hash: the key is the business key (which we design ourselves), the field is UUID:threadId, and the field's initial value is 1.
For mutual exclusion, if thread 1 has already acquired the lock and thread 2 executes the same Lua script, the script finds that the key exists (both threads use the same lock key) and then checks whether the hash contains thread 2's UUID:threadId field. It does not, so the script returns the remaining TTL of thread 1's lock, and thread 2 keeps retrying in a loop until the lock becomes available.
For reentrancy, the flow is the same as for mutual exclusion, except that the thread already holding the lock passes the field check, and the value of its field in the hash is incremented by 1 with hincrby.
Releasing the lock via unlock() decrements the holding thread's field in the hash; when the value reaches 0, the key is deleted.
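As a minimal sketch of how this looks from the application side with Redisson (assuming an injected RedissonClient and a made-up business key such as `lock:order:stock:{orderId}`; the UUID:threadId field and the hash bookkeeping described above are handled by Redisson internally):

```java
import java.util.concurrent.TimeUnit;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

public class OrderService {
    private final RedissonClient redissonClient;

    public OrderService(RedissonClient redissonClient) {
        this.redissonClient = redissonClient;
    }

    public void deductStock(long orderId) throws InterruptedException {
        // Business key we design ourselves; the UUID:threadId field is managed by Redisson
        RLock lock = redissonClient.getLock("lock:order:stock:" + orderId);
        // Wait up to 5s to acquire; no explicit leaseTime, so the watchdog keeps renewing it
        if (!lock.tryLock(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("could not acquire stock lock for order " + orderId);
        }
        try {
            // The same thread may lock()/unlock() again in here; the hash value is
            // incremented and decremented, and the key is deleted only when it reaches 0
            doDeduct(orderId);
        } finally {
            lock.unlock();
        }
    }

    private void doDeduct(long orderId) { /* ... */ }
}
```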
Redisson's implementation adds a watchdog mechanism to prevent the lock from expiring before the task completes. It is essentially a background task: after a thread acquires the lock successfully, an entry for that thread is added to a map, and every 10 seconds the watchdog checks whether the thread still holds the lock key (by iterating over the thread IDs in the map and querying Redis). If it does, the watchdog keeps resetting the key's expiration time to the default 30-second lease.
If the service crashes, the watchdog disappears with it, the key's expiration time is no longer extended, and the lock expires after 30 seconds, at which point other threads can acquire it.
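A small hedged example of the two acquisition modes, assuming the same redissonClient and a hypothetical rebuildReport() task: calling lock() with no lease time leaves renewal to the watchdog, while passing an explicit lease time disables it.

```java
import java.util.concurrent.TimeUnit;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

public class ReportJob {
    private final RedissonClient redissonClient;

    public ReportJob(RedissonClient redissonClient) {
        this.redissonClient = redissonClient;
    }

    public void run() {
        RLock lock = redissonClient.getLock("lock:report:rebuild");

        // No explicit leaseTime: the watchdog sets a 30s TTL and renews it roughly
        // every 10s in the background for as long as this thread holds the lock.
        lock.lock();
        try {
            rebuildReport();
        } finally {
            lock.unlock();
        }

        // Alternative: an explicit leaseTime disables the watchdog, so the key simply
        // expires after 10 minutes even if the task is still running.
        // lock.lock(10, TimeUnit.MINUTES);
    }

    private void rebuildReport() { /* ... */ }
}
```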

Optimizing Distributed Locks

To reduce how often the distributed lock is actually contended, or to improve performance under concurrency, consider issuing tokens with expiration times and using MQ to broadcast token updates, keeping the distributed lock only as a fallback path.

Handling Redis Failures

If Redis goes down while a Java thread holds a Redisson distributed lock, several issues may arise:

- The lock may be lost, especially in a single-instance or master-slave architecture.
- The lock may not be renewed because the watchdog can no longer reach Redis.
- The lock may appear released if its TTL expires before Redis recovers.

Solutions include enabling Redis persistence, setting reasonable lock expiration times, using high-availability architectures such as Redis Sentinel or Redis Cluster, and implementing the Redlock algorithm across multiple independent Redis instances.
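As an illustration of the high-availability option, here is a minimal Redisson Sentinel configuration sketch; the master name and sentinel addresses are placeholders for your own deployment:

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonFactory {
    // Master name and addresses below are placeholders, not a recommendation
    public static RedissonClient sentinelClient() {
        Config config = new Config();
        config.useSentinelServers()
              .setMasterName("mymaster")
              .addSentinelAddress("redis://10.0.0.1:26379",
                                  "redis://10.0.0.2:26379",
                                  "redis://10.0.0.3:26379");
        return Redisson.create(config);
    }
}
```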

Designing Distributed Lock Keys

When using Redisson's lock, the key is typically a fixed business prefix plus a unique business identifier (for example, an order ID), so that only operations on the same resource contend for the same lock.

Thread Pools and Multithreading

Thread Pool Execution Process

The execute() method first compares the current thread count with corePoolSize and creates a new core thread if it is below it; if the core threads are all busy, the task is placed in the work queue; if the queue is full, a new non-core thread is created up to maximumPoolSize; and if the pool is already at maximum capacity, the new task is handed to the rejection policy.
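A small runnable demo of that dispatch order, using deliberately tiny (non-production) sizes so each submission hits a different branch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecuteFlowDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());

        Runnable slowTask = () -> sleep(2000);

        pool.execute(slowTask); // 1. below corePoolSize: a core thread is created
        pool.execute(slowTask); // 2. core thread busy: the task is queued
        pool.execute(slowTask); // 3. queue full: a non-core thread is created
        try {
            pool.execute(slowTask); // 4. pool saturated: AbortPolicy throws
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        pool.shutdown();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```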

Core Parameters of Thread Pools

The ThreadPoolExecutor constructor takes seven parameters: corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, and handler.
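For reference, a constructor sketch with all seven parameters spelled out; the sizes, queue capacity, and thread-name prefix are illustrative values, not recommendations:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BizPools {
    private static final AtomicInteger SEQ = new AtomicInteger();

    public static ThreadPoolExecutor newOrderPool() {
        return new ThreadPoolExecutor(
                4,                                   // corePoolSize
                8,                                   // maximumPoolSize
                60, TimeUnit.SECONDS,                // keepAliveTime + unit (for idle non-core threads)
                new LinkedBlockingQueue<>(1000),     // workQueue, bounded on purpose
                r -> new Thread(r, "order-pool-" + SEQ.incrementAndGet()), // threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy());                // handler (rejection policy)
    }
}
```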

Rejection Policies of Thread Pools

Rejection policies include AbortPolicy, CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy.

Queue Types for Thread Pools

Supported queue types include ArrayBlockingQueue, LinkedBlockingQueue, PriorityBlockingQueue, and DelayQueue.

Usage Scenarios for Thread Pools

Thread pools are used for quick responses and high throughput, such as parallel service calls or processing message queues.

Optimization and Considerations for Thread Pools

Avoid the Executors factory methods (their unbounded queues or effectively unlimited thread counts can lead to OOM), handle task exceptions explicitly, and monitor the pool so parameters can be adjusted before it becomes overloaded.
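One hedged example of surfacing task exceptions: submitting through submit() and inspecting the Future, with a hypothetical parse() standing in for real business logic:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class TaskErrorHandling {

    public static void runSafely(ExecutorService pool) throws InterruptedException {
        // submit() captures exceptions in the Future; a bare execute() would only reach
        // the thread's UncaughtExceptionHandler and the failure is easy to miss
        Future<Integer> future = pool.submit(() -> parse("not-a-number"));
        try {
            Integer value = future.get();
            System.out.println("parsed " + value);
        } catch (ExecutionException e) {
            // e.getCause() is the exception thrown inside the task; log, alert, or retry here
            System.err.println("task failed: " + e.getCause());
        }
    }

    private static Integer parse(String raw) {
        return Integer.valueOf(raw); // throws NumberFormatException for bad input
    }
}
```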

Lock Mechanisms in Thread Pools

Internally, ThreadPoolExecutor uses a mainLock to protect shared state such as the worker set, and each Worker carries its own lock (Worker extends AbstractQueuedSynchronizer) to control whether its thread may be interrupted.

Extending and Customizing Thread Pools

You can implement the ThreadFactory interface to customize thread creation (naming, daemon status, priority) and implement RejectedExecutionHandler to define custom rejection policies based on business needs.
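A sketch of both extension points, assuming a hypothetical naming prefix and a log-then-run-in-caller fallback policy:

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.atomic.AtomicInteger;

// Names threads after the business module so thread dumps are easy to read
public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger();

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "-" + counter.incrementAndGet());
        t.setDaemon(false);
        return t;
    }
}

// A custom rejection policy that logs and then runs the task in the caller's thread
class LogAndCallerRunsPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.err.println("pool saturated, running task in caller thread: " + r);
        if (!executor.isShutdown()) {
            r.run();
        }
    }
}
```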

Pre-creation of Core Threads

By default, core threads are created on demand when tasks are submitted. However, you can pre-create core threads using methods like prestartCoreThread() or prestartAllCoreThreads().
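For example (pool sizes and queue capacity are placeholders):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WarmupDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(4, 8, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(1000));

        // Start all core threads up front (e.g. during application startup),
        // instead of creating them lazily as the first tasks arrive
        int started = pool.prestartAllCoreThreads();
        System.out.println("pre-started core threads: " + started);
        // pool.prestartCoreThread(); // alternatively, start just one core thread
        pool.shutdown();
    }
}
```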

Idle Core Thread Destruction

By default, core threads remain alive until the pool is shut down; if allowCoreThreadTimeOut(true) is set, core threads that stay idle longer than keepAliveTime are also terminated.
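A minimal sketch of enabling that behavior (sizes are placeholders):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreTimeoutDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(4, 8, 30, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(100));

        // With this flag, even core threads are terminated after 30s of idleness,
        // so a rarely used pool can shrink to zero threads between bursts
        pool.allowCoreThreadTimeOut(true);
        pool.shutdown();
    }
}
```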

State of Core and Non-Core Threads When Idle

Idle core threads block waiting for new tasks, while non-core threads wait at most keepAliveTime (polling the work queue with a timeout) and are terminated if no task arrives within that window.

Logging and Measuring Task Execution Time

You can wrap tasks with FutureTask to add logging logic or override ThreadPoolExecutor's beforeExecute and afterExecute methods to perform actions before and after task execution.
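A sketch of the hook-based approach, assuming a subclass that logs each task's elapsed time; the pool parameters are placeholders:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Measures each task's execution time by overriding the ThreadPoolExecutor hooks
public class TimedThreadPool extends ThreadPoolExecutor {
    private final ThreadLocal<Long> startNanos = new ThreadLocal<>();

    public TimedThreadPool(int core, int max) {
        super(core, max, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(1000));
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        startNanos.set(System.nanoTime()); // same worker thread runs afterExecute
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        long costMs = (System.nanoTime() - startNanos.get()) / 1_000_000;
        startNanos.remove();
        System.out.println(r + " took " + costMs + " ms" + (t != null ? ", failed: " + t : ""));
        super.afterExecute(r, t);
    }
}
```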

Dynamic Core Pool Size Adjustment

The core pool size can be dynamically adjusted using the setCorePoolSize(int newCorePoolSize) method. If the new size is larger, new core threads will be created; if smaller, excess threads may be terminated.
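A hedged sketch of such an adjustment, e.g. triggered from a configuration-center listener (the trigger itself is not shown):

```java
import java.util.concurrent.ThreadPoolExecutor;

public class PoolResizer {
    // Hypothetical callback, e.g. invoked when a config value changes
    public static void grow(ThreadPoolExecutor pool, int newCoreSize, int newMaxSize) {
        // When growing, raise maximumPoolSize first so corePoolSize never exceeds it;
        // when shrinking, lower corePoolSize first for the same reason
        pool.setMaximumPoolSize(newMaxSize);
        pool.setCorePoolSize(newCoreSize);
    }
}
```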

Behavior When corePoolSize Is Increased

If the corePoolSize is increased and there are tasks in the queue, new core threads will be created to handle the tasks.

Handling Idle Threads with keepAliveTime

The keepAliveTime is not enforced by a separate timer task. Inside the Worker's runWorker loop, the worker fetches its next task via getTask(), which polls the work queue with a keepAliveTime timeout; if no task arrives within that time, getTask() returns null and the worker thread exits.

Use of NANOSECONDS in workQueue.poll

The call workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) uses nanoseconds because, whatever time unit is passed to the constructor, the thread pool converts keepAliveTime to nanoseconds internally and manages all timing in that single unit.
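A simplified sketch of that idle-wait logic (not the actual JDK getTask() code, which also handles pool state and worker-count bookkeeping):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class IdleTimeoutSketch {
    // The keepAliveTime supplied at construction is stored as nanoseconds; timed workers
    // poll the queue with that timeout, and a null result means "idle too long, exit".
    static Runnable nextTask(BlockingQueue<Runnable> workQueue,
                             long keepAliveNanos,
                             boolean timed) throws InterruptedException {
        return timed
                ? workQueue.poll(keepAliveNanos, TimeUnit.NANOSECONDS) // null after keepAliveTime
                : workQueue.take();                                    // core threads block indefinitely
    }
}
```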
