Java Concurrency in Depth - 2
In the first part of this series, Java Concurrency in Depth (Part 1), I discussed the internals of Java's synchronization primitives (synchronized, volatile, and atomic classes) and the pros and cons of each of them.
In this article, I will discuss other high-level locks that are built on top of volatile, atomic classes and Compare-And-Swap.
The primary reason for writing a multi-threaded application is improving performance. In exchange for that performance gain, we accept more complexity in code, debugging, and monitoring. When there is a shared object, locking becomes inevitable, and that shared object becomes the bottleneck where performance issues, among others, show up. So, choosing the right locking mechanism can have a profound impact on performance and operations.
But why is synchronized not enough?
Synchronized is an exclusive locking mechanism that cannot be tailored to different use cases.
There is no way to instruct the JVM to use a fair acquisition policy.
When deadlock or starvation occurs, the only way to resolve the problem is killing the process, because threads block indefinitely and there is no way to interrupt a thread that is blocked on a synchronized lock.
Lock
Having an interface like java.util.concurrent.locks.Lock provides great flexibility to have different implementations tailored to different use cases, as well as overcoming the drawbacks of synchronized. The interface defines the following methods:
lock() : The thread blocks indefinitely until it acquires the lock. No way to interrupt the blocked thread.
lockInterruptibly() : The thread blocks indefinitely until it acquires the lock. The blocked thread can be interrupted.
tryLock() : Non-blocking call. Returns immediately with true if the lock is acquired and false otherwise.
tryLock(long, TimeUnit) : Blocking call until the lock is acquired or the specified timeout elapses. Returns true if the lock is acquired within the timeout and false otherwise. The blocked thread can also be interrupted.
unlock() : Releases the lock. It is important to ALWAYS have it in a finally block.
newCondition() : This is one of the most useful features. In some use cases, the thread needs to wait until a certain condition is satisfied, and during this waiting time, it is useful to release the lock for other threads to continue execution. This is similar to Object.wait and Object.notify/notifyAll, which work with synchronized.
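To make the Condition usage concrete, here is a minimal sketch of my own (not taken from the JDK docs): a single-slot buffer where the consumer awaits on a Condition, releasing the lock while it waits, and the producer signals when data is available. It uses ReentrantLock, covered next, as the Lock implementation.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class SingleSlotBuffer<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private T value; // null means the buffer is empty

    void put(T newValue) {
        lock.lock();
        try {
            value = newValue;
            notEmpty.signal(); // wake up a waiting consumer
        } finally {
            lock.unlock();     // always release in a finally block
        }
    }

    T take() throws InterruptedException {
        lock.lock();
        try {
            while (value == null) {
                notEmpty.await(); // releases the lock while waiting
            }
            T result = value;
            value = null;
            return result;
        } finally {
            lock.unlock();
        }
    }
}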
ReentrantLock
ReentrantLock implements the Lock interface. It has two constructors to choose from depending on whether you need a fair lock or a non-fair one.
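For reference, the choice is just a constructor flag; a quick sketch:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessExample {
    public static void main(String[] args) {
        ReentrantLock nonFair = new ReentrantLock();   // default constructor: non-fair lock
        ReentrantLock fair = new ReentrantLock(true);  // fairness flag: favors the longest-waiting thread
        System.out.println(fair.isFair());             // prints true
    }
}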
It is important to know that both the fair and non-fair variants of ReentrantLock use a waiting queue to park blocked threads. This means:
Thread priority (which can be set using Thread.setPriority()) has no effect, so don't rely on it when using reentrant locks.
Fairness here is very simple: it is about getting a chance to execute. It is not a full-blown scheduler, which means starvation is still possible if a thread never releases the lock or holds it for a long time.
Both the fair and non-fair variants use AbstractQueuedSynchronizer, which implements the waiting queue. It is a variant of the CLH (Craig, Landin, and Hagersten) lock queue.
The logic behind granting locks:
Fair lock: grant the lock only if the call is recursive (i.e. the thread is already holding the lock) or there are no other waiting threads, or the thread is the first in the queue.
Non-fair lock: the thread first tries to acquire the lock directly and, if it cannot, it is queued.
Important Usage Guidelines
tryLock(), unlike the rest of the methods, acts in a non-fair way with both fair and non-fair implementations. So, pay attention when using it with a fair lock, as you might be expecting different behavior.
When using ReentrantLock, try to avoid using lock(). Use any of the other methods and, depending on the use case, you might add exponential backoff, a maximum number of retries, or both combined, as in the example below. Even if the use case requires blocking, for instance until data is available, use the lockInterruptibly() or tryLock(long, TimeUnit) methods.
boolean acquired = false;
long wait = 100;           // initial backoff in milliseconds
int retries = 0;
int maxRetries = 10;
try {
    while (!acquired && retries < maxRetries) {
        acquired = lock.tryLock(wait, TimeUnit.MILLISECONDS);
        wait *= 2;         // exponential backoff
        ++retries;
    }
    if (!acquired) {
        // log error or throw exception
    } else {
        // access the shared resource
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt status
    // log error or throw exception
} finally {
    if (acquired) {
        lock.unlock();     // only unlock if the lock was actually acquired
    }
}
One bad practice, in general, is ignoring interrupts. Interrupts should be handled properly to avoid problems with application or thread-pool termination. Even when you are sure you need to ignore them, at least log the exception; don't just swallow it.
The class provides many methods that, as per the Javadoc, should be used only for debugging and instrumentation and not for synchronization purposes. My advice is to keep programming to the interface, because you don't need to pollute your code with granular debug information that you can get from a decent profiler anyway.
ReentrantReadWriteLock
ReentrantReadWriteLock actually consists of two reentrant locks: a read lock and a write lock. The read lock is shared (multiple threads can hold it as long as the write lock is not held), whereas the write lock is exclusive and can be acquired by one thread only, and only while no other thread holds the read lock. This is very useful in many use cases and helps increase concurrency.
This also has both fair and non-fair policies.
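As a rough illustration (my own sketch, not from the Javadoc), a cached value protected by a ReentrantReadWriteLock might look like this:

import java.util.concurrent.locks.ReentrantReadWriteLock;

class CachedValue {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    int read() {
        rwLock.readLock().lock();   // shared: many readers can hold this at once
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    void write(int newValue) {
        rwLock.writeLock().lock();  // exclusive: blocks readers and other writers
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}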
Other Useful Synchronization Tools
CountDownLatch : This is useful when one or more threads need to wait until a set of operations completes. A count is passed to the constructor and it can be decremented by calling countDown(). When the count goes to zero, waiting threads are notified.
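A minimal sketch of the typical pattern, with the worker logic left as a placeholder:

import java.util.concurrent.CountDownLatch;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(2); // wait for two tasks

        Runnable task = () -> {
            try {
                // ... do some work ...
            } finally {
                done.countDown();  // decrement the count when the task finishes
            }
        };

        new Thread(task).start();
        new Thread(task).start();

        done.await();              // blocks until the count reaches zero
        System.out.println("All tasks finished");
    }
}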
CyclicBarrier : This is useful when a set of threads are required to wait until they all reach a common point. The same behavior could be achieved using CountDownLatch, but CyclicBarrier has some additional options (see the sketch after this list):
The count in CyclicBarrier can be reset.
CyclicBarrier accepts an optional Runnable implementation that will execute when the barrier is tripped.
A barrier is considered broken if any of the threads leaves the barrier because of interruption, failure, or timeout. When this happens, all other waiting threads leave the barrier with a BrokenBarrierException. The state of the barrier can be tested using isBroken(). This state is kept until reset() is called.
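Here is a minimal sketch of three workers meeting at a barrier, with an optional barrier action; the work itself is left as placeholders:

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierExample {
    public static void main(String[] args) {
        // The optional Runnable runs once per trip, after all parties arrive
        CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("All three threads reached the barrier"));

        Runnable worker = () -> {
            try {
                // ... phase 1 work ...
                barrier.await(); // wait until all three threads reach this point
                // ... phase 2 work ...
            } catch (InterruptedException | BrokenBarrierException e) {
                Thread.currentThread().interrupt(); // the barrier is broken; stop cleanly
            }
        };

        for (int i = 0; i < 3; i++) {
            new Thread(worker).start();
        }
    }
}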
StampedLock : Yet another implementation of a read-write lock. It doesn't implement the Lock interface, and it is more complicated to use than ReentrantReadWriteLock. It has a very interesting feature, optimistic reading, but it is quite tricky and fragile to use. I strongly recommend using ReentrantReadWriteLock instead of StampedLock.
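For completeness, this is roughly what the optimistic-read pattern looks like (a sketch in the spirit of the StampedLock Javadoc example):

import java.util.concurrent.locks.StampedLock;

class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = sl.writeLock();          // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();  // non-blocking, just a stamp
        double currentX = x, currentY = y;    // read state optimistically
        if (!sl.validate(stamp)) {            // a write slipped in: the optimistic read is invalid
            stamp = sl.readLock();            // fall back to a real read lock
            try {
                currentX = x;
                currentY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.hypot(currentX, currentY);
    }
}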