Java Concurrency in Depth - 1
Java comes with strong support for multithreading and concurrency, which makes it easy to write concurrent applications. However, multithreaded applications tend to be tricky to debug, troubleshoot, and sometimes scale. In my experience, most concurrency issues only surface when the application runs at scale, which in many cases means after it goes live. To make this easier, it helps to understand how things work under the hood and the pros and cons of each choice.
This article is the first in a series of articles discussing the internals of Java concurrency.
Let’s start with this example:
public class Foo {
    private int x;

    public int getX() {
        return x;
    }

    public void setX(int x) {
        this.x = x;
    }
}
This code is obviously not thread-safe. One way to make it thread-safe is to declare both setX() and getX() as synchronized.
How Synchronization Works
When a thread enters a synchronized method or block, it tries to acquire the object's intrinsic lock (monitor). While one thread holds the lock, any other thread that tries to acquire it blocks until the lock is released.
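As a minimal sketch of this, here is the Foo class from above with synchronized accessors (the name SyncFoo is mine, to distinguish it from the original):

```java
// Thread-safe version of Foo using synchronized accessors.
class SyncFoo {
    private int x;

    // Both methods lock on the same intrinsic monitor (this),
    // so reads and writes of x never interleave.
    public synchronized int getX() {
        return x;
    }

    public synchronized void setX(int x) {
        this.x = x;
    }
}
```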
This looks okay! But synchronization has some drawbacks:
- Starvation: Synchronization doesn't guarantee fairness. If many threads compete for the lock, some threads may never get a chance to proceed; that is starvation.
- Deadlock: Calling synchronized code from other synchronized code can cause deadlocks, typically when two threads acquire the same locks in different orders.
- Less throughput: With synchronization, only one thread at a time executes the guarded code on a particular object. In many cases this is stricter than necessary: it is enough to lock the variable only on writes, and there is no need to block threads when all of them are merely reading at that moment (concurrent reads).
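The deadlock point can be illustrated with a classic lock-ordering hazard. The Account class below is hypothetical (not from the article): if thread 1 runs a.transferTo(b, ...) while thread 2 runs b.transferTo(a, ...), each thread can end up holding its own account's monitor while waiting forever for the other's.

```java
// Hypothetical example of a lock-ordering deadlock hazard.
class Account {
    private int balance;

    Account(int balance) {
        this.balance = balance;
    }

    public synchronized void transferTo(Account other, int amount) {
        balance -= amount;
        other.deposit(amount); // acquires other's monitor while still holding ours
    }

    public synchronized void deposit(int amount) {
        balance += amount;
    }

    public synchronized int balance() {
        return balance;
    }
}
```

Run single-threaded the code is fine; the danger only appears when two threads call transferTo() in opposite directions at the same time.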
Synchronization is good for thread safety but not optimal for concurrency. Check out this Javadoc about liveness problems.
Volatile
Another solution is using volatile.
public class Foo {
    private volatile int x;
    ...
}
How Volatile Works
Volatile is said to guarantee:
1. Visibility: If one thread changes the value of the variable, the change is visible immediately to other threads that subsequently read it. This is guaranteed by not allowing the compiler or the JVM to cache the variable in CPU registers: every write to a volatile variable is flushed immediately to main memory, and every read of it is fetched from main memory. That carries a small performance penalty, but it is far better from a concurrency point of view.
2. Ordering: For performance optimization, the JVM sometimes reorders instructions. This is restricted around volatile variables: accesses to a volatile variable are not reordered with each other, nor with accesses to the ordinary fields around them. As a consequence, writes to non-volatile fields made before a volatile write become visible to any thread that subsequently reads that volatile variable.
Let’s look at an example to clarify this:
public class Foo {
    private int x = -1;
    private volatile boolean v = false;

    public void setX(int x) {
        this.x = x;
        v = true;
    }

    public int getX() {
        if (v) {
            return x;
        }
        return 0;
    }
}
Because of the first rule, if thread A calls setX() and thread B then calls getX(), the change to v is immediately visible to thread B. And because of the second rule, the write to x, which happens before the write to v, is visible to thread B as well.
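A minimal sketch of this publication pattern follows (the name VolatileFoo is mine; the fields match the article's example). The ordinary field x is safely published to readers through the volatile flag v:

```java
// Publishing a value via a volatile flag, as in the article's example.
class VolatileFoo {
    private int x = -1;
    private volatile boolean v = false;

    public void setX(int x) {
        this.x = x;  // ordinary write...
        v = true;    // ...published by the volatile write that follows
    }

    public int getX() {
        if (v) {         // volatile read
            return x;    // sees the value written before v was set to true
        }
        return 0;
    }
}
```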
However, volatile is not suitable for compound operations such as ++ and --, because these translate into multiple read and write instructions. For example:
public int increment() {
    // x++ is really three steps: read, add, write
    int tmp = x;
    tmp = tmp + 1;
    x = tmp;
    return x;
}
In a multithreaded program, such operations need to be atomic, which volatile doesn't guarantee. Java SE comes with a set of atomic classes, such as AtomicInteger, AtomicLong, and AtomicBoolean, which can be used to solve this problem.
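A sketch of the broken increment() above rewritten with AtomicInteger (the class name Counter is mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

// incrementAndGet performs the read-modify-write as a single
// indivisible operation, which volatile alone cannot provide.
class Counter {
    private final AtomicInteger x = new AtomicInteger();

    public int increment() {
        return x.incrementAndGet();
    }

    public int get() {
        return x.get();
    }
}
```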
How Atomic Classes Work
Java relies on machine-level instructions and algorithms to achieve atomicity. Prior to Java 8, the atomic classes used Compare-and-Swap (CAS). Starting in Java 8, some of their methods began using Fetch-and-Add.
Let’s have a look at this implementation of AtomicInteger.getAndIncrement() in Java 7:
public final int getAndIncrement() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return current;
    }
}
In Java 8, that implementation has changed to:
public final int getAndIncrement() {
    return unsafe.getAndAddInt(this, valueOffset, 1);
}
In the first implementation, compareAndSet returns true only if the value in memory still equals the expected value current, so the loop retries until that condition is met.
This is completely fine in an environment with few threads, but what if 100 threads call this method at once? Under such high contention, CAS failures become frequent, so the loop may keep spinning for a long time; in the worst case this resembles a livelock. In such cases, the solution has to be designed carefully. One idea is a map-reduce-style design: divide the threads into sets (mappers), give each set its own shared atomic instance, and have a reducer thread collect values from those shared atomic instances.
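The sharding idea can be sketched as follows. This is a hypothetical illustration, not the article's code: increments are spread across several AtomicLong shards to reduce contention on any single CAS loop, and a read sums ("reduces") the shards. The JDK's java.util.concurrent.atomic.LongAdder implements essentially this strategy.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sharded counter: contention is spread over several
// AtomicLong instances instead of one hot CAS target.
class ShardedCounter {
    private final AtomicLong[] shards;

    ShardedCounter(int nShards) {
        shards = new AtomicLong[nShards];
        for (int i = 0; i < nShards; i++) {
            shards[i] = new AtomicLong();
        }
    }

    public void increment() {
        // Pick a shard pseudo-randomly; in a fuller design each thread
        // set (mapper) would stick to its own shard.
        int i = ThreadLocalRandom.current().nextInt(shards.length);
        shards[i].incrementAndGet();
    }

    // The "reducer" step: sum all shards for the current total.
    public long sum() {
        long total = 0;
        for (AtomicLong shard : shards) {
            total += shard.get();
        }
        return total;
    }
}
```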
Is this problem solved in Java 8?
1. Keep in mind that some methods still use the first approach, such as getAndUpdate(IntUnaryOperator).
2. Performance under contention still degrades, but it remains much better in Java 8. Check out this blog post where Ashkrit plotted graphs comparing the performance of both.
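For illustration, getAndUpdate applies a pure function inside a CAS retry loop, so it still follows the Java 7-style compare-and-set pattern shown earlier (the wrapper class UpdateDemo is mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

// getAndUpdate retries the operator in a CAS loop until it wins,
// and returns the previous value, like getAndIncrement.
class UpdateDemo {
    static int doubleAndGetOld(AtomicInteger x) {
        // The lambda may run more than once under contention,
        // so it must be side-effect free.
        return x.getAndUpdate(n -> n * 2);
    }
}
```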
Link: https://dzone.com/articles/java-concurrency-in-depth-part-1