ConcurrentHashMap Source Code Walkthrough (JDK 1.7)
Data structure
Declarations in the source
public class ConcurrentHashMap<K, V> extends AbstractMap<K, V>
        implements ConcurrentMap<K, V>, Serializable {
    // The backbone is an array of Segments
    final Segment<K,V>[] segments;
    // Default initial capacity of the whole table
    static final int DEFAULT_INITIAL_CAPACITY = 16;
    // Load factor
    static final float DEFAULT_LOAD_FACTOR = 0.75f;
    // Default concurrency level: the expected number of concurrently updating threads, 16 by default
    static final int DEFAULT_CONCURRENCY_LEVEL = 16;
    // Maximum number of entries the map may hold
    static final int MAXIMUM_CAPACITY = 1 << 30;
    // Minimum capacity of each Segment's hash table
    static final int MIN_SEGMENT_TABLE_CAPACITY = 2;
    // Maximum number of Segments
    static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
A ConcurrentHashMap is, at bottom, an array of Segments, and each Segment holds its own hash table.
static final class Segment<K,V> extends ReentrantLock implements Serializable {
    // The hash table, with a default initial capacity of 2. Note that
    // table is declared volatile to guarantee visibility when multiple
    // threads read and write it.
    transient volatile HashEntry<K,V>[] table;
    // Element count of this segment's table; size() sums these counters
    // segment by segment instead of re-counting entries
    transient int count;
    // Number of tryLock() retries before blocking in an update:
    // 64 on a multi-core CPU, 1 on a single core
    static final int MAX_SCAN_RETRIES =
        Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;
    // remaining members omitted
}
Segment extends ReentrantLock, so every Segment carries its own lock. This implements segment-level (striped) locking: the lock granularity is finer, which supports higher concurrency.
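The striping idea can be sketched independently of the JDK internals. The following toy class (all names hypothetical, not part of the JDK) hashes each key to one of N locks, so updates that land on different stripes never contend:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Toy illustration of lock striping: one ReentrantLock per stripe,
// so writes that hash to different stripes proceed in parallel.
class StripedMap<K, V> {
    private final int mask;                 // stripes - 1; stripes is a power of two
    private final ReentrantLock[] locks;
    private final Map<K, V>[] tables;

    @SuppressWarnings("unchecked")
    StripedMap(int stripes) {
        this.mask = stripes - 1;
        this.locks = new ReentrantLock[stripes];
        this.tables = new Map[stripes];
        for (int i = 0; i < stripes; i++) {
            locks[i] = new ReentrantLock();
            tables[i] = new HashMap<>();
        }
    }

    private int stripeFor(Object key) {
        return key.hashCode() & mask;       // same idea as (hash >>> segmentShift) & segmentMask
    }

    V put(K key, V value) {
        int s = stripeFor(key);
        locks[s].lock();                    // only this stripe is locked
        try {
            return tables[s].put(key, value);
        } finally {
            locks[s].unlock();
        }
    }

    V get(K key) {
        int s = stripeFor(key);
        locks[s].lock();
        try {
            return tables[s].get(key);
        } finally {
            locks[s].unlock();
        }
    }
}
```

Unlike the real Segment (which extends ReentrantLock and reads without locking), this sketch locks on reads too; it only demonstrates why two writers rarely block each other.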
ConcurrentHashMap initialization
public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1; // the length of segments[]
    // ssize is the smallest power of two >= the concurrency level
    while (ssize < concurrencyLevel) {
        ++sshift;
        ssize <<= 1;
    }
    this.segmentShift = 32 - sshift;
    this.segmentMask = ssize - 1;
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    // Work out the length of each Segment's hash table
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    int cap = MIN_SEGMENT_TABLE_CAPACITY; // the table length defaults to 2
    // cap is each Segment's table length, again rounded up to a power of two
    while (cap < c)
        cap <<= 1;
    // Create segments[0] and the segments array
    Segment<K,V> s0 =
        new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                         (HashEntry<K,V>[])new HashEntry[cap]);
    Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
    // UNSAFE is the sun.misc.Unsafe singleton. putOrderedObject performs
    // an ordered (lazily visible) store of s0 into segments[0]; later
    // reads go through getObjectVolatile, and the remaining segments are
    // created on demand by ensureSegment using CAS (compareAndSwapObject).
    UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
    this.segments = ss;
}
The constructor derives the segments array length ssize and each HashEntry array length cap from the concurrency level, and initializes Segment[0].
ssize is a power of two, 16 by default; each HashEntry array length is likewise a power of two, with a minimum of 2.
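As a concrete check of the sizing arithmetic above, this sketch (class name hypothetical) replays the constructor's rounding for initialCapacity = 33 and concurrencyLevel = 16:

```java
// Reproduces the constructor's sizing arithmetic: returns
// { ssize, segmentShift, cap } for the given arguments.
class SizingDemo {
    static int[] sizing(int initialCapacity, int concurrencyLevel) {
        // ssize: smallest power of two >= concurrencyLevel
        int sshift = 0, ssize = 1;
        while (ssize < concurrencyLevel) { ++sshift; ssize <<= 1; }

        // cap: smallest power of two (minimum 2) such that
        // ssize segments together can hold initialCapacity entries
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity) ++c;
        int cap = 2;                        // MIN_SEGMENT_TABLE_CAPACITY
        while (cap < c) cap <<= 1;

        return new int[] { ssize, 32 - sshift, cap };
    }

    public static void main(String[] args) {
        int[] r = sizing(33, 16);
        System.out.println(r[0]);   // 16 segments
        System.out.println(r[1]);   // segmentShift = 28
        System.out.println(r[2]);   // per-segment table length = 4
    }
}
```

For 33 entries spread over 16 segments, c = 3, so each segment's table is rounded up from 2 to 4; the map can initially hold 16 × 4 = 64 entries.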
The put method
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);
    int j = (hash >>> segmentShift) & segmentMask;
    if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
         (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
        s = ensureSegment(j);
    return s.put(key, hash, value, false);
}

private int hash(Object k) {
    int h = hashSeed;
    if ((0 != h) && (k instanceof String)) {
        return sun.misc.Hashing.stringHash32((String) k);
    }
    h ^= k.hashCode();
    // Spread bits to regularize both segment and index locations,
    // using variant of single-word Wang/Jenkins hash.
    h += (h << 15) ^ 0xffffcd7d;
    h ^= (h >>> 10);
    h += (h << 3);
    h ^= (h >>> 6);
    h += (h << 2) + (h << 14);
    return h ^ (h >>> 16);
}
1. Compute the hash of the key, and from it the index of the target Segment.
2. Check whether that segment has been initialized; if not, call ensureSegment to initialize it. Internally, ensureSegment reads the slot with UNSAFE.getObjectVolatile twice (a double check) to see whether another thread has already created the segment, then installs a new one with UNSAFE.compareAndSwapObject. Because the CAS can fail when a competing thread wins the race, the code spins and retries until the slot is observed non-null, guaranteeing that exactly one segment is ever installed.
private Segment<K,V> ensureSegment(int k) {
    final Segment<K,V>[] ss = this.segments;
    long u = (k << SSHIFT) + SBASE; // raw offset
    Segment<K,V> seg;
    if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
        Segment<K,V> proto = ss[0]; // use segment 0 as prototype
        int cap = proto.table.length;
        float lf = proto.loadFactor;
        int threshold = (int)(cap * lf);
        HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
            == null) { // recheck
            Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
            while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                   == null) {
                if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                    break;
            }
        }
    }
    return seg;
}
3. Call the Segment's put method to place the element into the HashEntry array; this path acquires the segment's lock.
final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    HashEntry<K,V> node = tryLock() ? null :
        scanAndLockForPut(key, hash, value);
    V oldValue;
    try {
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash;
        HashEntry<K,V> first = entryAt(tab, index);
        for (HashEntry<K,V> e = first;;) {
            if (e != null) {
                K k;
                if ((k = e.key) == key ||
                    (e.hash == hash && key.equals(k))) {
                    oldValue = e.value;
                    if (!onlyIfAbsent) {
                        e.value = value;
                        ++modCount;
                    }
                    break;
                }
                e = e.next;
            }
            else {
                if (node != null)
                    node.setNext(first);
                else
                    node = new HashEntry<K,V>(hash, key, value, first);
                int c = count + 1;
                if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                    rehash(node);
                else
                    setEntryAt(tab, index, node);
                ++modCount;
                count = c;
                oldValue = null;
                break;
            }
        }
    } finally {
        unlock();
    }
    return oldValue;
}
When threads A and B put into the same Segment, execution proceeds as follows:
1. Thread A acquires the lock via tryLock().
2. Thread B fails to acquire the lock and enters scanAndLockForPut(), which repeatedly calls tryLock() to retry. On a multi-processor machine the retry limit is 64; on a single processor it is 1. Once the limit is exceeded, lock() is called and thread B is parked.
This design balances the CPU spent spinning against the CPU spent on thread switches: it neither burns cycles in an endless spin nor switches threads so frequently that the switching itself wastes more CPU.
3. After acquiring the lock, the hash value locates the slot in the HashEntry array, and the element is updated or inserted. If the insertion pushes the element count past the threshold (capacity × load factor), rehash() doubles the table (the rehash logic closely mirrors the resize logic analyzed in the companion HashMap source-code article; comparing the two is a useful exercise).
4. After insertion, the segment's count counter is updated, so that size() can sum per-segment counters instead of re-walking every HashEntry chain, which improves performance.
5. When thread A finishes its insertion, it releases the lock via unlock(), allowing thread B to proceed.
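The spin-then-block pattern in step 2 can be sketched on its own (class and method names hypothetical; the real scanAndLockForPut also pre-builds the node while spinning, which this sketch omits):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the spin-then-block acquisition used by scanAndLockForPut:
// spin on the cheap non-blocking tryLock() a bounded number of times,
// then fall back to a blocking lock() that parks the thread.
class SpinThenBlock {
    static final int MAX_SCAN_RETRIES =
        Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;

    static void acquire(ReentrantLock lock) {
        int retries = 0;
        while (!lock.tryLock()) {          // non-blocking attempt
            if (++retries > MAX_SCAN_RETRIES) {
                lock.lock();               // give up spinning, park instead
                return;
            }
        }
    }
}
```

Short critical sections usually free the lock within a few spins, so the spinning thread avoids the cost of a context switch; only under sustained contention does it pay the price of parking.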
The get method
get does not acquire any lock. It computes the hash of the key, locates the Segment, and uses Unsafe's getObjectVolatile so that the element it reads is up to date.
public V get(Object key) {
    Segment<K,V> s; // manually integrate access methods to reduce overhead
    HashEntry<K,V>[] tab;
    int h = hash(key);
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
        (tab = s.table) != null) {
        for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                 (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) {
            K k;
            if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                return e.value;
        }
    }
    return null;
}
The size method
public int size() {
    // Try a few times to get accurate count. On failure due to
    // continuous async changes in table, resort to locking.
    final Segment<K,V>[] segments = this.segments;
    int size;
    boolean overflow; // true if size overflows 32 bits
    long sum;         // sum of modCounts
    long last = 0L;   // previous sum
    int retries = -1; // first iteration isn't retry
    try {
        for (;;) {
            // After RETRIES_BEFORE_LOCK attempts still fail to observe a
            // stable count, lock the whole segments array and sum the
            // per-segment table sizes under exclusion
            if (retries++ == RETRIES_BEFORE_LOCK) {
                for (int j = 0; j < segments.length; ++j)
                    ensureSegment(j).lock(); // force creation
            }
            sum = 0L;
            size = 0;
            overflow = false;
            // Sum the modCounts; if two consecutive passes agree, the
            // structure did not change in between, so break out of the
            // loop and return size
            for (int j = 0; j < segments.length; ++j) {
                Segment<K,V> seg = segmentAt(segments, j);
                if (seg != null) {
                    sum += seg.modCount;
                    int c = seg.count;
                    if (c < 0 || (size += c) < 0)
                        overflow = true;
                }
            }
            if (sum == last)
                break;
            last = sum;
        }
    } finally {
        if (retries > RETRIES_BEFORE_LOCK) {
            for (int j = 0; j < segments.length; ++j)
                segmentAt(segments, j).unlock();
        }
    }
    return overflow ? Integer.MAX_VALUE : size;
}
1. Repeatedly sum each Segment's modCount and count, accumulating the totals.
2. If the current pass's sum equals the previous pass's last (both are modCount totals), the structure did not change between the two passes, and the accumulated size is returned.
3. Otherwise the comparison is retried; after RETRIES_BEFORE_LOCK unsuccessful attempts, every Segment in the array is locked, the size is accumulated under exclusion, and the locks are released in the finally block.
Note that the returned size is only approximate: if, for example, the loop breaks because last == sum but another thread puts an element before size is returned, the returned value is already stale.
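The optimistic retry-then-lock strategy can be sketched with toy segments (all names hypothetical). With the locks held, modCount cannot change, so the "two passes agree" condition is guaranteed to hold on the next iteration:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the optimistic size() strategy: sum per-segment counts and
// modCounts without locking; if two consecutive passes see the same
// modCount total, the structure was stable and the size is trusted.
// Otherwise, after a few retries, lock every segment and count.
class SizeDemo {
    static final int RETRIES_BEFORE_LOCK = 2;

    static class Seg extends ReentrantLock {
        int count;      // elements in this segment
        int modCount;   // structural modifications so far
    }

    static int size(Seg[] segments) {
        long last = 0L; int size = 0; int retries = -1;
        try {
            for (;;) {
                if (retries++ == RETRIES_BEFORE_LOCK)
                    for (Seg s : segments) s.lock();    // give up, lock all
                long sum = 0L; size = 0;
                for (Seg s : segments) { sum += s.modCount; size += s.count; }
                if (sum == last) break;                 // two stable passes
                last = sum;
            }
        } finally {
            for (Seg s : segments)
                if (s.isHeldByCurrentThread()) s.unlock();
        }
        return size;
    }
}
```

In a single-threaded test the second pass always matches the first, so no lock is ever taken; the locking branch only triggers under sustained concurrent modification.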
Reposted from: https://www.jianshu.com/p/47c1be88a88e