At the end of the previous article we noted that Hashtable is rarely used these days. So how do we keep a map synchronized when thread safety is required? That is the topic of this article: ConcurrentHashMap (JDK 1.7). ConcurrentHashMap is considerably more complex than HashMap or Hashtable; internally it uses lock striping (segment locking) to improve concurrent access. Let's start with some test code:
Code Listing 1:
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class CurrentHashMapTest {
private static ConcurrentHashMap<String,String> concurrentHashMap=new ConcurrentHashMap<>();
private static Hashtable<String,String> hashtable=new Hashtable<>();
private static HashMap<String,String> hashMap=new HashMap<>();
public static void main(String[] args){
testConcurrentHashMapThreadSafe();
System.out.println(concurrentHashMap.size()+"last:"+concurrentHashMap.get("concurrentHashMap9999"));
testHashtableThreadSafe();
System.out.println(hashtable.size()+"last:"+hashtable.get("hashtable9999"));
testHashMapThreadSafe();
System.out.println(hashMap.size()+"last:"+hashMap.get("hashmap9999"));
System.out.println("test end");
}
public static void testConcurrentHashMapThreadSafe(){
long startTime=System.currentTimeMillis();
for (int i=0;i<100000;i++){
new ConcurrentThread(i,"concurrentHashMap",concurrentHashMap).start();
}
long endTime=System.currentTimeMillis();
System.out.println("ConcurrentHashMap take time:"+(endTime-startTime));
}
public static void testHashtableThreadSafe(){
long startTime=System.currentTimeMillis();
for (int i=0;i<100000;i++){
new ConcurrentHashTableThread(i,"hashtable",hashtable).start();
}
long endTime=System.currentTimeMillis();
System.out.println("Hashtable take time:"+(endTime-startTime));
}
public static void testHashMapThreadSafe(){
System.out.println("enter test HashMap");
long startTime=System.currentTimeMillis();
for (int i=0;i<100000;i++){
new ConcurrentHashMapThread(i,"hashmap",hashMap).start();
}
long endTime=System.currentTimeMillis();
System.out.println(" HashMap take time:"+(endTime-startTime));
}
}
class ConcurrentThread extends Thread{
public int i;
public String name;
private ConcurrentHashMap<String,String> map;
public ConcurrentThread(int i,String name,ConcurrentHashMap<String,String> map){
this.i=i;
this.name=name;
this.map=map;
}
@Override
public void run() {
super.run();
map.put(name+i,i+"");
}
}
class ConcurrentHashTableThread extends Thread{
public int i;
public String name;
private Hashtable<String,String> map;
public ConcurrentHashTableThread(int i,String name,Hashtable<String,String> map){
this.i=i;
this.name=name;
this.map=map;
}
@Override
public void run() {
super.run();
map.put(name+i,i+"");
}
}
class ConcurrentHashMapThread extends Thread{
public int i;
public String name;
private HashMap<String,String> map;
public ConcurrentHashMapThread(int i,String name,HashMap<String,String> map){
this.i=i;
this.name=name;
this.map=map;
}
@Override
public void run() {
super.run();
map.put(name+i,i+"");
}
}
Output of the code above (environment: Ubuntu 14.04 + IDEA + JDK 1.7):
ConcurrentHashMap take time:3522
100000last:9999
Hashtable take time:3674
100000last:9999
enter test HashMap
HashMap take time:1105168
99945last:9999
test end
From the output we can see that ConcurrentHashMap comes out slightly ahead of Hashtable in this test, while HashMap is clearly not thread-safe: it ends up with 99945 entries instead of 100000. Note that the printed times mostly reflect creating and starting 100000 threads rather than pure map throughput, because the main thread never waits for the workers to finish.
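If you want the timing to cover the actual insertions rather than just thread start-up, one option is to wait for all workers before taking the end timestamp. Below is a minimal sketch of that idea using a CountDownLatch; the class name TimedConcurrentPutTest and the WORKERS constant are mine, not part of the original test code.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class TimedConcurrentPutTest {
    private static final int WORKERS = 100000;
    public static void main(String[] args) throws InterruptedException {
        final ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        final CountDownLatch latch = new CountDownLatch(WORKERS);
        long start = System.currentTimeMillis();
        for (int i = 0; i < WORKERS; i++) {
            final int n = i;
            new Thread(new Runnable() {
                public void run() {
                    map.put("concurrentHashMap" + n, n + "");
                    latch.countDown(); // signal that this worker has finished its put
                }
            }).start();
        }
        latch.await(); // wait until every worker has completed before stopping the clock
        long end = System.currentTimeMillis();
        System.out.println("size=" + map.size() + ", time=" + (end - start));
    }
}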
Before digging into the source, let's outline ConcurrentHashMap's internal structure: the map holds an array of Segments, and each Segment guards its own HashEntry[] table, i.e. an array of singly linked entry lists. Each Segment is locked independently, which is what lets different threads write to different segments at the same time.
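To make the lock-striping idea concrete before we read the real code, here is a deliberately simplified, hypothetical sketch (the class StripedMap and everything in it are mine, not JDK code): a fixed number of locks each guard one slice of the buckets, so two threads only contend when their keys fall into the same stripe.

import java.util.HashMap;
import java.util.concurrent.locks.ReentrantLock;

// Toy illustration of lock striping; NOT how ConcurrentHashMap is actually implemented.
class StripedMap<K, V> {
    private static final int STRIPES = 16;              // analogous to concurrencyLevel
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final HashMap<K, V>[] buckets;

    @SuppressWarnings("unchecked")
    StripedMap() {
        buckets = (HashMap<K, V>[]) new HashMap[STRIPES];
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new ReentrantLock();
            buckets[i] = new HashMap<K, V>();
        }
    }

    private int stripeFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;  // map the key's hash to one stripe
    }

    public V put(K key, V value) {
        int s = stripeFor(key);
        locks[s].lock();                                 // only this stripe is locked, not the whole map
        try {
            return buckets[s].put(key, value);
        } finally {
            locks[s].unlock();
        }
    }

    public V get(Object key) {
        int s = stripeFor(key);
        locks[s].lock();                                 // the real ConcurrentHashMap avoids locking on get
        try {
            return buckets[s].get(key);
        } finally {
            locks[s].unlock();
        }
    }
}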
Following the style of the earlier articles, let's look at ConcurrentHashMap's fields and constructors, shown in Code Listing 2:
static final int DEFAULT_INITIAL_CAPACITY = 16;//default initial capacity of the map, the same value HashMap uses
static final float DEFAULT_LOAD_FACTOR = 0.75f;//default load factor
static final int DEFAULT_CONCURRENCY_LEVEL = 16;//default concurrency level
static final int MAXIMUM_CAPACITY = 1 << 30;//maximum capacity; note that DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR and MAXIMUM_CAPACITY all match the corresponding HashMap constants
static final int MIN_SEGMENT_TABLE_CAPACITY = 2;//minimum capacity of each segment's table; it must be at least 2 so that a lazily created segment does not have to resize immediately after its first insert
static final int MAX_SEGMENTS = 1 << 16; //maximum number of segments
static final int RETRIES_BEFORE_LOCK = 2;//number of unlocked passes size() and containsValue() make before falling back to locking every segment
final int segmentMask;//low-order mask used to select a segment
final int segmentShift;//shift applied to the hash before masking, so that the high-order bits pick the segment
final Segment<K,V>[] segments;//the segment array
transient Set<K> keySet;
transient Set<Map.Entry<K,V>> entrySet;
transient Collection<V> values;
public ConcurrentHashMap(int initialCapacity,
float loadFactor, int concurrencyLevel) {
if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
if (concurrencyLevel > MAX_SEGMENTS)
concurrencyLevel = MAX_SEGMENTS;
// Find power-of-two sizes best matching arguments
int sshift = 0;//number of left shifts performed
int ssize = 1;//the resulting length of the segment array
while (ssize < concurrencyLevel) {//when reading source like this, plugging in a concrete input value makes the intent much easier to follow
++sshift;
ssize <<= 1;//ssize ends up equal to 2 raised to the power sshift
}
this.segmentShift = 32 - sshift;//shifting an int hash right by segmentShift keeps only its sshift high-order bits
this.segmentMask = ssize - 1;//low-order mask: ssize is a power of two, so all the low bits of segmentMask are 1
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
int c = initialCapacity / ssize;
if (c * ssize < initialCapacity)
++c;
int cap = MIN_SEGMENT_TABLE_CAPACITY;
while (cap < c)//cap is rounded up to a power of two; it becomes the capacity of each segment's table
cap <<= 1;
// create segments and segments[0]
Segment<K,V> s0 =
new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
(HashEntry<K,V>[])new HashEntry[cap]);
Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];//create the segment array
UNSAFE.putOrderedObject(ss, SBASE, s0); // use Unsafe to publish s0 at offset SBASE, i.e. as segments[0]
this.segments = ss;
}
public ConcurrentHashMap(int initialCapacity, float loadFactor) {
this(initialCapacity, loadFactor, DEFAULT_CONCURRENCY_LEVEL);
}
public ConcurrentHashMap(int initialCapacity) {
this(initialCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
}
public ConcurrentHashMap() {
this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
}
public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
DEFAULT_INITIAL_CAPACITY),
DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
putAll(m);
}
The while loop in the main constructor of Code Listing 2 exists mainly to compute segmentShift and segmentMask. Two worked examples of the calculation are shown below:
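The snippet below simply replays the constructor's loop for two concrete concurrencyLevel values and prints the results; it is a standalone illustration I added, not JDK code.

public class SegmentShiftMaskDemo {
    public static void main(String[] args) {
        show(16); // the default concurrency level
        show(17); // a value that is not a power of two gets rounded up
    }
    static void show(int concurrencyLevel) {
        int sshift = 0;
        int ssize = 1;
        while (ssize < concurrencyLevel) { // same loop as in the constructor
            ++sshift;
            ssize <<= 1;
        }
        // concurrencyLevel=16 -> ssize=16, sshift=4, segmentShift=28, segmentMask=15
        // concurrencyLevel=17 -> ssize=32, sshift=5, segmentShift=27, segmentMask=31
        System.out.println("concurrencyLevel=" + concurrencyLevel
                + " ssize=" + ssize
                + " segmentShift=" + (32 - sshift)
                + " segmentMask=" + (ssize - 1));
    }
}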
From these two runs we can see that segmentShift and segmentMask are entirely determined by concurrencyLevel; what each field means has already been covered in the code comments, so we won't repeat it here.
We create a ConcurrentHashMap in order to use it, so the natural next stop is the put method, shown in Code Listing 3.
Code Listing 3
public V put(K key, V value) {
Segment<K,V> s;
if (value == null)
throw new NullPointerException();//ConcurrentHashMap accepts neither null values nor null keys
int hash = hash(key);//compute the hash (a null key fails here with a NullPointerException)
int j = (hash >>> segmentShift) & segmentMask;//segment index: (hash >>> segmentShift) keeps the high-order bits of the hash, and ANDing them with segmentMask yields the index into segments
if ((s = (Segment<K,V>)UNSAFE.getObject // nonvolatile; recheck
(segments, (j << SSHIFT) + SBASE)) == null) //read the object at memory offset (j << SSHIFT) + SBASE; if it is null, create the segment
s = ensureSegment(j);
return s.put(key, hash, value, false);//the actual insertion is delegated to the Segment
}
private int hash(Object k) {//re-hashes the key's original hashCode to spread the bits and reduce collisions
int h = hashSeed;
if ((0 != h) && (k instanceof String)) {
return sun.misc.Hashing.stringHash32((String) k);
}
h ^= k.hashCode();
// Spread bits to regularize both segment and index locations,
// using variant of single-word Wang/Jenkins hash.
h += (h << 15) ^ 0xffffcd7d;
h ^= (h >>> 10);
h += (h << 3);
h ^= (h >>> 6);
h += (h << 2) + (h << 14);
return h ^ (h >>> 16);
}
private Segment<K,V> ensureSegment(int k) {
final Segment<K,V>[] ss = this.segments;
long u = (k << SSHIFT) + SBASE; //raw offset of segments[k] within the array
Segment<K,V> seg;
if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {//nothing is installed at that offset yet, so build a segment using ss[0] as the prototype
Segment<K,V> proto = ss[0]; // use segment 0 as prototype
int cap = proto.table.length;//copy the table capacity
float lf = proto.loadFactor;//copy the load factor
int threshold = (int)(cap * lf);//resize threshold
HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
== null) { // recheck whether it is still null; another thread may have installed a segment in the meantime
Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);//create the Segment object
while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
== null) {//keep checking whether the slot at offset u is still null
if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))//if the CAS succeeds, our segment has been installed, so break out of the loop
break;
}
}
}
return seg;//either the segment created in this call or the one another thread installed at offset u
}
// Unsafe mechanics
private static final sun.misc.Unsafe UNSAFE;
private static final long SBASE;
private static final int SSHIFT;//log2 of the per-element size of a Segment[] array
private static final long TBASE;
private static final int TSHIFT;//log2 of the per-element size of a HashEntry[] array
private static final long HASHSEED_OFFSET;
private static final long SEGSHIFT_OFFSET;
private static final long SEGMASK_OFFSET;
private static final long SEGMENTS_OFFSET;
static {
int ss, ts;
try {
UNSAFE = sun.misc.Unsafe.getUnsafe();
Class tc = HashEntry[].class;
Class sc = Segment[].class;
TBASE = UNSAFE.arrayBaseOffset(tc);//offset of the first element of a HashEntry[] array
SBASE = UNSAFE.arrayBaseOffset(sc);//offset of the first element of a Segment[] array
ts = UNSAFE.arrayIndexScale(tc);//size in bytes of one HashEntry[] slot
ss = UNSAFE.arrayIndexScale(sc);//size in bytes of one Segment[] slot
HASHSEED_OFFSET = UNSAFE.objectFieldOffset(
ConcurrentHashMap.class.getDeclaredField("hashSeed"));//field offset of hashSeed
SEGSHIFT_OFFSET = UNSAFE.objectFieldOffset(
ConcurrentHashMap.class.getDeclaredField("segmentShift"));//field offset of segmentShift
SEGMASK_OFFSET = UNSAFE.objectFieldOffset(
ConcurrentHashMap.class.getDeclaredField("segmentMask"));//field offset of segmentMask
SEGMENTS_OFFSET = UNSAFE.objectFieldOffset(
ConcurrentHashMap.class.getDeclaredField("segments"));//field offset of the segments reference
} catch (Exception e) {
throw new Error(e);
}
if ((ss & (ss-1)) != 0 || (ts & (ts-1)) != 0)//both element sizes must be powers of two
throw new Error("data type scale not a power of two");
SSHIFT = 31 - Integer.numberOfLeadingZeros(ss);//numberOfLeadingZeros counts the zero bits above the highest one bit of an int, so SSHIFT and TSHIFT are log2 of the respective element sizes; they turn an array index into a byte offset via a shift
TSHIFT = 31 - Integer.numberOfLeadingZeros(ts);
}
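To make the index arithmetic in put concrete: with the default concurrencyLevel of 16 the constructor produces segmentShift = 28 and segmentMask = 15, so the top four bits of the re-hashed key select the segment, while the low bits select the bucket inside that segment's table. The small sketch below illustrates the split with a made-up hash value; it is mine, not JDK code.

public class IndexSplitDemo {
    public static void main(String[] args) {
        int segmentShift = 28;      // values produced by the default constructor
        int segmentMask = 15;
        int tableLength = 2;        // MIN_SEGMENT_TABLE_CAPACITY, the initial per-segment table size
        int hash = 0xA1B2C3D4;      // a made-up, already re-hashed value

        int segmentIndex = (hash >>> segmentShift) & segmentMask; // high bits choose the segment
        int tableIndex = (tableLength - 1) & hash;                // low bits choose the bucket
        System.out.println("segment=" + segmentIndex + ", bucket=" + tableIndex);
        // 0xA1B2C3D4 >>> 28 == 0xA == 10, so the entry goes to segments[10];
        // the lowest bit of the hash is 0, so inside that segment it goes to table[0]
    }
}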
Code Listing 3 tells us that ConcurrentHashMap hands the real put work to a Segment, so let's keep digging and look at the Segment class in Code Listing 4.
Code Listing 4
static final class Segment<K,V> extends ReentrantLock implements Serializable {//a Segment extends ReentrantLock, so every segment is itself a reentrant lock
private static final long serialVersionUID = 2249069246763182397L;
static final int MAX_SCAN_RETRIES =
Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;
transient volatile HashEntry<K,V>[] table;//this segment's bucket array
transient int count;//number of key/value mappings in this segment (not the table length)
transient int modCount;//number of structural modifications made to this segment
transient int threshold;//resize threshold for this segment's table
final float loadFactor;//load factor
Segment(float lf, int threshold, HashEntry<K,V>[] tab) {
this.loadFactor = lf;
this.threshold = threshold;
this.table = tab;
}
final V put(K key, int hash, V value, boolean onlyIfAbsent) {//the per-segment put
HashEntry<K,V> node = tryLock() ? null :
scanAndLockForPut(key, hash, value);//make sure the segment lock is held before continuing; node is non-null only if scanAndLockForPut speculatively created a new entry while spinning (i.e. it did not find the key), otherwise it is null
V oldValue;
try {
HashEntry<K,V>[] tab = table;
int index = (tab.length - 1) & hash;//bucket index within this segment's table
HashEntry<K,V> first = entryAt(tab, index);
for (HashEntry<K,V> e = first;;) {
if (e != null) {//walk the linked list; if the key is never found we eventually hit e == null and drop into the else branch below
K k;
if ((k = e.key) == key ||
(e.hash == hash && key.equals(k))) {
oldValue = e.value;
if (!onlyIfAbsent) {
e.value = value;
++modCount;
}
break;
}
e = e.next;
}
else {
if (node != null)
node.setNext(first);//head insertion: the new node points at the old first node
else
node = new HashEntry<K,V>(hash, key, value, first);//head insertion
int c = count + 1;
if (c > threshold && tab.length < MAXIMUM_CAPACITY)//grow this segment's table if the threshold is exceeded
rehash(node);
else
setEntryAt(tab, index, node);
++modCount;
count = c;
oldValue = null;
break;
}
}
} finally {
unlock();
}
return oldValue;
}
@SuppressWarnings("unchecked")
private void rehash(HashEntry<K,V> node) {//this method takes some effort to understand; see the notes inside
HashEntry<K,V>[] oldTable = table;
int oldCapacity = oldTable.length;
int newCapacity = oldCapacity << 1;//the table grows by doubling
threshold = (int)(newCapacity * loadFactor);//new resize threshold
HashEntry<K,V>[] newTable =
(HashEntry<K,V>[]) new HashEntry[newCapacity];
int sizeMask = newCapacity - 1;
for (int i = 0; i < oldCapacity ; i++) {
HashEntry<K,V> e = oldTable[i];//walk the table array, and for each slot walk its singly linked list
if (e != null) {
HashEntry<K,V> next = e.next;
int idx = e.hash & sizeMask;
if (next == null) // Single node on list
newTable[idx] = e;
else { // Reuse consecutive sequence at same slot
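// Explanatory note (added for this article): the loop below searches for "lastRun",
// the first node of the longest tail of the list whose entries all map to the same
// index in the new table. That entire tail can be reused directly in the new table
// (its next pointers are still correct), so only the nodes before lastRun have to be
// cloned further down. Because the capacity doubles, each entry can only move to one
// of two slots, which makes such a reusable tail common.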
HashEntry<K,V> lastRun = e;
int lastIdx = idx;
for (HashEntry<K,V> last = next; last != null; last = last.next) {//walk the rest of the list
int k = last.hash & sizeMask;
if (k != lastIdx) {
lastIdx = k;
lastRun = last;
}
}
newTable[lastIdx] = lastRun;
// Clone remaining nodes
for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
V v = p.value;
int h = p.hash;
int k = h & sizeMask;
HashEntry<K,V> n = newTable[k];
newTable[k] = new HashEntry<K,V>(h, p.key, v, n);
}
}
}
}
int nodeIndex = node.hash & sizeMask; // add the new node
node.setNext(newTable[nodeIndex]);
newTable[nodeIndex] = node;
table = newTable;
}
private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
HashEntry<K,V> first = entryForHash(this, hash);//locate the bucket for this hash in the segment's table
HashEntry<K,V> e = first;
HashEntry<K,V> node = null;
int retries = -1; // negative while still locating the node; set to 0 once the node holding the key is found or a new node has been speculatively created
while (!tryLock()) {//spin on tryLock; as soon as this thread acquires the lock the loop exits
HashEntry<K,V> f; // to recheck first below
if (retries < 0) {//still scanning the list for the key
if (e == null) {
if (node == null) // speculatively create node
node = new HashEntry<K,V>(hash, key, value, null);
retries = 0;
}
else if (key.equals(e.key))
retries = 0;
else
e = e.next;
}
else if (++retries > MAX_SCAN_RETRIES) {
lock();
break;
}
else if ((retries & 1) == 0 &&
(f = entryForHash(this, hash)) != first) {
e = first = f; // re-traverse if entry changed
retries = -1;
}
}
return node;
}
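// Note (added for this article): scanAndLockForPut above spins with tryLock while walking the
// bucket's list; the walk warms the cache and lets it speculatively create the new HashEntry
// when the key is absent, so the time spent waiting for the lock is not wasted. After
// MAX_SCAN_RETRIES failed attempts it stops spinning and blocks in lock().
// scanAndLock below applies the same spin-then-block idea for remove/replace, except that it
// never needs to create a node, so it returns nothing.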
private void scanAndLock(Object key, int hash) {
// similar to but simpler than scanAndLockForPut
HashEntry<K,V> first = entryForHash(this, hash);
HashEntry<K,V> e = first;
int retries = -1;
while (!tryLock()) {
HashEntry<K,V> f;
if (retries < 0) {
if (e == null || key.equals(e.key))
retries = 0;
else
e = e.next;
}
else if (++retries > MAX_SCAN_RETRIES) {
lock();
break;
}
else if ((retries & 1) == 0 &&
(f = entryForHash(this, hash)) != first) {
e = first = f;
retries = -1;
}
}
}
final V remove(Object key, int hash, Object value) {
if (!tryLock())
scanAndLock(key, hash);
V oldValue = null;
try {
HashEntry<K,V>[] tab = table;
int index = (tab.length - 1) & hash;
HashEntry<K,V> e = entryAt(tab, index);
HashEntry<K,V> pred = null;
while (e != null) {
K k;
HashEntry<K,V> next = e.next;
if ((k = e.key) == key ||
(e.hash == hash && key.equals(k))) {
V v = e.value;
if (value == null || value == v || value.equals(v)) {
if (pred == null)
setEntryAt(tab, index, next);
else
pred.setNext(next);
++modCount;
--count;
oldValue = v;
}
break;
}
pred = e;
e = next;
}
} finally {
unlock();
}
return oldValue;
}
final boolean replace(K key, int hash, V oldValue, V newValue) {
if (!tryLock())
scanAndLock(key, hash);
boolean replaced = false;
try {
HashEntry<K,V> e;
for (e = entryForHash(this, hash); e != null; e = e.next) {
K k;
if ((k = e.key) == key ||
(e.hash == hash && key.equals(k))) {
if (oldValue.equals(e.value)) {
e.value = newValue;
++modCount;
replaced = true;
}
break;
}
}
} finally {
unlock();
}
return replaced;
}
final V replace(K key, int hash, V value) {
if (!tryLock())
scanAndLock(key, hash);
V oldValue = null;
try {
HashEntry<K,V> e;
for (e = entryForHash(this, hash); e != null; e = e.next) {
K k;
if ((k = e.key) == key ||
(e.hash == hash && key.equals(k))) {
oldValue = e.value;
e.value = value;
++modCount;
break;
}
}
} finally {
unlock();
}
return oldValue;
}
final void clear() {
lock();
try {
HashEntry<K,V>[] tab = table;
for (int i = 0; i < tab.length ; i++)
setEntryAt(tab, i, null);
++modCount;
count = 0;
} finally {
unlock();
}
}
}
Code Listing 4 above is essentially the Segment class. As we said earlier, ConcurrentHashMap's put is carried out by Segment's put. Attentive readers will have noticed that Segment extends ReentrantLock, so inside the class it can call lock and unlock directly for synchronization. From the code we can see that put is thread-safe, and the other mutating methods of Segment (remove, replace, clear) take the same lock. If you followed Code Listings 2, 3 and 4 carefully, you will also have noticed that the length of the segments array is fixed by the concurrencyLevel given to the constructor and is never grown when data is stored; what grows is each Segment's table array.
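One usage-level consequence worth spelling out (my own aside, not part of the source walk-through): each single operation is atomic because the owning segment is locked for its duration, but a compound check-then-put written as two separate calls is not. For that, ConcurrentHashMap exposes atomic variants such as putIfAbsent and replace, which Code Listing 4 implements under the same segment lock. A small sketch:

import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counters = new ConcurrentHashMap<>();

        // NOT atomic as a whole: another thread may insert "hits" between the get and the put.
        if (counters.get("hits") == null) {
            counters.put("hits", 1);
        }

        // Atomic: the segment lock is held for the whole operation (put with onlyIfAbsent == true).
        counters.putIfAbsent("hits", 1);

        // replace(key, oldValue, newValue) is the segment-locked compare-and-set shown in Code Listing 4.
        counters.replace("hits", 1, 2);

        System.out.println(counters);
    }
}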
Now that we understand how data is stored, let's see how it is retrieved; see Code Listing 5:
Code Listing 5
public V get(Object key) {
Segment<K,V> s; // manually integrate access methods to reduce overhead
HashEntry<K,V>[] tab;
int h = hash(key);
long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;//compute the offset of the segment selected by the high-order bits of the hash
if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
(tab = s.table) != null) {//volatile read (no CAS, no lock) of the Segment at that offset, then take a reference to its table
for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
(tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);//locate the singly linked list at the computed table index and iterate over it
e != null; e = e.next) {
K k;
if ((k = e.key) == key || (e.hash == h && key.equals(k)))
return e.value;
}
}
return null;
}
There is not much to add about Code Listing 5: it locates the segment and the bucket, walks the singly linked list, and returns the matching value, or null if the key is not found. Note that get takes no lock at all; it relies on Unsafe.getObjectVolatile and the volatile table field for visibility, which is why reads can run fully concurrently with writes. If you understood put, get is easy to follow.
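As a quick usage-level illustration (my own sketch, not from the article's test code): because get never blocks, a reader thread can poll the map while a writer is inserting, without any external synchronization; the reader may simply not yet see entries the writer has not finished putting.

import java.util.concurrent.ConcurrentHashMap;

public class ReaderWriterDemo {
    public static void main(String[] args) throws InterruptedException {
        final ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        Thread writer = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 1000; i++) {
                    map.put(i, "v" + i); // each write locks only one segment at a time
                }
            }
        });
        Thread reader = new Thread(new Runnable() {
            public void run() {
                int hits = 0;
                for (int i = 0; i < 1000; i++) {
                    if (map.get(i) != null) { // lock-free read; may or may not see the value yet
                        hits++;
                    }
                }
                System.out.println("reader saw " + hits + " entries so far");
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
        System.out.println("final size = " + map.size());
    }
}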
Next, let's see how ConcurrentHashMap counts how many key/value pairs it currently holds; see Code Listing 6:
Code Listing 6
public int size() {
// Try a few times to get accurate count. On failure due to
// continuous async changes in table, resort to locking.
final Segment<K,V>[] segments = this.segments;
int size;
boolean overflow; // true if the total exceeds Integer.MAX_VALUE
long sum; // sum of the segments' modCounts for this pass
long last = 0L; // the sum of modCounts seen on the previous pass
int retries = -1;
try {
for (;;) {
if (retries++ == RETRIES_BEFORE_LOCK) {// only after the unlocked passes have been retried RETRIES_BEFORE_LOCK times does size() lock every element of the segments array
for (int j = 0; j < segments.length; ++j)
ensureSegment(j).lock(); // force creation
}
sum = 0L;
size = 0;
overflow = false;
for (int j = 0; j < segments.length; ++j) {
Segment<K,V> seg = segmentAt(segments, j);
if (seg != null) {
sum += seg.modCount;
int c = seg.count;
if (c < 0 || (size += c) < 0)//if the running total goes negative the int has overflowed, so record overflow = true
overflow = true;
}
}
if (sum == last)//the modCounts did not change between two passes, i.e. no other thread altered the structure while we were counting, so exit the loop
break;
last = sum;
}
} finally {
if (retries > RETRIES_BEFORE_LOCK) {//unlock everything we locked
for (int j = 0; j < segments.length; ++j)
segmentAt(segments, j).unlock();
}
}
return overflow ? Integer.MAX_VALUE : size;
}
The size method above first loops without taking any locks: it walks the segments array, accumulating each segment's count and summing each segment's modCount. If two consecutive passes see the same total modCount (sum == last), no other thread modified the map while we were counting, and the accumulated size is returned. Once the number of passes exceeds RETRIES_BEFORE_LOCK, size falls back to locking every segment, computes the total, and then unlocks them all in turn. Note that the locking pass forcibly creates every Segment via ensureSegment; otherwise another thread could create a missing segment and run put or remove operations in it while we are counting.
That's all for this article. If anything here is wrong, please point it out. If you like my articles, a follow or a like gives me the motivation to write more. Please credit the source when reposting: blog.csdn.net/android_jia…