If a `get` arrives while a resize is in progress, the `ForwardingNode` forwards the lookup to the new bucket array, so `get` calls are never blocked; the `ForwardingNode` saves the post-resize bucket array `nextTable` at construction time.
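The delegation idea can be sketched on its own. This is a hedged, much-simplified illustration — the real `ForwardingNode` extends `Node` and walks bucket chains, while here plain `Map`s stand in for the two bucket arrays, and the `Forwarder` class name is made up:

```java
import java.util.HashMap;
import java.util.Map;

public class ForwardingSketch {
    // Stand-in for ForwardingNode: keeps a reference to the new table
    // and redirects lookups there instead of blocking the reader.
    static class Forwarder {
        final Map<String, String> nextTable; // saved at construction time
        Forwarder(Map<String, String> nextTable) { this.nextTable = nextTable; }
        String find(String key) { return nextTable.get(key); }
    }

    public static void main(String[] args) {
        Map<String, String> nextTable = new HashMap<>();
        nextTable.put("k", "v"); // entry already migrated during the resize
        Forwarder fwd = new Forwarder(nextTable);
        System.out.println(fwd.find("k")); // v
    }
}
```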
UNSAFE.compareAndSwap***
This is the primitive that the Java 8 `ConcurrentHashMap` uses to implement CAS. Taking the `int` variant as an example, it is declared as follows:
```java
/**
 * Atomically update Java variable to x if it is currently
 * holding expected.
 * @return true if successful
 */
public final native boolean compareAndSwapInt(Object o, long offset,
                                              int expected, int x);
```
Its semantics: if the value at offset `offset` from the start address of object `o` equals `expected`, set it to `x` and return `true` to signal that the update succeeded; otherwise return `false`, meaning the CAS failed.
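The same compare-and-swap semantics are exposed through the public `java.util.concurrent.atomic` API; a quick illustrative sketch using `AtomicInteger` (not part of `ConcurrentHashMap` itself):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(10);
        // succeeds: the current value equals the expected value 10
        boolean ok = v.compareAndSet(10, 42);
        // fails: the current value is now 42, not 10
        boolean stale = v.compareAndSet(10, 7);
        System.out.println(ok + " " + stale + " " + v.get()); // true false 42
    }
}
```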
Initialization
```java
public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0) // validate arguments
        throw new IllegalArgumentException();
    if (initialCapacity < concurrencyLevel)   // Use at least as many bins
        initialCapacity = concurrencyLevel;   // as estimated threads
    long size = (long)(1.0 + (long)initialCapacity / loadFactor);
    int cap = (size >= (long)MAXIMUM_CAPACITY) ?
        MAXIMUM_CAPACITY : tableSizeFor((int)size);
    // tableSizeFor: smallest power of two >= size, covered in the JDK 1.8 HashMap write-up
    this.sizeCtl = cap;
}
```
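The capacity rounding can be tried on its own. Below is a hedged re-implementation of `tableSizeFor` (the same bit-smearing trick as the JDK's, reproduced here only to illustrate the capacity computation in the constructor):

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // smallest power of two >= c: smear the highest set bit of c-1
    // into all lower positions, then add one
    static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        // e.g. new ConcurrentHashMap<>(22, 0.75f, 8):
        // size = (long)(1.0 + 22 / 0.75f) = 30, so cap = 32
        long size = (long) (1.0 + (long) 22 / 0.75f);
        System.out.println(tableSizeFor((int) size)); // 32
    }
}
```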
```java
final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        if (tab == null || (n = tab.length) == 0)
            // folded
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // folded
        }
        else if ((fh = f.hash) == MOVED)
            // folded
        else {
            V oldVal = null;
            synchronized (f) {
                // note this inconspicuous check
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        // a node with fh >= 0 heads a list; otherwise it is a tree node or a ForwardingNode
                        binCount = 1;
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                // the key is already in the list: update in place
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                // reaching here means the key is absent: append to the tail
                                pred.next = new Node<K,V>(hash, key, value, null);
                                break;
                            }
                        }
                    }
                    // red-black tree handling omitted
                }
            }
        }
    }
    // put succeeded: increment the map's element count
    addCount(1L, binCount);
    return null;
}
```
Pay particular attention to an easily overlooked condition in this code (flagged by a comment in the source above): `tabAt(tab, i) == f`. Its purpose is to handle the race between a thread calling `put` and a resizing thread. Because `synchronized` is a blocking lock, if a `put` thread and a resizing thread happen to operate on the same bucket at the same time and the `put` thread loses the race for the lock, then by the time it reacquires the lock the head of that bucket may have been replaced by a `ForwardingNode`, in which case `tabAt(tab, i) != f`.
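This "lock, then revalidate" pattern can be shown in isolation. The following is a hedged, simplified sketch — a plain `Object[]` stands in for the bucket array, and the class and method names are made up for illustration:

```java
public class RecheckDemo {
    static final Object MOVED = new Object(); // stand-in for a ForwardingNode
    final Object[] table = new Object[16];

    // returns true only if it actually inserted; false means "retry",
    // just as putVal loops again when tabAt(tab, i) != f
    boolean tryInsert(int i, Object head, Object value) {
        synchronized (head) {
            if (table[i] != head)   // head changed while we waited for the lock,
                return false;       // e.g. a resizer replaced it with MOVED
            table[i] = value;       // safe: we revalidated under the lock
            return true;
        }
    }

    public static void main(String[] args) {
        RecheckDemo d = new RecheckDemo();
        Object head = new Object();
        d.table[3] = head;
        System.out.println(d.tryInsert(3, head, "v"));   // true: head unchanged
        d.table[5] = MOVED;                              // simulate a resize
        System.out.println(d.tryInsert(5, head, "v"));   // false: stale head
    }
}
```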
Multi-threaded dynamic resizing
```java
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
    int n = tab.length, stride;
    if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
        stride = MIN_TRANSFER_STRIDE; // subdivide range
    if (nextTab == null) {            // initialize the new bucket array
        try {
            @SuppressWarnings("unchecked")
            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
            nextTab = nt;
        } catch (Throwable ex) {      // try to cope with OOME
            sizeCtl = Integer.MAX_VALUE;
            return;
        }
        nextTable = nextTab;
        transferIndex = n;
    }
    int nextn = nextTab.length;
    ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
    boolean advance = true;
    boolean finishing = false; // to ensure sweep before committing nextTab
    for (int i = 0, bound = 0;;) {
        Node<K,V> f; int fh;
        while (advance) {
            int nextIndex, nextBound;
            if (--i >= bound || finishing)
                advance = false;
            else if ((nextIndex = transferIndex) <= 0) {
                i = -1;
                advance = false;
            }
            else if (U.compareAndSwapInt
                     (this, TRANSFERINDEX, nextIndex,
                      nextBound = (nextIndex > stride ?
                                   nextIndex - stride : 0))) {
                bound = nextBound;
                i = nextIndex - 1;
                advance = false;
            }
        }
        if (i < 0 || i >= n || i + n >= nextn) {
            int sc;
            if (finishing) {
                nextTable = null;
                table = nextTab;
                sizeCtl = (n << 1) - (n >>> 1);
                return;
            }
            if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                // check whether this is the last resizing thread
                if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                    return;
                finishing = advance = true;
                i = n; // recheck before commit
            }
        }
        else if ((f = tabAt(tab, i)) == null)
            advance = casTabAt(tab, i, null, fwd);
        else if ((fh = f.hash) == MOVED)
            // only the last resizing thread can reach this branch
            advance = true; // already processed
        else {
            // the copying is similar to HashMap's and is not repeated here
            synchronized (f) {
                // folded
            }
        }
    }
}
```
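The work-splitting at the top of `transfer` can be simulated independently: each participating thread CASes `transferIndex` downwards by `stride` to claim a range of buckets `[nextBound, nextIndex - 1]`. A minimal single-threaded sketch of that claiming loop (an `AtomicInteger` stands in for the `Unsafe` CAS on the `TRANSFERINDEX` field; names are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class StrideDemo {
    // each successful CAS claims the bucket range [nextBound, nextIndex - 1],
    // mirroring the transferIndex loop in transfer()
    static List<int[]> claimAll(int n, int stride) {
        AtomicInteger transferIndex = new AtomicInteger(n);
        List<int[]> claimed = new ArrayList<>();
        int nextIndex;
        while ((nextIndex = transferIndex.get()) > 0) {
            int nextBound = (nextIndex > stride) ? nextIndex - stride : 0;
            if (transferIndex.compareAndSet(nextIndex, nextBound))
                claimed.add(new int[]{nextBound, nextIndex - 1});
        }
        return claimed;
    }

    public static void main(String[] args) {
        // ranges are handed out from the top of the table downwards:
        // [48, 63], [32, 47], [16, 31], [0, 15]
        for (int[] r : claimAll(64, 16))
            System.out.println("claimed [" + r[0] + ", " + r[1] + "]");
    }
}
```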
Yes, this is a valid point; thanks. The post-scan was needed in a previous version, and could be removed. It does not trigger often enough to matter though, so is for now another minor tweak that might be included next time CHM is updated.
Although Doug hedges in the email ("could be", "not often enough", and so on), he confirms that the final recheck by the last resizing thread is unnecessary. The copying process itself is similar to HashMap's; interested readers can refer to the earlier article 《高端的面试从来不会在HashMap的红黑树上纠缠太多》.