Java Container Classes (2): Map

Source code analysis of the main classes, plus the container class relationship diagram

Posted by zhidaliao on October 22, 2016

This document is a work in progress and will be updated from time to time.


Map

A key-value mapping table; if a key that already exists is inserted again, its old value is overwritten.

The interface defines add/remove/update, traversal, and containment-check operations, and also defines the nested Entry interface:

public interface Map<K,V> {

    int size();

    boolean isEmpty();

    boolean containsKey(Object key);

    boolean containsValue(Object value);

    V get(Object key);

    V put(K key, V value);

    V remove(Object key);

    void putAll(Map<? extends K, ? extends V> m);

    void clear();

    Set<K> keySet();

    Collection<V> values();

    Set<Map.Entry<K, V>> entrySet();

    interface Entry<K,V> {
        
        K getKey();

        V getValue();

        V setValue(V value);

        boolean equals(Object o);

        int hashCode();
    }
    
    boolean equals(Object o);
    
    int hashCode();
}

SortedMap

Similar to SortedSet, it provides a richer set of range-view operations

public interface SortedMap<K,V> extends Map<K,V> {
    
    Comparator<? super K> comparator();

    SortedMap<K,V> subMap(K fromKey, K toKey);
    
    SortedMap<K,V> headMap(K toKey);
    
    SortedMap<K,V> tailMap(K fromKey);
    
    K firstKey();
    
    K lastKey();
    
    Set<K> keySet();
    
    Collection<V> values();
    
    Set<Map.Entry<K, V>> entrySet();
}

NavigableMap

Extends SortedMap and provides a rich set of navigation methods for retrieving entries relative to a given key

public interface NavigableMap<K,V> extends SortedMap<K,V> {
 
    Map.Entry<K,V> lowerEntry(K key);

    K lowerKey(K key);

    Map.Entry<K,V> floorEntry(K key);

    K floorKey(K key);

    Map.Entry<K,V> ceilingEntry(K key);

    K ceilingKey(K key);

    Map.Entry<K,V> higherEntry(K key);

    K higherKey(K key);

    Map.Entry<K,V> firstEntry();

    Map.Entry<K,V> lastEntry();

    ...
}

ConcurrentMap

Extends the Map interface and adds four methods with atomic semantics.

public interface ConcurrentMap<K, V> extends Map<K, V> {

    // Associates the key with the value only if the key is not already present.
    V putIfAbsent(K key, V value);

    // Removes the entry only if the key is currently mapped to the given value.
    boolean remove(Object key, Object value);

    // Replaces the value only if the key is currently mapped to oldValue.
    boolean replace(K key, V oldValue, V newValue);

    // Replaces the value only if the key is currently mapped to some value.
    V replace(K key, V value);
}
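A minimal usage sketch of these four operations, using ConcurrentHashMap (analysed later in this article) as the implementation; the key names are arbitrary:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> counters = new ConcurrentHashMap<>();

        // putIfAbsent only inserts when the key is missing; it returns the
        // previous value, or null if there was none.
        counters.putIfAbsent("requests", 0);
        counters.putIfAbsent("requests", 100);  // no effect, key already present

        // replace(key, oldValue, newValue) succeeds only if the current value
        // equals oldValue, i.e. a compare-and-set on the mapping.
        boolean swapped = counters.replace("requests", 0, 1);

        // remove(key, value) removes only if the key is still mapped to value.
        boolean removed = counters.remove("requests", 999); // false, value is now 1

        System.out.println(counters.get("requests") + " " + swapped + " " + removed); // 1 true false
    }
}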

ConcurrentHashMap source code analysis

ConcurrentNavigableMap

public interface ConcurrentNavigableMap<K,V> extends ConcurrentMap<K,V>, NavigableMap<K,V>
{
    
    ConcurrentNavigableMap<K,V> subMap(K fromKey, boolean fromInclusive,
                                       K toKey,   boolean toInclusive);

    
    ConcurrentNavigableMap<K,V> headMap(K toKey, boolean inclusive);


    
    ConcurrentNavigableMap<K,V> tailMap(K fromKey, boolean inclusive);

    
    ConcurrentNavigableMap<K,V> subMap(K fromKey, K toKey);

    
    ConcurrentNavigableMap<K,V> headMap(K toKey);

    
    ConcurrentNavigableMap<K,V> tailMap(K fromKey);

    
    ConcurrentNavigableMap<K,V> descendingMap();

    
    public NavigableSet<K> navigableKeySet();

    
    NavigableSet<K> keySet();

    
    public NavigableSet<K> descendingKeySet();
}

AbstractMap

Provides skeletal implementations of the common operations for subclasses; the concrete put(K key, V value) and Entry.setValue(V value) are left to subclasses:

public V setValue(V value) {
    throw new UnsupportedOperationException();
}
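As a rough illustration (the SingletonMap class below is hypothetical, not JDK code), a read-only map can be built on AbstractMap by supplying only entrySet(); put and setValue keep their UnsupportedOperationException defaults:

import java.util.AbstractMap;
import java.util.Collections;
import java.util.Map;
import java.util.Set;

// A tiny read-only map holding a single pair: only entrySet() is implemented,
// everything else (get, containsKey, size, ...) falls back to AbstractMap.
class SingletonMap<K, V> extends AbstractMap<K, V> {
    private final K key;
    private final V value;

    SingletonMap(K key, V value) {
        this.key = key;
        this.value = value;
    }

    @Override
    public Set<Map.Entry<K, V>> entrySet() {
        // SimpleImmutableEntry is a public helper defined in AbstractMap itself.
        return Collections.<Map.Entry<K, V>>singleton(
                new SimpleImmutableEntry<>(key, value));
    }
}

get, containsKey, size and toString all work through AbstractMap's entrySet-based defaults, while calling put on such a map throws UnsupportedOperationException.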

HashMap

HashMap has two very important parameters: the initial capacity and the load factor; both have a major impact on its performance. The capacity is the number of buckets in the hash table (the size of the table array), and the initial capacity is the number of buckets when the table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased; it reflects how densely the table's space is used: the larger the load factor, the fuller the table, and vice versa.

For a hash table that uses separate chaining (discussed below), the average time to look up an element is O(1 + a), where a is the average chain length (i.e., the load factor). A larger load factor makes better use of space but lowers lookup efficiency; a smaller load factor keeps the table sparse and wastes more space. The default load factor is 0.75, a compromise between time and space cost that usually does not need to be changed; a construction sketch follows below.
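For example, when the expected number of entries is known, the initial capacity can be chosen from it and the load factor so that the table never needs to resize while being filled; the sizes below are only illustrative:

import java.util.HashMap;
import java.util.Map;

public class CapacityDemo {
    public static void main(String[] args) {
        int expectedEntries = 1000;
        float loadFactor = 0.75f;

        // Keep expectedEntries below capacity * loadFactor so the table is not
        // resized while it is being filled.
        int initialCapacity = (int) (expectedEntries / loadFactor) + 1;

        Map<String, String> map = new HashMap<>(initialCapacity, loadFactor);
        for (int i = 0; i < expectedEntries; i++) {
            map.put("key" + i, "value" + i);
        }
        System.out.println(map.size()); // 1000
    }
}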

Since JDK 1.8, when a bucket's chain grows longer than 8, the linked list is converted to a red-black tree.

Internally the table holds Entry objects. Each Entry has a next pointer, and its constructor takes a next Entry argument; this is how separate chaining links colliding entries together.

    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        Entry<K,V> next;
        int hash;

        Entry(int h, K k, V v, Entry<K,V> n) {
            value = v;
            next = n;
            key = k;
            hash = h;
        }

        public final K getKey() {
            return key;
        }

        public final V getValue() {
            return value;
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry e = (Map.Entry)o;
            Object k1 = getKey();
            Object k2 = e.getKey();
            if (k1 == k2 || (k1 != null && k1.equals(k2))) {
                Object v1 = getValue();
                Object v2 = e.getValue();
                if (v1 == v2 || (v1 != null && v1.equals(v2)))
                    return true;
            }
            return false;
        }

        public final int hashCode() {
            return Objects.hashCode(getKey()) ^ Objects.hashCode(getValue());
        }

        public final String toString() {
            return getKey() + "=" + getValue();
        }

        
        // Hook invoked when put(k, v) overwrites the value of an existing entry;
        // overridden by LinkedHashMap to maintain access order.
        void recordAccess(HashMap<K,V> m) {
        }

        // Hook invoked when the entry is removed from the table;
        // overridden by LinkedHashMap to unlink it from its linked list.
        void recordRemoval(HashMap<K,V> m) {
        }
    }

The source code involved in a simple map.put("key", "value") call:

A typical usage scenario:

HashMap<String,String> map = new HashMap<>();
map.put("key1", "value1");
map.put("hello", "value2");
map.put("0", "value3");
Set<String> set = map.keySet();

Iterator<String> it = set.iterator();

while(it.hasNext()){
    System.out.println(it.next());
}

output:
hello
0
key1

Note that the iteration order does not match the insertion order. The relevant fields and constants:

int threshold;

static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;

static final int MAXIMUM_CAPACITY = 1 << 30;

static final float DEFAULT_LOAD_FACTOR = 0.75f;

/* Constructors */
public HashMap(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);

    this.loadFactor = loadFactor;
    threshold = initialCapacity;
    init();
}

public HashMap() {
    this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
}

/* Adding an element */
public V put(K key, V value) {
    if (table == EMPTY_TABLE) {
        inflateTable(threshold);
    }
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key);
    int i = indexFor(hash, table.length);
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        // If an entry with the same hash exists, check whether the key is equal too; if so, overwrite the old value
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    
    modCount++;
    // Otherwise add a new entry; if the bucket index is already occupied, chain onto the linked list
    addEntry(hash, key, value, i);
    return null;
}

Lazily initializing (inflating) the Entry array:

private void inflateTable(int toSize) {
    int capacity = roundUpToPowerOf2(toSize);
    // with the default capacity of 16 and load factor 0.75, threshold = 12
    threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
    table = new Entry[capacity];
    initHashSeedAsNeeded(capacity);
}

With the default initial value, number = 16, which is less than MAXIMUM_CAPACITY, so the method returns 16 unchanged; other sizes are rounded up to the next power of two:

private static int roundUpToPowerOf2(int number) {
    return number >= MAXIMUM_CAPACITY
            ? MAXIMUM_CAPACITY
            : (number > 1) ? Integer.highestOneBit((number - 1) << 1) : 1;
}
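A standalone sketch (reusing the same expression, not JDK code) makes the rounding behaviour easy to check:

public class RoundUpDemo {
    // Same expression as roundUpToPowerOf2, minus the MAXIMUM_CAPACITY guard.
    static int roundUp(int number) {
        return (number > 1) ? Integer.highestOneBit((number - 1) << 1) : 1;
    }

    public static void main(String[] args) {
        System.out.println(roundUp(16));   // 16  : already a power of two, unchanged
        System.out.println(roundUp(17));   // 32  : rounded up to the next power of two
        System.out.println(roundUp(1000)); // 1024
    }
}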

initHashSeedAsNeeded initializes the hashSeed field. hashSeed takes part in computing a key's hash: it is XORed with the key's hashCode. It is a per-instance random value ("alternative hashing") whose main purpose is to reduce hash collisions. This mechanism was removed in JDK 1.8.

final boolean initHashSeedAsNeeded(int capacity) {
    
    boolean currentAltHashing = hashSeed != 0;
    
    boolean useAltHashing = sun.misc.VM.isBooted() &&
            (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
    
    boolean switching = currentAltHashing ^ useAltHashing;
    
    if (switching) {
        hashSeed = useAltHashing
            ? sun.misc.Hashing.randomHashSeed(this)
            : 0;
    }

    return switching;
}

putForNullKey handles a null key, which is always stored in bucket 0:

private V putForNullKey(V value) {
    // If an entry with a null key already exists, replace its value with the new one
    for (Entry<K,V> e = table[0]; e != null; e = e.next) {
        if (e.key == null) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    // Otherwise add a new entry with a null key
    modCount++;
    addEntry(0, null, value, 0);
    return null;
}

addEntry adds a new entry; when the size reaches the threshold (and the target bucket is occupied), the table is doubled in size:

void addEntry(int hash, K key, V value, int bucketIndex) {
    
    if ((size >= threshold) && (null != table[bucketIndex])) {
        resize(2 * table.length);
        hash = (null != key) ? hash(key) : 0;
        bucketIndex = indexFor(hash, table.length);
    }

    createEntry(hash, key, value, bucketIndex);
}

resize allocates a new, larger table and transfers the existing entries into it:

void resize(int newCapacity) {
    Entry[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {
        threshold = Integer.MAX_VALUE;
        return;
    }

    Entry[] newTable = new Entry[newCapacity];
    transfer(newTable, initHashSeedAsNeeded(newCapacity));
    table = newTable;
    threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
}

After resizing, the entries are transferred (and rehashed if needed) into the new table:

void transfer(Entry[] newTable, boolean rehash) {
    int newCapacity = newTable.length;
    for (Entry<K,V> e : table) {
        while(null != e) {
            Entry<K,V> next = e.next;
            if (rehash) {
                e.hash = null == e.key ? 0 : hash(e.key);
            }
            int i = indexFor(e.hash, newCapacity);
            e.next = newTable[i];
            newTable[i] = e;
            e = next;
        }
    }
}

Deriving the bucket index from the hash: because the table length is a power of two, the bitwise AND with (length - 1) is equivalent to taking the hash modulo the table length.

static int indexFor(int h, int length) {
    return h & (length-1);
}
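A quick sketch to verify the equivalence (it only holds when length is a power of two):

public class IndexForDemo {
    static int indexFor(int h, int length) {
        return h & (length - 1); // same expression as HashMap.indexFor
    }

    public static void main(String[] args) {
        int length = 16; // a power of two, as HashMap guarantees
        for (int h : new int[] {5, 21, 37, 123456789}) {
            // For a power-of-two length, (h & (length - 1)) == (h % length)
            System.out.println(indexFor(h, length) + " == " + (h % length));
        }
    }
}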

Creating a new Entry object; the new entry becomes the head of the bucket's linked list:

void createEntry(int hash, K key, V value, int bucketIndex) {
    Entry<K,V> e = table[bucketIndex];
    table[bucketIndex] = new Entry<>(hash, key, value, e);
    size++;
}

Computing the hash, JDK 1.7 version:

final int hash(Object k) {
    int h = hashSeed;
    if (0 != h && k instanceof String) {
        return sun.misc.Hashing.stringHash32((String) k);
    }

    h ^= k.hashCode();

    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}

JDK 1.8 version:

static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

Why must HashMap's table length be a power of two? Because (length - 1) then acts exactly as a low-bit mask: ANDing it with the hash zeroes out all the high bits and keeps only the low bits, which serve as the array index.

Taking the default initial length 16 as an example, 16 - 1 = 15, whose binary representation is 00000000 00000000 00001111. ANDing it with some hash value simply keeps the lowest four bits:

10100101 11000100 00100101
00000000 00000000 00001111
----------------------------------
00000000 00000000 00000101    // high bits all zeroed; only the lowest four bits remain

This is where the problem appears: even if the hash values are well distributed, using only the last few bits still causes severe collisions.

Shifting right by 16 bits is exactly half of 32 bits, so XORing the high half with the low half mixes the high and low bits of the original hash code and increases the randomness of the low bits. The mixed low bits also carry some of the high bits' characteristics, so the information in the high bits is indirectly preserved.

Java 8 evidently decided that one round of perturbation is enough; doing it four times adds little marginal benefit, so for efficiency it was reduced to a single shift-and-XOR. A small demonstration follows below.
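A small sketch to make the effect concrete: the two hash codes below (chosen for illustration) differ only in their high bits, so without the perturbation they land in the same bucket, while with it they do not:

public class SpreadDemo {
    // JDK 1.8 style perturbation: XOR the high 16 bits into the low 16 bits.
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int length = 16;
        int h1 = 0x00010001; // differ only in bits 16 and 17
        int h2 = 0x00020001;

        // Without spreading, both collapse into the same bucket.
        System.out.println((h1 & (length - 1)) + " vs " + (h2 & (length - 1))); // 1 vs 1

        // With spreading, the high bits influence the index (0 vs 3 here).
        System.out.println((spread(h1) & (length - 1)) + " vs "
                + (spread(h2) & (length - 1)));
    }
}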

Further reading: how does the hash method in the JDK's HashMap source work?

Serialization and deserialization

How HashMap implements serialization

If a class implements the writeObject and readObject methods, the serialization machinery calls those implementations; otherwise defaultWriteObject() and defaultReadObject() are used.
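As a rough illustration (the Point class below is hypothetical, not part of HashMap), providing the two hooks looks like this:

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Point implements Serializable {
    private static final long serialVersionUID = 1L;

    private int x;
    private int y;
    private transient int cachedHash; // derived data, not worth serializing

    private void writeObject(ObjectOutputStream s) throws IOException {
        s.defaultWriteObject();          // write the non-transient fields
        // extra, custom data could be written here with s.writeInt(...) etc.
    }

    private void readObject(ObjectInputStream s)
            throws IOException, ClassNotFoundException {
        s.defaultReadObject();           // read the non-transient fields back
        cachedHash = 31 * x + y;         // rebuild the transient state
    }
}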

As is well known, HashMap uses the key's hash to decide which bucket a key-value pair goes into, but the hash of the same key is not guaranteed to be identical across different JVMs.

So during deserialization, readObject calls a method named putForCreate, which in turn calls indexFor to recompute each key's bucket index; this way keys and values end up in the correct slots of the new table.

private void readObject(java.io.ObjectInputStream s)
         throws IOException, ClassNotFoundException
{
     s.defaultReadObject();
    if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
        throw new InvalidObjectException("Illegal load factor: " +
                                           loadFactor);
    }

     table = (Entry<K,V>[]) EMPTY_TABLE;

     s.readInt(); // ignored.

     int mappings = s.readInt();
    if (mappings < 0)
        throw new InvalidObjectException("Illegal mappings count: " +
                                           mappings);

    // capacity chosen by number of mappings and desired load (if >= 0.25)
    int capacity = (int) Math.min(
                mappings * Math.min(1 / loadFactor, 4.0f),
                // we have limits...
                HashMap.MAXIMUM_CAPACITY);

    // allocate the bucket array;
    if (mappings > 0) {
        inflateTable(capacity);
    } else {
        threshold = capacity;
    }

    init();  // Give subclass a chance to do its thing.

    // Read the keys and values, and put the mappings in the HashMap
    for (int i = 0; i < mappings; i++) {
        K key = (K) s.readObject();
        V value = (V) s.readObject();
        putForCreate(key, value);
    }
}

private void putForCreate(K key, V value) {

    int hash = null == key ? 0 : hash(key);
    int i = indexFor(hash, table.length);

    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash &&
            ((k = e.key) == key || (key != null && key.equals(k)))) {
            e.value = value;
            return;
        }
    }

    createEntry(hash, key, value, i);
}

Algorithms and Data Structures (11): Hash Tables

Java 8 Series: Rethinking HashMap

Java Improvement Series (25): HashTable

TreeMap

A map implemented with a red-black tree.

A typical usage scenario:

TreeMap<String,String> map = new TreeMap<>();
map.put("key1", "value1");
map.put("hello", "value2");
map.put("0", "value3");
Set<String> set = map.keySet();

Iterator<String> it = set.iterator();

while(it.hasNext()){
    System.out.println(it.next());
}


output:
0
hello
key1

Source code analysis:


public class TreeMap<K,V>  extends AbstractMap<K,V> implements NavigableMap<K,V>, Cloneable, java.io.Serializable
{ 
    private final Comparator<? super K> comparator;

    private transient Entry<K,V> root = null;
 
    private transient int size = 0;
 
    private transient int modCount = 0;
 
    public TreeMap() {
        comparator = null;
    }
 
    public TreeMap(Comparator<? super K> comparator) {
        this.comparator = comparator;
    }
    transient KeySet<K> navigableKeySetView = null;

    public final NavigableSet<K> navigableKeySet() {
        KeySet<K> nksv = navigableKeySetView;
        return (nksv != null) ? nksv :
          (navigableKeySetView = new TreeMap.KeySet(this));
    }

    public boolean containsKey(Object key) {
        return getEntry(key) != null;
    }

    public boolean containsValue(Object value) {
        for (Entry<K,V> e = getFirstEntry(); e != null; e = successor(e))
            if (valEquals(value, e.value))
                return true;
        return false;
    }
  
    public V put(K key, V value) {
        Entry<K,V> t = root;
        if (t == null) {
            compare(key, key); // type (and possibly null) check

            root = new Entry<>(key, value, null);
            size = 1;
            modCount++;
            return null;
        }
        int cmp;
        Entry<K,V> parent;
        // split comparator and comparable paths
        Comparator<? super K> cpr = comparator;
        if (cpr != null) {
            do {
                parent = t;
                cmp = cpr.compare(key, t.key);
                if (cmp < 0)
                    t = t.left;
                else if (cmp > 0)
                    t = t.right;
                else
                    return t.setValue(value);
            } while (t != null);
        }
        else {
            if (key == null)
                throw new NullPointerException();
            Comparable<? super K> k = (Comparable<? super K>) key;
            do {
                parent = t;
                cmp = k.compareTo(t.key);
                if (cmp < 0)
                    t = t.left;
                else if (cmp > 0)
                    t = t.right;
                else
                    return t.setValue(value);
            } while (t != null);
        }
        Entry<K,V> e = new Entry<>(key, value, parent);
        if (cmp < 0)
            parent.left = e;
        else
            parent.right = e;
        fixAfterInsertion(e);
        size++;
        modCount++;
        return null;
    }

    final Entry<K,V> getFirstEntry() {
        Entry<K,V> p = root;
        if (p != null)
            while (p.left != null)
                p = p.left;
        return p;
    }

    ...


    static final class KeySet<E> extends AbstractSet<E> implements NavigableSet<E> {
        private final NavigableMap<E, Object> m;
        KeySet(NavigableMap<E,Object> map) { m = map; }

        public Iterator<E> iterator() {
            if (m instanceof TreeMap)
                return ((TreeMap<E,Object>)m).keyIterator();
            else
                return (Iterator<E>)(((TreeMap.NavigableSubMap)m).keyIterator());
        }
        ...
    }


    Iterator<K> keyIterator() {
        return new KeyIterator(getFirstEntry());
    }
}

Entry 对象

// Red-black tree node colors
private static final boolean RED   = false;
private static final boolean BLACK = true;

static final class Entry<K,V> implements Map.Entry<K,V> {
    
    K key;
    V value;
    Entry<K,V> left = null;
    Entry<K,V> right = null;
    Entry<K,V> parent;
    boolean color = BLACK;
    
    Entry(K key, V value, Entry<K,V> parent) {
        this.key = key;
        this.value = value;
        this.parent = parent;
    }

    public K getKey() {
        return key;
    }

    public V getValue() {
        return value;
    }

    public V setValue(V value) {
        V oldValue = this.value;
        this.value = value;
        return oldValue;
    }

    public boolean equals(Object o) {
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry<?,?> e = (Map.Entry<?,?>)o;

        return valEquals(key,e.getKey()) && valEquals(value,e.getValue());
    }

    public int hashCode() {
        int keyHash = (key==null ? 0 : key.hashCode());
        int valueHash = (value==null ? 0 : value.hashCode());
        return keyHash ^ valueHash;
    }

    public String toString() {
        return key + "=" + value;
    }
    
}

The iterator starts from the first (leftmost) entry, obtained via getFirstEntry(), and then visits the remaining nodes in key order by repeatedly moving to the in-order successor.
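Roughly, that successor step follows the standard binary-search-tree algorithm; the sketch below uses a hypothetical Node class for illustration (TreeMap's internal Entry has the same left/right/parent shape):

// Hypothetical node shape, for illustration only.
class Node<K> {
    K key;
    Node<K> left, right, parent;
}

class SuccessorSketch {
    // In-order successor: if the node has a right subtree, take that subtree's
    // leftmost node; otherwise climb up until we leave a left child.
    static <K> Node<K> successor(Node<K> t) {
        if (t == null) {
            return null;
        } else if (t.right != null) {
            Node<K> p = t.right;
            while (p.left != null) {
                p = p.left;
            }
            return p;
        } else {
            Node<K> p = t.parent;
            Node<K> ch = t;
            while (p != null && ch == p.right) {
                ch = p;
                p = p.parent;
            }
            return p;
        }
    }
}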

LinkedHashMap

LinkedHashMap uses the same hash algorithm as HashMap, but it redefines the Entry stored in the table: besides the reference to the current element, each Entry also keeps references to the previous element (before) and the next element (after), so a doubly linked list is layered on top of the hash table.

In JDK 1.8, LinkedHashMap overrides newNode to link new nodes into the list, afterNodeAccess to record access order, and afterNodeInsertion to react after a new element is added (for example, evicting the oldest node to implement an LRU cache; see the sketch after the snippet below). LinkedHashMap is thus a combination of a hash table and a linked list, and the doubly linked list guarantees that iteration order matches insertion order (or access order, when so configured).

public class LinkedHashMap<K,V> extends HashMap<K,V> implements Map<K,V>
{

    
    public LinkedHashMap(int initialCapacity, float loadFactor) {
        super(initialCapacity, loadFactor);
        accessOrder = false;
    }

    private static class Entry<K,V> extends HashMap.Entry<K,V> {
        Entry<K,V> before, after;

        Entry(int hash, K key, V value, HashMap.Entry<K,V> next) {
            super(hash, key, value, next);
        }

        private void remove() {
            before.after = after;
            after.before = before;
        }

        ...
    }
}
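Building on this, a minimal LRU-cache sketch can be written against LinkedHashMap's public hooks (the accessOrder constructor flag and removeEldestEntry); the capacity of 2 below is just for demonstration:

import java.util.LinkedHashMap;
import java.util.Map;

// Keeps at most maxEntries mappings; the least recently accessed one is evicted.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruCache(int maxEntries) {
        // accessOrder = true: iteration order is access order, oldest first.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put/putAll; returning true evicts the eldest entry.
        return size() > maxEntries;
    }
}

class LruDemo {
    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");        // "a" becomes the most recently used
        cache.put("c", "3");   // evicts "b", the least recently used
        System.out.println(cache.keySet()); // [a, c]
    }
}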

In-Depth Java Collections Series: The Implementation of LinkedHashMap

WeakHashMap

public class WeakHashMap<K,V>
    extends AbstractMap<K,V>
    implements Map<K,V> {
 
    private static final int DEFAULT_INITIAL_CAPACITY = 16;
 
    private static final int MAXIMUM_CAPACITY = 1 << 30;
 
    private static final float DEFAULT_LOAD_FACTOR = 0.75f;
 
    Entry<K,V>[] table;
 
    private int size;
 
    private int threshold;
 
    private final float loadFactor;
 
    private final ReferenceQueue<Object> queue = new ReferenceQueue<>();
 
    int modCount;
 
    static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE;

    public V put(K key, V value) {
        Object k = maskNull(key);
        int h = hash(k);
        Entry<K,V>[] tab = getTable();
        int i = indexFor(h, tab.length);

        for (Entry<K,V> e = tab[i]; e != null; e = e.next) {
            if (h == e.hash && eq(k, e.get())) {
                V oldValue = e.value;
                if (value != oldValue)
                    e.value = value;
                return oldValue;
            }
        }

        modCount++;
        Entry<K,V> e = tab[i];
        tab[i] = new Entry<>(k, value, queue, h, e);
        if (++size >= threshold)
            resize(tab.length * 2);
        return null;
    }

    private static class Entry<K,V> extends WeakReference<Object> implements Map.Entry<K,V> {
        V value;
        int hash;
        Entry<K,V> next;

        Entry(Object key, V value,
              ReferenceQueue<Object> queue,
              int hash, Entry<K,V> next) {
            super(key, queue);
            this.value = value;
            this.hash  = hash;
            this.next  = next;
        }
        ...
    }

    private void expungeStaleEntries() {
        for (Object x; (x = queue.poll()) != null; ) {
            synchronized (queue) {
                @SuppressWarnings("unchecked")
                    Entry<K,V> e = (Entry<K,V>) x;
                int i = indexFor(e.hash, table.length);

                Entry<K,V> prev = table[i];
                Entry<K,V> p = prev;
                while (p != null) {
                    Entry<K,V> next = p.next;
                    if (p == e) {
                        if (prev == e)
                            table[i] = next;
                        else
                            prev.next = next;
                        // Must not null out e.next;
                        // stale entries may be in use by a HashIterator
                        e.value = null; // Help GC
                        size--;
                        break;
                    }
                    prev = p;
                    p = next;
                }
            }
        }
    }

   public int size() {
        if (size == 0)
            return 0;
        expungeStaleEntries();
        return size;
    }
}

The expungeStaleEntries method removes the entries whose keys have been collected and placed on the reference queue. It is called from getTable, size, and resize.

The Entry class extends WeakReference, so keys are held through weak references.

private final ReferenceQueue<Object> queue = new ReferenceQueue<>(); is the reference queue: keys that have been garbage-collected are enqueued here, and their entries are cleaned up the next time expungeStaleEntries is called.
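A small sketch of the resulting behaviour; since it depends on when the garbage collector actually runs, the output is typical but not guaranteed:

import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> map = new WeakHashMap<>();

        Object strongKey = new Object();   // kept alive by a strong reference
        Object weakOnlyKey = new Object(); // only the map will reference it

        map.put(strongKey, "stays");
        map.put(weakOnlyKey, "may disappear");

        weakOnlyKey = null; // drop the last strong reference to the second key
        System.gc();        // request a GC (only a hint)
        Thread.sleep(100);  // give the collector a moment

        // size() calls expungeStaleEntries(), so a cleared entry is removed here.
        System.out.println(map.size()); // typically 1, but GC timing is not guaranteed
    }
}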


Synchronized container classes

Every operation acquires a lock, so these classes are comparatively slow. Compared with synchronized containers, the concurrent containers' higher efficiency and more varied designs make them the more mainstream choice.

Modifying elements while iterating may throw ConcurrentModificationException, and removing elements may throw ArrayIndexOutOfBoundsException.

Hashtable

The synchronized counterpart of HashMap: its key methods are all synchronized.

Collections.synchronizedMap

Map<String, String> m = Collections.synchronizedMap(new TreeMap<String, String>());
m.put("1", "hello");
m.put("3", "hello");
m.put("4", "hello");

// Iteration over a synchronized wrapper must itself be synchronized on the map,
// otherwise the traversal is not thread-safe even though single calls are.
synchronized (m) {
    Iterator<String> it = m.keySet().iterator();
    while (it.hasNext()) {
        System.out.println(it.next());
    }
}

Collections.synchronizedSortedMap

SortedMap s1 = Collections.synchronizedSortedMap(new TreeMap());


Concurrent container classes

ConcurrentHashMap

Implements the ConcurrentMap interface.

ConcurrentHashMap (in JDK 1.7) adjusts the hash-table structure slightly for concurrency: the table is divided into a number of segments according to the concurrency level (concurrencyLevel). By default it is created with a concurrency level of 16, and each segment's initial capacity is also 16 by default. Both the concurrency level and the segments' initial capacity can be set through the constructor, as the example below shows.
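For example, both parameters can be passed via the three-argument constructor; the values below are only illustrative (and in JDK 1.8 the concurrencyLevel argument is kept only as a sizing hint):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ChmConstructionDemo {
    public static void main(String[] args) {
        // initialCapacity = 64, loadFactor = 0.75, concurrencyLevel = 32
        ConcurrentMap<String, Integer> map =
                new ConcurrentHashMap<>(64, 0.75f, 32);

        map.put("a", 1);
        map.putIfAbsent("a", 2);          // no effect, "a" already present
        System.out.println(map.get("a")); // 1
    }
}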

The Segment#put method

Elements are added within a segment, and each insertion locks that segment:

final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    HashEntry<K,V> node = tryLock() ? null :
        scanAndLockForPut(key, hash, value);
    V oldValue;
    try {
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash;
        HashEntry<K,V> first = entryAt(tab, index);
        for (HashEntry<K,V> e = first;;) {
            if (e != null) {
                K k;
                if ((k = e.key) == key ||
                    (e.hash == hash && key.equals(k))) {
                    oldValue = e.value;
                    if (!onlyIfAbsent) {
                        e.value = value;
                        ++modCount;
                    }
                    break;
                }
                e = e.next;
            }
            else {
                if (node != null)
                    node.setNext(first);
                else
                    node = new HashEntry<K,V>(hash, key, value, first);
                int c = count + 1;
                if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                    rehash(node);
                else
                    setEntryAt(tab, index, node);
                ++modCount;
                count = c;
                oldValue = null;
                break;
            }
        }
    } finally {
        unlock();
    }
    return oldValue;
}

Counting the mappings: size() first tries to count without locking, summing every segment's count and its modCount on each pass; if the modCount sum differs between two consecutive passes the count is retried, and after RETRIES_BEFORE_LOCK unsuccessful attempts every segment is locked and the count is taken again.

public int size() {
   
    final Segment<K,V>[] segments = this.segments;
    int size;
    boolean overflow; // true if size overflows 32 bits
    long sum;         // sum of modCounts
    long last = 0L;   // previous sum
    int retries = -1; // first iteration isn't retry
    try {
        for (;;) {
            if (retries++ == RETRIES_BEFORE_LOCK) {
                for (int j = 0; j < segments.length; ++j)
                    ensureSegment(j).lock(); // force creation
            }
            sum = 0L;
            size = 0;
            overflow = false;
            for (int j = 0; j < segments.length; ++j) {
                Segment<K,V> seg = segmentAt(segments, j);
                if (seg != null) {
                    sum += seg.modCount;
                    int c = seg.count;
                    if (c < 0 || (size += c) < 0)
                        overflow = true;
                }
            }
            if (sum == last)
                break;
            last = sum;
        }
    } finally {
        if (retries > RETRIES_BEFORE_LOCK) {
            for (int j = 0; j < segments.length; ++j)
                segmentAt(segments, j).unlock();
        }
    }
    return overflow ? Integer.MAX_VALUE : size;
}

ConcurrentHashMap source code analysis

ConcurrentSkipListMap

All modifications are performed with CAS; a failed CAS is simply retried until it succeeds, so concurrent operations from multiple threads cannot corrupt the structure, and because CAS avoids locks, performance is much better than with locking.

ConcurrentSkipListMap and TreeMap are both sorted maps, but:

  • Their thread-safety differs: TreeMap is not thread-safe, while ConcurrentSkipListMap is.
  • ConcurrentSkipListMap is implemented with a skip list, whereas TreeMap is implemented with a red-black tree (a short usage sketch follows below).
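A short usage sketch of the sorted, navigable behaviour; the keys reuse the earlier example and come back in ascending order regardless of insertion order:

import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListMapDemo {
    public static void main(String[] args) {
        ConcurrentNavigableMap<String, String> map = new ConcurrentSkipListMap<>();
        map.put("key1", "value1");
        map.put("hello", "value2");
        map.put("0", "value3");

        // Iteration order is the natural ordering of the keys: 0, hello, key1
        System.out.println(map.keySet());

        // NavigableMap operations work concurrently without external locking.
        System.out.println(map.firstKey());       // 0
        System.out.println(map.ceilingKey("he")); // hello
    }
}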

Source of the put method:

public V put(K key, V value) {
    if (value == null)
        throw new NullPointerException();
    return doPut(key, value, false);
}

doPut finds the predecessor node; if the slot after it is free (or the key is not present), a new node is created and linked in:

private V doPut(K kkey, V value, boolean onlyIfAbsent) {
    Comparable<? super K> key = comparable(kkey);
    for (;;) {
        Node<K,V> b = findPredecessor(key);
        Node<K,V> n = b.next;
        for (;;) {
            if (n != null) {
                Node<K,V> f = n.next;
                if (n != b.next)               // inconsistent read
                    break;
                Object v = n.value;
                if (v == null) {               // n is deleted
                    n.helpDelete(b, f);
                    break;
                }
                if (v == n || b.value == null) // b is deleted
                    break;
                int c = key.compareTo(n.key);
                if (c > 0) {
                    b = n;
                    n = f;
                    continue;
                }
                if (c == 0) {
                    if (onlyIfAbsent || n.casValue(v, value))
                        return (V)v;        
                    else
                        break; // restart if lost race to replace value
                }
                // else c < 0; fall through
            }

            Node<K,V> z = new Node<K,V>(kkey, value, n);
            if (!b.casNext(n, z))
                break;         // restart if lost race to append to b
            int level = randomLevel();
            if (level > 0)
                // insert index nodes for the newly chosen level
                insertIndex(z, level);
            return null;
        }
    }
}

findPredecessor looks for the predecessor of the given key. It starts from the head index, i.e. the topmost level with the fewest nodes (the shortest path), and moves right until no node has a key smaller than the target; it then drops to the next level down and repeats, until there are no lower levels. In this way a suitable insertion position is found quickly without traversing every node.

private Node<K,V> findPredecessor(Comparable<? super K> key) {
    if (key == null)
        throw new NullPointerException(); 
    for (;;) {
        Index<K,V> q = head;
        Index<K,V> r = q.right;
        for (;;) {
            if (r != null) {
                Node<K,V> n = r.node;
                K k = n.key;
                if (n.value == null) {
                    if (!q.unlink(r))
                        break;           
                    r = q.right;          
                    continue;
                }
                // keep moving right while the target key is greater than the current node's key
                if (key.compareTo(k) > 0) {
                    q = r;
                    r = r.right;
                    continue;
                }
            }
            // drop down one level to continue the search closer to the key
            Index<K,V> d = q.down;
            if (d != null) {
                q = d;
                r = d.right;
            } else  // at the bottom level: return this node as the predecessor
                return q.node;
        }
    }
}

Java Concurrent Programming (23): (JUC Collections) An Introduction to ConcurrentSkipListMap

Java Multithreading Series, "JUC Collections" 05: ConcurrentSkipListMap

IdentityHashMap

1. Internally, key-value pairs are stored in a single array in adjacent slots: position i holds the key and position i+1 holds the value.

2. When hash codes are equal and a collision occurs, the conflict is resolved by linear probing.

3. The differences between IdentityHashMap and the ordinary HashMap are:

  • The former compares keys by reference equality while the latter uses object equality: for keys k1 and k2, IdentityHashMap considers them equal only when k1 == k2;
  • whereas HashMap considers two keys equal as soon as k1.equals(k2) == true.

Duplicate key values can therefore appear, as long as the references differ:

ihm.put("b", "2");
ihm.put(new String("c"), "3");
ihm.put(new String("c"), "4");

Iterator<String> it = ihm.keySet().iterator();
while(it.hasNext()){
    String key = it.next();
    System.out.println(ihm.get(key));
}

output:
3 
4 
2

IdentityHashMap source code analysis