Hashtables

Agenda

  • Discussion: pros/cons of array-backed and linked structures
  • Comparison to set and dict
  • The map ADT
  • Direct lookups via Hashing
  • Hashtables
    • Collisions and the "Birthday problem"
  • Runtime analysis & Discussion

Discussion: pros/cons of array-backed and linked structures

Between array-backed and linked lists we have:

  1. $O(1)$ indexing (array-backed)
  2. $O(1)$ appending (array-backed & linked)
  3. $O(1)$ insertion/deletion without indexing (linked)
  4. $O(N)$ linear search (unsorted)
  5. $O(\log N)$ binary search, when sorted (only array-backed lists)

Comparison to set and dict

The set and dict types don't support positional access (i.e., access by index), but they do support lookup/search. How do they fare compared to lists?

In [1]:
import timeit

def lin_search(lst, x):
    return x in lst
    
def bin_search(lst, x):
    # assumes lst is sorted
    low = 0
    hi  = len(lst)-1
    while low <= hi:
        mid = (low + hi) // 2
        if x < lst[mid]:
            hi  = mid - 1
        elif x > lst[mid]:
            low = mid + 1
        else:
            return True
    else:
        return False
    
def set_search(st, x):
    return x in st
    
    
def dict_search(dct, x):
    return x in dct
In [2]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import random

ns = np.linspace(100, 10_000, 50, dtype=int)

ts_linsearch = [timeit.timeit('lin_search(lst, lst[-1])',
                              setup='lst = list(range({})); random.shuffle(lst)'.format(n),
                              globals=globals(),
                              number=100)
                for n in ns]

ts_binsearch = [timeit.timeit('bin_search(lst, 0)',
                              setup='lst = list(range({}))'.format(n),
                              globals=globals(),
                              number=100)
                for n in ns]


ts_setsearch = [timeit.timeit(#'set_search(st, 0)',
                              'set_search(st, {})'.format(random.randrange(n)),
                              setup='lst = list(range({})); random.shuffle(lst);'
                                    'st = set(lst)'.format(n),
                              globals=globals(),
                              number=100)
                for n in ns]

ts_dctsearch = [timeit.timeit(#'dict_search(dct, 0)',
                              'dict_search(dct, {})'.format(random.randrange(n)),
                              setup='lst = list(range({})); random.shuffle(lst);'
                                    'dct = {{x:x for x in lst}}'.format(n),
                              globals=globals(),
                              number=100)
                for n in ns]
In [3]:
plt.plot(ns, ts_linsearch, 'or')
plt.plot(ns, ts_binsearch, 'sg')
plt.plot(ns, ts_setsearch, 'db')
plt.plot(ns, ts_dctsearch, 'om');

It looks like sets and dictionaries support lookup in constant time! How?!

The map ADT

We will focus next on the "map" abstract data type (aka "associative array" or "dictionary"), which is used to associate keys (which must be unique) with values. Python's dict type is an implementation of the map ADT.

Given an implementation of a map, it is trivial to implement a set on top of it (how?).
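
One possible approach (a minimal sketch, not part of this notebook's code; SetDS is just an illustrative name, and any map implementation would do): store each element as a key in an underlying map and ignore the associated value.

class SetDS:
    """A bare-bones set built on top of a map: elements are stored as keys."""
    def __init__(self):
        self.map = {}            # any map implementation works here

    def add(self, x):
        self.map[x] = True       # only the key matters; the value is a placeholder

    def __contains__(self, x):
        return x in self.map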

Here's a simple map API:

In [4]:
class MapDS:
    def __init__(self):
        self.data = []
    
    def __setitem__(self, key, value): # linear scan for the key -> O(N)
        for i in range(len(self.data)):
            if self.data[i][0] == key:
                self.data[i][1] = value
                return
        else:
            self.data.append([key, value])
    
    def __getitem__(self, key): # linear scan -> O(N)
        for k,v in self.data:
            if k == key:
                return v
        else:
            raise KeyError(str(key))
            
    def __contains__(self, key): # relies on __getitem__ -> O(N)
        try:
            _ = self[key]
            return True
        except KeyError:
            return False
In [5]:
m = MapDS()
m['batman'] = 'bruce wayne'
m['superman'] = 'clark kent'
m['spiderman'] = 'peter parker'
In [6]:
m['batman']
Out[6]:
'bruce wayne'
In [7]:
m['batman'] = 'tony stark'
In [8]:
m['batman']
Out[8]:
'tony stark'

How do we make the leap from linear runtime complexity to constant?!

Direct lookups via Hashing

Hashes (a.k.a. hash codes or hash values) are simply numerical values computed for objects.

In [9]:
hash('hello')
Out[9]:
4408441162126540726
In [10]:
hash('batman')
Out[10]:
3762518510994342531
In [11]:
hash('batmen') 
Out[11]:
1150571658522024291
In [12]:
[hash(s) for s in ['different', 'objects', 'have', 'very', 'different', 'hashes']]
Out[12]:
[8006964748821135390,
 -1301096129472531924,
 6253874399800533668,
 -8827952904197905253,
 8006964748821135390,
 -235356904885202232]
In [13]:
[hash(s)%100 for s in ['different', 'objects', 'have', 'very', 'different', 'hashes']]
Out[13]:
[90, 76, 68, 47, 90, 68]

Hashtables

In [14]:
class Hashtable:
    def __init__(self, n_buckets):
        self.buckets = [None] * n_buckets
        
    def __setitem__(self, key, val):
        bidx = hash(key) % len(self.buckets)
        self.buckets[bidx] = [key, val] # note: a colliding key simply overwrites the bucket!
    
    def __getitem__(self, key):
        bidx = hash(key) % len(self.buckets)
        kv = self.buckets[bidx]
        if kv and kv[0] == key:
            return kv[1]
        else:
            raise KeyError(str(key))
        
    def __contains__(self, key):
        try:
            _ = self[key]
            return True
        except KeyError:
            return False
In [15]:
ht = Hashtable(100)
ht['spiderman'] = 'peter parker'
ht['batman'] = 'bruce wayne'
ht['superman'] = 'clark kent'
In [16]:
ht['spiderman']
Out[16]:
'peter parker'
In [17]:
ht['batman']
Out[17]:
'bruce wayne'
In [18]:
ht['superman']
Out[18]:
'clark kent'
In [30]:
ht = Hashtable(2)
ht['spiderman'] = 'peter parker'
ht['batman'] = 'bruce wayne'
ht['superman'] = 'clark kent'
In [31]:
ht['spiderman']
Out[31]:
'peter parker'
In [32]:
ht['batman']
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-32-4597e9975528> in <module>
----> 1 ht['batman']

<ipython-input-14-8fabdd540d1d> in __getitem__(self, key)
     13             return kv[1]
     14         else:
---> 15             raise KeyError(str(key))
     16 
     17     def __contains__(self, key):

KeyError: 'batman'
In [33]:
ht['superman']
Out[33]:
'clark kent'

On Collisions

The "Birthday Problem"

Problem statement: Given $N$ people at a party, how likely is it that at least two people will have the same birthday?

In [19]:
def birthday_p(n_people):
    p_inv = 1
    for n in range(365, 365-n_people, -1):
        p_inv *= n / 365
    return 1 - p_inv
In [20]:
birthday_p(3)
Out[20]:
0.008204165884781345
In [21]:
birthday_p(23)
Out[21]:
0.5072972343239857
In [22]:
1-364/365*363/365
Out[22]:
0.008204165884781456
In [23]:
%matplotlib inline
import matplotlib.pyplot as plt

n_people = range(1, 80)
plt.plot(n_people, [birthday_p(n) for n in n_people]);

General collision statistics

Repeat the birthday problem, but with a given number of values and "buckets" that are allotted to hold them. How likely is it that two or more values will map to the same bucket?

In [24]:
def collision_p(n_values, n_buckets):
    p_inv = 1
    for n in range(n_buckets, n_buckets-n_values, -1):
        p_inv *= n / n_buckets
    return 1 - p_inv
In [25]:
collision_p(23, 365) # same as birthday problem, for 23 people
Out[25]:
0.5072972343239857
In [26]:
collision_p(10, 100)
Out[26]:
0.37184349044470544
In [27]:
collision_p(100, 1000)
Out[27]:
0.9940410733677595
In [28]:
# keep the number of values fixed at 100, but vary the number of buckets; visualize the probability of collision
%matplotlib inline
import matplotlib.pyplot as plt

n_buckets = range(100, 100001, 1000)
plt.plot(n_buckets, [collision_p(100, nb) for nb in n_buckets])
plt.show()
In [34]:
def avg_num_collisions(n, b):
    """Returns the expected number of collisions for n values uniformly distributed
    over a hashtable of b buckets. Based on (fairly) elementary probability theory:
    each bucket is left empty with probability (1-1/b)^n, so the expected number of
    occupied buckets is b*(1-(1-1/b)^n), and collisions = n - occupied buckets.
    (Pay attention in MATH 474!)"""
    return n - b + b * (1 - 1/b)**n
In [35]:
avg_num_collisions(28, 365)
Out[35]:
1.011442040700615
In [36]:
avg_num_collisions(1000, 1000)
Out[36]:
367.6954247709637
In [37]:
avg_num_collisions(1000, 10000)
Out[37]:
48.32893558556316
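
As a quick empirical check (a sketch, not one of the notebook's cells), we can simulate throwing n values into b buckets uniformly at random and count the resulting collisions; the result should agree closely with avg_num_collisions:

import random

def simulate_collisions(n, b, trials=1000):
    """Empirically estimate the expected number of collisions when n values
    are distributed uniformly at random over b buckets."""
    total = 0
    for _ in range(trials):
        buckets = [0] * b
        for _ in range(n):
            buckets[random.randrange(b)] += 1
        # a bucket holding k > 0 values contributes k - 1 collisions
        total += sum(k - 1 for k in buckets if k > 0)
    return total / trials

print(simulate_collisions(1000, 1000))   # should be close to avg_num_collisions(1000, 1000) ≈ 367.7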

Dealing with Collisions

To deal with collisions in a hashtable, we simply create a "chain" of key/value pairs for each bucket where collisions occur. The chain needs to be a data structure that supports quick insertion — natural choice: the linked list!

In [38]:
class Hashtable:
    class Node:
        def __init__(self, key, val, next=None):
            self.key = key
            self.val = val
            self.next = next
            
    def __init__(self, n_buckets=1000):
        self.buckets = [None] * n_buckets
        
    def __setitem__(self, key, val):
        bidx = hash(key) % len(self.buckets)
        n = self.buckets[bidx]
        while n: # first check whether the key is already in this bucket's chain
            if key == n.key:
                n.val = val
                return
            n = n.next
        else: # not found: prepend a new node to the chain
            self.buckets[bidx] = Hashtable.Node(key, val, next=self.buckets[bidx])
    
    def __getitem__(self, key):
        bidx = hash(key) % len(self.buckets)
        n = self.buckets[bidx]
        while n:
            if key == n.key:
                return n.val
            n = n.next
        else:
            raise KeyError(str(key))
    
    def __contains__(self, key):
        try:
            _ = self[key]
            return True
        except KeyError:
            return False
In [43]:
ht = Hashtable(1)
ht['batman'] = 'bruce wayne'
ht['superman'] = 'clark kent'
ht['spiderman'] = 'peter parker'
In [44]:
ht['batman']
Out[44]:
'bruce wayne'
In [45]:
ht['superman']
Out[45]:
'clark kent'
In [46]:
ht['spiderman']
Out[46]:
'peter parker'
In [47]:
def ht_search(ht, x):
    return x in ht

def init_ht(size):
    ht = Hashtable(size)
    for x in range(size):
        ht[x] = x
    return ht

ns = np.linspace(100, 10_000, 50, dtype=int)
ts_htsearch = [timeit.timeit('ht_search(ht, 0)',
                             #'ht_search(ht, {})'.format(random.randrange(n)),
                             setup='ht = init_ht({})'.format(n),
                             globals=globals(),
                             number=100)
               for n in ns]
In [49]:
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(ns, ts_binsearch, 'ro')
plt.plot(ns, ts_htsearch, 'gs')
plt.plot(ns, ts_dctsearch, 'b^')
plt.show()

Loose ends

Iteration

In [50]:
class Hashtable(Hashtable):
    def __iter__(self):
        for n in self.buckets:
            while n:
                yield n.key
                n = n.next
In [51]:
ht = Hashtable(100)
ht['batman'] = 'bruce wayne'
ht['superman'] = 'clark kent'
ht['spiderman'] = 'peter parker'
In [52]:
for k in ht:
    print(k)
superman
batman
spiderman

Key ordering

In [53]:
ht = Hashtable()
d = {}
for x in 'banana apple cat dog elephant'.split():
    d[x[0]] = x
    ht[x[0]] = x
In [54]:
for k in d:
    print(k, '=>', d[k])
b => banana
a => apple
c => cat
d => dog
e => elephant
In [55]:
for k in ht:
    print(k, '=>', ht[k])
b => banana
d => dog
a => apple
e => elephant
c => cat

Load factor & Rehashing

It is clear that the ratio of the number of keys to the number of buckets (known as the load factor) can have a significant effect on the performance of a hashtable.

A fixed number of buckets doesn't make sense: it would be wasteful for a small number of keys and would scale poorly to a relatively large number of keys. It also doesn't make sense to have the user of the hashtable manually specify the number of buckets, which is a low-level implementation detail.

Instead: a practical hashtable implementation would start with a relatively small number of buckets, and if/when the load factor increases beyond some threshold (typically 1), it dynamically increases the number of buckets (typically to twice the previous number). This requires that all existing keys be rehashed to new buckets (why?).
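
Here is a minimal sketch of that idea, built on the chained Hashtable above (the class name ResizingHashtable, the starting bucket count, and the helper _rehash are illustrative choices, not how Python's dict is actually implemented):

class ResizingHashtable(Hashtable):
    def __init__(self, n_buckets=8):
        super().__init__(n_buckets)
        self.count = 0                              # number of key/value entries

    def __setitem__(self, key, val):
        if key not in self:                         # only new keys raise the load factor
            self.count += 1
        super().__setitem__(key, val)
        if self.count > len(self.buckets):          # load factor exceeded 1: grow
            self._rehash(2 * len(self.buckets))

    def _rehash(self, n_buckets):
        old_buckets = self.buckets
        self.buckets = [None] * n_buckets
        for n in old_buckets:                       # every existing key must be rehashed,
            while n:                                # since its bucket index depends on
                bidx = hash(n.key) % n_buckets      # the number of buckets
                self.buckets[bidx] = Hashtable.Node(n.key, n.val,
                                                    next=self.buckets[bidx])
                n = n.next

Rehashing is itself an O(N) operation, but because it is triggered only after the number of entries has doubled, its cost averages out over many insertions; this is what the amortized analysis mentioned below accounts for.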

Uniform hashing

Ultimately, the performance of a hashtable also depends heavily on hashcodes being uniformly distributed, i.e., on each bucket having, statistically, roughly the same number of keys hashing to it. Designing hash functions that do this well is an algorithmic problem that's outside the scope of this class!
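
As a quick sanity check (a sketch, not one of the notebook's cells), we can count how many of a sample of string keys land in each of 10 buckets; with a reasonably uniform hash function the counts come out roughly equal:

from collections import Counter

n_buckets = 10
keys = ['key{}'.format(i) for i in range(10_000)]          # sample string keys
counts = Counter(hash(k) % n_buckets for k in keys)
print(sorted(counts.values()))   # roughly 1,000 keys per bucket if hashing is uniform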

Runtime analysis & Discussion

For a hashtable with $N$ key/value entries, we have the following worst-case runtime complexity:

  • Insertion: $O(N)$
  • Lookup: $O(N)$
  • Deletion: $O(N)$

Assuming uniform hashing and the rehashing behavior described above, it is also possible to prove that hashtables have $O(1)$ amortized runtime complexity for the above operations. Proving this is also beyond the scope of this class (but it is borne out by the empirical data above).
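
The Hashtable above doesn't implement deletion, but a sketch of a __delitem__ method (a hypothetical addition, not part of the class defined earlier) makes the worst case concrete: if every key hashes to the same bucket, we may have to walk a chain containing all $N$ entries before unlinking one.

class Hashtable(Hashtable):
    def __delitem__(self, key):
        bidx = hash(key) % len(self.buckets)
        n, prev = self.buckets[bidx], None
        while n:                                    # worst case: traverse the whole chain
            if n.key == key:
                if prev:
                    prev.next = n.next              # unlink from the middle/end of the chain
                else:
                    self.buckets[bidx] = n.next     # unlink the head of the chain
                return
            prev, n = n, n.next
        raise KeyError(str(key))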

Vocabulary list

  • hashtable
  • hashing and hashes
  • collision
  • hash buckets & chains
  • birthday problem
  • load factor
  • rehashing

Addendum: On Hashability

Remember: a given object must always hash to the same value. This is required so that we can always map the object to the same hash bucket.

Hashcodes for collections of objects are usually computed from the hashcodes of its contents, e.g., the hash of a tuple is a function of the hashes of the objects in said tuple:

In [ ]:
hash(('two', 'strings'))

This is useful. It allows us to use a tuple, for instance, as a key for a hashtable.
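
For example (a tiny illustration, not one of the notebook's cells), a tuple of strings can be used directly as a dictionary key:

d = {}
d[('clark', 'kent')] = 'superman'   # the tuple's hash determines the bucket
print(d[('clark', 'kent')])         # => superman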

However, if the collection of objects is mutable (i.e., we can alter its contents), this means that we can potentially change its hashcode.

If we were to use such a collection as a key in a hashtable, and alter the collection after it's been assigned to a particular bucket, this leads to a serious problem: the collection may now be in the wrong bucket (as it was assigned to a bucket based on its original hashcode)!

For this reason, only immutable types are, by default, hashable in Python. So while we can use integers, strings, and tuples as keys in dictionaries, lists (which are mutable) cannot be used. Indeed, Python marks built-in mutable types as "unhashable", e.g.,

In [ ]:
hash([1, 2, 3])

That said, Python does support hashing on instances of custom classes (which are mutable). This is because the default hash function implementation does not rely on an instance's contents (it is based on the object's identity). E.g.,

In [ ]:
class Student:
    def __init__(self, fname, lname):
        self.fname = fname
        self.lname = lname
In [ ]:
s = Student('John', 'Doe')
hash(s)
In [ ]:
s.fname = 'Jane'
hash(s) # same as before mutation

We can change the default behavior by providing our own hash function in __hash__, e.g.,

In [ ]:
class Student:
    def __init__(self, fname, lname):
        self.fname = fname
        self.lname = lname
        
    def __hash__(self):
        return hash(self.fname) + hash(self.lname)
In [ ]:
s = Student('John', 'Doe')
hash(s)
In [ ]:
s.fname = 'Jane'
hash(s)

But be careful: instances of this class are no longer suitable for use as keys in hashtables (or dictionaries), if you intend to mutate them after using them as keys!