Beyond the array-backed list and the linked list, we also have the set and dict types.

The set and dict types don't support positional access (i.e., by index), but they do support lookup/search. How do they fare compared to lists?
import timeit

def lin_search(lst, x):
    return x in lst

def bin_search(lst, x):
    # assumes lst is sorted
    low = 0
    hi = len(lst)-1
    while low <= hi:
        mid = (low + hi) // 2
        if x < lst[mid]:
            hi = mid - 1
        elif x > lst[mid]:
            low = mid + 1
        else:
            return True
    else:
        return False

def set_search(st, x):
    return x in st

def dict_search(dct, x):
    return x in dct
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import random

ns = np.linspace(100, 10_000, 50, dtype=int)

ts_linsearch = [timeit.timeit('lin_search(lst, lst[-1])',
                              setup='lst = list(range({})); random.shuffle(lst)'.format(n),
                              globals=globals(),
                              number=100)
                for n in ns]

ts_binsearch = [timeit.timeit('bin_search(lst, 0)',
                              setup='lst = list(range({}))'.format(n),
                              globals=globals(),
                              number=100)
                for n in ns]

ts_setsearch = [timeit.timeit(#'set_search(st, 0)',
                              'set_search(st, {})'.format(random.randrange(n)),
                              setup='lst = list(range({})); random.shuffle(lst);'
                                    'st = set(lst)'.format(n),
                              globals=globals(),
                              number=100)
                for n in ns]

ts_dctsearch = [timeit.timeit(#'dict_search(dct, 0)',
                              'dict_search(dct, {})'.format(random.randrange(n)),
                              setup='lst = list(range({})); random.shuffle(lst);'
                                    'dct = {{x:x for x in lst}}'.format(n),
                              globals=globals(),
                              number=100)
                for n in ns]

plt.plot(ns, ts_linsearch, 'or')
plt.plot(ns, ts_binsearch, 'sg')
plt.plot(ns, ts_setsearch, 'db')
plt.plot(ns, ts_dctsearch, 'om');
It looks like sets and dictionaries support lookup in constant time! How?!
The map ADT

We will focus next on the "map" abstract data type (a.k.a. "associative array" or "dictionary"), which is used to associate keys (which must be unique) with values. Python's dict type is an implementation of the map ADT.
Given an implementation of a map, it is trivial to implement a set on top of it (how?).
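For instance (a sketch of my own, not tied to any particular map implementation — here a plain dict stands in for the map): store each element as a key mapped to a throwaway value, and delegate membership testing to key lookup.

```python
class SetOnMap:
    """A set built on top of a map by storing each element
    as a key associated with a dummy value."""
    def __init__(self):
        self.map = {}          # any map implementation would work here

    def add(self, x):
        self.map[x] = True     # the value is irrelevant

    def discard(self, x):
        if x in self.map:
            del self.map[x]

    def __contains__(self, x):
        return x in self.map   # set membership == key lookup

s = SetOnMap()
s.add('batman')
'batman' in s  # True
'robin' in s   # False
```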
Here's a simple (and inefficient) map implementation:
class MapDS:
    def __init__(self):
        self.data = []

    def __setitem__(self, key, value): # linear scan for an existing key -> O(N)
        for i in range(len(self.data)):
            if self.data[i][0] == key:
                self.data[i][1] = value
                return
        else:
            self.data.append([key, value])

    def __getitem__(self, key): # linear search -> O(N)
        for k, v in self.data:
            if k == key:
                return v
        else:
            raise KeyError(str(key))

    def __contains__(self, key): # linear search -> O(N)
        try:
            _ = self[key]
            return True
        except KeyError:
            return False
m = MapDS()
m['batman'] = 'bruce wayne'
m['superman'] = 'clark kent'
m['spiderman'] = 'peter parker'
m['batman']
m['batman'] = 'tony stark'
m['batman']
How do we make the leap from linear runtime complexity to constant?!
Hashes (a.k.a. hash codes or hash values) are simply numerical values computed for objects.
hash('hello')
hash('batman')
hash('batmen')
[hash(s) for s in ['different', 'objects', 'have', 'very', 'different', 'hashes']]
[hash(s)%100 for s in ['different', 'objects', 'have', 'very', 'different', 'hashes']]
class Hashtable:
    def __init__(self, n_buckets):
        self.buckets = [None] * n_buckets

    def __setitem__(self, key, val):
        bidx = hash(key) % len(self.buckets)
        self.buckets[bidx] = [key, val] # note: blindly overwrites on collision!

    def __getitem__(self, key):
        bidx = hash(key) % len(self.buckets)
        kv = self.buckets[bidx]
        if kv and kv[0] == key:
            return kv[1]
        else:
            raise KeyError(str(key))

    def __contains__(self, key):
        try:
            _ = self[key]
            return True
        except KeyError:
            return False
ht = Hashtable(100)
ht['spiderman'] = 'peter parker'
ht['batman'] = 'bruce wayne'
ht['superman'] = 'clark kent'
ht['spiderman']
ht['batman']
ht['superman']
ht = Hashtable(2)
ht['spiderman'] = 'peter parker'
ht['batman'] = 'bruce wayne'
ht['superman'] = 'clark kent'
ht['spiderman']
ht['batman']
ht['superman']
Problem statement: Given $N$ people at a party, how likely is it that at least two people will have the same birthday?
def birthday_p(n_people):
    p_inv = 1
    for n in range(365, 365-n_people, -1):
        p_inv *= n / 365
    return 1 - p_inv
birthday_p(3)
birthday_p(23)
1-364/365*363/365
%matplotlib inline
import matplotlib.pyplot as plt
n_people = range(1, 80)
plt.plot(n_people, [birthday_p(n) for n in n_people]);
Repeat the birthday problem, but with a given number of values and "buckets" that are allotted to hold them. How likely is it that two or more values will map to the same bucket?
def collision_p(n_values, n_buckets):
    p_inv = 1
    for n in range(n_buckets, n_buckets-n_values, -1):
        p_inv *= n / n_buckets
    return 1 - p_inv
collision_p(23, 365) # same as birthday problem, for 23 people
collision_p(10, 100)
collision_p(100, 1000)
# keeping number of values fixed at 100, but vary number of buckets: visualize probability of collision
%matplotlib inline
import matplotlib.pyplot as plt
n_buckets = range(100, 100001, 1000)
plt.plot(n_buckets, [collision_p(100, nb) for nb in n_buckets])
plt.show()
def avg_num_collisions(n, b):
    """Returns the expected number of collisions for n values uniformly distributed
    over a hashtable of b buckets. Based on (fairly) elementary probability theory.
    (Pay attention in MATH 474!)"""
    return n - b + b * (1 - 1/b)**n
avg_num_collisions(28, 365)
avg_num_collisions(1000, 1000)
avg_num_collisions(1000, 10000)
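As a sanity check (a quick simulation of my own, not part of the derivation above), we can estimate the expected number of collisions empirically: repeatedly throw n values into b buckets uniformly at random, and count every value beyond the first in each bucket as a collision.

```python
import random

def simulate_collisions(n, b, trials=500):
    """Empirically estimate the expected number of collisions when
    n values are distributed uniformly over b buckets."""
    total = 0
    for _ in range(trials):
        counts = [0] * b
        for _ in range(n):
            counts[random.randrange(b)] += 1
        # every value beyond the first in a bucket is a collision
        total += sum(c - 1 for c in counts if c > 1)
    return total / trials

simulate_collisions(1000, 1000)  # ~368, close to avg_num_collisions(1000, 1000)
```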
To deal with collisions in a hashtable, we simply create a "chain" of key/value pairs for each bucket where collisions occur. The chain needs to be a data structure that supports quick insertion — natural choice: the linked list!
class Hashtable:
    class Node:
        def __init__(self, key, val, next=None):
            self.key = key
            self.val = val
            self.next = next

    def __init__(self, n_buckets=1000):
        self.buckets = [None] * n_buckets

    def __setitem__(self, key, val):
        bidx = hash(key) % len(self.buckets)
        n = self.buckets[bidx]
        while n:
            if key == n.key:
                n.val = val
                return
            n = n.next
        else:
            self.buckets[bidx] = Hashtable.Node(key, val, next=self.buckets[bidx])

    def __getitem__(self, key):
        bidx = hash(key) % len(self.buckets)
        n = self.buckets[bidx]
        while n:
            if key == n.key:
                return n.val
            n = n.next
        else:
            raise KeyError(str(key))

    def __contains__(self, key):
        try:
            _ = self[key]
            return True
        except KeyError:
            return False
ht = Hashtable(1)
ht['batman'] = 'bruce wayne'
ht['superman'] = 'clark kent'
ht['spiderman'] = 'peter parker'
ht['batman']
ht['superman']
ht['spiderman']
def ht_search(ht, x):
    return x in ht

def init_ht(size):
    ht = Hashtable(size)
    for x in range(size):
        ht[x] = x
    return ht
ns = np.linspace(100, 10_000, 50, dtype=int)

ts_htsearch = [timeit.timeit('ht_search(ht, 0)',
                             #'ht_search(ht, {})'.format(random.randrange(n)),
                             setup='ht = init_ht({})'.format(n),
                             globals=globals(),
                             number=100)
               for n in ns]
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(ns, ts_binsearch, 'ro')
plt.plot(ns, ts_htsearch, 'gs')
plt.plot(ns, ts_dctsearch, 'b^')
plt.show()
class Hashtable(Hashtable):
    def __iter__(self):
        for n in self.buckets:
            while n:
                yield n.key
                n = n.next
ht = Hashtable(100)
ht['batman'] = 'bruce wayne'
ht['superman'] = 'clark kent'
ht['spiderman'] = 'peter parker'
for k in ht:
    print(k)
ht = Hashtable()
d = {}
for x in 'banana apple cat dog elephant'.split():
    d[x[0]] = x
    ht[x[0]] = x

for k in d:
    print(k, '=>', d[k])

for k in ht:
    print(k, '=>', ht[k])
It is clear that the ratio of the number of keys to the number of buckets (known as the load factor) can have a significant effect on the performance of a hashtable.
A fixed number of buckets doesn't make sense: it would be wasteful for a small number of keys, and would scale poorly to a relatively large number of keys. It also doesn't make sense to have the user of the hashtable manually specify the number of buckets (which is a low-level implementation detail).
Instead: a practical hashtable implementation would start with a relatively small number of buckets, and if/when the load factor increases beyond some threshold (typically 1), it dynamically increases the number of buckets (typically to twice the previous number). This requires that all existing keys be rehashed to new buckets (why?).
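A minimal sketch of this resizing policy (class name, starting size, and threshold are my own choices, not from any particular library): track the number of entries, and when the load factor exceeds 1, double the bucket count and rehash every key into the new buckets.

```python
class ResizingHashtable:
    class Node:
        def __init__(self, key, val, next=None):
            self.key, self.val, self.next = key, val, next

    def __init__(self, n_buckets=8):
        self.buckets = [None] * n_buckets
        self.count = 0  # number of key/value entries

    def __setitem__(self, key, val):
        bidx = hash(key) % len(self.buckets)
        n = self.buckets[bidx]
        while n:
            if n.key == key:      # key already present: update in place
                n.val = val
                return
            n = n.next
        self.buckets[bidx] = ResizingHashtable.Node(key, val, self.buckets[bidx])
        self.count += 1
        if self.count > len(self.buckets):        # load factor exceeds 1
            self._resize(2 * len(self.buckets))   # double and rehash

    def _resize(self, n_buckets):
        old = self.buckets
        self.buckets = [None] * n_buckets
        for n in old:
            while n:
                # rehash: the bucket index depends on the bucket count,
                # so every key may land in a different bucket
                bidx = hash(n.key) % n_buckets
                self.buckets[bidx] = ResizingHashtable.Node(n.key, n.val,
                                                            self.buckets[bidx])
                n = n.next

    def __getitem__(self, key):
        n = self.buckets[hash(key) % len(self.buckets)]
        while n:
            if n.key == key:
                return n.val
            n = n.next
        raise KeyError(str(key))
```

Rehashing on resize is unavoidable because the bucket index is computed as hash(key) % n_buckets; once n_buckets changes, the old indices are no longer valid.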
Ultimately, the performance of a hashtable also heavily depends on hashcodes being uniformly distributed --- i.e., where, statistically, each bucket has roughly the same number of keys hashing to it. Designing hash functions that do this is an algorithmic problem that's outside the scope of this class!
For a hashtable with $N$ key/value entries, insertion, lookup, and deletion all have $O(N)$ worst-case runtime complexity — the worst case being when every key hashes to the same bucket.
Assuming uniform hashing and rehashing behavior described above, it is also possible to prove that hashtables have $O(1)$ amortized runtime complexity for the above operations. Proving this is also beyond the scope of this class (but is demonstrated by empirical data).
Remember: a given object must always hash to the same value. This is required so that we can always map the object to the same hash bucket.
Hashcodes for collections of objects are usually computed from the hashcodes of its contents, e.g., the hash of a tuple is a function of the hashes of the objects in said tuple:
hash(('two', 'strings'))
This is useful. It allows us to use a tuple, for instance, as a key for a hashtable.
However, if the collection of objects is mutable — i.e., we can alter its contents — this means that we can potentially change its hashcode.
If we were to use such a collection as a key in a hashtable, and alter the collection after it's been assigned to a particular bucket, this leads to a serious problem: the collection may now be in the wrong bucket (as it was assigned to a bucket based on its original hashcode)!
For this reason, only immutable types are, by default, hashable in Python. So while we can use integers, strings, and tuples as keys in dictionaries, lists (which are mutable) cannot be used. Indeed, Python marks built-in mutable types as "unhashable", e.g.,
hash([1, 2, 3])
That said, Python does support hashing on instances of custom classes (which are mutable). This is because the default hash function implementation does not rely on the contents of instances of custom classes. E.g.,
class Student:
    def __init__(self, fname, lname):
        self.fname = fname
        self.lname = lname
s = Student('John', 'Doe')
hash(s)
s.fname = 'Jane'
hash(s) # same as before mutation
We can change the default behavior by providing our own hash function in __hash__, e.g.,
class Student:
    def __init__(self, fname, lname):
        self.fname = fname
        self.lname = lname

    def __hash__(self):
        return hash(self.fname) + hash(self.lname)
s = Student('John', 'Doe')
hash(s)
s.fname = 'Jane'
hash(s)
But be careful: instances of this class are no longer suitable for use as keys in hashtables (or dictionaries), if you intend to mutate them after using them as keys!
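A small demonstration of the problem (the dictionary and its value are illustrative, not from the original notes): mutating a key after insertion changes its hash, so the next lookup probes the wrong bucket and fails to find the key.

```python
class Student:
    def __init__(self, fname, lname):
        self.fname = fname
        self.lname = lname

    def __hash__(self):
        # hash depends on mutable attributes -- dangerous for dict keys!
        return hash(self.fname) + hash(self.lname)

s = Student('John', 'Doe')
d = {s: 'enrolled'}
s in d             # True -- found via its current hash
s.fname = 'Jane'   # mutating the key changes its hash...
s in d             # False -- the key is now "lost" in the wrong bucket
```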