Coding software-based mutex algos for fun
I learnt a couple of nice mutual exclusion algorithms in my Operating Systems class last semester. They seemed like an interesting bunch, but it was not at all clear how well they would actually perform.
Now, what better way to understand them than to code them up? So it began.
First off, those who know about the GIL will probably laugh at me when I say that I wrote the algorithms in Python.
Multi-threading in Python? But why?!?!
The reason is simple: I wanted to write the code fast. After all, I had an exam I hadn’t studied anything for (yes, this exam). Plus, I suspect the GIL isn’t going to affect the outcome of this test significantly anyway (though I have not verified this yet).
So I went ahead and wrote a few classes, one for each algorithm:
The algorithms
Dekker’s algorithm (2-thread)
class Dekker:
    def __init__(self):
        self.flag = [0, 0]   # flag[i] == 1 means thread i wants to enter
        self.turn = 0

    def lock(self, i):
        j = 1 - i
        self.flag[i] = 1               # announce intent to enter
        while self.flag[j]:            # the other thread wants in too
            if self.turn == j:         # and it is the other thread's turn
                self.flag[i] = 0       # back off
                while self.turn == j:
                    pass               # wait for our turn
                self.flag[i] = 1       # announce intent again

    def unlock(self, i):
        j = 1 - i
        self.turn = j                  # hand the turn to the other thread
        self.flag[i] = 0
Peterson’s algorithm (2-thread)
class Peterson:
    def __init__(self):
        self.flag = [0, 0]   # flag[i] == 1 means thread i wants to enter
        self.turn = 0

    def lock(self, i):
        j = 1 - i
        self.flag[i] = 1     # announce intent to enter
        self.turn = j        # yield priority to the other thread
        while self.flag[j] and self.turn == j:
            pass             # spin until it is safe to enter

    def unlock(self, i):
        self.flag[i] = 0
Lamport’s bakery algorithm (n-thread)
class Lamport:
    def __init__(self, c):
        self.count = c
        self.ticket = [0] * c     # 0 means "not interested"
        self.choosing = [0] * c   # 1 while a thread is picking its ticket

    def lock(self, i):
        self.choosing[i] = 1
        self.ticket[i] = max(self.ticket) + 1   # take the next ticket number
        self.choosing[i] = 0
        for j in range(self.count):
            while self.choosing[j]:
                pass              # wait until thread j has picked its ticket
            while (self.ticket[j] and
                   (self.ticket[j], j) < (self.ticket[i], i)):
                pass              # wait for threads with smaller (ticket, id)

    def unlock(self, i):
        self.ticket[i] = 0        # give up the ticket
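Since the bakery lock is the only one of the three that handles more than two threads, here is a small usage sketch of my own with four threads (the thread count, the iteration count, and names like work are arbitrary choices for illustration, not part of the original code):

import threading

N = 4                      # the bakery lock works for any fixed thread count
lock = Lamport(N)
counter = 0                # shared state protected by the lock

def work(i):
    global counter
    for _ in range(10000):
        lock.lock(i)       # i is this thread's index, 0 .. N-1
        counter += 1       # critical section
        lock.unlock(i)

threads = [threading.Thread(target=work, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)             # 40000 if mutual exclusion held throughout

The only thing worth noting is that each thread must know its own index up front, since the ticket and choosing slots are addressed by it.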
Testing, how?
So I made a simple function: it grabs the lock, increments the thread’s counter (kept in a global list), prints the new total count across all threads, and releases the lock.
I used some clever Python idioms along the way, but those are irrelevant. What is relevant is the rate at which the counter increases for each thread, which can be thought of as a metric to quantify the performance of the algorithms.
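For concreteness, here is a minimal sketch of what such a test harness could look like, assuming the classes above, a global list of per-thread counters, and a fixed run time (the names worker, counts, and RUN_SECONDS are mine, not from the original code):

import threading
import time

RUN_SECONDS = 5
lock = Peterson()          # could equally be Dekker() or Lamport(2)
counts = [0, 0]            # one globally shared counter per thread

def worker(i):
    deadline = time.time() + RUN_SECONDS
    while time.time() < deadline:
        lock.lock(i)
        counts[i] += 1             # this thread's counter
        print(sum(counts))         # new total across all threads
        lock.unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("Iterations per thread:", counts)

Whatever the exact implementation, the per-thread totals at the end of the run are what the table below reports.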
Results
I ran each algorithm for 5 seconds and looked at the iteration count for each thread. To check for unfairness or an initial bias, I also ran one for 10 seconds, but that had no noticeable effect. And since Dekker and Peterson only work for 2 threads, I tested Lamport on 2 threads as well (highly unfair to it, I know).
Algorithm | Thread 1 | Thread 2
---|---|---
Dekker | 7415 | 7245
Peterson | 116 | 114
Lamport | 119 | 117
From this, it is quite clear who the winner is: Dekker runs much faster than the other two. Of course, it has its own drawbacks of being more complicated and of only supporting 2 threads. But the huge performance gap is remarkable nonetheless.
This experiment was never intended to be particularly rigorous; it was hardly more than an exam-time fun exercise. But a difference of this magnitude is certainly of interest, and, given what an in-depth look at the algorithms reveals, not completely unexpected.