6

Is it possible to have a fair semaphore in Python, one that guarantees that blocked threads are unblocked in the order in which they called acquire()?

2 Answers

7

You might have to build one from other moving parts. For example, create a Queue.Queue() to which each listener posts a brand-new Event() on which it then waits. When it is time to wake up one of the waiting threads, pop off the item on the queue that has been waiting longest — it will be one of those event objects — and release the thread through event.set().

Obviously, you could also use a semaphore per waiting thread, but I like the semantics of an Event since it can clearly only happen once, while a semaphore has the semantics that its value could support many waiting threads.

To set the system up:

import Queue            # renamed to "queue" in Python 3
big_queue = Queue.Queue()

Then, to wait:

import threading
myevent = threading.Event()
big_queue.put(myevent)  # join the line in FIFO order
myevent.wait()          # block until someone calls set() on our event

And to release one of the waiting threads:

event = big_queue.get() # the event of the longest-waiting thread
event.set()             # wake that thread up

I suppose the weakness of this approach is that the thread doing the set/release has to wait for a waiting thread to come along, whereas a true semaphore would let several releases accumulate even if no one was waiting yet?
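That weakness can be handled by also keeping a count of "banked" releases alongside the queue of events. Here is a minimal sketch combining the pieces above; the class name `FairSemaphore` and its internals are my own for illustration, not a standard-library API (Python 3 spelling, where `Queue` is `queue`):

```python
import queue
import threading

class FairSemaphore:
    """FIFO semaphore sketch: waiters are woken in acquire() order."""

    def __init__(self, value=1):
        self._lock = threading.Lock()
        self._value = value            # permits available right now
        self._waiters = queue.Queue()  # one Event per blocked thread, FIFO

    def acquire(self):
        with self._lock:
            if self._value > 0:
                self._value -= 1       # a permit was banked; take it
                return
            event = threading.Event()
            self._waiters.put(event)   # join the line
        event.wait()                   # block until release() wakes us

    def release(self):
        with self._lock:
            try:
                # Wake the longest-waiting thread, if there is one.
                self._waiters.get_nowait().set()
            except queue.Empty:
                # No one waiting: bank the permit, like a true semaphore.
                self._value += 1
```

A release with no waiters simply increments the counter, so later acquirers can proceed without blocking, while releases with waiters go to the head of the queue.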


4 Comments

Can we overcome the limitation you stated somehow? Is it not possible?
I am thinking about it! Out of curiosity, why — if the waiting threads are symmetric — would a "fair" semaphore even be desirable? With how memory caching works on modern processors, it should be most efficient to activate the most recently active thread, not the one that has been waiting the longest and thus has the biggest chance of having its blocks of memory evicted from the processor cache or even from main memory. Isn't a fair semaphore therefore an anti-pattern? :)
+1 Nice answer and an excellent example of using higher level threading tools to reason about mutexes.
@brandon I do not think so. There can be undesirable starvation issues: several users waiting to be served, for example. Or, for another example, you have many tasks of types A, B, C, ... and you want a roughly equal number of tasks from each category to have executed at any time, perhaps because you need to post-process them in some online manner that has that "roughly equal number" requirement.
0

With Brandon having addressed the "fair semaphore" question, it might be useful to look at the related problem of barriers: a waiting point that threads reach and are then released from at the same time: http://docs.python.org/py3k/whatsnew/3.2.html#threading
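For reference, here is a minimal sketch of the `threading.Barrier` class added in Python 3.2; the worker function and variable names are my own, not taken from the linked document:

```python
import threading

results = []
barrier = threading.Barrier(3)  # releases once 3 threads have arrived

def worker(n):
    # ...per-thread setup would happen here...
    barrier.wait()        # block until all three parties reach this point
    results.append(n)     # everyone proceeds together past the barrier

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 2]
```

Each thread blocks in `barrier.wait()` until the last of the three parties arrives, at which point all of them are released simultaneously.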
