
From reading the docs and questions/answers here, it appears that Python has no "fair" semaphore: `threading.Semaphore` does not guarantee the order in which blocked threads are woken, so under heavy contention some threads may starve.

In this thread (fair-semaphore-in-python), the answer suggests using a queue of events, where each thread, when it finishes its work, signals the next thread that it is safe to start. The problem is that the last thread will wait forever to signal a next thread that does not exist and will never come.
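That dead end can be avoided by handing the permit directly to the oldest waiter, and only incrementing the counter when nobody is waiting, so a releasing thread never has to wait for a successor. A minimal sketch (the `FairSemaphore` name and structure are my own, not taken from the linked answer):

```python
import threading
from collections import deque

class FairSemaphore:
    """Semaphore sketch that wakes blocked threads in FIFO order.

    Each blocked thread waits on its own Event; release() wakes the
    oldest waiter, and if nobody is waiting the permit simply goes
    back to the counter -- so the last thread never waits to signal
    a successor that will never come.
    """
    def __init__(self, value=1):
        self._lock = threading.Lock()
        self._value = value
        self._waiters = deque()  # Events of blocked threads, oldest first

    def acquire(self):
        with self._lock:
            if self._value > 0:
                self._value -= 1
                return
            event = threading.Event()
            self._waiters.append(event)
        event.wait()  # woken in FIFO order by release()

    def release(self):
        with self._lock:
            if self._waiters:
                # Hand the permit directly to the oldest waiter;
                # the counter stays at 0.
                self._waiters.popleft().set()
            else:
                self._value += 1
```

Because the permit is handed off directly rather than returned to a free-for-all counter, a newly arriving thread cannot jump ahead of one that is already queued.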

My question is whether there is a way to implement a fair semaphore in Python. Are `queue.Queue` objects fair? (If so, it should be possible to reimplement the acquire() method to wait on the queue and thus enforce FIFO order.)
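For reference, a semaphore can be built on `queue.Queue` by treating queued items as permits: acquire() takes a token off the queue, release() puts one back. Note, though, that the docs only promise that *items* come out in insertion order, not that *blocked* `get()` calls are served FIFO, so the fairness of this sketch rests on CPython's `Condition` implementation rather than on a documented guarantee:

```python
import queue

class TokenSemaphore:
    """Semaphore sketch built on queue.Queue.

    The queue holds one token per available permit. acquire() blocks
    in get() until a token exists; release() returns a token.
    Whether blocked get() calls are woken strictly FIFO is an
    implementation detail of the underlying Condition, not a
    documented guarantee.
    """
    def __init__(self, value=1):
        self._tokens = queue.Queue()
        for _ in range(value):
            self._tokens.put(None)  # one token per permit

    def acquire(self):
        self._tokens.get()   # blocks until a token is available

    def release(self):
        self._tokens.put(None)
```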

  • Re, "Are the threading.queue objects fair?" That's not optional behavior for queues. If something is called a "queue", then that means, by definition, that objects come out in the same order that they went in. Commented Apr 16, 2019 at 16:36
  • Re, "[is there] a way to implement a fair semaphore in Python?" Python is a fully-general programming language. And, it's got locks and condition variables. I don't have time to design it for you, but if you have locks and condition variables (Or, if you have not-necessarily-fair semaphores), then you can build on that to implement pretty much any higher-level inter-thread synchronization and communication that you want. Commented Apr 16, 2019 at 16:41
  • @SolomonSlow but different threads are going to try to add items to the queue. The question regarding queue fairness was, in other words, whether threads can avoid starving while waiting to add their items to the queue - whether access to the queue is FIFO rather than in random order Commented Apr 17, 2019 at 8:24
  • OK, so it sounds like your higher-level goal is, you want a fixed-size, blocking queue; and you want it to be "fair" in the sense that, if one thread is blocked in q.put(a), and then another thread subsequently is blocked in q.put(b), you want a guarantee that the a always will go in to the queue before the b goes in. Is that right? Commented Apr 17, 2019 at 13:37
  • IMO, it would be a mistake to try to make threads go one-at-a-time like that. I would have just one dedicated thread for the GPU. I would let other threads put tasks into a blocking queue, the GPU thread would perform the tasks in the same order that they went into the queue, and then it would signal the waiting thread each time it completed a task. Commented Apr 21, 2019 at 19:38
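The single-worker design from the last comment might look like the following sketch. The `gpu_worker`/`submit` names are illustrative, and the "GPU work" is stubbed out with an ordinary function call; the point is that only one thread ever touches the device, tasks run in arrival order, and each submitter is signalled via its own Event when its task is done:

```python
import threading
import queue

def gpu_worker(tasks):
    """Single thread that owns the (hypothetical) GPU: drains the
    queue in arrival order and signals the submitter per task."""
    while True:
        item = tasks.get()
        if item is None:        # sentinel: shut down
            break
        func, args, done = item
        func(*args)             # run the task on the "GPU"
        done.set()              # tell the submitting thread it finished

tasks = queue.Queue()
worker = threading.Thread(target=gpu_worker, args=(tasks,), daemon=True)
worker.start()

def submit(func, *args):
    """Enqueue a task and block until the worker has executed it."""
    done = threading.Event()
    tasks.put((func, args, done))
    done.wait()

results = []
submit(results.append, 42)      # results is only ever touched by the worker
tasks.put(None)                 # stop the worker
worker.join()
```

Since `queue.Queue` preserves insertion order, tasks are executed in the order they were submitted, and no fairness machinery around the GPU itself is needed.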
