  • I don't know if you looked around Stack Exchange much, but here's a similar question, with a few answers but none marked as the accepted answer, and at least one warning about tough bugs and advising to just use coarse locking: stackoverflow.com/q/4320337/618649 Commented Jun 3, 2015 at 23:50
  • It has lots of non-solutions, that's for sure. The first answer here, for example, will deadlock within seconds. Google was also quite unhelpful, since most results deal with threads rather than cores and try to solve the problem of one thread getting preempted before finishing its operation. Commented Jun 4, 2015 at 0:17
  • Well...each core is still effectively executing a thread even if this is your own custom kernel that only spins up as many threads as you have CPU cores (or call them processes or executors or whipsits or whatever you want to call them). Point being that they're all still contending over a shared resource (RAM) and you have to coordinate access or you end up in deadlock, or with hopelessly corrupted data structures. So it's not so different from threads. That threads are pre-emptible by a scheduler just means you have even more opportunities for conflicts, because more threads. ;-) Commented Jun 4, 2015 at 0:47
  • @Craig (sorry to revive a dead horse) The difference between user-space threads and the kernel running things on multiple cores is that the kernel controls the task switching. As said, the code will never be preempted in the middle, which totally changes the playing field. You have to coordinate access, but the best way to coordinate differs. How they differ is part of the question. Commented Feb 26, 2020 at 16:04