  • 10
    Yup, TL;DR: you usually don't need to distinguish. The OS will already migrate your compute-intensive threads onto high-performance cores if they don't start there, provided enough of those cores are free. You might want to verify that the OS is in fact doing this by using low-level APIs, especially if your workload is bursty (and thus perhaps not simple for the scheduler), or if, like x264, you typically start more threads than there are logical cores to keep cores busy when one thread temporarily runs out of work to do (see the oversubscription sketch after these comments). Commented Jul 20, 2021 at 3:20
  • 5
    @supercat: It's more complex than that (e.g. tasks dealing with user interfaces need latency, not throughput; idle CPUs take longer to wake up; load from other processes isn't known by any one process; some work can be "pre-done" in the background to make performance-sensitive work faster; sometimes "power management" really means temperature or fan-noise management; hyper-threading adds a "slow core to yourself vs. share a fast core" compromise; ...). I'd also assume the user would rather tell the OS how long remains before charging (rather than telling each app or service), and the OS can predict that from past usage patterns. Commented Jul 20, 2021 at 5:59
  • 12
    "you're left without any control over the "performance hints" that are necessary" - not quite true. It wasn't standardized because of how much hints vary per system, but you totally can use system-specific calls with help of std::thread::native_handle(). Commented Jul 20, 2021 at 14:52
  • 17
    "Stop listening to these fools." => Could we avoid hyperbole here? The OS has generic heuristics which are generally good, but in special cases they are insufficient or inconvenient. Typical examples are use cases where latency matters, in which case reserving exclusive use of a certain number of cores and pinning threads to those cores works much better than "hoping" that the OS will allow you to reach your target (see the pinning sketch after these comments). And yes, this requires reaching beyond C++; the standard offers nothing there. Commented Jul 20, 2021 at 17:52
  • 5
    "Modern systems do not allow CPU time to be wasted on unimportant work while more important work waits" is an overstatement of the progress made. It's all too easy, and common, to end up with CPU time awfully wasted waiting for some incoming data (key press, character on a serial port), or the next second. But it still typically requires programming skills to make something real-time and CPU-savvy, unless there's a framework that did this hard work. That should be done, and can be done with the right techniques. Commented Jul 21, 2021 at 7:51