Timeline for "Why Garbage Collection if smart pointers are there"
Current License: CC BY-SA 2.5
15 events
| when | what | action | by | comment | license |
|---|---|---|---|---|---|
| Mar 18, 2019 at 8:50 | comment | added | ern0 | There's no I/O during the cleanup, but hundreds of megabytes of small fragments. | |
| Mar 12, 2019 at 21:52 | comment | added | user75963 | I bet your problem is not slow deleting but slow I/O. | |
| Jul 10, 2012 at 19:21 | comment | added | supercat | Deallocation often isn't done "in the background", but rather pauses all foreground threads. Batch-mode garbage-collection is generally a performance win, despite the pausing of foreground threads, because it allows unused space to be consolidated. One could separate the processes of garbage-collection and heap compactification, but--especially in frameworks that use direct references rather than handles--they both tend to be "stop-the-world" processes, and it's often most practical to do them together. | |
| Dec 28, 2010 at 14:09 | comment | added | J D | @ern0: FWIW, Mathematica suffers from the UI pauses due to avalanches of deallocation from reference counting that you describe. I also referred to this phenomenon in my own answer. | |
| Dec 27, 2010 at 20:12 | comment | added | ern0 | I just said that freeing an object costs a lot, so it's better to do it with a background thread, which does not suspend the foreground task. Smart pointers solve the ownership problem too, of course, but in some cases auto-dropping an object via a smart pointer may cause a chain reaction of dropping other objects, which may take too long and may occur at an unwanted moment (e.g. in the middle of a UI action). (A rough sketch of such deferred destruction appears after the timeline.) | |
| Dec 27, 2010 at 18:22 | comment | added | J D | @Konrad: Mutating shared state is just another form of communication. The Cilk guys quantify it as cache complexity, the asymptotic cache-miss rate on an idealized multicore, because that measures the rate at which the cache-coherence hardware is communicating, which is often the bottleneck in parallel programs on multicores in practice. They minimize cache complexity by mutating shared state in place from parallel tasks that, again, execute on threads in a pool, and the threads themselves are abstracted away. So you've got the same problem with your thread-unsafe pool allocators. | |
| Dec 27, 2010 at 18:08 | comment | added | J D | @Konrad: Asynchronous programming means you ask the OS to invoke your function when an asynchronous operation has completed and that usually happens in the thread pool. So asynchronous programs typically have control flow that hops between threads. Per-thread resources obviously wouldn't work in that context due to thread hopping and per-workflow pool allocators would be too heavyweight (you can have millions of workflows). Moreover, you're concurrently passing messages between threads so you would have to resolve who owns which message or deep copy everything like Erlang (which is slow). | |
| Dec 27, 2010 at 16:08 | comment | added | Konrad Rudolph | @Jon: how so? I’ve never worked with Cilk but as far as I know this language places a lot of emphasis on communication. Wildly sharing object state is the opposite of that (i.e., it ignores communication). Aren’t objects usually either owned by a parent thread which manages its lifetime or local to one thread? Neither case requires cross-thread reference counting. | |
| Dec 27, 2010 at 15:57 | comment | added | J D | @Konrad: Asynchronous programming and Cilk-style parallel programming are obvious counter examples. | |
| Dec 27, 2010 at 15:32 | comment | added | Konrad Rudolph | @Jon: which, honestly, is most of the time. If you willy-nilly share object state between different threads you will have completely different problems. I’ll admit that many people program that way but this is a consequence of the bad threading abstractions that have existed until recently and it’s not a good way to do multithreading. | |
| Dec 27, 2010 at 15:29 | comment | added | J D | @Konrad: That only solves the performance problem in the context of single-threaded applications. | |
| Dec 27, 2010 at 14:57 | comment | added | Konrad Rudolph | @ern0: no. The whole point of (reference counting) smart pointers is that there is no ownership problem and no delete operator. | |
| Dec 27, 2010 at 14:46 | comment | added | ern0 | Yep, sure, GC is much more comfortable. (Especially for beginners: there're no ownership problems, there's even no delete operator.) | |
| Dec 27, 2010 at 14:29 | comment | added | Konrad Rudolph | In principle you have a point, but it should be noted that this issue has a very simple solution: use a pool allocator or small-object allocator to bundle deallocations (a rough sketch appears after the timeline). But this admittedly takes (slightly) more effort than having a GC run in the background. | |
| Dec 27, 2010 at 14:11 | history | answered | ern0 | | CC BY-SA 2.5 |
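
ern0's point about doing the expensive destruction on a background thread (comment of Dec 27, 2010 at 20:12) can be illustrated with a minimal C++ sketch. The `DeferredReclaimer` class and its `reclaim` interface are hypothetical names invented for this illustration; the idea is simply to hand the last owning reference to a worker thread so that the chain of destructors does not run in the middle of a UI action.

```cpp
#include <condition_variable>
#include <memory>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

// Hypothetical helper: the last shared_ptr reference is handed to a worker
// thread, so the (possibly long) chain of destructor calls runs off the
// foreground/UI thread.
class DeferredReclaimer {
public:
    DeferredReclaimer() : worker_([this] { run(); }) {}

    ~DeferredReclaimer() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stop_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    // Transfer ownership; destruction happens on the worker thread.
    void reclaim(std::shared_ptr<void> p) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            garbage_.push_back(std::move(p));
        }
        cv_.notify_one();
    }

private:
    void run() {
        std::vector<std::shared_ptr<void>> batch;
        for (;;) {
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return stop_ || !garbage_.empty(); });
                if (stop_ && garbage_.empty()) return;
                batch.swap(garbage_);
            }
            batch.clear();  // destructors (and any chain reaction) run here
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::vector<std::shared_ptr<void>> garbage_;
    bool stop_ = false;
    std::thread worker_;
};
```

A UI thread would call `reclaimer.reclaim(std::move(bigObjectGraph))` instead of letting the pointer go out of scope, at the cost of slightly delayed reclamation and some synchronization overhead.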
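
Konrad Rudolph's suggestion (comment of Dec 27, 2010 at 14:29) of bundling deallocations with a pool allocator might look roughly like the following. `Arena`, its block size, and the `Node` example are all hypothetical choices made for illustration; the only point being made is that a whole batch of objects can be released with one cheap operation instead of a cascade of individual `delete` calls.

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>
#include <new>
#include <vector>

// Hypothetical arena ("pool") allocator: objects are carved out of large
// blocks and are never freed individually; the whole arena is released in
// one bulk operation, bundling all the deallocations together.
class Arena {
public:
    explicit Arena(std::size_t blockSize = 64 * 1024) : blockSize_(blockSize) {}

    // Assumes power-of-two alignment, as is usual for alignof() values.
    void* allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t offset = (used_ + align - 1) & ~(align - 1);
        if (blocks_.empty() || offset + n > blockSize_) {
            blocks_.emplace_back(new char[std::max(blockSize_, n)]);
            used_ = 0;
            offset = 0;
        }
        void* p = blocks_.back().get() + offset;
        used_ = offset + n;
        return p;
    }

    // The "bundled deallocation": everything allocated from the arena goes
    // away at once, with no per-object delete.
    void reset() {
        blocks_.clear();
        used_ = 0;
    }

private:
    std::size_t blockSize_;
    std::size_t used_ = 0;
    std::vector<std::unique_ptr<char[]>> blocks_;
};

// Usage sketch with a trivially destructible type: build many small objects
// for one unit of work, then drop them all with a single reset().
struct Node { int value; Node* next; };

int main() {
    Arena arena;
    Node* head = nullptr;
    for (int i = 0; i < 100000; ++i) {
        head = new (arena.allocate(sizeof(Node), alignof(Node))) Node{i, head};
    }
    arena.reset();  // one bulk release instead of 100000 delete calls
    return 0;
}
```

This works best for trivially destructible objects; types with non-trivial destructors would still need to be destroyed explicitly before the arena is reset.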