Timeline for "How meaningful is the Big-O time complexity of an algorithm?"
Current License: CC BY-SA 3.0
10 events
| when | what | by | license | comment |
|---|---|---|---|---|
| May 12, 2016 at 14:33 | history: edited | ratchet freak | CC BY-SA 3.0 | added 122 characters in body |
| Aug 18, 2014 at 21:24 | comment: added | user22815 | | Good point about efficient algorithms sometimes being more difficult to follow due to the complexity of the optimized code. This is a good argument for having very thorough test coverage on the classes and functions that implement those algorithms. |
| Jul 9, 2014 at 0:29 | comment: added | Vatine | | Well, neither quicksort nor heapsort comes with stability guarantees, so that (specific) issue shouldn't be a deciding factor between them. |
| S Jul 8, 2014 at 18:52 | history: suggested | Aaron Hall | CC BY-SA 3.0 | capitalization |
| Jul 8, 2014 at 18:49 | review: Suggested edits | | | S Jul 8, 2014 at 18:52 |
| Apr 11, 2013 at 17:39 | vote: accept | James C | | |
| Apr 11, 2013 at 15:02 | comment: added | SF. | | It is important to realize that constants can be quite important. O(n) where a single iteration takes a second may be worse than O(n log n) where you get a million iterations per second. |
| Apr 11, 2013 at 6:13 | comment: added | Daniel B | | I'd also like to throw in this blog post on the topic, which I found to be a good rule of thumb. The thing to take away from it is that with O(n^3) and friends, "huge" could be as small as a few hundred items. Another good example is O(N!): you will not be working out all combinations/permutations of more than one or two dozen items. Actually, I consider the usefulness of big-O to be the greatest exception to "what have you tried, and did you profile it?". I don't need to profile it to know that 100! is never going to happen. |
| Apr 11, 2013 at 0:45 | comment: added | user40980 | | Example of that last point, where it doesn't make much difference and naive may be the better choice: matrix multiplication. The naive approach is O(n^3), while the fastest known algorithms are O(n^2.3727), but it doesn't make much of a difference unless you are dealing with matrices that are huge... and the math is a lot harder to follow. |
| Apr 11, 2013 at 0:00 | history: answered | ratchet freak | CC BY-SA 3.0 | |
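SF.'s comment about constants can be sketched numerically. The cost model below is hypothetical (the specific functions and rates are assumptions for illustration, not from the discussion): an O(n) algorithm whose single iteration takes one second, versus an O(n log n) algorithm that performs a million operations per second.

```python
import math

def cost_linear(n):
    """Hypothetical O(n) algorithm: one iteration per element,
    each iteration taking 1 second."""
    return n * 1.0

def cost_nlogn(n):
    """Hypothetical O(n log n) algorithm: n * log2(n) operations,
    at a million operations per second (1e-6 s each)."""
    return n * math.log2(n) * 1e-6

# Despite the "worse" asymptotic class, the O(n log n) algorithm wins
# by a huge margin here because its constant factor is a million times
# smaller: at n = 1,000,000 the linear algorithm needs ~11.6 days,
# the n log n one under 20 seconds.
for n in (1_000, 1_000_000):
    print(n, cost_linear(n), cost_nlogn(n))
```

The same style of back-of-the-envelope arithmetic backs Daniel B's point about O(N!): `math.factorial(100)` is a number with over 150 digits, so no amount of constant-factor tuning makes such an algorithm feasible.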