71
votes
Accepted
Why is the worst case for this function O(n^2)?
The book is indeed correct, and it provides a good argument. Note that timings are not a reliable indicator of algorithmic complexity. The timings might only consider a special data distribution, or ...
44
votes
Accepted
What do you call it when you change the Big O execution time of a function
It is typically called "performance optimization", but I would not call it "refactoring", since this term typically refers to changes in code which don't change its visible behaviour. And a change in ...
20
votes
Why is big O notation an asymptotic notation/analysis?
What does asymptotic mean in the context of algorithmic analysis?
It means that someone, at some point during the last 130 years, looked at Bachmann-Landau notation and thought "Hey, this looks ...
19
votes
Accepted
Why is big O notation an asymptotic notation/analysis?
An asymptote in mathematics represents a value (line) to which the observed function constantly approaches.
Well, that is a narrow definition of the term asymptote. A more general definition allows ...
13
votes
What do you call it when you change the Big O execution time of a function
I don't think there is a standard term; a commonly used one is optimizing, though improving or speeding up would also be correct, making the clarification that you are talking about code changes and not ...
13
votes
Accepted
Why the names Omega and Theta in Big Omega and Big Theta?
O was introduced in Bachmann, Paul (1894). Analytische Zahlentheorie. Bachmann is using the term "Ordnung" (order of) multiple times to refer to the growth rate of functions, and introduces ...
11
votes
Accepted
Why isn't a counter used to avoid nested for loops for index based operations?
Both are O(N) where N is the number of elements in the array. The second one is just more confusing and has bugs.
Assuming you fix the bugs, they're both the same loop. They'll compile into similar ...
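A hedged illustration of that claim (a hypothetical 2D traversal, not the asker's exact code; both forms visit each of the N elements exactly once, so both are O(N)):

```python
def visit_nested(grid):
    """Nested loops: the conventional, readable form."""
    visits = 0
    for row in grid:
        for _ in row:
            visits += 1
    return visits

def visit_counter(grid):
    """Single flat counter with divmod: same work, harder to read."""
    rows, cols = len(grid), len(grid[0])
    visits = 0
    for k in range(rows * cols):
        i, j = divmod(k, cols)   # recover the 2D index from the flat counter
        _ = grid[i][j]
        visits += 1
    return visits

grid = [[0] * 4 for _ in range(3)]
print(visit_nested(grid), visit_counter(grid))  # both 12: N = rows * cols
```

Either way the body runs once per element, so the counter version buys no asymptotic improvement.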
10
votes
Why is big O notation an asymptotic notation/analysis?
I think you're overthinking the whole asymptotic "but never reaches" aspect, that's not terribly important. Also, forget about trying to incorporate a bunch of grad-school level math, that ...
9
votes
Trying to understand P vs NP vs NP Complete vs NP Hard
I will try to give you a less informal definition of the same.
P problems: problems that can be solved in polynomial time. This class contains problems which are efficiently solvable.
NP problem : problems ...
8
votes
Why is the worst case for this function O(n^2)?
Note that if all elements within each list are distinct, which is assumed, you only need to iterate C once for each element in A (when there is an equal element in B). So the inner loop is O(n^2) in total
8
votes
Accepted
Is looping an array to compare to itself considered O(n^2)?
It's still O(n^2), with a constant coefficient of 1/2.
The number of times the inner loop is executed is (i-1) + (i-2) + (i-3) ... + 3 + 2 + 1 with the total number of terms being i. Note that the ...
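The sum in the answer is the triangular number n·(n−1)/2; a small sketch (a hypothetical pair-comparison loop, not necessarily the asker's exact code) confirms the count:

```python
def count_inner_executions(n):
    """Count how often the body of 'for j in range(i)' runs inside
    'for i in range(n)' -- i.e. each unordered pair compared once."""
    count = 0
    for i in range(n):
        for j in range(i):
            count += 1
    return count

n = 100
assert count_inner_executions(n) == n * (n - 1) // 2  # triangular number
# n*(n-1)/2 = n^2/2 - n/2: the leading coefficient is 1/2, still O(n^2).
```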
8
votes
Accepted
Algorithms: How do I sum O(n) and O(mlog(n)) together?
You can have + in big-o, when you have terms with different parameters.
O(n + mlog(n)) describes the general case where n and m are different terms that grow independently.
In the case where n is ...
7
votes
Why is the worst case for this function O(n^2)?
We will assume that no individual sequence contains duplicates.
is a very important piece of information.
Otherwise, the worst case of the optimized version would still be O(n³), when A and B are equal ...
7
votes
Big O Complexity when iterating in 3 dimensions
Personally I would put it as O(bx*by*bz). The reason being I assume that the parameters can all change arbitrarily to one another, for example you could get the following input: bx=30000, by=30, bz=40....
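As a sanity check on that bound, a hypothetical triple loop over the three dimensions executes its body exactly bx·by·bz times (sizes here are small and arbitrary):

```python
def count_iterations(bx, by, bz):
    """Iterate the full 3D volume and count body executions."""
    count = 0
    for x in range(bx):
        for y in range(by):
            for z in range(bz):
                count += 1
    return count

assert count_iterations(30, 3, 4) == 30 * 3 * 4  # 360
# Since each dimension can grow independently of the others,
# O(bx*by*bz) is the tightest honest bound; collapsing it to
# O(n^3) would silently assume bx, by and bz grow together.
```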
7
votes
Accepted
Am I allowed to simplify terms while calculating Big O?
Not only can you, it is in the very definition of O-complexity.
O-complexity is not really as simple as most lay programmers understand it to be.
Simply said, the O is defined asymptotically. That ...
7
votes
What do you call it when you change the Big O execution time of a function
As others have said, "optimizing" is the general term for improving the performance of an algorithm. However, optimizing often means improving constant factors. If I wanted to concisely but clearly ...
6
votes
Accepted
Generating O(N^2) without loops
Here's a somewhat contrived example, squaring a number using recursion and no loops.
using System;

public class Program
{
    public static void Main()
    {
        var num_to_square = 10;
        ...
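The C# excerpt above is truncated mid-snippet; a minimal Python sketch of the same idea (names and structure are mine, assuming the point is that nested recursion can generate N·N unit steps with no loop construct):

```python
def inner(n, steps=0):
    """One recursive pass performing n unit steps -- no loop keyword."""
    if n == 0:
        return steps
    return inner(n - 1, steps + 1)

def outer(n, k=None, steps=0):
    """Run inner(n) n times, again via recursion: n * n = O(n^2) steps."""
    if k is None:
        k = n
    if k == 0:
        return steps
    return outer(n, k - 1, steps + inner(n))

# Note: Python's default recursion limit (~1000) caps n in practice,
# which doesn't change the asymptotic argument.
print(outer(50))  # 2500 unit steps: quadratic work without any loop
```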
6
votes
Accepted
How do we define a "step" when calculating big O notation in an algorithm?
A step is anything that takes time, memory, or whatever you are measuring behind the label of "complexity". A step can be any size, as long as it is roughly the same size each time it gets ...
5
votes
Generating O(N^2) without loops
No.* Any algorithm with no loops must be O(1), because you know at compile-time how many steps the algorithm will take (at most).
Unless by "using a loop" you mean actually writing one of the C# ...
5
votes
What is meant with finding real and integer constants in Big Oh notation?
The difference here is between your informal definition ("the largest increasing factor") and a rigorous mathematical definition of O(n). How do you know that "the largest increasing factor" in (n + ...
5
votes
What is time complexity of update in binary heap?
I'm going to disagree a bit with Scara95. Without the context of your answer, whether or not your answer is wrong could be a matter of pedantry. Technically, yes, it's not O(N) + ...
5
votes
What do you call it when you change the Big O execution time of a function
It will be context-dependent.
"Fixing a quadratic runtime performance bug" is typically what I see. However, whether that deserves fixing (a code change) is context-dependent.
Keep in mind that ...
5
votes
Accepted
Running time for simple for loops - Big O notation
I assume you need to prove the algorithm is O(N), not O(n).
During the first iteration of the first for loop, the statement (sum++) is called N times.
During the second iteration, it is called at ...
4
votes
Accepted
Is the complexity of this code quasilinear time?
Given that quasilinear time means the following:
An algorithm is said to run in quasilinear time (also referred to as log-linear time) if T(n) = O(n log^k n) for some positive constant k; ...
4
votes
How do we define a "step" when calculating big O notation in an algorithm?
is the implication/assumption here that printf is O(1) and therefore the overall algorithm is O(1*N)?
This. I'd say the example you've quoted (which I appreciate you've just taken from somewhere else)...
3
votes
Why is the worst case for this function O(n^2)?
To put things into the terms that your book uses:
I think you have no problem understanding that the check for a == b is worst-case O(n²).
Now in the worst case for the third loop, every a in A has a ...
3
votes
Accepted
How should I document a method's computational complexity in the code?
If the complexity is relevant for users of the method, then the best approach is to state it in the summary field as in your example. Of course it should clearly state what n represents, since it is ...
3
votes
Big O Complexity when iterating in 3 dimensions
The first thing you need to do is study asymptotic analysis and amortized analysis in detail.
If we consider asymptotic analysis for your scenario with bx = 10, by = 1000, bz = 1000 as your friend ...
3
votes
What is the O of this operation?
Simplify:
def find_level(n):
    level = 0
    while n > 0:
        level += 1
        n -= level
    return level
Get the closed form for the highest n per level:
n = sum(x = 0 to level, x) = ...
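The truncated closed form above is the triangular number: after `level` iterations the code has subtracted 1 + 2 + … + level = level·(level+1)/2 from n, so the loop exits after roughly √(2n) iterations, making the operation O(√n). A quick check (my own verification, not part of the original answer):

```python
import math

def find_level(n):
    level = 0
    while n > 0:
        level += 1
        n -= level
    return level

for n in [1, 10, 1000, 10**6]:
    level = find_level(n)
    # level is the smallest L with L*(L+1)/2 >= n ...
    assert level * (level + 1) // 2 >= n
    assert (level - 1) * level // 2 < n
    # ... which is about sqrt(2n), so the loop runs O(sqrt(n)) times.
    assert abs(level - math.sqrt(2 * n)) <= 1
```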
3
votes
Big-O when worst case numerical value is known
Yes, as soon as an absolute upper bound is known the algorithm has constant complexity.
Algorithmic complexity is only concerned with how the algorithm scales for arbitrarily large input sizes: how long ...
Only top scored, non community-wiki answers of a minimum length are eligible
Related Tags
big-o × 137
algorithms × 68
complexity × 40
algorithm-analysis × 34
sorting × 7
runtime × 7
java × 5
data-structures × 5
array × 5
computer-science × 5
recursion × 5
big-theta × 5
python × 4
performance × 4
math × 4
graph × 3
efficiency × 3
trees × 3
c# × 2
c++ × 2
optimization × 2
ruby × 2
loops × 2
time × 2
heap × 2