
If I have to add a list of arbitrary numbers, say 1, 12, 14, 71, 83, 21..., what would be the time complexity of this operation?

I know that adding two numbers is O(1), but what about a list of n numbers? Assume I'm using the best possible data structure to store them for this purpose, if that can have any impact on the process at all.

Thanks in advance!

  • That would be O(N). Why does a data structure come into play? You have N numbers, you want to sum them, that is an O(N) operation. Commented Dec 26, 2015 at 10:11
  • How so? It would be good to know how this works. Commented Dec 26, 2015 at 10:11
  • You perform an O(1) operation N times, that becomes O(N). Commented Dec 26, 2015 at 10:12
  • So the processor really adopts the same naive approach as we would? Add 2 numbers, then add the next number onto this sum, then the next number, and so on. Commented Dec 26, 2015 at 10:13
  • The processor only does what you tell it to do. However, you won't find an algorithm for summing numbers that doesn't process each number. Commented Dec 26, 2015 at 10:14

3 Answers


Adding 2 numbers is O(1): the operation itself is constant time and the input size is fixed. Regardless of the input values, the operation will always take the same time.

Moving to a collection of items, each individual addition is still constant time, but it is now done multiple times. The larger the input size (N), the longer it will take, and the growth rate is linear: for each extra item in the input, the sum takes one more constant-time step.

Because of that, the time complexity is O(N).
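A minimal Python sketch of this (the function name is illustrative): one constant-time addition per element, so the total work is O(N).

```python
def sum_list(nums):
    """Sum a list with one constant-time addition per element: O(N) overall."""
    total = 0
    for x in nums:   # runs N times
        total += x   # O(1) each time
    return total

print(sum_list([1, 12, 14, 71, 83, 21]))  # 202
```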


2 Comments

Not so sure that adding 2 numbers is O(1). If the numbers can be arbitrarily large, with widths W1 and W2 bits, addition costs O(max(W1, W2)), i.e. linear in the bit width (logarithmic in the magnitude). If the numbers are bounded (like 64-bit machine numbers fitting in hardware registers), then indeed adding them is O(1); bigint arithmetic is not constant time!
@BasileStarynkevitch - yes you're right, and I guess we could come up with various other scenarios where O(1) would be wrong here but that's not what the question is about, plus O(1) is "given" in OP
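To illustrate the bigint point, here is a hedged Python sketch (the function and its operation counter are illustrative, not from the discussion) of schoolbook decimal addition, whose cost grows with the number of digits rather than being constant:

```python
def add_digits(a, b):
    """Schoolbook addition of two non-negative ints.
    Returns (sum, number of digit-level operations performed)."""
    ops = 0
    carry, result, place = 0, 0, 1
    while a or b or carry:
        da, a = a % 10, a // 10   # peel off lowest digit of a
        db, b = b % 10, b // 10   # peel off lowest digit of b
        carry, digit = divmod(da + db + carry, 10)
        result += digit * place
        place *= 10
        ops += 1                  # one op per digit position processed
    return result, ops

print(add_digits(999, 1))   # (1000, 4): the carry ripples through every digit
print(add_digits(12, 34))   # (46, 2)
```

The operation count scales with the digit (bit) width of the larger operand, which is why bigint addition is not O(1).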

It's O(N). Each data-point must be hit at least once.

However, assuming your question is practical rather than theoretical: If you have N/2 cores capable of adding two numbers in a single cycle, you can perform the operation in log2(N) cycles. Pseudocode for a fast parallel approach:

while N > 1:        // assumes N is a power of two; pad with zeros otherwise
    N = N / 2
    for i in 0..N:  // in parallel
        X[i] += X[i + N]
// result in X[0]

as opposed to a naive approach:

accum = 0
for i in 0..N: // in serial
    accum += X[i]
// result in accum

The bottleneck preventing parallelization in the naive case is the 'reduction' into accum. I think any associative reduction operation (addition, multiplication, max, etc.) can be parallelized as above.
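The pairwise scheme above can be sketched serially in Python (the function name is illustrative; a real parallel version would run each pass's additions concurrently across cores):

```python
def pairwise_sum(xs):
    """Tree reduction: each pass halves the number of partial sums,
    so a machine with enough cores needs only about log2(N) passes."""
    xs = list(xs)
    while len(xs) > 1:
        if len(xs) % 2:           # pad odd-length lists with the identity
            xs.append(0)
        half = len(xs) // 2
        # every addition in this pass is independent of the others
        xs = [xs[i] + xs[i + half] for i in range(half)]
    return xs[0] if xs else 0

print(pairwise_sum([1, 12, 14, 71, 83, 21]))  # 202
```

Each pass is embarrassingly parallel, which is why this shape is the standard basis for parallel reductions on GPUs.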

Another practical consideration is that some CPU and GPU cores can do more than one addition per cycle (e.g. SIMD instructions such as SSE).

Big-O doesn't expose bottlenecks like this reduction, and it does not necessarily reflect running time measured in wall-clock terms.

Comments


It would be O(N). For example, in pseudo-code, where list holds the numbers:

i = 0;
sum = 0;
while(i < list.length) { // runs once per element, so O(N)
    sum += list[i]; // O(1)
    i++;            // O(1)
}

No matter what the structure is, it would always be O(N), since you must check (add) each and every element.

Comments
