2
def func(n):
    if n == 1:
        return 1
    return func(n-1) + n*(n-1)

print(func(5))

I'm getting confused and not sure exactly what the complexity of this function is. Is it O(n)?

2
  • Yes, it is O(n). Commented Aug 8, 2017 at 7:34
  • @DAle why is it O(n)? Commented Aug 8, 2017 at 7:35

3 Answers

5

Calculating n*(n-1) is a fixed-time operation. The interesting part of the function is calling func(n-1) until n reaches 1. The function makes n such calls, so its complexity is O(n).
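One way to convince yourself of this is to count the calls directly. This is a hypothetical instrumented variant of the question's function (the mutable counter list is just an illustration device, not part of the original code):

```python
def func_counted(n, counter):
    # Same logic as func, but tallies every invocation.
    counter[0] += 1
    if n == 1:
        return 1
    return func_counted(n - 1, counter) + n * (n - 1)

counter = [0]
result = func_counted(5, counter)
print(result)      # 41
print(counter[0])  # 5 -- exactly one call per value of n from 5 down to 1
```

For n = 5 the function is entered exactly 5 times, matching the O(n) call count.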



1

If we assume that arithmetic operations are constant-time (and they effectively are when the numbers are relatively small), then the time complexity is O(n):

T(n) = T(n-1) + C = T(n-2) + C + C = ... = n * C = O(n)

But in practice the cost of multiplication depends on the underlying type (and we are talking about Python, where the type depends on the value: integers are arbitrary-precision). This matters as n approaches infinity, so, strictly speaking, the complexity is:

T(n) = O(n * multComplexity(n))

And this multComplexity(n) depends on the specific algorithm used to multiply huge numbers.
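To make the unrolling T(n) = T(n-1) + C concrete, here is a hypothetical iterative rewrite of the same computation. Assuming constant-time arithmetic, it performs one constant-cost step per value of k, which is exactly the n * C work the recurrence describes:

```python
def func_iter(n):
    # Computes the same value as the recursive func, but as a loop:
    # one constant-time step per k in 2..n, so O(n) steps overall.
    total = 1  # base case: func(1) == 1
    for k in range(2, n + 1):
        total += k * (k - 1)
    return total

print(func_iter(5))  # 41, same as func(5)
```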


0

As described in other answers, the answer is close to O(n) for practical purposes. For a more precise analysis, if you don't want to make the approximation that multiplication is constant-time:

Calculating n*(n-1) takes O((log n)^2) (or O((log n)^1.58) with Karatsuba, depending on the algorithm Python uses, which in turn depends on the size of the integer). Note that we take the log because the cost of multiplication is measured in the number of digits, not the value of n.

Adding the two terms takes O(log n), so we can ignore that.

The multiplication gets done O(n) times, so the total is O(n * (log n)^2). (It might be possible to tighten this bound, but it's certainly larger than O(n) - see the WolframAlpha plot).

In practice, the log terms won't really matter unless n gets very large.
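To see where the log comes from, note that the operands of n*(n-1) are represented with O(log n) bits; a quick sketch using `int.bit_length`:

```python
# The cost of computing n*(n-1) depends on the operands' bit lengths,
# which grow like log2(n), not like n itself.
for n in [10, 10**3, 10**6, 10**9]:
    print(n, n.bit_length())
# 10          ->  4 bits
# 1000        -> 10 bits
# 1000000     -> 20 bits
# 1000000000  -> 30 bits
```

A millionfold increase in n only triples the operand size, which is why the log factors are negligible until n gets very large.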

5 Comments

Addition and multiplication are in general treated as O(1) operations. So the recursion determines the complexity, i.e., the call to func(n-1). Thus, as already answered in the comments and by @Mureinik, the complexity is O(n).
For finite values of n, e.g. 64-bit values, the time complexity can be thought of as O(1); for arbitrarily large values of n (when O-notation becomes "meaningful"), there are better-than-quadratic algorithms for multiplication, e.g. Karatsuba at (log n)^1.58.
@meowgoesthedog sure, so replace log n * log n with (log n)^1.58. My point is that it's not technically O(n); as I wrote in the last line, it's going to be close enough in most cases.
This looks like a classic homework assignment to me. If we want to be technically correct we would have to know the data type; otherwise every answer can be argued against. For example, even the n == 1 comparison could take exponential time in some cases, if we wanted to use exact arithmetic.
I see your point, yes. n may not necessarily be a native integer type, but some arbitrary precision class which has overloaded operators. If this were a strongly typed language (e.g. C) then we would have a definitive answer
