From my book "Data Structures & Algorithms in Java: Sixth Edition" the definition of Big Oh is the following:
Let f(n) and g(n) be functions mapping positive integers to positive real numbers. We say that f(n) is O(g(n)) if there is a real constant c > 0 and an integer constant n0 >= 1 such that f(n) <= c * g(n) for n >= n0.
They then show that the function 8n + 5 is O(n) and use the following justification:
By the big-Oh definition, we need to find a real constant c > 0 and an integer constant n0 >= 1 such that 8n + 5 <= c * n for every integer n >= n0. It is easy to see that a possible choice is c = 9 and n0 = 5. Indeed, this is one of infinitely many choices available because there is a trade-off between c and n0. For example, we could rely on constants c = 13 and n0 = 1.
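To convince myself the book's numbers check out, I wrote a quick brute-force check (my own sketch, not from the book) that tests the inequality 8n + 5 <= c * n over a finite range for a given witness pair (c, n0):

```java
public class BigOhCheck {
    // Returns true if 8n + 5 <= c * n for every integer n in [n0, limit].
    // (A finite range can only support, not prove, the claim for all n >= n0.)
    static boolean holds(long c, long n0, long limit) {
        for (long n = n0; n <= limit; n++) {
            if (8 * n + 5 > c * n) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holds(9, 5, 1_000_000));   // true: c = 9, n0 = 5 works
        System.out.println(holds(13, 1, 1_000_000));  // true: c = 13, n0 = 1 works
        System.out.println(holds(9, 1, 1_000_000));   // false: 8n + 5 > 9n for n < 5
    }
}
```

The last call shows why n0 matters for c = 9: at n = 4, 8 * 4 + 5 = 37 exceeds 9 * 4 = 36, so the inequality only starts holding from n0 = 5 onward.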
In my bachelor's studies, I learned that big O is just the fastest-growing term of a function f(n), so this definition is new to me. I can answer exercises by picking out the dominant term, but I cannot justify the answer formally. It would help me to know:
1. What is meant by "a real constant c > 0" and "an integer constant n0 >= 1"? What do these mean?
2. What trade-off is being talked about when they say there is a trade-off between c and n0?
3. Why does the choice of c and n0 matter? It feels strange picking arbitrary values like c = 9999999999 and n0 = 1 and then concluding that f(n) is indeed O(g(n)) just because 8 * 1 + 5 <= 9999999999 * 1.
I can't imagine a case where a function f(n) would stay bigger than c * n if you're free to choose c.
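For context, here is an experiment I tried (my own sketch): taking f(n) = n^2 instead, the inequality n^2 <= c * n seems to fail once n > c, no matter how large I pick c, so choosing a huge c only delays the failure rather than preventing it:

```java
public class NotBigOhOfN {
    // For f(n) = n^2, returns the first n >= 1 where n^2 > c * n.
    // Since n^2 > c * n exactly when n > c, this is always c + 1.
    static long firstFailure(long c) {
        for (long n = 1; ; n++) {
            if (n * n > c * n) return n;
        }
    }

    public static void main(String[] args) {
        System.out.println(firstFailure(9));          // 10
        System.out.println(firstFailure(1_000_000));  // 1000001
    }
}
```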