Big O notation is a mathematical notation that describes the approximate size of a function on a domain. Big O is a member of a family of notations invented by the German mathematicians Paul Bachmann[1] and Edmund Landau,[2] and expanded by others, collectively called Bachmann–Landau notation. The letter O was chosen by Bachmann to stand for Ordnung, meaning the order of approximation.

In computer science, big O notation is used to classify algorithms according to how their run time or space requirements[a] grow as the input size grows.[3] In analytic number theory, big O notation is often used to express bounds on the growth of an arithmetical function; one well-known example is the remainder term in the prime number theorem.[4] In mathematical analysis, including calculus, big O notation is used to bound the error when truncating a power series and to express the quality of approximation of a real or complex valued function by a simpler function.

Often, big O notation characterizes functions according to their growth rates as the variable becomes large: different functions with the same asymptotic growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation only provides an upper bound on the growth rate of the function.

Associated with big O notation are several related notations, using the symbols $o$, $\Omega$, $\omega$, $\Theta$, $\sim$, $\ll$, $\gg$, and $\asymp$ to describe other kinds of bounds on growth rates.[5][6][7]

Formal definition


Let $f$, the function to be estimated, be either a real or complex valued function defined on a domain $X$, and let $g$, the comparison function, be a non-negative real valued function defined on the same set $X$. Common choices for the domain are intervals of real numbers, bounded or unbounded, the set of positive integers, the set of complex numbers, and tuples of real/complex numbers. With the domain written explicitly or understood implicitly, one writes

$f(x) = O(g(x)),$

which is read as "$f$ is big $O$ of $g$", if there exists a positive real number $M$ such that

$|f(x)| \le M\,g(x) \quad \text{for all } x \in X.$

If $g(x) > 0$ (i.e. $g$ is also never zero) throughout the domain $X$, an equivalent definition is that the ratio $|f(x)|/g(x)$ is bounded, i.e. there is a positive real number $M$ so that $|f(x)|/g(x) \le M$ for all $x \in X$. These definitions encompass all the uses of big $O$ in computer science and mathematics, including its use where the domain is finite, infinite, real, complex, single variate, or multivariate. In most applications, one chooses the function $g$ appearing within the argument of $O$ to be as simple a form as possible, omitting constant factors and lower order terms. The number $M$ is called the implied constant because it is normally not specified. When using big $O$ notation, what matters is that some finite $M$ exists, not its specific value. This simplifies the presentation of many analytic inequalities.

For functions defined on positive real numbers or positive integers, a more restrictive and somewhat conflicting definition is still in common use,[3][8] especially in computer science. When restricted to functions which are eventually positive, the notation

$f(x) = O(g(x)) \quad \text{as } x \to \infty$

means that for some real number $x_0$ and some positive real number $M$, $|f(x)| \le M\,g(x)$ for all $x \ge x_0$ in the domain. Here, the expression $x \to \infty$ doesn't indicate a limit, but the notion that the inequality holds for large enough $x$. The expression "as $x \to \infty$" often is omitted.[3]

Similarly, for a finite real number $a$, the notation

$f(x) = O(g(x)) \quad \text{as } x \to a$

means that for some constant $M$, $|f(x)| \le M\,g(x)$ on the interval $0 < |x - a| < \delta$ for some $\delta > 0$, that is, in a small neighborhood of $a$. In addition, the notation $x \to a^{+}$ means $x \to a$ with $x > a$. More complicated expressions are also possible.

Despite the presence of the equal sign (=) as written, the expression $f(x) = O(g(x))$ does not refer to an equality, but rather to an inequality relating $f$ and $g$.

In the 1930s,[6] the Russian number theorist I. M. Vinogradov introduced the notation $f \ll g$, which has been increasingly used in number theory[4][9][10] and other branches of mathematics, as an alternative to the $O$ notation. We have

$f \ll g \iff f = O(g).$

Frequently both notations are used in the same work.

Set version of big O


In computer science[3] it is common to define big $O$ as also defining a set of functions. With the positive (or non-negative) function $g$ specified, one interprets $O(g)$ as representing the set of all functions $f$ that satisfy $f = O(g)$. One can then equivalently write $f \in O(g)$, read as "the function $f$ is among the set of all functions of order at most $g$".

Examples with an infinite domain


In typical usage the $O$ notation is applied to an infinite interval of real numbers $[a, \infty)$ and captures the behavior of the function for very large $x$. In this setting, the contribution of the terms that grow "most quickly" will eventually make the other ones irrelevant. As a result, the following simplification rules can be applied:

  • If $f(x)$ is a sum of several terms and one of them has a larger growth rate than the rest, that term can be kept and all others omitted.
  • If $f(x)$ is a product of several factors, any constants (factors in the product that do not depend on $x$) can be omitted.

For example, let $f(x) = 6x^4 - 2x^3 + 5$, and suppose we wish to simplify this function, using $O$ notation, to describe its growth rate for large $x$. This function is the sum of three terms: $6x^4$, $-2x^3$, and $5$. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of $x$, namely $6x^4$. Now one may apply the second rule: $6x^4$ is a product of $6$ and $x^4$ in which the first factor does not depend on $x$. Omitting this factor results in the simplified form $x^4$. Thus, we say that $f(x)$ is a "big O" of $x^4$. Mathematically, we can write $f(x) = O(x^4)$ for all $x \ge 1$. One may confirm this calculation using the formal definition: let $f(x) = 6x^4 - 2x^3 + 5$ and $g(x) = x^4$. Applying the formal definition from above, the statement that $f(x) = O(x^4)$ is equivalent to its expansion, $|f(x)| \le M x^4$, for some suitable choice of a positive real number $M$ and for all $x \ge 1$. To prove this, let $M = 13$. Then, for all $x \ge 1$: $|6x^4 - 2x^3 + 5| \le 6x^4 + 2x^3 + 5 \le 6x^4 + 2x^4 + 5x^4 = 13x^4$, so $f(x) = O(x^4)$. While it is also true, by the same argument, that $f(x) = O(x^5)$, this is a less precise approximation of the function $f(x)$. On the other hand, the statement $f(x) = O(x^3)$ is false, because the term $6x^4$ causes $f(x)/x^3$ to be unbounded.
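The bound above can also be sanity-checked numerically. The following short Python sketch (an illustration added here, not part of the original text) samples the ratio $|f(x)|/x^4$ and confirms it never exceeds the witness constant $M = 13$ for $x \ge 1$:

```python
# Numeric sanity check of |6x^4 - 2x^3 + 5| <= 13 * x^4 for x >= 1.
# Illustrative sketch only; the formal proof is the inequality chain above.

def f(x: float) -> float:
    return 6 * x**4 - 2 * x**3 + 5

M = 13  # witness constant from the proof above

# Sample x over [1, 1e6]; the ratio |f(x)| / x^4 must stay <= M.
xs = [1 + i * 0.5 for i in range(20)] + [10.0**k for k in range(1, 7)]
worst = max(abs(f(x)) / x**4 for x in xs)
print(f"max |f(x)|/x^4 over samples: {worst:.4f} (bound M = {M})")
assert worst <= M
```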

When a function $T(n)$ describes the number of steps required in an algorithm with input $n$, an expression such as $T(n) = O(n^2)$, with the implied domain being the set of positive integers, may be interpreted as saying that the algorithm has at most the order of $n^2$ time complexity.
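As an illustration (mine, not from the original text), the following hypothetical Python sketch counts the exact number of comparisons made by a simple all-pairs loop and contrasts it with a quadratic bound:

```python
# Hypothetical illustration: count the comparisons performed by a
# simple all-pairs loop and compare against a quadratic bound.

def count_pair_comparisons(n: int) -> int:
    steps = 0
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1  # one comparison per pair
    return steps  # exactly n*(n-1)/2

for n in (10, 100, 1000):
    steps = count_pair_comparisons(n)
    # n*(n-1)/2 <= n^2 for all n >= 0, so the count is O(n^2) with M = 1.
    print(f"n={n:5d}  steps={steps:8d}  bound n^2={n*n}")
    assert steps <= n * n
```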

Example with a finite domain


Big O can also be used to describe the error term in an approximation to a mathematical function on a finite interval. The most significant terms are written explicitly, and then the least-significant terms are summarized in a single big O term. Consider, for example, the exponential series and two expressions of it that are valid when $x$ is small:

$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots$
$\phantom{e^x} = 1 + x + \frac{x^2}{2} + O(x^3)$
$\phantom{e^x} = 1 + x + O(x^2) \quad \text{as } x \to 0.$

The middle expression (the line with "$O(x^3)$") means the absolute value of the error $e^x - (1 + x + x^2/2)$ is at most some constant times $|x|^3$ when $x$ is small. This is an example of the use of Taylor's theorem.
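A quick numeric illustration (my sketch, not from the source): for small $x$ the remainder $e^x - (1 + x + x^2/2)$ is indeed bounded by a constant times $|x|^3$; on $[-1, 1]$ the generous constant $C = 1$ already works.

```python
import math

# Illustrative check (not from the source): the Taylor remainder
# R(x) = e^x - (1 + x + x^2/2) satisfies |R(x)| <= C * |x|^3 near 0.
C = 1.0  # generous constant; |R(x)| <= (e - 2.5) * |x|^3 on [-1, 1]

for x in (0.5, 0.1, 0.01, -0.25, -0.001):
    remainder = math.exp(x) - (1 + x + x**2 / 2)
    print(f"x={x:+.3f}  |R(x)|={abs(remainder):.3e}  C|x|^3={C*abs(x)**3:.3e}")
    assert abs(remainder) <= C * abs(x) ** 3
```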

The behavior of a given function may be very different on finite domains than on infinite domains: for example, $x^2 = O(x)$ as $x \to 0$, while $x = O(x^2)$ as $x \to \infty$.

Multivariate examples


 

 

 

 

Here we have a complex valued function of two variables. In general, any bounded function is $O(1)$.

 

The last example illustrates a mixing of finite and infinite domains on the different variables.

In all of these examples, the bound is uniform in both variables. Sometimes in a multivariate expression, one variable is more important than others, and one may express that the implied constant $C$ depends on one or more of the variables using subscripts to the big $O$ symbol or the $\ll$ symbol. For example, consider the expression

 

This means that for each real number $m$, there is a constant $C_m$, which depends on $m$, so that the displayed bound holds for all $n$. This particular statement follows from the general binomial theorem.

Another example, common in the theory of Taylor series, is   Here the implied constant depends on the size of the domain.

The subscript convention applies to all of the other notations on this page.

Properties


Product


If $f_1(x) = O(g_1(x))$ and $f_2(x) = O(g_2(x))$, then $f_1(x)\,f_2(x) = O(g_1(x)\,g_2(x))$. It follows that if $f(x) = O(g(x))$ and $h(x) = O(g(x))$, then $f(x)\,h(x) = O(g(x)^2)$.

Multiplication by a constant


Let $k$ be a nonzero constant. Then $O(|k| \cdot g) = O(g)$. In other words, if $f(x) = O(g(x))$, then $k \cdot f(x) = O(g(x))$.

Transitive property


If $f(x) = O(g(x))$ and $g(x) = O(h(x))$, then $f(x) = O(h(x))$.

If the function $f$ of a positive integer $n$ can be written as a finite sum of other functions, then the fastest growing one determines the order of $f(n)$. For example,

$f(n) = 9 \log n + 5 (\log n)^4 + 3 n^2 + 2 n^3 = O(n^3) \quad \text{as } n \to \infty.$
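The dominance of the $2n^3$ term can be seen numerically. A small Python sketch (mine, not from the source) prints the ratio $f(n)/n^3$, which settles near the leading coefficient $2$, so the ratio is bounded and $f(n) = O(n^3)$:

```python
import math

# Illustrative check (not from the source): for
# f(n) = 9*log n + 5*(log n)^4 + 3*n^2 + 2*n^3, the ratio f(n)/n^3
# stays bounded (it tends to the leading coefficient 2).
def f(n: int) -> float:
    ln = math.log(n)
    return 9 * ln + 5 * ln**4 + 3 * n**2 + 2 * n**3

for n in (10, 10**3, 10**6):
    print(f"n={n:>8}  f(n)/n^3 = {f(n) / n**3:.6f}")
```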

Some general rules about growth toward infinity follow; the second and third properties below can be proved rigorously using L'Hôpital's rule:

Large powers dominate small powers


For $c \le d$, we have $x^c = O(x^d)$ as $x \to \infty$.

Powers dominate logarithms


For any positive $c$ and $d$, $(\log x)^c = O(x^d)$ as $x \to \infty$, no matter how large $c$ is and how small $d$ is. Here, the implied constant depends on both $c$ and $d$.

Exponentials dominate powers


For any positive $d$ and any $c > 1$, $x^d = O(c^x)$ as $x \to \infty$, no matter how large $d$ is and how close $c$ is to $1$.

A function that grows faster than $n^c$ for any $c$ is called superpolynomial. One that grows more slowly than any exponential function of the form $c^n$ with $c > 1$ is called subexponential. An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms for integer factorization and the function $n^{\log n}$.

We may ignore any powers of $n$ inside of the logarithms. For any positive $c$, the notation $O(\log n^c)$ means exactly the same thing as $O(\log n)$, since $\log n^c = c \log n$. Similarly, logs with different constant bases are equivalent with respect to big O notation, since $\log_a n = \log_b n / \log_b a$ and the factor $1/\log_b a$ is a constant. On the other hand, exponentials with different bases are not of the same order. For example, $2^n$ and $3^n$ are not of the same order.
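Both facts can be exhibited numerically; the following Python sketch (an illustration added here, not from the source) shows the ratio of two logarithms is a fixed constant while the (logarithm of the) ratio of two exponentials is unbounded:

```python
import math

# Illustration (not from the source): changing a logarithm's base only
# changes the implied constant, while changing an exponential's base
# changes the order of growth.
for n in (10, 10**3, 10**6):
    # log2(n)/log10(n) is the constant log(10)/log(2) ~ 3.3219 for every n.
    log_ratio = math.log2(n) / math.log10(n)
    # ln(3^n / 2^n) = n * ln(3/2) grows linearly, so the ratio is unbounded.
    exp_ratio_log = n * math.log(3 / 2)
    print(f"n={n:>8}  log2(n)/log10(n)={log_ratio:.4f}"
          f"  ln(3^n/2^n)={exp_ratio_log:.3e}")
```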

More complicated expressions


In more complicated usage, $O(\cdot)$ can appear in different places in an equation, even several times on each side. For example, the following are true for $n$ a positive integer:

$(n+1)^2 = n^2 + O(n)$
$(n + O(n^{1/2}))\,(n + O(\log n))^2 = n^3 + O(n^{5/2})$
$n^{O(1)} = O(e^n)$

The meaning of such statements is as follows: for any functions which satisfy each $O(\cdot)$ on the left side, there are some functions satisfying each $O(\cdot)$ on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function $f(n) = O(1)$, there is some function $g(n) = O(e^n)$ such that $n^{f(n)} = g(n)$". The implied constant in the statement "$g(n) = O(e^n)$" may depend on the implied constant in the expression "$f(n) = O(1)$".

Some further examples:  

Vinogradov's ≫ and Knuth's big Ω


When $f$ and $g$ are both positive functions, Vinogradov[6] introduced the notation $f \gg g$, which means the same as $g = O(f)$. Vinogradov's two notations enjoy visual symmetry, as for positive functions $f$ and $g$, we have $f \ll g$ if and only if $g \gg f$.

In 1976, Donald Knuth[7] defined

$f(x) = \Omega(g(x)) \iff g(x) = O(f(x)),$

which has the same meaning as Vinogradov's $f \gg g$.

Much earlier, Hardy and Littlewood defined $\Omega$ differently, but that definition is seldom used anymore (Ivič's book[9] being one exception). Justifying his use of the $\Omega$-symbol to describe a stronger property,[7] Knuth wrote: "For all the applications I have seen so far in computer science, a stronger requirement ... is much more appropriate". Knuth further wrote, "Although I have changed Hardy and Littlewood's definition of $\Omega$, I feel justified in doing so because their definition is by no means in wide use, and because there are other ways to say what they want to say in the comparatively rare cases when their definition applies."[7]

Indeed, Knuth's big $\Omega$ enjoys much more widespread use today than the Hardy–Littlewood big $\Omega$, being a common feature in computer science and combinatorics.

Hardy's ≍ and Knuth's big Θ


In analytic number theory,[10] the notation $f \asymp g$ means that both $f \ll g$ and $g \ll f$ hold. This notation is originally due to Hardy.[5] Knuth's notation for the same notion is $f = \Theta(g)$.[7] Roughly speaking, these statements assert that $f$ and $g$ have the same order. These notations mean that there are positive constants $C_1$ and $C_2$ so that $C_1\, g(x) \le |f(x)| \le C_2\, g(x)$ for all $x$ in the common domain of $f$ and $g$. When the functions are defined on the positive integers or positive real numbers, as with big O, writers oftentimes interpret the statements $f \asymp g$ and $f = \Theta(g)$ as holding for all sufficiently large $x$, that is, for all $x$ beyond some point $x_0$. Sometimes this is indicated by appending "as $x \to \infty$" to the statement. For example, $n^2 - n = \Theta(n^2)$ is true for the domain $n \ge 2$ but false if the domain is all positive integers, since the function is zero at $n = 1$.

Further examples


 

 

The notation $f(x) = O(g(x))$ means that there is a positive constant $C$ so that $|f(x)| \le C\, g(x)$ for all $x$. By contrast, $f(x) = \Omega(g(x))$ means that there is a positive constant $c$ so that $|f(x)| \ge c\, g(x)$ for all $x$, and $f(x) = \Theta(g(x))$ means that there are positive constants $c$ and $C$ so that $c\, g(x) \le |f(x)| \le C\, g(x)$ for all $x$.

For any domain  ,   each statement being for all   in  .

Orders of common functions


Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is a positive constant and n increases without bound. The slower-growing functions are generally listed first.

Notation | Name | Example
$O(1)$ | constant | Finding the median value for a sorted array of numbers; Calculating $(-1)^n$; Using a constant-size lookup table
$O(\alpha(n))$ | inverse Ackermann function | Amortized complexity per operation for the Disjoint-set data structure
$O(\log \log n)$ | double logarithmic | Average number of comparisons spent finding an item using interpolation search in a sorted array of uniformly distributed values
$O(\log n)$ | logarithmic | Finding an item in a sorted array with a binary search or a balanced search tree as well as all operations in a binomial heap
$O((\log n)^c)$, $c > 1$ | polylogarithmic | Matrix chain ordering can be solved in polylogarithmic time on a parallel random-access machine.
$O(n^c)$, $0 < c < 1$ | fractional power | Searching in a k-d tree
$O(n)$ | linear | Finding an item in an unsorted list or in an unsorted array; adding two $n$-bit integers by ripple carry
$O(n \log^{*} n)$ | n log-star n | Performing triangulation of a simple polygon using Seidel's algorithm,[11] where $\log^{*} n$ denotes the iterated logarithm
$O(n \log n) = O(\log n!)$ | linearithmic, loglinear, quasilinear, or "$n \log n$" | Performing a fast Fourier transform; fastest possible comparison sort; heapsort and merge sort
$O(n^2)$ | quadratic | Multiplying two $n$-digit numbers by schoolbook multiplication; simple sorting algorithms, such as bubble sort, selection sort and insertion sort; (worst-case) bound on some usually faster sorting algorithms such as quicksort, Shellsort, and tree sort
$O(n^c)$, $c > 1$ | polynomial or algebraic | Tree-adjoining grammar parsing; maximum matching for bipartite graphs; finding the determinant with LU decomposition
$L_n[\alpha, c]$, $0 < \alpha < 1$ | L-notation or sub-exponential | Factoring a number using the quadratic sieve or number field sieve
$O(c^n)$, $c > 1$ | exponential | Finding the (exact) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search
$O(n!)$ | factorial | Solving the travelling salesman problem via brute-force search; generating all unrestricted permutations of a poset; finding the determinant with Laplace expansion; enumerating all partitions of a set

The statement $f(n) = O(n!)$ is sometimes weakened to $f(n) = O(n^n)$ to derive simpler formulas for asymptotic complexity. In many of these examples, the running time is actually $\Theta$ of the stated bound, which conveys more precision.
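To make the ordering in the table concrete, the following illustrative Python sketch (mine, not from the source) evaluates several of the listed growth rates at a few input sizes:

```python
import math

# Illustrative sketch: evaluate representative growth rates from the
# table at a few input sizes to exhibit their ordering.
growth_rates = [
    ("log n",   lambda n: math.log(n)),
    ("sqrt n",  lambda n: math.sqrt(n)),
    ("n",       lambda n: float(n)),
    ("n log n", lambda n: n * math.log(n)),
    ("n^2",     lambda n: float(n) ** 2),
    ("2^n",     lambda n: 2.0 ** n),
]

for n in (10, 20, 40):
    row = "  ".join(f"{name}={fn(n):.3g}" for name, fn in growth_rates)
    print(f"n={n:2d}: {row}")
```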

Little-o notation


For real or complex-valued functions of a real variable $x$, with $g(x)$ nonzero for sufficiently large $x$, one writes[2]

$f(x) = o(g(x)) \quad \text{as } x \to \infty$

if $\lim_{x \to \infty} \frac{f(x)}{g(x)} = 0.$ That is, for every positive constant $\varepsilon$ there exists a constant $x_0$ such that

$|f(x)| \le \varepsilon\, |g(x)| \quad \text{for all } x \ge x_0.$

Intuitively, this means that $g(x)$ grows much faster than $f(x)$, or equivalently $f(x)$ grows much slower than $g(x)$. For example, one has

$2x = o(x^2)$ and $\frac{1}{x} = o(1)$, both as $x \to \infty$.

When one is interested in the behavior of a function for large values of $x$, little-o notation makes a stronger statement than the corresponding big-O notation: every function that is little-o of $g$ is also big-O of $g$ on some interval $[x_0, \infty)$, but not every function that is big-O of $g$ is little-o of $g$. For example, $2x^2 = O(x^2)$ but $2x^2 \neq o(x^2)$ for $x \to \infty$.

Little-o respects a number of arithmetic operations. For example,

if $c$ is a nonzero constant and $f(x) = o(g(x))$, then $c \cdot f(x) = o(g(x))$, and
if $f(x) = o(F(x))$ and $g(x) = o(G(x))$, then $f(x)\,g(x) = o(F(x)\,G(x))$;
if $f(x) = o(F(x))$ and $g(x) = o(F(x))$, then $f(x) + g(x) = o(F(x))$.

It also satisfies a transitivity relation:

if $f(x) = o(g(x))$ and $g(x) = o(h(x))$, then $f(x) = o(h(x))$.

Little-o can also be generalized to the finite case:[2] $f(x) = o(g(x))$ as $x \to a$ if $\lim_{x \to a} \frac{f(x)}{g(x)} = 0.$ In other words, $f(x) = \varepsilon(x)\, g(x)$ for some function $\varepsilon(x)$ with $\lim_{x \to a} \varepsilon(x) = 0$.

This definition is especially useful in the computation of limits using Taylor series. For example:

$\sin x = x + o(x)$ as $x \to 0$, so $\lim_{x \to 0} \frac{\sin x}{x} = 1.$
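A numeric illustration (added here, not from the source): the $o(x)$ remainder of $\sin x$, divided by $x$, visibly tends to zero, which forces the limit above to be $1$.

```python
import math

# Numeric illustration (not from the source): sin(x) = x + o(x) as x -> 0,
# so sin(x)/x -> 1 and the o(x) remainder, divided by x, tends to 0.
for x in (0.5, 0.1, 0.01, 0.001):
    ratio = math.sin(x) / x
    little_o_part = (math.sin(x) - x) / x   # -> 0 as x -> 0
    print(f"x={x:<6} sin(x)/x={ratio:.8f}  (sin(x)-x)/x={little_o_part:.2e}")
```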

Asymptotic notation


A relation related to little-o is the asymptotic notation $\sim$. For real valued functions $f$ and $g$, the expression $f(x) \sim g(x)$ means $\lim_{x \to \infty} \frac{f(x)}{g(x)} = 1.$ One can connect this to little-o by observing that $f \sim g$ is also equivalent to $f = (1 + \varepsilon(x))\, g$. Here $\varepsilon(x)$ refers to a function tending to zero as $x \to \infty$. One reads $f \sim g$ as "$f$ is asymptotic to $g$". For nonzero functions on the same (finite or infinite) domain, $\sim$ forms an equivalence relation.

One of the most famous theorems using the notation $\sim$ is Stirling's formula $n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$. In number theory, the famous prime number theorem states that $\pi(x) \sim \frac{x}{\ln x}$, where $\pi(x)$ is the number of primes which are at most $x$ and $\ln x$ is the natural logarithm of $x$.
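Stirling's formula is easy to observe numerically; this Python sketch (an illustration, not from the source) shows the ratio of $n!$ to the Stirling approximation tending to $1$:

```python
import math

# Numeric illustration (not from the source): the ratio in Stirling's
# formula n! ~ sqrt(2*pi*n) * (n/e)^n tends to 1 as n grows.
for n in (5, 10, 50, 100):
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    ratio = math.factorial(n) / stirling
    print(f"n={n:3d}  n!/Stirling = {ratio:.6f}")
```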

As with little-o, there is a version with finite limits (two-sided or one-sided) as well, for example $\sin x \sim x$ as $x \to 0$.

Further examples: $e^x - 1 \sim x$ as $x \to 0$; $\log(1+x) \sim x$ as $x \to 0$; $\zeta(s) \sim \frac{1}{s-1}$ as $s \to 1$. The last asymptotic is a basic property of the Riemann zeta function.

Knuth's little 𝜔


For eventually positive, real valued functions $f$ and $g$, the notation $f(x) = \omega(g(x))$ means $\lim_{x \to \infty} \frac{f(x)}{g(x)} = \infty.$ In other words, $g(x) = o(f(x))$. Roughly speaking, this means that $f(x)$ grows much faster than does $g(x)$.

The Hardy–Littlewood Ω notation


In 1914 G. H. Hardy and J. E. Littlewood introduced the new symbol $\Omega$,[12] which is defined as follows:

$f(x) = \Omega(g(x))$ as $x \to \infty$ if $\limsup_{x \to \infty} \left|\frac{f(x)}{g(x)}\right| > 0.$

Thus $f(x) = \Omega(g(x))$ is the negation of $f(x) = o(g(x))$.

In 1916 the same authors introduced the two new symbols $\Omega_R$ and $\Omega_L$, defined as:[13]

$f(x) = \Omega_R(g(x))$ as $x \to \infty$ if $\limsup_{x \to \infty} \frac{f(x)}{g(x)} > 0$;
$f(x) = \Omega_L(g(x))$ as $x \to \infty$ if $\liminf_{x \to \infty} \frac{f(x)}{g(x)} < 0.$

These symbols were used by E. Landau, with the same meanings, in 1924.[14] Authors that followed Landau, however, use a different notation for the same definitions:[9] the symbol $\Omega_R$ has been replaced by the current notation $\Omega_{+}$ with the same definition, and $\Omega_L$ became $\Omega_{-}$.

These three symbols $\Omega$, $\Omega_{+}$, $\Omega_{-}$, as well as $\Omega_{\pm}$ (meaning that $\Omega_{+}$ and $\Omega_{-}$ are both satisfied), are now currently used in analytic number theory.[9][10]

Simple examples


We have

$\sin x = \Omega(1)$ as $x \to \infty$,

and more precisely

$\sin x = \Omega_{\pm}(1)$ as $x \to \infty$,

where $\Omega_{\pm}$ means that the left side is both $\Omega_{+}$ and $\Omega_{-}$ of the right side.

We have

$\sin x + 1 = \Omega(1)$ as $x \to \infty$,

and more precisely

$\sin x + 1 = \Omega_{+}(1)$ as $x \to \infty$;

however

$\sin x + 1 \neq \Omega_{-}(1)$ as $x \to \infty$.

Family of Bachmann–Landau notations


For understanding the formal definitions, consult the list of logic symbols used in mathematics.

Notation | Name[7] | Description | Formal definition | Compact definition

(References for this table: [4][5][7][12][15][16])

$f(n) = o(g(n))$ | Small O; Small Oh; Little O; Little Oh | $f$ is dominated by $g$ asymptotically (for every constant factor $k$) | $\forall k > 0\ \exists n_0\ \forall n > n_0 : |f(n)| < k\, g(n)$ | $\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0$
$f(n) = O(g(n))$, or $f(n) \ll g(n)$ (Vinogradov's notation) | Big O; Big Oh; Big Omicron | $f$ is bounded above by $g$ (up to a constant factor $k$) | $\exists k > 0\ \exists n_0\ \forall n > n_0 : |f(n)| \le k\, g(n)$ | $\limsup_{n \to \infty} \frac{|f(n)|}{g(n)} < \infty$
$f(n) \asymp g(n)$ (Hardy's notation), or $f(n) = \Theta(g(n))$ (Knuth's notation) | Of the same order as (Hardy); Big Theta (Knuth) | $f$ is bounded by $g$ both above (with constant factor $k_2$) and below (with constant factor $k_1$) | $\exists k_1 > 0\ \exists k_2 > 0\ \exists n_0\ \forall n > n_0 : k_1\, g(n) \le |f(n)| \le k_2\, g(n)$ | $f(n) = O(g(n))$ and $g(n) = O(f(n))$
$f(n) \sim g(n)$ | Asymptotic equivalence | $f$ is equal to $g$ asymptotically | $\forall \varepsilon > 0\ \exists n_0\ \forall n > n_0 : \left|\frac{f(n)}{g(n)} - 1\right| < \varepsilon$ | $\lim_{n \to \infty} \frac{f(n)}{g(n)} = 1$
$f(n) = \Omega(g(n))$ (Knuth's notation), or $f(n) \gg g(n)$ (Vinogradov's notation) | Big Omega in complexity theory (Knuth) | $f$ is bounded below by $g$, up to a constant factor | $\exists k > 0\ \exists n_0\ \forall n > n_0 : |f(n)| \ge k\, g(n)$ | $g(n) = O(f(n))$
$f(n) = \omega(g(n))$ | Small Omega; Little Omega | $f$ dominates $g$ asymptotically (for every constant factor $k$) | $\forall k > 0\ \exists n_0\ \forall n > n_0 : |f(n)| > k\, g(n)$ | $\lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty$
$f(n) = \Omega(g(n))$ (Hardy–Littlewood) | Big Omega in number theory (Hardy–Littlewood) | $f$ is not dominated by $g$ asymptotically | $\exists k > 0\ \forall n_0\ \exists n > n_0 : |f(n)| \ge k\, g(n)$ | $\limsup_{n \to \infty} \left|\frac{f(n)}{g(n)}\right| > 0$

The limit definitions assume $g(n) \neq 0$ for $n$ in a neighborhood of the limit point; when the limit point is $\infty$, this means that $g(n) \neq 0$ for sufficiently large $n$. The limit point may also be taken to be a finite value $a$, or a one-sided limit $a^{+}$ or $a^{-}$.

Computer science and combinatorics use the big $O$, big Theta $\Theta$, little $o$, little omega $\omega$ and Knuth's big Omega $\Omega$ notations.[3] Analytic number theory often uses the big $O$, small $o$, Hardy's $\asymp$, the Hardy–Littlewood big Omega $\Omega$ (with or without the +, − or ± subscripts) and the Vinogradov notations $\ll$ and $\gg$.[9][4][10] The small omega $\omega$ notation is not used as often in analysis or in number theory.[17]

Quality of approximations using different notation


Informally, especially in computer science, the big $O$ notation often can be used somewhat differently to describe an asymptotic tight bound where using big Theta $\Theta$ notation might be more factually appropriate in a given context.[18] For example, when considering a function $T(n) = 73n^3 + 22n^2 + 58$, all of the following are generally acceptable, but tighter bounds (such as numbers 2, 3 and 4 below) are usually strongly preferred over looser bounds (such as number 1 below):

  1. $T(n) = O(n^{100})$
  2. $T(n) = O(n^3)$
  3. $T(n) = \Theta(n^3)$
  4. $T(n) \sim 73 n^3$ as $n \to \infty$.

While all four statements are true, progressively more information is contained in each. In some fields, however, the big O notation (number 2 in the list above) would be used more commonly than the big Theta notation (number 3 in the list above). For example, if $T(n)$ represents the running time of a newly developed algorithm for input size $n$, the inventors and users of the algorithm might be more inclined to put an upper bound on how long it will take to run without making an explicit statement about the lower bound or asymptotic behavior.

Extensions to the Bachmann–Landau notations


Another notation sometimes used in computer science is $\tilde{O}$ (read soft-O), which hides polylogarithmic factors. There are two definitions in use: some authors use $\tilde{O}(g(n))$ as shorthand for $O(g(n) (\log g(n))^k)$ for some $k$[citation needed], while others use it as shorthand for $O(g(n) (\log n)^k)$.[19] When $g(n)$ is polynomial in $n$, there is no difference; however, the latter definition allows one to say, e.g., that $n 2^n = \tilde{O}(2^n)$, while the former definition allows for $(\log n)^k = \tilde{O}(1)$ for any constant $k$. Some authors write $O^*$ for the same purpose as the latter definition.[20] Essentially, it is big O notation with logarithmic factors ignored, used when the growth-rate effects of some other super-logarithmic function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s). This notation is often used to obviate the "nitpicking" within growth-rates that are stated as too tightly bounded for the matters at hand (since $(\log n)^k = o(n^{\varepsilon})$ for any constant $k$ and any $\varepsilon > 0$).

Also, the L notation, defined as

$L_n[\alpha, c] = e^{(c + o(1)) (\ln n)^{\alpha} (\ln \ln n)^{1 - \alpha}},$

is convenient for functions that are between polynomial and exponential in terms of $\ln n$.
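A small Python sketch (illustrative; the function name and the choice of $n$ are mine, and the $o(1)$ term is dropped) shows how $L_n[\alpha, c]$ interpolates between polynomial growth in $\ln n$ at $\alpha = 0$ and exponential growth in $\ln n$ at $\alpha = 1$:

```python
import math

# Illustrative sketch: L_n[alpha, c] ~ exp(c * (ln n)^alpha * (ln ln n)^(1-alpha)),
# with the o(1) term dropped. alpha = 0 gives (ln n)^c (polynomial in ln n);
# alpha = 1 gives n^c (exponential in ln n).

def L(n: float, alpha: float, c: float) -> float:
    ln_n = math.log(n)
    return math.exp(c * ln_n**alpha * math.log(ln_n) ** (1 - alpha))

n = 2.0**64
for alpha in (0.0, 1 / 3, 0.5, 1.0):
    print(f"alpha={alpha:.3f}  L_n[alpha, 1] = {L(n, alpha, 1.0):.4e}")
```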

Generalizations and related usages

The generalization to functions taking values in any normed vector space is straightforward (replacing absolute values by norms), where $f$ and $g$ need not take their values in the same space. A generalization to functions $g$ taking values in any topological group is also possible[citation needed]. The "limiting process" $x \to x_0$ can also be generalized by introducing an arbitrary filter base, i.e. to directed nets $f$ and $g$. The $o$ notation can be used to define derivatives and differentiability in quite general spaces, and also (asymptotical) equivalence of functions,

$f \sim g \iff (f - g) = o(g),$

which is an equivalence relation and a more restrictive notion than the relationship "$f$ is $\Theta(g)$" from above. (It reduces to $\lim f/g = 1$ if $f$ and $g$ are positive real valued functions.) For example, $2x$ is $\Theta(x)$, but $2x - x$ is not $o(x)$.

History


We sketch the history of the du Bois-Reymond, Bachmann–Landau, Hardy, Vinogradov and Knuth notations.

In 1870, Paul du Bois-Reymond[21] defined the symbols $\succ$, $\sim$ and $\prec$ to compare the relative growth of functions. These were not widely adopted. The first and third enjoy a symmetry: $f \succ g$ means the same as $g \prec f$. Later, Landau adopted $\sim$ in the narrower sense that the limit of $f/g$ equals 1. Apart from that narrower use of $\sim$, none of these notations remains in use today.

The symbol O was first introduced by the number theorist Paul Bachmann in 1894, in the second volume of his book Analytische Zahlentheorie ("analytic number theory").[1] The number theorist Edmund Landau adopted it, and was thus inspired to introduce in 1909 the notation o;[2] hence both are now called Landau symbols. These notations were used in applied mathematics during the 1950s for asymptotic analysis.[22] The symbol $\Omega$ (in the sense "is not an $o$ of") was introduced in 1914 by Hardy and Littlewood.[12] Hardy and Littlewood also introduced in 1916 the symbols $\Omega_R$ ("right") and $\Omega_L$ ("left").[13] The notation $\Omega$ became somewhat commonly used in number theory at least since the 1950s.[23]

The symbol $\sim$, although it had been used before with different meanings,[21] was given its modern definition by Landau in 1909[2] and by Hardy in 1910.[5] Just above on the same page of his tract Hardy defined the symbol $\asymp$, where $f \asymp g$ means that both $f = O(g)$ and $g = O(f)$ are satisfied. The notation is still currently used in analytic number theory.[24][10] In his tract Hardy also proposed the symbol $\preccurlyeq$, where $f \preccurlyeq g$ means that $f = O(g)$ (this corresponds to one of du Bois-Reymond's notations).

In the 1930s, Vinogradov[6] popularized the notations $f \ll g$ and $g \gg f$, both of which mean $f = O(g)$. This notation became standard in analytic number theory.[4]

In the 1970s the big O was popularized in computer science by Donald Knuth, who proposed the notation $f(x) = \Theta(g(x))$ for Hardy's $f(x) \asymp g(x)$, and proposed a different definition for the Hardy–Littlewood Omega notation.[7]

Hardy introduced the symbol $\preccurlyeq$ and advocated for du Bois-Reymond's $\prec$ (as well as the already mentioned other symbols) in his 1910 tract "Orders of Infinity",[5] but made use of them only in three papers (1910–1913). In his nearly 400 remaining papers and books he consistently used the Landau symbols O and o.[25] Hardy's symbols $\preccurlyeq$ and $\prec$ are not used anymore.

Matters of notation


Arrows


In mathematics, an expression such as $x \to \infty$ ordinarily indicates the presence of a limit. In big-O notation and the related notations $\Omega$, $\Theta$ and $\asymp$, there is no implied limit, in contrast with the little-o, $\omega$ and $\sim$ notations. Notation such as "$f(x) = O(g(x))$ as $x \to \infty$" can therefore be considered an abuse of notation.

Equals sign


Some consider $f(x) = O(g(x))$ to also be an abuse of notation, since the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have. As de Bruijn says, $O(x) = O(x^2)$ is true but $O(x^2) = O(x)$ is not.[26] Knuth describes such statements as "one-way equalities", since if the sides could be reversed, "we could deduce ridiculous things like $n = n^2$ from the identities $n = O(n^2)$ and $n^2 = O(n^2)$".[27] In another letter, Knuth also pointed out that[28]

the equality sign is not symmetric with respect to such notations [as, in this notation,] mathematicians customarily use the '=' sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle.

For these reasons, some advocate for using set notation and write $f(x) \in O(g(x))$, read as "$f(x)$ is an element of $O(g(x))$", or "$f(x)$ is in the set $O(g(x))$" – thinking of $O(g(x))$ as the class of all functions $h(x)$ such that $h(x) = O(g(x))$.[27] However, the use of the equals sign is customary[26][27] and is more convenient in more complex expressions such as $e^x = 1 + x + O(x^2)$.

The Vinogradov notations $\ll$ and $\gg$, which are widely used in number theory,[9][4][10] do not suffer from this defect, as they more clearly indicate that big-O expresses an inequality rather than an equality. They also enjoy a symmetry that big-O notation lacks: $f \ll g$ means the same as $g \gg f$. In combinatorics and computer science, these notations are rarely seen.[3]

Typesetting


Big O is typeset as an italicized uppercase "O", as in the following example: $O(n^2)$.[29][30] In TeX, it is produced by simply typing 'O' inside math mode. Unlike Greek-named Bachmann–Landau notations, it needs no special symbol. However, some authors use the calligraphic variant $\mathcal{O}$ instead.[31][32]
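For instance, a minimal LaTeX fragment (an illustration added here, not from the source) producing both the plain and the calligraphic forms:

```latex
\documentclass{article}
\begin{document}
% Plain big O: just the letter O in math mode.
The algorithm runs in $O(n \log n)$ time.

% Calligraphic variant preferred by some authors.
The error term is $\mathcal{O}(x^{3})$ as $x \to 0$.
\end{document}
```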

The big O originally stands for "order of" ("Ordnung", Bachmann 1894), and is thus a Latin letter. Neither Bachmann nor Landau ever called it "Omicron". The symbol was viewed much later (1976) by Knuth as a capital omicron,[7] probably in reference to his definition of the symbol Omega. The digit zero should not be used.


References and notes

  1. ^ a b Bachmann, Paul (1894). Analytische Zahlentheorie [Analytic Number Theory] (in German). Vol. 2. Leipzig: Teubner.
  2. ^ a b c d e Landau, Edmund (1909). Handbuch der Lehre von der Verteilung der Primzahlen [Handbook on the theory of the distribution of the primes] (in German). Leipzig: B.G. Teubner; reprinted as two volumes in one by Chelsea, 1974, with an appendix by Dr. Paul T. Bateman. pp. 59–63.
  3. ^ a b c d e f Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2022). "Characterizing running times". Introduction to Algorithms (4th ed.). MIT Press and McGraw-Hill. ISBN 978-0-262-53091-0.
  4. ^ a b c d e f Iwaniec, Henryk; Kowalski, Emmanuel (2004). Analytic Number Theory. American Mathematical Society.
  5. ^ a b c d e Hardy, G. H. (1910). Orders of Infinity: The 'Infinitärcalcül' of Paul du Bois-Reymond. Cambridge University Press. p. 2.
  6. ^ a b c d Vinogradov, Ivan Matveevič (1934). "A new estimate for G(n) in Waring's problem". Doklady Akademii Nauk SSSR (in Russian). 5 (5–6): 249–253.
    Translated in English in:
    Vinogradov, Matveevič (1985). Selected works / Ivan Matveevič Vinogradov; prepared by the Steklov Mathematical Institute of the Academy of Sciences of the USSR on the occasion of his 90th birthday. Springer-Verlag.
  7. ^ a b c d e f g h i Knuth, Donald (April–June 1976). "Big Omicron and big Omega and big Theta". SIGACT News. 8 (2): 18–24. doi:10.1145/1008328.1008329. S2CID 5230246.
  8. ^ Sipser, Michael (2012). Introduction to the Theory of Computation (3rd ed.). Boston, MA: PWS Publishing.
  9. ^ a b c d e f Ivić, A. (1985). The Riemann Zeta-Function. John Wiley & Sons. chapter 9.
  10. ^ a b c d e f Gérald Tenenbaum, Introduction to analytic and probabilistic number theory, « Notation », page xxiii. American Mathematical Society, Providence RI, 2015.
  11. ^ Seidel, Raimund (1991), "A Simple and Fast Incremental Randomized Algorithm for Computing Trapezoidal Decompositions and for Triangulating Polygons", Computational Geometry, 1: 51–64, CiteSeerX 10.1.1.55.5877, doi:10.1016/0925-7721(91)90012-4
  12. ^ a b c Hardy, G.H.; Littlewood, J.E. (1914). "Some problems of diophantine approximation: Part II. The trigonometrical series associated with the elliptic θ functions". Acta Mathematica. 37: 225. doi:10.1007/BF02401834. Archived from the original on 2018-12-12. Retrieved 2017-03-14.
  13. ^ a b Hardy, G.H.; Littlewood, J.E. (1916). "Contribution to the theory of the Riemann zeta-function and the theory of the distribution of primes". Acta Mathematica. 41: 119–196. doi:10.1007/BF02422942.
  14. ^ Landau, E. (1924). "Über die Anzahl der Gitterpunkte in gewissen Bereichen. IV" [On the number of grid points in known regions]. Nachr. Gesell. Wiss. Gött. Math-phys. (in German): 137–150.
  15. ^ Balcázar, José L.; Gabarró, Joaquim. "Nonuniform complexity classes specified by lower and upper bounds" (PDF). RAIRO – Theoretical Informatics and Applications – Informatique Théorique et Applications. 23 (2): 180. ISSN 0988-3754. Archived (PDF) from the original on 14 March 2017. Retrieved 14 March 2017 – via Numdam.
  16. ^ Cucker, Felipe; Bürgisser, Peter (2013). "A.1 Big Oh, Little Oh, and Other Comparisons". Condition: The Geometry of Numerical Algorithms. Berlin, Heidelberg: Springer. pp. 467–468. doi:10.1007/978-3-642-38896-5. ISBN 978-3-642-38896-5.
  17. ^ for example it is omitted in: Hildebrand, A.J. "Asymptotic Notations" (PDF). Department of Mathematics. Asymptotic Methods in Analysis. Math 595, Fall 2009. Urbana, IL: University of Illinois. Archived (PDF) from the original on 14 March 2017. Retrieved 14 March 2017.
  18. ^ Cormen et al. 2022, p. 57.
  19. ^ Cormen et al. 2022, p. 74–75.
  20. ^ Andreas Björklund and Thore Husfeldt and Mikko Koivisto (2009). "Set partitioning via inclusion-exclusion" (PDF). SIAM Journal on Computing. 39 (2): 546–563. doi:10.1137/070683933. Archived (PDF) from the original on 2022-02-03. Retrieved 2022-02-03. See sect.2.3, p.551.
  21. ^ a b Bois-Reymond, Paul du (1870). "Sur la grandeur relative des infinis des fonctions". Annali di Matematica, Serie 2. 4: 338–353. doi:10.1007/BF02420041.
  22. ^ Erdelyi, A. (1956). Asymptotic Expansions. Courier Corporation. ISBN 978-0-486-60318-6.
  23. ^ E. C. Titchmarsh, The Theory of the Riemann Zeta-Function (Oxford; Clarendon Press, 1951)
  24. ^ Hardy, G. H.; Wright, E. M. (2008) [1st ed. 1938]. "1.6. Some notations". An Introduction to the Theory of Numbers. Revised by D. R. Heath-Brown and J. H. Silverman, with a foreword by Andrew Wiles (6th ed.). Oxford: Oxford University Press. ISBN 978-0-19-921985-8.
  25. ^ Hardy, G. H. (1966–1979). Collected papers of G. H. Hardy (Including Joint papers with J. E. Littlewood and others), 7 vols. Clarendon Press, Oxford.
  26. ^ a b de Bruijn, N.G. (1958). Asymptotic Methods in Analysis. Amsterdam: North-Holland. pp. 5–7. ISBN 978-0-486-64221-5. Archived from the original on 2023-01-17. Retrieved 2021-09-15.
  27. ^ a b c Graham, Ronald; Knuth, Donald; Patashnik, Oren (1994). Concrete Mathematics (2 ed.). Reading, Massachusetts: Addison–Wesley. p. 446. ISBN 978-0-201-55802-9. Archived from the original on 2023-01-17. Retrieved 2016-09-23.
  28. ^ Donald Knuth (June–July 1998). "Teach Calculus with Big O" (PDF). Notices of the American Mathematical Society. 45 (6): 687. Archived (PDF) from the original on 2021-10-14. Retrieved 2021-09-05. (Unabridged version Archived 2008-05-13 at the Wayback Machine)
  29. ^ Donald E. Knuth, The art of computer programming. Vol. 1. Fundamental algorithms, third edition, Addison Wesley Longman, 1997. Section 1.2.11.1.
  30. ^ Ronald L. Graham, Donald E. Knuth, and Oren Patashnik, Concrete Mathematics: A Foundation for Computer Science (2nd ed.), Addison-Wesley, 1994. Section 9.2, p. 443.
  31. ^ Sivaram Ambikasaran and Eric Darve, An   Fast Direct Solver for Partial Hierarchically Semi-Separable Matrices, J. Scientific Computing 57 (2013), no. 3, 477–501.
  32. ^ Saket Saurabh and Meirav Zehavi,  -Max-Cut: An  -Time Algorithm and a Polynomial Kernel, Algorithmica 80 (2018), no. 12, 3844–3860.

Notes

  1. ^ The "size" of the input is typically used as an indication of how challenging a given instance of the problem is to solve. The amount of execution time and the amount of memory space required to compute the answer (or to "solve" the problem) are seen as indicating the difficulty of that instance. For purposes of computational complexity theory, big $O$ notation is used for an upper bound on the order of magnitude of all three of those: the size of the input data stream, the amount of execution time required, and the amount of memory space required.

Further reading

  • Knuth, Donald (1997). "1.2.11: Asymptotic Representations". Fundamental Algorithms. The Art of Computer Programming. Vol. 1 (3rd ed.). Addison-Wesley. ISBN 978-0-201-89683-1.
  • Sipser, Michael (1997). Introduction to the Theory of Computation. PWS Publishing. pp. 226–228. ISBN 978-0-534-94728-6.
  • Avigad, Jeremy; Donnelly, Kevin (2004). Formalizing O notation in Isabelle/HOL (PDF). International Joint Conference on Automated Reasoning. doi:10.1007/978-3-540-25984-8_27.
  • Black, Paul E. (11 March 2005). Black, Paul E. (ed.). "big-O notation". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. Retrieved December 16, 2006.
  • Black, Paul E. (17 December 2004). Black, Paul E. (ed.). "little-o notation". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. Retrieved December 16, 2006.
  • Black, Paul E. (17 December 2004). Black, Paul E. (ed.). "Ω". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. Retrieved December 16, 2006.
  • Black, Paul E. (17 December 2004). Black, Paul E. (ed.). "ω". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. Retrieved December 16, 2006.
  • Black, Paul E. (17 December 2004). Black, Paul E. (ed.). "Θ". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. Retrieved December 16, 2006.