# Big O notation (algorithm analysis)

Learn the details of Big O notation: the first of the three main notations used in algorithm analysis to represent performance.

## Introduction to Big O notation

In the last article we learned about asymptotic classes and saw how important they are for algorithm analysis. In this article we will learn the first of three notations used in computing to represent the asymptotic behavior of algorithms: the Big O notation.

If you have not read the article on "asymptotic classes", follow this link to that article first; it covers valuable background for this subject.

As objectively as possible: Big O notation provides a simplified symbolism to represent an upper performance limit for an algorithm, that is, a maximum time the algorithm takes to execute.

• Instead of saying that the time complexity of Bubble Sort (in the worst case) is T(n) = 5n² - n + 1, we say that its complexity is Big O(n²). We use the quadratic class n² inside the parentheses of the notation to specify that the algorithm takes, at most, quadratic time to execute.
• Rather than saying that the time complexity of the algorithm that finds the largest value in an array (in the worst case) is T(n) = 4n + 1, we say its complexity is Big O(n). We use the linear class n inside the parentheses of the notation to specify that the algorithm takes, at most, linear time to execute.
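As an illustration, here is a minimal sketch (in Python; the function name `find_largest` is ours, not from the article) of the largest-value scan mentioned above, with the operation counts that give rise to a linear function like 4n + 1:

```python
def find_largest(values):
    """Return the largest element of a non-empty list."""
    largest = values[0]          # one assignment before the loop
    for v in values[1:]:         # the loop body runs n - 1 times
        if v > largest:          # one comparison per iteration
            largest = v
    return largest

# The total work is a linear function of n (something like a*n + b),
# so in the worst case the running time is Big O(n).
print(find_largest([3, 41, 7, 12]))
```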

This explanation is a little superficial, but it is enough of an introduction to get you started on the meaning and use of Big O notation.

## Example for linear function

When we say that the function 4n + 1 is Big O(n), we are saying that there is a linear function whose asymptotic growth is equal to or greater than its own. This means that the function 4n + 1 will never grow to the point where it surpasses linear behavior. Thus, Big O(n) presents itself as an upper limit, and we know that the function will never exhibit growth that exceeds this upper limit.

Example: for the function 4n + 1, there is another function of linear behavior that bounds it from above. Note in the graph below that, for values of n > 1, the function 4n + 1 is surpassed by another function of class n.

Similarly, it is also correct to say that 4n + 1 is Big O(n²), because the function 4n + 1 will never grow to the point where it exceeds quadratic behavior. Thus, O(n²) also presents itself as an upper asymptotic limit.

Now, it would be wrong to say that 4n + 1 is Big O(1), because the function 4n + 1 will always grow to the point where it surpasses constant behavior. Thus, Big O(1) does not present itself as an upper asymptotic limit.

### Summing up

4n + 1 is Big O(n²), as it can be bounded above by a quadratic function.

4n + 1 is Big O(n), as it can be bounded above by a linear function.

4n + 1 is not Big O(1), as it cannot be bounded above by a constant function.
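These three claims can be spot-checked numerically. The sketch below is only evidence, not a proof: the constants (c = 1 with n₀ = 5, c = 5 with n₀ = 1, and the threshold 1000) are choices of ours, and a finite scan cannot cover all n.

```python
f = lambda n: 4 * n + 1

# Big O(n^2): with c = 1 and n0 = 5, 4n + 1 <= n^2 holds from n = 5 onward
print(all(f(n) <= n * n for n in range(5, 1001)))    # True

# Big O(n): with c = 5 and n0 = 1, 4n + 1 <= 5n holds from n = 1 onward
print(all(f(n) <= 5 * n for n in range(1, 1001)))    # True

# not Big O(1): whatever constant we pick (say 1000), f eventually exceeds it
print(any(f(n) > 1000 for n in range(1, 1001)))      # True
```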

## Example for quadratic function

When we say that the function 5n² - n + 1 is Big O(n²), we are saying that there is a quadratic function whose asymptotic growth is equal to or greater than its own. This means that the function 5n² - n + 1 will never grow to the point where it exceeds quadratic behavior. Thus, O(n²) presents itself as an upper limit, and we know that the function will never exhibit growth that exceeds this upper limit.

Example: for the function 5n² - n + 1, there is another function of quadratic behavior that bounds it from above. See the graph below.

Similarly, it is also correct to say that 5n² - n + 1 is Big O(n³), because the function 5n² - n + 1 will never grow to the point where it exceeds cubic behavior. Thus, O(n³) also presents itself as an upper asymptotic limit.

Now, it would be wrong to say that 5n² - n + 1 is Big O(n), because the function 5n² - n + 1 will always grow to the point where it surpasses linear behavior. Thus, O(n) does not present itself as an upper asymptotic limit.

### Summing up

5n² - n + 1 is Big O(n³), because it can be bounded above by a cubic function.

5n² - n + 1 is Big O(n²), because it can be bounded above by a quadratic function.

5n² - n + 1 is not Big O(n), because it cannot be bounded above by a linear function.
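As before, the three claims can be spot-checked numerically. Again a sketch only: the constants c = 5 (for both bounds) and c = 100 (for the failed linear bound) are choices of ours, and a finite scan is evidence rather than proof.

```python
g = lambda n: 5 * n * n - n + 1

# Big O(n^3): with c = 5 and n0 = 1, 5n^2 - n + 1 <= 5n^3 from n = 1 onward
print(all(g(n) <= 5 * n**3 for n in range(1, 1001)))   # True

# Big O(n^2): with c = 5 and n0 = 1, 5n^2 - n + 1 <= 5n^2 from n = 1 onward
print(all(g(n) <= 5 * n**2 for n in range(1, 1001)))   # True

# not Big O(n): even a steep line like 100n is eventually surpassed
print(any(g(n) > 100 * n for n in range(1, 1001)))     # True
```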

## Formal mathematical definition of Big O notation

Given two time complexity functions of algorithms, f(n) and g(n), the formal mathematical definition states that:

• f(n) = O(g(n)) if there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.

This definition seems a bit complicated at first glance, but let's work through it with some practical examples.
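Before the worked examples, the definition can be turned into a small numeric checker. This is a hedged sketch: the function name `holds_up_to` and the example functions are ours, and a finite scan up to some `upper` bound is only evidence for the inequality, since the definition quantifies over all n ≥ n₀.

```python
def holds_up_to(f, g, c, n0, upper=10_000):
    """Check f(n) <= c * g(n) for all integers n0 <= n <= upper.

    Evidence only: the real definition requires the inequality
    for EVERY n >= n0, which a finite scan cannot establish."""
    return all(f(n) <= c * g(n) for n in range(n0, upper + 1))

# Hypothetical example: 3n + 7 = O(n) with c = 4 and n0 = 7,
# since 3n + 7 <= 4n exactly when n >= 7.
print(holds_up_to(lambda n: 3 * n + 7, lambda n: n, c=4, n0=7))    # True

# A quadratic is not bounded by a line, so this scan finds a violation.
print(holds_up_to(lambda n: n * n, lambda n: n, c=100, n0=1))      # False
```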

## First example of mathematical use of Big O notation

Let us demonstrate by definition that 4n + 1 = O(n) (read "is Big O of n").

By definition, all we have to do is demonstrate that there is a positive constant c and a positive initial value n₀ such that, for all values of n greater than or equal to n₀, 4n + 1 is always less than or equal to c multiplied by n.

In summary: we need to prove that 4n + 1 ≤ c·n, for a positive constant c and a positive initial value n₀.

To do so, the first decision we need to make is to choose a value for the constant c. In this case, let's take c = 5. The table below shows the inequality for c = 5 and some values of n:

| n | 4n + 1 | c·n = 5n |
|---|--------|----------|
| 0 | 1      | 0        |
| 1 | 5      | 5        |
| 2 | 9      | 10       |
| 3 | 13     | 15       |
| 4 | 17     | 20       |

Note that for n = 0 the function 4n + 1 is greater than c·n, but from n = 1 onward the values of c·n catch up with and then surpass 4n + 1. When we plot a graph of both functions, we see this crossover point more clearly. This means that O(n) represents an upper asymptotic bound for the function 4n + 1: from a certain value of n onward, c·n surpasses it.

However, what really matters for algorithm analysis is that we have demonstrated our goal: there is a constant c = 5 and an n₀ = 1 such that 4n + 1 ≤ c·n for all values of n greater than or equal to n₀. Thus, it is proved that, in fact, 4n + 1 = O(n) (it is Big O of n).

## Second example of the mathematical use of Big O notation

Let us demonstrate by definition that 5n² - n + 1 = O(n²) (read "is Big O of n squared").

By definition, all we have to do is demonstrate that there is a positive constant c and a positive initial value n₀ such that, for all values of n greater than or equal to n₀, 5n² - n + 1 is always less than or equal to c multiplied by n².

In summary, we need to prove that 5n² - n + 1 ≤ c·n², for a positive constant c and an initial value n₀.

Again, we need to choose a value for the constant c. In this case, we will also take c = 5. The table below shows the inequality for c = 5 and some values of n:

| n | 5n² - n + 1 | c·n² = 5n² |
|---|-------------|------------|
| 0 | 1           | 0          |
| 1 | 5           | 5          |
| 2 | 19          | 20         |
| 3 | 43          | 45         |
| 4 | 77          | 80         |

Note that for n = 0 the function 5n² - n + 1 is greater than c·n², but from n = 1 onward the values of c·n² catch up with and then surpass 5n² - n + 1. When we plot a graph of both functions, we see this crossover point more clearly. This means that O(n²) represents an upper asymptotic bound for the function 5n² - n + 1: from a certain value of n onward, c·n² surpasses it.

However, what really matters for algorithm analysis is that we have demonstrated our goal: there is a constant c = 5 and an n₀ = 1 such that 5n² - n + 1 ≤ c·n² for all values of n greater than or equal to n₀. Thus, it is proved that, in fact, 5n² - n + 1 = O(n²) (it is Big O of n squared).

## Third example: a failed attempt

Let's try to demonstrate by definition that 2n² - n = O(n).

By definition, all we have to do is demonstrate that there is a positive constant c and an initial value n₀ such that, for all values of n greater than or equal to n₀, 2n² - n is always less than or equal to c multiplied by n.

In summary, we need to prove that 2n² - n ≤ c·n, for a constant c and an initial value n₀.

At first glance, it is clear that 2n² - n has quadratic behavior while c·n has linear behavior, so we are tempted to claim immediately that the demonstration must fail, since a linear function (c·n) cannot represent an upper asymptotic bound for a quadratic function (2n² - n). But let's take a closer look to see exactly why this is impossible.

Let's perform some algebraic transformations to isolate the constant c in the inequality and understand its possible values. Starting from 2n² - n ≤ c·n and dividing both sides by n (valid for n > 0), we obtain 2n - 1 ≤ c. The resulting inequality already shows that it is impossible for any constant c to make the expression valid for large values of n (n → ∞): no matter how large c is, 2n - 1 will eventually exceed it.

Below are graphs illustrating c·n being overtaken by the quadratic function 2n² - n, for c = 10, c = 100, and c = 1000. That is, no matter how large the constant c, the function 2n² - n will always outgrow c·n.

Therefore, we cannot demonstrate our goal: there are no constants c and n₀ such that 2n² - n ≤ c·n for all values of n greater than or equal to n₀. Thus, it is proved that, in fact, 2n² - n ≠ O(n) (read "is not Big O of n").
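The failed attempt can also be seen numerically. A small sketch, using the inequality 2n - 1 ≤ c derived above: for any fixed c (the values 10, 100, and 1000 here are just samples), the first integer n with 2n - 1 > c already breaks 2n² - n ≤ c·n.

```python
f = lambda n: 2 * n * n - n

for c in (10, 100, 1000):
    n = (c + 1) // 2 + 1      # smallest integer n with 2n - 1 > c
    print(f(n) > c * n)       # True: c*n is surpassed, however large c is
```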

## Exercises

Try to demonstrate mathematically that n² + 800 = O(n²).

Try to demonstrate mathematically that 2n + 10 = O(n).

Try to demonstrate mathematically that n² = O(n).

Try to demonstrate mathematically that 7n - 2 = O(n).

Try to demonstrate mathematically that n² + 20n + 5 = O(n³).