## Introduction to Big O notation

In the last article we learned about asymptotic classes and saw how important they are for algorithm analysis. In this article we will learn the first of three notations used in computing to represent the asymptotic behavior of algorithms: **Big O** notation.

Put simply, Big O notation gives us a simplified symbology for expressing an **upper performance limit** for an algorithm: a ceiling on the time the algorithm takes to execute.

- Instead of saying that the time complexity of *Bubble Sort* (in the worst case) is T(n) = 5n^{2}-n+1, we say that its complexity is O(n^{2}). We use the quadratic class n^{2} inside the parentheses of the notation to specify that the algorithm takes, at most, quadratic time to execute.
- Instead of saying that the time complexity of the algorithm that finds the largest value of an array (in the worst case) is T(n) = 4n+1, we say that its complexity is O(n). We use the linear class n inside the parentheses of the notation to specify that the algorithm takes, at most, linear time to execute.
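The two algorithms mentioned above can be sketched as follows (a minimal sketch; the exact step counts that yield T(n) = 5n^{2}-n+1 and T(n) = 4n+1 depend on the cost model used in the previous article):

```python
def bubble_sort(values):
    """Sort a list in place. The nested passes over the input give a
    quadratic number of comparisons in the worst case, i.e. O(n^2)."""
    n = len(values)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
    return values


def find_largest(values):
    """Return the largest element of a non-empty list. A single pass
    over the input: a linear number of comparisons, i.e. O(n)."""
    largest = values[0]
    for v in values[1:]:
        if v > largest:
            largest = v
    return largest
```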

This explanation is a little superficial, but it serves as an introduction to the meaning and use of Big O notation.

## Example for linear function

When we say that the function 4n+1 is O(n), we are saying that there is a function of linear behavior whose growth **is equal to or greater than its own**. This means that 4n+1 will never grow to the point of surpassing linear behavior. Thus, O(n) presents itself as an upper limit, and we know that the function will never exhibit growth behavior that exceeds this upper limit.

Example: for the function 4n+1, there is another function of linear behavior that bounds it above. Note in the graph below that, for values of n > 1, the function 4n+1 is surpassed by another function of class n.

Similarly, it is also correct to say that 4n+1 is O(n^{2}), because the function 4n+1 will never grow to the point of exceeding quadratic behavior. Thus, O(n^{2}) also presents itself as an upper asymptotic limit:

Now it would be **wrong to say** that 4n+1 is O(1), because the function 4n+1 will always grow to the point of surpassing constant behavior. Thus, O(1) **does not present itself** as an upper asymptotic limit:

### Summing up:

4n+1 is O(n^{2}), since it can be surpassed (bounded above) by a quadratic function.

4n+1 is O(n), since it can be surpassed (bounded above) by a linear function.

4n+1 is not O(1), since it **cannot** be surpassed (bounded above) by a constant function.

## Example for quadratic function

When we say that the function 5n^{2}-n+1 is O(n^{2}), we are saying that there is a function of quadratic behavior whose growth **is equal to or greater than its own**. This means that 5n^{2}-n+1 will never grow to the point of surpassing quadratic behavior. Thus, O(n^{2}) presents itself as an upper limit, and we know that the function will never exhibit growth behavior that exceeds this upper limit.

Example: for the function 5n^{2}-n+1, there is another function of quadratic behavior that bounds it above. See below:

Similarly, it is also correct to say that 5n^{2}-n+1 is O(n^{3}), because the function 5n^{2}-n+1 will never grow to the point of exceeding cubic behavior. Thus, O(n^{3}) also presents itself as an upper asymptotic limit:

Now it would be **wrong to say** that 5n^{2}-n+1 is O(n), because the function 5n^{2}-n+1 will always grow to the point of surpassing linear behavior. Thus, O(n) **does not present itself** as an upper asymptotic limit:

### Summing up:

5n^{2}-n+1 is O(n^{3}), since it can be surpassed (bounded above) by a cubic function.

5n^{2}-n+1 is O(n^{2}), since it can be surpassed (bounded above) by a quadratic function.

5n^{2}-n+1 is not O(n), since it **cannot** be surpassed (bounded above) by a linear function.

## Formal mathematical definition of Big O notation

Given two algorithm time complexity functions f(n) and g(n), the formal mathematical definition states that:

- f(n) = O(g(n)) if there exist positive constants c and n_{0} such that f(n) ≤ c.g(n) for all n ≥ n_{0}.
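The definition can also be checked numerically over a finite range. The helper below is a sketch (the names `f`, `g`, `c` and `n0` simply mirror the symbols in the definition); it cannot prove the bound for all n, only test it up to some chosen limit:

```python
def holds_up_to(f, g, c, n0, n_max):
    """Check f(n) <= c * g(n) for every n in [n0, n_max].
    A True result is numerical evidence, not a proof: the definition
    requires the inequality to hold for ALL n >= n0."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))


# 4n+1 <= 5.n holds from n0 = 1 onward (checked here up to 10_000)
print(holds_up_to(lambda n: 4 * n + 1, lambda n: n, c=5, n0=1, n_max=10_000))
```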

This definition seems a bit complicated at first glance, so let’s work through some practical examples.

## First example of mathematical use of Big O notation

Let us try to demonstrate by definition that 4n+1=O(n) (reads “It’s Big O of n”).

By definition, all we have to do is demonstrate that there is a positive constant c and a positive initial value n_{0} so that for all values of n greater than or equal to n_{0}, 4n+1 is always **less than or equal** to c multiplied by n.

In summary: we need to prove that 4n+1 ≤ c.n for a positive constant c and a positive initial value n_{0}.

To do so, the first decision we need to make is choosing a value for the constant c. In this case, let’s take c equal to **5**. The table below shows the results of the inequality for c=5 and some values of n:
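The table’s comparison for c=5 can be regenerated with a short loop (a sketch; the range n = 0 through 5 is an assumption about the original table):

```python
# Values of 4n+1 versus c.n for c = 5.
print(f"{'n':>3} {'4n+1':>6} {'5n':>6}")
for n in range(6):
    print(f"{n:>3} {4 * n + 1:>6} {5 * n:>6}")
```

At n = 0 the left column wins (1 > 0); from n = 1 onward, 5n catches up and pulls ahead.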

Note that for n equal to 0 (zero), the algorithm’s function 4n+1 is greater than c.n. But from n equal to 1 onward, the values of c.n match and then surpass 4n+1. When we plot both functions on a graph, we can see this crossover point more clearly:

This means that O(n) represents an upper asymptotic limit for the function 4n+1, since from a certain value of n onward, c.n surpasses it.

However, what really matters for algorithm analysis is that we have demonstrated our goal: there is a constant c=5 and an n_{0}=1 such that 4n+1 ≤ c.n for all values of n greater than or equal to n_{0}. Thus, it is proved that, in fact, 4n+1 = O(n) (it is Big O of n).

## Second example of the mathematical use of Big O notation

Let us try to demonstrate by definition that 5n^{2}-n+1 = O(n^{2}) (read as “It’s Big O of n squared”).

By definition, all we have to do is demonstrate that there is a positive constant c and a positive initial value n_{0} so that for all n values greater than or equal to n_{0}, 5n^{2}-n+1 is always **less than or equal** to c multiplied by n^{2}.

In summary, we need to prove that 5n^{2}-n+1 ≤ c.n^{2} for a positive constant c and a positive initial value n_{0}.

Again, we need to choose a value for the constant c. In this case, we will also consider c equal to the value **5**. Note below a table showing the inequality results for c=5 and for some values of n:
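As before, the table’s comparison for c=5 can be regenerated with a short loop (a sketch; the range n = 0 through 5 is an assumption about the original table):

```python
# Values of 5n^2-n+1 versus c.n^2 for c = 5.
print(f"{'n':>3} {'5n^2-n+1':>9} {'5n^2':>6}")
for n in range(6):
    print(f"{n:>3} {5 * n * n - n + 1:>9} {5 * n * n:>6}")
```

At n = 0 the left column wins (1 > 0); from n = 1 onward, 5n^{2} catches up and pulls ahead.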

Note that for n equal to 0 (zero), the algorithm’s function 5n^{2}-n+1 is greater than c.n^{2}. But from n equal to **1** onward, the values of c.n^{2} match and then surpass 5n^{2}-n+1. When we plot both functions on a graph, we can see this crossover point more clearly:

This means that O(n^{2}) represents an upper asymptotic limit for the function 5n^{2}-n+1, since from a certain value of n onward, c.n^{2} surpasses it. However, what really matters for algorithm analysis is that we have demonstrated our goal: there is a constant c=5 and an n_{0}=1 such that 5n^{2}-n+1 ≤ c.n^{2} for all values of n greater than or equal to n_{0}. Thus, it is proved that, in fact, 5n^{2}-n+1 = O(n^{2}) (it is Big O of n squared).

## Third example: a failed attempt

Let’s try to demonstrate by definition that 2n^{2}-n=O(n).

By definition, all we have to do is demonstrate that there is a positive constant c and a positive initial value n_{0} so that, for all values of n greater than or equal to n_{0}, 2n^{2}-n is always **less than or equal** to c multiplied by n.

In summary, we need to prove that 2n^{2}-n ≤ c.n for a positive constant c and a positive initial value n_{0}.

At first glance, it is clear that 2n^{2}-n has quadratic behavior and c.n has linear behavior, so we suspect the demonstration must fail: a linear function (c.n) cannot represent an upper asymptotic limit for a quadratic function (2n^{2}-n). But let’s take a closer look to see why this is impossible.

Let’s perform some algebraic transformations to isolate the constant c in the inequality and understand its possible values. Starting from 2n^{2}-n ≤ c.n and dividing both sides by n (valid for n > 0), we obtain 2n-1 ≤ c.

The resulting inequality already shows us that it is impossible for any constant c to make the expression valid for large values of n (n→∞): no matter how large the value of c is, 2n-1 will eventually exceed it.

Below are some graphs illustrating scenarios in which c.n is overtaken by the quadratic function 2n^{2}-n:

That is, no matter how large the constant c is, the function 2n^{2}-n will always eventually surpass c.n.
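This can be illustrated numerically: for any candidate constant c, the bound 2n^{2}-n ≤ c.n breaks down as soon as 2n-1 > c. The sketch below (the sample constants are arbitrary) finds that breaking point for a few values of c:

```python
def first_failure(c):
    """Smallest n >= 1 with 2n^2 - n > c*n, i.e. the first point where
    the claimed bound 2n^2 - n <= c*n stops holding."""
    n = 1
    while 2 * n * n - n <= c * n:
        n += 1
    return n


for c in (10, 100, 1000):
    print(f"c = {c}: bound fails at n = {first_failure(c)}")
```

However large we choose c, the loop always terminates: the failure point simply moves out to roughly n = c/2, so no single (c, n_{0}) pair works for all n.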

Therefore, we cannot demonstrate our goal: there are no constants c and n_{0} such that 2n^{2}-n ≤ c.n for all values of n greater than or equal to n_{0}. Thus, it is proved that, in fact, 2n^{2}-n ≠ O(n) (read as “it is **not** Big O of n”).

## Exercises

Try to demonstrate mathematically that n^{2}+800=O(n^{2}).

Try to demonstrate mathematically that 2n+10=O(n).

Try to demonstrate mathematically that n^{2}=O(n).

Try to demonstrate mathematically that 7n-2=O(n).

Try to demonstrate mathematically that n^{2}+20n+5=O(n^{3}).
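As a starting point for the second exercise, one possible choice of witnesses (an assumption; other valid choices exist) is c=3 and n_{0}=10, since 2n+10 ≤ 3n is equivalent to n ≥ 10. This can be spot-checked numerically:

```python
# 2n + 10 <= 3n  <=>  10 <= n, so c = 3 and n0 = 10 are valid witnesses.
print(all(2 * n + 10 <= 3 * n for n in range(10, 10_001)))  # prints True
```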