## Introduction to Theta notation

In the previous article we learned about the second computational notation used in algorithm analysis to describe the asymptotic behavior of algorithms: the Ω (Omega) notation. In this article we will introduce the third computational notation used to mathematically define the asymptotic behavior of algorithms: the Theta notation, represented by the Greek letter Θ.

If you have not read the article on the Ω (Omega) notation, click this link to go to the article where we explain it in detail. It is highly recommended that you have some knowledge of Big O and Ω (Omega) notation before reading this article!

As objectively as possible: the Θ notation provides us with a simplified symbology to represent a **tight performance bound** for an algorithm, an **exact bound** on the time an algorithm takes to execute. That is, the Θ notation represents the meeting point between the Ω notation (lower bound) and the Big O notation (upper bound).

## Formal mathematical definition of Theta notation

Unlike what we did for the articles on Ω and Big O notations, let’s cover the mathematical definition of Θ notation before moving on to the examples.

Given two algorithm time complexity functions, f(n) and g(n), the formal mathematical definition states that:

- f(n) = Θ(g(n)) if there exist two positive constants c1 and c2 and a value n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.

That is, f(n) is **bounded both above and below** by the same function g(n), up to constant factors. Let’s demonstrate with some illustrated examples.
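The definition translates almost directly into a brute-force numerical check. The Python sketch below (a sanity check over an arbitrary finite sample range, not a proof, since the definition quantifies over all n ≥ n0) tests both inequalities for chosen constants:

```python
def satisfies_theta(f, g, c1, c2, n0, n_max=10_000):
    """Check c1*g(n) <= f(n) <= c2*g(n) for every sampled n in [n0, n_max].

    A finite check like this can only suggest that f(n) = Theta(g(n));
    a real proof must argue algebraically for all n >= n0.
    """
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max + 1))

# For instance, 4n+1 = Theta(n) with c1=4, c2=5, n0=1:
print(satisfies_theta(lambda n: 4 * n + 1, lambda n: n, 4, 5, 1))  # True
```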

## First example of mathematical use of Theta notation

Let us try to demonstrate by definition that 4n+1 = Θ(n).

By definition, all we have to do is demonstrate that there are two positive constants c1 and c2 and a positive initial value n0 such that, for all values of n greater than or equal to n0, 4n+1 is always **greater than or equal to** c1 multiplied by n and **less than or equal to** c2 multiplied by n (at the same time).

In summary: we need to prove that c1·n ≤ 4n+1 ≤ c2·n, for positive constants c1 and c2 and a positive initial value n0.

To do so, the first decision we need to make is to choose values for the constants c1 and c2. In this case, let’s take c1 = **4** and c2 = **5**. Note the table below showing the inequality results for some values of n:

Note that for all values of n ≥ 1 shown in the table, the algorithm’s function 4n+1 is always greater than 4n and less than or equal to 5n. When we plot a graph of the functions, we see more clearly how the function 4n+1 is bounded above by the function 5n and below by the function 4n:

This means that the function 4n+1 never grows below 4n and never grows beyond 5n. For this reason we say that Θ(n) represents a **tight** asymptotic bound for the function 4n+1, because it is bounded above and below by two other functions of the same asymptotic class, the linear class.

In this way we prove our goal: there exist two positive constants c1 = 4 and c2 = 5 and an n0 = 1 such that c1·n ≤ 4n+1 ≤ c2·n for all values of n greater than or equal to n0. Therefore, it is proved that, in fact, 4n+1 = Θ(n).
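As a quick sanity check (in Python; this samples a finite range and is not a substitute for the algebraic argument), we can confirm the double inequality for a large number of values of n:

```python
# Check c1*n <= 4n+1 <= c2*n with c1=4, c2=5, n0=1, for n = 1 .. 100000.
ok = all(4 * n <= 4 * n + 1 <= 5 * n for n in range(1, 100_001))
print(ok)  # True
```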

## Second example: a failed attempt

Let’s try to demonstrate by definition that 5n²-n+1 = Θ(n).

By definition, all we have to do is demonstrate that there are two positive constants c1 and c2 and a positive initial value n0 such that, for all values of n greater than or equal to n0, 5n²-n+1 is always **greater than or equal to** c1 multiplied by n and **less than or equal to** c2 multiplied by n (at the same time).

In summary: we need to prove that c1·n ≤ 5n²-n+1 ≤ c2·n, for positive constants c1 and c2 and a positive initial value n0.

To do so, the first decision we need to make is to choose values for the constants c1 and c2. In this case, let’s take c1 = **5** and c2 = **10**. Note the table below showing the inequality results for some values of n:

Note that for all values of n ≥ 1 shown in the table, the algorithm’s function 5n²-n+1 is indeed greater than or equal to the function 5n. However, from n = 3 onward the function surpasses 10n. It turns out that the function 10n, being linear, can never keep up with the function 5n²-n+1, which is quadratic.

See in the graph below the relationship between the functions. We can see more clearly how the function 5n²-n+1 is bounded below by the function 5n, for values of n ≥ 1. But note that the function 10n does not represent an asymptotic upper bound, being surpassed by the function 5n²-n+1 for values of n > 2.

This means that Θ(n) does not represent a tight asymptotic bound for the function 5n²-n+1. There are no positive constants c1 and c2 and an n0 such that c1·n ≤ 5n²-n+1 ≤ c2·n for all values of n greater than or equal to n0. Thus, it is proved that, in fact, 5n²-n+1 ≠ Θ(n) (read “**not** Theta of n”).
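We can observe the failure numerically. The Python sketch below (with arbitrarily chosen candidate values of c2) finds, for each candidate constant, the first n at which 5n²-n+1 breaks through the would-be upper bound c2·n. No matter how large c2 is, a crossover point always exists:

```python
def crossover(c2):
    """Return the first n where 5n^2 - n + 1 > c2 * n."""
    n = 1
    while 5 * n * n - n + 1 <= c2 * n:
        n += 1
    return n

for c2 in (10, 100, 1000):
    print(c2, crossover(c2))  # 10 -> 3, 100 -> 21, 1000 -> 201
```

Raising c2 only pushes the crossover further out; it never eliminates it, which is exactly why no valid constant exists.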

## Third example: a failed attempt

Let’s try to demonstrate by definition that 4n+4 = Θ(n²).

By definition, all we have to do is demonstrate that there are two positive constants c1 and c2 and a positive initial value n0 such that, for all values of n greater than or equal to n0, 4n+4 is always **greater than or equal to** c1 multiplied by n² and **less than or equal to** c2 multiplied by n² (at the same time).

In summary: we need to prove that c1·n² ≤ 4n+4 ≤ c2·n², for positive constants c1 and c2 and a positive initial value n0.

To do so, the first decision we need to make is to choose values for the constants c1 and c2. In this case, let’s take c1 = **1** and c2 = **3**. Note the table below showing the inequality results for some values of n:

Note that for all values of n ≥ 2 shown in the table, the algorithm’s function 4n+4 is indeed less than or equal to the function 3n². However, from n = 5 onward the function is surpassed by the function n². It turns out that the function n², being quadratic, can never be overtaken again by the function 4n+4, which is linear.

See in the graph below the relationship between the functions. We can see more clearly how the function 4n+4 is bounded above by the function 3n², for values of n ≥ 2. But note that the function n² does not represent an asymptotic lower bound, since it surpasses the function 4n+4 for values of n > 4.

This means that Θ(n²) does not represent a tight asymptotic bound for the function 4n+4. There are no positive constants c1 and c2 and an n0 such that c1·n² ≤ 4n+4 ≤ c2·n² for all values of n greater than or equal to n0. Thus, it is proved that, in fact, 4n+4 ≠ Θ(n²) (read “**not** Theta of n squared”).
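Again the failure can be checked numerically. This Python sketch (with arbitrarily chosen candidate constants) finds, for each c1, the first n at which c1·n² outgrows 4n+4, breaking the would-be lower bound:

```python
def first_violation(c1):
    """Return the first n where c1 * n^2 > 4n + 4."""
    n = 1
    while c1 * n * n <= 4 * n + 4:
        n += 1
    return n

print(first_violation(1))    # 5  (matches the table: n^2 > 4n+4 from n=5)
print(first_violation(0.5))  # 9
print(first_violation(0.1))  # 41
```

Shrinking c1 only delays the violation; it never prevents it, so no valid lower-bound constant exists.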

## Relationships between Big O, Omega and Theta

An easy and practical way to understand the relationship between the three asymptotic notations is to determine the asymptotic boundaries of a given function by constructing what we call the **asymptotic relationship table**. See how it can be done.

Given a time complexity function T(n) = 4n+4, we can easily discover all of its Big O, Ω, and Θ relations by constructing a table where the rows represent asymptotic classes and the columns represent the Big O, Ω, and Θ notations.

Note that the function T(n) = 4n+4 is Θ(n) (Theta of n) precisely because it is Ω(n) (Omega of n) and O(n) (Big O of n) at the same time. That is, a function is Theta of a class if and only if it is Omega and Big O of that same asymptotic class!
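A rough way to guess such a table programmatically is to compare the ratio f(n)/g(n) at a very large n: if the ratio stays bounded, f looks like O(g); if it stays away from zero, f looks like Ω(g); both at once suggest Θ(g). The Python sketch below is a heuristic only (the thresholds and the sample point n = 10⁸ are arbitrary choices), not a proof:

```python
def relations(f, g, n=10**8):
    """Heuristically guess whether f is O(g), Omega(g), and Theta(g)."""
    r = f(n) / g(n)
    big_o = r < 1e6    # ratio bounded above -> looks like f = O(g)
    omega = r > 1e-6   # ratio bounded below -> looks like f = Omega(g)
    return big_o, omega, big_o and omega

f = lambda n: 4 * n + 4
for name, g in [("1", lambda n: 1), ("n", lambda n: n), ("n^2", lambda n: n * n)]:
    print(name, relations(f, g))
# 1   -> (False, True, False): Omega(1) but not O(1)
# n   -> (True, True, True):   Theta(n)
# n^2 -> (True, False, False): O(n^2) but not Omega(n^2)
```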

Thus, the meaning of asymptotic Big O, Ω and Θ notations is understood. They serve to represent performance limits of algorithms.

## Hard limits and loose limits of algorithms

The hard and loose limit concepts allow us to rate the overall performance of an algorithm, taking into account its best and worst case.

When we state that the algorithm *“Bubble Sort”* has:

- In the worst case: T(n) = 5n²-n+1
- In the best case: T(n) = 3n+2

We are stating that:

- In the worst case: T(n) = Θ(n²) (Theta of n squared), as it has quadratic complexity.
- In the best case: T(n) = Θ(n) (Theta of n), as it has linear complexity.

This means that the *“Bubble Sort”* algorithm has a **loose asymptotic limit**, since its overall performance behaves differently in the best and worst cases. That is, overall it is O(n²) and Ω(n).
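We can make the loose bound concrete by counting comparisons. The sketch below uses a generic textbook bubble sort with the early-exit optimization (an assumption, since the article does not show the algorithm’s code) and counts comparisons on sorted versus reversed input:

```python
def bubble_sort_comparisons(a):
    """Bubble sort with early exit; return the number of comparisons made."""
    a = list(a)
    count = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps in a full pass: the array is sorted
            break
    return count

n = 1000
print(bubble_sort_comparisons(range(n)))         # best case:  n-1 = 999
print(bubble_sort_comparisons(range(n, 0, -1)))  # worst case: n(n-1)/2 = 499500
```

The linear best case and quadratic worst case are exactly the Ω(n) and O(n²) bounds stated above.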

When we state that the *“Find Max”* algorithm (which calculates the largest value of an array) has:

- In the worst case: T(n) = 4n+1
- In the best case: T(n) = 3n+2

We are stating that:

- In the worst case: T(n) = Θ(n) (Theta of n), as it has linear complexity.
- In the best case: T(n) = Θ(n) (Theta of n), as it has linear complexity.

This means that the *“Find Max”* algorithm has a **hard asymptotic limit**, since overall its performance behaves the same for the best and worst case. That is, it is generally Θ(n).
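By contrast, a straightforward “Find Max” scan (sketched below as a generic implementation, since the article does not show its code) performs exactly n-1 comparisons no matter how the array is ordered, which is why its best and worst cases share the same Θ(n) class:

```python
def find_max_comparisons(a):
    """Scan for the maximum; return the number of comparisons made."""
    best = a[0]
    count = 0
    for x in a[1:]:
        count += 1
        if x > best:
            best = x
    return count

n = 1000
print(find_max_comparisons(list(range(n))))         # ascending input:  999
print(find_max_comparisons(list(range(n, 0, -1))))  # descending input: 999
```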

## Exercises

Try to demonstrate mathematically that n²+800 = Θ(n²).

Try to demonstrate mathematically that n+10 = Θ(n²).

Try to demonstrate mathematically that 2n+10 = Θ(n).

Try to demonstrate mathematically that 7n-2 = Θ(1).

Try to demonstrate mathematically that n²+20n+5 = Θ(n³).

Fill in the asymptotic relationship table below: