# Theta notation

Learn all the details of Theta notation: the last of the top 3 notations used in algorithm analysis to represent performance.

In the last article we met the second computational notation used to define the asymptotic behavior of algorithms: the Ω (Omega) notation. In this article we will teach you the third computational notation used to mathematically define the asymptotic behavior of algorithms: the Theta notation, represented by the Greek letter Θ.

If you haven’t read the article about the Ω (Omega) notation, it is highly recommended that you do so first: some knowledge of the Big O and Ω (Omega) notations will make this article much easier to follow!

Put as objectively as possible, the Θ (Theta) notation provides us with a simplified notation to represent a tight performance bound for an algorithm: an exact asymptotic limit on the time the algorithm takes to run.

That is, the Θ notation represents the meeting point between the Ω (lower limit) and Big O (upper limit) notations.

## Formal mathematical definition

Unlike what we did for the articles on Ω and Big O notations, let’s address the mathematical definition of Θ notation before moving on to the examples.

Given two algorithm time complexity functions, f(n) and g(n), the formal mathematical definition states that:

• f(n) = Θ(g(n)) (read “f of n is Theta of g of n”) if there exist two positive constants c1 and c2 and a value n0, such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0

That is, f(n) is Θ(g(n)) if it is bounded from above and below (at the same time) by constant multiples of the same function g(n).

Let’s demonstrate with some illustrated examples.

## Mathematical resolution: 1st example

Let’s try to demonstrate by definition that 4n+1=Θ(n) (read “is theta of n”).

First, we have:

• f(n) = 4n+1
• g(n) = n.

By the mathematical definition, we need to demonstrate that there exist two positive constants c1 and c2 and an initial value n0, so that:

• c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.

So, for this problem, we need to show that:

• c1·n ≤ 4n+1 ≤ c2·n for all n ≥ n0.

In summary: we need to prove that, for all values of n greater than or equal to n0, the function 4n+1 is:

• greater than or equal to c1 multiplied by n and;
• less than or equal to c2 multiplied by n.

To do so, the first decision we need to make is to choose values for the constants c1 and c2. In this case, let’s take c1 equal to 4 and c2 equal to 5. The table below shows the inequality results for some values of n:

| n | 4n | 4n+1 | 5n |
|---|----|------|----|
| 1 | 4  | 5    | 5  |
| 2 | 8  | 9    | 10 |
| 3 | 12 | 13   | 15 |
| 4 | 16 | 17   | 20 |
| 5 | 20 | 21   | 25 |

Note that, for all values of n ≥ 1 expressed in the table, the function 4n+1 is always greater than or equal to 4n and less than or equal to 5n.

When we plot a graph for both functions, we see even more clearly how the function 4n+1 is bounded above by the function 5n and below by the function 4n. This means that the function 4n+1 will never grow more slowly than 4n, nor faster than 5n. For this reason, we say that Θ(n) represents a tight asymptotic limit for the function 4n+1, because it is bounded above and below by two functions of the same asymptotic class: the linear class n.

In this way, we have proved our objective: there exist two positive constants c1=4 and c2=5 and an n0=1, such that c1·n ≤ 4n+1 ≤ c2·n for all values of n greater than or equal to n0.

Therefore, it is proved that, in fact, 4n+1 = Θ(n).
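The proof above can also be spot-checked numerically. Below is a minimal sketch (the class and method names are our own, for illustration only) that tests the inequality c1·g(n) ≤ f(n) ≤ c2·g(n) over a large range of n:

```java
public class ThetaCheck {
    // f and g from the first example: f(n) = 4n+1 and g(n) = n.
    static long f(long n) { return 4 * n + 1; }
    static long g(long n) { return n; }

    // Returns true if c1*g(n) <= f(n) <= c2*g(n) holds for every n in [n0, limit].
    static boolean holdsUpTo(long c1, long c2, long n0, long limit) {
        for (long n = n0; n <= limit; n++) {
            if (!(c1 * g(n) <= f(n) && f(n) <= c2 * g(n))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // With c1 = 4, c2 = 5 and n0 = 1, the inequality holds for every tested n.
        System.out.println(holdsUpTo(4, 5, 1, 1_000_000)); // prints true
    }
}
```

Of course, checking a finite range is not a proof; only the algebraic argument above guarantees the inequality for all n ≥ n0.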

## Mathematical resolution: 2nd example (wrong)

Let’s try to demonstrate by definition that 5n²-n+1 = Θ(n) (read “is theta of n”).

First, we have:

• f(n) = 5n²-n+1
• g(n) = n.

So, for this problem, we need to show that:

• c1·n ≤ 5n²-n+1 ≤ c2·n for all n ≥ n0.

In summary: we need to prove that, for all values of n greater than or equal to n0, the function 5n²-n+1 is:

• greater than or equal to c1 multiplied by n and;
• less than or equal to c2 multiplied by n.

To do so, the first decision we need to make is to choose values for the constants c1 and c2. In this case, let’s take c1 equal to 5 and c2 equal to 10. The table below shows the inequality results for some values of n:

| n | 5n | 5n²-n+1 | 10n |
|---|----|---------|-----|
| 1 | 5  | 5       | 10  |
| 2 | 10 | 19      | 20  |
| 3 | 15 | 43      | 30  |
| 4 | 20 | 77      | 40  |

Note that, for all values of n ≤ 2 expressed in the table, the function 5n²-n+1 is in fact less than or equal to the function 10n. However, from the value n=3 onwards, the function 5n²-n+1 is greater than the function 10n (violating the rule f(n) ≤ c2·g(n)).

This happens because the function 10n, being linear, will never be able to keep up with the function 5n²-n+1, which is quadratic: whatever constant c2 we choose, the quadratic function eventually grows past c2·n.

See the relationship between the functions in the chart below. We can visualize with greater clarity how the function 5n bounds the function 5n²-n+1 from below for values of n ≥ 1. However, notice that the function 10n does not represent an asymptotic upper limit for the function 5n²-n+1, being surpassed by it for values of n > 2. This means that Θ(n) does not represent a tight asymptotic limit for the function 5n²-n+1: there are no positive constants c1 and c2 and an n0, such that c1·n ≤ 5n²-n+1 ≤ c2·n for all values of n greater than or equal to n0.

In this way, it is proved that, in fact, 5n²-n+1 ≠ Θ(n) (read “is not theta of n”).
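The failure can also be observed numerically. Here is a minimal sketch (names are our own) that, for a candidate constant c2, finds the first n where the upper inequality f(n) ≤ c2·n breaks:

```java
public class ThetaFails {
    // f from the second example: f(n) = 5n^2 - n + 1.
    static long f(long n) { return 5 * n * n - n + 1; }

    // Returns the first n >= 1 where f(n) > c2 * n, i.e. where the
    // candidate upper bound c2*n is overtaken by the quadratic.
    static long firstViolation(long c2) {
        for (long n = 1; ; n++) {
            if (f(n) > c2 * n) return n;
        }
    }

    public static void main(String[] args) {
        // No matter how large we pick c2, the quadratic eventually wins.
        System.out.println(firstViolation(10));   // prints 3, as in the table above
        System.out.println(firstViolation(100));
        System.out.println(firstViolation(1000));
    }
}
```

Every call returns a finite n: this is exactly why no constant c2 can make c2·n an upper bound for 5n²-n+1.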

## Mathematical resolution: 3rd example (wrong)

Let’s try to demonstrate by definition that 4n+4 = Θ(n²) (read “is theta of n squared”).

First, we have:

• f(n) = 4n+4
• g(n) = n².

So, for this problem, we need to show that:

• c1·n² ≤ 4n+4 ≤ c2·n² for all n ≥ n0.

In summary: we need to prove that, for all values of n greater than or equal to n0, the function 4n+4 is:

• greater than or equal to c1 multiplied by n² and;
• less than or equal to c2 multiplied by n².

To do so, the first decision we need to make is to choose values for the constants c1 and c2. In this case, let’s take c1 equal to 1 and c2 equal to 3. The table below shows the inequality results for some values of n:

| n | n² | 4n+4 | 3n² |
|---|----|------|-----|
| 1 | 1  | 8    | 3   |
| 2 | 4  | 12   | 12  |
| 3 | 9  | 16   | 27  |
| 4 | 16 | 20   | 48  |
| 5 | 25 | 24   | 75  |

Note that, for all values of n ≤ 4 expressed in the table, the function 4n+4 is in fact greater than or equal to the function n². However, from the value n=5 onwards, the function 4n+4 is overtaken by the function n² (violating the rule c1·g(n) ≤ f(n)).

This happens because the function n², being quadratic, eventually surpasses the function 4n+4, which is linear, and stays above it from then on.

See the relationship between the functions in the chart below. Notice how the function 4n+4 is overtaken by both functions (n² and 3n²) for values of n > 4. This means that Θ(n²) does not represent a tight asymptotic limit for the function 4n+4: there are no positive constants c1 and c2 and an n0, such that c1·n² ≤ 4n+4 ≤ c2·n² for all values of n greater than or equal to n0.

In this way, it is proved that, in fact, 4n+4 ≠ Θ(n²) (read “is not theta of n squared”).

## Relationships between Big O, Omega and Theta notations

An easy and practical way to understand the relationship between the three asymptotic notations (Big O, Ω and Θ) is to determine the asymptotic limits of a given function by constructing what we call an asymptotic relationship table. See how it can be done.

Given a time complexity function T(n) = 4n+4, we can easily discover all its Big O, Ω and Θ relationships by building a table where the rows represent the asymptotic classes and the columns represent the notations:

| Class | O | Ω | Θ |
|-------|-----|-----|-----|
| 1     | no  | yes | no  |
| n     | yes | yes | yes |
| n²    | yes | no  | no  |
| n³    | yes | no  | no  |

Note that the function T(n) = 4n+4 is only Θ(n) (Theta of n) because it is Ω(n) (Omega of n) and O(n) (Big O of n) at the same time. That is, a function is Theta of an asymptotic class if it is Omega and Big O of that same asymptotic class!
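The same idea can be sketched in code. The snippet below (our own names and constants; the specific c values were chosen by hand for T(n) = 4n+4) numerically spot-checks the O and Ω inequalities for the classes 1, n and n², and derives Θ from them:

```java
public class RelationTable {
    // T from the running example: T(n) = 4n + 4.
    static long t(long n) { return 4 * n + 4; }

    // g-functions for the asymptotic classes 1, n and n^2.
    static long g(int cls, long n) {
        switch (cls) {
            case 0:  return 1;      // constant class
            case 1:  return n;      // linear class
            default: return n * n;  // quadratic class
        }
    }

    // Spot-check of T(n) <= c*g(n) (Big O) for all n in [n0, limit].
    static boolean bigO(int cls, long c, long n0, long limit) {
        for (long n = n0; n <= limit; n++)
            if (t(n) > c * g(cls, n)) return false;
        return true;
    }

    // Spot-check of T(n) >= c*g(n) (Omega) for all n in [n0, limit].
    static boolean omega(int cls, long c, long n0, long limit) {
        for (long n = n0; n <= limit; n++)
            if (t(n) < c * g(cls, n)) return false;
        return true;
    }

    public static void main(String[] args) {
        String[] names = {"1", "n", "n^2"};
        for (int cls = 0; cls < 3; cls++) {
            boolean o  = bigO(cls, 8, 1, 100_000);  // c = 8, n0 = 1 happen to work here
            boolean om = omega(cls, 1, 1, 100_000); // c = 1
            // Only the linear class ends up with O, Omega and Theta all true.
            System.out.println(names[cls] + ": O=" + o + " Omega=" + om
                    + " Theta=" + (o && om));
        }
    }
}
```

Keep in mind the caveat: a numeric check over a finite range can only suggest that a bound holds or fails; the article's algebraic proofs are what actually establish it for all n ≥ n0.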

In this way, the meaning of the asymptotic notations Big O, Ω and Θ is understood: they serve to represent performance limits of algorithms.

## Tight limits and loose limits

The concepts of tight and loose limits allow us to classify the overall performance of an algorithm, taking into account its best and worst cases.

### Loose asymptotic limit

When we say that algorithm A has:

• In the worst case: T(n) = 5n²-n+1
• In the best case: T(n) = 3n+2

We are claiming that:

• In the worst case: T(n) = Θ(n²) (Theta of n squared), as it has quadratic complexity.
• In the best case: T(n) = Θ(n) (Theta of n), as it has linear complexity.

This means that algorithm A has a loose asymptotic limit, since, in general, its performance behaves differently for the best and worst case.

That is, in general, it is O(n²) and Ω(n).

### Tight asymptotic limit

When we say that algorithm B has:

• In the worst case: T(n) = 4n+1
• In the best case: T(n) = 3n+2

We are claiming that:

• In the worst case: T(n) = Θ(n) (Theta of n), as it has linear complexity.
• In the best case: T(n) = Θ(n) (Theta of n), as it has linear complexity.

This means that algorithm B has a tight asymptotic limit, since, in general, its performance behaves the same for the best and worst case.

That is, it, in general, is Θ(n).

## Exercises

To demonstrate that you have understood the concept and the formal mathematical definition of Theta notation well, solve the proposed exercises:

### Exercises A

Try to prove mathematically that n²+800 = Θ(n²).

Try to prove mathematically that n+10 = Θ(n²).

Try to prove mathematically that 2n+10 = Θ(n).

Try to prove mathematically that 7n-2 = Θ(1).

Try to prove mathematically that n²+20n+5 = Θ(n³).

### Exercise B

Complete the asymptotic relationship table below:

### Exercise C

Using the instructions counting technique, find the complexity formula T(n) of the algorithm below, and then try to mathematically demonstrate that this formula found is Θ(n²).

```java
public static void func(int[] n) {
    for (int i = 0; i < n.length - 1; i++) {
        for (int j = 0; j < n.length - 1; j++) {
            n[j] = n[j] + (i * j);
        }
    }
}
```


And so we come to the end of the first part of the algorithm analysis course, which aimed to teach you the basics of the instructions counting technique and the 3 main asymptotic notations.

Soon we will be producing the second part of this course, aimed at teaching the techniques used for the analysis of recursive algorithms. 