**We analyse our algorithms to determine the resources required to execute them, commonly by looking at the space or time complexity**. Often we are only interested in the running time, which is why most articles online only talk about running time. However, we can also look at other things like memory usage.

Why would we analyse an algorithm that already does what it’s supposed to do? Well, it might return the correct result, but very slowly. By comparing solutions or algorithms, we can find the most efficient one for the problem at hand. The analysis can also be used to see how efficient our code is.

How do we measure/analyse? We analyse by looking at the **growth rate**. An algorithm always receives some input, and the input size can vary, so we are not concerned with finding the running time for one particular value. Instead, we express the running time as a function `f(N)` of the input size `N`, because the running time increases as the input gets bigger. For example, `N^2` has a higher growth rate than `100 * N`. At first glance, it might not be that clear because:

```
N = 10
N^2 = 100
100 * N = 1000
```

As the input `N` increases, we more clearly start to see a huge difference:

```
N = 100000
100000^2 = 10000000000
100 * 100000 = 10000000
```

The table below shows what common growth rates look like at different input sizes.

Time Complexity | Name | N = 10 | N = 100 | N = 1000 |
---|---|---|---|---|
1 | Constant | 1 | 1 | 1 |
log N | Logarithmic | 1 | 2 | 3 |
N | Linear | 10 | 100 | 1000 |
N log N | Linear Logarithmic | 10 | 200 | 3000 |
N^2 | Quadratic | 100 | 10 000 | 1 000 000 |
N^3 | Cubic | 1000 | 1 000 000 | 1 000 000 000 |
2^N | Exponential | 1024 | 1267650600228229401496703205376 | 2^1000 (try it out yourself) |
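If you want to check the table yourself, here is a small sketch of my own (not from the original article) that computes each row for a given `N`. Note that the table's logarithmic column uses the base-10 logarithm, so the sketch does too:

```javascript
// Compute the growth-rate values from the table for a given input size n.
// The table's logarithmic column uses the base-10 logarithm, so we use
// Math.log10 here as well.
function growthRates(n) {
  return {
    constant: 1,
    logarithmic: Math.log10(n),
    linear: n,
    linearLogarithmic: n * Math.log10(n),
    quadratic: n ** 2,
    cubic: n ** 3,
    exponential: 2 ** n,
  };
}

// Prints the row for N = 10 from the table above.
console.log(growthRates(10));
```

One caveat: for very large exponents such as 2^1000, JavaScript's floating-point numbers overflow to `Infinity`; use `BigInt` if you actually want to see that value.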

Usually, we look at three different analysis cases: the **worst case**, the **best case**, and the **average case**. Consider the function below, which searches for a number in an array:

```
def is_in_array(number, array):
    for i in array:
        if i == number:
            return True
    return False
```

The best case is that it finds a match at index 0 and returns `True`. For the average case, we would have to estimate the average time over all possible inputs. The worst case is that it searches for a value that does not exist in the array, forcing it to loop through all the elements.
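To make the three cases concrete, here is a hypothetical JavaScript variant of the search above (my sketch, not part of the original) that also counts how many comparisons were made:

```javascript
// Like the search above, but also reports how many comparisons were made,
// so the best and worst cases become visible.
function searchWithCount(number, array) {
  let comparisons = 0;
  for (const element of array) {
    comparisons++;
    if (element === number) {
      return { found: true, comparisons };
    }
  }
  return { found: false, comparisons };
}

const data = [4, 8, 15, 16, 23, 42];
console.log(searchWithCount(4, data));  // best case: match at index 0, 1 comparison
console.log(searchWithCount(99, data)); // worst case: no match, 6 comparisons
```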

**We mainly look at the worst case when analysing**. Why the worst case and not the best case? Let’s be optimistic, right? Well, this is because we want our program to run efficiently on all inputs, not just reasonably on some. So **if we look at the worst case, we get a good indication of how well our algorithm is designed**. Looking at the best case, patting ourselves on the back, and then crossing our fingers each time the algorithm runs is not a smart approach.

Should we look at the average case? It depends, because a program's worst-case input might not be that common. An average case analysis could then give us a clearer idea of what's going on. However, figuring out the average case can be harder and more time-consuming than the worst case. Just don't forget that the worst-case result is often easy to find and sufficient, or at least the best result available.

**Big O notation is used to communicate the complexity of an algorithm**. We communicate how *the input size affects the rate of growth*. So if you look at the table above, we would say an algorithm is O(n) if it grows linearly, O(1) if its growth rate is constant, and so on. As you already know, we are interested in using Big O for describing the worst case.

We will now look at some code and determine the correct Big O notation. The examples can be tricky if you are new to algorithm analysis, but they clear up some misconceptions common amongst beginners.

```
function isInArray(number, array) {
  for (var element of array) {
    if (element === number) {
      return true;
    }
  }
  return false;
}
```

This function has an O(n) runtime; in other words, its growth rate is linear. In this case, **the length of the array is n**. A clearer example is:

```
for (var i = 0; n > i; i++) {
  console.log("hello");
}
```

Since we have no idea what `n` (the input size) is, we must assume the worst case and say O(n). If there is a loop, then most often we are dealing with at least an O(n) runtime. However, O(n) is not always the runtime for a loop, as we will see in the next example.

```
function isHigherThanTen(number) {
  var ten = 0;
  for (var i = 0; 10 > i; i++) {
    ten += 1;
  }
  return number > ten;
}
```

This function has an O(1) runtime. Yes, the code itself is badly written. However, it illustrates that not every function containing a loop has an O(n) runtime, because here the input has nothing to do with the loop at all. A common mistake is to think that a single loop is automatically O(n), or that a nested loop is automatically O(n^2). We should analyse the code, not guess the Big O.
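A loop can also be neither O(1) nor O(n). As an extra illustration (my sketch, not one of the original examples): a loop that halves `n` on every iteration runs in O(log n), since its body only executes about log2(n) times:

```javascript
// Counts how many times n can be halved before reaching 1.
// The body runs about log2(n) times, so this loop is O(log n), not O(n).
function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}

console.log(halvingSteps(8));    // 3
console.log(halvingSteps(1000)); // 9
```

This halving pattern is, for example, what makes binary search logarithmic.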

```
var sum = 0;
for (i = 0; i < n; i++)
  sum++;
for (j = 0; j < n; j++)
  sum++;
```

The code has an O(n) runtime. It is O(n) because the highest rate of growth is linear, so we do not say O(2n). You might see the two loops and think the code has an O(n^2) runtime, but that would require a nested loop like:

```
var sum = 0;
for (i = 0; i < n; i++)
  for (j = 0; j < n; j++)
    sum++;
```
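To make the difference between the two versions concrete, we can count the operations each one performs (a small sketch of my own):

```javascript
// The two sequential loops: 2n increments in total, which is linear, so O(n).
function sequentialOps(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum++;
  for (let j = 0; j < n; j++) sum++;
  return sum;
}

// The nested loops: n * n increments, which is quadratic, so O(n^2).
function nestedOps(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      sum++;
    }
  }
  return sum;
}

console.log(sequentialOps(100)); // 200
console.log(nestedOps(100));     // 10000
```

Doubling `n` doubles the work in the sequential version but quadruples it in the nested version, which is exactly what O(n) versus O(n^2) means.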

Before you go, here is a summary of the main points we covered:

- We analyse our algorithms to improve our code or pick the right algorithms.
- We look at how the input size affects the growth rate.
- We look at the worst case to get an indication of how well our algorithm is designed.
- Big O notation is used to communicate the complexity.