Time Complexity and Big-O Notation

Charlie Knight
4 min read · Feb 18, 2021

When we first start out learning to code, we are concerned with just getting our function or simple application to run; the runtime of the different methods doesn't matter much yet. As we soon learn, though, this is a very important concept to understand, and knowing how time scales with an algorithm's input allows us to build a much faster site, application, or game. Big-O notation was invented by Paul Bachmann and Edmund Landau and is the standard by which running time is measured as the input size grows to n. With n inputs we could write a function as O(n), which is the time complexity of iterating through an unsorted array of size n and is also called "linear time." That is just one of many complexities used in computer science, and it's important to understand why an algorithm has a certain time complexity so you can 1) optimize/refactor, 2) understand where your application may be slow, and 3) understand how it will scale. Below, I will explain where these time complexities are seen.
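To make these growth rates concrete, here is a small sketch (the `classes` table below is illustrative, not from any library) comparing roughly how many steps each complexity class takes as n grows:

```javascript
// Rough step counts for each complexity class as the input size n grows.
const classes = {
  'O(1)':     n => 1,
  'O(log n)': n => Math.ceil(Math.log2(n)),
  'O(n)':     n => n,
  'O(n^2)':   n => n * n,
};

for (const n of [10, 100, 1000]) {
  const row = Object.entries(classes)
    .map(([name, steps]) => `${name}: ${steps(n)}`)
    .join(', ');
  console.log(`n = ${n} -> ${row}`);
}
```

At n = 1000 the gap is already stark: a logarithmic algorithm takes about 10 steps while a quadratic one takes a million.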

Constant Time — O(1)

The simplest example of time complexity is constant time. Denoted as O(1), it means the running time does not grow as the input size grows; it is always the same. This is the time complexity we should always try to achieve, when possible. An example operation with constant complexity is accessing an index in an array.

var arr = [1, 2, 3, 4];
arr[0]; // => 1

Even if this array had a thousand elements, accessing the first element wouldn't take any longer. When we use arr.pop() or arr.push() we are relying on the same concept. Constant time is most often seen with arrays and hash tables. Example operations include insertion or deletion at a known node of a linked list, accessing an index of an array, or looking up a key in a hash table.
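Hash-table lookups behave the same way. Here is a quick sketch using JavaScript's built-in Map (the player keys below are made up for illustration):

```javascript
// A hash-table example: lookup cost does not grow with the number of entries.
const scores = new Map();
for (let i = 0; i < 1000; i++) {
  scores.set(`player${i}`, i);
}

// One key or a thousand, retrieving a value is a single O(1) operation.
console.log(scores.get('player0'));   // 0
console.log(scores.get('player999')); // 999
```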

Logarithmic Time — O(log n)

Logarithmic algorithms work by repeatedly halving the input until they reach the target value, so the running time essentially "flattens" as the input size grows. The most common example of this is binary search, where the position of a target value is found within a sorted array. In a sorted array we can start in the middle and check whether the target value is greater or less than the middle element; if greater, we only need to search the right half of the array, and the opposite is true if it is less. This process is repeated until the position is found, with the search range being cut in half with every iteration.

// Iterative function to implement binary search
let iterativeFunction = function (arr, x) {
  let start = 0;
  let end = arr.length - 1;

  // Iterate while start has not passed end
  while (start <= end) {
    // Find the mid index
    let mid = Math.floor((start + end) / 2);

    // If the element is present at mid, return true
    if (arr[mid] === x) return true;
    // Else look in the left or right half accordingly
    else if (arr[mid] < x) start = mid + 1;
    else end = mid - 1;
  }

  return false;
};

Binary search trees are set up to perform this very type of algorithm (hence the name), with the left child always less than the parent node and the right child always being greater than the parent node.
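A minimal sketch of searching such a tree, assuming nodes are plain objects with value, left, and right fields (a shape chosen here for illustration):

```javascript
// Each comparison discards half of the remaining tree, so a balanced
// tree of n nodes can be searched in O(log n) steps.
function bstSearch(node, target) {
  while (node !== null) {
    if (target === node.value) return true;
    node = target < node.value ? node.left : node.right;
  }
  return false;
}

// A small hand-built balanced tree: 4 at the root, 2 and 6 as children.
const leaf = value => ({ value, left: null, right: null });
const root = {
  value: 4,
  left:  { value: 2, left: leaf(1), right: leaf(3) },
  right: { value: 6, left: leaf(5), right: leaf(7) },
};

bstSearch(root, 5); // => true
bstSearch(root, 8); // => false
```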

Linear Time — O(n)

Linear time is fairly simple to understand: the running time grows in a straight line with the input size. Any time a search is performed on an array — one that isn't binary search — we are typically performing a search in O(n) time. An example would be finding the max value in an unsorted array.

function findMax(n) {
  let max;
  let counter = 0;

  for (let i = 0; i < n.length; i++) {
    counter++;
    if (max === undefined || max < n[i]) {
      max = n[i];
    }
  }

  console.log(`n: ${n.length}, counter: ${counter}`);
  return max;
}

We only need to iterate through the list once, but we do have to visit every element to arrive at the max value. If you had to perform another iteration inside the for loop above, you would be solving in our next time complexity.

Quadratic Time — O(n²)

Whenever a nested loop is required to solve for an output, the exponent on n grows with the number of nested loops. An example would be looping through a list to find a matching pair, which requires checking every element in the array against every other element.

const matcher = array => {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      if (i !== j && array[i] === array[j]) {
        return `Match found at ${i} and ${j}`;
      }
    }
  }
  return 'No matches found';
};

In pseudocode:

// pseudocode
array[a] == array[i] ?
  i++ // the inner loop runs n times
when i reaches the length of the array
  a++ // the outer loop also runs n times, giving n * n = n^2 comparisons

If there is a way to avoid a nested loop in an algorithm, it is worth finding: a quadratic solution stops being practical once the input reaches a certain size.
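As a sketch of that kind of refactor, a Set can remember the values already seen, bringing the duplicate check above down to a single O(n) pass (fastMatcher is a name chosen here for illustration, not from the original):

```javascript
// Linear-time duplicate check: one pass, with a Set providing O(1) lookups.
const fastMatcher = array => {
  const seen = new Set();
  for (let i = 0; i < array.length; i++) {
    if (seen.has(array[i])) {
      return `Match found for value ${array[i]}`;
    }
    seen.add(array[i]);
  }
  return 'No matches found';
};

fastMatcher([1, 2, 3, 2]); // => 'Match found for value 2'
fastMatcher([1, 2, 3]);    // => 'No matches found'
```

The trade-off is extra memory for the Set, which is usually well worth it once the array grows.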

https://towardsdatascience.com/linear-time-vs-logarithmic-time-big-o-notation-6ef4227051fb
