
## What is time complexity?

Time complexity is defined as the amount of time an algorithm takes to run as a function of the length of its input. It measures the time required to execute each code statement in the algorithm. It does not examine the total execution time of the algorithm; rather, it describes how the execution time varies as the number of operations in the algorithm increases or decreases. Yes, as the definition says, the amount of time depends only on the length of the input.

## Introduction to Time Complexity

Space and time define any physical object in the universe. Similarly, space and time complexity define the efficiency of an algorithm. Since there is usually more than one way to solve a problem in programming, knowing how efficiently an algorithm works adds value to the way we program. Knowing the efficiency of a program or algorithm, and knowing how to evaluate it using space and time complexity, lets us make the program behave under the desired optimal conditions, making us more effective programmers.

While we reserve space complexity for a future post, let’s focus on time complexity here. Time is money! In this post, you will find a gentle introduction to algorithm time complexity and how to evaluate a program based on it.

**After reading this post you will know:**

1. Why is time complexity so important?

2. What is time complexity?

3. How to calculate time complexity?

4. Time complexity of sorting algorithms

5. Time complexity of search algorithms

6. Space complexity

**Let’s get started.**

## Why is time complexity important?

### Let’s first understand what defines an algorithm.

An algorithm in computer programming is a finite sequence of well-defined instructions, typically executed by a computer, to solve a class of problems or perform a common task. By definition, a sequence of defined instructions must be given to the computer for it to execute the algorithm and perform the task. In this context, the way the instructions are defined can vary: any number of different instruction sets can perform the same task. With the freedom to choose from the available programming languages, the instructions can take any syntactic form, subject to the performance limits of the chosen language. And since the algorithm runs on a computer, further variation comes from the operating system, processor, hardware, and so on, all of which can also affect how the algorithm executes.

### What are the different types of time complexity notation used?

As we have seen, time complexity is given by time as a function of input length. There is a relationship between the size of the input data (n) and the number of operations performed (N) with respect to time. This relationship is called the order of growth of time complexity and is written O(n), where O denotes the order of growth and n is the length of the input. It is commonly called “Big O notation”.

Big O notation expresses the running time of an algorithm in terms of how fast it grows with respect to the input size n, by counting N, the number of operations performed on it. The time complexity of an algorithm is then the combination of the O(...) terms assigned to each line of the function.

**Different types of time complexity are used, let’s look at them one by one:**

1. Constant time – O(1)

2. Linear time – O(n)

3. Logarithmic time – O(log n)

4. Quadratic time – O(n^2)

5. Cubic time – O(n^3)

and many more complex notations, such as exponential time, quasi-linear time, and factorial time, are used depending on the type of function being analysed.
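The first three orders of growth above can be illustrated with small Python functions (the function names here are illustrative, not from the post):

```python
def get_first(items):
    # O(1): a single lookup, regardless of how long the list is
    return items[0]

def total(items):
    # O(n): one pass over the input, so work grows linearly with n
    s = 0
    for x in items:
        s += x
    return s

def count_pairs(items):
    # O(n^2): a nested pass over the input, so work grows quadratically
    pairs = 0
    for _ in items:
        for _ in items:
            pairs += 1
    return pairs
```

Doubling the input length leaves `get_first` unchanged, doubles the work in `total`, and quadruples the work in `count_pairs`.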

## How to calculate time complexity?

We have seen how each function is assigned an order, and how runtime relates to the number of operations and the input size. Now let’s see how to evaluate the time complexity of an algorithm from the order assigned to each operation and the size of the input, and how to calculate the total running time required to run the algorithm for a given n.

### Let’s see how to evaluate the time complexity of an algorithm using an example:

**The algorithm is defined as:**

1. Given 2 input matrices, which are square matrices of order n

2. The value of each element in both matrices is chosen randomly using the np.random function

3. A result matrix of order equal to the input matrices is initially assigned with 0 values

4. Each element of X is multiplied with the corresponding elements of Y, and the products are accumulated in the result matrix

5. The result matrix is then converted to a list type

6. Each element in the result list is summed to give the final answer
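The steps above can be sketched in plain Python. To keep the example reproducible, small fixed matrices stand in for the np.random values mentioned in step 2 (the function name is illustrative):

```python
def multiply_and_sum(X, Y):
    n = len(X)                                   # order of the square matrices
    # Step 3: result matrix of the same order, initialised with zeros
    result = [[0] * n for _ in range(n)]
    # Step 4: standard matrix multiplication - three nested loops, so O(n^3)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += X[i][k] * Y[k][j]
    # Step 5: convert the result matrix to a flat list - O(n^2)
    flat = [value for row in result for value in row]
    # Step 6: sum every element for the final answer - O(n^2)
    return sum(flat)

X = [[1, 2], [3, 4]]
Y = [[5, 6], [7, 8]]
answer = multiply_and_sum(X, Y)
```

The triple nested loop dominates, so the overall running time of this algorithm grows as O(n^3).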

Consider a cost function C as the unit of time required to run a function, while “n” represents the number of times that command is defined to run in the algorithm.

For example, if the time required to run the PRINT function is, say, 1 microsecond (C), and the algorithm is defined to run the PRINT function 1000 times (n),

then total runtime = (C * n) = 1 microsecond * 1000 = 1 millisecond
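The same arithmetic, written out as a tiny Python sketch (variable names are illustrative):

```python
# C: cost of one PRINT call, in microseconds (value from the example above)
cost_per_call_us = 1
# n: number of times the algorithm runs PRINT
calls = 1000
# total runtime = C * n
total_us = cost_per_call_us * calls   # 1000 microseconds
total_ms = total_us / 1000            # 1 millisecond
```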

## Time complexity of sorting algorithms

Understanding the time complexity of sorting algorithms helps us choose the best sorting technique in a given situation. Here are some sorting techniques:

### What is the time complexity of insertion sort?

The time complexity of Insertion Sort is O(n) at best. In the worst case, the time complexity is O(n^2).
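A minimal Insertion Sort sketch in Python makes the two cases visible (the function name is illustrative):

```python
def insertion_sort(items):
    a = list(items)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right. On already-sorted input this
        # inner loop never runs, giving the O(n) best case; on reverse-sorted
        # input it runs i times for each i, giving the O(n^2) worst case.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```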

### What is the time complexity of merge sort?

This sorting technique is suitable for all kinds of cases. Merge Sort is O(n log n) at best. In the worst case, the time complexity is also O(n log n). This is because Merge Sort performs the same number of sorting steps for all kinds of input.
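A minimal top-down Merge Sort sketch in Python (the function name is illustrative). The input is halved at every level, giving log n levels, and each level does O(n) merging work, hence O(n log n) in every case:

```python
def merge_sort(items):
    if len(items) <= 1:
        return list(items)
    # Split the input in half and sort each half recursively
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in a single O(n) pass
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```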

### What is the time complexity of bubble sort?

The time complexity of Bubble Sort is O(n) at best. In the worst case, the time complexity is O(n^2).
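The O(n) best case of Bubble Sort depends on an early-exit optimisation: if a full pass makes no swaps, the list is already sorted. A sketch in Python (the function name is illustrative):

```python
def bubble_sort(items):
    a = list(items)
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        # No swaps means the input was already sorted: one pass -> O(n) best case
        if not swapped:
            break
    return a
```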

### What is the time complexity of quicksort?

Quicksort is O(n log n) at best. In the worst case, the time complexity is O(n^2). Quicksort is often considered the fastest of these sorting algorithms due to its O(n log n) performance in the best and average cases.
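A compact (non-in-place) Quicksort sketch in Python (the function name is illustrative). When the pivot splits the input evenly we get O(n log n); when every pivot is the smallest or largest remaining element, each level removes only one item and the running time degrades to O(n^2):

```python
def quicksort(items):
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]
    # Partition into elements below, equal to, and above the pivot
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)
```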

## Time complexity of search algorithms

Now let’s dive into the time complexity of some search algorithms and understand which one is faster.

### Time complexity of linear search:

Linear Search follows a sequential approach. The time complexity of linear search is O(1) at best. In the worst case, the time complexity is O(n).
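A minimal Linear Search sketch in Python (the function name is illustrative):

```python
def linear_search(items, target):
    # Check elements one by one, in order
    for index, value in enumerate(items):
        if value == target:
            return index   # best case: target is the first element -> O(1)
    return -1              # worst case: scanned all n elements -> O(n)
```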

### Time complexity of binary search:

Binary Search is the faster of the two search algorithms, although it requires the input to be sorted; for small arrays, Linear Search can still perform better. The time complexity of Binary Search is O(1) at best. In the worst case, the time complexity is O(log n).
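A minimal Binary Search sketch in Python over a sorted list (the function name is illustrative). Each comparison halves the remaining range, which is what gives the O(log n) worst case:

```python
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid            # best case: first probe hits -> O(1)
        if sorted_items[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1                     # target not present
```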