Algorithms are ubiquitous and are used in almost every aspect of our lives. Choosing the right algorithm for a specific task is no trivial matter. In the best-case scenario, the algorithm not only performs its job, but does so using the least amount of resources. The most relevant resources when it comes to computer algorithms are execution time and memory space. In this article, we will give an overview of a popular method for measuring an algorithm's time efficiency called time complexity.

Basic Theory

The amount of time required by an algorithm to complete as a function of its input data size is referred to as time complexity.

Fig. 1 Simplified diagram of different time complexity functions

Right from the start, we should state that the time complexity of an algorithm is not exactly the same as its running time. Since both terms deal with time, there is often confusion about the difference.

Running time is the time required by an algorithm to complete, and it can be influenced by factors such as CPU frequency, CPU architecture, memory access times, input data size, etc. In contrast, when we calculate the time complexity we abstract away all of the factors mentioned above except one – the input data size. We focus on how the time required by an algorithm to complete changes in relation to its input data size.

It is important to know the time complexity of an algorithm, as this has direct implications for the scalability of the algorithm and the program as a whole. On a practical level, we are usually concerned with the worst-case time complexity of an algorithm, as this gives us information about the maximum time required for any input of a given size. It is commonly accepted that time complexity is estimated using the number of operations (or steps) required by an algorithm to complete, instead of the total time. This is in line with the approach of abstracting ourselves as much as possible from the underlying hardware when analyzing time complexity.

In order to find the time complexity of an algorithm, we must try to construct an equation that defines the rate of growth of the running time with respect to the input data size. Commonly, time complexity is expressed using Big O notation – O(n). This means an asymptotic analysis is used, where we assume that the input data size (n) goes to infinity. In this kind of analysis, all constant factors and lower-order terms are deemed irrelevant and ignored.

Calculating Running Time and Time Complexity

In the examples below, we will analyze some simple algorithms and see how we can derive their equations. The order of each equation will help us estimate how the running time of the algorithm increases as the input size (n) increases. To keep the examples simple to understand, we will use a simplified model of a microprocessor that executes each operation (logical, arithmetic, comparison, branching, etc.) in 1 unit of time.

Constant Time: O(1)

An algorithm that takes the same amount of time to complete its processing regardless of the input size is said to run in constant time O(1).

The following example shows an algorithm with constant time complexity:

/* sum of variables a and b */
int Sum (int a, int b) {
  return (a+b);
}

Now let’s calculate the running time of this algorithm on our simplified microprocessor. We have one addition operation (a+b) and one return operation. The total running time is T = 1 + 1 = 2. As we can see, this running time is constant and does not depend on the size of the input data. Whether we add (2 + 2) or (200 + 200), the algorithm will always take 2 units of time.

Algorithms that have constant time complexity include accessing an element of an array, a stack’s push/pop methods, etc.
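To illustrate, both operations below touch a fixed number of memory locations no matter how large the array or stack grows. This is a minimal sketch; the function names and the fixed-capacity stack representation are our own assumptions for the example:

```c
#include <assert.h>

/* Reading an element by index touches exactly one memory
 * location, regardless of the array length: O(1). */
int get_element(const int *a, int index) {
    return a[index];
}

/* Pushing onto a fixed-capacity stack also performs a constant
 * number of operations: one store and one increment. */
void stack_push(int *stack, int *top, int value) {
    stack[*top] = value;
    (*top)++;
}
```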

Linear Time: O(n)

If an algorithm’s running time is directly proportional to the input data size (i.e., it grows linearly as the input data size increases), we call this linear time complexity, O(n).


/* Sum of elements from an array */
int sum (int * a, int n) {
  int sum_elements = 0;
  for (int i = 0; i < n; i++) {
    sum_elements = sum_elements + a[i];
  }
  return sum_elements;
}

Let’s analyze the algorithm implemented in the function shown above. We have one operation for assigning the sum_elements variable to 0 and one operation for assigning the i variable to 0. Then we have n+1 comparison operations (i<n) and n increment operations (i++). We also have n assignment operations and n addition operations (sum_elements = sum_elements + a[i]). The total number of operations of the algorithm, and thus the total running time (one operation = 1 unit of time), is T = 1 + 1 + (n + 1) + n + n + n = 3 + 4n. The constants 3 and 4 in this equation can be ignored in our asymptotic analysis (as n grows infinitely large their effect becomes negligible). In the end T ≈ n, and we get linear time complexity, O(n).

Algorithms that have linear time complexity include linear search, counting sort, etc.
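Linear search shows this behavior directly: in the worst case (the value is absent, or sits in the last slot) the loop body runs n times. A minimal sketch, with a hypothetical function name of our choosing:

```c
#include <assert.h>

/* Returns the index of target in a[0..n-1], or -1 if absent.
 * Worst case: n comparisons, so the running time grows
 * linearly with n -> O(n). */
int linear_search(const int *a, int n, int target) {
    for (int i = 0; i < n; i++) {
        if (a[i] == target)
            return i;
    }
    return -1;
}
```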

Logarithmic Time: O(log n)

An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input data size: O(log n).


for (int i = n; i > 1; i /= 2) {
   /* constant time expressions O(1) */
}

Let’s analyze the algorithm above. What we are doing is dividing a number n by 2, and we do this k times until n becomes less than or equal to 1. In the end, we get n/2^k ≤ 1. Now let’s simplify. Multiplying both sides by 2^k, we get (2^k · n)/2^k ≤ 2^k, which in turn becomes n ≤ 2^k. Based on the definition of a logarithm, we can rewrite n ≤ 2^k as log₂n ≤ k. In summary, the number of operations (k steps) in the algorithm shown above is log₂n, and we have logarithmic time complexity.
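We can check this count empirically by simply counting the iterations of the halving loop. The result should be ⌊log₂n⌋ for any n ≥ 1. This is a small sketch; the helper name halving_steps is our own:

```c
#include <assert.h>

/* Counts how many times n can be halved before it reaches 1.
 * The counter k stands in for the loop's constant-time body;
 * the returned value equals floor(log2(n)) for n >= 1. */
int halving_steps(int n) {
    int k = 0;
    for (int i = n; i > 1; i /= 2) {
        k++;
    }
    return k;
}
```

For example, starting from n = 1024 the loop runs exactly 10 times (1024 → 512 → … → 2 → 1), matching log₂1024 = 10.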

One of the most popular algorithms with logarithmic time complexity is the binary search algorithm.
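Binary search achieves O(log n) by halving the search interval on every step, just like the loop above. A minimal iterative sketch over a sorted array:

```c
#include <assert.h>

/* Binary search in a sorted array a[0..n-1].
 * Each iteration halves the remaining interval, so at most
 * about log2(n) iterations are needed: O(log n).
 * Returns the index of target, or -1 if absent. */
int binary_search(const int *a, int n, int target) {
    int lo = 0;
    int hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* avoids overflow of lo + hi */
        if (a[mid] == target)
            return mid;
        else if (a[mid] < target)
            lo = mid + 1;              /* discard the lower half */
        else
            hi = mid - 1;              /* discard the upper half */
    }
    return -1;
}
```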

Quadratic Time: O(n²)

An algorithm is said to have quadratic time complexity if its running time is proportional to the square of the input data size: O(n²).


int cnt = 0;
for (int i = 0; i < n; i++) {
    for (int k = 0; k < n; k++) {
        cnt++;
    }
}

In the algorithm above we have one operation for the cnt assignment, one operation for the i assignment and one operation for the k assignment. The outer loop performs n+1 comparison operations and n increment operations (i++), and the inner loop body performs n · n addition operations (cnt++). (For simplicity we omit the inner loop’s own comparison and increment operations; they are also on the order of n · n and do not change the result.) The total running time (number of operations) of the algorithm is T = 1 + 1 + 1 + (n + 1) + n + n · n. By removing the constants and the lower-order terms in accordance with the asymptotic analysis, we get T ≈ n · n = n².

Algorithms with quadratic time complexity include bubble sort, selection sort, etc.
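Bubble sort is a typical case: its nested loops compare adjacent pairs on the order of n² times in the worst case. A minimal sketch:

```c
#include <assert.h>

/* Bubble sort: the outer loop runs n-1 times and the inner loop
 * up to n-1 times, giving on the order of n * n comparisons in
 * the worst case -> O(n^2). Sorts a[0..n-1] in ascending order. */
void bubble_sort(int *a, int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int k = 0; k < n - 1 - i; k++) {
            if (a[k] > a[k + 1]) {
                /* swap the adjacent out-of-order pair */
                int tmp = a[k];
                a[k] = a[k + 1];
                a[k + 1] = tmp;
            }
        }
    }
}
```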
