Let us take an example: suppose your friend picks a number between 1 and 1000 and asks you to guess it. If your guess is correct, he tells you so; if your guess is bigger than his number, he tells you it is 'too big', and if it is smaller, he tells you it is 'too small'. Here are two ways we could find the number: the first is to guess 1, 2, 3, and so on in order until we hit it; the second is to always guess the middle of the remaining range and discard the half that cannot contain the number.
As you might have guessed, the second way is much better than the first. In the worst case, the first way would take 1000 guesses before we get the correct number (if the number is 1000), while the second way would take only about 10 guesses in the worst case (because every guess discards half of the remaining numbers, and log₂(1000) ≈ 10).
Later you will see that the time complexity of the first way is O(n) and that of the second way is O(log n).
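To make the difference concrete, here is a minimal sketch of the two guessing strategies in Python. The function names and the way feedback is simulated are illustrative assumptions, not part of the original guessing game description.

```python
def linear_guess(secret, low=1, high=1000):
    """First way: guess low, low+1, low+2, ... until we hit the secret.
    Takes up to n guesses, i.e. O(n)."""
    guesses = 0
    for guess in range(low, high + 1):
        guesses += 1
        if guess == secret:
            break
    return guesses


def binary_guess(secret, low=1, high=1000):
    """Second way: guess the middle of the remaining range and discard
    the half that cannot contain the secret, i.e. O(log n) guesses."""
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2
        if mid == secret:
            break
        elif mid > secret:      # "too big": the secret is in the lower half
            high = mid - 1
        else:                   # "too small": the secret is in the upper half
            low = mid + 1
    return guesses


print(linear_guess(1000))  # 1000 guesses in the worst case
print(binary_guess(1000))  # 10 guesses, since log2(1000) is about 10
```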
As we saw from the above example, there can be multiple approaches to solving the same problem. The same applies to computer programming: for every approach (algorithm), the time taken, the amount of space used, and the computational power required might be different. Therefore, there has to be a way to distinguish these different approaches (algorithms) and choose the one that is most efficient.
In this article, we are going to talk about how we can choose the best algorithm based on the time it takes to execute. But how do we compare algorithms written in two different languages, running on two different machines? This is exactly why the concept of time complexity was introduced. But what is time complexity?
By definition, time complexity is the time taken by an algorithm/program to run, expressed as a function of the length of the input.
Why is it so important?
It’s important to note here that time complexity doesn’t measure the actual time taken by an algorithm to run (since that depends on the programming language, processing power, etc.). Instead, it describes how the number of basic operations the algorithm performs grows as a function of the size of its input.
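As a small sketch of what "counting operations" means (the function below is just an illustrative example, not taken from the article): summing a list of n numbers performs the loop body exactly n times, so its running time grows linearly with n, i.e. it is O(n), no matter which machine or language runs it.

```python
def total(numbers):
    """Sum a list of n numbers. The loop body runs once per element,
    so the work grows linearly with the input length: O(n)."""
    result = 0
    for x in numbers:   # executed n times for an input of length n
        result += x
    return result
```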