Asymptotic Analysis of Algorithms is a critical topic for the GATE (Graduate Aptitude Test in Engineering) exam, especially for candidates in computer science and related fields. These notes provide an in-depth understanding of how algorithms behave as input sizes grow, which is fundamental for assessing their efficiency.
Introduction to Algorithms

The word Algorithm means "a set of rules to be followed in calculations or other problem-solving operations" or "a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations". An algorithm therefore refers to a finite sequence of steps for solving a particular problem. Algorithms can be simple or complex depending on what you want to achieve.
Asymptotic Notations

In mathematics, asymptotic analysis, also known as asymptotics, is a method of describing the limiting behavior of a function. In computing, asymptotic analysis of an algorithm means defining a mathematical bound on its run-time performance in terms of the input size. For example, the running time of one operation may be computed as f(n) = n, while that of another is computed as g(n) = n². The first operation's running time then increases linearly as n grows, while the second's increases quadratically: at n = 10 they take roughly 10 and 100 steps, but at n = 1,000 they take roughly 1,000 and 1,000,000. For small n, however, the running times of the two operations are nearly the same. Usually, the analysis of an algorithm is done for three cases: the best case, the average case, and the worst case.
Asymptotic Analysis

Given two algorithms for a task, how do we find out which one is better? One naive way is to implement both algorithms, run the two programs on your computer for different inputs, and see which one takes less time. There are many problems with this approach: the result depends on the machine used, on the particular inputs chosen, and on their sizes; for some inputs the first algorithm may perform better, while for others the second may. Asymptotic analysis avoids these issues by describing performance as a function of the input size, independent of machine-specific constants.
Measurement of Complexity of an Algorithm (Worst, Best and Average Case)

Based on the three cases above, there are three ways to analyze an algorithm:

1. Worst Case Analysis (mostly used): denoted by Big-O notation
In worst-case analysis, we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. For linear search, the worst case happens when the element to be searched (x) is not present in the array: the search() function then compares x with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search is O(n).

2. Best Case Analysis (very rarely used): denoted by Omega notation
In best-case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the best-case time complexity is Ω(1).

3. Average Case Analysis (rarely used): denoted by Theta notation
In average-case analysis, we take all possible inputs, calculate the computing time for each, sum the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, assume all n+1 cases are uniformly distributed: x at each of the n positions, plus the case where x is not present. Summing the cost of every case and dividing by (n+1) gives the average-case time:

(1 + 2 + … + n + n) / (n + 1) = (n(n+1)/2 + n) / (n + 1) ≈ n/2

which is Θ(n).

How to Analyse Loops for Complexity Analysis of Algorithms?

Constant Time Complexity O(1):
The time complexity of a function (or set of statements) is O(1) if it contains no loop, no recursion, and no call to any non-constant-time function. In other words, the running time of an O(1) algorithm remains the same regardless of the input size. An example is accessing an element of an array by its index.

Linear Time Complexity O(n):
The time complexity of a loop is O(n) if the loop variable is incremented or decremented by a constant amount, so that the running time grows proportionally to the input size: for an input of size n, the algorithm takes on the order of n steps. Searching for an element in an unsorted array, or iterating through an array and performing a constant amount of work for each element, are O(n) operations; see the sketch below.
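A minimal C++ sketch of these two cases (the function names firstElement and linearSearch are ours, for illustration):

```cpp
#include <iostream>
#include <vector>

// O(1): indexing takes the same time no matter how large arr is.
int firstElement(const std::vector<int>& arr) {
    return arr[0];
}

// O(n): the loop variable i grows by a constant (1) each pass, so
// in the worst case (x absent) the body runs n times.
int linearSearch(const std::vector<int>& arr, int x) {
    for (std::size_t i = 0; i < arr.size(); ++i)
        if (arr[i] == x) return static_cast<int>(i);  // best case: x at index 0 -> Ω(1)
    return -1;  // worst case: x not present -> O(n)
}

int main() {
    std::vector<int> arr = {3, 1, 4, 1, 5};
    std::cout << firstElement(arr) << " " << linearSearch(arr, 4) << "\n";
}
```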
Quadratic Time Complexity O(n²):
Quadratic time complexity describes an algorithm whose running time is directly proportional to the square of the input size. It typically arises from nested loops, where the running time equals the number of times the innermost statement is executed: for an input of size n, the algorithm takes n · n steps. A common example is a nested loop that iterates over the entire input for each element, performing a constant amount of work in each iteration, for a total of n · n iterations.

Logarithmic Time Complexity O(log n):
The time complexity of a loop is O(log n) if the loop variable is divided or multiplied by a constant factor on each iteration. The same bound applies to a recursive function whose input shrinks by a constant factor on each call.

Logarithmic Time Complexity O(log log n):
The time complexity of a loop is O(log log n) if the loop variable is reduced or increased exponentially by a constant factor on each iteration (for example, repeatedly squared). Sample loops for all three cases are sketched below.
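A minimal C++ sketch of these three loop patterns (the function names are ours, for illustration):

```cpp
#include <iostream>

// O(n^2): the inner loop runs n times for each of the n outer
// iterations, so the innermost statement executes n * n times.
long long quadratic(int n) {
    long long ops = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            ++ops;
    return ops;
}

// O(log n): i is multiplied by a constant (2) each pass, so the
// loop runs about log2(n) times.
int logLoop(int n) {
    int steps = 0;
    for (int i = 1; i < n; i *= 2)
        ++steps;
    return steps;
}

// O(log log n): i grows exponentially (squared) each pass, so the
// loop runs about log2(log2(n)) times.
int logLogLoop(int n) {
    int steps = 0;
    for (long long i = 2; i < n; i = i * i)
        ++steps;
    return steps;
}

int main() {
    std::cout << quadratic(8) << " " << logLoop(8) << " " << logLogLoop(8) << "\n";
}
```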
How to combine the time complexities of consecutive loops?
When there are consecutive loops, we calculate the time complexity as the sum of the time complexities of the individual loops; asymptotically, the dominant term wins. When loops are nested, the complexities multiply instead, because the inner loop runs in full for every iteration of the outer loop: multiply the number of iterations at each loop level by the work performed per iteration. For example, consider the nested loops sketched below, where the outer loop performs n iterations and the inner loop performs m iterations for each iteration of the outer loop. The total number of iterations performed by the inner loop is n · m, so the total time complexity is O(n · m).
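A minimal sketch of that n × m nested loop (the name nestedWork is ours):

```cpp
#include <iostream>

// O(n * m): the inner statement executes m times for each of the
// n outer iterations, i.e. n * m times in total.
long long nestedWork(int n, int m) {
    long long ops = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
            ++ops;  // constant work per iteration
    return ops;
}

int main() {
    std::cout << nestedWork(3, 4) << "\n";  // prints 12
}
```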
In another example, sketched below, the outer loop performs n iterations and the inner loop performs i iterations for the i-th iteration of the outer loop. The total number of iterations performed by the inner loop is the sum of i for i = 1 to n, which equals n(n+1)/2. Hence, the total time complexity is O(n²).
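A sketch of that triangular loop pattern (the name triangularWork is ours):

```cpp
#include <iostream>

// O(n^2): the inner loop runs 1 + 2 + ... + n = n(n+1)/2 times
// in total across all passes of the outer loop.
long long triangularWork(int n) {
    long long ops = 0;
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= i; ++j)
            ++ops;
    return ops;
}

int main() {
    std::cout << triangularWork(4) << "\n";  // prints 10 = 4*5/2
}
```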
Algorithms Cheat Sheet for Complexity Analysis:
As a quick reference, common growth rates with typical examples: O(1) — array indexing; O(log n) — binary search; O(n) — linear search; O(n log n) — merge sort and heap sort; O(n²) — simple nested-loop algorithms such as insertion sort in the worst case; O(2ⁿ) — naive recursion over all subsets of the input.

Runtime Analysis of Algorithms:
For performance analysis, we generally measure and compare the worst-case theoretical running-time complexities of algorithms.
Little o and Little omega notations:

Little-o: Big-O is used as a tight upper bound on the growth of an algorithm's effort (the effort described by the function f(n)), even though, as written, it can also be a loose upper bound. Little-o notation (o()) is used to describe an upper bound that is not asymptotically tight.
Little-omega: The relationship between Big Omega (Ω) and Little Omega (ω) mirrors that between Big-O and Little-o, except that we are now looking at lower bounds. Little Omega (ω) gives a loose estimate of the order of growth, whereas Big Omega (Ω) may represent the exact order of growth. We use ω notation to denote a lower bound that is not asymptotically tight.
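In limit form, these notions can be stated as follows (a standard characterization, assuming the limit exists; the examples are illustrative):

```latex
f(n) = o(g(n)) \iff \lim_{n\to\infty} \frac{f(n)}{g(n)} = 0,
\quad \text{e.g. } 2n = o(n^2) \text{ but } 2n^2 \neq o(n^2);

f(n) = \omega(g(n)) \iff \lim_{n\to\infty} \frac{f(n)}{g(n)} = \infty,
\quad \text{e.g. } n^2 = \omega(n) \text{ but } n^2 \neq \omega(n^2).
```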
What does 'Space Complexity' mean?

The term Space Complexity is often misused to mean Auxiliary Space. The correct definitions are: Auxiliary Space is the extra or temporary space used by an algorithm, while the Space Complexity of an algorithm is the total space taken by the algorithm with respect to the input size, including both auxiliary space and the space used by the input. For example, if we want to compare standard sorting algorithms on the basis of space, Auxiliary Space is a better criterion than Space Complexity: Merge Sort uses O(n) auxiliary space, while Insertion Sort and Heap Sort use O(1) auxiliary space, yet the space complexity of all three is O(n). Space complexity is a parallel concept to time complexity. If we need to create an array of size n, that requires O(n) space; a two-dimensional array of size n × n requires O(n²) space. In recursive calls, stack space also counts, as in the sketch below.
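A minimal sketch (the function name sum is ours) of how recursion depth consumes stack space:

```cpp
#include <iostream>

// Computes 1 + 2 + ... + n recursively. Each call pushes a new
// frame on the call stack, and the frames nest n deep before any
// of them return, so the auxiliary (stack) space is O(n).
int sum(int n) {
    if (n <= 0) return 0;
    return n + sum(n - 1);
}

int main() {
    std::cout << sum(5) << "\n";  // prints 15
}
```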
However, just because an algorithm makes n calls in total doesn't mean it takes O(n) space; what matters is the maximum number of call frames alive on the stack at the same time, as the sketch below illustrates.
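For contrast, a sketch (the names pairSum and pairSumSequence are ours) where about n/2 calls occur in total but only O(1) of them are on the stack at once:

```cpp
#include <iostream>

// pairSum is called about n/2 times in total, but each call
// returns before the next begins, so at most O(1) extra frames
// are on the stack at any moment: auxiliary space is O(1).
int pairSum(int a, int b) {
    return a + b;
}

int pairSumSequence(int n) {
    int total = 0;
    for (int i = 0; i < n; i += 2)
        total += pairSum(i, i + 1);
    return total;
}

int main() {
    std::cout << pairSumSequence(6) << "\n";  // 0+1 + 2+3 + 4+5 = 15
}
```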
Referred: https://www.geeksforgeeks.org