Big O is a notation used to describe the computational complexity of an algorithm. In other words, it describes how hard or easy an algorithm is to run.
There are two aspects to examine when figuring out computational complexity:
- time complexity — How much time?
- space complexity — How much memory space?
In Big O notation, complexity is written as a function of the input size, wrapped in a capital O. For example:
- Constant time: O(1)
- Logarithmic time: O(log n)
- Linear time: O(n)
- Linearithmic time: O(n log n)
- Quadratic time: O(n²)
- Polynomial time: O(n^k)
- Exponential time: O(2^n)
- Factorial time: O(n!)
These functions describe how the number of operations (for time) or the amount of memory (for space) needed by the algorithm grows as the input size tends to infinity. Because of that, constant factors are considered irrelevant and are always ignored.
That means that:
O(9999999n) = O(8n +5) = O(n)
In simple terms, we use these functions to show how the algorithm behaves for any input. And since we are looking at inputs as they tend to infinity, any constant aspect of the algorithm can be ignored.
As n → infinity, the n term is so dominant that we can say:
n ≈ 3n ≈ 3n + 5, i.e. all three are O(n)
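This can be checked numerically. A minimal Python sketch, where the cost function 3n + 5 is a made-up example:

```python
def ops(n):
    # Hypothetical operation count for some algorithm: 3n + 5
    return 3 * n + 5

# As n grows, the ratio to n approaches the constant 3, and the +5 vanishes,
# which is exactly why Big O reduces all of these to O(n).
for n in [10, 1_000, 1_000_000]:
    print(n, ops(n) / n)
```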
Calculating time complexity:
To calculate time complexity, you first need to consider how many operations are performed.
The steps are simple:
- Split your algorithm into operations
- Calculate the Big O of each operation
- Add the Big O from each operation
- Strip out the constants
- Find the highest-order term (the largest term in that function)
Example 1:
for (int num: arr) {
print(num)
}
This algorithm has a time complexity of O(n). In each for-loop iteration, we are performing a print, which costs O(1). The for loop iterates n times, which gives a time complexity of O(1 · n) = O(n).
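As a sanity check, here is a minimal Python version of the same loop that counts its O(1) operations (the counting helper is ours, not part of the original example):

```python
def print_ops(arr):
    count = 0
    for num in arr:
        count += 1  # stands in for the single O(1) print
    return count

# The number of operations equals the input size: linear growth.
print(print_ops(range(1000)))   # 1000
print(print_ops(range(2000)))   # 2000
```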
Example 2:
for (int num: arr) {
for (int i = 0; i < 500000; i++) {
print(num)
}
}
In each outer-loop iteration, the inner loop performs 500,000 prints. That is a constant amount of work, so it costs O(1), since constant factors are eliminated. The outer loop iterates n times, which gives a time complexity of O(1 · n) = O(n).
This algorithm has a time complexity of O(n).
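A quick Python sketch makes the point; we use 500 instead of 500,000 just to keep it fast, since any constant behaves the same:

```python
INNER = 500  # stand-in for 500,000; the exact constant doesn't change Big O

def total_ops(arr):
    count = 0
    for num in arr:
        for _ in range(INNER):
            count += 1  # one O(1) print per inner iteration
    return count

# Doubling the input doubles the work: the constant inner loop drops out.
print(total_ops(range(4)) / total_ops(range(2)))  # 2.0
```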
Example 3:
for (int num: arr) {
for (int num2: arr) {
print(num * num2)
}
}
In each inner-loop iteration, one multiplication and one print are performed, both costing O(1).
The inner loop runs n times, which means each outer-loop iteration costs O(n). The outer loop runs n times, which gives a time complexity of O(n · n) = O(n²).
This algorithm has a time complexity of O(n²).
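The quadratic growth is easy to verify with a Python sketch that counts the inner-loop operations:

```python
def pair_ops(arr):
    count = 0
    for num in arr:
        for num2 in arr:
            count += 1  # one multiply-and-print per inner iteration
    return count

print(pair_ops(range(10)))  # 100 = 10²
print(pair_ops(range(20)))  # 400: doubling n quadruples the work
```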
Example 4:
for (int num: arr) {
print(num)
}
for (int num: arr) {
print(num)
}
for (int num: arr2) {
print(num)
}
The first two for loops each cost O(n), where n is the length of arr, whereas the final for loop costs O(m), where m is the length of arr2. This gives a time complexity of O(2n + m) = O(n + m).
This algorithm has a time complexity of O(n+m).
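Counting the prints in Python confirms that sequential loops add up (the array sizes here are arbitrary):

```python
def sequential_ops(arr, arr2):
    count = 0
    for num in arr:   # first loop: n iterations
        count += 1
    for num in arr:   # second loop: n iterations
        count += 1
    for num in arr2:  # third loop: m iterations
        count += 1
    return count

print(sequential_ops(range(100), range(30)))  # 2*100 + 30 = 230
```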
Logarithmic time
A logarithm is the inverse operation of exponentiation. The time complexity O(log n) is called logarithmic time and is extremely fast. A common related complexity is O(n · log n), which is fast for most problems and is also the time complexity of efficient sorting algorithms.
Typically, the base of the logarithm will be 2. This means that if your input has size n, then the algorithm will perform x operations, where 2^x = n. However, the base of the logarithm doesn't actually matter for Big O.
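That is because logarithms of different bases differ only by a constant factor, as a quick Python check shows:

```python
import math

# The ratio log2(n) / log10(n) is the same for every n, so switching
# bases only multiplies the operation count by a constant, which Big O ignores.
for n in [100, 10_000, 1_000_000]:
    print(math.log2(n) / math.log10(n))  # ≈ 3.3219 every time
```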
O(log n) means that somewhere in your algorithm, the input is reduced by a constant fraction at every step. A good example of this is binary search, where we initially consider the entire input (n elements). At each step, we cut the search space in half, which gives us a logarithmic time complexity.
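A minimal binary search in Python that counts its own steps; for a million sorted elements it never needs more than 20 steps, since 2^20 > 1,000,000:

```python
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2  # halve the search space each step
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
idx, steps = binary_search(data, 999_999)
print(idx, steps)  # found in at most 20 steps
```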