JavaScript Algorithm

MD ARIF HOSSAIN
9 min read · May 9, 2020



Topics:

1. What is an Algorithm?
2. Factorial, a math algorithm
3. Fibonacci, a math algorithm
4. Linear Search
5. Binary Search
6. Bubble Sort
7. Selection Sort
8. Insertion Sort
9. Merge Sort
10. AVL Tree

What is an Algorithm?

An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions. A computer program can be viewed as an elaborate algorithm. In mathematics and computer science, an algorithm usually means a small procedure that solves a recurrent problem.

Factorial, a math algorithm

In mathematics, the factorial of a positive integer n, denoted by n!, is the product of all positive integers less than or equal to n:

n! = n × (n − 1) × (n − 2) × (n − 3) × ⋯ × 3 × 2 × 1

For example,

5 ! = 5 × 4 × 3 × 2 × 1 = 120

Note: The value of 0! is 1.

Let’s see an example in JavaScript language:

function factorial(number) {
  let result = 1
  for (let i = 2; i <= number; i++) {
    result *= i
  }
  return result
}
console.log(factorial(5)) // 120

Let’s see this example in ES6 arrow function with recursive way:

// Factorial in a recursive way
const factorialRecursive = num => (num > 1 ? num * factorialRecursive(num - 1) : 1)
console.log(factorialRecursive(5)) // 120

Fibonacci, a math algorithm

In mathematics, the Fibonacci numbers, commonly denoted Fₙ, form a sequence, called the Fibonacci sequence, such that each number is the sum of the two preceding ones, starting from 0 and 1. That is,

F₀ = 0, F₁ = 1,

and

Fₙ = Fₙ₋₁ + Fₙ₋₂, for n > 1.

The beginning of the sequence is thus:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

In some older books, the value F₀ = 0 is omitted, so that the sequence starts with F₁ = F₂ = 1, and the recurrence Fₙ = Fₙ₋₁ + Fₙ₋₂ is valid for n > 2.

Let’s see an example in JavaScript language:

function fibonacci(number) {
  let a = 1, b = 0, temp
  let series = [0]
  while (number >= 0) {
    temp = a
    a = a + b
    b = temp
    series.push(b)
    number--
  }
  return series
}
console.log(fibonacci(5)) // [0, 1, 1, 2, 3, 5, 8]

To know the nth number value in fibonacci value:

// Fibonacci nth
function fibonacciNth(n) {
  let currentValue = 1
  let previousValue = 0
  if (n === 1) {
    return 1
  }
  let iterationsCounter = n - 1
  while (iterationsCounter) {
    currentValue += previousValue
    previousValue = currentValue - previousValue
    iterationsCounter -= 1
  }
  return currentValue
}
console.log(fibonacciNth(3)) // 2

Linear Search

In computer science, a linear search or sequential search is a method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched.

A linear search runs in at worst linear time and makes at most n comparisons, where n is the length of the list. If each element is equally likely to be searched, then linear search has an average case of (n+1)/2 comparisons, but the average case can be affected if the search probabilities for each element vary. Linear search is rarely practical because other search algorithms and schemes, such as the binary search algorithm and hash tables, allow significantly faster searching for all but short lists.

Look at this example:

let colors = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]

const linearSearch = (arr, elementToFind) => {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === elementToFind) {
      return i
    }
  }
  return null
}

const result1 = linearSearch(colors, "black") // null
const result2 = linearSearch(colors, "yellow") // 2
console.log(result1)
console.log(result2)

Time Complexity: O(n) - since in the worst case we check each element exactly once.

Binary Search

In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.

Binary search runs in logarithmic time in the worst case, making O ( log ⁡ n ) comparisons, where n is the number of elements in the array, the O is Big O notation, and log is the logarithm. Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than binary search. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array.

Let’s see an example:

let colors = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]
let sortedColors = colors.sort()
// ["blue", "green", "indigo", "orange", "red", "violet", "yellow"]

function binarySearch(sortedArray, elementToFind) {
  let lowIndex = 0
  let highIndex = sortedArray.length - 1
  while (lowIndex <= highIndex) {
    const midIndex = Math.floor((lowIndex + highIndex) / 2)
    if (sortedArray[midIndex] === elementToFind) {
      return midIndex
    } else if (sortedArray[midIndex] < elementToFind) {
      lowIndex = midIndex + 1
    } else {
      highIndex = midIndex - 1
    }
  }
  return null
}
const result1 = binarySearch(sortedColors, "black")
const result2 = binarySearch(sortedColors, "red")
console.log(result1) // null
console.log(result2) // 4

Time Complexity: O(log(n)) - since we halve the search area on every iteration.

Bubble Sort

Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted. The algorithm, which is a comparison sort, is named for the way smaller or larger elements “bubble” to the top of the list.

Visual example:

Bubble Sort (From Google)

Time Complexity:

O(n) - Best.

O(n²) - Average.

O(n²) - Worst.

Bubble sort is a stable sort.
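The pass-by-pass description above can be sketched in JavaScript. The early-exit `swapped` flag is an optimization added in this sketch; it is what gives the O(n) best case on input that is already sorted:

```javascript
function bubbleSort(arr) {
  const a = [...arr] // sort a copy, leave the input untouched
  for (let i = 0; i < a.length - 1; i++) {
    let swapped = false
    // after each pass the largest remaining element has "bubbled" to the end
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]
        swapped = true
      }
    }
    if (!swapped) break // no swaps means the array is already sorted
  }
  return a
}
console.log(bubbleSort([5, 1, 4, 2, 8])) // [1, 2, 4, 5, 8]
```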

Selection Sort

In computer science, selection sort is an in-place comparison sorting algorithm. It has an O(n²) time complexity, which makes it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.

The algorithm divides the input list into two parts: a sorted sublist of items which is built up from left to right at the front (left) of the list and a sublist of the remaining unsorted items that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries one element to the right.

The time efficiency of selection sort is quadratic, so there are a number of sorting techniques which have better time complexity than selection sort. One thing which distinguishes selection sort from other sorting algorithms is that it makes the minimum possible number of swaps, n − 1 in the worst case.

Visual example:

Selection Sort (From Google)

Time Complexity:

O(n²) — Best.

O(n²) — Average.

O(n²) — Worst.

Selection sort is not a stable sort.
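A minimal sketch of the sorted-prefix/unsorted-suffix scheme described above; note that it performs at most n - 1 swaps:

```javascript
function selectionSort(arr) {
  const a = [...arr] // sort a copy, leave the input untouched
  for (let i = 0; i < a.length - 1; i++) {
    // find the smallest element in the unsorted suffix a[i..]
    let minIndex = i
    for (let j = i + 1; j < a.length; j++) {
      if (a[j] < a[minIndex]) minIndex = j
    }
    // swap it into the leftmost unsorted position
    if (minIndex !== i) {
      [a[i], a[minIndex]] = [a[minIndex], a[i]]
    }
  }
  return a
}
console.log(selectionSort([64, 25, 12, 22, 11])) // [11, 12, 22, 25, 64]
```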

Insertion Sort

Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages:

  • Efficient for (quite) small data sets, much like other quadratic sorting algorithms
  • More efficient in practice than most other simple quadratic (for example, O(n²)) algorithms such as selection sort or bubble sort
  • Adaptive: Efficient for data sets that are already substantially sorted: the time complexity is O(kn) when each element in the input is no more than k places away from its sorted position
  • Stable: Does not change the relative order of elements with equal keys
  • In-place: Only requires a constant amount O(1) of additional memory space
  • Online: Can sort a list as it receives it

When people manually sort cards in a bridge hand, most use a method that is similar to insertion sort.

Visual example:

Insertion Sort (From Google)

Time Complexity:

O(n) — Best.

O(n²) — Average.

O(n²) — Worst.

Insertion sort is a stable sort.
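A minimal sketch of the card-sorting idea above; on an already-sorted array the inner `while` loop never runs, which is where the O(n) best case comes from:

```javascript
function insertionSort(arr) {
  const a = [...arr] // sort a copy, leave the input untouched
  for (let i = 1; i < a.length; i++) {
    const current = a[i]
    let j = i - 1
    // shift every larger element of the sorted prefix one slot right
    while (j >= 0 && a[j] > current) {
      a[j + 1] = a[j]
      j--
    }
    a[j + 1] = current // drop the current element into its slot
  }
  return a
}
console.log(insertionSort([12, 11, 13, 5, 6])) // [5, 6, 11, 12, 13]
```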

Merge Sort

In computer science, merge sort (also commonly spelled mergesort) is an efficient, general-purpose, comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the implementation preserves the input order of equal elements in the sorted output. Mergesort is a divide and conquer algorithm that was invented by John von Neumann in 1945.

An example of merge sort. First divide the list into the smallest unit (1 element), then compare each element with the adjacent list to sort and merge the two adjacent lists. Finally all the elements are sorted and merged.

Visual example:

Merge Sort (From Google)

A recursive merge sort algorithm used to sort an array of 5 integer values. These are the steps a human would take to emulate merge sort (top-down).

From Google

Time Complexity:

O(n log(n)) — Best.

O(n log(n)) — Average.

O(n log(n)) — Worst.

Merge sort is a stable sort.
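The divide-and-merge steps above can be sketched as follows; using `<=` when comparing the two heads is what keeps the sort stable, since ties prefer the left half and equal elements keep their original order:

```javascript
// Merge sort: split the array in half, sort each half recursively,
// then merge the two sorted halves.
function mergeSort(arr) {
  if (arr.length <= 1) return arr // a single element is already sorted
  const mid = Math.floor(arr.length / 2)
  const left = mergeSort(arr.slice(0, mid))
  const right = mergeSort(arr.slice(mid))
  return merge(left, right)
}

function merge(left, right) {
  const result = []
  let i = 0, j = 0
  // repeatedly take the smaller head element of the two sorted halves
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) result.push(left[i++])
    else result.push(right[j++])
  }
  // one of the halves may still have leftover elements
  return result.concat(left.slice(i)).concat(right.slice(j))
}
console.log(mergeSort([38, 27, 43, 3, 9, 82, 10])) // [3, 9, 10, 27, 38, 43, 82]
```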

AVL Tree

In computer science, an AVL tree (named after inventors Adelson-Velsky and Landis) is a self-balancing binary search tree. It was the first such data structure to be invented. In an AVL tree, the heights of the two child subtrees of any node differ by at most one; if at any time they differ by more than one, rebalancing is done to restore this property. Lookup, insertion, and deletion all take O(log n) time in both the average and worst cases, where n is the number of nodes in the tree prior to the operation. Insertions and deletions may require the tree to be rebalanced by one or more tree rotations.

Visual example:

AVL tree (From Google)

Let’s try an example:

The following keys are inserted into an empty tree in the given order: 56, 12, 70, 5, 10, 33, 93, 65, 24, 48, 8, 67. Construct the AVL tree using these numbers.

Solve:
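The construction can be checked programmatically. Below is a minimal AVL sketch (the `Node` class and helper names are illustrative, not from the exercise) that inserts the keys in the given order and rebalances with the four standard rotations (LL, RR, LR, RL); an in-order traversal of the result yields the keys in ascending order:

```javascript
class Node {
  constructor(key) {
    this.key = key
    this.left = null
    this.right = null
    this.height = 1 // a leaf has height 1
  }
}

const height = node => (node ? node.height : 0)
const balance = node => (node ? height(node.left) - height(node.right) : 0)
const update = node => {
  node.height = 1 + Math.max(height(node.left), height(node.right))
  return node
}

function rotateRight(y) {
  const x = y.left
  y.left = x.right
  x.right = y
  update(y)
  return update(x)
}

function rotateLeft(x) {
  const y = x.right
  x.right = y.left
  y.left = x
  update(x)
  return update(y)
}

function insert(node, key) {
  if (!node) return new Node(key)
  if (key < node.key) node.left = insert(node.left, key)
  else node.right = insert(node.right, key)
  update(node)

  // rebalance if this node's subtrees now differ in height by more than 1
  const bf = balance(node)
  if (bf > 1 && key < node.left.key) return rotateRight(node)                    // LL
  if (bf < -1 && key > node.right.key) return rotateLeft(node)                   // RR
  if (bf > 1) { node.left = rotateLeft(node.left); return rotateRight(node) }    // LR
  if (bf < -1) { node.right = rotateRight(node.right); return rotateLeft(node) } // RL
  return node
}

const keys = [56, 12, 70, 5, 10, 33, 93, 65, 24, 48, 8, 67]
let root = null
for (const key of keys) root = insert(root, key)

// in-order traversal of a binary search tree yields the keys in sorted order
const inOrder = node =>
  node ? [...inOrder(node.left), node.key, ...inOrder(node.right)] : []
console.log(inOrder(root)) // [5, 8, 10, 12, 24, 33, 48, 56, 65, 67, 70, 93]
```

With 12 keys the height of the resulting tree is at most 5, and every node's balance factor stays within ±1, which is exactly the AVL property the rotations maintain.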

That’s all for today.
