Speed Up JavaScript Array Processing

Suneel Kumar
7 min read · Sep 18, 2023

Arrays are one of the most commonly used data structures in JavaScript. They allow you to store and access ordered collections of data. However, working with large arrays or performing complex operations can slow down your code. In this article, we’ll explore several techniques to optimize array performance in JavaScript.

Use Typed Arrays for Numeric Data

JavaScript arrays are flexible and can hold any data type. This flexibility comes at a cost — lookups and operations on normal arrays are slower compared to arrays that only contain a single data type.

If you’re storing numeric data like integers or floats, use typed arrays instead:

// Normal array 
const nums = [1, 2, 3, 4];

// Typed array
const uint8 = new Uint8Array([1, 2, 3, 4]);

Typed arrays can provide significant speedups for numeric data (figures around 2x are commonly cited, though the gain depends on the engine and workload). Some common typed arrays are:

  • Int8Array: 8-bit signed integers
  • Uint8Array: 8-bit unsigned integers
  • Uint8ClampedArray: 8-bit unsigned integers (clamped)
  • Int16Array: 16-bit signed integers
  • Uint16Array: 16-bit unsigned integers
  • Int32Array: 32-bit signed integers
  • Uint32Array: 32-bit unsigned integers
  • Float32Array: 32-bit floating point numbers
  • Float64Array: 64-bit floating point numbers

The main caveat is that typed arrays have a fixed length that must be set when they are created.
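For instance, a minimal sketch of the fixed-length behavior (the variable names are illustrative):

// Length is fixed at creation time
const samples = new Float64Array(4); // zero-filled: [0, 0, 0, 0]

samples.set([1.5, 2.5]); // bulk-copy values in, starting at index 0
samples[2] = 3.5;        // indexed assignment works as usual

// There is no push/pop; the length can never change
console.log(samples.length); // 4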

Use Array Methods Intelligently

Native array methods like map, filter and reduce are convenient abstractions that can sometimes mask slow implementations.

For example, using for loops can be faster than array.filter() in some cases:

// filter
const arr = [1, 2, 3, 4, 5];
const evensFiltered = arr.filter(num => num % 2 === 0);

// often faster: a plain for loop over the same array
const evens = [];
for (let i = 0; i < arr.length; i++) {
  if (arr[i] % 2 === 0) {
    evens.push(arr[i]);
  }
}

However, this approach requires more code and misses out on the expressiveness of array methods.

A better option is to keep the array method but optimize the predicate function passed to it:

// Array of numbers
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// Inline predicate, re-created every time this code runs
const suboptimalResult = numbers.filter(num => num % 2 === 0);
console.log("Suboptimal result:", suboptimalResult);

// Hoisted predicate, created once and reusable
const isEven = num => num % 2 === 0;
const optimizedResult = numbers.filter(isEven);
console.log("Optimized result:", optimizedResult);

Hoisting the predicate avoids re-creating the function on each run and gives the engine one stable function to optimize. The larger wins, however, come from making the predicate itself do less work per element, as the next example shows:

// Original function
function isPrime(num) {
  if (num < 2) return false; // 0 and 1 are not prime
  for (let i = 2; i < num; i++) {
    if (num % i === 0) {
      return false;
    }
  }
  return true;
}

console.time('original');
for (let i = 0; i < 10000; i++) {
  isPrime(i);
}
console.timeEnd('original');

// Optimized: precompute all primes below 10,000 once, up front
const primes = [];
for (let i = 2; i < 10000; i++) {
  let prime = true;
  // Checking divisors only up to sqrt(i) is enough
  for (let j = 2; j <= Math.sqrt(i); j++) {
    if (i % j === 0) {
      prime = false;
      break;
    }
  }
  if (prime) {
    primes.push(i);
  }
}

function isPrimeOptimized(num) {
  return primes.includes(num);
}

console.time('optimized');
for (let i = 0; i < 10000; i++) {
  isPrimeOptimized(i);
}
console.timeEnd('optimized');

Sample timings from four runs:
original: 15.976ms
optimized: 12.822ms
original: 11.15ms
optimized: 10.547ms
original: 15.938ms
optimized: 8.162ms
original: 8.605ms
optimized: 8.199ms

The key optimizations:

  • Precompute a list of primes up to 10,000
  • Use a more efficient sqrt loop instead of checking all previous numbers
  • Check against the precomputed list instead of recalculating

This reduces the execution time significantly. Other possible optimizations:

  • Use a Set instead of an array for faster lookup (see the sketch after this list)
  • Use web workers for parallel execution
  • Implement memoization to cache results
  • Use asm.js or WebAssembly for performance critical sections
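For instance, the Set lookup is a one-line change (a minimal sketch reusing the primes array from above):

// Set.has is O(1) on average; Array.includes scans the whole array
const primeSet = new Set(primes);

function isPrimeSet(num) {
  return primeSet.has(num);
}

console.log(isPrimeSet(7919)); // true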

The main ideas are the same as in other languages: precompute, use better data structures, parallelize, and cache results wherever possible. All of these make the predicate function cheaper to execute in JavaScript.

Always profile before optimizing array methods — in many cases, the built-in implementations are highly optimized already.

Limit Array Mutations

Mutating methods that shift element positions, such as unshift and splice, have to do more work per operation than methods that only touch the end of the array, like push and pop.

For example, inserting at the start with unshift requires re-indexing all existing elements:

const arr = [1, 2, 3];

// Mutates the original and re-indexes every element
arr.unshift(0);

// Builds a new array instead
const newArr = [0, ...arr];

The spread version still copies every element, so it is not automatically faster; its advantage is leaving the original untouched. Benchmark both approaches for large arrays.

Some other tips:

  • Use concat or the spread operator to combine arrays when you want a new array, instead of pushing elements one by one into a copy (see the sketch after this list)
  • Use slice plus the spread operator instead of splice to insert or remove elements without mutating
  • Use slice instead of pop/shift if you don't need the removed elements
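In code, those tips look like this (a small sketch; the arrays and index are illustrative):

const arr = [1, 2, 3];
const arr2 = [4, 5];

// Combine into a new array instead of pushing element by element
const combined = arr.concat(arr2); // or [...arr, ...arr2]

// Insert 99 at index 1 without splicing the original
const inserted = [...arr.slice(0, 1), 99, ...arr.slice(1)]; // [1, 99, 2, 3]

// Drop the first element without shift
const rest = arr.slice(1); // [2, 3]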

Immutable data leads to simpler code and more predictable performance. Copy arrays selectively when mutating to limit performance impacts.

Parallelize Independent Work

Multi-core CPUs and web workers make it possible to parallelize array operations for faster processing.

For example, mapping over an array to transform its elements can be parallelized — but be careful how. Simply wrapping the callback in a Promise does not do it:

// Serial mapping
const squareSerial = arr.map(num => num * num);

// NOT parallel — promise executors run synchronously on the main thread
const squarePromises = arr.map(num => {
  return new Promise(resolve => {
    resolve(num * num);
  });
});

Promise.all(squarePromises).then(squares => {
  // squares ready, but every multiplication still ran on one core
});

Promises only coordinate work that is already asynchronous; they do not move synchronous computation onto other cores. True parallelism requires splitting the array across web workers, as sketched at the end of this section.

Other parallelization options:

  • Web workers — farm out array work to other threads
  • Third-party libraries like p-map — utilities for mapping with controlled concurrency (concurrency for async work, not CPU parallelism)

Parallelizing requires more code, so only apply it for expensive operations. Test parallel and serial versions to compare speeds.
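Here is a minimal sketch of a worker-based parallel map, assuming a worker script at './square-worker.js' (the file name and the squaring task are illustrative):

// main.js — split the array into chunks and map each chunk in its own worker
function parallelMap(arr, workerUrl, numWorkers = navigator.hardwareConcurrency || 4) {
  const chunkSize = Math.ceil(arr.length / numWorkers);
  const tasks = [];
  for (let i = 0; i < arr.length; i += chunkSize) {
    const chunk = arr.slice(i, i + chunkSize);
    tasks.push(new Promise((resolve, reject) => {
      const worker = new Worker(workerUrl);
      worker.onmessage = (e) => { resolve(e.data); worker.terminate(); };
      worker.onerror = reject;
      worker.postMessage(chunk);
    }));
  }
  // Promise.all preserves chunk order, so flat() reassembles the result
  return Promise.all(tasks).then(results => results.flat());
}

// square-worker.js — squares each number in the chunk it receives
// onmessage = (e) => postMessage(e.data.map(num => num * num));

parallelMap([1, 2, 3, 4, 5, 6, 7, 8], './square-worker.js')
  .then(squares => console.log(squares)); // [1, 4, 9, 16, 25, 36, 49, 64]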

Limit Array Copies

Copying large arrays can be slow due to memory allocation and copying costs.

This is often an issue with chained array operations:

const processed = arr
  .filter(f)
  .map(m)
  .slice(10)
  .concat(arr2); // Copies at every step

To avoid the intermediate copies, fuse the operations into a single loop:

const processed = [];
let kept = 0;
for (let i = 0; i < arr.length; i++) {
  if (f(arr[i])) {
    // Skip the first 10 matches to mirror .slice(10)
    if (kept++ >= 10) {
      processed.push(m(arr[i]));
    }
  }
}

processed.push(...arr2); // Single copy

Chaining is often more readable, so only optimize where performance is critical.

Other tips:

  • Reuse existing arrays instead of creating new ones where possible
  • Use structural sharing libraries like Immutable.js to efficiently reuse array memory
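A minimal sketch with Immutable.js (assumes the immutable package is installed):

import { List } from 'immutable';

const list = List([1, 2, 3]);
const list2 = list.push(4); // returns a new List; `list` is untouched

// Structural sharing lets both lists reuse most of the same memory
console.log(list.size);  // 3
console.log(list2.size); // 4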

Filter Early, Map Late

When applying both filtering and mapping, put the filter first:

// Less efficient: maps every element, then discards some
const mappedFiltered = arr.map(m).filter(f);

// More efficient: maps only the elements that survive the filter
const filteredMapped = arr.filter(f).map(m);

Filtering first reduces the number of elements that need mapping, and if the filter removes everything, the map costs almost nothing.

Note that this ordering assumes the predicate can work on the un-mapped values; when the predicate needs the mapped values instead, you have to map first.

Pre-Allocate Array Length

When building up an array dynamically with push, pre-allocate the estimated length:

// No length pre-allocation
const grown = [];
for (let i = 0; i < 100; i++) {
  grown.push(i);
}

// Pre-allocated length
const preallocated = new Array(100);
for (let i = 0; i < 100; i++) {
  preallocated[i] = i;
}

Pre-allocation eliminates incremental reallocation and copying as the array grows. One caveat: in some engines (V8, for example) new Array(n) starts out as a "holey" array, which can make element access slower until it is filled, so measure before adopting this pattern.

This can also be applied when you know the length upfront:

// Built from a literal
const literal = [1, 2, 3];

// Pre-allocated, then filled
const prealloc = new Array(3).fill(null);
prealloc[0] = 1;
prealloc[1] = 2;
prealloc[2] = 3;

Typed arrays are always allocated at a fixed length, so they get this benefit automatically.

Just be careful not to preallocate way more than you need, as it wastes memory.

Use Workers for Expensive Operations

Offloading expensive array operations like filtering, mapping and reductions to web workers can greatly improve UI responsiveness.

For example:

// Main UI thread
const worker = new Worker('worker.js');

worker.postMessage(arr);

worker.onmessage = (e) => {
  const result = e.data;
  // ...
};

// Worker thread (worker.js)
onmessage = (e) => {
  const arr = e.data;
  const result = doExpensiveWork(arr);

  postMessage(result);
};

Workers allow computation to happen in a separate thread. Data is passed between threads via postMessage, which copies it using the structured clone algorithm.

Consider workers if:

  • There are many array elements
  • Mapping, filtering or reducing over the array takes > 100–200ms
  • The UI thread is blocked during the operation

Keep in mind there is a thread switch overhead, so only apply workers to expensive computations.
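The postMessage copy itself can be costly for large arrays. A minimal sketch of one mitigation: transfer the underlying ArrayBuffer instead of cloning it (the sending thread loses access to the buffer):

// Transfer the buffer rather than copying it
const data = new Float64Array(1_000_000);
worker.postMessage(data.buffer, [data.buffer]);

// The buffer now belongs to the worker; it is detached here
console.log(data.length); // 0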

Avoid Recomputing Derived Data

When deriving new arrays from existing ones, cache derived arrays to avoid recomputing them redundantly.

For example, you may filter an array and then map the filtered array:

// Unoptimized
function getFilteredMappedData(arr) {
  const filtered = arr.filter(f); // Expensive filter runs on every call

  return filtered.map(m);
}

// Optimized
let filtered; // Cache shared across calls

function getFilteredMappedData(arr) {
  if (!filtered) {
    filtered = arr.filter(f);
  }

  return filtered.map(m);
}

Caching the filtered array avoids re-filtering every time the function is called. Note that this simple cache never invalidates and ignores which array was passed in, so it only suits a single, stable input.
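A safer variant, sketched below, keys the cache by the input array with a WeakMap, so each array gets its own cached result and entries are garbage-collected along with their arrays:

// Cache the expensive filter result per input array
const filteredCache = new WeakMap();

function getFilteredMappedData(arr) {
  if (!filteredCache.has(arr)) {
    filteredCache.set(arr, arr.filter(f));
  }
  return filteredCache.get(arr).map(m);
}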

More complex memoization libraries like Lodash’s _.memoize can automate caching for arbitrary functions.

Defer One-Time Setup Costs

Some array operations have an initial “setup” cost that can be deferred or amortized.

For example, locale-aware string sorting needs a collator, and constructing one is expensive. If you need to sort by name multiple times, do that setup once and reuse it:

// Slow — constructs a new Intl.Collator for every single comparison
function process(arr) {
  const sortSlow = list =>
    [...list].sort((a, b) => new Intl.Collator('en').compare(a.name, b.name));

  const arr1 = sortSlow(arr);
  const arr2 = sortSlow(arr);
}

// Faster — one-time collator setup, shared by every comparison and sort
const collator = new Intl.Collator('en');
const byName = (a, b) => collator.compare(a.name, b.name);

function process(arr) {
  const arr1 = [...arr].sort(byName);
  const arr2 = [...arr].sort(byName);
}

The same applies for functions, regular expressions, and other constructs with one-time setup costs.
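Regular expressions are the classic case. A small sketch (the pattern is illustrative):

// Slow: builds a new RegExp object for every element
const matchesSlow = arr.filter(s => new RegExp('^item-\\d+$').test(s));

// Faster: the pattern is compiled once, outside the callback
const itemPattern = /^item-\d+$/;
const matchesFast = arr.filter(s => itemPattern.test(s));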

Summary

Optimizing array performance requires a mix of algorithmic knowledge, profiling, and understanding of JavaScript internals. Key takeaways:

  • Use typed arrays for better numeric performance
  • Intelligently choose array methods based on algorithms
  • Limit array mutations and copies
  • Parallelize independent operations where possible
  • Filter early, map late when pairing operations
  • Pre-allocate array length if known
  • Use web workers for long-running tasks
  • Cache derived arrays instead of recomputing
  • Amortize one-time setup costs like sorting

Not all optimizations will apply in every situation. Measure performance before and after applying optimizations to ensure there is a real benefit. With care, arrays in JavaScript can be processed efficiently even at large scale.
