JavaScript Reducers — More Than a Sum Function
Almost every introduction to Array.prototype.reduce looks the same: there’s an array of numbers, the function adds them up, and the post moves on. After you’ve read three of these, you walk away thinking reduce is just a verbose way to write +=, and most JavaScript I see in the wild treats it exactly like that.
It’s a much bigger tool than that. James Sinclair has a great long-form post on this that I keep coming back to. This post is my shorter take: the patterns I actually reach for, with examples close to muscle memory.
The mental model
reduce is a generalised “fold an array into a single value”. The “single value” can be any shape: a number, an object, an array, a function, a tree, etc.
arr.reduce(
(accumulator, currentItem) => /* return the next accumulator */,
initialValue
);
The initial value is the seed. Each item updates the accumulator. The final accumulator is the answer. That’s it.
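To make the seed-accumulator-answer loop concrete, here is the much-maligned sum one more time, with the accumulator traced at each step:
const total = [1, 2, 3].reduce((acc, n) => acc + n, 0);
// step 1: acc = 0, n = 1 → 1
// step 2: acc = 1, n = 2 → 3
// step 3: acc = 3, n = 3 → 6
// total === 6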
Pattern 1: Group-by
Group-by is one of the most common transforms in any data-shaped codebase. ES2024 finally added Object.groupBy, but reduce has done this for years and is still useful when you want a custom shape.
const orders = [
{ id: 1, status: "paid" },
{ id: 2, status: "pending" },
{ id: 3, status: "paid" },
{ id: 4, status: "cancelled" },
];
const byStatus = orders.reduce((acc, order) => {
(acc[order.status] ??= []).push(order);
return acc;
}, {} as Record<string, typeof orders>);
// {
// paid: [{ id: 1, ... }, { id: 3, ... }],
// pending: [{ id: 2, ... }],
// cancelled: [{ id: 4, ... }],
// }
The ??= (nullish-coalescing assignment) is the small trick that keeps this a readable one-liner: initialise the bucket on first sight, append every time after.
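For comparison, here is a sketch of the same grouping with the ES2024 built-in mentioned above; note that Object.groupBy returns a null-prototype object, and TypeScript types each bucket as possibly undefined:
const byStatusBuiltin = Object.groupBy(orders, (order) => order.status);
// same buckets as the reduce version, minus control over the accumulator's shape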
Pattern 2: Index-by
Same shape, different question — turn an array into a lookup map by ID.
const users = [
{ id: "u1", name: "Rohan" },
{ id: "u2", name: "Asha" },
{ id: "u3", name: "Vikram" },
];
const byId = users.reduce((acc, user) => {
acc[user.id] = user;
return acc;
}, {} as Record<string, (typeof users)[number]>);
byId["u2"]; // { id: "u2", name: "Asha" }
This is the move whenever you’re about to write users.find(u => u.id === someId) inside a loop. One reduce up front replaces each repeated O(n) scan with an O(1) map access.
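To see the payoff, a before/after sketch with a hypothetical orderIds list:
const orderIds = ["u3", "u1", "u3"]; // hypothetical: IDs we need to resolve
// before: every lookup scans the whole array
const slow = orderIds.map((id) => users.find((u) => u.id === id));
// after: every lookup is a single property access
const fast = orderIds.map((id) => byId[id]);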
Pattern 3: Counting / frequency tables
Counting occurrences is another one-liner. Useful for analytics, logging, building histograms.
const events = ["click", "scroll", "click", "click", "submit", "scroll"];
const counts = events.reduce(
(acc, e) => {
acc[e] = (acc[e] ?? 0) + 1;
return acc;
},
{} as Record<string, number>,
);
// { click: 3, scroll: 2, submit: 1 }
The same pattern works for any “tally things by category” task — words in a string, file extensions in a directory listing, status codes in a server log.
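For instance, the word-count version is the same reducer with a split in front; a quick sketch:
const text = "to be or not to be";
const wordCounts = text.split(/\s+/).reduce(
  (acc, word) => {
    acc[word] = (acc[word] ?? 0) + 1;
    return acc;
  },
  {} as Record<string, number>,
);
// { to: 2, be: 2, or: 1, not: 1 }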
Pattern 4: Flatten + transform in one pass
flatMap covers most cases, but if you need to filter, flatten, and transform in a single sweep, reduce is still the cleanest answer:
const folders = [
{ name: "blog", files: ["a.md", "b.md"] },
{ name: "drafts", files: [] },
{ name: "snippets", files: ["c.md", "d.md", "e.md"] },
];
// All non-empty file paths, prefixed with folder name
const allPaths = folders.reduce(
(acc, folder) => {
if (folder.files.length === 0) return acc;
return acc.concat(folder.files.map((f) => `${folder.name}/${f}`));
},
[] as string[],
);
// ["blog/a.md", "blog/b.md", "snippets/c.md", "snippets/d.md", "snippets/e.md"]
The traditional filter-then-flatMap chain is also fine; reduce wins when each step depends on state accumulated from previous items.
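To make “depends on accumulated state” concrete, here is a hypothetical variant that keeps only the first occurrence of each file name across folders, which no stateless filter/flatMap chain can express in one pass:
// stateless equivalent of the example above, for reference:
// folders.flatMap((folder) => folder.files.map((f) => `${folder.name}/${f}`));
const uniquePaths = folders.reduce(
  (acc, folder) => {
    for (const file of folder.files) {
      if (!acc.seen.has(file)) {
        acc.seen.add(file);
        acc.paths.push(`${folder.name}/${file}`);
      }
    }
    return acc;
  },
  { seen: new Set<string>(), paths: [] as string[] },
).paths;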
Pattern 5: Building a tree from a flat list
This is the one that originally sold me on reduce. Given a flat list of comments where each one has a parentId, build a nested tree.
type Comment = { id: string; parentId: string | null; text: string };
type Node = Comment & { children: Node[] };
function buildTree(comments: Comment[]): Node[] {
const byId = comments.reduce(
(acc, c) => {
acc[c.id] = { ...c, children: [] };
return acc;
},
{} as Record<string, Node>,
);
return comments.reduce((roots, c) => {
if (c.parentId === null) {
roots.push(byId[c.id]);
} else {
byId[c.parentId]?.children.push(byId[c.id]);
}
return roots;
}, [] as Node[]);
}
Two reduces: the first to build a lookup, the second to wire up parents and collect roots. No recursion, single pass per step. Works for arbitrary depths, including comments-on-comments-on-comments.
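A quick usage sketch with hypothetical data, three levels deep:
const thread: Comment[] = [
  { id: "c1", parentId: null, text: "First!" },
  { id: "c2", parentId: "c1", text: "Reply" },
  { id: "c3", parentId: "c2", text: "Reply to the reply" },
];
buildTree(thread);
// [{ id: "c1", ..., children: [{ id: "c2", ..., children: [{ id: "c3", ..., children: [] }] }] }]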
Pattern 6: Function composition
This is the brain-bending one. reduce over an array of functions gives you function composition.
const pipe =
  <T>(...fns: Array<(x: T) => T>) =>
  (input: T): T =>
    fns.reduce((acc, fn) => fn(acc), input);
const transform = pipe<string>(
(s) => s.trim(),
(s) => s.toLowerCase(),
(s) => s.replace(/\s+/g, "-"),
(s) => `slug-${s}`,
);
transform(" Hello World "); // "slug-hello-world"
The accumulator is the “running output”, and each function transforms it. Read the pipeline top-to-bottom and that’s the order things happen.
A pipe helper like this is a building block of functional libraries such as Ramda, fp-ts, and Lodash/fp. Having it in your back pocket means you don’t need any of them for simple cases.
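Its right-to-left sibling, compose, is the same fold run through reduceRight; a minimal sketch:
const compose =
  <T>(...fns: Array<(x: T) => T>) =>
  (input: T): T =>
    fns.reduceRight((acc, fn) => fn(acc), input);
// compose(f, g, h)(x) === f(g(h(x)))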
Pattern 7: Tiny state machines
I consider xstate the de facto choice for state machines, but when you don’t need that much, reduce models tiny state-machine transitions cleanly. Each event is folded into the running state.
type CounterEvent =
  | { type: "increment" }
  | { type: "decrement" }
  | { type: "set"; value: number };
const events: CounterEvent[] = [
  { type: "increment" },
  { type: "increment" },
  { type: "set", value: 10 },
  { type: "decrement" },
];
const finalState = events.reduce(
(state, event) => {
switch (event.type) {
case "increment":
return { ...state, count: state.count + 1 };
case "decrement":
return { ...state, count: state.count - 1 };
case "set":
return { ...state, count: event.value };
default:
return state;
}
},
{ count: 0 },
);
// { count: 9 }
This is, of course, exactly what React’s useReducer does — except instead of folding over a fixed array of events, it folds over events as they happen at runtime. Same primitive, different timing.
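A sketch of that symmetry: name the transition function and both consumers can share it (CounterEvent is the type defined above; the useReducer line assumes a surrounding React component):
const counterReducer = (state: { count: number }, event: CounterEvent) => {
  switch (event.type) {
    case "increment":
      return { ...state, count: state.count + 1 };
    case "decrement":
      return { ...state, count: state.count - 1 };
    case "set":
      return { ...state, count: event.value };
    default:
      return state;
  }
};
// replaying a recorded history:
const replayed = events.reduce(counterReducer, { count: 0 });
// reacting to events as they arrive (inside a component):
// const [state, dispatch] = useReducer(counterReducer, { count: 0 });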
When not to use reduce
reduce isn’t always the right answer. If a more specific operation exists, prefer it:
- Summing or averaging: still fine to use reduce, but arr.reduce((a, b) => a + b, 0) is no clearer than let sum = 0; for (const n of arr) sum += n;.
- Filtering: use .filter(). A reduce that conditionally pushes is just a verbose filter (see the sketch after this list).
- Mapping: use .map(). A reduce that always pushes a transformed value is just a verbose map.
- Finding the first match: use .find().
- Joining strings: use .join("").
The rough rule: if your reducer is only ever doing one of “push if condition”, “push transformed”, “return early”, reach for the dedicated method. If the accumulator’s shape differs from the input’s shape — that’s reduce country.
Closing Thoughts
reduce is the Swiss Army knife of array methods. It can do everything, which is exactly why it shouldn’t be used for everything. But the patterns above — group-by, index-by, frequency tables, stateful flattening, tree-building, pipeline composition, state machines — are all genuinely cleaner with reduce than without.