benchmarks

Benchmarks measure the per-value overhead of different pipeline abstractions. They use nano-benchmark.

Running benchmarks

Specify a benchmark file to run:

npm run bench -- bench/<name>.mjs
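
For example, to run the main comparison described below:

npm run bench -- bench/gen-fun-stream.mjs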

Benchmark files

bench/gen-fun-stream.mjs

Compares three ways to run the same pipeline of synchronous functions:

  • gen: the gen() async generator, consumed with for await.
  • fun: the fun() function, its result unwrapped with getManyValues().
  • stream: chain() with asStream(gen(...)), consumed as a Node stream.

This is the main benchmark for understanding the relative overhead of each abstraction.
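
A minimal sketch of the three styles under comparison, assuming gen, fun, chain, asStream, and getManyValues are all exported from the package root, that a composed pipeline is invoked with a single input value, and that the chain is an object-mode Node stream (check the actual API before relying on this):

// assumed exports; the actual import paths may differ
import {chain, gen, fun, asStream, getManyValues} from 'stream-chain';

const fns = [x => x - 2, x => x + 1, x => 2 * x, x => x + 2, x => x >> 1];

// gen: a composed async generator, consumed with for await
let genSum = 0;
for await (const v of gen(...fns)(10)) genSum += v;

// fun: a composed function; getManyValues() normalizes its result to an array
let funSum = 0;
for (const v of getManyValues(await fun(...fns)(10))) funSum += v;

// stream: the same generator wrapped as a Node stream
let streamSum = 0;
const s = chain([asStream(gen(...fns))]);
s.on('data', v => (streamSum += v));
s.on('end', () => console.log(genSum, funSum, streamSum));
s.end(10); // write one value, then close the writable side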

bench/gen-fun.mjs

A head-to-head comparison of gen() vs fun() without any stream machinery. It isolates the cost of the async generator protocol (for await) against a direct function call.

bench/gen-opt.mjs

Tests the function-list inlining optimization in gen():

  • simple list — flat gen(f1, f2, f3, f4, f5).
  • optimization on — nested gen(f1, gen(f2, f3, f4), f5) where the inner gen() is inlined automatically via function-list detection.
  • optimization off — same nesting but with clearFunctionList() preventing inlining.

The first two should be nearly identical in performance. The third shows the cost of not inlining (each nested gen() becomes a separate async generator).
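
A sketch of the three shapes, with the assumption that clearFunctionList() returns its argument with the function-list metadata stripped (it may instead mutate in place), and the same assumed import path as above:

// assumed exports; check the package's actual entry points
import {gen, clearFunctionList} from 'stream-chain';

const f1 = x => x - 2, f2 = x => x + 1, f3 = x => 2 * x,
      f4 = x => x + 2, f5 = x => x >> 1;

const flat = gen(f1, f2, f3, f4, f5);         // simple list
const inlined = gen(f1, gen(f2, f3, f4), f5); // inner gen() inlined automatically

const inner = clearFunctionList(gen(f2, f3, f4)); // metadata removed
const separate = gen(f1, inner, f5);              // stays a separate async generator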

bench/fun-opt.mjs

Same structure as gen-opt.mjs but for fun(). Tests function-list inlining with nested fun() pipelines.
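
The fun() version mirrors this under the same assumptions:

const flatFun = fun(f1, f2, f3, f4, f5);        // simple list
const nestedFun = fun(f1, fun(f2, f3, f4), f5); // inner fun() inlined via function-list detection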

What the benchmarks use

All benchmarks use a pipeline of simple synchronous arithmetic functions:

const fns = [x => x - 2, x => x + 1, x => 2 * x, x => x + 2, x => x >> 1];

This isolates framework overhead from application logic. Each benchmark iteration processes many values through the pipeline and accumulates a checksum to prevent dead-code elimination.
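
As a point of reference, a framework-free baseline of the same idea (the iteration count is illustrative):

const fns = [x => x - 2, x => x + 1, x => 2 * x, x => x + 2, x => x >> 1];

let checksum = 0;
for (let i = 0; i < 1_000_000; ++i) {
  let v = i;
  for (const f of fns) v = f(v); // plain composition: the floor any abstraction is measured against
  checksum += v;
}
console.log(checksum); // consuming the result keeps the JIT from eliminating the loop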
