That’s why I created mitata; it greatly improves on JavaScript (micro-)benchmarking tooling.
It provides a bunch of features to help avoid JIT optimization foot-guns during benchmarking, and dips into more advanced territory like hardware CPU counters to show what the end result of the JIT is on the CPU.
Yeah, I’ve hit those OOM issues with vitest before too. Mitata’s time budget + sample approach sounds like a solid way to keep things simple while avoiding those long GC pauses. Excited to give it a try on my own benchmarks!
I didn’t want it to be complex, so it uses a simple time budget plus a minimum number of samples; both (and more) can be configured via the lower-level API.
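The core idea can be sketched roughly like this. This is a minimal illustration of a time-budget + minimum-samples loop, not mitata’s actual implementation; the function name and default values here are made up:

```javascript
// Sketch of a "time budget + at least N samples" benchmark loop.
// Hypothetical names/defaults; not mitata's real code.
function measure(fn, { budgetMs = 500, minSamples = 128 } = {}) {
  const samples = [];
  const start = performance.now();
  // Keep sampling until BOTH the time budget is spent
  // AND we have collected at least minSamples measurements.
  while (
    samples.length < minSamples ||
    performance.now() - start < budgetMs
  ) {
    const t0 = performance.now();
    fn();
    samples.push(performance.now() - t0); // per-iteration duration in ms
  }
  return samples;
}

const samples = measure(() => Math.sqrt(12345), {
  budgetMs: 25,
  minSamples: 32,
});
console.log(samples.length); // at least 32, more if the budget allowed
```

The `minSamples` floor guards against a long-running function eating the whole budget in a handful of iterations, while the budget caps total runtime for fast functions.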
In practice, I haven’t found any JS function that keeps getting faster after mitata’s time budget is exhausted (excluding CPU clock speed ramping up under continuous workload).
Another problem is that garbage collection can cause long pauses, producing big jumps in some runs and making the loop keep searching for the best result longer than necessary.
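One generic way to keep such GC spikes from skewing the summary (not necessarily what mitata does; this is a common technique) is to trim samples far above the upper quartile using the interquartile range:

```javascript
// Generic IQR-based outlier trimming for benchmark samples.
// Illustrative only; not mitata's actual strategy.
function trimOutliers(samples, k = 1.5) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank quantile lookup on the sorted samples.
  const q = (p) => sorted[Math.floor((sorted.length - 1) * p)];
  const iqr = q(0.75) - q(0.25);
  const upper = q(0.75) + k * iqr;
  // Drop samples far above the upper quartile (e.g. GC pauses).
  return sorted.filter((s) => s <= upper);
}

// The 50 ms spike (a simulated GC pause) gets dropped:
console.log(trimOutliers([1, 1.1, 0.9, 1.05, 50]));
```

Trimming only the high side is deliberate: GC pauses inflate samples, and anomalously *fast* samples are usually measurement noise worth keeping visible rather than discarding.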
I have been thinking of reusing or creating something like https://perf.rust-lang.org/ that lets you pick and compare specific commit hashes, with all data coming from the JSON output format.
It works anywhere JavaScript works, so you can easily run it in the browser too. Though the idea of making a jsbench-like website, but with mitata’s accuracy (+ dedicated runners), keeps bugging me.