This guide walks through the three common workflows: single benchmark runs, size series, and crossover analysis.
All commands assume you have already run a build
(e.g. `scripts/build.sh -t Release --test`).
Tip: Prefer `--profile ci`, `--profile series`, `--profile crossover`, or `--profile deep` for repeatable named workflows, then override only the few knobs you intentionally want to change.
Run the executable directly specifying a size and number of runs, or use a canonical named profile:
```shell
./build/hashbrowns --profile ci --out-format json --output benchmark_single.json
```

You’ll see a colorful banner followed by benchmark progress:
```text
🥔 hashbrowns - C++ Data Structure Benchmarking Suite
======================================================
=== Benchmark Results (avg ms over 5 runs, size=10000) ===
- array: insert=712.99, search=16337.3, remove=199231, mem=655416 bytes
- slist: insert=336.687, search=57940.9, remove=79.0626, mem=480104 bytes
- hashmap: insert=1099.77, search=198.998, remove=142.816, mem=786664 bytes
Saved JSON to: benchmark_single.json
```
Key points:

- `--structures` picks the set; omit it to use all defaults (array, slist, dlist, hashmap).
- `--out-format csv` would emit a CSV instead.
- `--memory-tracking` includes allocation deltas (the `mem=` values shown).
- `--bootstrap 200` computes 95% confidence intervals for statistical rigor.
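The confidence intervals produced by `--bootstrap` can be understood as a percentile bootstrap over the per-run timings. Here is a minimal sketch of that idea in Python; it is an illustration of the technique, not the tool's actual implementation, and the sample timings are made up:

```python
import random

def bootstrap_ci(samples, resamples=200, alpha=0.05):
    """Percentile-bootstrap confidence interval for the mean of per-run timings."""
    n = len(samples)
    means = []
    for _ in range(resamples):
        # Resample the runs with replacement and record the resampled mean.
        draw = [random.choice(samples) for _ in range(n)]
        means.append(sum(draw) / n)
    means.sort()
    lo = means[int((alpha / 2) * resamples)]
    hi = means[int((1 - alpha / 2) * resamples) - 1]
    return lo, hi

# Illustrative per-run insert timings (ms); not real measurements.
insert_ms = [712.1, 715.8, 710.4, 713.6, 714.0]
low, high = bootstrap_ci(insert_ms)
print(f"mean={sum(insert_ms)/len(insert_ms):.2f} ms, 95% CI=({low:.2f}, {high:.2f})")
```

With only 5 runs the interval is wide; more `--runs` tightens it.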
Minimal CSV example:
```shell
./build/hashbrowns --size 10000 --runs 8 --output results/csvs/benchmark_results.csv
```

Use a linear series of sizes up to a max (given by `--size`) with `--series-count`:
```shell
./build/hashbrowns --profile series
```

This produces 6 evenly spaced sizes between a small floor and 60,000. For explicit sizes:
```shell
./build/hashbrowns --series-sizes 1024,4096,8192,16384,32768,65536 --series-runs 2 --out-format csv \
  --series-out results/csvs/series_results.csv
```

JSON series output contains an array of per-size measurements plus a `meta` block including `runs_per_size`, `seed`, and the selected profile.
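A consumer of the series JSON might look like the following sketch. The `meta` keys (`runs_per_size`, `seed`, `profile`) come from the description above; the surrounding shape (a `series` array with `size` and timing keys) is an assumption for illustration, not the documented schema:

```python
import json

# Hypothetical document shape; only the meta field names are taken
# from the docs, the "series"/"size"/"insert_ms" keys are assumed.
doc = json.loads("""
{
  "meta": {"runs_per_size": 2, "seed": 12345, "profile": "series"},
  "series": [
    {"size": 1024, "structure": "hashmap", "insert_ms": 0.9},
    {"size": 4096, "structure": "hashmap", "insert_ms": 4.1}
  ]
}
""")

print("profile:", doc["meta"]["profile"], "| runs/size:", doc["meta"]["runs_per_size"])
for row in doc["series"]:
    print(f"  size={row['size']:>6}  insert={row['insert_ms']} ms")
```

Check the real output of your build before relying on any key names.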
Wizard alternative (interactive prompts):
```shell
./build/hashbrowns --wizard
```

Estimate sizes where one structure overtakes another for each operation:
```shell
./build/hashbrowns --profile crossover
```

Optional time budget:
```shell
./build/hashbrowns --crossover-analysis --max-size 200000 --max-seconds 15 --runs 2 \
  --structures array,hashmap --out-format csv --output results/csvs/crossover_results.csv
```

Each crossover row captures an operation (`insert|search|remove`), a structure pair (`a,b`), and the estimated size at crossover. Use JSON for richer meta when publishing.
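Conceptually, a crossover size can be estimated by finding where the sign of the timing difference between two structures flips across the measured sizes, then interpolating. This sketch shows that idea with made-up numbers; it is an assumed approach, not necessarily how the tool computes its estimates:

```python
def estimate_crossover(sizes, times_a, times_b):
    """Return the interpolated size where structure a's time crosses b's,
    or None if the sign of (a - b) never flips across the measured sizes."""
    diffs = [a - b for a, b in zip(times_a, times_b)]
    for i in range(1, len(diffs)):
        if diffs[i - 1] == 0:
            return sizes[i - 1]
        if diffs[i - 1] * diffs[i] < 0:  # sign flip between consecutive sizes
            frac = abs(diffs[i - 1]) / (abs(diffs[i - 1]) + abs(diffs[i]))
            return sizes[i - 1] + frac * (sizes[i] - sizes[i - 1])
    return None

# Illustrative numbers: array search wins at small sizes,
# hashmap wins as size grows.
sizes = [1000, 10000, 100000]
array_search = [0.5, 6.0, 80.0]
hashmap_search = [1.0, 2.0, 3.0]
print("estimated crossover size:", estimate_crossover(sizes, array_search, hashmap_search))
```

Linear interpolation between neighboring measurements keeps the estimate cheap; more sizes near the flip give a tighter estimate.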
We provide an example parser at `scripts/example_parse.py`:

```shell
python3 scripts/example_parse.py results/csvs/benchmark_results.json
```

Sample output:
```text
Structure  Insert(ms)  Search(ms)  Remove(ms)  Memory(bytes)
array      12.34       4.11        8.22        65536
hashmap    7.91        5.02        6.10        98304
```
Add `--summary` to reduce output to a one-line aggregate,
or `--csv` to emit a condensed table for piping.
All JSON blobs contain `schema_version`; the parser checks
this first. If you upgrade the schema, adjust the script or add a
validation step (`scripts/validate_json_schema.py`, planned).
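The `schema_version` gate can be as simple as the sketch below. The field name comes from the docs above; the expected value and the `results` key are assumptions for illustration:

```python
import json

EXPECTED_SCHEMA = 1  # assumed value; match whatever your build actually emits

def load_results(text):
    """Parse a results blob, refusing any schema version we don't understand."""
    doc = json.loads(text)
    version = doc.get("schema_version")
    if version != EXPECTED_SCHEMA:
        raise ValueError(f"unsupported schema_version: {version!r} "
                         f"(expected {EXPECTED_SCHEMA})")
    return doc

doc = load_results('{"schema_version": 1, "results": []}')
print("schema ok, entries:", len(doc["results"]))
```

Failing fast on an unexpected version is safer than silently misreading renamed fields.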
For comparing benchmark runs over time:

- Pin the CPU or disable turbo on Linux (`--pin-cpu --no-turbo`).
- Fix the random seed: `--seed 12345`.
- Record the branch / commit (the `--version` flag is script friendly).
- Include bootstrap CIs for statistical confidence.
- Document environment meta (already embedded in JSON).
| Symptom | Suggestion |
|---|---|
| Very high variance | Increase `--runs`, enable `--bootstrap`, pin CPU, close background apps |
| HashMap probes > 5 | Reduce `--hash-load`, increase `--hash-capacity`, try chaining strategy |
| Long array remove times | Reduce max size or omit array from large sweeps |
| CSV missing memory columns | Add `--memory-tracking` |
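The probe threshold in the table matches the classic open-addressing rule of thumb: for linear probing, the expected probe count of a successful search is roughly `0.5 * (1 + 1/(1 - load))` (Knuth's estimate). This is a general textbook formula, not a measurement of this hashmap, but it explains why lowering `--hash-load` helps:

```python
# Knuth's estimate for linear probing, successful search.
# General open-addressing rule of thumb, not a measurement of this hashmap.
def expected_probes(load):
    return 0.5 * (1 + 1 / (1 - load))

for load in (0.5, 0.7, 0.9):
    print(f"load={load:.1f} -> ~{expected_probes(load):.1f} probes")
```

At a load factor around 0.9 the estimate already exceeds 5 probes, which is where the troubleshooting advice kicks in.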
Explore deeper memory & probe interpretation in
`memory_and_probes.md` (to be added), or build plots with:

```shell
scripts/run_benchmarks.sh --runs 8 --size 50000 --max-size 65536 --plots --yscale auto
```

Check your version:
```shell
./build/hashbrowns --version
# Output: hashbrowns 1.0.0 (git b792a4354ada)
```

Last updated: February 2026