ilusm.dev

bnch

Benchmarking utilities - timer with named laps, benchmark runner with warmup, suites, side-by-side comparison, microbenchmark ranking, function profiling, memory tracking, and export.

Load with: use bnch

What this module does

bnch measures how fast your ilusm code runs. Create a benchmark with bnew - a name, a function, and an iteration count - then call bnbru: it runs a warmup pass first (10% of iterations, discarded), times the full run, and returns total ms, per-op ms, and ops/sec.

Group benchmarks into a suite with bnbsu, compare ilusm against a reference implementation with bnbco, rank a set of operations fastest-first with bnbmi, or track per-function timing across multiple calls with the profiler. Results can be exported to JSON or HTML.

See also: bench module for a nearly identical API under slightly different function names.

Quick example

use bnch

# Time a single function
b = bnew("list sort", \() trl.srt([5,3,1,4,2]), 10000)
r = bnbru(b)
bnbpr(r)
# list sort:
#   10000 iterations in 38.20ms
#   0.0038ms per op
#   263157 ops/sec

# Suite of multiple benchmarks
s = bnbsu("String ops")
s = bnbsu(s, bnew("concat",  \() "a" + "b", 50000))
s = bnbsu(s, bnew("length",  \() len("hello"), 50000))
bnbsu(s)

Functions

Timer

bnbti is overloaded.

bnbti()

Creates a new timer with start: tim.now() and an empty laps list.

bnbti(timer)

Resets the timer's start time to now. Returns the timer.

bnbti(timer, name)

Records a named lap - stores elapsed time since last start in timer.laps and resets start. Returns the timer.

bnbti(timer) (elapsed)

Returns elapsed milliseconds since the last start without recording a lap.
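
A typical lap-timing flow looks like the sketch below. The trl.srt and jsn.prs calls are just stand-in workloads - substitute the code you actually want to time.

t = bnbti()                      # new timer
trl.srt([5,3,1,4,2])             # some work
t = bnbti(t, "sort")             # lap "sort": stores elapsed ms, resets start
jsn.prs("[1,2,3]")               # more work
t = bnbti(t, "parse")            # lap "parse"
# t.laps now holds the named lap timings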

Single benchmark

bnew(name, fn, iterations)

Creates a benchmark descriptor. Warmup is set to iterations / 10. Call bnbru to execute it.

bnbru(benchmark)

Runs the benchmark. Executes warmup iterations first (discarded), then times iterations calls. Logs "Running: name" via obs.obslg. Returns {name, iterations, total_ms, per_op_ms, ops_per_sec}.

bnbpr(result)

Prints a formatted result: name, iteration count, total time in ms, time per op, and ops/sec - all formatted to 2–4 decimal places.

Suite

bnbsu is overloaded on argument count and type.

bnbsu(name)

Creates a new suite with a name, empty benchmark list, and empty results list.

bnbsu(suite, benchmark)

Adds a benchmark to the suite. Returns the updated suite.

bnbsu(suite)

Runs all benchmarks, prints each result and a total time summary. Returns the list of results.

Comparison

bnbco(name, ilusm_fn, ref_fn, ref_name, iterations)

Runs an ilusm benchmark and a reference side by side. ref_fn is called with no arguments and should return a result object or nil. Prints a speedup ratio - e.g. "ilusm is 1.3x faster than reference" or vice versa.
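
An illustrative sketch - the reference function __host_sort here is hypothetical; substitute whatever implementation you are comparing against:

bnbco("list sort",
      \() trl.srt([5,3,1,4,2]),      # ilusm implementation
      \() __host_sort([5,3,1,4,2]),  # hypothetical host-native reference
      "host sort",
      10000)
# prints both timings plus a speedup ratio,
# e.g. "ilusm is 1.3x faster than reference"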

Microbenchmarks

bnbmi(name, operations, iterations)

Runs a list of {name, fn} operations as individual benchmarks. Sorts results by per_op_ms ascending and prints a ranked list (fastest first). Returns the unsorted results list.
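
For example, assuming {name: ..., fn: ...} map literals for the operation entries:

ops = [
  {name: "concat", fn: \() "a" + "b"},
  {name: "length", fn: \() len("hello")},
  {name: "sort",   fn: \() trl.srt([5,3,1,4,2])}
]
bnbmi("string vs sort", ops, 50000)
# prints the three operations ranked fastest-first by per-op time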

Profiling

bncof is overloaded.

bncof()

Creates a new profile object with an empty calls map and no active section.

bncof(profile, name)

Starts timing a named section. Sets profile.current to {name, start: tim.now()}.

bncof(profile) (end)

Ends the current section. Accumulates elapsed ms and call count in profile.calls[name]. Clears profile.current.

bncof(profile) (report)

Prints all section names with call count, total ms, and average ms per call.
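
A typical profiling flow - this sketch assumes the one-argument form ends the section while one is active and otherwise prints the report:

p = bncof()                  # new profile
p = bncof(p, "parse")        # start section "parse"
jsn.prs("[1,2,3]")
p = bncof(p)                 # end "parse" - accumulates ms and call count
p = bncof(p, "parse")        # start the same section again
jsn.prs("[4,5,6]")
p = bncof(p)                 # end - "parse" now has 2 calls
bncof(p)                     # report: name, calls, total ms, avg ms per call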

Memory tracking

bncms()

Returns current memory stats via host native __bench_mem_stats().

bnbme(before, after)

Returns {heap_delta, objects_delta} - the difference in heap and object count between two snapshots from bncms.
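
For example (the work between the two snapshots is just an illustrative allocation):

before = bncms()
xs = trl.srt([5,3,1,4,2])    # allocate something between snapshots
after = bncms()
d = bnbme(before, after)
# d.heap_delta    - change in heap size
# d.objects_delta - change in live object count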

Export

bnbex is overloaded.

bnbex(results) (JSON)

Exports results as a JSON string with a timestamp and benchmarks array.

bnbex(results) (HTML)

Exports results as a minimal HTML page with a table of Name, Iterations, Total (ms), Per Op (ms), Ops/sec.
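
For example - since the JSON and HTML forms are listed with the same signature, how the format is selected may vary; this sketch assumes the JSON form:

s = bnbsu("String ops")
s = bnbsu(s, bnew("concat", \() "a" + "b", 50000))
rs = bnbsu(s)                # run the suite, collect results
j  = bnbex(rs)               # JSON string with timestamp + benchmarks array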

Notes

  • This module (bnch) and the bench module cover the same functionality with slightly different function naming. Use whichever you prefer.
  • Memory tracking requires the host runtime to provide __bench_mem_stats.
  • Requires tim, txt, jsn, and obs (the latter for logging in bnbru).