Performance
ilusm ships Haskell-style: a prebuilt native runner (the stage‑0 seed) plus ilusm.ilbc and the full toolchain as .ilu sources under lib/. The bytecode compiler and stack VM are themselves Ilusm sources (lib/backend/compiler.ilu, mcde.ilu, ilusm_vm.ilu), not a second C VM in that tree. What you feel at runtime is the seed binary, its syscall coverage, and whether execution stays on the tree-walk evaluator or takes the ILBC VM path; see Execution modes.
What you actually run
From Download, an installer or minimal bundle gives you the prebuilt stage‑0 runner (ilusm) and lib/**/*.ilu. Running `ilusm …` executes Ilusm via that runner. In a full checkout (with source), ./build.sh rebuilds ilusm.ilbc from Ilusm sources (lib/backend/compile_pipeline.ilu); it does not compile the native VM from source.
For syscall semantics and the host bridge, see Runtime, Host contract, and Syscall ABI. For instruction-level detail, see Bytecode ISA.
Bytecode VM & ILBC layout
Ilusm compiles to ILBC (packed in ilusm.ilbc). The encoder (lib/backend/mcde.ilu) builds:
- Header - magic, ILU_BC_VER, and metadata (used to reject mismatched bytecode).
- Constant pool - deduplicated strings, numbers, and names referenced by index.
- Function table - name → instruction offset for calls.
- Instruction stream - stack-oriented opcodes (push, load, call, syscall, …).
The VM (lib/backend/ilusm_vm.ilu) keeps a value stack, call frames (locals + PC), and module state, and dispatches syscall to the host. Tagged values (numbers, strings, lists, objects, functions, builtins) are the language's runtime model; see Bytecode ISA for the opcode set.
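The moving parts named above (value stack, call frames, constant pool, host-dispatched syscalls) follow the standard stack-machine shape. A Python sketch of that shape, with illustrative opcode names and a stand-in host table rather than the real ILBC encoding:

```python
# Minimal stack-VM sketch: constant pool + value stack + dispatch loop.
# Opcode names are illustrative; see Bytecode ISA for the real ILBC set.
def run(consts, code):
    stack = []
    pc = 0
    while pc < len(code):
        op, arg = code[pc]
        pc += 1
        if op == "push":          # push a constant-pool entry by index
            stack.append(consts[arg])
        elif op == "add":         # pop two values, push their sum/concat
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "syscall":     # hand the top of stack to the host bridge
            stack.append(HOST[arg](stack.pop()))
        elif op == "ret":
            return stack.pop()
        else:
            raise ValueError(f"bad opcode {op!r}")

HOST = {"str_len": len}           # stand-in host syscall table

program = [("push", 0), ("push", 1), ("add", None),
           ("syscall", "str_len"), ("ret", None)]
print(run(["ilu", "sm"], program))  # → 5
```

The real VM additionally tracks call frames with their own locals and return PCs; this sketch flattens that away to show only the dispatch core.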
Execution modes (hybrid runtime)
lib/runtime/hybrid_rt.ilu chooses between:
- Tree-walk evaluator (lib/runtime/evl.ilu) - runs the AST directly; simpler, slower; used when bytecode is missing or for bootstrap-style paths.
- Bytecode VM (lib/backend/ilusm_vm.ilu) - runs ILBC; the intended primary path when ilusm.ilbc is present and up to date.
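A minimal sketch of the selection policy, assuming it reduces to a presence-and-freshness check on ilusm.ilbc (the actual logic in hybrid_rt.ilu may differ):

```python
import os

def pick_mode(bytecode_path, source_paths):
    """Prefer the ILBC VM when the bytecode exists and is newer than
    every source file; otherwise fall back to the tree-walk evaluator."""
    if not os.path.exists(bytecode_path):
        return "tree-walk"
    bc_mtime = os.path.getmtime(bytecode_path)
    if any(os.path.getmtime(p) > bc_mtime for p in source_paths):
        return "tree-walk"   # stale bytecode: rebuild with ./build.sh
    return "bytecode-vm"
```

The practical consequence is the benchmark caveat below: two installs of the same version can land on different paths if one has a missing or stale ilusm.ilbc.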
Benchmark takeaway: wall times depend on which path your driver hit. The repo's multi-language driver documents ILUSM_BENCH_CMD and the difference between `ilusm-vm run` and `./ilusm`; label the runner whenever you publish numbers (see Benchmarks).
Memory & GC
Ilusm values are heap-allocated with a reference-counted model and cycle detection, plus arena-style helpers for predictable short-lived allocation (arena / arn stdlib modules).
lib/runtime/mem_heap.ilu holds a small synthetic GC-proc state used from the VM/evaluator (mhrun, mhsts, mhtun): these are tunable counters, not a generational collector; read the source for the actual behavior. VM opcodes expose this state as __mem_gc_proc_* builtins.
The native seed still owns low-level allocation behavior for host-backed objects. For introspection hooks, see lib/stdlib/mem.ilu and syscall names like __heap_* / __dx_heap_dump in Syscall ABI when your host implements them.
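Reference counting with a backup cycle detector is the same model CPython uses, so plain Python is a convenient way to see why the cycle pass exists (this demonstrates the model, not Ilusm's collector):

```python
import gc

class Node:
    def __init__(self):
        self.other = None

gc.disable()                   # turn off the cycle detector temporarily
a, b = Node(), Node()
a.other, b.other = b, a        # build a reference cycle
del a, b                       # refcounts never reach zero on their own
collected = gc.collect()       # the cycle detector reclaims the pair
gc.enable()
print(collected >= 2)          # → True
```

Pure refcounting frees most objects promptly and predictably; the cycle pass only has to catch the leftovers, which is why the hybrid is a common design.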
JIT & native hooks (contract vs reality)
The syscall contract lists many __jit_* names (compile, execute, OSR, profiling, x86_64/arm64 backends, etc.), and lib/stdlib/jit.ilu is the Ilusm-level API that calls them. That surface exists so a conforming host can attach a real JIT.
On a minimal seed, those calls may be stubs or unimplemented until the host wires them up. There is no guarantee on ilusm.dev that your install JIT-compiles hot loops today; measure, or read your seed's syscall table.
Related: __native_* syscalls (mmap, mprotect, etc.) are part of the same “optional native tier” story.
Benchmarks
Cross-language microbench: after installing ilusm:
```
python3 release/benchmarks/run_multi_lang_bench.py 2>&1 | tee ilusm-bench-$(date -u +%Y%m%dT%H%MZ).txt
```
Three workloads (integer sum 1..1e6, iterative Fibonacci step count, triple nested loops with mod) are each timed as the median of 3 wall-clock runs per language. Set ILUSM_BENCH_CMD (or ILUSM_CMD) so the Ilusm row uses the same runner you mean to ship (e.g. `ilusm-vm run`); by default the driver prefers ilusm-vm on PATH, else ./ilusm.
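The driver's median-of-3 wall-clock discipline is worth copying into ad-hoc comparisons; a Python sketch (note this times the whole process, startup included, so keep workloads long enough to dominate it):

```python
import statistics
import subprocess
import time

def median_wall(cmd, runs=3):
    """Run cmd `runs` times and return the median wall-clock seconds,
    mirroring the driver's median-of-3 discipline."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# e.g. median_wall(["ilusm-vm", "run", "bench.ilu"])
```

Median beats mean here because a single cold-cache or scheduler-noise run cannot drag the reported number.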
In-language harness: stdlib bench (lib/stdlib/bench.ilu) provides timers, warmup loops, and suite helpers built on tim and obs logging.
Curated tables and recorded host snapshots are shown on the companion page: Benchmarks.
Test-ladder timing
To see how long each verification phase takes on your machine:
```
./scripts/run_all_tests_timed.sh
```
It emits TSV lines per domain/golden/integration/fuzz file, plus the stdlib import/target passes; useful for regression tracking alongside microbenchmarks.
Profiling
In-language (wall time)
The stdlib module pfl wraps wall-clock timing (see lib/stdlib/pfl.ilu):
```
use pfl
r = pfl.pflnm("my_block", \() prn("work"))
prn(r)
```
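pflnm's shape (label a block, time a thunk, return the result) is easy to replicate when comparing against other languages; a Python sketch of the same pattern (time_block is my name for illustration, not a pfl API):

```python
import time

def time_block(name, thunk):
    """Run thunk() once and return (name, elapsed_seconds, result),
    the same label-a-block pattern as pfl.pflnm."""
    start = time.perf_counter()
    result = thunk()
    return (name, time.perf_counter() - start, result)

name, secs, result = time_block("my_block", lambda: sum(range(1000)))
```

Returning the thunk's result alongside the timing keeps the wrapper transparent, so you can drop it around existing expressions without restructuring code.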
Heavier diagnostics (dx, CPU/heap hooks) depend on host syscalls-see lib/stdlib/dx.ilu and Syscall ABI.
OS-level
Profile the native executable the launcher actually execs (often under stage0/…), not only the shell wrapper:
- perf record / perf report
- strace -c for syscall counts
- valgrind when compatible (heavy slowdown)
In a checkout, `sh stage0/resolve_seed.sh "$ILUSM_HOME"` prints the resolved seed path.
Optimization guidelines
- Lists and text - prefer trl/txt pipelines where they match your workload.
- Host work - I/O, network, and crypto usually dominate; cut round-trips and copies.
- Compiler / encoder - the AST→bytecode pipeline and constant-pool dedup live in lib/backend/compiler.ilu and mcde.ilu. There is no separate "marketing list" of passes; read the sources for what is actually implemented.
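Constant-pool dedup, as done in mcde.ilu, is the classic intern-table pattern; a Python sketch of the idea (ConstPool is illustrative, not the encoder's actual structure):

```python
class ConstPool:
    """Intern table: each distinct constant is stored once and
    referenced by index from the instruction stream."""
    def __init__(self):
        self.values = []
        self.index = {}

    def add(self, value):
        # Key on (type, value) so 1 and 1.0 stay distinct entries
        # even though they compare and hash equal in Python.
        key = (type(value).__name__, value)
        if key not in self.index:
            self.index[key] = len(self.values)
            self.values.append(value)
        return self.index[key]

pool = ConstPool()
print(pool.add("x"), pool.add(3.14), pool.add("x"))  # → 0 1 0
```

Dedup pays off twice: the .ilbc file shrinks, and repeated constants share one heap object at load time.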
Buffered I/O: see stdlib modules such as bio and strio in lib/stdlib/.
Concurrency
The syn module exposes channels, spawn, wait, and helpers built on __sys_spawn and related syscalls (see lib/stdlib/syn.ilu). If your host does not implement those syscalls, concurrency helpers will not work.
```
use syn
t = syn.run(\() prn("async body"))
t.wai()
```
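For hosts that do wire up __sys_spawn, the spawn/wait shape above maps onto the familiar thread-plus-one-shot-channel pattern; a Python analogue (spawn and wait here are my names, with threads standing in for host tasks):

```python
import queue
import threading

def spawn(fn):
    """Run fn on a worker and return a handle whose wait() yields its result."""
    ch = queue.Queue(maxsize=1)          # one-shot channel for the result
    t = threading.Thread(target=lambda: ch.put(fn()))
    t.start()

    class Handle:
        def wait(self):
            t.join()                     # block until the worker finishes
            return ch.get()

    return Handle()

h = spawn(lambda: "async body")
print(h.wait())                          # → async body
```

The handle owns both the thread and the channel, so callers never touch the channel directly; that is the same ergonomics the syn helpers aim for.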
Multicore behavior remains the host process model’s responsibility.
Telemetry
The obs module records spans, notes, and timeline-style events (see lib/stdlib/obs.ilu). Wire schemas to your deployment; there is no fixed production JSON schema on this page.