Development
Running Tests
# Build in development mode
python -m venv venv
source venv/bin/activate
maturin develop
# Run Python tests
python -m pytest tests/python/
# Or via the CI helper script
./contrib/ci/local-ci.sh python-test
Running Benchmarks
# Build release extension first (verify .so is ~1.4 MB, not ~20 MB debug)
maturin develop --release
# Certificate parsing and field-access benchmark (synta vs cryptography)
python python/bench_certificate.py
# Include per-field getter breakdown
python python/bench_certificate.py --per-field
# PKCS#7 / PKCS#12 certificate extraction benchmark (synta vs cryptography)
python python/bench_pkcs.py
# Save Criterion-compatible JSON alongside Rust benchmarks
# (default output: target/criterion/)
python python/bench_certificate.py --save-criterion
python python/bench_pkcs.py --save-criterion
# Custom output directory
python python/bench_certificate.py --save-criterion path/to/criterion
# Compare with a previous run using criterion-compare
# (first run writes new/, second run rotates new/→base/ then writes new/)
python python/bench_certificate.py --save-criterion
criterion-compare target/criterion
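The new/→base/ rotation above can be sketched in a few lines of stdlib Python. This is an illustrative sketch of the behavior described in the comment, not the actual code in python/criterion_compat.py; the helper name `rotate_new_to_base` is hypothetical.

```python
# Hedged sketch: before a run writes a fresh new/, the previous new/
# becomes base/ so criterion-compare has something to diff against.
import shutil
from pathlib import Path

def rotate_new_to_base(bench_dir: Path) -> None:
    new_dir, base_dir = bench_dir / "new", bench_dir / "base"
    if new_dir.exists():
        if base_dir.exists():
            shutil.rmtree(base_dir)   # drop the older baseline
        new_dir.rename(base_dir)      # previous run becomes the baseline
    new_dir.mkdir(parents=True)       # fresh new/ for the current run
```

On the first run only new/ exists; on the second run the earlier results move to base/ and a new new/ is written, which is the layout criterion-compare expects.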
The benchmark certificate files are cloned by the Rust Criterion benchmarks on
their first run. If tests/vectors/ is missing, run:
cargo bench -p synta-bench --bench bindings --features bench-bindings --no-run
In parse-only mode, the benchmark prints three Python entries per certificate in
each of the binding_comparison and binding_post_quantum groups:
binding_comparison/synta/cert_00_… # from_der: shallow scan only (~0.15 µs)
binding_comparison/synta_full/cert_00_… # full_from_der: complete RFC 5280 decode
binding_comparison/cryptography_x509/… # envelope validation
The Rust benchmark (cargo bench --bench bindings) adds matching entries:
binding_comparison/rust_shallow/… # validate_envelope: same 4-op scan as from_der
binding_comparison/rust_typed/… # full Certificate::decode — apples-to-apples with synta_full
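To give a feel for why the shallow entries are so much cheaper than the typed decode, here is an illustrative check of the kind a shallow envelope scan starts with. This is not synta's implementation; it only uses the DER fact that a certificate is an outer SEQUENCE (tag byte 0x30) with a definite length.

```python
# Illustrative only -- not synta's actual code. A shallow scan can
# reject obviously non-DER input by inspecting the outer envelope
# (SEQUENCE tag 0x30 plus a length header) without decoding any of
# the RFC 5280 fields that the full decode must walk.
def has_der_sequence_envelope(data: bytes) -> bool:
    return len(data) >= 2 and data[0] == 0x30

# SEQUENCE with a long-form 2-byte length (0x0100 content bytes)
print(has_der_sequence_envelope(bytes([0x30, 0x82, 0x01, 0x00])))  # True
print(has_der_sequence_envelope(b"not a certificate"))             # False
```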
criterion_compat.py
python/criterion_compat.py is a standalone, import-ready module that provides
the Criterion-compatible sampling and JSON-writing primitives. Import it
directly from other benchmark or test scripts:
from pathlib import Path

from criterion_compat import measure, save_criterion_files

fn = lambda: None  # any zero-argument callable to benchmark
avg_us, iters, times_ns = measure(fn, warmup_s=3.0, measure_s=5.0)
print(f"avg: {avg_us:.2f} µs ({sum(int(n) for n in iters)} iterations)")
save_criterion_files(
    Path("target/criterion"),
    group_id="my_group",
    function_id="my_fn",
    value_str="param_name",
    iters=iters,
    times_ns=times_ns,
)
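If you want the same `(avg_us, iters, times_ns)` shape without importing criterion_compat, the contract can be approximated with only the stdlib. This is a simplified sketch under the assumption that one sample equals one call (so every entry of iters is 1); the real measure() may batch iterations per sample.

```python
# Hedged stdlib-only approximation of the measure() return contract:
# warm up for warmup_s, then record one timing sample per call for
# measure_s, returning (avg_us, iters, times_ns) like measure().
import time

def measure_sketch(fn, warmup_s=3.0, measure_s=5.0):
    deadline = time.perf_counter() + warmup_s
    while time.perf_counter() < deadline:   # warmup: run, discard timings
        fn()
    iters, times_ns = [], []
    deadline = time.perf_counter() + measure_s
    while time.perf_counter() < deadline:   # measurement loop
        t0 = time.perf_counter_ns()
        fn()
        times_ns.append(time.perf_counter_ns() - t0)
        iters.append(1)                     # one iteration per sample
    avg_us = sum(times_ns) / len(times_ns) / 1_000
    return avg_us, iters, times_ns
```

The per-sample `iters`/`times_ns` pairing is what save_criterion_files consumes, so output from this sketch can be fed to it the same way.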
See also Performance for benchmark result tables and Project Structure for the source tree layout.