# Fair Change Detection Benchmarking (#11173)
# Objective

- #4972 introduced a benchmark to measure change detection performance.
- However, it uses `iter_batched`, which adds a lot of overhead by cloning data into each routine closure (this feels like a bug in `iter_batched`) and constructs a new query on every iteration. This overhead masks the real change detection throughput we want to measure: instead of evaluating raw change detection, the benchmark ends up dominated by data cloning and allocation costs. A reduced illustration of the pattern follows this list.
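For illustration, here is a minimal sketch of the problematic shape (a hypothetical reduction, not the benchmark's actual code): Criterion's `iter_batched` builds a fresh input for every iteration and moves it into the routine by value, so input construction and transfer sit right next to the work being timed.

```rust
use criterion::{BatchSize, Criterion};

// Hypothetical reduction of the old benchmark shape: the input is
// rebuilt by the setup closure for every iteration and then moved
// into the routine by value.
fn old_pattern(c: &mut Criterion) {
    c.bench_function("dominated_by_input_traffic", |b| {
        b.iter_batched(
            || vec![1u64; 1_000_000],        // fresh input allocated per iteration
            |data| data.iter().sum::<u64>(), // `data` is moved in by value
            BatchSize::SmallInput,
        );
    });
}
```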


## Solution

- Use `iter_batched_ref` to reduce the benchmark overhead (a sketch of the resulting pattern follows this list).
- Use a cached query to better reflect real-world usage.
- Add more benchmarks.
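As a rough sketch of the combined approach (assuming Bevy's `World::query` for the cached `QueryState` and Criterion's `iter_batched_ref`; the `Value` component, `setup_world` helper, and entity count are illustrative, not the PR's actual code):

```rust
use bevy_ecs::prelude::*;
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

#[derive(Component)]
struct Value(f32);

// Hypothetical setup: a world pre-populated with some entities.
fn setup_world() -> World {
    let mut world = World::new();
    world.spawn_batch((0..10_000).map(|i| Value(i as f32)));
    world
}

fn change_detection(c: &mut Criterion) {
    c.bench_function("change_detection", |b| {
        b.iter_batched_ref(
            || {
                let mut world = setup_world();
                // Build the QueryState once per input and reuse it,
                // instead of constructing a new query every iteration.
                let query = world.query::<&mut Value>();
                (world, query)
            },
            // `iter_batched_ref` hands the routine a `&mut` to the input,
            // so nothing is moved or cloned into the timed closure.
            |(world, query)| {
                // Writing through `Mut<Value>` marks the component as
                // changed, exercising the change detection machinery.
                for mut value in query.iter_mut(world) {
                    value.0 += 1.0;
                }
            },
            BatchSize::LargeInput,
        );
    });
}

criterion_group!(benches, change_detection);
criterion_main!(benches);
```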

---

## Changelog

# Bevy Benchmarks

This is a crate with a collection of benchmarks for Bevy, separate from the rest of the Bevy crates.

## Running the benchmarks

  1. Set up everything you need for Bevy with the setup guide.

  2. Move into the benches directory (where this README is located).

    bevy $ cd benches
    
  3. Run the benchmarks with cargo (this will take a while):

    bevy/benches $ cargo bench
    

    If you'd like to only compile the benchmarks (without running them), you can do that like this:

    bevy/benches $ cargo bench --no-run
    

## Criterion

Bevy's benchmarks use Criterion. If you want to learn more about using Criterion for comparing performance against a baseline or generating detailed reports, you can read the Criterion.rs documentation.
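For example, Criterion can save a named baseline and compare later runs against it (these are standard Criterion CLI flags, passed through cargo):

    bevy/benches $ cargo bench -- --save-baseline before
    bevy/benches $ cargo bench -- --baseline before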