# Objective

- #4972 introduced a benchmark to measure change detection performance.
- However, it uses `iter_batched`, which adds a lot of overhead by cloning data into each routine closure (this feels like a bug in `iter_batched` itself), and it constructs a new query on every iteration. This overhead masks the real change detection throughput we want to measure: instead of evaluating raw change detection, the benchmark ends up dominated by data cloning and allocation costs.

## Solution

- Use `iter_batched_ref` to reduce the benchmark overhead (see the sketch below).
- Use a cached query to better reflect real-world usage.
- Add more benchmarks.
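The pattern at the heart of the change looks roughly like the following minimal sketch. It is not the PR's actual benchmark code: the `Data` component, the entity count, and the benchmark name are illustrative, and exact `bevy_ecs` APIs vary slightly between versions.

```rust
use bevy_ecs::prelude::*;
use bevy_ecs::query::QueryState;
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

// Hypothetical component, used only for illustration.
#[derive(Component)]
struct Data(f32);

fn setup(entity_count: usize) -> (World, QueryState<&'static mut Data>) {
    let mut world = World::new();
    world.spawn_batch((0..entity_count).map(|i| (Data(i as f32),)));
    // Build the query once, outside the timed routine, so query
    // construction does not pollute the measurement.
    let query = world.query::<&mut Data>();
    (world, query)
}

fn change_detection(c: &mut Criterion) {
    c.bench_function("change_detection/iter_mut", |b| {
        b.iter_batched_ref(
            || setup(10_000),
            // `iter_batched_ref` hands the routine a `&mut` to the setup
            // output instead of moving a fresh value in, so no per-iteration
            // clone shows up in the timing.
            |(world, query)| {
                for mut data in query.iter_mut(world) {
                    data.0 += 1.0; // the mutation flips the change tick
                }
            },
            BatchSize::LargeInput,
        );
    });
}

criterion_group!(benches, change_detection);
criterion_main!(benches);
```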
# Bevy Benchmarks
This is a crate with a collection of benchmarks for Bevy, separate from the rest of the Bevy crates.
## Running the benchmarks
- Set up everything you need for Bevy with the setup guide.
- Move into the `benches` directory (where this README is located):

  ```sh
  bevy $ cd benches
  ```

- Run the benchmarks with cargo (this will take a while):

  ```sh
  bevy/benches $ cargo bench
  ```
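Criterion also treats a trailing argument as a name filter, so you can run only the benchmarks whose names match it (for example, assuming a benchmark group named `change_detection`):

```sh
bevy/benches $ cargo bench -- change_detection
```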
If you'd like to only compile the benchmarks (without running them), you can do that like this:
```sh
bevy/benches $ cargo bench --no-run
```
## Criterion
Bevy's benchmarks use [Criterion](https://github.com/bheisler/criterion.rs). If you want to learn more about using Criterion for comparing performance against a baseline or generating detailed reports, you can read the [Criterion.rs documentation](https://bheisler.github.io/criterion.rs/book/).
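For example, the baseline workflow mentioned above boils down to two standard Criterion flags (nothing Bevy-specific): save a named baseline from one revision, then compare a later run against it.

```sh
# Record results from the current revision as a named baseline
bevy/benches $ cargo bench -- --save-baseline main

# After switching branches or making changes, compare against it
bevy/benches $ cargo bench -- --baseline main
```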