# Objective

Fixes #14883

## Solution

A fairly simple update to the `EntityCommands` methods so they consume `self` and return it, rather than taking `&mut self`. A few things worth noting:

* I added `#[allow(clippy::should_implement_trait)]` to the `add` method because it causes a linting conflict with `std::ops::Add`.
* `despawn` and `log_components` now return `Self`. I'm not sure if that's exactly the desired behavior, so I'm happy to adjust if that seems wrong.

## Testing

Tested with `cargo run -p ci`. I think that should be sufficient to call things good.

## Migration Guide

The most likely migration needed is changing code from this:

```
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
    entity.insert(DepthPrepass);
}
if normal_prepass {
    entity.insert(NormalPrepass);
}
if motion_vector_prepass {
    entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
    entity.insert(DeferredPrepass);
}
```

to this:

```
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
    entity = entity.insert(DepthPrepass);
}
if normal_prepass {
    entity = entity.insert(NormalPrepass);
}
if motion_vector_prepass {
    entity = entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
    entity.insert(DeferredPrepass);
}
```

as can be seen in several of the example code updates in this PR. There will probably also be instances where mutable `EntityCommands` variables no longer need to be mutable.
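To illustrate that last point, here is a minimal sketch (the `Marker` component and the `spawn_marker` system are hypothetical, added only for illustration) of a binding that no longer needs to be `mut` once the methods take `self` by value:

```rust
use bevy::prelude::*;

// Hypothetical marker component, used only for this illustration.
#[derive(Component)]
struct Marker;

// Because `insert` now consumes the `EntityCommands` and returns `Self`,
// the `entity` binding is moved rather than mutably borrowed, so it does
// not need to be declared `mut`.
fn spawn_marker(mut commands: Commands) {
    let entity = commands.spawn_empty();
    entity.insert(Marker);
}
```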
# Bevy Benchmarks
This is a crate with a collection of benchmarks for Bevy, separate from the rest of the Bevy crates.
## Running the benchmarks
- Set up everything you need for Bevy with the setup guide.

- Move into the `benches` directory (where this README is located).

  ```sh
  bevy $ cd benches
  ```

- Run the benchmarks with cargo (this will take a while):

  ```sh
  bevy/benches $ cargo bench
  ```

If you'd like to only compile the benchmarks (without running them), you can do that like this:

```sh
bevy/benches $ cargo bench --no-run
```
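If you only want to run a subset of the benchmarks, you can pass a name filter through to the bench harness. A sketch (the filter string `ecs` is only an example, not a specific benchmark name in this crate):

```sh
bevy/benches $ cargo bench -- ecs
```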
## Criterion
Bevy's benchmarks use Criterion. If you want to learn more about using Criterion for comparing performance against a baseline or generating detailed reports, you can read the Criterion.rs documentation.
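As a sketch of the baseline workflow mentioned above (the baseline name `before` is arbitrary; see the Criterion.rs documentation for the full set of options):

```sh
# Save the current results under a named baseline...
bevy/benches $ cargo bench -- --save-baseline before
# ...then, after making changes, compare against it.
bevy/benches $ cargo bench -- --baseline before
```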