Commit `336c23c1aa`

# Objective

The current observers have some unfortunate footguns where you can end up confused about what is actually being observed. For apps you can chain `observe` like `app.observe(..).observe(..)`, which works as you would expect, but if you try the same with `World`, the first `observe()` will return the `EntityWorldMut` for the created observer, and the second `observe()` will only observe on that observer entity. It took several hours for multiple people on Discord to figure this out, which is not a great experience.

## Solution

Rename `observe` on entities to `observe_entity`. It's slightly more verbose when you know you have an entity, but it feels right to me that observers for specific things have more specific naming, and it prevents this issue completely.

Another possible solution would be to unify `observe` on `App` and `World` to have the same kind of return type, but I'm not sure exactly what that would look like.

## Testing

Simple name change, so the only real concern is the docs.

---

## Migration Guide

The `observe()` method on entities has been renamed to `observe_entity()` to prevent confusion about what is being observed in some cases.
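To make the footgun and the rename concrete, here is a minimal sketch. It assumes Bevy's observer API as it existed around this change (`App::observe`, `World::observe`, `Trigger<E>`); the `Damaged` and `Healed` events, the closures, and the `register_observers` function are purely illustrative.

```rust
use bevy::prelude::*;

#[derive(Event)]
struct Damaged;

#[derive(Event)]
struct Healed;

fn register_observers(app: &mut App) {
    // `App::observe` returns `&mut App`, so chaining registers two
    // independent global observers, as you would expect.
    app.observe(|_: Trigger<Damaged>| println!("something was damaged"))
        .observe(|_: Trigger<Healed>| println!("something was healed"));

    let world = app.world_mut();

    // `World::observe` returns the `EntityWorldMut` of the freshly spawned
    // *observer entity*. Before this change, chaining a second `observe()`
    // here would silently attach an entity-scoped observer to that observer
    // entity instead of registering another global observer.
    world.observe(|_: Trigger<Damaged>| println!("global damage observer"));

    // Entity-scoped observation is now spelled `observe_entity`, which makes
    // the intent explicit and removes the ambiguity.
    let player = world.spawn_empty().id();
    world
        .entity_mut(player)
        .observe_entity(|_: Trigger<Damaged>| println!("the player was damaged"));
}
```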
# Bevy Benchmarks
This is a crate with a collection of benchmarks for Bevy, separate from the rest of the Bevy crates.
## Running the benchmarks
- Set up everything you need for Bevy with the setup guide.
- Move into the `benches` directory (where this README is located).

  ```sh
  bevy $ cd benches
  ```

- Run the benchmarks with cargo (this will take a while; a filter example for running only a subset follows this list):

  ```sh
  bevy/benches $ cargo bench
  ```
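If you only want to run a subset of the benchmarks, Criterion bench binaries accept a name filter after `--`. A small sketch, assuming a hypothetical benchmark name containing `ecs`:

```sh
# Only run benchmarks whose names contain "ecs" (illustrative filter)
bevy/benches $ cargo bench -- ecs
```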
If you'd like to only compile the benchmarks (without running them), you can do that like this:

```sh
bevy/benches $ cargo bench --no-run
```
## Criterion
Bevy's benchmarks use Criterion. If you want to learn more about using Criterion for comparing performance against a baseline or generating detailed reports, you can read the Criterion.rs documentation.
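For example, a minimal baseline workflow using Criterion's standard CLI flags (the baseline name `before` is arbitrary) might look like this:

```sh
# Record a baseline with the current code
bevy/benches $ cargo bench -- --save-baseline before

# ...make your changes, then compare against the saved baseline
bevy/benches $ cargo bench -- --baseline before
```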