Commit graph

43 commits

andriyDev
b1e4512648
Fix the picking backend features not actually disabling the features (#16470)
# Objective

- Fixes #16469.

## Solution

- Make the picking backend features not enabled by default in each
sub-crate.
- Make features in `bevy_internal` set the backend features
- Make the root `bevy` crate set the features by default.

## Testing

- The mesh and sprite picking examples still work correctly.
2024-11-22 18:14:16 +00:00
Joona Aalto
0e30b68b20
Add mesh picking backend and MeshRayCast system parameter (#15800)
# Objective

Closes #15545.

`bevy_picking` supports UI and sprite picking, but not mesh picking.
Being able to pick meshes would be extremely useful for various games,
tools, and our own examples, as well as scene editors and inspectors.
So, we need a mesh picking backend!

Luckily,
[`bevy_mod_picking`](https://github.com/aevyrie/bevy_mod_picking) (which
`bevy_picking` is based on) by @aevyrie already has a [backend for
it](74f0c3c0fb/backends/bevy_picking_raycast/src/lib.rs)
using [`bevy_mod_raycast`](https://github.com/aevyrie/bevy_mod_raycast).
As a side product of adding mesh picking, we also get support for
performing ray casts on meshes!

## Solution

Upstream a large chunk of the immediate-mode ray casting functionality
from `bevy_mod_raycast`, and add a mesh picking backend based on
`bevy_mod_picking`. Huge thanks to @aevyrie who did all the hard work on
these incredible crates!

All meshes are pickable by default. Picking can be disabled for
individual entities by adding `PickingBehavior::IGNORE`, like normal.
Or, if you want mesh picking to be entirely opt-in, you can set
`MeshPickingBackendSettings::require_markers` to `true` and add a
`RayCastPickable` component to the desired camera and target entities.
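A minimal sketch of that opt-in setup (treating `MeshPickingBackendSettings` as a resource is an assumption; the system below is illustrative, not taken from this PR):

```rust
fn enable_opt_in_picking(
    mut commands: Commands,
    mut settings: ResMut<MeshPickingBackendSettings>,
    cameras: Query<Entity, With<Camera>>,
) {
    // Make mesh picking entirely opt-in.
    settings.require_markers = true;

    // Mark the camera(s); target entities that should be pickable get the
    // same `RayCastPickable` marker.
    for camera in &cameras {
        commands.entity(camera).insert(RayCastPickable);
    }
}
```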

You can also use the new `MeshRayCast` system parameter to cast rays
into the world manually:

```rust
fn ray_cast_system(mut ray_cast: MeshRayCast, foo_query: Query<(), With<Foo>>) {
    let ray = Ray3d::new(Vec3::ZERO, Dir3::X);

    // Only ray cast against entities with the `Foo` component.
    let filter = |entity| foo_query.contains(entity);

    // Never early-exit. Note that you can change behavior per-entity.
    let early_exit_test = |_entity| false;

    // Ignore the visibility of entities. This allows ray casting hidden entities.
    let visibility = RayCastVisibility::Any;

    let settings = RayCastSettings::default()
        .with_filter(&filter)
        .with_early_exit_test(&early_exit_test)
        .with_visibility(visibility);

    // Cast the ray with the settings, returning a list of intersections.
    let hits = ray_cast.cast_ray(ray, &settings);
}
```

This is largely a direct port, but I did make several changes to match
our APIs better, remove things we don't need or that I think are
unnecessary, and do some general improvements to code quality and
documentation.

### Changes Relative to `bevy_mod_raycast` and `bevy_mod_picking`

- Every `Raycast` and "raycast" has been renamed to `RayCast` and "ray
cast" (similar reasoning as the "Naming" section in #15724)
- `Raycast` system param has been renamed to `MeshRayCast` to avoid
naming conflicts and to be explicit that it is not for colliders
- `RaycastBackend` has been renamed to `MeshPickingBackend`
- `RayCastVisibility` variants are now `Any`, `Visible`, and
`VisibleInView` instead of `Ignore`, `MustBeVisible`, and
`MustBeVisibleAndInView`
- `NoBackfaceCulling` has been renamed to `RayCastBackfaces`, to avoid
implying that it affects the rendering of backfaces for meshes (it
doesn't)
- `SimplifiedMesh` and `RayCastBackfaces` live near other ray casting
API types, not in their own 10 LoC module
- All intersection logic and types are in the same `intersections`
module, not split across several modules
- Some intersection types have been renamed to be clearer and more
consistent
	- `IntersectionData` -> `RayMeshHit` 
	- `RayHit` -> `RayTriangleHit`
- General documentation and code quality improvements

### Removed / Not Ported

- Removed unused ray helpers and types, like `PrimitiveIntersection`
- Removed getters on intersection types, and made their properties
public
- There is no `2d` feature, and `Raycast::mesh_query` and
`Raycast::mesh2d_query` have been merged into `MeshRayCast::mesh_query`,
which handles both 2D and 3D
- I assume this existed previously because `Mesh2dHandle` used to be in
`bevy_sprite`. Now both the 2D and 3D mesh types are in `bevy_render`.
- There is no `debug` feature or ray debug rendering
- There is no deferred API (`RaycastSource`)
- There is no `CursorRayPlugin` (the picking backend handles this)

### Note for Reviewers

In case it's helpful, the [first
commit](281638ef10)
here is essentially a one-to-one port. The rest of the commits are
primarily refactoring and cleaning things up in the ways listed earlier,
as well as changes to the module structure.

It may also be useful to compare the original [picking
backend](74f0c3c0fb/backends/bevy_picking_raycast/src/lib.rs)
and [`bevy_mod_raycast`](https://github.com/aevyrie/bevy_mod_raycast) to
this PR. Feel free to mention if there are any changes that I should
revert or something I should not include in this PR.

## Testing

I tested mesh picking and relevant components in some examples, for both
2D and 3D meshes, and added a new `mesh_picking` example. I also
~~stole~~ ported over the [ray-mesh intersection
benchmark](dbc5ef32fe/benches/ray_mesh_intersection.rs)
from `bevy_mod_raycast`.

---

## Showcase

Below is a version of the `2d_shapes` example modified to demonstrate 2D
mesh picking. This is not included in this PR.


https://github.com/user-attachments/assets/7742528c-8630-4c00-bacd-81576ac432bf

And below is the new `mesh_picking` example:


https://github.com/user-attachments/assets/b65c7a5a-fa3a-4c2d-8bbd-e7a2c772986e

There is also a really cool new `mesh_ray_cast` example ported over from
`bevy_mod_raycast`:


https://github.com/user-attachments/assets/3c5eb6c0-bd94-4fb0-bec6-8a85668a06c9

---------

Co-authored-by: Aevyrie <aevyrie@gmail.com>
Co-authored-by: Trent <2771466+tbillington@users.noreply.github.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
2024-10-13 17:24:19 +00:00
targrub
de3c70a8d3
Update `glam` to 0.29, `encase` to 0.10. (#15249)
# Objective

Updating ``glam`` to 0.29, ``encase`` to 0.10.

## Solution

Update the necessary ``Cargo.toml`` files.

## Testing

Ran ``cargo run -p ci`` on Windows; no issues came up.

---------

Co-authored-by: aecsocket <aecsocket@tutanota.com>
2024-09-23 19:44:02 +00:00
Mike
3d460e98ec
Fix CI bench compile check (#14728)
# Objective

- Fixes #14723 

## Solution

- add the manifest path to the cargo command

## Testing

- ran `cargo run -p ci -- bench-check` locally
2024-08-14 13:23:00 +00:00
Gino Valente
91fa4bb649
bevy_reflect: Function reflection benchmarks (#14647)
# Objective

It would be good to have benchmarks handy for function reflection as it
continues to be worked on.

## Solution

Add some basic benchmarks for function reflection.

## Testing

To test locally, run the following in the `benches` directory:

```
cargo bench --bench reflect_function
```

## Results

Here are a couple of the results (M1 Max MacBook Pro):

<img width="936" alt="Results of benching calling functions vs closures
via reflection. Closures average about 40ns, while functions average
about 55ns"
src="https://github.com/user-attachments/assets/b9a6c585-5fbe-43db-9a7b-f57dbd3815e3">
<img width="936" alt="Results of benching converting functions vs
closures into their dynamic representations. Closures average about
34ns, while functions average about 37ns"
src="https://github.com/user-attachments/assets/4614560a-7192-4c1e-9ade-7bc5a4ca68e3">

Currently, it seems `DynamicClosure` is just a bit more performant. This
is likely due to the fact that `DynamicFunction` stores its function
object in an `Arc` instead of a `Box` so that it can be `Send + Sync`
(and also `Clone`).

We'll likely need to do the same for `DynamicClosure` so I suspect these
results to change in the near future.
2024-08-11 03:02:06 +00:00
Robert Walter
70a18d26e2
Glam 0.28 update - adopted (#14613)
Basically it's https://github.com/bevyengine/bevy/pull/13792 with the
bumped versions of `encase` and `hexasphere`.

---------

Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-08-06 01:28:00 +00:00
Miles Silberling-Cook
ed2b8e0f35
Minimal Bubbling Observers (#13991)
# Objective

Add basic bubbling to observers, modeled off `bevy_eventlistener`.

## Solution

- Introduce a new `Traversal` trait for components which point to other
entities.
- Provide a default `TraverseNone: Traversal` component which cannot be
constructed.
- Implement `Traversal` for `Parent`.
- The `Event` trait now has an associated `Traversal` which defaults to
`TraverseNone`.
- Added a field `bubbling: &mut bool` to `Trigger` which can be used to
instruct the runner to bubble the event to the entity specified by the
event's traversal type.
- Added an associated constant `SHOULD_BUBBLE` to `Event` which
configures the default bubbling state.
- Added logic to wire this all up correctly.

Introducing the new associated information directly on `Event` (instead
of a new `BubblingEvent` trait) lets us dispatch both bubbling and
non-bubbling events through the same api.

## Testing

I have added several unit tests to cover the common bugs I identified
during development. Running the unit tests should be enough to validate
correctness. The changes effect unsafe portions of the code, but should
not change any of the safety assertions.

## Changelog

Observers can now bubble up the entity hierarchy! To create a bubbling
event, change your `Derive(Event)` to something like the following:

```rust
#[derive(Component)]
struct MyEvent;

impl Event for MyEvent {
    type Traverse = Parent; // This event will propagate up from child to parent.
    const AUTO_PROPAGATE: bool = true; // This event will propagate by default.
}
```

You can dispatch a bubbling event using the normal
`world.trigger_targets(MyEvent, entity)`.

Halting an event mid-bubble can be done using
`trigger.propagate(false)`. Events with `AUTO_PROPAGATE = false` will
not propagate by default, but you can enable it using
`trigger.propagate(true)`.
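For instance, here is a hedged sketch of an observer that stops bubbling at entities carrying a marker component (`Trigger::entity` is assumed to return the currently targeted entity; exact method names may differ between versions):

```rust
#[derive(Component)]
struct BlocksBubbling;

// Halt propagation once the event reaches an entity marked as a blocker.
// Register it with e.g. `app.observe(stop_at_blockers)` (registration API may vary).
fn stop_at_blockers(mut trigger: Trigger<MyEvent>, blockers: Query<(), With<BlocksBubbling>>) {
    if blockers.contains(trigger.entity()) {
        trigger.propagate(false);
    }
}
```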

If there are multiple observers attached to a target, they will all be
triggered by bubbling. They all share a bubbling state, which can be
accessed mutably using `trigger.propagation_mut()` (`trigger.propagate`
is just sugar for this).

You can choose to implement `Traversal` for your own types, if you want
to bubble along a different structure than provided by `bevy_hierarchy`.
Implementers must be careful never to produce loops, because this will
cause bevy to hang.

## Migration Guide
+ Manual implementations of `Event` should add associated type `Traverse
= TraverseNone` and associated constant `AUTO_PROPAGATE = false`;
+ `Trigger::new` has new field `propagation: &mut Propagation` which
provides the bubbling state.
+ `ObserverRunner` now takes the same `&mut Propagation` as a final
parameter.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2024-07-15 13:39:41 +00:00
andristarr
bb76a2c69c
multi_threaded feature rename (#12997)
# Objective

Fixes #12966

## Solution

Renaming the `multi-threaded` feature to `multi_threaded` to match snake case

## Migration Guide

The Bevy feature `multi-threaded` should now be referred to as `multi_threaded`.
2024-05-06 20:49:32 +00:00
Martín Maita
32cd0c5dc1
Update glam version requirement from 0.25 to 0.27 (#12757)
# Objective

- Update glam version requirement to latest version.

## Solution

- Updated `glam` version requirement from 0.25 to 0.27.
- Updated `encase` and `encase_derive_impl` version requirement from 0.7
to 0.8.
- Updated `hexasphere` version requirement from 10.0 to 12.0.
- Breaking changes from the `glam` changelog:
  - [0.26.0] Minimum Supported Rust Version bumped to 1.68.2 for `impl From<bool> for {f32, f64}` support.
  - [0.27.0] Changed the implementation of the vector `fract` method to match the Rust implementation instead of the GLSL implementation, that is `self - self.trunc()` instead of `self - self.floor()`.

---

## Migration Guide

- When using `glam` exports, keep in mind that the vector `fract()` method
now matches the Rust implementation (that is, `self - self.trunc()` instead
of `self - self.floor()`). If you want the GLSL implementation, use
`fract_gl()` instead, as illustrated below.
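To illustrate the difference with plain `f32` math (standard library only; `fract_gl()` itself is not shown):

```rust
fn main() {
    let x = -0.25_f32;

    // Rust-style fract, the new default: self - self.trunc()
    assert_eq!(x - x.trunc(), -0.25);

    // GLSL-style fract, now available as fract_gl(): self - self.floor()
    assert_eq!(x - x.floor(), 0.75);
}
```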

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-05-02 18:42:34 +00:00
BD103
9ee02e87d3
Remove version field for non-publish crates and update descriptions (#13100)
# Objective

- The [`version`] field in `Cargo.toml` is optional for crates not
published on <https://crates.io>.
- We have several `publish = false` tools in this repository that still
have a version field, even when it's not useful.

[`version`]:
https://doc.rust-lang.org/cargo/reference/manifest.html#the-version-field

## Solution

- Remove the [`version`] field for all crates where `publish = false`.
- Update the description on a few crates and remove extra newlines as
well.
2024-04-26 11:55:03 +00:00
W. Black
622f9a35b6
Torus benchmark (#12781)
# Objective

- Primitive meshing is suboptimal
- Improve primitive meshing

## Solution

- Add primitive meshing benchmark
- Allows measuring future improvements

---

First of a few PRs to refactor and improve primitive meshing.
2024-03-29 20:30:26 +00:00
Doonv
1c67e020f7
Move EntityHash related types into bevy_ecs (#11498)
# Objective

Reduce the size of `bevy_utils`
(https://github.com/bevyengine/bevy/issues/11478)

## Solution

Move `EntityHash` related types into `bevy_ecs`. This also allows us
access to `Entity`, which means we no longer need `EntityHashMap`'s
first generic argument.

---

## Changelog

- Moved `bevy::utils::{EntityHash, EntityHasher, EntityHashMap,
EntityHashSet}` into `bevy::ecs::entity::hash` .
- Removed `EntityHashMap`'s first generic argument. It is now hardcoded
to always be `Entity`.

## Migration Guide

- Uses of `bevy::utils::{EntityHash, EntityHasher, EntityHashMap,
EntityHashSet}` now have to be imported from `bevy::ecs::entity::hash`.
- Uses of `EntityHashMap` no longer have to specify the first generic
parameter. It is now hardcoded to always be `Entity` (see the sketch below).
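A minimal sketch of the change for call sites (module path as stated in the migration guide above; it may differ in later versions):

```rust
// Before: use bevy::utils::EntityHashMap; (with an explicit `Entity` key parameter)
use bevy::ecs::entity::hash::EntityHashMap;

// `EntityHashMap` now takes a single generic parameter: the value type.
// The key type is always `Entity`.
fn count_entries(map: &EntityHashMap<u32>) -> usize {
    map.len()
}
```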
2024-02-12 15:02:24 +00:00
irate
ec14e946b8
Update glam, encase and hexasphere (#11082)
Update to `glam` 0.25, `encase` 0.7 and `hexasphere` to 10.0

## Changelog
Added the `FloatExt` trait to the `bevy_math` prelude which adds `lerp`,
`inverse_lerp` and `remap` methods to the `f32` and `f64` types.
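A minimal sketch of the new `lerp` method (imported via the `bevy_math` prelude as stated above; the exact signature is assumed):

```rust
use bevy_math::prelude::*;

fn main() {
    // Interpolate from 10.0 to 20.0 by a factor of 0.5.
    assert_eq!(10.0_f32.lerp(20.0, 0.5), 15.0);
}
```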
2024-01-08 22:58:45 +00:00
scottmcm
713b1d8fa4
Optimize Entity::eq (#10519)
(This is my first PR here, so I've probably missed some things. Please
let me know what else I should do to help you as a reviewer!)

# Objective

Due to https://github.com/rust-lang/rust/issues/117800, the `derive`'d
`PartialEq::eq` on `Entity` isn't as good as it could be. Since that's
used in hashtable lookup, let's improve it.

## Solution

The derived `PartialEq::eq` short-circuits if the generation doesn't
match. However, having a branch there is sub-optimal, especially on
64-bit systems like x64 that could just load the whole `Entity` in one
load anyway.

Due to complications around `poison` in LLVM and the exact details of
what unsafe code is allowed to do with reference in Rust
(https://github.com/rust-lang/unsafe-code-guidelines/issues/346), LLVM
isn't allowed to completely remove the short-circuiting. `&Entity` is
marked `dereferenceable(8)` so LLVM knows it's allowed to *load* all 8
bytes -- and does so -- but it has to assume that the `index` might be
undef/poison if the `generation` doesn't match, and thus while it finds
a way to do it without needing a branch, it has to do something slightly
more complicated than optimal to combine the results. (LLVM is allowed
to change non-short-circuiting code to use branches, but not the other
way around.)

Here's a link showing the codegen today:
<https://rust.godbolt.org/z/9WzjxrY7c>
```rust
#[no_mangle]
pub fn demo_eq_ref(a: &Entity, b: &Entity) -> bool {
    a == b
}
```
ends up generating the following assembly:
```asm
demo_eq_ref:
        movq    xmm0, qword ptr [rdi]
        movq    xmm1, qword ptr [rsi]
        pcmpeqd xmm1, xmm0
        pshufd  xmm0, xmm1, 80
        movmskpd        eax, xmm0
        cmp     eax, 3
        sete    al
        ret
```
(It's usually not this bad in real uses after inlining and LTO, but it
makes a strong demo.)

This PR manually implements `PartialEq::eq` *without* short-circuiting,
and because that tells LLVM that neither the generations nor the index
can be poison, it doesn't need to be so careful and can generate the
"just compare the two 64-bit values" code you'd have probably already
expected:
```asm
demo_eq_ref:
        mov     rax, qword ptr [rsi]
        cmp     qword ptr [rdi], rax
        sete    al
        ret
```

Since this doesn't change the representation of `Entity`, if it's
instead passed by *value*, then each `Entity` is two `u32` registers,
and the old and the new code do exactly the same thing. (Other
approaches, like changing `Entity` to be `[u32; 2]` or `u64`, affect
this case.)
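For reference, a non-short-circuiting comparison can be written roughly like this (a sketch only; the stand-in struct below simplifies the real `Entity` layout):

```rust
// Illustrative stand-in for Entity; the real type's fields and niches differ.
#[derive(Clone, Copy)]
struct Entity {
    generation: u32,
    index: u32,
}

impl PartialEq for Entity {
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        // A single `&` (not `&&`) avoids the short-circuiting branch, so both
        // fields are always read and LLVM can emit one 64-bit comparison.
        (self.generation == other.generation) & (self.index == other.index)
    }
}
```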

This should hopefully merge easily with changes like
https://github.com/bevyengine/bevy/pull/9907 that also want to change
`Entity`.

## Benchmarks

I'm not super-confident that I got my machine fully consistent for
benchmarking, but whether I run the old or the new one first I get
reasonably consistent results.

Here's a fairly typical example of the benchmarks I added in this PR:

![image](https://github.com/bevyengine/bevy/assets/18526288/24226308-4616-4082-b0ff-88fc06285ef1)

Building the sets seems to be basically the same. It's usually reported
as noise, but sometimes I see a few percent slower or faster.

But lookup hits in particular -- since a hit checks that the key is
equal -- consistently shows around 10% improvement.

`cargo run --example many_cubes --features bevy/trace_tracy --release --
--benchmark` showed as slightly faster with this change, though if I had
to bet I'd probably say it's more noise than meaningful (but at least
it's not worse either):

![image](https://github.com/bevyengine/bevy/assets/18526288/58bb8c96-9c45-487f-a5ab-544bbfe9fba0)

This is my first PR here -- and my first time running Tracy -- so please
let me know what else I should run, or run things on your own more
reliable machines to double-check.

---

## Changelog

(probably not worth including)

Changed: micro-optimized `Entity::eq` to help LLVM slightly.

## Migration Guide

(I really hope nobody was using this on uninitialized entities where
sufficiently tortured `unsafe` could technically notice that this
has changed.)
2023-11-14 02:06:21 +00:00
Nicola Papale
ee3cc8ca86
Fix erroneous glam version (#9653)
# Objective

- Fix compilation issue with wrongly specified glam version
- Bevy uses `Vec2::INFINITY` but depends on `glam` `0.24` (equivalent to `0.24.0`),
yet `Vec2::INFINITY` was only introduced in version `0.24.1`

Context:
https://discord.com/channels/691052431525675048/692572690833473578/1146586570787397794

## Solution

- Bump glam version.
2023-08-31 12:55:17 +00:00
Nicola Papale
7b0809b532
Add reflect path parsing benchmark (#9364)
# Objective

We want to measure performance on path reflection parsing.

## Solution

Benchmark path-based reflection:
- Add a benchmark for `ParsedPath::parse`

It's fairly noisy, this is why I added the 3% threshold.

Ideally we would fix the noisiness though. Supposedly I'm seeding the
RNG correctly, so there shouldn't be much observable variance. Maybe
someone can help spot the issue.
2023-08-25 14:30:26 +00:00
Mike
ac8f36743e
enable multithreading on benches (#9388)
# Objective

- Enable the new multithreading feature on benches
2023-08-11 22:08:36 +00:00
François
0736195a1e
update syn, encase, glam and hexasphere (#8573)
# Objective

- Fixes #8282 
- Update `syn` to 2.0, `encase` to 0.6, `glam` to 0.24 and `hexasphere`
to 9.0


Blocked ~~on https://github.com/teoxoy/encase/pull/42~~ and ~~on
https://github.com/OptimisticPeach/hexasphere/pull/17~~

---------

Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: JoJoJet <21144246+JoJoJet@users.noreply.github.com>
2023-05-16 01:24:17 +00:00
Edgar Geier
cbbf8ac575 Update glam to 0.23 (#7883)
# Objective

- Update `glam` to the latest version.

## Solution

- Update `glam` to version `0.23`.

Since the breaking change in `glam` only affects the `scalar-math` feature, this should cause no issues.
2023-03-04 11:42:27 +00:00
Aevyrie
2a598d3e5a Add Beziers to bevy_math (#7653)
# Objective

- Adds foundational math for Bezier curves, useful for UI/2D/3D animation and smooth paths.

https://user-images.githubusercontent.com/2632925/218883143-e138f994-1795-40da-8c59-21d779666991.mp4

## Solution

- Adds the generic `Bezier` type, and a `Point` trait. The `Point` trait allows us to use control points of any dimension, as long as they support vector math. I've implemented it for `f32`(1D), `Vec2`(2D), and `Vec3`/`Vec3A`(3D).
- Adds `CubicBezierEasing` on top of `Bezier` with the addition of an implementation of cubic Bezier easing, which is a foundational tool for UI animation.
  - This involves solving for $t$ in the parametric Bezier function $B(t)$ using the Newton-Raphson method to find a value with error $\leq$ 1e-7, capped at 8 iterations (see the formulas after this list).
- Added type aliases for common Bezier curves: `CubicBezier2d`, `CubicBezier3d`, `QuadraticBezier2d`, and `QuadraticBezier3d`. These types use `Vec3A` to represent control points, as this was found to have an 80-90% speedup over using `Vec3`.
- Benchmarking shows quadratic/cubic Bezier evaluations $B(t)$ take \~1.8/2.4ns respectively. Easing, which requires an iterative solve takes \~50ns for cubic Beziers. 
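For reference, the cubic Bezier form and the Newton-Raphson step used for easing are the standard ones (written out here for convenience, not copied from this PR's code):

$$
B(t) = (1-t)^3 P_0 + 3(1-t)^2 t\,P_1 + 3(1-t)t^2\,P_2 + t^3 P_3, \qquad t \in [0, 1]
$$

Easing solves $B_x(t) = x$ for $t$ and then evaluates $B_y(t)$, iterating

$$
t_{n+1} = t_n - \frac{B_x(t_n) - x}{B_x'(t_n)}
$$

until the error is $\leq$ 1e-7 or 8 iterations have been performed.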

---

## Changelog

- Added `CubicBezier2d`, `CubicBezier3d`, `QuadraticBezier2d`, and `QuadraticBezier3d` types with methods for sampling position, velocity, and acceleration. The generic `Bezier` type is also available, and generic over any degree of Bezier curve.
- Added `CubicBezierEasing`, with additional methods to allow for smooth easing animations.
2023-02-20 18:34:52 +00:00
François
0aab699a84 Update glam 0.22, hexasphere 8.0, encase 0.4 (#6427)
# Objective

- Update glam to 0.22, hexasphere to 8.0, encase to 0.4

## Solution

- Update glam to 0.22, hexasphere to 8.0, encase to 0.4
- ~~waiting on https://github.com/teoxoy/encase/pull/17 and https://github.com/OptimisticPeach/hexasphere/pull/13~~
2022-11-07 19:44:13 +00:00
Thierry Berger
9dd019bf86 Change Detection Benchmarks (#4972)
# Objective

Fixes #4883.

## Solution

- Add benchmarks for Changed and Added

Disclaimer: I'm not that familiar with benchmarking.
2022-11-04 17:53:54 +00:00
James Liu
ec8c8fbc8a Remove unnecesary branches/panics from Query accesses (#6461)
# Objective
Supercedes #6452. Upon inspection of the [generated assembly](https://gist.github.com/james7132/c2740c6941b80d7912f1e8888e223cbb#file-original-s) of a [simple Bevy binary](https://gist.github.com/james7132/c2740c6941b80d7912f1e8888e223cbb#file-source-rs) compiled with `cargo rustc --release -- --emit asm`, it's apparent that there are multiple unnecessary branches in the generated assembly:

```assembly
.LBB5_5:
	cmpq	%r10, %r11
	je	.LBB5_15
	movq	(%r11), %rcx
	movq	328(%r15), %rdx
	cmpq	%rdx, %rcx
	jae	.LBB5_14
	movq	312(%r15), %rdi
	leaq	(%rcx,%rcx,2), %rcx
	shlq	$5, %rcx
	movq	336(%r12), %rdx
	movq	64(%rdi,%rcx), %rax
	cmpq	%rdx, %rax
	jbe	.LBB5_4
	leaq	(%rdi,%rcx), %rsi
	movq	48(%rsi), %rbp
	shlq	$4, %rdx
	cmpq	$0, (%rbp,%rdx)
	je	.LBB5_4
	movq	344(%r12), %rbx
	cmpq	%rbx, %rax
	jbe	.LBB5_4
	shlq	$4, %rbx
	cmpq	$0, (%rbp,%rbx)
	je	.LBB5_4
	addq	$8, %r11
	movq	88(%rdi,%rcx), %rcx
	testq	%rcx, %rcx
	je	.LBB5_5
	movq	(%rsi), %rax
	movq	8(%rbp,%rdx), %rdx
	leaq	(%rdx,%rdx,4), %rdi
	shlq	$4, %rdi
	movq	32(%rax,%rdi), %rdx
	movq	56(%rax,%rdi), %r8
	movq	8(%rbp,%rbx), %rbp
	leaq	(%rbp,%rbp,4), %rbp
	shlq	$4, %rbp
	movq	32(%rax,%rbp), %r9
	xorl	%ebp, %ebp
	jmp	.LBB5_13
	.p2align	4, 0x90
```

Almost every one of the instructions starting with `j` is a potential branch, which can significantly slow down accesses. Of these, two labels are both common and never used:

```asm
.LBB5_14:
	leaq	__unnamed_2(%rip), %r8
	callq	_ZN4core9panicking18panic_bounds_check17h70367088e72af65aE
	ud2
.LBB5_4:
	callq	_ZN8bevy_ecs5query25debug_checked_unreachable17h0855ff520ceaea77E
	ud2
	.seh_endproc
```

These correspond to subprocedure calls for panicking due to out-of-bounds indexing of `Tables` and for `debug_checked_unreachable`, both of which should be inlined and optimized out, but are not.

## Solution
Make `debug_checked_unreachable` a macro to forcibly inline either `unreachable!()` in debug builds, or `std::hint::unreachable_unchecked()` in release mode. Replace the `Tables` and `Archetype` index accesses with `get(id).unwrap_or_else(|| debug_checked_unreachable!())` to assume that the table or archetype provided exists.

This has no external breaking change of any kind.
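A sketch of what such a macro can look like, assuming it simply switches on debug assertions (the real macro in this PR may use `#[cfg]` attributes instead of `cfg!`):

```rust
macro_rules! debug_checked_unreachable {
    () => {
        if cfg!(debug_assertions) {
            // Debug builds: keep the loud panic so logic errors are caught early.
            unreachable!()
        } else {
            // Release builds: promise the optimizer this branch is never taken.
            // SAFETY: callers must guarantee the looked-up value always exists.
            unsafe { core::hint::unreachable_unchecked() }
        }
    };
}

// Usage pattern described above (sketch):
// let table = tables.get(table_id).unwrap_or_else(|| debug_checked_unreachable!());
```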

The equivalent section of code with these changes removes most of the conditional jump instructions:

```asm
.LBB5_5:
	movss	(%rbx,%rbp,4), %xmm0
	movl	%r14d, 4(%r8,%rbp,8)
	addss	(%rdi,%rbp,4), %xmm0
	movss	%xmm0, (%rdi,%rbp,4)
	incq	%rbp
.LBB5_1:
	cmpq	%rdx, %rbp
	jne	.LBB5_5
	.p2align	4, 0x90
.LBB5_2:
	cmpq	%rcx, %rax
	je	.LBB5_6
	movq	(%rax), %rdx
	addq	$8, %rax
	movq	312(%rsi), %rbp
	leaq	(%rdx,%rdx,2), %rbx
	shlq	$5, %rbx
	movq	88(%rbp,%rbx), %rdx
	testq	%rdx, %rdx
	je	.LBB5_2
	leaq	(%rbx,%rbp), %r8
	movq	336(%r15), %rdi
	movq	344(%r15), %r9
	movq	48(%rbp,%rbx), %r10
	shlq	$4, %rdi
	movq	(%r8), %rbx
	movq	8(%r10,%rdi), %rdi
	leaq	(%rdi,%rdi,4), %rbp
	shlq	$4, %rbp
	movq	32(%rbx,%rbp), %rdi
	movq	56(%rbx,%rbp), %r8
	shlq	$4, %r9
	movq	8(%r10,%r9), %rbp
	leaq	(%rbp,%rbp,4), %rbp
	shlq	$4, %rbp
	movq	32(%rbx,%rbp), %rbx
	xorl	%ebp, %ebp
	jmp	.LBB5_5
.LBB5_6:
	addq	$40, %rsp
	popq	%rbx
	popq	%rbp
	popq	%rdi
	popq	%rsi
	popq	%r14
	popq	%r15
	retq
	.seh_endproc

```

## Performance

Microbenchmarks results:

<details>

```
group                                                    main                                     no-panic-query
-----                                                    ----                                     --------------
busy_systems/01x_entities_03_systems                     1.20     42.4±2.66µs        ? ?/sec      1.00     35.3±1.68µs        ? ?/sec
busy_systems/01x_entities_06_systems                     1.32     83.8±3.50µs        ? ?/sec      1.00     63.6±1.72µs        ? ?/sec
busy_systems/01x_entities_09_systems                     1.15    113.3±8.90µs        ? ?/sec      1.00     98.2±6.15µs        ? ?/sec
busy_systems/01x_entities_12_systems                     1.27   160.8±32.44µs        ? ?/sec      1.00    126.6±4.70µs        ? ?/sec
busy_systems/01x_entities_15_systems                     1.12    179.6±3.71µs        ? ?/sec      1.00   160.3±11.03µs        ? ?/sec
busy_systems/02x_entities_03_systems                     1.18     76.8±3.14µs        ? ?/sec      1.00     65.2±3.17µs        ? ?/sec
busy_systems/02x_entities_06_systems                     1.16    144.6±6.10µs        ? ?/sec      1.00    124.5±5.14µs        ? ?/sec
busy_systems/02x_entities_09_systems                     1.19    215.3±9.18µs        ? ?/sec      1.00    181.5±5.67µs        ? ?/sec
busy_systems/02x_entities_12_systems                     1.20    266.7±8.33µs        ? ?/sec      1.00    222.0±9.53µs        ? ?/sec
busy_systems/02x_entities_15_systems                     1.23   338.8±10.53µs        ? ?/sec      1.00    276.3±6.94µs        ? ?/sec
busy_systems/03x_entities_03_systems                     1.43    113.5±5.06µs        ? ?/sec      1.00     79.6±1.49µs        ? ?/sec
busy_systems/03x_entities_06_systems                     1.38   217.3±12.67µs        ? ?/sec      1.00    157.5±3.07µs        ? ?/sec
busy_systems/03x_entities_09_systems                     1.23   308.8±24.75µs        ? ?/sec      1.00    251.6±8.93µs        ? ?/sec
busy_systems/03x_entities_12_systems                     1.05   347.7±12.43µs        ? ?/sec      1.00   330.6±11.43µs        ? ?/sec
busy_systems/03x_entities_15_systems                     1.13   455.5±13.88µs        ? ?/sec      1.00   401.7±17.29µs        ? ?/sec
busy_systems/04x_entities_03_systems                     1.24    144.7±5.89µs        ? ?/sec      1.00    116.9±6.29µs        ? ?/sec
busy_systems/04x_entities_06_systems                     1.24   282.8±21.40µs        ? ?/sec      1.00   228.6±21.31µs        ? ?/sec
busy_systems/04x_entities_09_systems                     1.35   431.8±14.10µs        ? ?/sec      1.00    319.6±9.83µs        ? ?/sec
busy_systems/04x_entities_12_systems                     1.16   493.8±22.87µs        ? ?/sec      1.00   424.9±15.24µs        ? ?/sec
busy_systems/04x_entities_15_systems                     1.10   587.5±23.25µs        ? ?/sec      1.00   531.7±16.32µs        ? ?/sec
busy_systems/05x_entities_03_systems                     1.14    148.2±9.61µs        ? ?/sec      1.00    129.5±4.32µs        ? ?/sec
busy_systems/05x_entities_06_systems                     1.31   359.7±17.46µs        ? ?/sec      1.00   273.6±10.55µs        ? ?/sec
busy_systems/05x_entities_09_systems                     1.22   473.5±23.11µs        ? ?/sec      1.00   389.3±13.62µs        ? ?/sec
busy_systems/05x_entities_12_systems                     1.05   562.9±20.76µs        ? ?/sec      1.00   536.5±24.35µs        ? ?/sec
busy_systems/05x_entities_15_systems                     1.23   818.5±28.70µs        ? ?/sec      1.00   666.6±45.87µs        ? ?/sec
contrived/01x_entities_03_systems                        1.27     27.5±0.49µs        ? ?/sec      1.00     21.6±1.71µs        ? ?/sec
contrived/01x_entities_06_systems                        1.22     49.9±1.18µs        ? ?/sec      1.00     40.7±2.62µs        ? ?/sec
contrived/01x_entities_09_systems                        1.30     72.3±2.39µs        ? ?/sec      1.00     55.4±2.60µs        ? ?/sec
contrived/01x_entities_12_systems                        1.28     94.3±9.44µs        ? ?/sec      1.00     73.7±3.62µs        ? ?/sec
contrived/01x_entities_15_systems                        1.25    118.0±2.43µs        ? ?/sec      1.00     94.1±3.99µs        ? ?/sec
contrived/02x_entities_03_systems                        1.23     41.6±1.71µs        ? ?/sec      1.00     33.7±2.30µs        ? ?/sec
contrived/02x_entities_06_systems                        1.19     78.6±2.63µs        ? ?/sec      1.00     65.9±2.35µs        ? ?/sec
contrived/02x_entities_09_systems                        1.28    113.6±3.60µs        ? ?/sec      1.00     88.6±3.60µs        ? ?/sec
contrived/02x_entities_12_systems                        1.20    146.4±5.75µs        ? ?/sec      1.00    121.7±3.35µs        ? ?/sec
contrived/02x_entities_15_systems                        1.23    178.5±4.86µs        ? ?/sec      1.00    145.7±4.00µs        ? ?/sec
contrived/03x_entities_03_systems                        1.42     58.3±2.77µs        ? ?/sec      1.00     41.1±1.54µs        ? ?/sec
contrived/03x_entities_06_systems                        1.32    108.5±7.30µs        ? ?/sec      1.00     82.4±4.86µs        ? ?/sec
contrived/03x_entities_09_systems                        1.23    153.7±4.61µs        ? ?/sec      1.00    125.0±4.76µs        ? ?/sec
contrived/03x_entities_12_systems                        1.18    197.5±5.12µs        ? ?/sec      1.00    166.8±8.14µs        ? ?/sec
contrived/03x_entities_15_systems                        1.23    238.8±6.38µs        ? ?/sec      1.00    194.6±4.55µs        ? ?/sec
contrived/04x_entities_03_systems                        1.34     66.4±3.42µs        ? ?/sec      1.00     49.5±1.98µs        ? ?/sec
contrived/04x_entities_06_systems                        1.27    134.3±4.86µs        ? ?/sec      1.00    105.8±3.58µs        ? ?/sec
contrived/04x_entities_09_systems                        1.26    193.2±3.83µs        ? ?/sec      1.00    153.0±5.60µs        ? ?/sec
contrived/04x_entities_12_systems                        1.16    237.1±5.78µs        ? ?/sec      1.00   204.9±18.77µs        ? ?/sec
contrived/04x_entities_15_systems                        1.17    289.2±4.76µs        ? ?/sec      1.00    246.3±8.57µs        ? ?/sec
contrived/05x_entities_03_systems                        1.26     80.4±2.90µs        ? ?/sec      1.00     63.7±3.07µs        ? ?/sec
contrived/05x_entities_06_systems                        1.27   161.6±13.47µs        ? ?/sec      1.00    127.2±5.59µs        ? ?/sec
contrived/05x_entities_09_systems                        1.22    228.0±7.76µs        ? ?/sec      1.00    186.2±7.68µs        ? ?/sec
contrived/05x_entities_12_systems                        1.20    289.5±6.21µs        ? ?/sec      1.00    241.8±7.52µs        ? ?/sec
contrived/05x_entities_15_systems                        1.18   357.3±11.24µs        ? ?/sec      1.00    302.7±7.21µs        ? ?/sec
heavy_compute/base                                       1.01    302.4±3.52µs        ? ?/sec      1.00    300.2±3.40µs        ? ?/sec
iter_fragmented/base                                     1.00    348.1±7.51ns        ? ?/sec      1.01    351.9±8.32ns        ? ?/sec
iter_fragmented/foreach                                  1.03   239.8±23.78ns        ? ?/sec      1.00   233.8±18.12ns        ? ?/sec
iter_fragmented/foreach_wide                             1.00      3.9±0.13µs        ? ?/sec      1.02      4.0±0.22µs        ? ?/sec
iter_fragmented/wide                                     1.18      4.6±0.15µs        ? ?/sec      1.00      3.9±0.10µs        ? ?/sec
iter_fragmented_sparse/base                              1.02      8.1±0.15ns        ? ?/sec      1.00      7.9±0.56ns        ? ?/sec
iter_fragmented_sparse/foreach                           1.00      7.8±0.22ns        ? ?/sec      1.01      7.9±0.62ns        ? ?/sec
iter_fragmented_sparse/foreach_wide                      1.00     37.2±1.17ns        ? ?/sec      1.10     40.9±0.95ns        ? ?/sec
iter_fragmented_sparse/wide                              1.09     48.4±2.13ns        ? ?/sec      1.00    44.5±18.34ns        ? ?/sec
iter_simple/base                                         1.02      8.4±0.10µs        ? ?/sec      1.00      8.2±0.14µs        ? ?/sec
iter_simple/foreach                                      1.01      8.3±0.07µs        ? ?/sec      1.00      8.2±0.09µs        ? ?/sec
iter_simple/foreach_sparse_set                           1.00     25.3±0.32µs        ? ?/sec      1.02     25.7±0.42µs        ? ?/sec
iter_simple/foreach_wide                                 1.03     41.1±0.94µs        ? ?/sec      1.00     39.9±0.41µs        ? ?/sec
iter_simple/foreach_wide_sparse_set                      1.05    123.6±2.05µs        ? ?/sec      1.00    118.1±2.78µs        ? ?/sec
iter_simple/sparse_set                                   1.14     30.5±1.40µs        ? ?/sec      1.00     26.9±0.64µs        ? ?/sec
iter_simple/system                                       1.01      8.4±0.25µs        ? ?/sec      1.00      8.4±0.11µs        ? ?/sec
iter_simple/wide                                         1.18     48.2±0.62µs        ? ?/sec      1.00     40.7±0.38µs        ? ?/sec
iter_simple/wide_sparse_set                              1.12   140.8±21.56µs        ? ?/sec      1.00    126.0±2.30µs        ? ?/sec
query_get/50000_entities_sparse                          1.17    378.6±7.60µs        ? ?/sec      1.00   324.1±23.17µs        ? ?/sec
query_get/50000_entities_table                           1.08   330.9±10.90µs        ? ?/sec      1.00    306.8±4.98µs        ? ?/sec
query_get_component/50000_entities_sparse                1.00   976.7±19.55µs        ? ?/sec      1.00   979.8±35.87µs        ? ?/sec
query_get_component/50000_entities_table                 1.00  1029.0±15.11µs        ? ?/sec      1.05  1080.0±59.18µs        ? ?/sec
query_get_component_simple/system                        1.13   839.7±14.18µs        ? ?/sec      1.00   742.8±10.72µs        ? ?/sec
query_get_component_simple/unchecked                     1.01   909.0±15.17µs        ? ?/sec      1.00   898.0±13.56µs        ? ?/sec
query_get_many_10/50000_calls_sparse                     1.04      5.5±0.54ms        ? ?/sec      1.00      5.3±0.67ms        ? ?/sec
query_get_many_10/50000_calls_table                      1.01      4.9±0.49ms        ? ?/sec      1.00      4.8±0.45ms        ? ?/sec
query_get_many_2/50000_calls_sparse                      1.28  848.4±210.89µs        ? ?/sec      1.00   664.8±47.69µs        ? ?/sec
query_get_many_2/50000_calls_table                       1.05   779.0±73.85µs        ? ?/sec      1.00   739.2±83.02µs        ? ?/sec
query_get_many_5/50000_calls_sparse                      1.05      2.4±0.37ms        ? ?/sec      1.00      2.3±0.33ms        ? ?/sec
query_get_many_5/50000_calls_table                       1.00  1939.9±75.22µs        ? ?/sec      1.04      2.0±0.19ms        ? ?/sec
run_criteria/yes_using_query/001_systems                 1.00      3.7±0.38µs        ? ?/sec      1.30      4.9±0.14µs        ? ?/sec
run_criteria/yes_using_query/006_systems                 1.00      8.9±0.40µs        ? ?/sec      1.17     10.3±0.57µs        ? ?/sec
run_criteria/yes_using_query/011_systems                 1.00     13.9±0.49µs        ? ?/sec      1.08     15.0±0.89µs        ? ?/sec
run_criteria/yes_using_query/016_systems                 1.00     18.8±0.74µs        ? ?/sec      1.00     18.8±1.43µs        ? ?/sec
run_criteria/yes_using_query/021_systems                 1.07     24.1±0.87µs        ? ?/sec      1.00     22.6±1.58µs        ? ?/sec
run_criteria/yes_using_query/026_systems                 1.04     27.9±0.62µs        ? ?/sec      1.00     26.8±1.71µs        ? ?/sec
run_criteria/yes_using_query/031_systems                 1.09     33.3±1.03µs        ? ?/sec      1.00     30.5±2.18µs        ? ?/sec
run_criteria/yes_using_query/036_systems                 1.14     38.7±0.80µs        ? ?/sec      1.00     33.9±1.75µs        ? ?/sec
run_criteria/yes_using_query/041_systems                 1.18     43.7±1.07µs        ? ?/sec      1.00     37.0±2.39µs        ? ?/sec
run_criteria/yes_using_query/046_systems                 1.14     47.6±1.16µs        ? ?/sec      1.00     41.9±2.09µs        ? ?/sec
run_criteria/yes_using_query/051_systems                 1.17     52.9±2.04µs        ? ?/sec      1.00     45.3±1.75µs        ? ?/sec
run_criteria/yes_using_query/056_systems                 1.25     59.2±2.38µs        ? ?/sec      1.00     47.2±2.01µs        ? ?/sec
run_criteria/yes_using_query/061_systems                 1.28    66.1±15.84µs        ? ?/sec      1.00     51.5±2.47µs        ? ?/sec
run_criteria/yes_using_query/066_systems                 1.28     70.2±2.57µs        ? ?/sec      1.00     54.7±2.58µs        ? ?/sec
run_criteria/yes_using_query/071_systems                 1.30     75.5±2.27µs        ? ?/sec      1.00     58.2±3.31µs        ? ?/sec
run_criteria/yes_using_query/076_systems                 1.26     81.5±2.66µs        ? ?/sec      1.00     64.5±3.13µs        ? ?/sec
run_criteria/yes_using_query/081_systems                 1.29     89.7±2.58µs        ? ?/sec      1.00     69.3±3.47µs        ? ?/sec
run_criteria/yes_using_query/086_systems                 1.33     95.6±3.39µs        ? ?/sec      1.00     71.8±3.48µs        ? ?/sec
run_criteria/yes_using_query/091_systems                 1.25    102.0±3.67µs        ? ?/sec      1.00     81.4±4.82µs        ? ?/sec
run_criteria/yes_using_query/096_systems                 1.33    111.7±3.29µs        ? ?/sec      1.00     83.8±4.15µs        ? ?/sec
run_criteria/yes_using_query/101_systems                 1.29   113.2±12.04µs        ? ?/sec      1.00     87.7±5.15µs        ? ?/sec
world_query_for_each/50000_entities_sparse               1.00     47.4±0.51µs        ? ?/sec      1.00     47.3±0.33µs        ? ?/sec
world_query_for_each/50000_entities_table                1.00     27.2±0.50µs        ? ?/sec      1.00     27.2±0.17µs        ? ?/sec
world_query_get/50000_entities_sparse_wide               1.09    210.5±1.78µs        ? ?/sec      1.00    192.5±2.61µs        ? ?/sec
world_query_get/50000_entities_table                     1.00    127.7±2.09µs        ? ?/sec      1.07    136.2±5.95µs        ? ?/sec
world_query_get/50000_entities_table_wide                1.00    209.8±2.37µs        ? ?/sec      1.15    240.6±2.04µs        ? ?/sec
world_query_iter/50000_entities_sparse                   1.00     54.2±0.36µs        ? ?/sec      1.01     54.7±0.61µs        ? ?/sec
world_query_iter/50000_entities_table                    1.00     27.2±0.31µs        ? ?/sec      1.00     27.3±0.64µs        ? ?/sec
```
</details>

NOTE: This PR includes a change to enable LTO on our benchmarks to get a "fully optimized" baseline for our benchmarks. Both the main and the current PR's results were with LTO enabled.
2022-11-04 06:04:55 +00:00
ShadowCurse
179f719553 ECS benchmarks organization (#5189)
## Objective

Fixes: #5110

## Solution

- Moved benches into separate modules according to the part of ECS they are testing.
- Made so all ECS benches are included in one `benches.rs` so they don’t need to be added separately in `Cargo.toml`.
- Renamed a bunch of files to have more coherent names.
- Merged `schedule.rs` and `system_schedule.rs` into one file.
2022-07-04 14:17:45 +00:00
CGMossa
33f9b3940d Updated glam to 0.21. (#5142)
Removed `const_vec2`/`const_vec3`
and replaced with equivalent `.from_array`.

# Objective

Fixes #5112 

## Solution

- `encase` needs to update to `glam` as well. See teoxoy/encase#4 on progress on that. 
- `hexasphere` also needs to be updated, see OptimisticPeach/hexasphere#12.
2022-07-03 19:55:33 +00:00
JoJoJet
92ea730362 Add benchmarks for schedule dependency resolution (#4961)
# Objective

- Add benchmarks to test the performance of `Schedule`'s system dependency resolution.

## Solution

- Do a series of benchmarks while increasing the number of systems in the schedule to see how the run-time scales.
- Split the benchmarks into a group with no dependencies, and a group with many dependencies.
2022-06-20 17:35:54 +00:00
dataphract
0917c49b9b bench: add bevy_reflect::{List, Map, Struct} benchmarks (#3690)
# Objective

Partially addresses #3594.

## Solution

This adds basic benchmarks for `List`, `Map`, and `Struct` implementors, both concrete (`Vec`, `HashMap`, and defined struct types) and dynamic (`DynamicList`, `DynamicMap` and `DynamicStruct`). 

A few insights from the benchmarks (all measurements are local on my machine):
- Applying a list with many elements to a list with no elements is slower than applying to a list of the same length:
  - 3-4x slower when applying to a `Vec`
  - 5-6x slower when applying to a `DynamicList`
  
  I suspect this could be improved by `reserve()`ing the correct length up front, but haven't tested.
- Applying a `DynamicMap` to another `Map` is linear in the number of elements, but applying a `HashMap` seems to be at least quadratic. No intuition on this one.
- Applying like structs (concrete -> concrete, `DynamicStruct` -> `DynamicStruct`) seems to be faster than applying unlike structs.
2022-05-17 04:16:53 +00:00
TheRawMeatball
87991c50f1 Add a random access get_component benchmark (#4607)
# Objective

Add a benchmark to measure the performance of get_component, particularly for cases involving random access.

Enables #2965
2022-04-26 21:20:13 +00:00
Mike
45d2c78949 add benches for simple run criteria (#4196)
# Objective

- Add benches for run criteria. This is in anticipation of run criteria being redone in stageless.

## Solution

- Benches run criteria that don't access anything to test overhead
- Test run criteria that use a query
- Test run criteria that use a resource
2022-04-26 19:59:19 +00:00
Torne Wuff
b1afe2dcca Make System responsible for updating its own archetypes (#4115)
# Objective

- Make it possible to use `System`s outside of the scheduler/executor without having to define logic to track new archetypes and call `System::add_archetype()` for each.

## Solution

- Replace `System::add_archetype(&Archetype)` with `System::update_archetypes(&World)`, making systems responsible for tracking their own most recent archetype generation the way that `SystemState` already does.

This has minimal (or simplifying) effect on most of the code with the exception of `FunctionSystem`, which must now track the latest `ArchetypeGeneration` it saw instead of relying on the executor to do it.

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-04-07 20:50:43 +00:00
Alice Cecile
1b63d949ea Re-add ECS benchmark suite (#4332)
# Objective

- Benchmarks are good.
- Licensing situation appears to be [cleared up](https://github.com/bevyengine/bevy/pull/4225#issuecomment-1078710209).

## Solution

- Add the benchmark suite back in
- Suggested PR title: "Revert "Revert "Add cart's fork of ecs_bench_suite (#4225)" (#4252)"


Co-authored-by: Daniel McNab <36049421+DJMcNab@users.noreply.github.com>
2022-03-29 22:18:07 +00:00
Daniel McNab
95d3f32b9b Revert "Add cart's fork of ecs_bench_suite (#4225)" (#4252)
This reverts commit 08ef2f0a28.

# Objective

- #4225 was merged without considering the licensing considerations.
- It merges in code taken from https://github.com/cart/ecs_bench_suite/tree/bevy-benches/src/bevy.
- We can safely assume that we do have a license to cart's contributions. However, these build upon 377e96e69a, for which we have no license. 
- This has been verified by looking in the Cargo.toml, the root folder and the readme, none of which mention a license. Additionally, the string "license" [doesn't appear](https://github.com/rust-gamedev/ecs_bench_suite/search?q=license) in the repository.
- This means the code is all rights reserved.
- (The author of these commits also hasn't commented in #2373, though even if they had, it would be legally *dubious* to rely on that to license any code they ever wrote)
- (Note that the latest commit on the head at https://github.com/rust-gamedev/ecs_bench_suite hasn't had a license added either.)
- We are currently incorrectly claiming to be able to give an MIT/Apache 2.0 license to this code.

## Solution

- Revert it
2022-03-19 11:12:24 +00:00
James Liu
08ef2f0a28 Add cart's fork of ecs_bench_suite (#4225)
# Objective
Better benchmarking for ECS. Fix #2062.

## Solution
Port @cart's fork of ecs_bench_suite to the official bench suite for bevy_ecs, replace cgmath with glam, update to latest bevy.
2022-03-19 04:41:30 +00:00
Daniel McNab
1ba9818a78 Significantly reduce the amount of building required for benchmarks (#4067)
# Objective

- Release mode builds take a long time.

## Solution

- Build fewer things, so it takes less time.
2022-03-01 00:51:07 +00:00
Niklas Eicker
94db0176fe Add readme to errors crate and clean up cargo files (#3125)
# Objective

- Document that the error codes will be rendered on the bevy website (see bevyengine/bevy-website#216)
- Some Cargo.toml files did not include the license or a description field

## Solution

- Readme for the errors crate
- Mark internal/development crates with `publish = false`
- Add missing license/descriptions to some crates

- [x] merge bevyengine/bevy-website#216
2021-11-13 23:06:48 +00:00
Yoh Deadfall
ffde86efa0 Update to edition 2021 on master (#3028)
Objective
During work on #3009 I found that not all jobs use actions-rs, and therefore an older version of Rust is used for them. So while compilation and other checks can pass, checking markup and the Android build may fail with compilation errors.

Solution
This PR adds `actions-rs` for any job running cargo, and updates the edition to 2021.
2021-10-27 00:12:14 +00:00
Carter Anderson
a89a954a17 Not me ... us (#2654)
I don't see much of a reason at this point to boost my name over anyone else's. We are all Bevy Contributors.
2021-08-15 20:08:52 +00:00
François
234b2efa71 Inline world get (#2520)
# Objective

While looking at the code of `World`, I noticed two basic functions (`get` and `get_mut`) that are probably called a lot, have simple bodies, and are not marked `#[inline]`.

## Solution

- Add benchmark to check impact
- Add `#[inline]`


```
group                                            this pr                                main
-----                                            ----                                   ----
world_entity/50000_entities                      1.00   115.9±11.90µs        ? ?/sec    1.71   198.5±29.54µs        ? ?/sec
world_get/50000_entities_SparseSet               1.00   409.9±46.96µs        ? ?/sec    1.18   483.5±36.41µs        ? ?/sec
world_get/50000_entities_Table                   1.00   391.3±29.83µs        ? ?/sec    1.16   455.6±57.85µs        ? ?/sec
world_query_for_each/50000_entities_SparseSet    1.02   121.3±18.36µs        ? ?/sec    1.00   119.4±13.88µs        ? ?/sec
world_query_for_each/50000_entities_Table        1.03     13.8±0.96µs        ? ?/sec    1.00     13.3±0.54µs        ? ?/sec
world_query_get/50000_entities_SparseSet         1.00   666.9±54.36µs        ? ?/sec    1.03   687.1±57.77µs        ? ?/sec
world_query_get/50000_entities_Table             1.01   584.4±55.12µs        ? ?/sec    1.00   576.3±36.13µs        ? ?/sec
world_query_iter/50000_entities_SparseSet        1.01   169.7±19.50µs        ? ?/sec    1.00   168.6±32.56µs        ? ?/sec
world_query_iter/50000_entities_Table            1.00     26.2±1.38µs        ? ?/sec    1.91     50.0±4.40µs        ? ?/sec
```

I didn't add benchmarks for the mutable path but I don't see how it could hurt to make it inline too...
2021-07-27 23:19:26 +00:00
Nathan Ward
46b822e3da Commands benchmarking (#2334)
# Objective

- Currently the `Commands` and `CommandQueue` have no performance testing. 
- As `Commands` are quite expensive due to the `Box<dyn Command>` allocated for each command, there should be perf tests for implementations that attempt to improve the performance.  

## Solution

- Add some benchmarking for `Commands` and `CommandQueue`.
2021-06-28 23:26:29 +00:00
Carter Anderson
3a2a68852c Bevy ECS V2 (#1525)
# Bevy ECS V2

This is a rewrite of Bevy ECS (basically everything but the new executor/schedule, which are already awesome). The overall goal was to improve the performance and versatility of Bevy ECS. Here is a quick bulleted list of changes before we dive into the details:

* Complete World rewrite
* Multiple component storage types:
    * Tables: fast cache friendly iteration, slower add/removes (previously called Archetypes)
    * Sparse Sets: fast add/remove, slower iteration
* Stateful Queries (caches query results for faster iteration. fragmented iteration is _fast_ now)
* Stateful System Params (caches expensive operations. inspired by @DJMcNab's work in #1364)
* Configurable System Params (users can set configuration when they construct their systems. once again inspired by @DJMcNab's work)
* Archetypes are now "just metadata", component storage is separate
* Archetype Graph (for faster archetype changes)
* Component Metadata
    * Configure component storage type
    * Retrieve information about component size/type/name/layout/send-ness/etc
    * Components are uniquely identified by a densely packed ComponentId
    * TypeIds are now totally optional (which should make implementing scripting easier)
* Super fast "for_each" query iterators
* Merged Resources into World. Resources are now just a special type of component
* EntityRef/EntityMut builder apis (more efficient and more ergonomic)
* Fast bitset-backed `Access<T>` replaces old hashmap-based approach everywhere
* Query conflicts are determined by component access instead of archetype component access (to avoid random failures at runtime)
    * With/Without are still taken into account for conflicts, so this should still be comfy to use
* Much simpler `IntoSystem` impl
* Significantly reduced the amount of hashing throughout the ecs in favor of Sparse Sets (indexed by densely packed ArchetypeId, ComponentId, BundleId, and TableId)
* Safety Improvements
    * Entity reservation uses a normal world reference instead of unsafe transmute
    * QuerySets no longer transmute lifetimes
    * Made traits "unsafe" where relevant
    * More thorough safety docs
* WorldCell
    * Exposes safe mutable access to multiple resources at a time in a World 
* Replaced "catch all" `System::update_archetypes(world: &World)` with `System::new_archetype(archetype: &Archetype)`
* Simpler Bundle implementation
* Replaced slow "remove_bundle_one_by_one" used as fallback for Commands::remove_bundle with fast "remove_bundle_intersection"
* Removed `Mut<T>` query impl. it is better to only support one way: `&mut T` 
* Removed with() from `Flags<T>` in favor of `Option<Flags<T>>`, which allows querying for flags to be "filtered" by default 
* Components now have is_send property (currently only resources support non-send)
* More granular module organization
* New `RemovedComponents<T>` SystemParam that replaces `query.removed::<T>()`
* `world.resource_scope()` for mutable access to resources and world at the same time
* WorldQuery and QueryFilter traits unified. FilterFetch trait added to enable "short circuit" filtering. Auto impled for cases that don't need it
* Significantly slimmed down SystemState in favor of individual SystemParam state
* System Commands changed from `commands: &mut Commands` back to `mut commands: Commands` (to allow Commands to have a World reference)

Fixes #1320

## `World` Rewrite

This is a from-scratch rewrite of `World` that fills the niche that `hecs` used to. Yes, this means Bevy ECS is no longer a "fork" of hecs. We're going out on our own!

(the only shared code between the projects is the entity id allocator, which is already basically ideal)

A huge shout out to @SanderMertens (author of [flecs](https://github.com/SanderMertens/flecs)) for sharing some great ideas with me (specifically hybrid ecs storage and archetype graphs). He also helped advise on a number of implementation details.

## Component Storage (The Problem)

Two ECS storage paradigms have gained a lot of traction over the years:

* **Archetypal ECS**: 
    * Stores components in "tables" with static schemas. Each "column" stores components of a given type. Each "row" is an entity.
    * Each "archetype" has its own table. Adding/removing an entity's component changes the archetype.
    * Enables super-fast Query iteration due to its cache-friendly data layout
    * Comes at the cost of more expensive add/remove operations for an Entity's components, because all components need to be copied to the new archetype's "table"
* **Sparse Set ECS**:
    * Stores components of the same type in densely packed arrays, which are sparsely indexed by densely packed unsigned integers (Entity ids)
    * Query iteration is slower than Archetypal ECS because each entity's component could be at any position in the sparse set. This "random access" pattern isn't cache friendly. Additionally, there is an extra layer of indirection because you must first map the entity id to an index in the component array.
    * Adding/removing components is a cheap, constant time operation 

Bevy ECS V1, hecs, legion, flecs, and Unity DOTS are all "archetypal ecs-es". I personally think "archetypal" storage is a good default for game engines. An entity's archetype doesn't need to change frequently in general, and it creates "fast by default" query iteration (which is a much more common operation). It is also "self optimizing". Users don't need to think about optimizing component layouts for iteration performance. It "just works" without any extra boilerplate.

Shipyard and EnTT are "sparse set ecs-es". They employ "packing" as a way to work around the "suboptimal by default" iteration performance for specific sets of components. This helps, but I didn't think this was a good choice for a general purpose engine like Bevy because:

1. "packs" conflict with each other. If bevy decides to internally pack the Transform and GlobalTransform components, users are then blocked if they want to pack some custom component with Transform.
2. users need to take manual action to optimize

Developers selecting an ECS framework are stuck with a hard choice. Select an "archetypal" framework with "fast iteration everywhere" but without the ability to cheaply add/remove components, or select a "sparse set" framework to cheaply add/remove components but with slower iteration performance.

## Hybrid Component Storage (The Solution)

In Bevy ECS V2, we get to have our cake and eat it too. It now has _both_ of the component storage types above (and more can be added later if needed):

* **Tables** (aka "archetypal" storage)
    * The default storage. If you don't configure anything, this is what you get
    * Fast iteration by default
    * Slower add/remove operations
* **Sparse Sets**
    * Opt-in
    * Slower iteration
    * Faster add/remove operations

These storage types complement each other perfectly. By default Query iteration is fast. If developers know that they want to add/remove a component at high frequencies, they can set the storage to "sparse set":

```rust
world.register_component(
    ComponentDescriptor::new::<MyComponent>(StorageType::SparseSet)
).unwrap();
```

## Archetypes

Archetypes are now "just metadata" ... they no longer store components directly. They do store:

* The `ComponentId`s of each of the Archetype's components (and that component's storage type)
    * Archetypes are uniquely defined by their component layouts
    * For example: entities with "table" components `[A, B, C]` _and_ "sparse set" components `[D, E]` will always be in the same archetype.
* The `TableId` associated with the archetype
    * For now each archetype has exactly one table (which can have no components)
    * There is a 1->Many relationship from Tables->Archetypes. A given table could have any number of archetype components stored in it:
        * Ex: an entity with "table storage" components `[A, B, C]` and "sparse set" components `[D, E]` will share the same `[A, B, C]` table as an entity with `[A, B, C]` table components and `[F]` sparse set components.
        * This 1->Many relationship is how we preserve fast "cache friendly" iteration performance when possible (more on this later)
* A list of entities that are in the archetype and the row id of the table they are in
* ArchetypeComponentIds
    * unique densely packed identifiers for (ArchetypeId, ComponentId) pairs
    * used by the schedule executor for cheap system access control
* "Archetype Graph Edges" (see the next section)  

## The "Archetype Graph"

Archetype changes in Bevy (and a number of other archetypal ecs-es) have historically been expensive to compute. First, you need to allocate a new vector of the entity's current component ids, add or remove components based on the operation performed, sort it (to ensure it is order-independent), then hash it to find the archetype (if it exists). And that's all before we get to the _already_ expensive full copy of all components to the new table storage.

The solution is to build a "graph" of archetypes to cache these results. @SanderMertens first exposed me to the idea (and he got it from @gjroelofs, who came up with it). They propose adding directed edges between archetypes for add/remove component operations. If `ComponentId`s are densely packed, you can use sparse sets to cheaply jump between archetypes.

Bevy takes this one step further by using add/remove `Bundle` edges instead of `Component` edges. Bevy encourages the use of `Bundles` to group add/remove operations. This is largely for "clearer game logic" reasons, but it also helps cut down on the number of archetype changes required. `Bundles` now also have densely-packed `BundleId`s. This allows us to use a _single_ edge for each bundle operation (rather than needing to traverse N edges ... one for each component). Single component operations are also bundles, so this is strictly an improvement over a "component only" graph.

As a result, an operation that used to be _heavy_ (both for allocations and compute) is now two dirt-cheap array lookups and zero allocations.
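
To make the idea concrete, here is a rough sketch of what such cached edges look like. This is illustrative only; the names (`Edges`, `add_bundle`, `remove_bundle`) and the plain `Vec`-backed maps are assumptions, not Bevy's exact internal layout:

```rust
// A conceptual sketch only: each archetype caches the destination of add/remove
// bundle operations, keyed by the densely packed BundleId.
#[derive(Clone, Copy)]
struct ArchetypeId(usize);

struct Edges {
    // add_bundle[bundle_id] = the archetype you land in after adding that bundle
    add_bundle: Vec<Option<ArchetypeId>>,
    // remove_bundle[bundle_id] = the archetype you land in after removing that bundle
    remove_bundle: Vec<Option<ArchetypeId>>,
}

impl Edges {
    // The "dirt-cheap array lookup": if the edge is already cached,
    // no sorting, hashing, or allocation happens.
    fn get_add_bundle(&self, bundle_id: usize) -> Option<ArchetypeId> {
        self.add_bundle.get(bundle_id).copied().flatten()
    }
}
```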

## Stateful Queries

World queries are now stateful. This allows us to:

1. Cache archetype (and table) matches
    * This resolves another issue with (naive) archetypal ECS: query performance getting worse as the number of archetypes goes up (and fragmentation occurs).
2. Cache Fetch and Filter state
    * The expensive parts of fetch/filter operations (such as hashing the TypeId to find the ComponentId) now only happen once when the Query is first constructed
3. Incrementally build up state
    * When new archetypes are added, we only process the new archetypes (no need to rebuild state for old archetypes)

As a result, the direct `World` query api now looks like this:

```rust
let mut query = world.query::<(&A, &mut B)>();
for (a, mut b) in query.iter_mut(&mut world) {
}
```

Requiring `World` to generate stateful queries (rather than letting the `QueryState` type be constructed separately) allows us to ensure that _all_ queries are properly initialized (and the relevant world state, such as ComponentIds). This enables QueryState to remove branches from its operations that check for initialization status (and also enables query.iter() to take an immutable world reference because it doesn't need to initialize anything in world).
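
Because the returned query state caches archetype and table matches, it can be stored and reused instead of being rebuilt each time. A minimal sketch (with `A` and `B` standing in for real components):

```rust
// Construct the stateful query once; this resolves ComponentIds and matches archetypes.
let mut query = world.query::<(&A, &mut B)>();

// Reuse it across multiple passes: cached matches are reused, and only archetypes
// added since the last use need to be processed.
for _pass in 0..2 {
    for (a, mut b) in query.iter_mut(&mut world) {
        // ...
    }
}
```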

However in systems, this is a non-breaking change. State management is done internally by the relevant SystemParam.

## Stateful SystemParams

Like Queries, `SystemParams` now also cache state. For example, `Query` system params store the "stateful query" state mentioned above. Commands store their internal `CommandQueue`. This means you can now safely use as many separate `Commands` parameters in your system as you want. `Local<T>` system params store their `T` value in their state (instead of in Resources). 
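
For example, a system with two `Commands` parameters is now fine, since each parameter carries its own queue. A minimal sketch (`A` and `B` are placeholder components):

```rust
// Each Commands param owns its own CommandQueue state, so they no longer alias.
fn spawn_both(mut commands_a: Commands, mut commands_b: Commands) {
    commands_a.spawn().insert(A);
    commands_b.spawn().insert(B);
}
```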

SystemParam state also enabled a significant slim-down of SystemState. It is much nicer to look at now.

Per-SystemParam state naturally insulates us from an "aliased mut" class of errors we have hit in the past (ex: using multiple `Commands` system params).

(credit goes to @DJMcNab for the initial idea and draft pr here #1364)

## Configurable SystemParams

@DJMcNab also had the great idea to make SystemParams configurable. This allows users to provide some initial configuration / values for system parameters (when possible). Most SystemParams have no config (the config type is `()`), but the `Local<T>` param now supports user-provided parameters:

```rust
fn foo(value: Local<usize>) {
}

app.add_system(foo.system().config(|c| c.0 = Some(10)));
```

## Uber Fast "for_each" Query Iterators

Developers now have the choice to use a fast "for_each" iterator, which yields ~1.5-3x iteration speed improvements for "fragmented iteration", and minor ~1.2x iteration speed improvements for unfragmented iteration. 

```rust
fn system(query: Query<(&A, &mut B)>) {
    // you now have the option to do this for a speed boost
    query.for_each_mut(|(a, mut b)| {
    });

    // however normal iterators are still available
    for (a, mut b) in query.iter_mut() {
    }
}
```

I think in most cases we should continue to encourage "normal" iterators as they are more flexible and more "rust idiomatic". But when that extra "oomph" is needed, it makes sense to use `for_each`.

We should also consider using `for_each` for internal bevy systems to give our users a nice speed boost (but that should be a separate pr).

## Component Metadata

`World` now has a `Components` collection, which is accessible via `world.components()`. This stores mappings from `ComponentId` to `ComponentInfo`, as well as `TypeId` to `ComponentId` mappings (where relevant). `ComponentInfo` stores information about the component, such as ComponentId, TypeId, memory layout, send-ness (currently limited to resources), and storage type.
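
A hedged sketch of how this metadata might be inspected; the accessor names used here (`get_id`, `get_info`, `name`, `storage_type`) are assumptions for illustration rather than confirmed API, and `MyComponent` is a placeholder:

```rust
use std::any::TypeId;

// Assumed accessor names; the exact methods may differ.
let components = world.components();
if let Some(component_id) = components.get_id(TypeId::of::<MyComponent>()) {
    if let Some(info) = components.get_info(component_id) {
        println!("{} is stored as {:?}", info.name(), info.storage_type());
    }
}
```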

## Significantly Cheaper `Access<T>`

We used to use `TypeAccess<TypeId>` to manage read/write component/archetype-component access. This was expensive because TypeIds must be hashed and compared individually. The parallel executor got around this by "condensing" type ids into bitset-backed access types. This worked, but it had to be re-generated from the `TypeAccess<TypeId>` sources every time archetypes changed.

This pr removes TypeAccess in favor of faster bitset access everywhere. We can do this thanks to the move to densely packed `ComponentId`s and `ArchetypeComponentId`s.

## Merged Resources into World

Resources had a lot of redundant functionality with Components. They stored typed data, they had access control, they had unique ids, they were queryable via SystemParams, etc. In fact the _only_ major difference between them was that they were unique (and didn't correlate to an entity).

Separate resources also had the downside of requiring a separate set of access controls, which meant the parallel executor needed to compare more bitsets per system and manage more state.

I initially got the "separate resources" idea from `legion`. I think that design was motivated by the fact that it made the direct world query/resource lifetime interactions more manageable. It certainly made our lives easier when using Resources alongside hecs/bevy_ecs. However we already have a construct for safely and ergonomically managing in-world lifetimes: systems (which use `Access<T>` internally).

This pr merges Resources into World:

```rust
world.insert_resource(1);
world.insert_resource(2.0);
let a = world.get_resource::<i32>().unwrap();
let mut b = world.get_resource_mut::<f64>().unwrap();
*b = 3.0;
```

Resources are now just a special kind of component. They have their own ComponentIds (and their own resource TypeId->ComponentId scope, so they don't conflict with components of the same type). They are stored in a special "resource archetype", which stores components inside the archetype using a new `unique_components` sparse set (note that this sparse set could later be used to implement Tags). This allows us to keep the code size small by reusing existing data structures (namely Column, Archetype, ComponentFlags, and ComponentInfo). It also allows the executor to use a single `Access<ArchetypeComponentId>` per system. It should also make scripting language integration easier.

_But_ this merge did create problems for people directly interacting with `World`. What if you need mutable access to multiple resources at the same time? `world.get_resource_mut()` borrows World mutably!

## WorldCell

WorldCell applies the `Access<ArchetypeComponentId>` concept to direct world access:

```rust
let world_cell = world.cell();
let a = world_cell.get_resource_mut::<i32>().unwrap();
let b = world_cell.get_resource_mut::<f64>().unwrap();
```

This adds cheap runtime checks (a sparse set lookup of `ArchetypeComponentId` and a counter) to ensure that world accesses do not conflict with each other. Each operation returns a `WorldBorrow<'w, T>` or `WorldBorrowMut<'w, T>` wrapper type, which will release the relevant ArchetypeComponentId resources when dropped.

World caches the access sparse set (and only one cell can exist at a time), so `world.cell()` is a cheap operation. 

WorldCell does _not_ use atomic operations. It is non-send, does a mutable borrow of world to prevent other accesses, and uses a simple `Rc<RefCell<ArchetypeComponentAccess>>` wrapper in each WorldBorrow pointer. 

The api is currently limited to resource access, but it can and should be extended to queries / entity component access.

## Resource Scopes

WorldCell does not yet support component queries, and even when it does there are sometimes legitimate reasons to want a mutable world ref _and_ a mutable resource ref (ex: bevy_render and bevy_scene both need this). In these cases we could always drop down to the unsafe `world.get_resource_unchecked_mut()`, but that is not ideal!

Instead developers can use a "resource scope"

```rust
world.resource_scope(|world: &mut World, a: &mut A| {
})
```

This temporarily removes the `A` resource from `World`, provides mutable pointers to both, and re-adds A to World when finished. Thanks to the move to ComponentIds/sparse sets, this is a cheap operation.

If multiple resources are required, scopes can be nested. We could also consider adding a "resource tuple" to the api if this pattern becomes common and the boilerplate gets nasty.
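
For example, nesting scopes to get mutable access to two resources at once (using the same closure shape as the example above; `A` and `B` are placeholder resources):

```rust
// Each scope temporarily removes its resource from the World and re-adds it afterwards,
// so `a`, `b`, and `world` can all be mutated inside the inner closure.
world.resource_scope(|world: &mut World, a: &mut A| {
    world.resource_scope(|world: &mut World, b: &mut B| {
        // use `a`, `b`, and `world` here
    });
});
```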

## Query Conflicts Use ComponentId Instead of ArchetypeComponentId

For safety reasons, systems cannot contain queries that conflict with each other without wrapping them in a QuerySet. On bevy `main`, we use ArchetypeComponentIds to determine conflicts. This is nice because it can take into account filters:

```rust
// these queries will never conflict due to their filters
fn filter_system(a: Query<&mut A, With<B>>, b: Query<&mut A, Without<B>>) {
}
```

But it also has a significant downside:
```rust
// these queries will not conflict _until_ an entity with A, B, and C is spawned
fn maybe_conflicts_system(a: Query<(&mut A, &C)>, b: Query<(&mut A, &B)>) {
}
```

The system above will panic at runtime if an entity with A, B, and C is spawned. This makes it hard to trust that your game logic will run without crashing.

In this pr, I switched to using `ComponentId` instead. This _is_ more constraining. `maybe_conflicts_system` will now always fail, but it will do it consistently at startup. Naively, it would also _disallow_ `filter_system`, which would be a significant downgrade in usability. Bevy has a number of internal systems that rely on disjoint queries and I expect it to be a common pattern in userspace.

To resolve this, I added a new `FilteredAccess<T>` type, which wraps `Access<T>` and adds with/without filters. If two `FilteredAccess` have with/without values that prove they are disjoint, they will no longer conflict.
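
For queries that genuinely do conflict (like `maybe_conflicts_system` above), wrapping them in a `QuerySet` remains the way to make the access explicit. A hedged sketch; the accessor methods on the set are assumptions here:

```rust
// Wrapping the conflicting queries in a QuerySet satisfies the ComponentId-level
// conflict check, because only one query in the set can be accessed at a time
// (via the set's accessors, e.g. q0()/q1(); exact names assumed for illustration).
fn no_longer_conflicts_system(set: QuerySet<(Query<(&mut A, &C)>, Query<(&mut A, &B)>)>) {
    // access one query at a time through `set`
}
```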

## EntityRef / EntityMut

World entity operations on `main` require that the user passes in an `entity` id to each operation:

```rust
let entity = world.spawn((A, )); // create a new entity with A
world.get::<A>(entity);
world.insert(entity, (B, C));
world.insert_one(entity, D);
```

This means that each operation needs to look up the entity location / verify its validity. The initial spawn operation also requires a Bundle as input. This can be awkward when no components are required (or one component is required).

These operations have been replaced by `EntityRef` and `EntityMut`, which are "builder-style" wrappers around world that provide read and read/write operations on a single, pre-validated entity:

```rust
// spawn now takes no inputs and returns an EntityMut
let entity = world.spawn()
    .insert(A) // insert a single component into the entity
    .insert_bundle((B, C)) // insert a bundle of components into the entity
    .id(); // id returns the Entity id

// Returns EntityMut (or panics if the entity does not exist)
world.entity_mut(entity)
    .insert(D)
    .insert_bundle(SomeBundle::default());
{
    // returns EntityRef (or panics if the entity does not exist)
    let d = world.entity(entity)
        .get::<D>() // gets the D component
        .unwrap();
    // world.get still exists for ergonomics
    let d = world.get::<D>(entity).unwrap();
}

// These variants return Options if you want to check existence instead of panicking
world.get_entity_mut(entity)
    .unwrap()
    .insert(E);

if let Some(entity_ref) = world.get_entity(entity) {
    let d = entity_ref.get::<D>().unwrap();
}
```

This _does not_ affect the current Commands api or terminology. I think that should be a separate conversation as that is a much larger breaking change.

## Safety Improvements

* Entity reservation in Commands uses a normal world borrow instead of an unsafe transmute
* QuerySets no longer transmutes lifetimes
* Made traits "unsafe" when implementing a trait incorrectly could cause unsafety
* More thorough safety docs

## RemovedComponents SystemParam

The old approach to querying removed components (`query.removed::<T>()`) was confusing because it had no connection to the query itself. I replaced it with the following, which is both clearer and allows us to cache the ComponentId mapping in the SystemParamState:

```rust
fn system(removed: RemovedComponents<T>) {
    for entity in removed.iter() {
    }
} 
```

## Simpler Bundle implementation

Bundles are no longer responsible for sorting (or deduping) TypeInfo. They are just a simple ordered list of component types / data. This makes the implementation smaller and opens the door to an easy "nested bundle" implementation in the future (which I might even add in this pr). Duplicate detection is now done once per bundle type by World the first time a bundle is used.
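
For reference, a bundle is still just an ordered group of components, typically defined with the derive. A minimal sketch (`Position` and `Velocity` are placeholder components):

```rust
// No sorting or deduping happens here; the bundle is a plain ordered list of
// component types/data. World checks for duplicate component types once, the
// first time this bundle type is used.
#[derive(Bundle)]
struct MovementBundle {
    position: Position,
    velocity: Velocity,
}
```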

## Unified WorldQuery and QueryFilter types

(don't worry, they are still separate type _parameters_ in Queries ... this is a non-breaking change)

WorldQuery and QueryFilter were already basically identical apis. With the addition of `FetchState` and more storage-specific fetch methods, the overlap was even clearer (and the redundancy more painful).

QueryFilters are now just `F: WorldQuery where F::Fetch: FilterFetch`. FilterFetch requires `Fetch<Item = bool>` and adds new "short circuit" variants of fetch methods. This enables a filter tuple like `(With<A>, Without<B>, Changed<C>)` to stop evaluating the filter after the first mismatch is encountered. FilterFetch is automatically implemented for `Fetch` implementations that return bool.

This forces fetch implementations that return things like `(bool, bool, bool)` (such as the filter above) to manually implement FilterFetch and decide whether or not to short-circuit.
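
None of this changes how filters are written; a filter tuple is still just the second type parameter of `Query`. A minimal sketch with placeholder components:

```rust
// The whole tuple acts as a single filter, and evaluation can stop at the first
// sub-filter that fails for a given entity.
fn system(query: Query<&A, (With<B>, Without<C>, Changed<D>)>) {
    for a in query.iter() {
        // only entities with A and B, without C, and whose D changed since this system last ran
    }
}
```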

## More Granular Modules

World no longer globs all of the internal modules together. It now exports `core`, `system`, and `schedule` separately. I'm also considering exporting `core` submodules directly as that is still pretty "glob-ey" and unorganized (feedback welcome here).

## Remaining Draft Work (to be done in this pr)

* ~~panic on conflicting WorldQuery fetches (&A, &mut A)~~
    * ~~bevy `main` and hecs both currently allow this, but we should protect against it if possible~~
* ~~batch_iter / par_iter (currently stubbed out)~~
* ~~ChangedRes~~
    * ~~I skipped this while we sort out #1313. This pr should be adapted to account for whatever we land on there~~.
* ~~The `Archetypes` and `Tables` collections use hashes of sorted lists of component ids to uniquely identify each archetype/table. This hash is then used as the key in a HashMap to look up the relevant ArchetypeId or TableId. (which doesn't handle hash collisions properly)~~
* ~~It is currently unsafe to generate a Query from "World A", then use it on "World B" (despite the api claiming it is safe). We should probably close this gap. This could be done by adding a randomly generated WorldId to each world, then storing that id in each Query. They could then be compared to each other on each `query.do_thing(&world)` operation. This _does_ add an extra branch to each query operation, so I'm open to other suggestions if people have them.~~
* ~~Nested Bundles (if i find time)~~

## Potential Future Work

* Expand WorldCell to support queries.
* Consider not allocating in the empty archetype on `world.spawn()`
    * ex: return something like EntityMutUninit, which turns into EntityMut after an `insert` or `insert_bundle` op
    * this actually regressed performance last time i tried it, but in theory it should be faster
* Optimize SparseSet::insert (see `PERF` comment on insert)
* Replace SparseArray `Option<T>` with T::MAX to cut down on branching
    * would enable cheaper get_unchecked() operations
* upstream fixedbitset optimizations
    * fixedbitset could be allocation free for small block counts (store blocks in a SmallVec)
    * fixedbitset could have a const constructor 
* Consider implementing Tags (archetype-specific by-value data that affects archetype identity) 
    * ex: ArchetypeA could have `[A, B, C]` table components and `[D(1)]` "tag" component. ArchetypeB could have `[A, B, C]` table components and a `[D(2)]` tag component. The archetypes are different, despite both having D tags because the value inside D is different.
    * this could potentially build on top of the `archetype.unique_components` added in this pr for resource storage.
* Consider reverting `all_tuples` proc macro in favor of the old `macro_rules` implementation
    * all_tuples is more flexible and produces cleaner documentation (the macro_rules version produces weird type parameter orders due to parser constraints)
    * but unfortunately all_tuples also appears to make Rust Analyzer sad/slow when working inside of `bevy_ecs` (does not affect user code)
* Consider "resource queries" and/or "mixed resource and entity component queries" as an alternative to WorldCell
    * this is basically just "systems" so maybe it's not worth it
* Add more world ops
    * `world.clear()`
    * `world.reserve<T: Bundle>(count: usize)`
* Try using the old archetype allocation strategy (allocate new memory on resize and copy everything over). I expect this to improve batch insertion performance at the cost of unbatched performance. But that's just a guess. I'm not an allocation perf pro :)
* Adapt Commands apis for consistency with new World apis

## Benchmarks

key:

* `bevy_old`: bevy `main` branch
* `bevy`: this branch
* `_foreach`: uses an optimized for_each iterator
* `_sparse`: uses sparse set storage (if unspecified assume table storage)
* `_system`: runs inside a system (if unspecified assume test happens via direct world ops)

### Simple Insert (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109245573-9c3ce100-7795-11eb-9003-bfd41cd5c51f.png)

### Simple Iter (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109245795-ffc70e80-7795-11eb-92fb-3ffad09aabf7.png)

### Fragment Iter (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109245849-0fdeee00-7796-11eb-8d25-eb6b7a682c48.png)

### Sparse Fragmented Iter

Iterates a query that matches 5 entities in a single matching archetype, while 100 non-matching archetypes also exist

![image](https://user-images.githubusercontent.com/2694663/109245916-2b49f900-7796-11eb-9a8f-ed89c203f940.png)
 
### Schedule (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109246428-1fab0200-7797-11eb-8841-1b2161e90fa4.png)

### Add Remove Component (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109246492-39e4e000-7797-11eb-8985-2706bd0495ab.png)


### Add Remove Component Big

Same as the test above, but each entity has 5 "large" matrix components and 1 "large" matrix component is added and removed

![image](https://user-images.githubusercontent.com/2694663/109246517-449f7500-7797-11eb-835e-28b6790daeaa.png)


### Get Component

Looks up a single component value a large number of times

![image](https://user-images.githubusercontent.com/2694663/109246129-87ad1880-7796-11eb-9fcb-c38012aa7c70.png)
2021-03-05 07:54:35 +00:00
Yoh Deadfall
578a5b1b88 Moved benchmarks to a single place (#1477)
Closes #1472.
2021-02-19 22:11:00 +00:00
Smite Rust
a9ce7f4e82
update dependencies (#470) 2020-09-10 12:54:24 -07:00
Carter Anderson
9c850057c0 move benches to separate crate to cut test/example build times 2020-09-08 13:30:44 -07:00