Commit graph

127 commits

Author SHA1 Message Date
Nicola Papale
47409c8a72
Add inline(never) to bench systems (#9824)
# Objective

It is difficult to inspect the generated assembly of benchmark systems
using a tool such as `cargo-asm`.

## Solution

Mark the related functions as `#[inline(never)]`. This way, you can pass
the module name as an argument to `cargo-asm` to get the generated assembly
for the given function.

As a side effect, it may also make benchmarks a bit more predictable and
useful, since it prevents inlining in places where no inlining could
possibly take place in Bevy.
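
As a rough illustration of the kind of change involved (a sketch assuming the current bevy_ecs API; the component, system, and setup here are illustrative rather than the actual bench code):

```rust
use bevy_ecs::prelude::*;

#[derive(Component)]
struct Position(f32);

// `#[inline(never)]` keeps the system as a distinct symbol, so `cargo-asm`
// can be pointed at its module path to dump the generated assembly.
#[inline(never)]
fn bench_system(mut query: Query<&mut Position>) {
    for mut pos in &mut query {
        pos.0 += 1.0;
    }
}

fn main() {
    let mut world = World::new();
    world.spawn(Position(0.0));
    let mut schedule = Schedule::default();
    schedule.add_systems(bench_system);
    schedule.run(&mut world);
}
```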

### Measurements

Following the recommendations in
<https://easyperf.net/blog/2019/08/02/Perf-measurement-environment-on-Linux>,
I

1. Put my CPU in "AMD ECO" mode, which surprisingly is the equivalent of
disabling turbo boost, giving more consistent performance
2. Disabled all hyperthreading cores using `echo 0 >
/sys/devices/system/cpu/cpu{11,12…}/online`
3. Set the scaling governor to `performance`
4. Manually disabled AMD boost with `echo 0 >
/sys/devices/system/cpu/cpufreq/boost`
5. Set the nice level of the criterion benchmark using `cargo bench … &
sudo renice -n -5 -p $! ; fg`
6. Not running any other program than the benchmarks (outside of system
daemons and the X11 server)

With this setup, running the same benchmarks multiple times on `main`
gives me a lot of "regression" and "improvement" messages, which is
absurd given that no code changed.

On this branch, there are still some spurious performance change
detections, but they are much less frequent.

Of course, this only accounts for the `iter_simple` and `iter_frag`
benchmarks.
2023-10-02 12:52:18 +00:00
Nicola Papale
359e6c718d
Use single threaded executor for archetype benches (#9835)
# Objective

`no_archetype` benchmark group results were very noisy

## Solution

Use the `SingleThreaded` executor.

On my machine, this makes the `no_archetype` bench group 20 to 30 times
faster, meaning that most of the runtime was accounted for by the
multithreaded scheduler. In other words, the benchmark was not testing
system archetype updates, but the overhead of multithreaded scheduling.

With this change, the benchmark results are more meaningful.

The `add_archetypes` function is also simplified.
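
A rough sketch of the relevant change (assuming the `ExecutorKind` API; the schedule setup is illustrative):

```rust
use bevy_ecs::prelude::*;
use bevy_ecs::schedule::ExecutorKind;

fn main() {
    let mut schedule = Schedule::default();
    // Run all systems on the calling thread, so the bench measures archetype
    // updates rather than the overhead of the multithreaded scheduler.
    schedule.set_executor_kind(ExecutorKind::SingleThreaded);
    let mut world = World::new();
    schedule.run(&mut world);
}
```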
2023-09-18 16:06:42 +00:00
Mike
c61faf35b8
fix deprecation warning in bench (#9823)
# Objective

- Fix deprecation warning when running benches.

## Solution

- `iter` -> `read`
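
The deprecation in question is presumably `EventReader::iter` being replaced by `EventReader::read`; a small sketch (the event type is illustrative):

```rust
use bevy_ecs::prelude::*;

#[derive(Event)]
struct Tick;

fn consume(mut reader: EventReader<Tick>) {
    // Before: `for _tick in reader.iter() { ... }` (now deprecated)
    for _tick in reader.read() {
        // handle the event
    }
}
```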
2023-09-16 09:27:13 +00:00
Joseph
8eb6ccdd87
Remove useless single tuples and trailing commas (#9720)
# Objective

Title
2023-09-08 21:46:54 +00:00
Nicola Papale
ee3cc8ca86
Fix erroneous glam version (#9653)
# Objective

- Fix compilation issue with wrongly specified glam version
- Bevy uses `Vec2::INFINITY` and depends on glam `0.24` (equivalent to
`0.24.0`), yet `Vec2::INFINITY` was only introduced in version `0.24.1`

Context:
https://discord.com/channels/691052431525675048/692572690833473578/1146586570787397794

## Solution

- Bump glam version.
2023-08-31 12:55:17 +00:00
Mike
33fdc5f5db
Move schedule name into Schedule (#9600)
# Objective

- Move schedule name into `Schedule` to allow the schedule name to be
used for errors and tracing in Schedule methods
- Fixes #9510

## Solution

- Move the label onto `Schedule` and adjust APIs on `World` and `Schedule`
to not pass an explicit label where it makes sense to.
- Add the name to errors and tracing.
- `Schedule::new` now takes a label, so either add the label or use
`Schedule::default`, which uses a default label. `default` is mostly used
in doc examples and tests.

---

## Changelog

- Move the label onto `Schedule` to improve error messages and logging
for schedules.

## Migration Guide

`Schedule::new` and `App::add_schedule`
```rust
// old
let schedule = Schedule::new();
app.add_schedule(MyLabel, schedule);

// new
let schedule = Schedule::new(MyLabel);
app.add_schedule(schedule);
```

If you aren't using a label and are using the schedule struct directly,
you can use the default constructor.
```rust
// old
let schedule = Schedule::new();
schedule.run(world);

// new
let schedule = Schedule::default();
schedule.run(world);
```

`Schedules::insert`
```rust
// old
let schedule = Schedule::new();
schedules.insert(MyLabel, schedule);

// new
let schedule = Schedule::new(MyLabel);
schedules.insert(schedule);
```

`World::add_schedule`
```rust
// old
let schedule = Schedule::new();
world.add_schedule(MyLabel, schedule);

// new
let schedule = Schedule::new(MyLabel);
world.add_schedule(schedule);
```
2023-08-28 20:44:48 +00:00
jnhyatt
087a345579
Rename Bezier to CubicBezier for clarity (#9554)
# Objective

A Bezier curve is a curve defined by two or more control points. In the
simplest form, it's just a line. The (arguably) most common type of
Bezier curve is a cubic Bezier, defined by four control points. These
are often used in animation, etc. Bevy has a Bezier curve struct called
`Bezier`. However, this is technically a misnomer as it only represents
cubic Bezier curves.

## Solution

This PR changes the struct name to `CubicBezier` to more accurately
reflect the struct's usage. Since it's exposed in Bevy's prelude, it can
potentially collide with other `Bezier` implementations. While that
might instead be an argument for removing it from the prelude, there's
also something to be said for adding a more general `Bezier` into Bevy,
in which case we'd likely want to use the name `Bezier`. As a final
motivator, not only is the struct located in `cubic_splines.rs`, but there
are also several other spline-related structs that follow the
`CubicXxx` naming convention where applicable. For example,
`CubicSegment` represents a cubic Bezier curve (with coefficients
pre-baked).

---

## Migration Guide

- Change all `Bezier` references to `CubicBezier`
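
For example (a sketch assuming the post-rename constructor of this era; the control points are illustrative):

```rust
use bevy_math::cubic_splines::{CubicBezier, CubicGenerator};
use bevy_math::Vec2;

fn main() {
    let control_points = [[
        Vec2::ZERO,
        Vec2::new(0.25, 1.0),
        Vec2::new(0.75, 0.0),
        Vec2::ONE,
    ]];
    // Before the rename this was `Bezier::new(control_points)`.
    let curve = CubicBezier::new(control_points).to_curve();
    let _midpoint = curve.position(0.5);
}
```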
2023-08-28 17:37:42 +00:00
Nicola Papale
7b0809b532
Add reflect path parsing benchmark (#9364)
# Objective

We want to measure performance on path reflection parsing.

## Solution

Benchmark path-based reflection:
- Add a benchmark for `ParsedPath::parse`

It's fairly noisy, which is why I added the 3% threshold.

Ideally we would fix the noisiness, though. As far as I can tell, I'm
seeding the RNG correctly, so there shouldn't be much observable variance. Maybe
someone can help spot the issue.
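
For reference, the operation being benchmarked looks roughly like this (a sketch; the path string is illustrative):

```rust
use bevy_reflect::ParsedPath;

fn main() {
    // Parse a reflection path once up front so it can be applied repeatedly.
    let _path = ParsedPath::parse("foo.bar[2].baz").expect("valid reflect path");
}
```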
2023-08-25 14:30:26 +00:00
Mike
ac8f36743e
enable multithreading on benches (#9388)
# Objective

- Enable the new multithreading feature on benches
2023-08-11 22:08:36 +00:00
Joseph
ddbfa48711
Simplify parallel iteration methods (#8854)
# Objective

The `QueryParIter::for_each_mut` function is required when doing
parallel iteration with mutable queries.
This results in an unfortunate stutter:
`query.par_iter_mut().for_each_mut()` ('mut' is repeated).

## Solution

- Make `for_each` compatible with mutable queries, and deprecate
`for_each_mut`. In order to prevent `for_each` from being called
multiple times in parallel, we take ownership of the `QueryParIter`.

---

## Changelog

- `QueryParIter::for_each` is now compatible with mutable queries.
`for_each_mut` has been deprecated as it is now redundant.

## Migration Guide

The method `QueryParIter::for_each_mut` has been deprecated and is no
longer functional. Use `for_each` instead, which now supports mutable
queries.

```rust
// Before:
query.par_iter_mut().for_each_mut(|x| ...);

// After:
query.par_iter_mut().for_each(|x| ...);
```

The method `QueryParIter::for_each` now takes ownership of the
`QueryParIter`, rather than taking a shared reference.

```rust
// Before:
let par_iter = my_query.par_iter().batching_strategy(my_batching_strategy);
par_iter.for_each(|x| {
    // ...Do stuff with x...
    par_iter.for_each(|y| {
        // ...Do nested stuff with y...
    });
});

// After:
my_query.par_iter().batching_strategy(my_batching_strategy).for_each(|x| {
    // ...Do stuff with x...
    my_query.par_iter().batching_strategy(my_batching_strategy).for_each(|y| {
        // ...Do nested stuff with y...
    });
});
```
2023-07-23 11:09:24 +00:00
JoJoJet
db8d3651e0
Migrate the rest of the engine to UnsafeWorldCell (#8833)
# Objective

Follow-up to #6404 and #8292.

Mutating the world through a shared reference is surprising, and it
makes the meaning of `&World` unclear: sometimes it gives read-only
access to the entire world, and sometimes it gives interior mutable
access to only part of it.

This is an up-to-date version of #6972.

## Solution

Use `UnsafeWorldCell` for all interior mutability. Now, `&World`
*always* gives you read-only access to the entire world.

---

## Changelog

TODO - do we still care about changelogs?

## Migration Guide

Mutating any world data using `&World` is now considered unsound -- the
type `UnsafeWorldCell` must be used to achieve interior mutability. The
following methods now accept `UnsafeWorldCell` instead of `&World`:

- `QueryState`: `get_unchecked`, `iter_unchecked`,
`iter_combinations_unchecked`, `for_each_unchecked`,
`get_single_unchecked`, `get_single_unchecked_manual`.
- `SystemState`: `get_unchecked_manual`

```rust
let mut world = World::new();
let mut query = world.query::<&mut T>();

// Before:
let t1 = query.get_unchecked(&world, entity_1);
let t2 = query.get_unchecked(&world, entity_2);

// After:
let world_cell = world.as_unsafe_world_cell();
let t1 = query.get_unchecked(world_cell, entity_1);
let t2 = query.get_unchecked(world_cell, entity_2);
```

The methods `QueryState::validate_world` and
`SystemState::matches_world` now take a `WorldId` instead of `&World`:

```rust
// Before:
query_state.validate_world(&world);

// After:
query_state.validate_world(world.id());
```

The methods `QueryState::update_archetypes` and
`SystemState::update_archetypes` now take `UnsafeWorldCell` instead of
`&World`:

```rust
// Before:
query_state.update_archetypes(&world);

// After:
query_state.update_archetypes(world.as_unsafe_world_cell_readonly());
```
2023-06-15 01:31:56 +00:00
Natanael Mojica
f135535cd6
Rename Command's "write" method to "apply" (#8814)
# Objective

- Fixes #8811 .

## Solution

- Rename "write" method to "apply" in Command trait definition.
- Rename other implementations of command trait throughout bevy's code
base.

---

## Changelog

- Changed: `Command::write` has been changed to `Command::apply`
- Changed: `EntityCommand::write` has been changed to
`EntityCommand::apply`

## Migration Guide

- `Command::write` implementations need to be changed to implement
`Command::apply` instead. This is a mere name change, with no further
actions needed.
- `EntityCommand::write` implementations need to be changed to implement
`EntityCommand::apply` instead. This is a mere name change, with no
further actions needed.
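
A minimal before/after sketch of such an implementation (the command type and body are illustrative, not from Bevy):

```rust
use bevy_ecs::prelude::*;
use bevy_ecs::system::Command;

struct SpawnScoreboard;

// Before:
// impl Command for SpawnScoreboard {
//     fn write(self, world: &mut World) { world.spawn_empty(); }
// }

// After: only the method name changes.
impl Command for SpawnScoreboard {
    fn apply(self, world: &mut World) {
        world.spawn_empty();
    }
}

fn main() {}
```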

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2023-06-12 17:53:47 +00:00
CatThingy
89cbc78d3d
Require #[derive(Event)] on all Events (#7086)
# Objective

Be consistent with `Resource`s and `Component`s and have `Event` types
be more self-documenting.
Although not susceptible to accidentally using a function instead of a
value due to `Event`s only being initialized by their type, much of the
same reasoning for removing the blanket impl on `Resource` also applies
here.

* Not immediately obvious if a type is intended to be an event
* Prevent invisible conflicts if the same third-party or primitive types
are used as events
* Allows for further extensions (e.g. opt-in warning for missed events)

## Solution

Remove the blanket impl for the `Event` trait. Add a derive macro for
it.

---

## Changelog

- `Event` is no longer implemented for all applicable types. Add the
`#[derive(Event)]` macro for events.

## Migration Guide

* Add the `#[derive(Event)]` macro for events. Third-party types used as
events should be wrapped in a newtype.
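
As a sketch of the migration (the event types are illustrative):

```rust
use bevy_ecs::prelude::*;

// Events now require the derive:
#[derive(Event)]
struct LevelUp {
    new_level: u32,
}

// Third-party or primitive types used as events get wrapped in a newtype:
#[derive(Event)]
struct ScoreChanged(u32);

fn main() {}
```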
2023-06-06 14:44:32 +00:00
JoJoJet
85a918a8dd
Improve safety for the multi-threaded executor using UnsafeWorldCell (#8292)
# Objective

Fix #7833.

Safety comments in the multi-threaded executor don't really talk about
system world accesses, which makes it unclear if the code is actually
valid.

## Solution

Update the `System` trait to use `UnsafeWorldCell`. This type's API is
written in a way that makes it much easier to cleanly maintain safety
invariants. Use this type throughout the multi-threaded executor, with a
liberal use of safety comments.

---

## Migration Guide

The `System` trait now uses `UnsafeWorldCell` instead of `&World`. This
type provides a robust API for interior mutable world access.
- The method `run_unsafe` uses this type to manage world mutations
across multiple threads.
- The method `update_archetype_component_access` uses this type to
ensure that only world metadata can be used.

```rust
let mut system = IntoSystem::into_system(my_system);
system.initialize(&mut world);

// Before:
system.update_archetype_component_access(&world);
unsafe { system.run_unsafe(&world) }

// After:
system.update_archetype_component_access(world.as_unsafe_world_cell_readonly());
unsafe { system.run_unsafe(world.as_unsafe_world_cell()) }
```

---------

Co-authored-by: James Liu <contact@jamessliu.com>
2023-05-29 15:22:10 +00:00
François
0736195a1e
update syn, encase, glam and hexasphere (#8573)
# Objective

- Fixes #8282 
- Update `syn` to 2.0, `encase` to 0.6, `glam` to 0.24 and `hexasphere`
to 9.0


Blocked ~~on https://github.com/teoxoy/encase/pull/42~~ and ~~on
https://github.com/OptimisticPeach/hexasphere/pull/17~~

---------

Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: JoJoJet <21144246+JoJoJet@users.noreply.github.com>
2023-05-16 01:24:17 +00:00
Gino Valente
75130bd5ec
bevy_reflect: Better proxies (#6971)
# Objective

> This PR is based on discussion from #6601

The Dynamic types (e.g. `DynamicStruct`, `DynamicList`, etc.) act as
both:
1. Dynamic containers which may hold any arbitrary data
2. Proxy types which may represent any other type

Currently, the only way we can represent the proxy-ness of a Dynamic is
by giving it a name.

```rust
// This is just a dynamic container
let mut data = DynamicStruct::default();

// This is a "proxy"
data.set_name(std::any::type_name::<Foo>());
```

This type name is the only way we check that the given Dynamic is a
proxy of some other type. When we need to "assert the type" of a `dyn
Reflect`, we call `Reflect::type_name` on it. However, because we're
only using a string to denote the type, we run into a few gotchas and
limitations.

For example, hashing a Dynamic proxy may work differently than the type
it proxies:

```rust
#[derive(Reflect, Hash)]
#[reflect(Hash)]
struct Foo(i32);

let concrete = Foo(123);
let dynamic = concrete.clone_dynamic();

let concrete_hash = concrete.reflect_hash();
let dynamic_hash = dynamic.reflect_hash();

// The hashes are not equal because `concrete` uses its own `Hash` impl
// while `dynamic` uses a reflection-based hashing algorithm
assert_ne!(concrete_hash, dynamic_hash);
```

Because the Dynamic proxy only knows about the name of the type, it's
unaware of any other information about it. This means it also differs on
`Reflect::reflect_partial_eq`, and may include ignored or skipped fields
in places the concrete type wouldn't.

## Solution

Rather than having Dynamics pass along just the type name of proxied
types, we can instead have them pass around the `TypeInfo`.

Now all Dynamic types contain an `Option<&'static TypeInfo>` rather than
a `String`:

```diff
pub struct DynamicTupleStruct {
-    type_name: String,
+    represented_type: Option<&'static TypeInfo>,
    fields: Vec<Box<dyn Reflect>>,
}
```

By changing `Reflect::get_type_info` to
`Reflect::represented_type_info`, hopefully we make this behavior a
little clearer. And to account for `None` values on these dynamic types,
`Reflect::represented_type_info` now returns `Option<&'static
TypeInfo>`.

```rust
let mut dyn_tuple_struct = DynamicTupleStruct::default();

// Not proxying any specific type
assert!(dyn_tuple_struct.represented_type_info().is_none());

let type_info = <Foo as Typed>::type_info();
dyn_tuple_struct.set_represented_type(Some(type_info));
// Alternatively:
// let dyn_tuple_struct = foo.clone_dynamic();

// Now we're proxying `Foo`
assert!(dyn_tuple_struct.represented_type_info().is_some());
```

This means that we can have full access to all the static type
information for the proxied type. Future work would include
transitioning more static type information (trait impls, attributes,
etc.) over to the `TypeInfo` so it can actually be utilized by Dynamic
proxies.

### Alternatives & Rationale

> **Note** 
> These alternatives were written when this PR was first made using a
`Proxy` trait. This trait has since been removed.

<details>
<summary>View</summary>

#### Alternative: The `Proxy<T>` Approach

I had considered adding something like a `Proxy<T>` type where `T` would
be the Dynamic and would contain the proxied type information.

This was nice in that it allows us to explicitly determine whether
something is a proxy or not at a type level. `Proxy<DynamicStruct>`
proxies a struct. Makes sense.

The reason I didn't go with this approach is because (1) tuples, (2)
complexity, and (3) `PartialReflect`.

The `DynamicTuple` struct allows us to represent tuples at runtime. It
also allows us to do something you normally can't with tuples: add new
fields. Because of this, adding a field immediately invalidates the
proxy (e.g. our info for `(i32, i32)` doesn't apply to `(i32, i32,
NewField)`). By going with this PR's approach, we can just remove the
type info on `DynamicTuple` when that happens. However, with the
`Proxy<T>` approach, it becomes difficult to represent this behavior—
we'd have to completely control how we access data for `T` for each `T`.

Secondly, it introduces some added complexities (aside from the manual
impls for each `T`). Does `Proxy<T>` impl `Reflect`? Likely yes, if we
want to represent it as `dyn Reflect`. What `TypeInfo` do we give it?
How would we forward reflection methods to the inner type (remember, we
don't have specialization)? How do we separate this from Dynamic types?
And finally, how do all this in a way that's both logical and intuitive
for users?

Lastly, introducing a `Proxy` trait rather than a `Proxy<T>` struct is
actually more inline with the [Unique Reflect
RFC](https://github.com/bevyengine/rfcs/pull/56). In a way, the `Proxy`
trait is really one part of the `PartialReflect` trait introduced in
that RFC (it's technically not in that RFC but it fits well with it),
where the `PartialReflect` serves as a way for proxies to work _like_
concrete types without having full access to everything a concrete
`Reflect` type can do. This would help bridge the gap between the
current state of the crate and the implementation of that RFC.

All that said, this is still a viable solution. If the community
believes this is the better path forward, then we can do that instead.
These were just my reasons for not initially going with it in this PR.

#### Alternative: The Type Registry Approach

The `Proxy` trait is great and all, but how does it solve the original
problem? Well, it doesn't— yet!

The goal would be to start moving information from the derive macro and
its attributes to the generated `TypeInfo` since these are known
statically and shouldn't change. For example, adding `ignored: bool` to
`[Un]NamedField` or a list of impls.

However, there is another way of storing this information. This is, of
course, one of the uses of the `TypeRegistry`. If we're worried about
Dynamic proxies not aligning with their concrete counterparts, we could
move more type information to the registry and require its usage.

For example, we could replace `Reflect::reflect_hash(&self)` with
`Reflect::reflect_hash(&self, registry: &TypeRegistry)`.

That's not the _worst_ thing in the world, but it is an ergonomics loss.

Additionally, other attributes may have their own requirements, further
restricting what's possible without the registry. The `Reflect::apply`
method will require the registry as well now. Why? Well because the
`map_apply` function used for the `Reflect::apply` impls on `Map` types
depends on `Map::insert_boxed`, which (at least for `DynamicMap`)
requires `Reflect::reflect_hash`. The same would apply when adding
support for reflection-based diffing, which will require
`Reflect::reflect_partial_eq`.

Again, this is a totally viable alternative. I just chose not to go with
it for the reasons above. If we want to go with it, then we can close
this PR and we can pursue this alternative instead.

#### Downsides

Just to highlight a quick potential downside (likely needs more
investigation): retrieving the `TypeInfo` requires acquiring a lock on
the `GenericTypeInfoCell` used by the `Typed` impls for generic types
(non-generic types use a `OnceBox`, which should be faster). I am not sure
how much of a performance hit that is and will need to run some
benchmarks to compare against.

</details>

### Open Questions

1. Should we use `Cow<'static, TypeInfo>` instead? I think that might be
easier for modding? Perhaps, in that case, we need to update
`Typed::type_info` and friends as well?
2. Are the alternatives better than the approach this PR takes? Are
there other alternatives?

---

## Changelog

### Changed

- `Reflect::get_type_info` has been renamed to
`Reflect::represented_type_info`
- This method now returns `Option<&'static TypeInfo>` rather than just
`&'static TypeInfo`

### Added

- Added `Reflect::is_dynamic` method to indicate when a type is dynamic
- Added a `set_represented_type` method on all dynamic types

### Removed

- Removed `TypeInfo::Dynamic` (use `Reflect::is_dynamic` instead)
- Removed `Typed` impls for all dynamic types

## Migration Guide

- The Dynamic types no longer take a string type name. Instead, they
require a static reference to `TypeInfo`:

    ```rust
    #[derive(Reflect)]
    struct MyTupleStruct(f32, f32);
    
    let mut dyn_tuple_struct = DynamicTupleStruct::default();
    dyn_tuple_struct.insert(1.23_f32);
    dyn_tuple_struct.insert(3.21_f32);
    
    // BEFORE:
    let type_name = std::any::type_name::<MyTupleStruct>();
    dyn_tuple_struct.set_name(type_name);
    
    // AFTER:
    let type_info = <MyTupleStruct as Typed>::type_info();
    dyn_tuple_struct.set_represented_type(Some(type_info));
    ```

- `Reflect::get_type_info` has been renamed to
`Reflect::represented_type_info` and now also returns an
`Option<&'static TypeInfo>` (instead of just `&'static TypeInfo`):

    ```rust
    // BEFORE:
    let info: &'static TypeInfo = value.get_type_info();
    // AFTER:
    let info: &'static TypeInfo = value.represented_type_info().unwrap();
    ```

- `TypeInfo::Dynamic` and `DynamicInfo` have been removed. Use
`Reflect::is_dynamic` instead:
   
    ```rust
    // BEFORE:
    if matches!(value.get_type_info(), TypeInfo::Dynamic) {
      // ...
    }
    // AFTER:
    if value.is_dynamic() {
      // ...
    }
    ```

---------

Co-authored-by: radiish <cb.setho@gmail.com>
2023-04-26 12:17:46 +00:00
Gino Valente
c320f54b50
bevy_reflect: Add benches for Struct::clone_dynamic and Reflect::get_type_info (#7764)
# Objective

There aren't any reflection bench tests for `Struct::clone_dynamic` or
`Reflect::get_type_info`.

## Solution

Add benches for `Struct::clone_dynamic` and `Reflect::get_type_info`.
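
A minimal sketch of what such a bench can look like with Criterion (the struct and bench names are illustrative, not the actual benchmark code):

```rust
use bevy_reflect::{Reflect, Struct};
use criterion::{black_box, criterion_group, criterion_main, Criterion};

#[derive(Reflect, Default)]
struct Sample {
    a: u32,
    b: f32,
}

fn bench_clone_dynamic(c: &mut Criterion) {
    let value = Sample::default();
    c.bench_function("struct_clone_dynamic", |b| {
        b.iter(|| black_box(value.clone_dynamic()));
    });
}

criterion_group!(benches, bench_clone_dynamic);
criterion_main!(benches);
```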
2023-04-17 15:19:08 +00:00
James Liu
8e82c88131
Basic event benchmarks (#8251)
# Objective
Fix #7731. Add basic Event sending and iteration benchmarks to
bevy_ecs's benchmark suite.

## Solution
Add said benchmarks scaling from 100 to 50,000 events.

I'm not sure if I want to include randomization of the events going in;
the current implementation might be too easy for the compiler to
optimize.
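
The core of such a benchmark is roughly the following (a sketch assuming the current `Events<T>` API; the event type and workload are illustrative):

```rust
use bevy_ecs::prelude::*;

#[derive(Event)]
struct BenchEvent(u32);

fn send_and_read(count: u32) -> u32 {
    let mut events = Events::<BenchEvent>::default();
    for i in 0..count {
        events.send(BenchEvent(i));
    }
    // Iterate everything that was just sent and do trivial work with it.
    let mut reader = events.get_reader();
    reader.read(&events).map(|event| event.0).sum()
}
```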

---------

Co-authored-by: JoJoJet <21144246+JoJoJet@users.noreply.github.com>
2023-03-31 07:12:18 +00:00
Carter Anderson
aefe1f0739
Schedule-First: the new and improved add_systems (#8079)
Co-authored-by: Mike <mike.hsu@gmail.com>
2023-03-18 01:45:34 +00:00
JoJoJet
fd1af7c8b8
Replace multiple calls to add_system with add_systems (#8001) 2023-03-10 18:15:22 +00:00
Edgar Geier
cbbf8ac575 Update glam to 0.23 (#7883)
# Objective

- Update `glam` to the latest version.

## Solution

- Update `glam` to version `0.23`.

Since the breaking change in `glam` only affects the `scalar-math` feature, this should cause no issues.
2023-03-04 11:42:27 +00:00
Aevyrie
2ea0061018 Add generic cubic splines to bevy_math (#7683)
# Objective

- Make cubic splines more flexible and more performant
- Remove the existing spline implementation that is generic over many degrees
  - This is a potential performance footgun and adds type complexity for negligible gain.
- Add implementations of:
  - Bezier splines
  - Cardinal splines (inc. Catmull-Rom)
  - B-Splines
  - Hermite splines

https://user-images.githubusercontent.com/2632925/221780519-495d1b20-ab46-45b4-92a3-32c46da66034.mp4


https://user-images.githubusercontent.com/2632925/221780524-2b154016-699f-404f-9c18-02092f589b04.mp4


https://user-images.githubusercontent.com/2632925/221780525-f934f99d-9ad4-4999-bae2-75d675f5644f.mp4


## Solution

- Implements the concept that splines are curve generators (e.g. https://youtu.be/jvPPXbo87ds?t=3488) via the `CubicGenerator` trait.
- Common splines are bespoke data types that implement this trait. This gives us flexibility to add custom spline-specific methods on these types, while ultimately all generating a `CubicCurve`.
- All splines generate `CubicCurve`s, which are a chain of precomputed polynomial coefficients. This means that all splines have the same evaluation cost, as the calculations for determining position, velocity, and acceleration are all identical. In addition, `CubicCurve`s are simply a list of `CubicSegment`s, which are evaluated from t=0 to t=1. This also means cubic splines of different type can be chained together, as ultimately they all are simply a collection of `CubicSegment`s.
- Because easing is an operation on a single segment of a Bezier curve, we can simply implement easing on `Beziers` that use the `Vec2` type for points. Higher level crates such as `bevy_ui` can wrap this in a more ergonomic interface as needed.
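
A small sketch of this generator pattern (assuming the `bevy_math::cubic_splines` API of this era; the control points are illustrative):

```rust
use bevy_math::cubic_splines::{CubicCardinalSpline, CubicGenerator};
use bevy_math::Vec2;

fn main() {
    let points = [
        Vec2::new(-1.0, -2.0),
        Vec2::new(0.0, 1.0),
        Vec2::new(2.0, 3.0),
        Vec2::new(4.0, 0.0),
    ];
    // The spline type is only a generator; sampling happens on the
    // precomputed `CubicCurve` of chained `CubicSegment`s it produces.
    let curve = CubicCardinalSpline::new(0.5, points).to_curve();
    let position = curve.position(0.5);
    let velocity = curve.velocity(0.5);
    println!("{position} {velocity}");
}
```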

### Performance
Measured on a desktop i5 8600K (6-year-old CPU):
- easing: 2.7x faster (19ns)
- cubic vec2 position sample: 1.5x faster (1.8ns)
- cubic vec3 position sample: 1.5x faster (2.6ns)
- cubic vec3a position sample: 1.9x faster (1.4ns)

On a laptop i7 11800H:
- easing: 16ns
- cubic vec2 position sample: 1.6ns
- cubic vec3 position sample: 2.3ns
- cubic vec3a position sample: 1.2ns

---

## Changelog

- Added a generic cubic curve trait, and implementation for Cardinal splines (including Catmull-Rom), B-Splines, Beziers, and Hermite Splines. 2D cubic curve segments also implement easing functionality for animation.
2023-03-03 22:06:42 +00:00
JoJoJet
9733613c07 Add marker traits to distinguish base sets from regular system sets (#7863)
# Objective

Base sets, added in #7466  are a special type of system set. Systems can only be added to base sets via `in_base_set`, while non-base sets can only be added via `in_set`. Unfortunately this is currently guarded by a runtime panic, which presents an unfortunate toe-stub when the wrong method is used. The delayed response between writing code and encountering the error (possibly hours) makes the distinction between base sets and other sets much more difficult to learn.

## Solution

Add the marker traits `BaseSystemSet` and `FreeSystemSet`. `in_base_set` and `in_set` now respectively accept these traits, which moves the runtime panic to a compile time error.
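
A short sketch of the resulting usage (assuming the 0.10-era scheduling API; `PhysicsSet` and the system are illustrative):

```rust
use bevy::prelude::*;

#[derive(SystemSet, Debug, Clone, PartialEq, Eq, Hash)]
struct PhysicsSet;

fn movement() {}

fn main() {
    App::new()
        // Free sets go through `in_set`, base sets through `in_base_set`;
        // mixing the two up is now a compile-time error instead of a runtime panic.
        .add_system(movement.in_set(PhysicsSet).in_base_set(CoreSet::PostUpdate))
        .run();
}
```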

---

## Changelog

+ Added the marker trait `BaseSystemSet`, which is distinguished from a `FreeSystemSet`. These are both subtraits of `SystemSet`.

## Migration Guide

None if merged with 0.10
2023-03-02 13:22:58 +00:00
Aevyrie
2a598d3e5a Add Beziers to bevy_math (#7653)
# Objective

- Adds foundational math for Bezier curves, useful for UI/2D/3D animation and smooth paths.

https://user-images.githubusercontent.com/2632925/218883143-e138f994-1795-40da-8c59-21d779666991.mp4

## Solution

- Adds the generic `Bezier` type, and a `Point` trait. The `Point` trait allows us to use control points of any dimension, as long as they support vector math. I've implemented it for `f32`(1D), `Vec2`(2D), and `Vec3`/`Vec3A`(3D).
- Adds `CubicBezierEasing` on top of `Bezier` with the addition of an implementation of cubic Bezier easing, which is a foundational tool for UI animation.
  - This involves solving for $t$ in the parametric Bezier function $B(t)$ using the Newton-Raphson method to find a value with error $\leq$ 1e-7, capped at 8 iterations.
- Added type aliases for common Bezier curves: `CubicBezier2d`, `CubicBezier3d`, `QuadraticBezier2d`, and `QuadraticBezier3d`. These types use `Vec3A` to represent control points, as this was found to have an 80-90% speedup over using `Vec3`.
- Benchmarking shows quadratic/cubic Bezier evaluations $B(t)$ take \~1.8/2.4ns respectively. Easing, which requires an iterative solve takes \~50ns for cubic Beziers. 
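
To make the easing step concrete, here is a small self-contained sketch of the idea (not the actual bevy_math code): solve $x(t) = x_{\text{target}}$ on the curve's x component with Newton-Raphson, capped at 8 iterations, then return $y(t)$.

```rust
/// Cubic Bezier with scalar control points a, b, c, d, evaluated at t.
fn bezier(a: f32, b: f32, c: f32, d: f32, t: f32) -> f32 {
    let u = 1.0 - t;
    u * u * u * a + 3.0 * u * u * t * b + 3.0 * u * t * t * c + t * t * t * d
}

/// Derivative of the cubic Bezier with respect to t.
fn bezier_dt(a: f32, b: f32, c: f32, d: f32, t: f32) -> f32 {
    let u = 1.0 - t;
    3.0 * u * u * (b - a) + 6.0 * u * t * (c - b) + 3.0 * t * t * (d - c)
}

/// Easing with control points (x1, y1) and (x2, y2); endpoints are fixed at
/// (0, 0) and (1, 1), as in CSS-style cubic-bezier easing.
fn ease(x1: f32, y1: f32, x2: f32, y2: f32, x_target: f32) -> f32 {
    let mut t = x_target; // reasonable initial guess for a monotonic x(t)
    for _ in 0..8 {
        let error = bezier(0.0, x1, x2, 1.0, t) - x_target;
        if error.abs() <= 1e-7 {
            break;
        }
        // Newton-Raphson step; guard against a near-zero derivative.
        t -= error / bezier_dt(0.0, x1, x2, 1.0, t).max(1e-6);
    }
    bezier(0.0, y1, y2, 1.0, t)
}

fn main() {
    // A CSS "ease"-like curve evaluated at x = 0.5.
    println!("{}", ease(0.25, 0.1, 0.25, 1.0, 0.5));
}
```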

---

## Changelog

- Added `CubicBezier2d`, `CubicBezier3d`, `QuadraticBezier2d`, and `QuadraticBezier3d` types with methods for sampling position, velocity, and acceleration. The generic `Bezier` type is also available, and generic over any degree of Bezier curve.
- Added `CubicBezierEasing`, with additional methods to allow for smooth easing animations.
2023-02-20 18:34:52 +00:00
Niklas Eicker
0bce78439b Cleanup system sets called labels (#7678)
# Objective

We have a few old system labels that are now system sets but are still named or documented as labels. Documentation also still mentioned system labels in some places.


## Solution

- Clean up naming and documentation regarding system sets

## Migration Guide

`PrepareAssetLabel` is now called `PrepareAssetSet`
2023-02-14 21:46:07 +00:00
JoJoJet
f7fbfaf9c7 Rename an outdated benchmark from "run criteria" to "run condition" (#7645)
# Objective

The benchmark for run conditions still uses the outdated terminology "run criteria".

## Solution

Update the name.
2023-02-12 19:15:44 +00:00
张林伟
aa4170d9a4 Rename schedule v3 to schedule (#7519)
# Objective

- Follow up of https://github.com/bevyengine/bevy/pull/7267

## Solution

- Rename schedule_v3 to schedule
- Suppress "module inception" lint
2023-02-06 18:44:40 +00:00
Alice Cecile
206c7ce219 Migrate engine to Schedule v3 (#7267)
Huge thanks to @maniwani, @devil-ira, @hymm, @cart, @superdump and @jakobhellermann for the help with this PR.

# Objective

- Followup #6587.
- Minimal integration for the Stageless Scheduling RFC: https://github.com/bevyengine/rfcs/pull/45

## Solution

- [x]  Remove old scheduling module
- [x] Migrate new methods to no longer use extension methods
- [x] Fix compiler errors
- [x] Fix benchmarks
- [x] Fix examples
- [x] Fix docs
- [x] Fix tests

## Changelog

### Added

- a large number of methods on `App` to work with schedules ergonomically
- the `CoreSchedule` enum
- `App::add_extract_system` via the `RenderingAppExtension` trait extension method
- the private `prepare_view_uniforms` system now has a public system set for scheduling purposes, called `ViewSet::PrepareUniforms`

### Removed

- stages, and all code that mentions stages
- states have been dramatically simplified, and no longer use a stack
- `RunCriteriaLabel`
- `AsSystemLabel` trait
- `on_hierarchy_reports_enabled` run criteria (now just uses an ad hoc resource checking run condition)
- systems in `RenderSet/Stage::Extract` no longer warn when they do not read data from the main world
- `RunCriteriaLabel`
- `transform_propagate_system_set`: this was a nonstandard pattern that didn't actually provide enough control. The systems are already `pub`: the docs have been updated to ensure that the third-party usage is clear.

### Changed

- `System::default_labels` is now `System::default_system_sets`.
- `App::add_default_labels` is now `App::add_default_sets`
- `CoreStage` and `StartupStage` enums are now `CoreSet` and `StartupSet`
- `App::add_system_set` was renamed to `App::add_systems`
- The `StartupSchedule` label is now defined as part of the `CoreSchedule` enum
-  `.label(SystemLabel)` is now referred to as `.in_set(SystemSet)`
- `SystemLabel` trait was replaced by `SystemSet`
- `SystemTypeIdLabel<T>` was replaced by `SystemSetType<T>`
- The `ReportHierarchyIssue` resource now has a public constructor (`new`), and implements `PartialEq`
- Fixed time steps now use a schedule (`CoreSchedule::FixedTimeStep`) rather than a run criteria.
- Adding rendering extraction systems now panics rather than silently failing if no subapp with the `RenderApp` label is found.
- the `calculate_bounds` system, with the `CalculateBounds` label, is now in `CoreSet::Update`, rather than in `CoreSet::PostUpdate` before commands are applied. 
- `SceneSpawnerSystem` now runs under `CoreSet::Update`, rather than `CoreStage::PreUpdate.at_end()`.
- `bevy_pbr::add_clusters` is no longer an exclusive system
- the top level `bevy_ecs::schedule` module was replaced with `bevy_ecs::scheduling`
- `tick_global_task_pools_on_main_thread` is no longer run as an exclusive system. Instead, it has been replaced by `tick_global_task_pools`, which uses a `NonSend` resource to force running on the main thread.

## Migration Guide

- Calls to `.label(MyLabel)` should be replaced with `.in_set(MySet)`
- Stages have been removed. Replace these with system sets, and then add command flushes using the `apply_system_buffers` exclusive system where needed.
- The `CoreStage`, `StartupStage`, `RenderStage` and `AssetStage` enums have been replaced with `CoreSet`, `StartupSet`, `RenderSet` and `AssetSet`. The same scheduling guarantees have been preserved.
  - Systems are no longer added to `CoreSet::Update` by default. Add systems manually if this behavior is needed, although you should consider adding your game logic systems to `CoreSchedule::FixedTimestep` instead for more reliable framerate-independent behavior.
  - Similarly, startup systems are no longer part of `StartupSet::Startup` by default. In most cases, this won't matter to you.
  - For example, `add_system_to_stage(CoreStage::PostUpdate, my_system)` should be replaced with 
  - `add_system(my_system.in_set(CoreSet::PostUpdate))`
- When testing systems or otherwise running them in a headless fashion, simply construct and run a schedule using `Schedule::new()` and `World::run_schedule` rather than constructing stages (see the sketch after this list)
- Run criteria have been renamed to run conditions. These can now be combined with each other and with states.
- Looping run criteria and state stacks have been removed. Use an exclusive system that runs a schedule if you need this level of control over system control flow.
- For app-level control flow over which schedules get run when (such as for rollback networking), create your own schedule and insert it under the `CoreSchedule::Outer` label.
- Fixed timesteps are now evaluated in a schedule, rather than controlled via run criteria. The `run_fixed_timestep` system runs this schedule between `CoreSet::First` and `CoreSet::PreUpdate` by default.
- Command flush points introduced by `AssetStage` have been removed. If you were relying on these, add them back manually.
- Adding extract systems is now typically done directly on the main app. Make sure the `RenderingAppExtension` trait is in scope, then call `app.add_extract_system(my_system)`.
- the `calculate_bounds` system, with the `CalculateBounds` label, is now in `CoreSet::Update`, rather than in `CoreSet::PostUpdate` before commands are applied. You may need to order your movement systems to occur before this system in order to avoid system order ambiguities in culling behavior.
- the `RenderLabel` `AppLabel` was renamed to `RenderApp` for clarity
- `App::add_state` now takes 0 arguments: the starting state is set based on the `Default` impl.
- Instead of creating `SystemSet` containers for systems that run in stages, simply use `.on_enter::<State::Variant>()` or its `on_exit` or `on_update` siblings.
- `SystemLabel` derives should be replaced with `SystemSet`. You will also need to add the `Debug`, `PartialEq`, `Eq`, and `Hash` traits to satisfy the new trait bounds.
- `with_run_criteria` has been renamed to `run_if`. Run criteria have been renamed to run conditions for clarity, and should now simply return a bool.
- States have been dramatically simplified: there is no longer a "state stack". To queue a transition to the next state, call `NextState::set`
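
For the headless-testing migration note above, a minimal sketch (assuming the stageless `Schedule` API of this era; the resource and system are illustrative):

```rust
use bevy_ecs::prelude::*;

#[derive(Resource, Default)]
struct Counter(u32);

fn count_up(mut counter: ResMut<Counter>) {
    counter.0 += 1;
}

fn main() {
    let mut world = World::new();
    world.init_resource::<Counter>();

    // Build and run a schedule directly instead of constructing stages.
    let mut schedule = Schedule::new();
    schedule.add_systems(count_up);
    schedule.run(&mut world);

    assert_eq!(world.resource::<Counter>().0, 1);
}
```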

## TODO

- [x] remove dead methods on App and World
- [x] add `App::add_system_to_schedule` and `App::add_systems_to_schedule`
- [x] avoid adding the default system set at inappropriate times
- [x] remove any accidental cycles in the default plugins schedule
- [x] migrate benchmarks
- [x] expose explicit labels for the built-in command flush points
- [x] migrate engine code
- [x] remove all mentions of stages from the docs
- [x] verify docs for States
- [x] fix uses of exclusive systems that use .end / .at_start / .before_commands
- [x] migrate RenderStage and AssetStage
- [x] migrate examples
- [x] ensure that transform propagation is exported in a sufficiently public way (the systems are already pub)
- [x] ensure that on_enter schedules are run at least once before the main app
- [x] re-enable opt-in to execution order ambiguities
- [x] revert change to `update_bounds` to ensure it runs in `PostUpdate`
- [x] test all examples
  - [x] unbreak directional lights
  - [x] unbreak shadows (see 3d_scene, 3d_shape, lighting, transparency_3d examples)
  - [x] game menu example shows loading screen and menu simultaneously
  - [x] display settings menu is a blank screen
  - [x] `without_winit` example panics
- [x] ensure all tests pass
  - [x] SubApp doc test fails
  - [x] runs_spawn_local tasks fails
  - [x] [Fix panic_when_hierachy_cycle test hanging](https://github.com/alice-i-cecile/bevy/pull/120)

## Points of Difficulty and Controversy

**Reviewers, please give feedback on these and look closely**

1.  Default sets, from the RFC, have been removed. These added a tremendous amount of implicit complexity and resulted in hard-to-debug scheduling errors. They're going to be tackled in the form of "base sets" by @cart in a followup.
2. The outer schedule controls which schedule is run when `App::update` is called.
3. I implemented `Label` for `Box<dyn Label>` for our label types. This enables us to store schedule labels in concrete form, and then later run them. I ran into the same set of problems when working with one-shot systems. We've previously investigated this pattern in depth, and it does not appear to lead to extra indirection with nested boxes.
4. `SubApp::update` simply runs the default schedule once. This sucks, but this whole API is incomplete and this was the minimal changeset.
5. `time_system` and `tick_global_task_pools_on_main_thread` no longer use exclusive systems to attempt to force scheduling order
6. Implementation strategy for fixed timesteps
7. `AssetStage` was migrated to `AssetSet` without reintroducing command flush points. These did not appear to be used, and it's nice to remove these bottlenecks.
8. Migration of `bevy_render/lib.rs` and pipelined rendering. The logic here is unusually tricky, as we have complex scheduling requirements.

## Future Work (ideally before 0.10)

- Rename schedule_v3 module to schedule or scheduling
- Add a derive macro to states, and likely a `EnumIter` trait of some form
- Figure out what exactly to do with the "systems added should basically work by default" problem
- Improve ergonomics for working with fixed timesteps and states
- Polish FixedTime API to match Time
- Rebase and merge #7415
- Resolve all internal ambiguities (blocked on better tools, especially #7442)
- Add "base sets" to replace the removed default sets.
2023-02-06 02:04:50 +00:00
James Liu
dfea88c64d Basic adaptive batching for parallel query iteration (#4777)
# Objective
Fixes #3184. Fixes #6640. Fixes #4798. Using `Query::par_for_each(_mut)` currently requires a `batch_size` parameter, which affects how it chunks up large archetypes and tables into smaller chunks to run in parallel. Tuning this value is difficult, as the performance characteristics entirely depends on the state of the `World` it's being run on. Typically, users will just use a flat constant and just tune it by hand until it performs well in some benchmarks. However, this is both error prone and risks overfitting the tuning on that benchmark.

This PR proposes a naive automatic batch-size computation based on the current state of the `World`.

## Background
`Query::par_for_each(_mut)` schedules a new Task for every archetype or table that it matches. Archetypes/tables larger than the batch size are chunked into smaller tasks. Assuming every entity matched by the query has an identical workload, this makes the worst case scenario involve using a batch size equal to the size of the largest matched archetype or table. Conversely, a batch size of `max {archetype, table} size / thread count * COUNT_PER_THREAD` is likely the sweet spot where the overhead of scheduling tasks is minimized, at least without grouping small archetypes/tables together.

There is also likely a strict minimum batch size below which the overhead of scheduling these tasks is heavier than running the entire thing single-threaded.
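
Put as rough, illustrative code, the heuristic being aimed for is something like the following; the constant name stands in for `COUNT_PER_THREAD`, and none of this is the actual bevy_ecs source:

```rust
/// One reading of the heuristic above: split the largest matched
/// archetype/table into a few batches per thread, and fall back to serial
/// iteration when parallelism cannot help.
fn adaptive_batch_size(largest_matched_len: usize, thread_count: usize) -> usize {
    const BATCHES_PER_THREAD: usize = 3; // illustrative stand-in for COUNT_PER_THREAD

    if thread_count <= 1 || largest_matched_len == 0 {
        // Defer to plain `for_each`: a single batch covers everything.
        return largest_matched_len.max(1);
    }
    (largest_matched_len / (thread_count * BATCHES_PER_THREAD)).max(1)
}
```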

## Solution

- [x] Remove the `batch_size` from `Query(State)::par_for_each`  and friends.
- [x] Add a check to compute `batch_size = max {archeytpe/table} size / thread count  * COUNT_PER_THREAD`
- [x] ~~Panic if thread count is 0.~~ Defer to `for_each` if the thread count is 1 or less.
- [x] Early return if there is no matched table/archetype. 
- [x] Add override option for users have queries that strongly violate the initial assumption that all iterated entities have an equal workload.

---

## Changelog
Changed: `Query::par_for_each(_mut)` has been changed to `Query::par_iter(_mut)` and will now automatically try to produce a batch size for callers based on the current `World` state.

## Migration Guide
The `batch_size` parameter for `Query(State)::par_for_each(_mut)` has been removed. These calls will automatically compute a batch size for you. Remove these parameters from all calls to these functions.

Before:
```rust
fn parallel_system(query: Query<&MyComponent>) {
   query.par_for_each(32, |comp| {
        ...
   });
}
```

After:

```rust
fn parallel_system(query: Query<&MyComponent>) {
   query.par_iter().for_each(|comp| {
        ...
   });
}
```

Co-authored-by: Arnav Choubey <56453634+x-52@users.noreply.github.com>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Corey Farwell <coreyf@rwell.org>
Co-authored-by: Aevyrie <aevyrie@gmail.com>
2023-01-20 08:47:20 +00:00
张林伟
bb79903938 Fix clippy issue for benches crate (#6806)
# Objective

- https://github.com/bevyengine/bevy/pull/3505 was marked `S-Adopt-Me`; this PR continues that work.

## Solution

- Run `cargo clippy --workspace --all-targets --all-features -- -Aclippy::type_complexity -Wclippy::doc_markdown -Wclippy::redundant_else -Wclippy::match_same_arms -Wclippy::semicolon_if_nothing_returned -Wclippy::explicit_iter_loop -Wclippy::map_flatten -Dwarnings` under the benches dir.
- Fix issues according to the suggestions.
2023-01-11 09:32:07 +00:00
François
0aab699a84 Update glam 0.22, hexasphere 8.0, encase 0.4 (#6427)
# Objective

- Update glam to 0.22, hexasphere to 8.0, encase to 0.4

## Solution

- Update glam to 0.22, hexasphere to 8.0, encase to 0.4
- ~~waiting on https://github.com/teoxoy/encase/pull/17 and https://github.com/OptimisticPeach/hexasphere/pull/13~~
2022-11-07 19:44:13 +00:00
Thierry Berger
9dd019bf86 Change Detection Benchmarks (#4972)
# Objective

Fixes #4883.

## Solution

- Add benchmarks for `Changed` and `Added`

Disclaimer: I'm not that familiar with benchmarking.
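
A small sketch of the kind of query these benchmarks exercise (assuming the current bevy_ecs API; the component is illustrative, not the actual benchmark code):

```rust
use bevy_ecs::prelude::*;

#[derive(Component)]
struct Data(f32);

// Touch every entity whose `Data` changed since this system last ran; an
// `Added` variant would use the `Added<Data>` filter instead.
fn read_changed(query: Query<&Data, Changed<Data>>) {
    for data in &query {
        std::hint::black_box(data.0);
    }
}
```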
2022-11-04 17:53:54 +00:00
James Liu
ec8c8fbc8a Remove unnecessary branches/panics from Query accesses (#6461)
# Objective
Supersedes #6452. Upon inspection of the [generated assembly](https://gist.github.com/james7132/c2740c6941b80d7912f1e8888e223cbb#file-original-s) of a [simple Bevy binary](https://gist.github.com/james7132/c2740c6941b80d7912f1e8888e223cbb#file-source-rs) compiled with `cargo rustc --release -- --emit asm`, it's apparent that there are multiple unnecessary branches in the generated assembly:

```assembly
.LBB5_5:
	cmpq	%r10, %r11
	je	.LBB5_15
	movq	(%r11), %rcx
	movq	328(%r15), %rdx
	cmpq	%rdx, %rcx
	jae	.LBB5_14
	movq	312(%r15), %rdi
	leaq	(%rcx,%rcx,2), %rcx
	shlq	$5, %rcx
	movq	336(%r12), %rdx
	movq	64(%rdi,%rcx), %rax
	cmpq	%rdx, %rax
	jbe	.LBB5_4
	leaq	(%rdi,%rcx), %rsi
	movq	48(%rsi), %rbp
	shlq	$4, %rdx
	cmpq	$0, (%rbp,%rdx)
	je	.LBB5_4
	movq	344(%r12), %rbx
	cmpq	%rbx, %rax
	jbe	.LBB5_4
	shlq	$4, %rbx
	cmpq	$0, (%rbp,%rbx)
	je	.LBB5_4
	addq	$8, %r11
	movq	88(%rdi,%rcx), %rcx
	testq	%rcx, %rcx
	je	.LBB5_5
	movq	(%rsi), %rax
	movq	8(%rbp,%rdx), %rdx
	leaq	(%rdx,%rdx,4), %rdi
	shlq	$4, %rdi
	movq	32(%rax,%rdi), %rdx
	movq	56(%rax,%rdi), %r8
	movq	8(%rbp,%rbx), %rbp
	leaq	(%rbp,%rbp,4), %rbp
	shlq	$4, %rbp
	movq	32(%rax,%rbp), %r9
	xorl	%ebp, %ebp
	jmp	.LBB5_13
	.p2align	4, 0x90
```

Almost every one of the instructions starting with `j` is a potential branch, which can significantly slow down accesses. Of these, two labels are both common and never used:

```asm
.LBB5_14:
	leaq	__unnamed_2(%rip), %r8
	callq	_ZN4core9panicking18panic_bounds_check17h70367088e72af65aE
	ud2
.LBB5_4:
	callq	_ZN8bevy_ecs5query25debug_checked_unreachable17h0855ff520ceaea77E
	ud2
	.seh_endproc
```

These correspond to calls to the panic subroutine for out-of-bounds indexing into `Tables`, and to `debug_checked_unreachable`. Both of these should be inlined and optimized out, but are not.

## Solution
Make `debug_checked_unreachable` a macro to forcibly inline either `unreachable!()` in debug builds, and `std::hint::unreachable_unchecked()` in release mode. Replace the `Tables` and `Archetype` index access with `get(id).unwrap_or_else(|| debug_checked_unreachable!())` to assume that the table or archetype provided exists.

This has no external breaking change of any kind.
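
A sketch of the macro pattern being described (not the exact bevy_ecs source):

```rust
// Panic loudly in debug builds; tell the optimizer the branch is impossible
// in release builds. The block evaluates to `!`, so it can be used anywhere
// a value is expected, e.g. in `unwrap_or_else`.
macro_rules! debug_checked_unreachable {
    () => {{
        if cfg!(debug_assertions) {
            unreachable!()
        } else {
            // SAFETY: callers must guarantee this branch is truly unreachable.
            unsafe { std::hint::unreachable_unchecked() }
        }
    }};
}

fn get_or_unreachable(values: &[u32], index: usize) -> u32 {
    // Per the description: assume the lookup succeeds instead of emitting a
    // bounds-check panic path.
    *values.get(index).unwrap_or_else(|| debug_checked_unreachable!())
}

fn main() {
    println!("{}", get_or_unreachable(&[1, 2, 3], 1));
}
```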

The equivalent section of code with these changes removes most of the conditional jump instructions:

```asm
.LBB5_5:
	movss	(%rbx,%rbp,4), %xmm0
	movl	%r14d, 4(%r8,%rbp,8)
	addss	(%rdi,%rbp,4), %xmm0
	movss	%xmm0, (%rdi,%rbp,4)
	incq	%rbp
.LBB5_1:
	cmpq	%rdx, %rbp
	jne	.LBB5_5
	.p2align	4, 0x90
.LBB5_2:
	cmpq	%rcx, %rax
	je	.LBB5_6
	movq	(%rax), %rdx
	addq	$8, %rax
	movq	312(%rsi), %rbp
	leaq	(%rdx,%rdx,2), %rbx
	shlq	$5, %rbx
	movq	88(%rbp,%rbx), %rdx
	testq	%rdx, %rdx
	je	.LBB5_2
	leaq	(%rbx,%rbp), %r8
	movq	336(%r15), %rdi
	movq	344(%r15), %r9
	movq	48(%rbp,%rbx), %r10
	shlq	$4, %rdi
	movq	(%r8), %rbx
	movq	8(%r10,%rdi), %rdi
	leaq	(%rdi,%rdi,4), %rbp
	shlq	$4, %rbp
	movq	32(%rbx,%rbp), %rdi
	movq	56(%rbx,%rbp), %r8
	shlq	$4, %r9
	movq	8(%r10,%r9), %rbp
	leaq	(%rbp,%rbp,4), %rbp
	shlq	$4, %rbp
	movq	32(%rbx,%rbp), %rbx
	xorl	%ebp, %ebp
	jmp	.LBB5_5
.LBB5_6:
	addq	$40, %rsp
	popq	%rbx
	popq	%rbp
	popq	%rdi
	popq	%rsi
	popq	%r14
	popq	%r15
	retq
	.seh_endproc

```

## Performance

Microbenchmarks results:

<details>

```
group                                                    main                                     no-panic-query
-----                                                    ----                                     --------------
busy_systems/01x_entities_03_systems                     1.20     42.4±2.66µs        ? ?/sec      1.00     35.3±1.68µs        ? ?/sec
busy_systems/01x_entities_06_systems                     1.32     83.8±3.50µs        ? ?/sec      1.00     63.6±1.72µs        ? ?/sec
busy_systems/01x_entities_09_systems                     1.15    113.3±8.90µs        ? ?/sec      1.00     98.2±6.15µs        ? ?/sec
busy_systems/01x_entities_12_systems                     1.27   160.8±32.44µs        ? ?/sec      1.00    126.6±4.70µs        ? ?/sec
busy_systems/01x_entities_15_systems                     1.12    179.6±3.71µs        ? ?/sec      1.00   160.3±11.03µs        ? ?/sec
busy_systems/02x_entities_03_systems                     1.18     76.8±3.14µs        ? ?/sec      1.00     65.2±3.17µs        ? ?/sec
busy_systems/02x_entities_06_systems                     1.16    144.6±6.10µs        ? ?/sec      1.00    124.5±5.14µs        ? ?/sec
busy_systems/02x_entities_09_systems                     1.19    215.3±9.18µs        ? ?/sec      1.00    181.5±5.67µs        ? ?/sec
busy_systems/02x_entities_12_systems                     1.20    266.7±8.33µs        ? ?/sec      1.00    222.0±9.53µs        ? ?/sec
busy_systems/02x_entities_15_systems                     1.23   338.8±10.53µs        ? ?/sec      1.00    276.3±6.94µs        ? ?/sec
busy_systems/03x_entities_03_systems                     1.43    113.5±5.06µs        ? ?/sec      1.00     79.6±1.49µs        ? ?/sec
busy_systems/03x_entities_06_systems                     1.38   217.3±12.67µs        ? ?/sec      1.00    157.5±3.07µs        ? ?/sec
busy_systems/03x_entities_09_systems                     1.23   308.8±24.75µs        ? ?/sec      1.00    251.6±8.93µs        ? ?/sec
busy_systems/03x_entities_12_systems                     1.05   347.7±12.43µs        ? ?/sec      1.00   330.6±11.43µs        ? ?/sec
busy_systems/03x_entities_15_systems                     1.13   455.5±13.88µs        ? ?/sec      1.00   401.7±17.29µs        ? ?/sec
busy_systems/04x_entities_03_systems                     1.24    144.7±5.89µs        ? ?/sec      1.00    116.9±6.29µs        ? ?/sec
busy_systems/04x_entities_06_systems                     1.24   282.8±21.40µs        ? ?/sec      1.00   228.6±21.31µs        ? ?/sec
busy_systems/04x_entities_09_systems                     1.35   431.8±14.10µs        ? ?/sec      1.00    319.6±9.83µs        ? ?/sec
busy_systems/04x_entities_12_systems                     1.16   493.8±22.87µs        ? ?/sec      1.00   424.9±15.24µs        ? ?/sec
busy_systems/04x_entities_15_systems                     1.10   587.5±23.25µs        ? ?/sec      1.00   531.7±16.32µs        ? ?/sec
busy_systems/05x_entities_03_systems                     1.14    148.2±9.61µs        ? ?/sec      1.00    129.5±4.32µs        ? ?/sec
busy_systems/05x_entities_06_systems                     1.31   359.7±17.46µs        ? ?/sec      1.00   273.6±10.55µs        ? ?/sec
busy_systems/05x_entities_09_systems                     1.22   473.5±23.11µs        ? ?/sec      1.00   389.3±13.62µs        ? ?/sec
busy_systems/05x_entities_12_systems                     1.05   562.9±20.76µs        ? ?/sec      1.00   536.5±24.35µs        ? ?/sec
busy_systems/05x_entities_15_systems                     1.23   818.5±28.70µs        ? ?/sec      1.00   666.6±45.87µs        ? ?/sec
contrived/01x_entities_03_systems                        1.27     27.5±0.49µs        ? ?/sec      1.00     21.6±1.71µs        ? ?/sec
contrived/01x_entities_06_systems                        1.22     49.9±1.18µs        ? ?/sec      1.00     40.7±2.62µs        ? ?/sec
contrived/01x_entities_09_systems                        1.30     72.3±2.39µs        ? ?/sec      1.00     55.4±2.60µs        ? ?/sec
contrived/01x_entities_12_systems                        1.28     94.3±9.44µs        ? ?/sec      1.00     73.7±3.62µs        ? ?/sec
contrived/01x_entities_15_systems                        1.25    118.0±2.43µs        ? ?/sec      1.00     94.1±3.99µs        ? ?/sec
contrived/02x_entities_03_systems                        1.23     41.6±1.71µs        ? ?/sec      1.00     33.7±2.30µs        ? ?/sec
contrived/02x_entities_06_systems                        1.19     78.6±2.63µs        ? ?/sec      1.00     65.9±2.35µs        ? ?/sec
contrived/02x_entities_09_systems                        1.28    113.6±3.60µs        ? ?/sec      1.00     88.6±3.60µs        ? ?/sec
contrived/02x_entities_12_systems                        1.20    146.4±5.75µs        ? ?/sec      1.00    121.7±3.35µs        ? ?/sec
contrived/02x_entities_15_systems                        1.23    178.5±4.86µs        ? ?/sec      1.00    145.7±4.00µs        ? ?/sec
contrived/03x_entities_03_systems                        1.42     58.3±2.77µs        ? ?/sec      1.00     41.1±1.54µs        ? ?/sec
contrived/03x_entities_06_systems                        1.32    108.5±7.30µs        ? ?/sec      1.00     82.4±4.86µs        ? ?/sec
contrived/03x_entities_09_systems                        1.23    153.7±4.61µs        ? ?/sec      1.00    125.0±4.76µs        ? ?/sec
contrived/03x_entities_12_systems                        1.18    197.5±5.12µs        ? ?/sec      1.00    166.8±8.14µs        ? ?/sec
contrived/03x_entities_15_systems                        1.23    238.8±6.38µs        ? ?/sec      1.00    194.6±4.55µs        ? ?/sec
contrived/04x_entities_03_systems                        1.34     66.4±3.42µs        ? ?/sec      1.00     49.5±1.98µs        ? ?/sec
contrived/04x_entities_06_systems                        1.27    134.3±4.86µs        ? ?/sec      1.00    105.8±3.58µs        ? ?/sec
contrived/04x_entities_09_systems                        1.26    193.2±3.83µs        ? ?/sec      1.00    153.0±5.60µs        ? ?/sec
contrived/04x_entities_12_systems                        1.16    237.1±5.78µs        ? ?/sec      1.00   204.9±18.77µs        ? ?/sec
contrived/04x_entities_15_systems                        1.17    289.2±4.76µs        ? ?/sec      1.00    246.3±8.57µs        ? ?/sec
contrived/05x_entities_03_systems                        1.26     80.4±2.90µs        ? ?/sec      1.00     63.7±3.07µs        ? ?/sec
contrived/05x_entities_06_systems                        1.27   161.6±13.47µs        ? ?/sec      1.00    127.2±5.59µs        ? ?/sec
contrived/05x_entities_09_systems                        1.22    228.0±7.76µs        ? ?/sec      1.00    186.2±7.68µs        ? ?/sec
contrived/05x_entities_12_systems                        1.20    289.5±6.21µs        ? ?/sec      1.00    241.8±7.52µs        ? ?/sec
contrived/05x_entities_15_systems                        1.18   357.3±11.24µs        ? ?/sec      1.00    302.7±7.21µs        ? ?/sec
heavy_compute/base                                       1.01    302.4±3.52µs        ? ?/sec      1.00    300.2±3.40µs        ? ?/sec
iter_fragmented/base                                     1.00    348.1±7.51ns        ? ?/sec      1.01    351.9±8.32ns        ? ?/sec
iter_fragmented/foreach                                  1.03   239.8±23.78ns        ? ?/sec      1.00   233.8±18.12ns        ? ?/sec
iter_fragmented/foreach_wide                             1.00      3.9±0.13µs        ? ?/sec      1.02      4.0±0.22µs        ? ?/sec
iter_fragmented/wide                                     1.18      4.6±0.15µs        ? ?/sec      1.00      3.9±0.10µs        ? ?/sec
iter_fragmented_sparse/base                              1.02      8.1±0.15ns        ? ?/sec      1.00      7.9±0.56ns        ? ?/sec
iter_fragmented_sparse/foreach                           1.00      7.8±0.22ns        ? ?/sec      1.01      7.9±0.62ns        ? ?/sec
iter_fragmented_sparse/foreach_wide                      1.00     37.2±1.17ns        ? ?/sec      1.10     40.9±0.95ns        ? ?/sec
iter_fragmented_sparse/wide                              1.09     48.4±2.13ns        ? ?/sec      1.00    44.5±18.34ns        ? ?/sec
iter_simple/base                                         1.02      8.4±0.10µs        ? ?/sec      1.00      8.2±0.14µs        ? ?/sec
iter_simple/foreach                                      1.01      8.3±0.07µs        ? ?/sec      1.00      8.2±0.09µs        ? ?/sec
iter_simple/foreach_sparse_set                           1.00     25.3±0.32µs        ? ?/sec      1.02     25.7±0.42µs        ? ?/sec
iter_simple/foreach_wide                                 1.03     41.1±0.94µs        ? ?/sec      1.00     39.9±0.41µs        ? ?/sec
iter_simple/foreach_wide_sparse_set                      1.05    123.6±2.05µs        ? ?/sec      1.00    118.1±2.78µs        ? ?/sec
iter_simple/sparse_set                                   1.14     30.5±1.40µs        ? ?/sec      1.00     26.9±0.64µs        ? ?/sec
iter_simple/system                                       1.01      8.4±0.25µs        ? ?/sec      1.00      8.4±0.11µs        ? ?/sec
iter_simple/wide                                         1.18     48.2±0.62µs        ? ?/sec      1.00     40.7±0.38µs        ? ?/sec
iter_simple/wide_sparse_set                              1.12   140.8±21.56µs        ? ?/sec      1.00    126.0±2.30µs        ? ?/sec
query_get/50000_entities_sparse                          1.17    378.6±7.60µs        ? ?/sec      1.00   324.1±23.17µs        ? ?/sec
query_get/50000_entities_table                           1.08   330.9±10.90µs        ? ?/sec      1.00    306.8±4.98µs        ? ?/sec
query_get_component/50000_entities_sparse                1.00   976.7±19.55µs        ? ?/sec      1.00   979.8±35.87µs        ? ?/sec
query_get_component/50000_entities_table                 1.00  1029.0±15.11µs        ? ?/sec      1.05  1080.0±59.18µs        ? ?/sec
query_get_component_simple/system                        1.13   839.7±14.18µs        ? ?/sec      1.00   742.8±10.72µs        ? ?/sec
query_get_component_simple/unchecked                     1.01   909.0±15.17µs        ? ?/sec      1.00   898.0±13.56µs        ? ?/sec
query_get_many_10/50000_calls_sparse                     1.04      5.5±0.54ms        ? ?/sec      1.00      5.3±0.67ms        ? ?/sec
query_get_many_10/50000_calls_table                      1.01      4.9±0.49ms        ? ?/sec      1.00      4.8±0.45ms        ? ?/sec
query_get_many_2/50000_calls_sparse                      1.28  848.4±210.89µs        ? ?/sec      1.00   664.8±47.69µs        ? ?/sec
query_get_many_2/50000_calls_table                       1.05   779.0±73.85µs        ? ?/sec      1.00   739.2±83.02µs        ? ?/sec
query_get_many_5/50000_calls_sparse                      1.05      2.4±0.37ms        ? ?/sec      1.00      2.3±0.33ms        ? ?/sec
query_get_many_5/50000_calls_table                       1.00  1939.9±75.22µs        ? ?/sec      1.04      2.0±0.19ms        ? ?/sec
run_criteria/yes_using_query/001_systems                 1.00      3.7±0.38µs        ? ?/sec      1.30      4.9±0.14µs        ? ?/sec
run_criteria/yes_using_query/006_systems                 1.00      8.9±0.40µs        ? ?/sec      1.17     10.3±0.57µs        ? ?/sec
run_criteria/yes_using_query/011_systems                 1.00     13.9±0.49µs        ? ?/sec      1.08     15.0±0.89µs        ? ?/sec
run_criteria/yes_using_query/016_systems                 1.00     18.8±0.74µs        ? ?/sec      1.00     18.8±1.43µs        ? ?/sec
run_criteria/yes_using_query/021_systems                 1.07     24.1±0.87µs        ? ?/sec      1.00     22.6±1.58µs        ? ?/sec
run_criteria/yes_using_query/026_systems                 1.04     27.9±0.62µs        ? ?/sec      1.00     26.8±1.71µs        ? ?/sec
run_criteria/yes_using_query/031_systems                 1.09     33.3±1.03µs        ? ?/sec      1.00     30.5±2.18µs        ? ?/sec
run_criteria/yes_using_query/036_systems                 1.14     38.7±0.80µs        ? ?/sec      1.00     33.9±1.75µs        ? ?/sec
run_criteria/yes_using_query/041_systems                 1.18     43.7±1.07µs        ? ?/sec      1.00     37.0±2.39µs        ? ?/sec
run_criteria/yes_using_query/046_systems                 1.14     47.6±1.16µs        ? ?/sec      1.00     41.9±2.09µs        ? ?/sec
run_criteria/yes_using_query/051_systems                 1.17     52.9±2.04µs        ? ?/sec      1.00     45.3±1.75µs        ? ?/sec
run_criteria/yes_using_query/056_systems                 1.25     59.2±2.38µs        ? ?/sec      1.00     47.2±2.01µs        ? ?/sec
run_criteria/yes_using_query/061_systems                 1.28    66.1±15.84µs        ? ?/sec      1.00     51.5±2.47µs        ? ?/sec
run_criteria/yes_using_query/066_systems                 1.28     70.2±2.57µs        ? ?/sec      1.00     54.7±2.58µs        ? ?/sec
run_criteria/yes_using_query/071_systems                 1.30     75.5±2.27µs        ? ?/sec      1.00     58.2±3.31µs        ? ?/sec
run_criteria/yes_using_query/076_systems                 1.26     81.5±2.66µs        ? ?/sec      1.00     64.5±3.13µs        ? ?/sec
run_criteria/yes_using_query/081_systems                 1.29     89.7±2.58µs        ? ?/sec      1.00     69.3±3.47µs        ? ?/sec
run_criteria/yes_using_query/086_systems                 1.33     95.6±3.39µs        ? ?/sec      1.00     71.8±3.48µs        ? ?/sec
run_criteria/yes_using_query/091_systems                 1.25    102.0±3.67µs        ? ?/sec      1.00     81.4±4.82µs        ? ?/sec
run_criteria/yes_using_query/096_systems                 1.33    111.7±3.29µs        ? ?/sec      1.00     83.8±4.15µs        ? ?/sec
run_criteria/yes_using_query/101_systems                 1.29   113.2±12.04µs        ? ?/sec      1.00     87.7±5.15µs        ? ?/sec
world_query_for_each/50000_entities_sparse               1.00     47.4±0.51µs        ? ?/sec      1.00     47.3±0.33µs        ? ?/sec
world_query_for_each/50000_entities_table                1.00     27.2±0.50µs        ? ?/sec      1.00     27.2±0.17µs        ? ?/sec
world_query_get/50000_entities_sparse_wide               1.09    210.5±1.78µs        ? ?/sec      1.00    192.5±2.61µs        ? ?/sec
world_query_get/50000_entities_table                     1.00    127.7±2.09µs        ? ?/sec      1.07    136.2±5.95µs        ? ?/sec
world_query_get/50000_entities_table_wide                1.00    209.8±2.37µs        ? ?/sec      1.15    240.6±2.04µs        ? ?/sec
world_query_iter/50000_entities_sparse                   1.00     54.2±0.36µs        ? ?/sec      1.01     54.7±0.61µs        ? ?/sec
world_query_iter/50000_entities_table                    1.00     27.2±0.31µs        ? ?/sec      1.00     27.3±0.64µs        ? ?/sec
```
</details>

NOTE: This PR includes a change to enable LTO on our benchmarks to get a "fully optimized" baseline for our benchmarks. Both the main and the current PR's results were with LTO enabled.
2022-11-04 06:04:55 +00:00
JoJoJet
3d6706f86d Speed up Query::get_many and add benchmarks (#6400)
# Objective

* Add benchmarks for `Query::get_many`.
* Speed up `Query::get_many`.

## Solution

Previously, `get_many` and `get_many_mut` used the method `array::map`, which tends to optimize very poorly. This PR replaces uses of that method with loops.
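
For illustration, a minimal sketch of the pattern on plain slices (not the actual `Query::get_many` code, which also has to handle aliasing checks and fetch state): building the output array with an explicit loop instead of `array::map` is much friendlier to the optimizer.

```rust
// Sketch only: explicit loop instead of `array::map`.
fn get_many_loop<const N: usize>(values: &[u32], keys: [usize; N]) -> Option<[u32; N]> {
    let mut out = [0u32; N];
    for (slot, key) in out.iter_mut().zip(keys) {
        *slot = *values.get(key)?;
    }
    Some(out)
}
```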

## Benchmarks

| Benchmark name                       | Execution time | Change from this PR |
|--------------------------------------|----------------|---------------------|
| query_get_many_2/50000_calls_table   | 1.3732 ms      | -24.967%            |
| query_get_many_2/50000_calls_sparse  | 1.3826 ms      | -24.572%            |
| query_get_many_5/50000_calls_table   | 2.6833 ms      | -30.681%            |
| query_get_many_5/50000_calls_sparse  | 2.9936 ms      | -30.672%            |
| query_get_many_10/50000_calls_table  | 5.7771 ms      | -36.950%            |
| query_get_many_10/50000_calls_sparse | 7.4345 ms      | -36.987%            |
2022-11-01 03:51:41 +00:00
JoJoJet
336049da68 Remove outdated uses of single-tuple bundles (#6406)
# Objective

Bevy still has many instances of using single-tuples `(T,)` to create a bundle. Due to #2975, this is no longer necessary.

## Solution

Search for regex `\(.+\s*,\)`. This should have found every instance.
2022-10-29 18:15:28 +00:00
James Liu
b508b5c7c7 Skip empty archetypes and tables when iterating over queries (#4724)
# Objective
Speed up queries that are fragmented over many empty archetypes and tables.

## Solution
Add an early-out that checks whether the table or archetype is empty before iterating over it. This adds an extra branch for every archetype matched, but skips setting the archetype/table on the underlying state and any iteration over it.
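
A toy illustration of the added branch (not the actual iterator internals; `Table` here is a stand-in type):

```rust
struct Table {
    rows: Vec<u32>,
}

// Skip empty tables before paying the cost of "entering" them.
fn visit_all(tables: &[Table], mut f: impl FnMut(&u32)) {
    for table in tables {
        if table.rows.is_empty() {
            continue; // the new early-out
        }
        // In the real code, entering a table also sets up fetch/filter state.
        for row in &table.rows {
            f(row);
        }
    }
}
```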

This may not be worth it for the default `Query::iter` and maybe even the `Query::for_each` implementations, but this definitely avoids scheduling unnecessary tasks in the `Query::par_for_each` case.

Ideally, `matched_archetypes` should only contain archetypes where there's actually work to do, but this would add an `O(n)` flat cost to every call to `update_archetypes` that scales with the number of matched archetypes.

TODO: Benchmark
2022-10-24 13:22:05 +00:00
Carter Anderson
dc3f801239 Exclusive Systems Now Implement System. Flexible Exclusive System Params (#6083)
# Objective

The [Stageless RFC](https://github.com/bevyengine/rfcs/pull/45) involves allowing exclusive systems to be referenced and ordered relative to parallel systems. We've agreed that unifying systems under `System` is the right move.

This is an alternative to #4166 (see rationale in the comments I left there). Note that this builds on the learnings established there (and borrows some patterns).

## Solution

This unifies parallel and exclusive systems under the shared `System` trait, removing the old `ExclusiveSystem` trait / impls. This is accomplished by adding a new `ExclusiveFunctionSystem` impl similar to `FunctionSystem`. It is backed by `ExclusiveSystemParam`, which is similar to `SystemParam`. There is a new flattened out SystemContainer api (which cuts out a lot of trait and type complexity). 

This means you can remove all cases of `exclusive_system()`:

```rust
// before
app.add_system(some_system.exclusive_system());
// after
app.add_system(some_system);
```

I've also implemented `ExclusiveSystemParam` for `&mut QueryState` and `&mut SystemState`, which makes this possible in exclusive systems:

```rust
fn some_exclusive_system(
    world: &mut World,
    transforms: &mut QueryState<&Transform>,
    state: &mut SystemState<(Res<Time>, Query<&Player>)>,
) {
    for transform in transforms.iter(world) {
        println!("{transform:?}");
    }
    let (time, players) = state.get(world);
    for player in players.iter() {
        println!("{player:?}");
    }
}
```

Note that "exclusive function systems" assume `&mut World` is present (and is the first param). I think this is a fair assumption, given that the presence of `&mut World` is what defines the need for an exclusive system.

I added some targeted SystemParam `static` constraints, which removed the need for this:
```rust
fn some_exclusive_system(state: &mut SystemState<(Res<'static, Time>, Query<&'static Player>)>) {}
```

## Related

- #2923
- #3001
- #3946

## Changelog

- `ExclusiveSystem` trait (and implementations) has been removed in favor of sharing the `System` trait.
- `ExclusiveFunctionSystem` and `ExclusiveSystemParam` were added, enabling flexible exclusive function systems
- `&mut SystemState` and `&mut QueryState` now implement `ExclusiveSystemParam`
- Exclusive and parallel System configuration is now done via a unified `SystemDescriptor`, `IntoSystemDescriptor`, and `SystemContainer` api.

## Migration Guide

Calling `.exclusive_system()` is no longer required (or supported) for converting exclusive system functions to exclusive systems:

```rust
// Old (0.8)
app.add_system(some_exclusive_system.exclusive_system());
// New (0.9)
app.add_system(some_exclusive_system);
```

Converting "normal" parallel systems to exclusive systems is done by calling the exclusive ordering apis:

```rust
// Old (0.8)
app.add_system(some_system.exclusive_system().at_end());
// New (0.9)
app.add_system(some_system.at_end());
```

Query state in exclusive systems can now be cached via ExclusiveSystemParams, which should be preferred for clarity and performance reasons:
```rust
// Old (0.8)
fn some_system(world: &mut World) {
  let mut transforms = world.query::<&Transform>();
  for transform in transforms.iter(world) {
  }
}
// New (0.9)
fn some_system(world: &mut World, transforms: &mut QueryState<&Transform>) {
  for transform in transforms.iter(world) {
  }
}
```
2022-09-26 23:57:07 +00:00
Carter Anderson
01aedc8431 Spawn now takes a Bundle (#6054)
# Objective

Now that we can consolidate Bundles and Components under a single insert (thanks to #2975 and #6039), almost 100% of world spawns now look like `world.spawn().insert((Some, Tuple, Here))`. Spawning an entity without any components is an extremely uncommon pattern, so it makes sense to give spawn the "first class" ergonomic api. This consolidated api should be made consistent across all spawn apis (such as World and Commands).

## Solution

All `spawn` apis (`World::spawn`, `Commands::spawn`, `ChildBuilder::spawn`, and `WorldChildBuilder::spawn`) now accept a bundle as input:

```rust
// before:
commands
  .spawn()
  .insert((A, B, C));
world
  .spawn()
  .insert((A, B, C));

// after
commands.spawn((A, B, C));
world.spawn((A, B, C));
```

All existing instances of `spawn_bundle` have been deprecated in favor of the new `spawn` api. A new `spawn_empty` has been added, replacing the old `spawn` api.  

By allowing `world.spawn(some_bundle)` to replace `world.spawn().insert(some_bundle)`, this opened the door to removing the initial entity allocation in the "empty" archetype / table done in `spawn()` (and subsequent move to the actual archetype in `.insert(some_bundle)`).

This improves spawn performance by over 10%:
![image](https://user-images.githubusercontent.com/2694663/191627587-4ab2f949-4ccd-4231-80eb-80dd4d9ad6b9.png)

To take this measurement, I added a new `world_spawn` benchmark.

Unfortunately, optimizing `Commands::spawn` is slightly less trivial, as Commands expose the Entity id of spawned entities prior to actually spawning. Doing the optimization would (naively) require assurances that the `spawn(some_bundle)` command is applied before all other commands involving the entity (which would not necessarily be true, if memory serves). Optimizing `Commands::spawn` this way does feel possible, but it will require careful thought (and maybe some additional checks), which deserves its own PR. For now, it has the same performance characteristics of the current `Commands::spawn_bundle` on main.

**Note that 99% of this PR is simple renames and refactors. The only code that needs careful scrutiny is the new `World::spawn()` impl, which is relatively straightforward, but it has some new unsafe code (which re-uses the battle tested BundleSpawner code path).** 

---

## Changelog

- All `spawn` apis (`World::spawn`, `Commands::spawn`, `ChildBuilder::spawn`, and `WorldChildBuilder::spawn`) now accept a bundle as input
- All instances of `spawn_bundle` have been deprecated in favor of the new `spawn` api
- World and Commands now have `spawn_empty()`, which is equivalent to the old `spawn()` behavior.  

## Migration Guide

```rust
// Old (0.8):
commands
  .spawn()
  .insert_bundle((A, B, C));
// New (0.9)
commands.spawn((A, B, C));

// Old (0.8):
commands.spawn_bundle((A, B, C));
// New (0.9)
commands.spawn((A, B, C));

// Old (0.8):
let entity = commands.spawn().id();
// New (0.9)
let entity = commands.spawn_empty().id();

// Old (0.8)
let entity = world.spawn().id();
// New (0.9)
let entity = world.spawn_empty().id();
```
2022-09-23 19:55:54 +00:00
Carter Anderson
cd15f0f5be Accept Bundles for insert and remove. Deprecate insert/remove_bundle (#6039)
# Objective

Take advantage of the "impl Bundle for Component" changes in #2975 / add the follow up changes discussed there.

## Solution

- Change `insert` and `remove` to accept a Bundle instead of a Component (for both Commands and World)
- Deprecate `insert_bundle`, `remove_bundle`, and `remove_bundle_intersection`
- Add `remove_intersection`

---

## Changelog

- `insert` and `remove` now accept a Bundle instead of a Component (for both Commands and World)
- `insert_bundle` and `remove_bundle` are deprecated
 

## Migration Guide

Replace `insert_bundle` with `insert`:
```rust
// Old (0.8)
commands.spawn().insert_bundle(SomeBundle::default());
// New (0.9)
commands.spawn().insert(SomeBundle::default());
```

Replace `remove_bundle` with `remove`:
```rust
// Old (0.8)
commands.entity(some_entity).remove_bundle::<SomeBundle>();
// New (0.9)
commands.entity(some_entity).remove::<SomeBundle>();
```

Replace `remove_bundle_intersection` with `remove_intersection`:
```rust
// Old (0.8)
world.entity_mut(some_entity).remove_bundle_intersection::<SomeBundle>();
// New (0.9)
world.entity_mut(some_entity).remove_intersection::<SomeBundle>();
```

Consider consolidating as many operations as possible to improve ergonomics and cut down on archetype moves:
```rust
// Old (0.8)
commands.spawn()
  .insert_bundle(SomeBundle::default())
  .insert(SomeComponent);

// New (0.9) - Option 1
commands.spawn().insert((
  SomeBundle::default(),
  SomeComponent,
))

// New (0.9) - Option 2
commands.spawn_bundle((
  SomeBundle::default(),
  SomeComponent,
))
```

## Next Steps

Consider changing `spawn` to accept a bundle and deprecate `spawn_bundle`.
2022-09-21 21:47:53 +00:00
JoJoJet
697d297b55 Remove last uses of string-labels (#5420)
# Objective

* Related: #4341
* Remove all remaining uses of stringly-typed labels in the repo. Right now, it's just a bunch of tests and examples.
2022-09-03 18:06:41 +00:00
ira
992681b59b Make Resource trait opt-in, requiring #[derive(Resource)] V2 (#5577)
*This PR description is an edited copy of #5007, written by @alice-i-cecile.*
# Objective
Follow-up to https://github.com/bevyengine/bevy/pull/2254. The `Resource` trait currently has a blanket implementation for all types that meet its bounds.

While ergonomic, this results in several drawbacks:

* it is possible to make confusing, silent mistakes such as inserting a function pointer (Foo) rather than a value (Foo::Bar) as a resource
* it is challenging to discover if a type is intended to be used as a resource
* we cannot later add customization options (see the [RFC](https://github.com/bevyengine/rfcs/blob/main/rfcs/27-derive-component.md) for the equivalent choice for Component).
* dependencies can use the same Rust type as a resource in invisibly conflicting ways
* raw Rust types used as resources cannot preserve privacy appropriately, as anyone able to access that type can read and write to internal values
* we cannot capture a definitive list of possible resources to display to users in an editor
## Notes to reviewers
 * Review this commit-by-commit; there's effectively no back-tracking and there's a lot of churn in some of these commits.
   *ira: My commits are not as well organized :')*
 * I've relaxed the bound on Local to Send + Sync + 'static: I don't think these concerns apply there, so this can keep things simple. Storing e.g. a u32 in a Local is fine, because there's a variable name attached explaining what it does.
 * I think this is a bad place for the Resource trait to live, but I've left it in place to make reviewing easier. IMO that's best tackled with https://github.com/bevyengine/bevy/issues/4981.

## Changelog
`Resource` is no longer automatically implemented for all matching types. Instead, use the new `#[derive(Resource)]` macro.

## Migration Guide
Add `#[derive(Resource)]` to all types you are using as a resource.

If you are using a third party type as a resource, wrap it in a tuple struct to bypass orphan rules. Consider deriving `Deref` and `DerefMut` to improve ergonomics.
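
For example, a minimal sketch (with a hypothetical `ExternalClient` standing in for the third-party type):

```rust
use bevy::prelude::*;

// Hypothetical type from another crate that we cannot derive `Resource` for directly.
struct ExternalClient {
    url: String,
}

// A newtype wrapper satisfies the orphan rules; Deref/DerefMut keep call sites terse.
#[derive(Resource, Deref, DerefMut)]
struct ClientResource(ExternalClient);

fn print_url(client: Res<ClientResource>) {
    // Deref reaches through to the inner type's fields.
    println!("{}", client.url);
}
```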

`ClearColor` no longer implements `Component`. Using `ClearColor` as a component in 0.8 did nothing.
Use the `ClearColorConfig` in the `Camera3d` and `Camera2d` components instead.


Co-authored-by: Alice <alice.i.cecile@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: devil-ira <justthecooldude@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-08-08 21:36:35 +00:00
JoJoJet
c43295af80 Simplify design for *Labels (#4957)
# Objective

- Closes #4954 
- Reduce the complexity of the `{System, App, *}Label` APIs.

## Solution

For the sake of brevity I will only refer to `SystemLabel`, but everything applies to all of the other label types as well.

- Add `SystemLabelId`, a lightweight, `copy` struct.
- Convert custom types into `SystemLabelId` using the trait `SystemLabel`.

## Changelog

- String literals implement `SystemLabel` for now, but this should be changed with #4409.

## Migration Guide

- Any previous use of `Box<dyn SystemLabel>` should be replaced with `SystemLabelId`.
- `AsSystemLabel` trait has been modified.
    - No more output generics.
    - Method `as_system_label` now returns `SystemLabelId`, removing an unnecessary level of indirection.
- If you *need* a label that is determined at runtime, you can use `Box::leak`. Not recommended.

## Questions for later

* Should we generate a `Debug` impl along with `#[derive(*Label)]`?
* Should we rename `as_str()`?
* Should we remove the extra derives (such as `Hash`) from builtin `*Label` types?
* Should we automatically derive types like `Clone, Copy, PartialEq, Eq`?
* More-ergonomic comparisons between `Label` and `LabelId`.
* Move `Dyn{Eq, Hash,Clone}` somewhere else.
* Some API to make interning dynamic labels easier.
* Optimize string representation
    * Empty string for unit structs -- no debug info but faster comparisons
    * Don't show enum types -- same tradeoffs as above.
2022-07-14 18:23:01 +00:00
ira
234e5af882 Implement From<bool> for ShouldRun. (#5306)
Make writing simple yes/no run criteria easier.
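
For example, a minimal sketch of a run criteria that toggles every evaluation (the `.into()` call is what this impl enables):

```rust
use bevy_ecs::prelude::*;
use bevy_ecs::schedule::ShouldRun;

fn every_other_time(mut toggled: Local<bool>) -> ShouldRun {
    *toggled = !*toggled;
    // `true` becomes `ShouldRun::Yes`, `false` becomes `ShouldRun::No`.
    (*toggled).into()
}
```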


Co-authored-by: devil-ira <justthecooldude@gmail.com>
2022-07-14 17:26:40 +00:00
ira
4847f7e3ad Update codebase to use IntoIterator where possible. (#5269)
Remove unnecessary calls to `iter()`/`iter_mut()`.
Mainly updates the use of queries in our code, docs, and examples.

```rust
// From
for _ in list.iter() {
for _ in list.iter_mut() {

// To
for _ in &list {
for _ in &mut list {
```

We already enable the pedantic lint [clippy::explicit_iter_loop](https://rust-lang.github.io/rust-clippy/stable/) inside of Bevy. However, this only warns for a few known types from the standard library.

## Note for reviewers
As you can see the additions and deletions are exactly equal.
Maybe give it a quick skim to check I didn't sneak in a crypto miner, but you don't have to torture yourself by reading every line.
I already experienced enough pain making this PR :) 


Co-authored-by: devil-ira <justthecooldude@gmail.com>
2022-07-11 15:28:50 +00:00
ShadowCurse
179f719553 ECS benchmarks organization (#5189)
## Objective

Fixes: #5110

## Solution

- Moved benches into separate modules according to the part of ECS they are testing.
- Made it so all ECS benches are included in one `benches.rs`, so they don’t need to be added separately in `Cargo.toml`.
- Renamed a bunch of files to have more coherent names.
- Merged `schedule.rs` and `system_schedule.rs` into one file.
2022-07-04 14:17:45 +00:00
CGMossa
33f9b3940d Updated glam to 0.21. (#5142)
Removed `const_vec2`/`const_vec3`
and replaced with equivalent `.from_array`.

# Objective

Fixes #5112 

## Solution

- `encase` needs to update its `glam` dependency as well. See teoxoy/encase#4 for progress on that.
- `hexasphere` also needs to be updated, see OptimisticPeach/hexasphere#12.
2022-07-03 19:55:33 +00:00
James Liu
ba3d8bedc5 Wider ECS Benchmarks (#5123)
# Objective
As a part of evaluating #4800, at the behest of @cart, it was noted that the ECS microbenchmarks all focus on singular component queries, whereas in reality most systems will have wider queries with multiple components in each.

## Solution
Use const generics to add wider variants of existing benchmarks.
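
Roughly, the pattern looks like this (a sketch; the component and counts are illustrative, not the actual bench code):

```rust
use bevy_ecs::prelude::*;

// A const parameter turns one definition into many distinct component types.
#[derive(Component)]
struct Data<const X: u16>(f32);

fn setup(world: &mut World) {
    // "Wide" entities carry several of these otherwise-identical components.
    world.spawn_batch((0..1_000).map(|_| {
        (Data::<0>(1.0), Data::<1>(1.0), Data::<2>(1.0), Data::<3>(1.0))
    }));
}

// The wide query variant then touches every one of them per entity.
fn widen(mut query: Query<(&mut Data<0>, &mut Data<1>, &mut Data<2>, &mut Data<3>)>) {
    for (mut a, mut b, mut c, mut d) in query.iter_mut() {
        a.0 += 1.0;
        b.0 += 1.0;
        c.0 += 1.0;
        d.0 += 1.0;
    }
}
```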
2022-06-29 02:29:50 +00:00
JoJoJet
92ea730362 Add benchmarks for schedule dependency resolution (#4961)
# Objective

- Add benchmarks to test the performance of `Schedule`'s system dependency resolution.

## Solution

- Do a series of benchmarks while increasing the number of systems in the schedule to see how the run-time scales.
- Split the benchmarks into a group with no dependencies, and a group with many dependencies.
2022-06-20 17:35:54 +00:00
James Liu
012ae07dc8 Add global init and get accessors for all newtyped TaskPools (#2250)
Right now, a direct reference to the target TaskPool is required to launch tasks on the pools, despite the three newtyped pools (AsyncComputeTaskPool, ComputeTaskPool, and IoTaskPool) effectively acting as global instances. The need to pass a TaskPool reference adds notable friction to spawning subtasks within existing tasks. Possible use cases for this may include chaining tasks within the same pool like spawning separate send/receive I/O tasks after waiting on a network connection to be established, or allowing cross-pool dependent tasks like starting dependent multi-frame computations following a long I/O load. 

Other task execution runtimes provide static access to spawning tasks (i.e. `tokio::spawn`), which is notably easier to use than the reference passing required by `bevy_tasks` right now.

This PR does the following:

 * Adds `*TaskPool::init`, which initializes a global `OnceCell` with the provided `TaskPool`, failing if the pool has already been initialized (see the sketch after this list).
 * Adds `*TaskPool::get` which fetches the initialized global pool of the respective type or panics. This generally should not be an issue in normal Bevy use, as the pools are initialized before they are accessed.
 * Updated default task pool initialization to either initialize the global handles and save them as resources, or, if they are already initialized, save a cloned global handle as the resource.
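
The init/get pattern is roughly the following (a sketch with a stand-in `Pool` type; the actual `bevy_tasks` signatures may differ):

```rust
use once_cell::sync::OnceCell;

// Stand-in for the real task pool type.
struct Pool;

static GLOBAL_POOL: OnceCell<Pool> = OnceCell::new();

/// Stores the global pool, handing it back as an error if one was already set.
fn init(pool: Pool) -> Result<(), Pool> {
    GLOBAL_POOL.set(pool)
}

/// Fetches the global pool, panicking if `init` was never called.
fn get() -> &'static Pool {
    GLOBAL_POOL.get().expect("task pool not initialized")
}
```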

This should make it notably easier to build more complex task hierarchies for dependent tasks. It should also make writing bevy-adjacent, but not strictly bevy-only plugin crates easier, as the global pools ensure it's all running on the same threads.

One alternative considered is keeping a thread-local reference to the pool for all threads in each pool to enable the same `tokio::spawn` interface. This would spawn tasks on the same pool that a task is currently running in. However this potentially leads to footgun situations where long running blocking tasks run on `ComputeTaskPool`.
2022-06-09 02:43:24 +00:00
James Liu
c5e89894f4 Remove task_pool parameter from par_for_each(_mut) (#4705)
# Objective
Fixes #3183. Requiring a `&TaskPool` parameter is sort of meaningless if the only correct one is to use the one provided by `Res<ComputeTaskPool>` all the time.

## Solution
Have `QueryState` save a clone of the `ComputeTaskPool` which is used for all `par_for_each` functions.

~~Adds a small overhead of the internal `Arc` clone as a part of the startup, but the ergonomics win should be well worth this hardly-noticable overhead.~~

Updated the docs to note that it will panic if the task pool is not present as a resource.

# Future Work
If https://github.com/bevyengine/rfcs/pull/54 is approved, we can replace these resource lookups with a static function call instead to get the `ComputeTaskPool`.

---

## Changelog
Removed: The `task_pool` parameter of `Query(State)::par_for_each(_mut)`. These calls will use the `World`'s `ComputeTaskPool` resource instead.

## Migration Guide
The `task_pool` parameter for `Query(State)::par_for_each(_mut)` has been removed. Remove these parameters from all calls to these functions.

Before:
```rust
fn parallel_system(
   task_pool: Res<ComputeTaskPool>,
   query: Query<&MyComponent>,
) {
   query.par_for_each(&task_pool, 32, |comp| {
        ...
   });
}
```

After:

```rust
fn parallel_system(query: Query<&MyComponent>) {
   query.par_for_each(32, |comp| {
        ...
   });
}
```

If using `Query(State)` outside of a system run by the scheduler, you may need to manually configure and initialize a `ComputeTaskPool` as a resource in the `World`.
2022-05-30 16:59:38 +00:00
dataphract
0917c49b9b bench: add bevy_reflect::{List, Map, Struct} benchmarks (#3690)
# Objective

Partially addresses #3594.

## Solution

This adds basic benchmarks for `List`, `Map`, and `Struct` implementors, both concrete (`Vec`, `HashMap`, and defined struct types) and dynamic (`DynamicList`, `DynamicMap` and `DynamicStruct`). 
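
As a minimal example of the kind of operation being measured, applying a `DynamicList` onto a concrete `Vec` looks like this (a sketch, not the bench code itself):

```rust
use bevy_reflect::{DynamicList, Reflect};

fn main() {
    // Build a dynamic list...
    let mut dynamic = DynamicList::default();
    for i in 0..5_i32 {
        dynamic.push(i);
    }

    // ...then apply it onto a concrete list.
    let mut concrete: Vec<i32> = Vec::new();
    concrete.apply(&dynamic);
    assert_eq!(concrete, vec![0, 1, 2, 3, 4]);
}
```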

A few insights from the benchmarks (all measurements are local on my machine):
- Applying a list with many elements to a list with no elements is slower than applying to a list of the same length:
  - 3-4x slower when applying to a `Vec`
  - 5-6x slower when applying to a `DynamicList`
  
  I suspect this could be improved by `reserve()`ing the correct length up front, but haven't tested.
- Applying a `DynamicMap` to another `Map` is linear in the number of elements, but applying a `HashMap` seems to be at least quadratic. No intuition on this one.
- Applying like structs (concrete -> concrete, `DynamicStruct` -> `DynamicStruct`) seems to be faster than applying unlike structs.
2022-05-17 04:16:53 +00:00
Daniel McNab
94d941661d Tidy up the code of events (#4713)
# Objective

- The code in `events.rs` was a bit messy. There was lots of duplication between `EventReader` and `ManualEventReader`, and the state management code is not needed.

## Solution

- Clean it up.

## Future work

Should we remove the type parameter from `ManualEventReader`? 
It doesn't have any meaning outside of its source `Events`. But there's no real reason why it needs to have a type parameter - it's just plain data. I didn't remove it yet to keep the type safety in some of the users of it (primarily related to `&mut World` usage)
2022-05-10 20:18:59 +00:00
TheRawMeatball
87991c50f1 Add a random access get_component benchmark (#4607)
# Objective

Add a benchmark to measure the performance of get_component, particularly for cases involving random access.

Enables #2965
2022-04-26 21:20:13 +00:00
Mike
45d2c78949 add benches for simple run criteria (#4196)
# Objective

- Add benches for run criteria. This is in anticipation of run criteria being redone in stageless.

## Solution

- Benches run criteria that don't access anything to test overhead
- Test run criteria that use a query
- Test run criteria that use a resource
2022-04-26 19:59:19 +00:00
Torne Wuff
b1afe2dcca Make System responsible for updating its own archetypes (#4115)
# Objective

- Make it possible to use `System`s outside of the scheduler/executor without having to define logic to track new archetypes and call `System::add_archetype()` for each.

## Solution

- Replace `System::add_archetype(&Archetype)` with `System::update_archetypes(&World)`, making systems responsible for tracking their own most recent archetype generation the way that `SystemState` already does.

This has minimal (or simplifying) effect on most of the code with the exception of `FunctionSystem`, which must now track the latest `ArchetypeGeneration` it saw instead of relying on the executor to do it.

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-04-07 20:50:43 +00:00
Alice Cecile
1b63d949ea Re-add ECS benchmark suite (#4332)
# Objective

- Benchmarks are good.
- Licensing situation appears to be [cleared up](https://github.com/bevyengine/bevy/pull/4225#issuecomment-1078710209).

## Solution

- Add the benchmark suite back in
- Suggested PR title: "Revert "Revert "Add cart's fork of ecs_bench_suite (#4225)" (#4252)"


Co-authored-by: Daniel McNab <36049421+DJMcNab@users.noreply.github.com>
2022-03-29 22:18:07 +00:00
Daniel McNab
95d3f32b9b Revert "Add cart's fork of ecs_bench_suite (#4225)" (#4252)
This reverts commit 08ef2f0a28.

# Objective

- #4225 was merged without considering the licensing considerations.
- It merges in code taken from https://github.com/cart/ecs_bench_suite/tree/bevy-benches/src/bevy.
- We can safely assume that we do have a license to cart's contributions. However, these build upon 377e96e69a, for which we have no license. 
- This has been verified by looking in the Cargo.toml, the root folder and the readme, none of which mention a license. Additionally, the string "license" [doesn't appear](https://github.com/rust-gamedev/ecs_bench_suite/search?q=license) in the repository.
- This means the code is all rights reserved.
- (The author of these commits also hasn't commented in #2373, though even if they had, it would be legally *dubious* to rely on that to license any code they ever wrote)
- (Note that the latest commit on the head at https://github.com/rust-gamedev/ecs_bench_suite hasn't had a license added either.)
- We are currently incorrectly claiming to be able to give an MIT/Apache 2.0 license to this code.

## Solution

- Revert it
2022-03-19 11:12:24 +00:00
James Liu
08ef2f0a28 Add cart's fork of ecs_bench_suite (#4225)
# Objective
Better benchmarking for ECS. Fix #2062.

## Solution
Port @cart's fork of ecs_bench_suite to the official bench suite for bevy_ecs, replace cgmath with glam, update to latest bevy.
2022-03-19 04:41:30 +00:00
Daniel McNab
1ba9818a78 Significantly reduce the amount of building required for benchmarks (#4067)
# Objective

- Building the benchmarks in release mode takes a long time

## Solution

- Build fewer things, so it takes less time
2022-03-01 00:51:07 +00:00
Daniel McNab
6ac9d6876f Make ECS benchmark more representative (#2941)
# Objective

- The addition was being optimised out in the `for_each` loop, but not the `for` loop
- Previously this meant that the `for_each` loop looked 3 times as fast - it's actually only 2 times as fast
- Effectively, the addition takes one unit of time, the for_each takes one unit of time, and the for loop version takes two units of time. 

## Solution

- `black_box` the count in each loop
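
In isolation, the fix amounts to something like this (a sketch using criterion's `black_box`, not the exact bench code):

```rust
use criterion::black_box;

fn count_for_each(values: &[u64]) -> u64 {
    let mut count = 0u64;
    values.iter().for_each(|v| {
        count += *v;
        // Force the running total to be observed so the compiler cannot
        // optimise the addition out of the closure.
        black_box(count);
    });
    count
}
```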

Note that this does not fix `for_each` being faster than `for`, unfortunately.
2022-02-03 22:34:29 +00:00
James Liu
e30199f7a9 Document bevy_tasks and enable #![warn(missing_docs)] (#3509)
This PR is part of the issue #3492.

# Objective

-  Add and update the bevy_tasks documentation to achieve a 100% documentation coverage (sans `prelude` module)
-  Add the #![warn(missing_docs)] lint to keep the documentation coverage for the future.

## Solution

 -  Add and update the bevy_math documentation.
 -  Add the #![warn(missing_docs)] lint.
 - Added doctest wherever there should be in the missing docs.
2022-01-16 04:53:22 +00:00
David Sugar
8a8293b266 Renamed Entity::new to Entity::from_raw (#3465)
# Objective

- Rename `Entity::new(id: u32)` to `Entity::from_raw(id: u32)`.
- Add further documentation.
- fixes #3108

## Solution

- Renamed `Entity::new(id: u32)` to `Entity::from_raw(id: u32)`.
- Docs extended.

I derived the examples from the discussion of issue #3108 .

The [first case](https://github.com/bevyengine/bevy/issues/3108#issuecomment-966669781) mentioned in the linked issue is quite obvious but the [second one](https://github.com/bevyengine/bevy/issues/3108#issuecomment-967093902) probably needs further explanation.


Co-authored-by: r4gus <david@thesugar.de>
2021-12-29 20:49:00 +00:00
Niklas Eicker
94db0176fe Add readme to errors crate and clean up cargo files (#3125)
# Objective

- Document that the error codes will be rendered on the bevy website (see bevyengine/bevy-website#216)
- Some Cargo.toml files did not include the license or a description field

## Solution

- Readme for the errors crate
- Mark internal/development crates with `publish = false`
- Add missing license/descriptions to some crates

- [x] merge bevyengine/bevy-website#216
2021-11-13 23:06:48 +00:00
Yoh Deadfall
ffde86efa0 Update to edition 2021 on master (#3028)
Objective
During work on #3009 I found that not all jobs use actions-rs, and therefore a previous version of Rust is used for them. So while compilation and other checks can pass, checking markup and the Android build may fail with compilation errors.

Solution
This PR adds `actions-rs` for any job running cargo, and updates the edition to 2021.
2021-10-27 00:12:14 +00:00
Paweł Grabarz
07ed1d053e Implement and require #[derive(Component)] on all component structs (#2254)
This implements the most minimal variant of #1843 - a derive for the marker trait. This is a prerequisite to more complicated features like statically defined storage types or opt-out component reflection.

In order to make component struct's purpose explicit and avoid misuse, it must be annotated with `#[derive(Component)]` (manual impl is discouraged for compatibility). Right now this is just a marker trait, but in the future it might be expanded. Making this change early allows us to make further changes later without breaking backward compatibility for derive macro users.

This already prevents a lot of issues, like using bundles in `insert` calls. Primitive types are no longer valid components as well. This can be easily worked around by adding newtype wrappers and deriving `Component` for them.
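
For example (a sketch using the api of that era):

```rust
use bevy_ecs::prelude::*;

// Components must now opt in explicitly.
#[derive(Component)]
struct Health(f32);

// Primitives are wrapped in newtypes instead of being inserted directly.
#[derive(Component)]
struct Score(u32);

fn spawn_player(mut commands: Commands) {
    commands.spawn().insert(Health(100.0)).insert(Score(0));
}
```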

One funny example of prevented bad code (from our own tests) is when a newtype struct or enum variant is used. Previously, it was possible to write `insert(Newtype)` instead of `insert(Newtype(value))`. That code compiled, because function pointers (in this case the newtype struct constructor) implement `Send + Sync + 'static`, so we allowed them to be used as components. This is no longer the case and such invalid code will trigger a compile error.


Co-authored-by: = <=>
Co-authored-by: TheRawMeatball <therawmeatball@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2021-10-03 19:23:44 +00:00
François
f6b42b8669 check that benches build (#2675)
# Objective

- Fixes #2674 
- Check that benches build

## Solution

- Adds a job that runs `cargo check --benches`


Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2021-08-26 00:40:56 +00:00
Carter Anderson
b47217bfab Spawn specific entities: spawn or insert operations, refactor spawn internals, world clearing (#2673)
This upstreams the code changes used by the new renderer to enable cross-app Entity reuse:

* Spawning at specific entities
* get_or_spawn: spawns an entity if it doesn't already exist and returns an EntityMut
* insert_or_spawn_batch: the batched equivalent to `world.get_or_spawn(entity).insert_bundle(bundle)`
* Clearing entities and storages
* Allocating Entities with "invalid" archetypes. These entities cannot be queried / are treated as "non existent". They serve as "reserved" entities that won't show up when calling `spawn()`. They must be "specifically spawned at" using apis like `get_or_spawn(entity)`.

In combination, these changes enable the "render world" to clear entities / storages each frame and reserve all "app world entities". These can then be spawned during the "render extract step".

This refactors "spawn" and "insert" code in a way that I think is a massive improvement to legibility and re-usability. It also yields marginal performance wins by reducing some duplicate lookups (less than a percentage point improvement on insertion benchmarks). There is also some potential for future unsafe reduction (by making BatchSpawner and BatchInserter generic). But for now I want to cut down generic usage to a minimum to encourage smaller binaries and faster compiles.

This is currently a draft because it needs more tests (although this code has already had some real-world testing on my custom-shaders branch). 

I also fixed the benchmarks (which currently don't compile!) / added new ones to illustrate batching wins.

After these changes, Bevy ECS is basically ready to accommodate the new renderer. I think the biggest missing piece at this point is "sub apps".
2021-08-25 23:34:02 +00:00
Carter Anderson
a89a954a17 Not me ... us (#2654)
I don't see much of a reason at this point to boost my name over anyone elses. We are all Bevy Contributors.
2021-08-15 20:08:52 +00:00
François
b724a0f586 Down with the system! (#2496)
# Objective

- Remove all the `.system()` possible.
- Check for remaining missing cases.

## Solution

- Remove all `.system()`, fix compile errors
- 32 calls to `.system()` remain, mostly internal; the few others should be removed after #2446 (see the sketch below)
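
A minimal before/after (a sketch using the `App` api naming of later releases):

```rust
use bevy::prelude::*;

fn hello_world() {
    println!("hello world!");
}

fn main() {
    App::new()
        // before: .add_system(hello_world.system())
        .add_system(hello_world)
        .run();
}
```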
2021-07-27 23:42:36 +00:00
François
234b2efa71 Inline world get (#2520)
# Objective

While looking at the code of `World`, I noticed two basic functions (`get` and `get_mut`) that are probably called a lot, have simple code, and yet are not marked `#[inline]`

## Solution

- Add benchmark to check impact
- Add `#[inline]`


```
group                                            this pr                                main
-----                                            ----                                   ----
world_entity/50000_entities                      1.00   115.9±11.90µs        ? ?/sec    1.71   198.5±29.54µs        ? ?/sec
world_get/50000_entities_SparseSet               1.00   409.9±46.96µs        ? ?/sec    1.18   483.5±36.41µs        ? ?/sec
world_get/50000_entities_Table                   1.00   391.3±29.83µs        ? ?/sec    1.16   455.6±57.85µs        ? ?/sec
world_query_for_each/50000_entities_SparseSet    1.02   121.3±18.36µs        ? ?/sec    1.00   119.4±13.88µs        ? ?/sec
world_query_for_each/50000_entities_Table        1.03     13.8±0.96µs        ? ?/sec    1.00     13.3±0.54µs        ? ?/sec
world_query_get/50000_entities_SparseSet         1.00   666.9±54.36µs        ? ?/sec    1.03   687.1±57.77µs        ? ?/sec
world_query_get/50000_entities_Table             1.01   584.4±55.12µs        ? ?/sec    1.00   576.3±36.13µs        ? ?/sec
world_query_iter/50000_entities_SparseSet        1.01   169.7±19.50µs        ? ?/sec    1.00   168.6±32.56µs        ? ?/sec
world_query_iter/50000_entities_Table            1.00     26.2±1.38µs        ? ?/sec    1.91     50.0±4.40µs        ? ?/sec
```

I didn't add benchmarks for the mutable path but I don't see how it could hurt to make it inline too...
2021-07-27 23:19:26 +00:00
Nathan Ward
ecb78048cf [ecs] Improve Commands performance (#2332)
# Objective

- Currently `Commands` are quite slow due to the need to allocate for each command and wrap it in a `Box<dyn Command>`.
- For example:
```rust
fn my_system(mut cmds: Commands) {
    cmds.spawn().insert(42).insert(3.14);
}
```
will have 3 separate `Box<dyn Command>` that need to be allocated and ran.

## Solution

- Utilize a specialized data structure called `CommandQueueInner`. 
- The purpose of `CommandQueueInner` is to hold a collection of commands in contiguous memory. 
- This allows us to store each `Command` type contiguously in memory and quickly iterate through them and apply the `Command::write` trait function to each element.
2021-07-16 19:57:20 +00:00
Nathan Ward
46b822e3da Commands benchmarking (#2334)
# Objective

- Currently the `Commands` and `CommandQueue` have no performance testing. 
- As `Commands` are quite expensive due to the `Box<dyn Command>` allocated for each command, there should be perf tests for implementations that attempt to improve the performance.  

## Solution

- Add some benchmarking for `Commands` and `CommandQueue`.
2021-06-28 23:26:29 +00:00
Carter Anderson
3a2a68852c Bevy ECS V2 (#1525)
# Bevy ECS V2

This is a rewrite of Bevy ECS (basically everything but the new executor/schedule, which are already awesome). The overall goal was to improve the performance and versatility of Bevy ECS. Here is a quick bulleted list of changes before we dive into the details:

* Complete World rewrite
* Multiple component storage types:
    * Tables: fast cache friendly iteration, slower add/removes (previously called Archetypes)
    * Sparse Sets: fast add/remove, slower iteration
* Stateful Queries (caches query results for faster iteration. fragmented iteration is _fast_ now)
* Stateful System Params (caches expensive operations. inspired by @DJMcNab's work in #1364)
* Configurable System Params (users can set configuration when they construct their systems. once again inspired by @DJMcNab's work)
* Archetypes are now "just metadata", component storage is separate
* Archetype Graph (for faster archetype changes)
* Component Metadata
    * Configure component storage type
    * Retrieve information about component size/type/name/layout/send-ness/etc
    * Components are uniquely identified by a densely packed ComponentId
    * TypeIds are now totally optional (which should make implementing scripting easier)
* Super fast "for_each" query iterators
* Merged Resources into World. Resources are now just a special type of component
* EntityRef/EntityMut builder apis (more efficient and more ergonomic)
* Fast bitset-backed `Access<T>` replaces old hashmap-based approach everywhere
* Query conflicts are determined by component access instead of archetype component access (to avoid random failures at runtime)
    * With/Without are still taken into account for conflicts, so this should still be comfy to use
* Much simpler `IntoSystem` impl
* Significantly reduced the amount of hashing throughout the ecs in favor of Sparse Sets (indexed by densely packed ArchetypeId, ComponentId, BundleId, and TableId)
* Safety Improvements
    * Entity reservation uses a normal world reference instead of unsafe transmute
    * QuerySets no longer transmute lifetimes
    * Made traits "unsafe" where relevant
    * More thorough safety docs
* WorldCell
    * Exposes safe mutable access to multiple resources at a time in a World 
* Replaced "catch all" `System::update_archetypes(world: &World)` with `System::new_archetype(archetype: &Archetype)`
* Simpler Bundle implementation
* Replaced slow "remove_bundle_one_by_one" used as fallback for Commands::remove_bundle with fast "remove_bundle_intersection"
* Removed `Mut<T>` query impl. it is better to only support one way: `&mut T` 
* Removed with() from `Flags<T>` in favor of `Option<Flags<T>>`, which allows querying for flags to be "filtered" by default 
* Components now have is_send property (currently only resources support non-send)
* More granular module organization
* New `RemovedComponents<T>` SystemParam that replaces `query.removed::<T>()`
* `world.resource_scope()` for mutable access to resources and world at the same time
* WorldQuery and QueryFilter traits unified. FilterFetch trait added to enable "short circuit" filtering. Auto impled for cases that don't need it
* Significantly slimmed down SystemState in favor of individual SystemParam state
* System Commands changed from `commands: &mut Commands` back to `mut commands: Commands` (to allow Commands to have a World reference)

Fixes #1320

## `World` Rewrite

This is a from-scratch rewrite of `World` that fills the niche that `hecs` used to. Yes, this means Bevy ECS is no longer a "fork" of hecs. We're going out on our own!

(the only shared code between the projects is the entity id allocator, which is already basically ideal)

A huge shout out to @SanderMertens (author of [flecs](https://github.com/SanderMertens/flecs)) for sharing some great ideas with me (specifically hybrid ecs storage and archetype graphs). He also helped advise on a number of implementation details.

## Component Storage (The Problem)

Two ECS storage paradigms have gained a lot of traction over the years:

* **Archetypal ECS**: 
    * Stores components in "tables" with static schemas. Each "column" stores components of a given type. Each "row" is an entity.
    * Each "archetype" has its own table. Adding/removing an entity's component changes the archetype.
    * Enables super-fast Query iteration due to its cache-friendly data layout
    * Comes at the cost of more expensive add/remove operations for an Entity's components, because all components need to be copied to the new archetype's "table"
* **Sparse Set ECS**:
    * Stores components of the same type in densely packed arrays, which are sparsely indexed by densely packed unsigned integers (Entity ids)
    * Query iteration is slower than Archetypal ECS because each entity's component could be at any position in the sparse set. This "random access" pattern isn't cache friendly. Additionally, there is an extra layer of indirection because you must first map the entity id to an index in the component array.
    * Adding/removing components is a cheap, constant time operation 

Bevy ECS V1, hecs, legion, flecs, and Unity DOTS are all "archetypal ecs-es". I personally think "archetypal" storage is a good default for game engines. An entity's archetype doesn't need to change frequently in general, and it creates "fast by default" query iteration (which is a much more common operation). It is also "self optimizing". Users don't need to think about optimizing component layouts for iteration performance. It "just works" without any extra boilerplate.

Shipyard and EnTT are "sparse set ecs-es". They employ "packing" as a way to work around the "suboptimal by default" iteration performance for specific sets of components. This helps, but I didn't think this was a good choice for a general purpose engine like Bevy because:

1. "packs" conflict with each other. If bevy decides to internally pack the Transform and GlobalTransform components, users are then blocked if they want to pack some custom component with Transform.
2. users need to take manual action to optimize

Developers selecting an ECS framework are stuck with a hard choice. Select an "archetypal" framework with "fast iteration everywhere" but without the ability to cheaply add/remove components, or select a "sparse set" framework to cheaply add/remove components but with slower iteration performance.

## Hybrid Component Storage (The Solution)

In Bevy ECS V2, we get to have our cake and eat it too. It now has _both_ of the component storage types above (and more can be added later if needed):

* **Tables** (aka "archetypal" storage)
    * The default storage. If you don't configure anything, this is what you get
    * Fast iteration by default
    * Slower add/remove operations
* **Sparse Sets**
    * Opt-in
    * Slower iteration
    * Faster add/remove operations

These storage types complement each other perfectly. By default Query iteration is fast. If developers know that they want to add/remove a component at high frequencies, they can set the storage to "sparse set":

```rust
world.register_component(
    ComponentDescriptor::new::<MyComponent>(StorageType::SparseSet)
).unwrap();
```

## Archetypes

Archetypes are now "just metadata" ... they no longer store components directly. They do store:

* The `ComponentId`s of each of the Archetype's components (and that component's storage type)
    * Archetypes are uniquely defined by their component layouts
    * For example: entities with "table" components `[A, B, C]` _and_ "sparse set" components `[D, E]` will always be in the same archetype.
* The `TableId` associated with the archetype
    * For now each archetype has exactly one table (which can have no components),
    * There is a 1->Many relationship from Tables->Archetypes. A given table could have any number of archetype components stored in it:
        * Ex: an entity with "table storage" components `[A, B, C]` and "sparse set" components `[D, E]` will share the same `[A, B, C]` table as an entity with `[A, B, C]` table component and `[F]` sparse set components.
        * This 1->Many relationship is how we preserve fast "cache friendly" iteration performance when possible (more on this later)
* A list of entities that are in the archetype and the row id of the table they are in
* ArchetypeComponentIds
    * unique densely packed identifiers for (ArchetypeId, ComponentId) pairs
    * used by the schedule executor for cheap system access control
* "Archetype Graph Edges" (see the next section)  

## The "Archetype Graph"

Archetype changes in Bevy (and a number of other archetypal ecs-es) have historically been expensive to compute. First, you need to allocate a new vector of the entity's current component ids, add or remove components based on the operation performed, sort it (to ensure it is order-independent), then hash it to find the archetype (if it exists). And thats all before we get to the _already_ expensive full copy of all components to the new table storage.

The solution is to build a "graph" of archetypes to cache these results. @SanderMertens first exposed me to the idea (and he got it from @gjroelofs, who came up with it). They propose adding directed edges between archetypes for add/remove component operations. If `ComponentId`s are densely packed, you can use sparse sets to cheaply jump between archetypes.

Bevy takes this one step further by using add/remove `Bundle` edges instead of `Component` edges. Bevy encourages the use of `Bundles` to group add/remove operations. This is largely for "clearer game logic" reasons, but it also helps cut down on the number of archetype changes required. `Bundles` now also have densely-packed `BundleId`s. This allows us to use a _single_ edge for each bundle operation (rather than needing to traverse N edges ... one for each component). Single component operations are also bundles, so this is strictly an improvement over a "component only" graph.

As a result, an operation that used to be _heavy_ (both for allocations and compute) is now two dirt-cheap array lookups and zero allocations.

## Stateful Queries

World queries are now stateful. This allows us to:

1. Cache archetype (and table) matches
    * This resolves another issue with (naive) archetypal ECS: query performance getting worse as the number of archetypes goes up (and fragmentation occurs).
2. Cache Fetch and Filter state
    * The expensive parts of fetch/filter operations (such as hashing the TypeId to find the ComponentId) now only happen once when the Query is first constructed
3. Incrementally build up state
    * When new archetypes are added, we only process the new archetypes (no need to rebuild state for old archetypes)

As a result, the direct `World` query api now looks like this:

```rust
let mut query = world.query::<(&A, &mut B)>();
for (a, mut b) in query.iter_mut(&mut world) {
}
```

Requiring `World` to generate stateful queries (rather than letting the `QueryState` type be constructed separately) allows us to ensure that _all_ queries are properly initialized (and the relevant world state, such as ComponentIds). This enables QueryState to remove branches from its operations that check for initialization status (and also enables query.iter() to take an immutable world reference because it doesn't need to initialize anything in world).

However in systems, this is a non-breaking change. State management is done internally by the relevant SystemParam.

## Stateful SystemParams

Like Queries, `SystemParams` now also cache state. For example, `Query` system params store the "stateful query" state mentioned above. Commands store their internal `CommandQueue`. This means you can now safely use as many separate `Commands` parameters in your system as you want. `Local<T>` system params store their `T` value in their state (instead of in Resources). 

SystemParam state also enabled a significant slim-down of SystemState. It is much nicer to look at now.

Per-SystemParam state naturally insulates us from an "aliased mut" class of errors we have hit in the past (ex: using multiple `Commands` system params).

(credit goes to @DJMcNab for the initial idea and draft pr here #1364)

## Configurable SystemParams

@DJMcNab also had the great idea to make SystemParams configurable. This allows users to provide some initial configuration / values for system parameters (when possible). Most SystemParams have no config (the config type is `()`), but the `Local<T>` param now supports user-provided parameters:

```rust

fn foo(value: Local<usize>) {    
}

app.add_system(foo.system().config(|c| c.0 = Some(10)));
```

## Uber Fast "for_each" Query Iterators

Developers now have the choice to use a fast "for_each" iterator, which yields ~1.5-3x iteration speed improvements for "fragmented iteration", and minor ~1.2x iteration speed improvements for unfragmented iteration. 

```rust
fn system(query: Query<(&A, &mut B)>) {
    // you now have the option to do this for a speed boost
    query.for_each_mut(|(a, mut b)| {
    });

    // however normal iterators are still available
    for (a, mut b) in query.iter_mut() {
    }
}
```

I think in most cases we should continue to encourage "normal" iterators as they are more flexible and more "rust idiomatic". But when that extra "oomf" is needed, it makes sense to use `for_each`.

We should also consider using `for_each` for internal bevy systems to give our users a nice speed boost (but that should be a separate pr).

## Component Metadata

`World` now has a `Components` collection, which is accessible via `world.components()`. This stores mappings from `ComponentId` to `ComponentInfo`, as well as `TypeId` to `ComponentId` mappings (where relevant). `ComponentInfo` stores information about the component, such as ComponentId, TypeId, memory layout, send-ness (currently limited to resources), and storage type.

## Significantly Cheaper `Access<T>`

We used to use `TypeAccess<TypeId>` to manage read/write component/archetype-component access. This was expensive because TypeIds must be hashed and compared individually. The parallel executor got around this by "condensing" type ids into bitset-backed access types. This worked, but it had to be re-generated from the `TypeAccess<TypeId>` sources every time archetypes changed.

This pr removes TypeAccess in favor of faster bitset access everywhere. We can do this thanks to the move to densely packed `ComponentId`s and `ArchetypeComponentId`s.

## Merged Resources into World

Resources had a lot of redundant functionality with Components. They stored typed data, they had access control, they had unique ids, they were queryable via SystemParams, etc. In fact the _only_ major difference between them was that they were unique (and didn't correlate to an entity).

Separate resources also had the downside of requiring a separate set of access controls, which meant the parallel executor needed to compare more bitsets per system and manage more state.

I initially got the "separate resources" idea from `legion`. I think that design was motivated by the fact that it made the direct world query/resource lifetime interactions more manageable. It certainly made our lives easier when using Resources alongside hecs/bevy_ecs. However we already have a construct for safely and ergonomically managing in-world lifetimes: systems (which use `Access<T>` internally).

This pr merges Resources into World:

```rust
world.insert_resource(1);
world.insert_resource(2.0);
let a = world.get_resource::<i32>().unwrap();
let mut b = world.get_resource_mut::<f64>().unwrap();
*b = 3.0;
```

Resources are now just a special kind of component. They have their own ComponentIds (and their own resource TypeId->ComponentId scope, so they don't conflict with components of the same type). They are stored in a special "resource archetype", which stores components inside the archetype using a new `unique_components` sparse set (note that this sparse set could later be used to implement Tags). This allows us to keep the code size small by reusing existing data structures (namely Column, Archetype, ComponentFlags, and ComponentInfo). This allows the executor to use a single `Access<ArchetypeComponentId>` per system. It should also make scripting language integration easier.
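As a rough illustration of the "resource archetype" idea (stand-in types, not the real implementation), each resource occupies a single-value entry keyed by its ComponentId:

```rust
use std::any::Any;
use std::collections::HashMap;

// stand-in sketch: one "unique component" entry per resource ComponentId,
// each holding exactly one value
struct ResourceArchetype {
    unique_components: HashMap<usize, Box<dyn Any>>,
}

impl ResourceArchetype {
    fn get<T: 'static>(&self, component_id: usize) -> Option<&T> {
        self.unique_components
            .get(&component_id)
            .and_then(|value| value.downcast_ref::<T>())
    }
}
```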

_But_ this merge did create problems for people directly interacting with `World`. What if you need mutable access to multiple resources at the same time? `world.get_resource_mut()` borrows World mutably!

## WorldCell

WorldCell applies the `Access<ArchetypeComponentId>` concept to direct world access:

```rust
let world_cell = world.cell();
let a = world_cell.get_resource_mut::<i32>().unwrap();
let b = world_cell.get_resource_mut::<f64>().unwrap();
```

This adds cheap runtime checks (a sparse set lookup of `ArchetypeComponentId` and a counter) to ensure that world accesses do not conflict with each other. Each operation returns a `WorldBorrow<'w, T>` or `WorldBorrowMut<'w, T>` wrapper type, which will release the relevant ArchetypeComponentId resources when dropped.

World caches the access sparse set (and only one cell can exist at a time), so `world.cell()` is a cheap operation. 

WorldCell does _not_ use atomic operations. It is non-send, does a mutable borrow of world to prevent other accesses, and uses a simple `Rc<RefCell<ArchetypeComponentAccess>>` wrapper in each WorldBorrow pointer. 
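Here is a self-contained sketch of that counter scheme (stand-in types, not the real WorldCell internals): a shared access map counts readers per id, uses a sentinel value for a writer, and each borrow releases its entry on drop:

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

const UNIQUE: isize = -1; // sentinel meaning "mutably borrowed"

#[derive(Default)]
struct AccessMap(Rc<RefCell<HashMap<usize, isize>>>);

struct Borrow {
    access: Rc<RefCell<HashMap<usize, isize>>>,
    id: usize,
}

impl AccessMap {
    fn read(&self, id: usize) -> Option<Borrow> {
        let mut map = self.0.borrow_mut();
        let count = map.entry(id).or_insert(0);
        if *count == UNIQUE {
            return None; // already mutably borrowed
        }
        *count += 1;
        Some(Borrow { access: Rc::clone(&self.0), id })
    }

    fn write(&self, id: usize) -> Option<Borrow> {
        let mut map = self.0.borrow_mut();
        let count = map.entry(id).or_insert(0);
        if *count != 0 {
            return None; // already borrowed (shared or unique)
        }
        *count = UNIQUE;
        Some(Borrow { access: Rc::clone(&self.0), id })
    }
}

impl Drop for Borrow {
    fn drop(&mut self) {
        let mut map = self.access.borrow_mut();
        let count = map.get_mut(&self.id).unwrap();
        *count = if *count == UNIQUE { 0 } else { *count - 1 };
    }
}
```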

The api is currently limited to resource access, but it can and should be extended to queries / entity component access.

## Resource Scopes

WorldCell does not yet support component queries, and even when it does there are sometimes legitimate reasons to want a mutable world ref _and_ a mutable resource ref (ex: bevy_render and bevy_scene both need this). In these cases we could always drop down to the unsafe `world.get_resource_unchecked_mut()`, but that is not ideal!

Instead developers can use a "resource scope"

```rust
world.resource_scope(|world: &mut World, a: &mut A| {
})
```

This temporarily removes the `A` resource from `World`, provides mutable pointers to both, and re-adds A to World when finished. Thanks to the move to ComponentIds/sparse sets, this is a cheap operation.

If multiple resources are required, scopes can be nested. We could also consider adding a "resource tuple" to the api if this pattern becomes common and the boilerplate gets nasty.
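The take-then-reinsert pattern behind `resource_scope` can be sketched like this (toy `World` for illustration, not the real one):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

#[derive(Default)]
struct World {
    resources: HashMap<TypeId, Box<dyn Any>>,
}

impl World {
    fn insert_resource<T: 'static>(&mut self, value: T) {
        self.resources.insert(TypeId::of::<T>(), Box::new(value));
    }

    fn resource_scope<T: 'static>(&mut self, f: impl FnOnce(&mut World, &mut T)) {
        // temporarily move the resource out so &mut World and &mut T can't alias
        let mut value: T = *self
            .resources
            .remove(&TypeId::of::<T>())
            .and_then(|boxed| boxed.downcast::<T>().ok())
            .expect("resource not found");
        f(self, &mut value);
        // put it back when the scope ends
        self.resources.insert(TypeId::of::<T>(), Box::new(value));
    }
}
```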

## Query Conflicts Use ComponentId Instead of ArchetypeComponentId

For safety reasons, systems cannot contain queries that conflict with each other without wrapping them in a QuerySet. On bevy `main`, we use ArchetypeComponentIds to determine conflicts. This is nice because it can take into account filters:

```rust
// these queries will never conflict due to their filters
fn filter_system(a: Query<&mut A, With<B>>, b: Query<&mut A, Without<B>>) {
}
```

But it also has a significant downside:
```rust
// these queries will not conflict _until_ an entity with A, B, and C is spawned
fn maybe_conflicts_system(a: Query<(&mut A, &C)>, b: Query<(&mut A, &B)>) {
}
```

The system above will panic at runtime if an entity with A, B, and C is spawned. This makes it hard to trust that your game logic will run without crashing.

In this pr, I switched to using `ComponentId` instead. This _is_ more constraining. `maybe_conflicts_system` will now always fail, but it will do it consistently at startup. Naively, it would also _disallow_ `filter_system`, which would be a significant downgrade in usability. Bevy has a number of internal systems that rely on disjoint queries and I expect it to be a common pattern in userspace.

To resolve this, I added a new `FilteredAccess<T>` type, which wraps `Access<T>` and adds with/without filters. If two `FilteredAccess` have with/without values that prove they are disjoint, they will no longer conflict.
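The disjointness rule can be sketched like this (stand-in types, and component access reduced to "writes" for brevity):

```rust
use std::collections::HashSet;

struct FilteredAccess {
    writes: HashSet<usize>,  // ComponentIds written by the query
    with: HashSet<usize>,    // ComponentIds required by With<T> filters
    without: HashSet<usize>, // ComponentIds excluded by Without<T> filters
}

impl FilteredAccess {
    fn is_compatible(&self, other: &FilteredAccess) -> bool {
        // if one access requires a component the other forbids, the two queries can
        // never match the same entity, so even overlapping writes are fine
        let provably_disjoint = !self.with.is_disjoint(&other.without)
            || !other.with.is_disjoint(&self.without);
        provably_disjoint || self.writes.is_disjoint(&other.writes)
    }
}
```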

## EntityRef / EntityMut

World entity operations on `main` require that the user passes in an `entity` id to each operation:

```rust
let entity = world.spawn((A, )); // create a new entity with A
world.get::<A>(entity);
world.insert(entity, (B, C));
world.insert_one(entity, D);
```

This means that each operation needs to look up the entity location / verify its validity. The initial spawn operation also requires a Bundle as input. This can be awkward when no components are required (or one component is required).

These operations have been replaced by `EntityRef` and `EntityMut`, which are "builder-style" wrappers around world that provide read and read/write operations on a single, pre-validated entity:

```rust
// spawn now takes no inputs and returns an EntityMut
let entity = world.spawn()
    .insert(A) // insert a single component into the entity
    .insert_bundle((B, C)) // insert a bundle of components into the entity
    .id(); // id returns the Entity id

// Returns EntityMut (or panics if the entity does not exist)
world.entity_mut(entity)
    .insert(D)
    .insert_bundle(SomeBundle::default());
{
    // returns EntityRef (or panics if the entity does not exist)
    let d = world.entity(entity)
        .get::<D>() // gets the D component
        .unwrap();
    // world.get still exists for ergonomics
    let d = world.get::<D>(entity).unwrap();
}

// These variants return Options if you want to check existence instead of panicking
world.get_entity_mut(entity)
    .unwrap()
    .insert(E);

if let Some(entity_ref) = world.get_entity(entity) {
    let d = entity_ref.get::<D>().unwrap();
}
```

This _does not_ affect the current Commands api or terminology. I think that should be a separate conversation as that is a much larger breaking change.

## Safety Improvements

* Entity reservation in Commands uses a normal world borrow instead of an unsafe transmute
* QuerySets no longer transmutes lifetimes
* Made traits "unsafe" when implementing a trait incorrectly could cause unsafety
* More thorough safety docs

## RemovedComponents SystemParam

The old approach to querying removed components (`query.removed::<T>()`) was confusing because it had no connection to the query itself. I replaced it with the following, which is both clearer and allows us to cache the ComponentId mapping in the SystemParamState:

```rust
fn system(removed: RemovedComponents<T>) {
    for entity in removed.iter() {
    }
} 
```

## Simpler Bundle implementation

Bundles are no longer responsible for sorting (or deduping) TypeInfo. They are just a simple ordered list of component types / data. This makes the implementation smaller and opens the door to an easy "nested bundle" implementation in the future (which I might even add in this pr). Duplicate detection is now done once per bundle type by World the first time a bundle is used.
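Conceptually the new contract is something like this (trait and method names are illustrative, not the actual api):

```rust
use std::any::TypeId;

// illustrative sketch: a bundle just reports its component types in a fixed order;
// the World checks for duplicates once per bundle type instead of sorting/deduping here
trait SimpleBundle {
    fn component_types() -> Vec<TypeId>;
}

struct A;
struct B;

impl SimpleBundle for (A, B) {
    fn component_types() -> Vec<TypeId> {
        vec![TypeId::of::<A>(), TypeId::of::<B>()]
    }
}
```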

## Unified WorldQuery and QueryFilter types

(don't worry they are still separate type _parameters_ in Queries .. this is a non-breaking change)

WorldQuery and QueryFilter were already basically identical apis. With the addition of `FetchState` and more storage-specific fetch methods, the overlap was even clearer (and the redundancy more painful).

QueryFilters are now just `F: WorldQuery where F::Fetch: FilterFetch`. FilterFetch requires `Fetch<Item = bool>` and adds new "short circuit" variants of fetch methods. This enables a filter tuple like `(With<A>, Without<B>, Changed<C>)` to stop evaluating the filter after the first mismatch is encountered. FilterFetch is automatically implemented for `Fetch` implementations that return bool.

This forces fetch implementations that return things like `(bool, bool, bool)` (such as the filter above) to manually implement FilterFetch and decide whether or not to short-circuit.
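The short-circuit behavior boils down to something like this (stand-in traits for illustration):

```rust
trait FilterCheck {
    fn matches(&self, entity: usize) -> bool;
}

// a tuple filter stops evaluating as soon as one element fails, instead of
// computing (bool, bool, ...) for every element and combining afterwards
impl<A: FilterCheck, B: FilterCheck> FilterCheck for (A, B) {
    fn matches(&self, entity: usize) -> bool {
        // `&&` short-circuits: B is never consulted when A already failed
        self.0.matches(entity) && self.1.matches(entity)
    }
}
```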

## More Granular Modules

World no longer globs all of the internal modules together. It now exports `core`, `system`, and `schedule` separately. I'm also considering exporting `core` submodules directly as that is still pretty "glob-ey" and unorganized (feedback welcome here).

## Remaining Draft Work (to be done in this pr)

* ~~panic on conflicting WorldQuery fetches (&A, &mut A)~~
    * ~~bevy `main` and hecs both currently allow this, but we should protect against it if possible~~
* ~~batch_iter / par_iter (currently stubbed out)~~
* ~~ChangedRes~~
    * ~~I skipped this while we sort out #1313. This pr should be adapted to account for whatever we land on there~~.
* ~~The `Archetypes` and `Tables` collections use hashes of sorted lists of component ids to uniquely identify each archetype/table. This hash is then used as the key in a HashMap to look up the relevant ArchetypeId or TableId. (which doesn't handle hash collisions properly)~~
* ~~It is currently unsafe to generate a Query from "World A", then use it on "World B" (despite the api claiming it is safe). We should probably close this gap. This could be done by adding a randomly generated WorldId to each world, then storing that id in each Query. They could then be compared to each other on each `query.do_thing(&world)` operation. This _does_ add an extra branch to each query operation, so I'm open to other suggestions if people have them.~~
* ~~Nested Bundles (if i find time)~~

## Potential Future Work

* Expand WorldCell to support queries.
* Consider not allocating in the empty archetype on `world.spawn()`
    * ex: return something like EntityMutUninit, which turns into EntityMut after an `insert` or `insert_bundle` op
    * this actually regressed performance last time i tried it, but in theory it should be faster
* Optimize SparseSet::insert (see `PERF` comment on insert)
* Replace SparseArray `Option<T>` with T::MAX to cut down on branching
    * would enable cheaper get_unchecked() operations
* upstream fixedbitset optimizations
    * fixedbitset could be allocation free for small block counts (store blocks in a SmallVec)
    * fixedbitset could have a const constructor 
* Consider implementing Tags (archetype-specific by-value data that affects archetype identity) 
    * ex: ArchetypeA could have `[A, B, C]` table components and `[D(1)]` "tag" component. ArchetypeB could have `[A, B, C]` table components and a `[D(2)]` tag component. The archetypes are different, despite both having D tags because the value inside D is different.
    * this could potentially build on top of the `archetype.unique_components` added in this pr for resource storage.
* Consider reverting `all_tuples` proc macro in favor of the old `macro_rules` implementation
    * all_tuples is more flexible and produces cleaner documentation (the macro_rules version produces weird type parameter orders due to parser constraints)
    * but unfortunately all_tuples also appears to make Rust Analyzer sad/slow when working inside of `bevy_ecs` (does not affect user code)
* Consider "resource queries" and/or "mixed resource and entity component queries" as an alternative to WorldCell
    * this is basically just "systems" so maybe it's not worth it
* Add more world ops
    * `world.clear()`
    * `world.reserve<T: Bundle>(count: usize)`
* Try using the old archetype allocation strategy (allocate new memory on resize and copy everything over). I expect this to improve batch insertion performance at the cost of unbatched performance. But that's just a guess. I'm not an allocation perf pro :)
* Adapt Commands apis for consistency with new World apis

## Benchmarks

key:

* `bevy_old`: bevy `main` branch
* `bevy`: this branch
* `_foreach`: uses an optimized for_each iterator
* `_sparse`: uses sparse set storage (if unspecified assume table storage)
* `_system`: runs inside a system (if unspecified assume test happens via direct world ops)

### Simple Insert (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109245573-9c3ce100-7795-11eb-9003-bfd41cd5c51f.png)

### Simple Iter (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109245795-ffc70e80-7795-11eb-92fb-3ffad09aabf7.png)

### Fragment Iter (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109245849-0fdeee00-7796-11eb-8d25-eb6b7a682c48.png)

### Sparse Fragmented Iter

Iterates a query that matches 5 entities in a single matching archetype, while 100 non-matching archetypes also exist

![image](https://user-images.githubusercontent.com/2694663/109245916-2b49f900-7796-11eb-9a8f-ed89c203f940.png)
 
### Schedule (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109246428-1fab0200-7797-11eb-8841-1b2161e90fa4.png)

### Add Remove Component (from ecs_bench_suite)

![image](https://user-images.githubusercontent.com/2694663/109246492-39e4e000-7797-11eb-8985-2706bd0495ab.png)


### Add Remove Component Big

Same as the test above, but each entity has 5 "large" matrix components and 1 "large" matrix component is added and removed

![image](https://user-images.githubusercontent.com/2694663/109246517-449f7500-7797-11eb-835e-28b6790daeaa.png)


### Get Component

Looks up a single component value a large number of times

![image](https://user-images.githubusercontent.com/2694663/109246129-87ad1880-7796-11eb-9fcb-c38012aa7c70.png)
2021-03-05 07:54:35 +00:00
Nathan Stocks
13b602ee3f Xtask CI (#1387)
This PR is easiest to review commit by commit.

Followup on https://github.com/bevyengine/bevy/pull/1309#issuecomment-767310084

- [x] Switch from a bash script to an xtask rust workspace member.
  - Results in ~30s longer CI due to compilation of the xtask itself
  - Enables Bevy contributors on any platform to run `cargo ci` to run linting -- if the default available Rust is the same version as on CI, then the command should give an identical result.
- [x] Use the xtask from official CI so there's only one place to update.
- [x] Bonus: Run clippy on the _entire_ workspace (the existing CI setup was missing the `--workspace` flag)
  - [x] Clean up newly-exposed clippy errors 

~~#1388 builds on this to clean up newly discovered clippy errors -- I thought it might be nicer as a separate PR.~~ Nope, merged it into this one so CI would pass.

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2021-02-22 08:42:19 +00:00
Yoh Deadfall
578a5b1b88 Moved benchmarks to a single place (#1477)
Closes #1472.
2021-02-19 22:11:00 +00:00
Smite Rust
a9ce7f4e82
update dependencies (#470) 2020-09-10 12:54:24 -07:00
Carter Anderson
9c850057c0 move benches to separate crate to cut test/example build times 2020-09-08 13:30:44 -07:00