PR #15756 made us create temporary render entities for all visible
objects, even if they had no render world counterpart. This regressed
our `many_cubes` time from about 3.59 ms/frame to 4.66 ms/frame.
This commit changes that behavior to use `Entity::PLACEHOLDER` instead
of creating a temporary render entity. This improves our `many_cubes`
time from 5.66 ms/frame to 3.96 ms/frame, a 43% speedup.
I tested 3D, 2D gizmos, and UI and they seem to work.
See the following graph of `many_cubes` frame time (lower is better). PR
#15756 is the one in October.
![Time (ms_frame) vs
Date(3)](https://github.com/user-attachments/assets/2c31a893-97bd-40f6-9e89-d2195a44cf40)
# Objective
The `RayCastSettings` type is only used in the context of ray casts with
the `MeshRayCast` system parameter. The current name is somewhat
inconsistent with other existing types, like `MeshRayCast` and
`MeshPickingSettings`, but more importantly, it easily conflicts with
physics, and forces those crates to opt for some other name like
`RayCastConfig` or `RayCastOptions`.
We should rename `RayCastSettings` to `MeshRayCastSettings` to avoid
naming conflicts and improve consistency.
## Solution
Rename `RayCastSettings` to `MeshRayCastSettings`.
---
## Migration Guide
`RayCastSettings` has been renamed to `MeshRayCastSettings` to avoid
naming conflicts with other ray casting backends and types.
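As a sketch of the migration (the surrounding `cast_ray` call is illustrative, not taken from this PR):

```rust
// Before
let hits = mesh_ray_cast.cast_ray(ray, &RayCastSettings::default());

// After
let hits = mesh_ray_cast.cast_ray(ray, &MeshRayCastSettings::default());
```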
## Objective
Follow-up to #16672.
`EntityCommands::clone` looks the same as the `Clone` trait, which could
be confusing. A discord discussion has made me realize that's probably a
bigger problem than I thought. Oops :P
## Solution
Renamed `EntityCommands::clone` to `EntityCommands::clone_and_spawn`,
renamed `EntityCommands::clone_with` to
`EntityCommands::clone_and_spawn_with`. Also added some docs explaining
the commands' relation to `Clone` (components need to implement either
`Clone` or `Reflect`).
## Showcase
```rust
// Create a new entity and keep its EntityCommands
let mut entity = commands.spawn((ComponentA(10), ComponentB(20)));
// Create a clone of the first entity
let mut entity_clone = entity.clone_and_spawn();
```
## The Bikeshed
- `clone_and_spawn` (Alice's suggestion)
- `spawn_clone` (benfrankel's suggestion)
- `spawn_cloned` (rparrett's suggestion)
# Objective
The documentation for `Query::transmute_lens` lists some allowed
transmutes, but the list is incomplete.
## Solution
Document the underlying rules for what transmutes are allowed.
Add a longer list of examples. Write them as doc tests to ensure that
those examples are actually allowed.
I'm assuming that anything that can be done today is intended to be
supported! If any of these examples are things we plan to prohibit in
the future then we can add some warnings to that effect.
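For illustration, here is a minimal sketch of one transmute that is allowed today (narrowing a query to a subset of its data; `A` and `B` are placeholder components, not taken from the new docs):

```rust
use bevy_ecs::prelude::*;

#[derive(Component)]
struct A;

#[derive(Component)]
struct B;

// A query over `(&A, &B)` can be transmuted to a query over just `&A`.
fn narrow(mut query: Query<(&A, &B)>) {
    let mut lens = query.transmute_lens::<&A>();
    for _a in lens.query().iter() {
        // ...
    }
}
```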
# Objective
The parameter names for `bevy::math::ops::atan2` are labelled such that
`x` is the first argument and `y` is the second argument, but it passes
those arguments directly to
[`f32::atan2`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.atan2),
whose parameters are expected to be `(y, x)`. This PR changes the
parameter names in the bevy documentation to use the correct order for
the operation being performed. You can verify this by doing:
```rust
fn main() {
    let x = 3.0;
    let y = 4.0;
    let angle = bevy::math::ops::atan2(x, y);
    // standard polar coordinates formula
    dbg!(5.0 * angle.cos(), 5.0 * angle.sin());
}
```
This will print `(4.0, 3.0)`, which has flipped `x` and `y`. The problem
is that the `atan2` function to calculate the angle was really expecting
`(y, x)`, not `(x, y)`.
## Solution
I flipped the parameter names for `bevy::math::ops::atan2` and updated
the documentation. I also removed references to `self` and `other` from
the documentation which seemed to be copied from the `f32::atan2`
documentation.
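A small sketch of the corrected call order:

```rust
// The point (x, y) = (3.0, 4.0): `atan2` takes `y` first, then `x`.
let (x, y) = (3.0_f32, 4.0_f32);
let angle = bevy::math::ops::atan2(y, x);
// Recovers the original coordinates, printing roughly (3.0, 4.0).
dbg!(5.0 * angle.cos(), 5.0 * angle.sin());
```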
## Testing
Not really needed, you can compare the `f32::atan2` docs to the
`bevy::math::ops::atan2` docs to see the problem is obvious. If a test
is required I could add a short one.
## Migration Guide
I'm not sure if this counts as a breaking change, since the
implementation clearly meant to use `f32::atan2` directly, so it was
really just the parameter names that were wrong.
This fixes a minor copy-paste mistake in the `FontAtlasSet::is_empty`
method's documentation.
# Objective
- Correct the documentation for that method.
## Solution
- Remove the copy + paste'd docs from `FontAtlasSet::is_empty` and add
something similar to
`alloc::collections::btree_set::BTreeSet::is_empty`.
## Testing
- No testable changes were made. However, the two tests in the
`bevy_text` module still pass.
A small documentation improvement. The description was copied from
insert_children. I changed the documentation to be singular instead of
plural when referring to the child in add_child.
# Objective
- The description was copied from insert_children and still refers to
the child being added as plural children
## Solution
- Description now has child in singular form.
## Testing
- N/A
---------
Co-authored-by: IQuick 143 <IQuick143cz@gmail.com>
# Objective
We currently have no benchmarks for large worlds with many entities,
components and systems.
Having a benchmark for a world with many components is especially useful
for the performance improvements needed for relations. This is also a
response to this [comment from
cart](https://github.com/bevyengine/bevy/pull/14385#issuecomment-2311292546).
> I'd like both a small bevy_ecs-scoped executor benchmark that
generates thousands of components used by hundreds of systems.
## Solution
I use dynamic components and systems to construct a benchmark with
2000 components, 4000 systems, and 10000 entities.
## Some notes
- ~I use a lot of random entities, which creates unpredictable
performance, I should use a seeded PRNG.~
- Not entirely sure if everything currently runs concurrently. There are
many conflicts, meaning there's probably a lot of first-come-first-serve
going on, so I'm not sure how reproducible these benchmarks are.
- Maybe add some more safety comments
- Also, `component_reads_and_writes()` is about to be deprecated (#16339),
but there's currently no other way to do what I'm trying to do.
---------
Co-authored-by: Chris Russell <8494645+chescock@users.noreply.github.com>
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
# Objective
Fixes #16192
## Solution
I renamed `Pointer<Down>`/`Pointer<Up>` to `Pointer<Pressed>`/`Pointer<Released>`
and then resolved all the resulting errors.
Renamed variables like `is_down` to `is_pressed` to maintain
consistency.
Modified the docs in places where 'down'/'up' were used to maintain
consistency.
## Testing
I haven't tested this in any way beside the checks from rust analyzer
and the examples in the examples/ directory.
---
## Migration Guide
### `bevy_picking/src/pointer.rs`:
#### `enum PressDirection`:
- `PressDirection::Down` changes to `PressDirection::Pressed`.
- `PressDirection::Up` changes to `PressDirection::Released`.
These changes are also relevant when working with `enum PointerAction`
### `bevy_picking/src/events.rs`:
The clicking and pressing event categories in `events.rs` change from
`Down`, `Up`, `Click` to `Pressed`, `Released`, `Click`.
- `struct Down` changes to `struct Pressed` - fires when a pointer
button is pressed over the 'target' entity.
- `struct Up` changes to `struct Released` - fires when a pointer button
is released over the 'target' entity.
- `struct Click` now fires when a pointer sends a Pressed event followed
by a Released event on the same 'target'.
- `struct DragStart` now fires when the 'target' entity receives a
pointer Pressed event followed by a pointer Move event.
- `struct DragEnd` now fires when the 'target' entity is being dragged
and receives a pointer Released event.
- `PickingEventWriters<'w>::down_events: EventWriter<'w, Pointer<Down>>`
changes to `PickingEventWriters<'w>::pressed_events: EventWriter<'w,
Pointer<Pressed>>`.
- `PickingEventWriters<'w>::up_events` changes to
`PickingEventWriters<'w>::released_events`.
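A hedged before/after sketch of an observer affected by the rename (the observer body and the `entity`/`commands` variables are illustrative):

```rust
// Before
commands.entity(entity).observe(|_trigger: Trigger<Pointer<Down>>| {
    // react to a pointer button being pressed over this entity
});

// After
commands.entity(entity).observe(|_trigger: Trigger<Pointer<Pressed>>| {
    // react to a pointer button being pressed over this entity
});
```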
---------
Co-authored-by: Harun Ibram <harun.ibram@outlook.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Updates the requirements on
[thiserror](https://github.com/dtolnay/thiserror) to permit the latest
version.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/dtolnay/thiserror/releases">thiserror's
releases</a>.</em></p>
<blockquote>
<h2>2.0.3</h2>
<ul>
<li>Support the same Path field being repeated in both Debug and Display
representation in error message (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/383">#383</a>)</li>
<li>Improve error message when a format trait used in error message is
not implemented by some field (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/384">#384</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="15fd26e476"><code>15fd26e</code></a>
Release 2.0.3</li>
<li><a
href="7046023130"><code>7046023</code></a>
Simplify how has_bonus_display is accumulated</li>
<li><a
href="9cc1d0b251"><code>9cc1d0b</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/thiserror/issues/384">#384</a>
from dtolnay/nowrap</li>
<li><a
href="1d040f358a"><code>1d040f3</code></a>
Use Var wrapper only for Pointer formatting</li>
<li><a
href="6a6132d79b"><code>6a6132d</code></a>
Extend no-display ui test to cover another fmt trait</li>
<li><a
href="a061beb9dc"><code>a061beb</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/thiserror/issues/383">#383</a>
from dtolnay/both</li>
<li><a
href="63882935be"><code>6388293</code></a>
Support Display and Debug of same path in error message</li>
<li><a
href="dc0359eeec"><code>dc0359e</code></a>
Defer binding_value construction</li>
<li><a
href="520343e37d"><code>520343e</code></a>
Add test of Debug and Display of paths</li>
<li><a
href="49be39dee1"><code>49be39d</code></a>
Release 2.0.2</li>
<li>Additional commits viewable in <a
href="https://github.com/dtolnay/thiserror/compare/1.0.0...2.0.3">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
# Objective
- This PR adds the ability to determine whether a `Polygon<N>` or
`BoxedPolygon` is simple (aka. not self-intersecting) by calling
`my_polygon.is_simple()`.
- This may be useful information for users to determine whether their
polygons are 'valid' and will be useful when adding meshing for
polygons.
- As such this is a step towards fixing #15255
## Solution
- Implemented the Shamos-Hoey algorithm in its own module `polygon`.
## Testing
- Tests are included, and can be verified visually.
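For example, a hedged usage sketch (constructing the polygons via the public `vertices` field; the exact construction may differ):

```rust
use bevy_math::{primitives::Polygon, Vec2};

// A square has no self-intersecting edges, so it is simple.
let square = Polygon {
    vertices: [
        Vec2::new(0.0, 0.0),
        Vec2::new(1.0, 0.0),
        Vec2::new(1.0, 1.0),
        Vec2::new(0.0, 1.0),
    ],
};
assert!(square.is_simple());

// A "bowtie" whose edges cross in the middle is not simple.
let bowtie = Polygon {
    vertices: [
        Vec2::new(0.0, 0.0),
        Vec2::new(1.0, 1.0),
        Vec2::new(1.0, 0.0),
        Vec2::new(0.0, 1.0),
    ],
};
assert!(!bowtie.is_simple());
```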
---
## Performance
- The Shamos-Hoey algorithm runs in O(n log n).
- In practice, the results look closer to linear to me.
- Determining simplicity for a simple polygon (the worst case) with fewer
than 100 vertices takes less than 0.2 ms.
![image](https://github.com/user-attachments/assets/23c62234-abdc-4710-a3b4-feaad5929133)
# Objective
Currently function reflection requires users to manually monomorphize
their generic functions. For example:
```rust
fn add<T: Add<Output=T>>(a: T, b: T) -> T {
    a + b
}
// We have to specify the type of `T`:
let reflect_add = add::<i32>.into_function();
```
This PR doesn't aim to solve that problem—this is just a limitation in
Rust. However, it also means that reflected functions can only ever work
for a single monomorphization. If we wanted to support other types for
`T`, we'd have to create a separate function for each one:
```rust
let reflect_add_i32 = add::<i32>.into_function();
let reflect_add_u32 = add::<u32>.into_function();
let reflect_add_f32 = add::<f32>.into_function();
// ...
```
So in addition to requiring manual monomorphization, we also lose the
benefit of having a single function handle multiple argument types.
If a user wanted to create a small modding script that utilized function
reflection, they'd have to either:
- Store all sets of supported monomorphizations and require users to
call the correct one
- Write out some logic to find the correct function based on the given
arguments
While the first option would work, it wouldn't be very ergonomic. The
second option is better, but it adds additional complexity to the user's
logic—complexity that `bevy_reflect` could instead take on.
## Solution
Introduce [function
overloading](https://en.wikipedia.org/wiki/Function_overloading).
A `DynamicFunction` can now be overloaded with other `DynamicFunction`s.
We can rewrite the above code like so:
```rust
let reflect_add = add::<i32>
    .into_function()
    .with_overload(add::<u32>)
    .with_overload(add::<f32>);
```
When invoked, the `DynamicFunction` will attempt to find a matching
overload for the given set of arguments.
And while I went into this PR only looking to improve generic function
reflection, I accidentally added support for variadic functions as well
(hence why I use the broader term "overload" over "generic").
```rust
// Supports 1 to 4 arguments
let multiply_all = (|a: i32| a)
    .into_function()
    .with_overload(|a: i32, b: i32| a * b)
    .with_overload(|a: i32, b: i32, c: i32| a * b * c)
    .with_overload(|a: i32, b: i32, c: i32, d: i32| a * b * c * d);
```
This is simply an added bonus to this particular implementation. ~~Full
variadic support (i.e. allowing for an indefinite number of arguments)
will be added in a later PR.~~ I actually decided to limit the maximum
number of arguments to 63 to allow for faster lookups, a reduced memory
footprint, and faster cloning.
### Alternatives & Rationale
I explored a few options for handling generic functions. This PR is the
one I feel the most confident in, but I feel I should mention the others
and why I ultimately didn't move forward with them.
#### Adding `GenericDynamicFunction`
**TL;DR:** Adding a distinct `GenericDynamicFunction` type unnecessarily
splits and complicates the API.
<details>
<summary>Details</summary>
My initial explorations involved a dedicated `GenericDynamicFunction` to
contain and handle the mappings.
This was initially started back when `DynamicFunction` was distinct from
`DynamicClosure`. My goal was to not prevent us from being able to
somehow make `DynamicFunction` implement `Copy`. But once we reverted
back to a single `DynamicFunction`, that became a non-issue.
But that aside, the real problem was that it created a split in the API.
If I'm using a third-party library that uses function reflection, I have
to know whether to request a `DynamicFunction` or a
`GenericDynamicFunction`. I might not even know ahead of time which one
I want. It might need to be determined at runtime.
And if I'm creating a library, I might want a type to contain both
`DynamicFunction` and `GenericDynamicFunction`. This might not be
possible if, for example, I need to store the function in a `HashMap`.
The other concern is with `IntoFunction`. Right now `DynamicFunction`
trivially implements `IntoFunction` since it can just return itself. But
what should `GenericDynamicFunction` do? It could return itself wrapped
into a `DynamicFunction`, but then the API for `DynamicFunction` would
have to account for this. So then what was the point of having a
separate `GenericDynamicFunction` anyways?
And even apart from `IntoFunction`, there's nothing stopping someone
from manually creating a generic `DynamicFunction` through lying about
its `FunctionInfo` and wrapping a `GenericDynamicFunction`.
That being said, this is probably the "best" alternative if we added a
`Function` trait and stored functions as `Box<dyn Function>`.
However, I'm not convinced we gain much from this. Sure, we could keep
the API for `DynamicFunction` the same, but consumers of `Function` will
need to account for `GenericDynamicFunction` regardless (e.g. handling
multiple `FunctionInfo`, a ranged argument count, etc.). And for all
cases, except where using `DynamicFunction` directly, you end up
treating them all like `GenericDynamicFunction`.
Right now, if we did go with `GenericDynamicFunction`, the only major
benefit we'd gain would be saving 24 bytes. If memory ever does become
an issue here, we could swap over. But I think for the time being it's
better for us to pursue a clearer mental model and end-user ergonomics
through unification.
</details>
#### Using the `FunctionRegistry`
**TL;DR:** Having overloads only exist in the `FunctionRegistry`
unnecessarily splits and complicates the API.
<details>
<summary>Details</summary>
Another idea was to store the overloads in the `FunctionRegistry`. Users
would then just call functions directly through the registry (i.e.
`registry.call("my_func", my_args)`).
I didn't go with this option because of how it specifically relies on
the functions being registered. You'd not only always need access to the
registry, but you'd need to ensure that the functions you want to call
are even registered.
It also means you can't just store a generic `DynamicFunction` on a
type. Instead, you'll need to store the function's name and use that to
look up the function in the registry—even if it's only ever used by that
type.
Doing so also removes all the benefits of `DynamicFunction`, such as the
ability to pass it to functions accepting `IntoFunction`, modify it if
needed, and so on.
Like `GenericDynamicFunction` this introduces a split in the ecosystem:
you either store `DynamicFunction`, store a string to look up the
function, or force `DynamicFunction` to wrap your generic function
anyways. Or worse yet: have `DynamicFunction` wrap the lookup function
using `FunctionRegistryArc`.
</details>
#### Generic `ArgInfo`
**TL;DR:** Allowing `ArgInfo` and `ReturnInfo` to store the generic
information introduces a footgun when interpreting `FunctionInfo`.
<details>
<summary>Details</summary>
Regardless of how we represent a generic function, one thing is clear:
we need to be able to represent the information for such a function.
This PR does so by introducing a `FunctionInfoType` enum to wrap one or
more `FunctionInfo` values.
Originally, I didn't do this. I had `ArgInfo` and `ReturnInfo` allow for
generic types. This allowed us to have a single `FunctionInfo` to
represent our function, but then I realized that it actually lies about
our function.
If we have two `ArgInfo` that both allow for either `i32` or `u32`, what
does this tell us about our function? It turns out: nothing! We can't
know whether our function takes `(i32, i32)`, `(u32, u32)`, `(i32,
u32)`, or `(u32, i32)`.
It therefore makes more sense to just represent a function with multiple
`FunctionInfo` since that's really what it's made up of.
</details>
#### Flatten `FunctionInfo`
**TL;DR:** Flattening removes additional per-overload information some
users may desire and prevents us from adding more information in the
future.
<details>
<summary>Details</summary>
Why don't we just flatten multiple `FunctionInfo` into just one that can
contain multiple signatures?
This is something we could do, but I decided against it for a few
reasons:
- The only thing we'd be able to get rid of for each signature would be
the `name`. While not enough to not do it, it doesn't really suggest we
*have* to either.
- Some consumers may want access to the names of the functions that make
up the overloaded function. For example, to track a bug where an
undesirable function is being added as an overload. Or to more easily
locate the original function of an overload.
- We may eventually allow for more information to be stored on
`FunctionInfo`. For example, we may allow for documentation to be stored
like we do for `TypeInfo`. Consumers of this documentation may want
access to the documentation of each overload as they may provide
documentation specific to that overload.
</details>
## Testing
This PR adds lots of tests and benchmarks, and also adds to the example.
To run the tests:
```
cargo test --package bevy_reflect --all-features
```
To run the benchmarks:
```
cargo bench --bench reflect_function --all-features
```
To run the example:
```
cargo run --package bevy --example function_reflection --all-features
```
### Benchmarks
One of my goals with this PR was to leave the typical case of
non-overloaded functions largely unaffected by the changes introduced in
this PR. ~~And while the static size of `DynamicFunction` has increased
by 17% (from 136 to 160 bytes), the performance has generally stayed the
same~~ The static size of `DynamicFunction` has decreased from 136 to
112 bytes, while calling performance has generally stayed the same:
| | `main` | 7d293ab | 252f3897d |
|-------------------------------------|--------|---------|-----------|
| `into/function` | 37 ns | 46 ns | 142 ns |
| `with_overload/01_simple_overload` | - | 149 ns | 268 ns |
| `with_overload/01_complex_overload` | - | 332 ns | 431 ns |
| `with_overload/10_simple_overload` | - | 1266 ns | 2618 ns |
| `with_overload/10_complex_overload` | - | 2544 ns | 4170 ns |
| `call/function` | 57 ns | 58 ns | 61 ns |
| `call/01_simple_overload` | - | 255 ns | 242 ns |
| `call/01_complex_overload` | - | 595 ns | 431 ns |
| `call/10_simple_overload` | - | 740 ns | 699 ns |
| `call/10_complex_overload` | - | 1824 ns | 1618 ns |
For the overloaded function tests, the leading number indicates how many
overloads there are: `01` indicates 1 overload, `10` indicates 10
overloads. The `complex` cases have 10 unique generic types and 10
arguments, compared to the `simple` 1 generic type and 2 arguments.
I aimed to prioritize the performance of calling the functions over
creating them, hence creation speed tends to be a bit slower.
There may be other optimizations we can look into but that's probably
best saved for a future PR.
The important bit is that the standard ~~`into/function`~~ and
`call/function` benchmarks show minimal regressions. Since the latest
changes, `into/function` does have some regressions, but again the
priority was `call/function`. We can probably optimize `into/function`
if needed in the future.
---
## Showcase
Function reflection now supports [function
overloading](https://en.wikipedia.org/wiki/Function_overloading)! This
can be used to simulate generic functions:
```rust
fn add<T: Add<Output=T>>(a: T, b: T) -> T {
    a + b
}

let reflect_add = add::<i32>
    .into_function()
    .with_overload(add::<u32>)
    .with_overload(add::<f32>);

let args = ArgList::default().push_owned(25_i32).push_owned(75_i32);
let result = reflect_add.call(args).unwrap().unwrap_owned();
assert_eq!(result.try_take::<i32>().unwrap(), 100);

let args = ArgList::default().push_owned(25.0_f32).push_owned(75.0_f32);
let result = reflect_add.call(args).unwrap().unwrap_owned();
assert_eq!(result.try_take::<f32>().unwrap(), 100.0);
```
You can also simulate variadic functions:
```rust
#[derive(Reflect, PartialEq, Debug)]
struct Player {
    name: Option<String>,
    health: u32,
}

// Creates a `Player` with one of the following:
// - No name and 100 health
// - A name and 100 health
// - No name and custom health
// - A name and custom health
let create_player = (|| Player {
    name: None,
    health: 100,
})
.into_function()
.with_overload(|name: String| Player {
    name: Some(name),
    health: 100,
})
.with_overload(|health: u32| Player {
    name: None,
    health,
})
.with_overload(|name: String, health: u32| Player {
    name: Some(name),
    health,
});

let args = ArgList::default()
    .push_owned(String::from("Urist"))
    .push_owned(55_u32);

let player = create_player
    .call(args)
    .unwrap()
    .unwrap_owned()
    .try_take::<Player>()
    .unwrap();

assert_eq!(
    player,
    Player {
        name: Some(String::from("Urist")),
        health: 55
    }
);
```
This commit removes the logic that attempted to keep the
`MeshInputUniform` buffer contiguous. Not only was it slow and complex,
but it was also incorrect, which caused #16686 and #16690. I changed the
logic to simply maintain a free list of unused slots in the buffer and
preferentially fill them when pushing new mesh input uniforms.
Closes #16686.
Closes #16690.
# Objective
- A `Trigger` has multiple associated `Entity`s - the entity observing
the event, and the entity that was targeted by the event.
- The field `entity: Entity` encodes no semantic information about what
the entity is used for, you can already tell that it's an `Entity` by
the type signature!
## Solution
- Rename `trigger.entity()` to `trigger.target()`
---
## Changelog
- `Trigger`s are associated with multiple entities. `Trigger::entity()`
has been renamed to `Trigger::target()` to reflect the semantics of the
entity being returned.
## Migration Guide
- Rename `Trigger::entity()` to `Trigger::target()`.
- Rename `ObserverTrigger::entity` to `ObserverTrigger::target`
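A before/after sketch (the event and observer body are illustrative):

```rust
#[derive(Event)]
struct Explode;

// Before
world.add_observer(|trigger: Trigger<Explode>, mut commands: Commands| {
    commands.entity(trigger.entity()).despawn();
});

// After
world.add_observer(|trigger: Trigger<Explode>, mut commands: Commands| {
    commands.entity(trigger.target()).despawn();
});
```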
# Objective
Fixes #16610, related to #16702
## Solution
Upgrade typos and its configuration
## Testing
- Did you test these changes? If so, how? No
- Are there any parts that need more testing? No
- How can other people (reviewers) test your changes? Is there anything
specific they need to know? No
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test? Not applicable
# Objective
Fixes #16706
## Solution
- Added new method: `try_resource_scope` which returns `None` if the
requested resource doesn't exist.
- Changed the `resource_scope` test to use `try_resource_scope` as well
to test for the `None` case.
---
## Showcase
```rust
world.try_resource_scope::<MyResource, _>(|world, mut my_resource| {
    // do something with the resource if it exists
});
```
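Since the method returns `None` when the resource is missing, the caller can detect that case; a minimal sketch (assuming it otherwise mirrors `resource_scope`):

```rust
let ran = world
    .try_resource_scope::<MyResource, _>(|world, mut my_resource| {
        // do something with the resource
    })
    .is_some();

if !ran {
    // `MyResource` was not present in the world
}
```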
# Objective
Fixes typos in bevy project, following suggestion in
https://github.com/bevyengine/bevy-website/pull/1912#pullrequestreview-2483499337
## Solution
I used https://github.com/crate-ci/typos to find them.
I included only the ones that feel undebatable to me, but I am not a
game engine expert, so maybe some of these terms are intentional.
I left out the following typos:
- `reparametrize` => `reparameterize`: There are a lot of occurrences, I
believe this spelling was intentional
- `semicircles` => `hemicircles`: 2 occurrences, may mean something
specific in geometry
- `invertation` => `inversion`: may mean something specific
- `unparented` => `parentless`: may mean something specific
- `metalness` => `metallicity`: may mean something specific
## Testing
- Did you test these changes? If so, how? I did not test the changes,
most changes are related to raw text. I expect the others to be tested
by the CI.
- Are there any parts that need more testing? I do not think so
- How can other people (reviewers) test your changes? Is there anything
specific they need to know? To me there is nothing to test
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
---
## Migration Guide
> This section is optional. If there are no breaking changes, you can
delete this section.
(kept in case I include the `reparameterize` change here)
- If this PR is a breaking change (relative to the last release of
Bevy), describe how a user might need to migrate their code to support
these changes
- Simply adding new functionality is not a breaking change.
- Fixing behavior that was definitely a bug, rather than a questionable
design choice is not a breaking change.
## Questions
- [x] Should I include the above typos? No
(https://github.com/bevyengine/bevy/pull/16702#issuecomment-2525271152)
- [ ] Should I add `typos` to the CI? (I will check how to configure it
properly)
This project looks awesome, I really enjoy reading the progress made,
thanks to everyone involved.
# Objective
This PR updates breakout to use the new 0.15 Required Components feature
instead of a Bundle.
It also adds more information in the comments about where to find more
details about Required Components.
## Solution
Replace `#[derive(Bundle)]` with a new Wall component and a `#[require()]`
macro to include the other components.
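A minimal sketch of the pattern (the `Wall` component and the components it requires here are illustrative, not the exact example code):

```rust
#[derive(Component)]
#[require(Sprite, Transform)]
struct Wall;

// Spawning a `Wall` now inserts the required components automatically,
// where the old code used a bundle.
fn setup(mut commands: Commands) {
    commands.spawn(Wall);
}
```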
## Testing
Tested with `cargo test`, and also tested the game manually with `cargo
run --example breakout`. It looks to me like it works as it did before
the changes. Tested on Arch Linux, Wayland.
---------
Co-authored-by: Arnav Mummineni <45217840+RCoder01@users.noreply.github.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
This commit adds support for *multidraw*, which is a feature that allows
multiple meshes to be drawn in a single drawcall. `wgpu` currently
implements multidraw on Vulkan, so this feature is only enabled there.
Multiple meshes can be drawn at once if they're in the same vertex and
index buffers and are otherwise placed in the same bin. (Thus, for
example, at present the materials and textures must be identical, but
see #16368.) Multidraw is a significant performance improvement during
the draw phase because it reduces the number of rebindings, as well as
the number of drawcalls.
This feature is currently only enabled when GPU culling is used: i.e.
when `GpuCulling` is present on a camera. Therefore, if you run for
example `scene_viewer`, you will not see any performance improvements,
because `scene_viewer` doesn't add the `GpuCulling` component to its
camera.
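A hedged sketch of opting a camera in, using the `GpuCulling` component mentioned above:

```rust
// A camera with GPU culling enabled, which in turn enables multidraw
// on supported backends (currently Vulkan).
commands.spawn((
    Camera3d::default(),
    GpuCulling,
));
```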
Additionally, the multidraw feature is only implemented for opaque 3D
meshes and not for shadows or 2D meshes. I plan to make GPU culling the
default and to extend the feature to shadows in the future. Also, in the
future I suspect that polyfilling multidraw on APIs that don't support
it will be fruitful, as even without driver-level support use of
multidraw allows us to avoid expensive `wgpu` rebindings.
# Objective
The "mehses" typo introduced in a review comment
[here](https://github.com/bevyengine/bevy/pull/16657#discussion_r1870834999)
hurts my soul, it was merged right as I was about to comment about it :(
## Solution
Fix it :D
(also, why didn't the CI typo checker catch this?)
# Objective
- Remove `derive_more`'s error derivation and replace it with
`thiserror`
## Solution
- Added `derive_more`'s `error` feature to `deny.toml` to prevent it
sneaking back in.
- Reverted to `thiserror` error derivation
## Notes
Merge conflicts were too numerous to revert the individual changes, so
this reversion was done manually. Please scrutinise carefully during
review.
## Objective
I was resolving a conflict between #16132 and my PR #15929 and thought
the `clone_entity` commands made more sense in `EntityCommands`.
## Solution
Moved `Commands::clone_entity` to `EntityCommands::clone`, moved
`Commands::clone_entity_with` to `EntityCommands::clone_with`.
## Testing
Ran the two tests that used the old methods.
## Showcase
```rust
// Create a new entity and keep its EntityCommands.
let mut entity = commands.spawn((ComponentA(10), ComponentB(20)));
// Create a clone of the first entity
let mut entity_clone = entity.clone();
```
The only potential downside is that the method name is now the same as
the one from the `Clone` trait. `EntityCommands` doesn't implement
`Clone` though, so there's no actual conflict.
Maybe I'm biased because this'll work better with my PR, but I think the
UX is nicer regardless.
# Objective
Volumetric fog was broken by #13746.
Looks like this particular shader just got missed. I don't see any other
instances of `unpack_offset_and_counts` in the codebase.
```
2024-12-06T03:18:42.297494Z ERROR bevy_render::render_resource::pipeline_cache: failed to process shader:
error: no definition in scope for identifier: 'bevy_pbr::clustered_forward::unpack_offset_and_counts'
┌─ crates/bevy_pbr/src/volumetric_fog/volumetric_fog.wgsl:312:29
│
312 │ let offset_and_counts = bevy_pbr::clustered_forward::unpack_offset_and_counts(cluster_index);
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ unknown identifier
│
= no definition in scope for identifier: 'bevy_pbr::clustered_forward::unpack_offset_and_counts'
```
## Solution
Use `unpack_clusterable_object_index_ranges` to get the indices for
point/spot lights.
## Testing
`cargo run --example volumetric_fog`
`cargo run --example fog_volumes`
`cargo run --example scrolling_fog`
# Objective
The weekly beta compile test workflow can be useful to people who have
forked the repository.
However, there are a few small issues with how this workflow is
currently set up:
* Scheduled workflows run on the base/default branch, with no way
(currently) to change this. On forks, the base/default branch is usually
kept in sync with the main Bevy repository, meaning that running this
workflow on forks would just be a waste of resources. (See
[here](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#schedule)
- "Scheduled workflows will only run on the default branch.")
* Even if there was a way to change the branch that a scheduled workflow
runs on, forks default to not having an issue tracker.
* Even in the event that a fork's issue tracker is enabled, most users
probably don't want to receive automated issues in the event of a
compilation failure.
Because of these reasons, this workflow is irrelevant for 99% of forks.
## Solution
We now skip all jobs on the `Weekly beta compile test` workflow, if we
detect we're running this workflow on a fork.
## Testing
Testing? Who needs testing? (Seriously though, I made sure the syntax is
correct.)
# Objective
- Contributes to #15460
## Solution
- Added the following features:
- `std` (default)
- `async_executor` (default)
- `edge_executor`
- `critical-section`
- `portable-atomic`
- Added [`edge-executor`](https://crates.io/crates/edge-executor) as a
`no_std` alternative to `async-executor`.
- Updated the `single_threaded_task_pool` to work in `no_std`
environments by gating its reliance on `thread_local`.
## Testing
- Added to `compile-check-no-std` CI command
## Notes
- In previous iterations of this PR, a custom `async-executor`
alternative was vendored in. This raised concerns around maintenance and
testing. In this iteration, an existing version of that same vendoring
is now used, but _only_ in `no_std` contexts. For existing `std`
contexts, the original `async-executor` is used.
- Due to the way statics work, certain `TaskPool` operations have added
restrictions around `Send`/`Sync` in `no_std`. This is because there
isn't a straightforward way to create a thread-local in `no_std`. If
these added constraints pose an issue we can revisit this at a later
date.
- If a user enables both the `async_executor` and `edge_executor`
features, we will default to using `async-executor`. Since enabling
`async_executor` requires `std`, we can safely assume we are in an `std`
context and use the original library.
---------
Co-authored-by: Mike <2180432+hymm@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
This adds a few minor items which were left out of the previous PR:
- Added synchronization from bevy_input_focus to bevy_a11y.
- Initialize InputFocusVisible resource.
- Make `input_focus` available from `bevy` module.
I've tested this using VoiceOver on Mac OS. It works, but it needs
considerable polish.
# Objective
- Fixes #16498
## Solution
- Trivially swaps ordering of hooks and observers for all call sites
where they are triggered for `on_replace` or `on_remove`
## Testing
- Just CI
---
## Migration Guide
The order of hooks and observers for `on_replace` and `on_remove` has
been swapped. Observers are now run before hooks. This is a more natural
ordering where the removal ordering is inverted compared to the
insertion ordering.
# Objective
Error handling in bevy is hard. See for reference
https://github.com/bevyengine/bevy/issues/11562,
https://github.com/bevyengine/bevy/issues/10874 and
https://github.com/bevyengine/bevy/issues/12660. The goal of this PR is
to make it better, by allowing users to optionally return `Result` from
systems as outlined by Cart in
<https://github.com/bevyengine/bevy/issues/14275#issuecomment-2223708314>.
## Solution
This PR introduces a new `ScheduleSystem` type to represent systems that
can be added to schedules. Instances of this type contain either an
infallible `BoxedSystem<(), ()>` or a fallible `BoxedSystem<(),
Result>`. `ScheduleSystem` implements `System<In = (), Out = Result>` and
replaces all uses of `BoxedSystem` in schedules. The async executor now
receives a result after executing a system, which for infallible systems
is always `Ok(())`. Currently it ignores this result, but more useful
error handling could also be implemented.
Aliases for `Error` and `Result` have been added to the `bevy_ecs`
prelude, as well as const `OK` which new users may find more friendly
than `Ok(())`.
## Testing
- Currently there are not actual semantics changes that really require
new tests, but I added a basic one just to make sure we don't break
stuff in the future.
- The behavior of existing systems is totally unchanged, including
logging.
- All of the existing systems tests pass, and I have not noticed
anything strange while playing with the examples
## Showcase
The following minimal example prints "hello world" once, then completes.
```rust
use bevy::prelude::*;

fn main() {
    App::new().add_systems(Update, hello_world_system).run();
}

fn hello_world_system() -> Result {
    println!("hello world");
    Err("string")?;
    println!("goodbye world");
    OK
}
```
## Migration Guide
This change should be pretty much non-breaking, except for users who
have implemented their own custom executors. Those users should use
`ScheduleSystem` in place of `BoxedSystem<(), ()>` and import the
`System` trait where needed. They can choose to do whatever they wish
with the result.
## Current Work
+ [x] Fix tests & doc comments
+ [x] Write more tests
+ [x] Add examples
+ [X] Draft release notes
## Draft Release Notes
As of this release, systems can now return results.
First, a bit of background: Bevy has historically expected systems to
return the empty type `()`. While this makes sense in the context of the
ecs, it's at odds with how error handling is typically done in rust:
returning `Result::Error` to indicate failure, and using the
short-circuiting `?` operator to propagate that error up the call stack
to where it can be properly handled. Users of functional languages will
tell you this is called "monadic error handling".
Not being able to return `Result`s from systems left bevy users with a
quandary. They could add custom error handling logic to every system, or
manually pipe every system into an error handler, or perhaps sidestep
the issue with some combination of fallible assignments, logging, macros,
and early returns. Often, users would just litter their systems with
unwraps and possible panics.
While any one of these approaches might be fine for a particular user,
each of them has their own drawbacks, and none makes good use of the
language. Serious issues could also arise when two different crates
used by the same project made different choices about error handling.
Now, by returning results, systems can defer error handling to the
application itself. It looks like this:
```rust
// Previous, handling internally
app.add_systems(Update, my_system);
fn my_system(window: Query<&Window>) {
    let Ok(window) = window.get_single() else {
        return;
    };
    // ... do something to the window here
}

// Previous, handling externally
app.add_systems(Update, my_system.pipe(my_error_handler));
fn my_system(window: Query<&Window>) -> Result<(), impl Error> {
    let window = window.get_single()?;
    // ... do something to the window here
    Ok(())
}

// Previous, panicking
app.add_systems(Update, my_system);
fn my_system(window: Query<&Window>) {
    let window = window.single();
    // ... do something to the window here
}

// Now
app.add_systems(Update, my_system);
fn my_system(window: Query<&Window>) -> Result {
    let window = window.get_single()?;
    // ... do something to the window here
    Ok(())
}
```
There are currently some limitations. Systems must either return `()` or
`Result<(), Box<dyn Error + Send + Sync + 'static>>`, with no
in-between. Results are also ignored by default, and though implementing
a custom handler is possible, it involves writing your own custom ecs
executor (which is *not* recommended).
Systems should return errors when they cannot perform their normal
behavior. In turn, errors returned to the executor while running the
schedule will (eventually) be treated as unexpected. Users and library
authors should prefer to return errors for anything that disrupts the
normal expected behavior of a system, and should only handle expected
cases internally.
We have big plans for improving error handling further:
+ Allowing users to change the error handling logic of the default
executors.
+ Adding source tracking and optional backtraces to errors.
+ Possibly adding tracing-levels (Error/Warn/Info/Debug/Trace) to
errors.
+ Generally making the default error logging more helpful and
intelligent.
+ Adding monadic system combinators for fallible systems.
+ Possibly removing all panicking variants from our api.
---------
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
- This fixes raycast picking with lyon
- reverse winding of 2D meshes currently results in them being rendered
but not pickable as the raycast passes through the backface and would
only hit "from below"
## Solution
- Disables backface culling for Mesh2d
## Testing
- Tested picking with bevy_prototype_lyon
- Could probably use testing with Mesh3d (should not be affected) and
SimplifiedMesh (no experience with that, could have the same issue if
used for 2D?)
---------
Co-authored-by: Aevyrie <aevyrie@gmail.com>
The bindless PR (#16368) broke some examples:
* `specialized_mesh_pipeline` and `custom_shader_instancing` failed
because they expect to be able to render a mesh with no material, by
overriding enough of the render pipeline to be able to do so. This PR
fixes the issue by restoring the old behavior in which we extract meshes
even if they have no material.
* `texture_binding_array` broke because it doesn't implement
`AsBindGroup::unprepared_bind_group`. This was tricky to fix because
there's a very good reason why `texture_binding_array` doesn't implement
that method: there's no sensible way to do so with `wgpu`'s current
bindless API, due to its multiple levels of borrowed references. To fix
the example, I split `MaterialBindGroup` into
`MaterialBindlessBindGroup` and `MaterialNonBindlessBindGroup`, and
allow direct custom implementations of `AsBindGroup::as_bind_group` for
the latter type of bind groups. To opt in to the new behavior, return
the `AsBindGroupError::CreateBindGroupDirectly` error from your
`AsBindGroup::unprepared_bind_group` implementation, and Bevy will call
your custom `AsBindGroup::as_bind_group` method as before.
## Migration Guide
* Bevy will now unconditionally call
`AsBindGroup::unprepared_bind_group` for your materials, so you must no
longer panic in that function. Instead, return the new
`AsBindGroupError::CreateBindGroupDirectly` error, and Bevy will fall
back to calling `AsBindGroup::as_bind_group` as before.
# Objective
- Contributes to #15460
## Solution
- Added the following new features:
- `std` (default)
- `alloc`
- `encase` (default)
- `libm`
## Testing
- Added to `compile-check-no-std` CI command
## Notes
- `ColorCurve` requires `alloc` due to how the underlying `EvenCore`
type works.
- `Srgba::to_hex` requires `alloc` to return a `String`.
- This was otherwise a _very_ simple change
# Objective
- Allow bevy_sprite_picking backend to pass through transparent sections
of the sprite.
- Fixes#14929
## Solution
- After sprite picking detects that the cursor is within a sprite's rect,
check the pixel at that location on the texture and verify that it meets
an optional transparency cutoff. This change was originally created for
mod_picking on bevy 0.14
(https://github.com/aevyrie/bevy_mod_picking/pull/373)
## Testing
- Ran Sprite Picking example to check it was working both with
transparency enabled and disabled
- ModPicking version is currently in use in my own isometric game where
this has been an extremely noticeable issue
## Showcase
![Sprite Picking
Text](https://github.com/user-attachments/assets/76568c0d-c359-422b-942d-17c84d3d3009)
## Migration Guide
Sprite picking now ignores transparent regions (with an alpha value less
than or equal to 0.1). To configure this, modify the
`SpriteBackendSettings` resource.
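A hedged sketch of adjusting the cutoff (the field name here is hypothetical; check `SpriteBackendSettings` for the real one):

```rust
// `alpha_cutoff` is a hypothetical field name, shown only to illustrate
// where the transparency threshold is configured.
app.insert_resource(SpriteBackendSettings {
    alpha_cutoff: 0.2,
    ..Default::default()
});
```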
---------
Co-authored-by: andriyDev <andriydzikh@gmail.com>
This commit moves the front end of the rendering pipeline to a retained
model when GPU preprocessing is in use (i.e. by default, except in
constrained environments). `RenderMeshInstance` and `MeshUniformData`
are stored from frame to frame and are updated only for the entities
that changed state. This was rather tricky and requires some careful
surgery to keep the data valid in the case of removals.
This patch is built on top of Bevy's change detection. Generally, this
worked, except that `ViewVisibility` isn't currently properly tracked.
Therefore, this commit adds proper change tracking for `ViewVisibility`.
Doing this required adding a new system that runs after all
`check_visibility` invocations, as no single `check_visibility`
invocation has enough global information to detect changes.
On the Bistro exterior scene, with all textures forced to opaque, this
patch improves steady-state `extract_meshes_for_gpu_building` from
93.8us to 34.5us and steady-state `collect_meshes_for_gpu_building` from
195.7us to 4.28us. Altogether this constitutes an improvement from 290us
to 38us, which is a 7.46x speedup.
![Screenshot 2024-11-13
143841](https://github.com/user-attachments/assets/40b1aacc-373d-4016-b7fd-b0284bc33de4)
![Screenshot 2024-11-13
143850](https://github.com/user-attachments/assets/53b401c3-7461-43b3-918b-cff89ea780d6)
This patch is only lightly tested and shouldn't land before 0.15 is
released anyway, so I'm releasing it as a draft.
# Objective
- Contributes to #15460
## Solution
- Added `std` feature (enabled by default)
## Testing
- CI
- `cargo check -p bevy_reflect --no-default-features --target
"x86_64-unknown-none"`
- UEFI demo application runs with this branch of `bevy_reflect`,
allowing `derive(Reflect)`
## Notes
- The [`spin`](https://crates.io/crates/spin) crate has been included to
provide `RwLock` and `Once` (as an alternative to `OnceLock`) when the
`std` feature is not enabled. Another alternative may be more desirable,
please provide feedback if you have a strong opinion here!
- Certain items (`Box`, `String`, `ToString`) provided by `alloc` have
been added to `__macro_exports` as a way to avoid `alloc` vs `std`
namespacing. I'm personally quite annoyed that we can't rely on `alloc`
as a crate name in `std` environments within macros. I'd love an
alternative to my approach here, but I suspect it's the least-bad
option.
- I would've liked to have an `alloc` feature (for allocation-free
`bevy_reflect`), unfortunately, `erased_serde` unconditionally requires
access to `Box`. Maybe one day we could design around this, but for now
it just means `bevy_reflect` requires `alloc`.
---------
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
On the web, it's common to attach observers to windows. As @viridia has
discovered, this can be quite a nice paradigm in bevy as well when
applied to observers. The changes here are intended to make this
possible.
+ Adds a new default picking back-end as part of the core picking plugin
(which can be disabled) that causes pointers on windows to treat the
window entity as the final hit, behind everything else. This means
clicking empty space now dispatches normal picking events to the window,
and is especially nice for drag-and-drop functionality.
+ Adds a new traversal type, specific to picking events, that causes
them to bubble up to the window entity after they reach the root of the
hierarchy.
## Solution
The window picking back-end is extremely simple, but the bubbling
changes are much more complex, since they require doing a different
traversal depending on the picking event.
To achieve this, `Traversal` has been made generic over an associated
sized data type `D`. Observer bounds have been changed such that
`Event::Traversal<D>` is required for `Trigger<D>`. A blanket
implementation has been added for `()` and `Parent` that preserves the
existing functionality. A new `PointerTraversal` traversal has been
implemented, with a blanket implementation for `Traversal<Pointer<E>>`.
It is still possible to use `Parent` as the traversal for any event,
because of the blanket implementation. It is now possible for users to
add other custom traversals, which read event data during traversal.
## Testing
I tested these changes locally on some picking UI prototypes I have been
playing with. I also tested them on the picking examples.
---------
Co-authored-by: Martín Maita <47983254+mnmaita@users.noreply.github.com>
# Objective
Running Github Actions on forks helps users to reduce the amount of CI
errors they get before submitting a PR. However, due to how workflows
are set up on the Bevy repository, this can result in errors occurring
for jobs that may not be related to their PR - in this case, uploading
screenshots to Pixel Eagle.
## Solution
The Pixel Eagle workflow is skipped if we aren't running on the Bevy
repository.
If we are on the Bevy repository, or the user has set it to run
elsewhere, we check if the `PIXELEAGLE_TOKEN` secret is set. If it
isn't, we skip uploading screenshots to Pixel Eagle.
* Artifacts still continue to generate, in case the user needs them.
* In the event that the Pixel Eagle workflow runs, but the
`PIXELEAGLE_TOKEN` secret isn't set, we generate a step summary that
notifies the user of why it was skipped.
See
https://github.com/LikeLakers2/bevy/actions/runs/12173329006/attempts/1#summary-33953502068
for an example.
## Testing
Lots. And lots. Of trying to get Github Actions to work with me.
# Objective
- Currently adding observers spawns an entity which implicitly flushes
the command queue, which can cause undefined behaviour if the
`EntityWorldMut` is used after this
- The reason `EntityWorldMut` attempted to (unsuccessfully) avoid
flushing commands until finished was that such commands may move or
despawn the entity being referenced, invalidating the cached location.
- With the introduction of hooks and observers, this isn't sensible
anymore as running the commands generated by hooks immediately is
required to maintain correct ordering of operations and to not expose
the world in an inconsistent state
- Objective is to make command flushing deterministic and fix the
related issues
- Fixes #16212
- Fixes #14621
- Fixes #16034
## Solution
- Allow `EntityWorldMut` to exist even when it refers to a despawned
entity by allowing `EntityLocation` to be marked invalid
- Add checks to all methods to panic if trying to access a despawned
entity
- Flush command queue after every operation that might trigger hooks or
observers
- Update entity location always after flushing command queue
## Testing
- Added test cases for currently broken behaviour
- Added test cases that flushes happen in all operations
- Added test cases to ensure hooks and commands are run exactly in
correct order when nested
---
Todo:
- [x] Write migration guide
- [x] Add tests that using `EntityWorldMut` on a despawned entity panics
- [x] Add tests that commands are flushed after every operation that is
supposed to flush them
- [x] Add tests that hooks, observers and their spawned commands are run
in the correct order when nested
---
## Migration Guide
Previously `EntityWorldMut` triggered command queue flushes in
unpredictable places, which could interfere with hooks and observers.
Now the command queue is flushed always immediately after any call in
`EntityWorldMut` that spawns or despawns an entity, or adds, removes or
replaces a component. This means hooks and observers will run their
commands in the correct order.
As a side effect, there is a possibility that a hook or observer could
despawn the entity that is being referred to by `EntityWorldMut`. This
could already currently happen if an observer was added while keeping an
`EntityWorldMut` reference and would cause unsound behaviour. If the
entity has been despawned, calling any methods which require the entity
location will panic. This matches the behaviour that `Commands` will
panic if called on an already despawned entity. In the extremely rare
case where taking a new `EntityWorldMut` reference or otherwise
restructuring the code so that this case does not happen is not
possible, there's a new `is_despawned` method that can be used to check
if the referred entity has been despawned.
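A hedged sketch of that check (component names are illustrative):

```rust
let mut entity = world.entity_mut(id);
// An observer or hook reacting to this insert might despawn the entity.
entity.insert(Marker);
if !entity.is_despawned() {
    // Still alive: safe to keep using the `EntityWorldMut`.
    entity.insert(OtherComponent);
}
```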
# Objective
- Make it easier to understand how to profile things.
- Talk about CPU vs GPU work for graphics.
## Solution
- Add a section on GPU profiling and CPU vs GPU work.
- Rearrange some sections so Tracy is the first backend mentioned.
## Issues
I did this as a very quick fix to clear some things up, but the overall
guide still flows poorly and has too much extraneous info distracting
from the use case of "I just want to figure out why my app is slow",
where the advice should be "use tracy, and if GPU bottlenecked, do
this". Someone should do a full rewrite at some point.
I chose to omit talking about RenderDiagnosticsPlugin, but it's
definitely worth a mention as a way to easily check CPU + GPU time for
graphics work, although it's not hooked up in a lot of the engine, iirc
only shadows and the main passes. Again someone else should write about
it in the future.
Similarly it might've been useful to have a section describing how to
use NSight/RGP/IGA/Xcode for GPU profiling, as they're far from
intuitive tools.
# Objective
- Required by #16622 due to differing implementations of `System` by
`FunctionSystem` and `ExclusiveFunctionSystem`.
- Optimize the memory usage of instances of `apply_deferred` in system
schedules.
## Solution
By changing `apply_deferred` from being an ordinary system that ends up
as an `ExclusiveFunctionSystem`, and instead into a ZST struct that
implements `System` manually, we save ~320 bytes per instance of
`apply_deferred` in any schedule.
## Testing
- All current tests pass.
---
## Migration Guide
- If you were previously calling the special `apply_deferred` system via
`apply_deferred(world)`, don't.
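Using `apply_deferred` as an explicit sync point in a schedule is unchanged; only calling it directly as a function is no longer supported. A minimal sketch (the system names are illustrative):

```rust
// Still supported: an explicit command-application point between systems.
app.add_systems(Update, (spawn_enemies, apply_deferred, count_enemies).chain());
```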
# Objective
Outside of the `bevy_ecs` crate it's hard to implement the `SystemParam`
trait on params that require access to the `World`, because `init_state`
expects the user to extend access in `SystemMeta`, and the access-related
fields of `SystemMeta` are private.
## Solution
Expose those fields as functions.
# Objective
Define a framework for handling keyboard focus and bubbled keyboard
events, as discussed in #15374.
## Solution
Introduces a new crate, `bevy_input_focus`. This crate provides:
* A resource for tracking which entity has keyboard focus.
* Methods for getting and setting keyboard focus.
* Event definitions for triggering bubble-able keyboard input events to
the focused entity.
* A system for dispatching keyboard input events to the focused entity.
This crate does *not* provide any integration with UI widgets, or
provide functions for
tab navigation or gamepad-based focus navigation, as those are typically
application-specific.
## Testing
Most of the code has been copied from a different project, one that has
been well tested. However, most of what's in this module consists of
type definitions, with relatively small amounts of executable code. That
being said, I expect that there will be substantial bikeshedding on the
design, and I would prefer to hold off writing tests until after things
have settled.
I think that an example would be appropriate, however I'm waiting on a
few other pending changes to Bevy before doing so. In particular, I can
see a simple example with four buttons, with focus navigation between
them, and which can be triggered by the keyboard.
@alice-i-cecile
# Objective
- Fixes #16208
## Solution
- Added an associated type to `Component`, `Mutability`, which flags
whether a component is mutable or immutable. If `Mutability = Mutable`,
the component is mutable. If `Mutability = Immutable`, the component is
immutable.
- Updated `derive_component` to default to mutable unless an
`#[component(immutable)]` attribute is added.
- Updated `ReflectComponent` to check if a component is mutable and, if
not, panic when attempting to mutate.
## Testing
- CI
- `immutable_components` example.
---
## Showcase
Users can now mark a component as `#[component(immutable)]` to prevent
safe mutation of a component while it is attached to an entity:
```rust
#[derive(Component)]
#[component(immutable)]
struct Foo {
// ...
}
```
This prevents creating an exclusive reference to the component while it
is attached to an entity. This is particularly powerful when combined
with component hooks, as you can now fully track a component's value,
ensuring whatever invariants you desire are upheld. Previously, this
would be done by making a component private and manually creating a
`QueryData` implementation which only permitted read access.
<details>
<summary>Using immutable components as an index</summary>
```rust
/// This is an example of a component like [`Name`](bevy::prelude::Name), but immutable.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Component)]
#[component(
immutable,
on_insert = on_insert_name,
on_replace = on_replace_name,
)]
pub struct Name(pub &'static str);
/// This index allows for O(1) lookups of an [`Entity`] by its [`Name`].
#[derive(Resource, Default)]
struct NameIndex {
name_to_entity: HashMap<Name, Entity>,
}
impl NameIndex {
fn get_entity(&self, name: &'static str) -> Option<Entity> {
self.name_to_entity.get(&Name(name)).copied()
}
}
fn on_insert_name(mut world: DeferredWorld<'_>, entity: Entity, _component: ComponentId) {
let Some(&name) = world.entity(entity).get::<Name>() else {
unreachable!()
};
let Some(mut index) = world.get_resource_mut::<NameIndex>() else {
return;
};
index.name_to_entity.insert(name, entity);
}
fn on_replace_name(mut world: DeferredWorld<'_>, entity: Entity, _component: ComponentId) {
let Some(&name) = world.entity(entity).get::<Name>() else {
unreachable!()
};
let Some(mut index) = world.get_resource_mut::<NameIndex>() else {
return;
};
index.name_to_entity.remove(&name);
}
// Setup our name index
world.init_resource::<NameIndex>();
// Spawn some entities!
let alyssa = world.spawn(Name("Alyssa")).id();
let javier = world.spawn(Name("Javier")).id();
// Check our index
let index = world.resource::<NameIndex>();
assert_eq!(index.get_entity("Alyssa"), Some(alyssa));
assert_eq!(index.get_entity("Javier"), Some(javier));
// Changing the name of an entity is also fully captured by our index
world.entity_mut(javier).insert(Name("Steven"));
// Javier changed their name to Steven
let steven = javier;
// Check our index
let index = world.resource::<NameIndex>();
assert_eq!(index.get_entity("Javier"), None);
assert_eq!(index.get_entity("Steven"), Some(steven));
```
</details>
Additionally, users can use `Component<Mutability = ...>` in trait
bounds to enforce that a component _is_ mutable or _is_ immutable. When
using `Component` as a trait bound without specifying `Mutability`, any
component is applicable. However, methods which only work on mutable or
immutable components are unavailable, since the compiler must be
pessimistic about the type.
## Migration Guide
- When implementing `Component` manually, you must now provide a type
for `Mutability`. The type `Mutable` provides equivalent behaviour to
earlier versions of `Component`:
```rust
impl Component for Foo {
type Mutability = Mutable;
// ...
}
```
- When working with generic components, you may need to specify that
your generic parameter implements `Component<Mutability = Mutable>`
rather than `Component` if you require mutable access to said component.
- The entity entry API has changed to minimise friction when working
with immutable components. Methods which previously returned a `Mut<T>`
will now typically return an `OccupiedEntry<T>` instead, requiring you
to call `into_mut()` to get the `Mut<T>` item again (see the sketch
below).
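A hedged illustration of that last point, assuming the pre-existing entry API
methods (`entry`/`or_default`); the only new step is the trailing `into_mut()`:
```rust
use bevy::prelude::*;

#[derive(Component, Default)]
struct Counter(u32);

fn bump(world: &mut World, entity: Entity) {
    let mut entity_mut = world.entity_mut(entity);
    // Previously `or_default()` handed back a `Mut<Counter>` directly; it now
    // returns an `OccupiedEntry<Counter>`, so call `into_mut()` to recover the
    // `Mut<Counter>` (requires `Counter: Component<Mutability = Mutable>`).
    let mut counter = entity_mut.entry::<Counter>().or_default().into_mut();
    counter.0 += 1;
}
```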
## Draft Release Notes
Components can now be made immutable while stored within the ECS.
Components are the fundamental unit of data within an ECS, and Bevy
provides a number of ways to work with them that align with Rust's rules
around ownership and borrowing. One part of this is hooks, which allow
for defining custom behavior at key points in a component's lifecycle,
such as addition and removal. However, there is currently no way to
respond to _mutation_ of a component using hooks. The reasons for this
are quite technical, but to summarize, their addition poses a
significant challenge to Bevy's core promises around performance.
Without mutation hooks, it's relatively trivial to modify a component in
such a way that breaks invariants it intends to uphold. For example, you
can use `core::mem::swap` to swap the components of two entities,
bypassing the insertion and removal hooks.
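As a hedged sketch of that kind of hook-invisible mutation (`Health` and
`SwapTargets` are illustrative stand-ins, not types from the PR):
```rust
use bevy::prelude::*;

#[derive(Component)]
struct Health(u32);

/// Stand-in resource naming the two entities whose components get swapped.
#[derive(Resource)]
struct SwapTargets(Entity, Entity);

fn sneaky_swap(targets: Res<SwapTargets>, mut healths: Query<&mut Health>) {
    if let Ok([mut a, mut b]) = healths.get_many_mut([targets.0, targets.1]) {
        // No insert or remove hook runs for this; only change detection notices.
        core::mem::swap(&mut *a, &mut *b);
    }
}
```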
This means the only way to react to such a modification is via change
detection in a system, which raises the question: what happens _between_
that alteration and the next run of that system?
Alternatively, you could make your component private to prevent
mutation, but now you need to provide commands and a custom `QueryData`
implementation to allow users to interact with your component at all.
Immutable components solve this problem by preventing the creation of an
exclusive reference to the component entirely. Without an exclusive
reference, the only way to modify an immutable component is via removal
or replacement, which is fully captured by component hooks. To make a
component immutable, simply add `#[component(immutable)]`:
```rust
#[derive(Component)]
#[component(immutable)]
struct Foo {
// ...
}
```
When implementing `Component` manually, there is an associated type
`Mutability` which controls this behavior:
```rust
impl Component for Foo {
type Mutability = Mutable;
// ...
}
```
Note that this means when working with generic components, you may need
to specify that a component is mutable to gain access to certain
methods:
```rust
// Before
fn bar<C: Component>() {
// ...
}
// After
fn bar<C: Component<Mutability = Mutable>>() {
// ...
}
```
With this new tool, creating index components or caching data on an
entity should be more user-friendly, allowing libraries to provide APIs
that rely on components and hooks to uphold their invariants.
## Notes
- ~~I've done my best to implement this feature, but I'm not happy with
how reflection has turned out. If any reflection SMEs know a way to
improve this situation I'd greatly appreciate it.~~ There is an
outstanding issue around the fallibility of mutable methods on
`ReflectComponent`, but the DX is largely unchanged from `main` now.
- I've attempted to prevent all safe mutable access to a component that
does not implement `Component<Mutability = Mutable>`, but there may
still be some methods I have missed. Please point them out and I will
address them, as they are bugs.
- Unsafe is an escape hatch I am _not_ attempting to prevent. Whatever
you do with unsafe is between you and your compiler.
- I am marking this PR as ready, but I suspect it will undergo fairly
major revisions based on SME feedback.
- I've marked this PR as _Uncontroversial_ based on the feature, not the
implementation.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Benjamin Brienen <benjamin.brienen@outlook.com>
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
Co-authored-by: Nuutti Kotivuori <naked@iki.fi>
This commit allows the Bevy renderer to use the clustering
infrastructure for light probes (reflection probes and irradiance
volumes) on platforms where at least 3 storage buffers are available. On
such platforms (the vast majority), we stop performing brute-force
searches of light probes for each fragment and instead only search the
light probes with bounding spheres that intersect the current cluster.
This should dramatically improve scalability of irradiance volumes and
reflection probes.
The primary platform that doesn't support 3 storage buffers is WebGL 2,
and we continue using a brute-force search of light probes on that
platform, as the UBO that stores per-cluster indices is too small to fit
the light probe counts. Note, however, that that platform also doesn't
support bindless textures (indeed, it would be very odd for a platform
to support bindless textures but not SSBOs), so we only support one of
each type of light probe per drawcall there in the first place.
Consequently, this isn't a performance problem, as the search will only
have one light probe to consider. (In fact, clustering would probably
end up being a performance loss.)
Known potential improvements include:
1. We currently cull based on a conservative bounding sphere test and
not based on the oriented bounding box (OBB) of the light probe. This is
improvable, but in the interests of simplicity, I opted to keep the
bounding sphere test for now. The OBB improvement can be a follow-up.
2. This patch doesn't change the fact that each fragment only takes a
single light probe into account. Typical light probe implementations
detect the case in which multiple light probes cover the current
fragment and perform some sort of weighted blend between them. As the
light probe fetch function presently returns only a single light probe,
implementing that feature would require more code restructuring, so I
left it out for now. It can be added as a follow-up.
3. Light probe implementations typically have a falloff range. Although
this is a desired feature in Bevy, this particular commit doesn't
implement it, as it's out of scope.
4. This commit doesn't raise the maximum number of light probes past its
current value of 8 for each type. This should be addressed later, but
would possibly require more bindings on platforms with storage buffers,
which would increase this patch's complexity. Even without raising the
limit, this patch should constitute a significant performance
improvement for scenes that get anywhere close to this limit. In the
interest of keeping this patch small, I opted to leave raising the limit
to a follow-up.
## Changelog
### Changed
* Light probes (reflection probes and irradiance volumes) are now
clustered on most platforms, improving performance when many light
probes are present.
---------
Co-authored-by: Benjamin Brienen <Benjamin.Brienen@outlook.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Add a way to use the gizmo API in a retained manner, for increased
performance.
## Solution
- Move gizmo API from `Gizmos` to `GizmoBuffer`, ~ab~using `Deref` to
keep usage the same as before.
- Merge non-strip and strip variant of `LineGizmo` into one, storing the
data in a `GizmoBuffer` to have the same API for retained `LineGizmo`s.
### Review guide
- The meat of the changes are in `lib.rs`, `retained.rs`, `gizmos.rs`,
`pipeline_3d.rs` and `pipeline_2d.rs`
- The other files contain almost exclusively the churn from moving the
gizmo API from `Gizmos` to `GizmoBuffer`
## Testing
### Performance
Performance compared to the immediate mode API is 65 to 80 times better
for static lines.
```
7900 XTX, 3700X
1707.9k lines/ms: gizmos_retained (21.3ms)
3488.5k lines/ms: gizmos_retained_continuous_polyline (31.3ms)
0.5k lines/ms: gizmos_retained_separate (97.7ms)
3054.9k lines/ms: bevy_polyline_retained_nan (16.8ms)
3596.3k lines/ms: bevy_polyline_retained_continuous_polyline (14.2ms)
0.6k lines/ms: bevy_polyline_retained_separate (78.9ms)
26.9k lines/ms: gizmos_immediate (14.9ms)
43.8k lines/ms: gizmos_immediate_continuous_polyline (18.3ms)
```
Looks like performance is good enough, being close to par with
`bevy_polyline`.
Benchmarks can be found here:
This branch:
https://github.com/tim-blackbird/line_racing/tree/retained-gizmos
Bevy 0.14: https://github.com/DGriffin91/line_racing
## Showcase
```rust
fn setup(
mut commands: Commands,
mut gizmo_assets: ResMut<Assets<GizmoAsset>>
) {
let mut gizmo = GizmoAsset::default();
// A sphere made out of one million lines!
gizmo
.sphere(default(), 1., CRIMSON)
.resolution(1_000_000 / 3);
commands.spawn(Gizmo {
handle: gizmo_assets.add(gizmo),
..default()
});
}
```
## Follow-up work
- Port over to the retained rendering world proper
- Calculate visibility and cull `Gizmo`s