# Objective
Fixes https://github.com/bevyengine/bevy/issues/16752
## Solution
Renamed the 3 remaining instances of `enqueue_command` to
`queue_command`
## Testing
- Built locally
## Migration Guide
All instances of the `enqueue_command` method have been renamed to
`queue_command`.
# Objective
Draw the UI debug overlay using the UI renderer.
Significantly simpler and easier to use than
`bevy_dev_tools::ui_debug_overlay` which uses `bevy_gizmos`.
* Supports multiple windows and UI rendered to texture.
* Draws rounded debug rects for rounded UI nodes.
Fixes #16666
## Solution
Removed the `ui_debug_overlay` module from `bevy_dev_tools`.
Added a `bevy_ui_debug` feature gate.
Draw the UI debug overlay using the UI renderer.
Adds a new module `bevy_ui::render::debug_overlay`.
The debug overlay extraction function queries for the existing UI layout
and then adds a border around each UI node with `u32::MAX / 2` added to
each stack index so it's drawn on top.
There is a `UiDebugOptions` resource that can be used to enable or
disable the debug overlay and set the line width.
## Testing
The `testbed_ui` example has been changed to use the new debug overlay:
```
cargo run --example testbed_ui --features bevy_ui_debug
```
Press Space to toggle the debug overlay on and off.
---
## Showcase
<img width="961" alt="testbed-ui-new-debug"
src="https://github.com/user-attachments/assets/e9523d18-39ae-46a8-adbe-7d3f3ab8e951">
## Migration Guide
The `ui_debug_overlay` module has been removed from `bevy_dev_tools`.
There is a new debug overlay implemented using the `bevy_ui` renderer.
To use it, enable the `bevy_ui_debug` feature and set the `enable` field
of the `UiDebugOptions` resource to `true`.
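For reference, a minimal sketch of toggling that flag from a system (the `bevy::ui::UiDebugOptions` import path is assumed; only the `enable` field comes from this PR):
```rust
use bevy::prelude::*;
use bevy::ui::UiDebugOptions;

// Toggle the overlay when Space is pressed, mirroring the testbed_ui example.
fn toggle_ui_debug_overlay(
    keys: Res<ButtonInput<KeyCode>>,
    mut options: ResMut<UiDebugOptions>,
) {
    if keys.just_pressed(KeyCode::Space) {
        options.enable = !options.enable;
    }
}
```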
# Objective
- Benchmarks are inconsistently setup, there are several that do not
compile, and several others that need to be reformatted.
- Related / precursor to #16647, this is part of my attempt migrating
[`bevy-bencher`](https://github.com/TheBevyFlock/bevy-bencher) to the
official benchmarks.
## Solution
> [!TIP]
>
> I recommend reviewing this PR commit-by-commit, instead of all at
once!
In 5d26f56eb9 I reorganized how benches
were registered. There is now one `[[bench]]` per Bevy crate. In each
crate benchmark folder, there is a `main.rs` that calls
`criterion_main!`. I also disabled automatic benchmark discovery, which
isn't necessarily required, but may clear up confusion with our custom
setup. I also fixed a few errors that were causing the benchmarks to
fail to compile.
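For reference, a sketch of what one of these per-crate entry points looks like (the module name and benchmark body here are illustrative, not the actual benchmark code):
```rust
// benches/benches/<crate>/main.rs (illustrative path)
use criterion::criterion_main;

mod my_group {
    use criterion::{criterion_group, Criterion};

    fn simple_iteration(c: &mut Criterion) {
        // Placeholder workload; real benchmarks exercise Bevy APIs.
        c.bench_function("simple_iteration", |b| b.iter(|| (0..1000u64).sum::<u64>()));
    }

    criterion_group!(benches, simple_iteration);
}

criterion_main!(my_group::benches);
```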
In afc8d33a87 I ran `rustfmt` on all of
the benchmarks.
In d6cdf960ab I fixed all of the Clippy
warnings.
In ee94d48f50 I fixed some of the
benchmarks' usage of `black_box()`. I ended up opening
https://github.com/rust-lang/rust/pull/133942 due to this, which should
help prevent this in the future.
In cbe1688dcd I renamed all of the ECS
benchmark groups to be called `benches`, to be consistent with the other
crate benchmarks.
In e701c212cd and
8815bb78b0 I re-ordered some imports and
module definitions, and uplifted `fragmentation/mod.rs` to
`fragmentation.rs`.
Finally, in b0065e0b0b I organized
`Cargo.toml` and bumped Criterion to v0.5.
## Testing
- `cd benches && cargo clippy --benches`
- `cd benches && cargo fmt --all`
# Objective
- For curves that also include derivatives, make accessing derivative
information via the `Curve` API ergonomic: that is, provide access to a
curve that also samples derivative information.
- Implement this functionality for cubic spline curves provided by
`bevy_math`.
Ultimately, this is to serve the purpose of doing more geometric
operations on curves, like reparametrization by arclength and the
construction of moving frames.
## Solution
This has several parts, some of which may seem redundant. However, care
has been put into this to satisfy the following constraints:
- Accessing a `Curve` that samples derivative information should be not
just possible but easy and non-error-prone. For example, given a
differentiable `Curve<Vec2>`, one should be able to access something
like a `Curve<(Vec2, Vec2)>` ergonomically, and not just sample the
derivatives piecemeal from point to point.
- Derivative access should not step on the toes of ordinary curve usage.
In particular, in the above scenario, we want to avoid simply making the
same curve both a `Curve<Vec2>` and a `Curve<(Vec2, Vec2)>` because this
requires manual disambiguation when the API is used.
- Derivative access must work gracefully in both owned and borrowed
contexts.
### `HasTangent`
We introduce a trait `HasTangent` that provides an associated `Tangent`
type for types that have tangent spaces:
```rust
pub trait HasTangent {
    /// The tangent type.
    type Tangent: VectorSpace;
}
```
(Mathematically speaking, it would be more precise to say that these are
types that represent spaces which are canonically
[parallelized](https://en.wikipedia.org/wiki/Parallelizable_manifold).)
The idea here is that a point moving through a `HasTangent` type may
have a derivative valued in the associated `Tangent` type at each time
in its journey. We reify this with a `WithDerivative<T>` type that uses
`HasTangent` to include derivative information:
```rust
pub struct WithDerivative<T>
where
    T: HasTangent,
{
    /// The underlying value.
    pub value: T,
    /// The derivative at `value`.
    pub derivative: T::Tangent,
}
```
And we can play the same game with second derivatives as well, since
every `VectorSpace` type is `HasTangent` where `Tangent` is itself (we
may want to be more restrictive with this in practice, but this holds
mathematically).
```rust
pub struct WithTwoDerivatives<T>
where
    T: HasTangent,
{
    /// The underlying value.
    pub value: T,
    /// The derivative at `value`.
    pub derivative: T::Tangent,
    /// The second derivative at `value`.
    pub second_derivative: <T::Tangent as HasTangent>::Tangent,
}
```
In this PR, `HasTangent` is only implemented for `VectorSpace` types,
but it would be valuable to have this implementation for types like
`Rot2` and `Quat` as well. We could also do it for the isometry types
and, potentially, transforms as well. (This is in decreasing order of
value in my opinion.)
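For concreteness, the blanket implementation for vector spaces amounts to something like this (a sketch; the exact bounds in `bevy_math` may differ slightly):
```rust
// Every vector space is canonically its own tangent space.
impl<V: VectorSpace> HasTangent for V {
    type Tangent = V;
}
```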
### `CurveWithDerivative`
This is a trait for a `Curve<T>` which allows the construction of a
`Curve<WithDerivative<T>>` when derivative information is known
intrinsically. It looks like this:
```rust
/// Trait for curves that have a well-defined notion of derivative, allowing for
/// derivatives to be extracted along with values.
pub trait CurveWithDerivative<T>
where
    T: HasTangent,
{
    /// This curve, but with its first derivative included in sampling.
    fn with_derivative(self) -> impl Curve<WithDerivative<T>>;
}
```
The idea here is to provide patterns like this:
```rust
let value_and_derivative = my_curve.with_derivative().sample_clamped(t);
```
One of the main points here is that `Curve<WithDerivative<T>>` is useful
as an output because it can be used durably. For example, in a dynamic
context, something that needs curves with derivatives can store
something like a `Box<dyn Curve<WithDerivative<T>>>`. Note that
`CurveWithDerivative` is not dyn-compatible.
### `SampleDerivative`
Many curves "know" how to sample their derivatives instrinsically, but
implementing `CurveWithDerivative` as given would be onerous or require
an annoying amount of boilerplate. There are also hurdles to overcome
that involve references to curves: for the `Curve` API, the expectation
is that curve transformations like `with_derivative` take things by
value, with the contract that they can still be used by reference
through deref-magic by including `by_ref` in a method chain.
These problems are solved simultaneously by a trait `SampleDerivative`
which, when implemented, automatically derives `CurveWithDerivative` for
a type and all types that dereference to it. It just looks like this:
```rust
pub trait SampleDerivative<T>: Curve<T>
where
    T: HasTangent,
{
    fn sample_with_derivative_unchecked(&self, t: f32) -> WithDerivative<T>;
    // ... other sampling variants as default methods
}
```
The point is that the output of `with_derivative` is a
`Curve<WithDerivative<T>>` that uses the `SampleDerivative`
implementation. On a `SampleDerivative` type, you can also just call
`my_curve.sample_with_derivative(t)` instead of something like
`my_curve.by_ref().with_derivative().sample(t)`, which is more verbose
and less accessible.
In practice, `CurveWithDerivative<T>` is actually a "sealed" extension
trait of `SampleDerivative<T>`.
### Adaptors
`SampleDerivative` has automatic implementations on all curve adaptors
except for `FunctionCurve`, `MapCurve`, and `ReparamCurve` (because we
do not have a notion of differentiable Rust functions).
For example, `CurveReparamCurve` (the reparametrization of a curve by
another curve) can compute derivatives using the chain rule in the case
both its constituents have them.
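Concretely, if the outer curve is `f` and the reparametrization is `g`, the composed curve samples the standard chain rule:
```math
(f \circ g)'(t) = f'(g(t)) \cdot g'(t)
```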
## Testing
Tests for derivatives on the curve adaptors are included.
---
## Showcase
This development allows derivative information to be included with and
extracted from curves using the `Curve` API.
```rust
let points = [
    vec2(-1.0, -20.0),
    vec2(3.0, 2.0),
    vec2(5.0, 3.0),
    vec2(9.0, 8.0),
];
// A cubic spline curve that goes through `points`.
let curve = CubicCardinalSpline::new(0.3, points).to_curve().unwrap();
// Calling `with_derivative` causes derivative output to be included in the output of the curve API.
let curve_with_derivative = curve.with_derivative();
// A `Curve<f32>` that outputs the speed of the original.
let speed_curve = curve_with_derivative.map(|x| x.derivative.norm());
```
---
## Questions
- ~~Maybe we should seal `WithDerivative` or make it require
`SampleDerivative` (i.e. make it unimplementable except through
`SampleDerivative`).~~ I decided this is a good idea.
- ~~Unclear whether `VectorSpace: HasTangent` blanket implementation is
really appropriate. For colors, for example, I'm not sure that the
derivative values can really be interpreted as a color. In any case, it
should still remain the case that `VectorSpace` types are `HasTangent`
and that `HasTangent::Tangent: HasTangent`.~~ I think this is fine.
- Infinity bikeshed on names of traits and things.
## Future
- Faster implementations of `SampleDerivative` for cubic spline curves.
- Improve ergonomics for accessing only derivatives (and other kinds of
transformations on derivative curves).
- Implement `HasTangent` for:
- `Rot2`/`Quat`
- `Isometry` types
- `Transform`, maybe
- Implement derivatives for easing curves.
- Marker traits for continuous/differentiable curves. (It's actually
unclear to me how much value this has in practice, but we have discussed
it in the past.)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Updating dependencies; adopted version of #15696. (Supersedes #15696.)
Long answer: hashbrown no longer uses ahash by default, meaning that
we can't use the default-hasher methods with `AHasher`. So, we have to use
the longer-winded versions instead. This takes the opportunity to also
switch our default hasher as well, but without actually enabling the
default-hasher feature for hashbrown, meaning that we'll be able to
change our hasher more easily at the cost of all of these method calls
being obnoxious forever.
One large change from 0.15 is that `insert_unique_unchecked` is now
`unsafe`, and for cases where unsafe code was denied at the crate level,
I replaced it with `insert`.
## Migration Guide
`bevy_utils` has updated its version of `hashbrown` to 0.15 and now
defaults to `foldhash` instead of `ahash`. This means that if you've
hard-coded your hasher to `bevy_utils::AHasher` or separately used the
`ahash` crate in your code, you may need to switch to `foldhash` to
ensure that everything works like it does in Bevy.
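For illustration, a hedged sketch of the "longer-winded" construction this implies when using `hashbrown` without its default-hasher feature (the exact hasher re-exports available from `bevy_utils` may differ):
```rust
use hashbrown::HashMap;

// Name the hasher state explicitly instead of relying on a default-hasher feature.
let mut scores: HashMap<&str, u32, foldhash::fast::RandomState> =
    HashMap::with_hasher(foldhash::fast::RandomState::default());
scores.insert("player_one", 42);
```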
# Objective
- Fixes #16497
- This is my first PR, so I'm still learning to contribute to the
project
## Solution
- Added struct `UnregisterSystemCached` and function
`unregister_system_cached`
- Renamed `World::run_system_with_input` to `run_system_with`
- Reordered input parameters for `World::run_system_once_with`
## Testing
- Added a crude test which registers a system via
`World::register_system_cached`, and removes it via
`Command::unregister_system_cached`.
## Migration Guide
- Change all occurrences of `World::run_system_with_input` to
`World::run_system_with`.
- Swap the order of input parameters for `World::run_system_once_with`
such that the system comes before the input.
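A small before/after sketch of those two changes (`world`, `my_system_id`, and `my_system` are placeholders, and trait imports are elided):
```rust
// Before:
// let _ = world.run_system_with_input(my_system_id, 42);
// let _ = world.run_system_once_with(42, my_system);

// After: the method is renamed, and the system now comes before its input.
let _ = world.run_system_with(my_system_id, 42);
let _ = world.run_system_once_with(my_system, 42);
```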
---------
Co-authored-by: Paul Mattern <mail@paulmattern.dev>
This commit makes skinned meshes batchable on platforms other than WebGL
2. On supported platforms, it replaces the two uniform buffers used for
joint matrices with a pair of storage buffers containing all matrices
for all skinned meshes packed together. The indices into the buffer are
stored in the mesh uniform and mesh input uniform. The GPU mesh
preprocessing step copies the indices in if that step is enabled.
On the `many_foxes` demo, I observed a frame time decrease from 15.470ms
to 11.935ms. This is the result of reducing the `submit_graph_commands`
time from an average of 5.45ms to 0.489ms, an 11x speedup in that
portion of rendering.
![Screenshot 2024-12-01
192838](https://github.com/user-attachments/assets/7d2db997-8939-466e-8b9e-050d4a6a78ee)
This is what the profile looks like for `many_foxes` after these
changes.
![Screenshot 2024-12-01
193026](https://github.com/user-attachments/assets/68983fc3-01b8-41fd-835e-3d93cb65d0fa)
---------
Co-authored-by: François Mockers <mockersf@gmail.com>
This commit makes `StandardMaterial` use bindless textures, as
implemented in PR #16368. Non-bindless mode, as used for example in
Metal and WebGL 2, remains fully supported via a plethora of `#ifdef
BINDLESS` preprocessor definitions.
Unfortunately, this PR introduces quite a bit of unsightliness into the
PBR shaders. This is a result of the fact that WGSL supports neither
passing binding arrays to functions nor passing individual *elements* of
binding arrays to functions, except directly to texture sample
functions. Thus we're unable to use the `sample_texture` abstraction
that helped abstract over the meshlet and non-meshlet paths. I don't
think there's anything we can do to help this other than to suggest
improvements to upstream Naga.
This patch makes shadows use multidraw when the camera they'll be drawn
to has the `GpuCulling` component. This results in a significant
reduction in drawcalls; Bistro Exterior drops to 3 drawcalls for each
shadow cascade.
Note that PR #16670 will remove the `GpuCulling` component, making
shadows automatically use multidraw. Beware of that when testing this
patch; before #16670 lands, you'll need to manually add `GpuCulling` to
your camera in order to see any performance benefits.
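A minimal sketch of that manual opt-in, assuming `GpuCulling` is the marker component described above (camera setup is otherwise elided):
```rust
// Spawn a 3D camera that opts in to GPU culling, and therefore multidraw shadows.
commands.spawn((
    Camera3d::default(),
    GpuCulling,
));
```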
CI was failing because `bevy_math` no longer compiled with `libcore`.
This was due to PR #15981. This commit fixes the issue by moving the
applicable functionality behind `#[cfg(feature = "alloc")]`.
PR #15756 made us create temporary render entities for all visible
objects, even if they had no render world counterpart. This regressed
our `many_cubes` time from about 3.59 ms/frame to 4.66 ms/frame.
This commit changes that behavior to use `Entity::PLACEHOLDER` instead
of creating a temporary render entity. This improves our `many_cubes`
time from 5.66 ms/frame to 3.96 ms/frame, a 43% speedup.
I tested 3D, 2D gizmos, and UI and they seem to work.
See the following graph of `many_cubes` frame time (lower is better). PR
#15756 is the one in October.
![Time (ms_frame) vs
Date(3)](https://github.com/user-attachments/assets/2c31a893-97bd-40f6-9e89-d2195a44cf40)
# Objective
The `RayCastSettings` type is only used in the context of ray casts with
the `MeshRayCast` system parameter. The current name is somewhat
inconsistent with other existing types, like `MeshRayCast` and
`MeshPickingSettings`, but more importantly, it easily conflicts with
physics, and forces those crates to opt for some other name like
`RayCastConfig` or `RayCastOptions`.
We should rename `RayCastSettings` to `MeshRayCastSettings` to avoid
naming conflicts and improve consistency.
## Solution
Rename `RayCastSettings` to `MeshRayCastSettings`.
---
## Migration Guide
`RayCastSettings` has been renamed to `MeshRayCastSettings` to avoid
naming conflicts with other ray casting backends and types.
## Objective
Follow-up to #16672.
`EntityCommands::clone` looks the same as the `Clone` trait, which could
be confusing. A discord discussion has made me realize that's probably a
bigger problem than I thought. Oops :P
## Solution
Renamed `EntityCommands::clone` to `EntityCommands::clone_and_spawn`,
renamed `EntityCommands::clone_with` to
`EntityCommands::clone_and_spawn_with`. Also added some docs explaining
the commands' relation to `Clone` (components need to implement it (or
`Reflect`)).
## Showcase
```rust
// Create a new entity and keep its EntityCommands
let mut entity = commands.spawn((ComponentA(10), ComponentB(20)));
// Create a clone of the first entity
let mut entity_clone = entity.clone_and_spawn();
```
## The Bikeshed
- `clone_and_spawn` (Alice's suggestion)
- `spawn_clone` (benfrankel's suggestion)
- `spawn_cloned` (rparrett's suggestion)
# Objective
The documentation for `Query::transmute_lens` lists some allowed
transmutes, but the list is incomplete.
## Solution
Document the underlying rules for what transmutes are allowed.
Add a longer list of examples. Write them as doc tests to ensure that
those examples are actually allowed.
I'm assuming that anything that can be done today is intended to be
supported! If any of these examples are things we plan to prohibit in
the future then we can add some warnings to that effect.
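As one hedged illustration of the kind of transmute being documented, narrowing a query to a read-only subset (`Velocity` is a stand-in component, not a Bevy type):
```rust
use bevy::prelude::*;

#[derive(Component)]
struct Velocity(Vec3);

// A system that owns a wider query but hands out a narrowed, read-only view.
fn narrow(mut query: Query<(&mut Transform, &Velocity)>) {
    let mut lens = query.transmute_lens::<&Transform>();
    let narrowed = lens.query();
    for transform in narrowed.iter() {
        // Only `&Transform` access is available through the lens.
        let _ = transform.translation;
    }
}
```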
# Objective
The parameter names for `bevy::math::ops::atan2` are labelled such that
`x` is the first argument and `y` is the second argument, but it passes
those arguments directly to
[`f32::atan2`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.atan2),
whose parameters are expected to be `(y, x)`. This PR changes the
parameter names in the bevy documentation to use the correct order for
the operation being performed. You can verify this by doing:
```rust
fn main() {
    let x = 3.0;
    let y = 4.0;
    let angle = bevy::math::ops::atan2(x, y);
    // standard polar coordinates formula
    dbg!(5.0 * angle.cos(), 5.0 * angle.sin());
}
```
This will print `(4.0, 3.0)`, which has flipped `x` and `y`. The problem
is that the `atan2` function to calculate the angle was really expecting
`(y, x)`, not `(x, y)`.
## Solution
I flipped the parameter names for `bevy::math::ops::atan2` and updated
the documentation. I also removed references to `self` and `other` from
the documentation which seemed to be copied from the `f32::atan2`
documentation.
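With the corrected parameter order in mind, the earlier snippet would be written like this (a sketch):
```rust
fn main() {
    let x = 3.0;
    let y = 4.0;
    // `atan2` takes `(y, x)`, matching `f32::atan2`.
    let angle = bevy::math::ops::atan2(y, x);
    // Prints (3.0, 4.0), recovering the original point.
    dbg!(5.0 * angle.cos(), 5.0 * angle.sin());
}
```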
## Testing
Not really needed, you can compare the `f32::atan2` docs to the
`bevy::math::ops::atan2` docs to see the problem is obvious. If a test
is required I could add a short one.
## Migration Guide
I'm not sure if this counts as a breaking change, since the
implementation clearly meant to use `f32::atan2` directly, so it was
really just the parameter names that were wrong.
This fixes a minor copy-paste mistake in the `FontAtlasSet::is_empty`
method's documentation.
# Objective
- Correct the documentation for that method.
## Solution
- Remove the copy + paste'd docs from `FontAtlasSet::is_empty` and add
something similar to
`alloc::collections::btree_set::BTreeSet::is_empty`.
## Testing
- No testable changes were made. However, the two tests in the
`bevy_text` module still pass.
A small documentation improvement. The description was copied from
insert_children. I changed the documentation to be singular instead of
plural when referring to the child in add_child.
# Objective
- The description was copied from insert_children and still refers to
the child being added as plural children
## Solution
- Description now has child in singular form.
## Testing
- N/A
---------
Co-authored-by: IQuick 143 <IQuick143cz@gmail.com>
# Objective
We currently have no benchmarks for large worlds with many entities,
components and systems.
Having a benchmark for a world with many components is especially useful
for the performance improvements needed for relations. This is also a
response to this [comment from
cart](https://github.com/bevyengine/bevy/pull/14385#issuecomment-2311292546).
> I'd like both a small bevy_ecs-scoped executor benchmark that
generates thousands of components used by hundreds of systems.
## Solution
I use dynamic components and systems to construct a benchmark with
2000 components, 4000 systems, and 10000 entities.
## Some notes
- ~~I use a lot of random entities, which creates unpredictable
performance; I should use a seeded PRNG.~~
- Not entirely sure if everything is run concurrently currently. And
there are many conflicts, meaning there's probably a lot of
first-come-first-serve going on. Not entirely sure if these benchmarks
are very reproducible.
- Maybe add some more safety comments
- Also, `component_reads_and_writes()` is about to be deprecated in #16339,
but there's no other way to currently do what I'm trying to do.
---------
Co-authored-by: Chris Russell <8494645+chescock@users.noreply.github.com>
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
# Objective
Fixes #16192
## Solution
I renamed `Pointer<Down>`/`Pointer<Up>` to `Pointer<Pressed>`/`Pointer<Released>`
and then resolved all the resulting errors.
Renamed variables like `is_down` to `is_pressed` to maintain
consistency.
Modified the docs in places where 'down'/'up' were used to maintain
consistency.
## Testing
I haven't tested this in any way besides the checks from rust-analyzer
and the examples in the `examples/` directory.
---
## Migration Guide
### `bevy_picking/src/pointer.rs`:
#### `enum PressDirection`:
- `PressDirection::Down` changes to `PressDirection::Pressed`.
- `PressDirection::Up` changes to `PressDirection::Released`.
These changes are also relevant when working with `enum PointerAction`.
### `bevy_picking/src/events.rs`:
The clicking and pressing events in `events.rs` change from `Down`,
`Up`, `Click` to `Pressed`, `Released`, `Click`.
- `struct Down` changes to `struct Pressed` - fires when a pointer
button is pressed over the 'target' entity.
- `struct Up` changes to `struct Released` - fires when a pointer button
is released over the 'target' entity.
- `struct Click` now fires when a pointer sends a Pressed event followed
by a Released event on the same 'target'.
- `struct DragStart` now fires when the 'target' entity receives a
pointer Pressed event followed by a pointer Move event.
- `struct DragEnd` now fires when the 'target' entity is being dragged
and receives a pointer Released event.
- `PickingEventWriters<'w>::down_events: EventWriter<'w, Pointer<Down>>`
changes to `PickingEventWriters<'w>::pressed_events: EventWriter<'w,
Pointer<Pressed>>`.
- `PickingEventWriters<'w>::up_events` changes to
`PickingEventWriters<'w>::released_events`.
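As a quick illustration of the rename at a call site (the entity and observer body are placeholders):
```rust
// Before: .observe(|trigger: Trigger<Pointer<Down>>| { /* ... */ })
commands
    .spawn(Sprite::default())
    .observe(|_trigger: Trigger<Pointer<Pressed>>| {
        // React to the press here.
    });
```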
---------
Co-authored-by: Harun Ibram <harun.ibram@outlook.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Updates the requirements on
[thiserror](https://github.com/dtolnay/thiserror) to permit the latest
version.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/dtolnay/thiserror/releases">thiserror's
releases</a>.</em></p>
<blockquote>
<h2>2.0.3</h2>
<ul>
<li>Support the same Path field being repeated in both Debug and Display
representation in error message (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/383">#383</a>)</li>
<li>Improve error message when a format trait used in error message is
not implemented by some field (<a
href="https://redirect.github.com/dtolnay/thiserror/issues/384">#384</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="15fd26e476"><code>15fd26e</code></a>
Release 2.0.3</li>
<li><a
href="7046023130"><code>7046023</code></a>
Simplify how has_bonus_display is accumulated</li>
<li><a
href="9cc1d0b251"><code>9cc1d0b</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/thiserror/issues/384">#384</a>
from dtolnay/nowrap</li>
<li><a
href="1d040f358a"><code>1d040f3</code></a>
Use Var wrapper only for Pointer formatting</li>
<li><a
href="6a6132d79b"><code>6a6132d</code></a>
Extend no-display ui test to cover another fmt trait</li>
<li><a
href="a061beb9dc"><code>a061beb</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/thiserror/issues/383">#383</a>
from dtolnay/both</li>
<li><a
href="63882935be"><code>6388293</code></a>
Support Display and Debug of same path in error message</li>
<li><a
href="dc0359eeec"><code>dc0359e</code></a>
Defer binding_value construction</li>
<li><a
href="520343e37d"><code>520343e</code></a>
Add test of Debug and Display of paths</li>
<li><a
href="49be39dee1"><code>49be39d</code></a>
Release 2.0.2</li>
<li>Additional commits viewable in <a
href="https://github.com/dtolnay/thiserror/compare/1.0.0...2.0.3">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
# Objective
- This PR adds the ability to determine whether a `Polygon<N>` or
`BoxedPolygon` is simple (i.e., not self-intersecting) by calling
`my_polygon.is_simple()`.
- This may be useful information for users to determine whether their
polygons are 'valid' and will be useful when adding meshing for
polygons.
- As such this is a step towards fixing #15255
## Solution
- Implemented the Shamos-Hoey algorithm in its own module `polygon`.
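For usage, the new check looks roughly like this (a sketch; vertex values are arbitrary and the `Polygon::new` constructor is assumed):
```rust
use bevy_math::{primitives::Polygon, Vec2};

// A convex quad: no two non-adjacent edges intersect.
let square = Polygon::<4>::new([
    Vec2::new(0.0, 0.0),
    Vec2::new(1.0, 0.0),
    Vec2::new(1.0, 1.0),
    Vec2::new(0.0, 1.0),
]);
assert!(square.is_simple());

// A "bowtie" whose diagonal edges cross is not simple.
let bowtie = Polygon::<4>::new([
    Vec2::new(0.0, 0.0),
    Vec2::new(1.0, 1.0),
    Vec2::new(1.0, 0.0),
    Vec2::new(0.0, 1.0),
]);
assert!(!bowtie.is_simple());
```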
## Testing
- Tests are included, and can be verified visually.
---
## Performance
- The Shamos-Hoey algorithm runs in O(n * log n)
- In reality, the results look more linear to me.
- Determining simplicity for a simple polygon (the worst case) with less
than 100 vertices takes less than 0.2ms.
![image](https://github.com/user-attachments/assets/23c62234-abdc-4710-a3b4-feaad5929133)
# Objective
Currently function reflection requires users to manually monomorphize
their generic functions. For example:
```rust
fn add<T: Add<Output=T>>(a: T, b: T) -> T {
    a + b
}
// We have to specify the type of `T`:
let reflect_add = add::<i32>.into_function();
```
This PR doesn't aim to solve that problem—this is just a limitation in
Rust. However, it also means that reflected functions can only ever work
for a single monomorphization. If we wanted to support other types for
`T`, we'd have to create a separate function for each one:
```rust
let reflect_add_i32 = add::<i32>.into_function();
let reflect_add_u32 = add::<u32>.into_function();
let reflect_add_f32 = add::<f32>.into_function();
// ...
```
So in addition to requiring manual monomorphization, we also lose the
benefit of having a single function handle multiple argument types.
If a user wanted to create a small modding script that utilized function
reflection, they'd have to either:
- Store all sets of supported monomorphizations and require users to
call the correct one
- Write out some logic to find the correct function based on the given
arguments
While the first option would work, it wouldn't be very ergonomic. The
second option is better, but it adds additional complexity to the user's
logic—complexity that `bevy_reflect` could instead take on.
## Solution
Introduce [function
overloading](https://en.wikipedia.org/wiki/Function_overloading).
A `DynamicFunction` can now be overloaded with other `DynamicFunction`s.
We can rewrite the above code like so:
```rust
let reflect_add = add::<i32>
    .into_function()
    .with_overload(add::<u32>)
    .with_overload(add::<f32>);
```
When invoked, the `DynamicFunction` will attempt to find a matching
overload for the given set of arguments.
And while I went into this PR only looking to improve generic function
reflection, I accidentally added support for variadic functions as well
(hence why I use the broader term "overload" over "generic").
```rust
// Supports 1 to 4 arguments
let multiply_all = (|a: i32| a)
    .into_function()
    .with_overload(|a: i32, b: i32| a * b)
    .with_overload(|a: i32, b: i32, c: i32| a * b * c)
    .with_overload(|a: i32, b: i32, c: i32, d: i32| a * b * c * d);
```
This is simply an added bonus to this particular implementation. ~~Full
variadic support (i.e. allowing for an indefinite number of arguments)
will be added in a later PR.~~ I actually decided to limit the maximum
number of arguments to 63 to support faster lookups, a reduced memory
footprint, and faster cloning.
### Alternatives & Rationale
I explored a few options for handling generic functions. This PR is the
one I feel the most confident in, but I feel I should mention the others
and why I ultimately didn't move forward with them.
#### Adding `GenericDynamicFunction`
**TL;DR:** Adding a distinct `GenericDynamicFunction` type unnecessarily
splits and complicates the API.
<details>
<summary>Details</summary>
My initial explorations involved a dedicated `GenericDynamicFunction` to
contain and handle the mappings.
This was initially started back when `DynamicFunction` was distinct from
`DynamicClosure`. My goal was to not prevent us from being able to
somehow make `DynamicFunction` implement `Copy`. But once we reverted
back to a single `DynamicFunction`, that became a non-issue.
But that aside, the real problem was that it created a split in the API.
If I'm using a third-party library that uses function reflection, I have
to know whether to request a `DynamicFunction` or a
`GenericDynamicFunction`. I might not even know ahead of time which one
I want. It might need to be determined at runtime.
And if I'm creating a library, I might want a type to contain both
`DynamicFunction` and `GenericDynamicFunction`. This might not be
possible if, for example, I need to store the function in a `HashMap`.
The other concern is with `IntoFunction`. Right now `DynamicFunction`
trivially implements `IntoFunction` since it can just return itself. But
what should `GenericDynamicFunction` do? It could return itself wrapped
into a `DynamicFunction`, but then the API for `DynamicFunction` would
have to account for this. So then what was the point of having a
separate `GenericDynamicFunction` anyways?
And even apart from `IntoFunction`, there's nothing stopping someone
from manually creating a generic `DynamicFunction` through lying about
its `FunctionInfo` and wrapping a `GenericDynamicFunction`.
That being said, this is probably the "best" alternative if we added a
`Function` trait and stored functions as `Box<dyn Function>`.
However, I'm not convinced we gain much from this. Sure, we could keep
the API for `DynamicFunction` the same, but consumers of `Function` will
need to account for `GenericDynamicFunction` regardless (e.g. handling
multiple `FunctionInfo`, a ranged argument count, etc.). And for all
cases, except where using `DynamicFunction` directly, you end up
treating them all like `GenericDynamicFunction`.
Right now, if we did go with `GenericDynamicFunction`, the only major
benefit we'd gain would be saving 24 bytes. If memory ever does become
an issue here, we could swap over. But I think for the time being it's
better for us to pursue a clearer mental model and end-user ergonomics
through unification.
</details>
#### Using the `FunctionRegistry`
**TL;DR:** Having overloads only exist in the `FunctionRegistry`
unnecessarily splits and complicates the API.
<details>
<summary>Details</summary>
Another idea was to store the overloads in the `FunctionRegistry`. Users
would then just call functions directly through the registry (i.e.
`registry.call("my_func", my_args)`).
I didn't go with this option because of how it specifically relies on
the functions being registered. You'd not only always need access to the
registry, but you'd need to ensure that the functions you want to call
are even registered.
It also means you can't just store a generic `DynamicFunction` on a
type. Instead, you'll need to store the function's name and use that to
look up the function in the registry—even if it's only ever used by that
type.
Doing so also removes all the benefits of `DynamicFunction`, such as the
ability to pass it to functions accepting `IntoFunction`, modify it if
needed, and so on.
Like `GenericDynamicFunction` this introduces a split in the ecosystem:
you either store `DynamicFunction`, store a string to look up the
function, or force `DynamicFunction` to wrap your generic function
anyways. Or worse yet: have `DynamicFunction` wrap the lookup function
using `FunctionRegistryArc`.
</details>
#### Generic `ArgInfo`
**TL;DR:** Allowing `ArgInfo` and `ReturnInfo` to store the generic
information introduces a footgun when interpreting `FunctionInfo`.
<details>
<summary>Details</summary>
Regardless of how we represent a generic function, one thing is clear:
we need to be able to represent the information for such a function.
This PR does so by introducing a `FunctionInfoType` enum to wrap one or
more `FunctionInfo` values.
Originally, I didn't do this. I had `ArgInfo` and `ReturnInfo` allow for
generic types. This allowed us to have a single `FunctionInfo` to
represent our function, but then I realized that it actually lies about
our function.
If we have two `ArgInfo` that both allow for either `i32` or `u32`, what
does this tell us about our function? It turns out: nothing! We can't
know whether our function takes `(i32, i32)`, `(u32, u32)`, `(i32,
u32)`, or `(u32, i32)`.
It therefore makes more sense to just represent a function with multiple
`FunctionInfo` since that's really what it's made up of.
</details>
#### Flatten `FunctionInfo`
**TL;DR:** Flattening removes additional per-overload information some
users may desire and prevents us from adding more information in the
future.
<details>
<summary>Details</summary>
Why don't we just flatten multiple `FunctionInfo` into just one that can
contain multiple signatures?
This is something we could do, but I decided against it for a few
reasons:
- The only thing we'd be able to get rid of for each signature would be
the `name`. While not enough to not do it, it doesn't really suggest we
*have* to either.
- Some consumers may want access to the names of the functions that make
up the overloaded function. For example, to track a bug where an
undesirable function is being added as an overload. Or to more easily
locate the original function of an overload.
- We may eventually allow for more information to be stored on
`FunctionInfo`. For example, we may allow for documentation to be stored
like we do for `TypeInfo`. Consumers of this documentation may want
access to the documentation of each overload as they may provide
documentation specific to that overload.
</details>
## Testing
This PR adds lots of tests and benchmarks, and also adds to the example.
To run the tests:
```
cargo test --package bevy_reflect --all-features
```
To run the benchmarks:
```
cargo bench --bench reflect_function --all-features
```
To run the example:
```
cargo run --package bevy --example function_reflection --all-features
```
### Benchmarks
One of my goals with this PR was to leave the typical case of
non-overloaded functions largely unaffected by the changes introduced in
this PR. ~~And while the static size of `DynamicFunction` has increased
by 17% (from 136 to 160 bytes), the performance has generally stayed the
same~~ The static size of `DynamicFunction` has decreased from 136 to
112 bytes, while calling performance has generally stayed the same:
| | `main` | 7d293ab | 252f3897d |
|-------------------------------------|--------|---------|-----------|
| `into/function` | 37 ns | 46 ns | 142 ns |
| `with_overload/01_simple_overload` | - | 149 ns | 268 ns |
| `with_overload/01_complex_overload` | - | 332 ns | 431 ns |
| `with_overload/10_simple_overload` | - | 1266 ns | 2618 ns |
| `with_overload/10_complex_overload` | - | 2544 ns | 4170 ns |
| `call/function` | 57 ns | 58 ns | 61 ns |
| `call/01_simple_overload` | - | 255 ns | 242 ns |
| `call/01_complex_overload` | - | 595 ns | 431 ns |
| `call/10_simple_overload` | - | 740 ns | 699 ns |
| `call/10_complex_overload` | - | 1824 ns | 1618 ns |
For the overloaded function tests, the leading number indicates how many
overloads there are: `01` indicates 1 overload, `10` indicates 10
overloads. The `complex` cases have 10 unique generic types and 10
arguments, compared to the `simple` 1 generic type and 2 arguments.
I aimed to prioritize the performance of calling the functions over
creating them, hence creation speed tends to be a bit slower.
There may be other optimizations we can look into but that's probably
best saved for a future PR.
The important bit is that the standard ~~`into/function`~~ and
`call/function` benchmarks show minimal regressions. Since the latest
changes, `into/function` does have some regressions, but again the
priority was `call/function`. We can probably optimize `into/function`
if needed in the future.
---
## Showcase
Function reflection now supports [function
overloading](https://en.wikipedia.org/wiki/Function_overloading)! This
can be used to simulate generic functions:
```rust
fn add<T: Add<Output=T>>(a: T, b: T) -> T {
    a + b
}

let reflect_add = add::<i32>
    .into_function()
    .with_overload(add::<u32>)
    .with_overload(add::<f32>);

let args = ArgList::default().push_owned(25_i32).push_owned(75_i32);
let result = reflect_add.call(args).unwrap().unwrap_owned();
assert_eq!(result.try_take::<i32>().unwrap(), 100);

let args = ArgList::default().push_owned(25.0_f32).push_owned(75.0_f32);
let result = reflect_add.call(args).unwrap().unwrap_owned();
assert_eq!(result.try_take::<f32>().unwrap(), 100.0);
```
You can also simulate variadic functions:
```rust
#[derive(Reflect, PartialEq, Debug)]
struct Player {
    name: Option<String>,
    health: u32,
}

// Creates a `Player` with one of the following:
// - No name and 100 health
// - A name and 100 health
// - No name and custom health
// - A name and custom health
let create_player = (|| Player {
    name: None,
    health: 100,
})
.into_function()
.with_overload(|name: String| Player {
    name: Some(name),
    health: 100,
})
.with_overload(|health: u32| Player {
    name: None,
    health,
})
.with_overload(|name: String, health: u32| Player {
    name: Some(name),
    health,
});

let args = ArgList::default()
    .push_owned(String::from("Urist"))
    .push_owned(55_u32);

let player = create_player
    .call(args)
    .unwrap()
    .unwrap_owned()
    .try_take::<Player>()
    .unwrap();

assert_eq!(
    player,
    Player {
        name: Some(String::from("Urist")),
        health: 55
    }
);
```
This commit removes the logic that attempted to keep the
`MeshInputUniform` buffer contiguous. Not only was it slow and complex,
but it was also incorrect, which caused #16686 and #16690. I changed the
logic to simply maintain a free list of unused slots in the buffer and
preferentially fill them when pushing new mesh input uniforms.
Closes #16686.
Closes #16690.
# Objective
- A `Trigger` has multiple associated `Entity`s - the entity observing
the event, and the entity that was targeted by the event.
- The field `entity: Entity` encodes no semantic information about what
the entity is used for, you can already tell that it's an `Entity` by
the type signature!
## Solution
- Rename `trigger.entity()` to `trigger.target()`
---
## Changelog
- `Trigger`s are associated with multiple entities. `Trigger::entity()`
has been renamed to `Trigger::target()` to reflect the semantics of the
entity being returned.
## Migration Guide
- Rename `Trigger::entity()` to `Trigger::target()`.
- Rename `ObserverTrigger::entity` to `ObserverTrigger::target`
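A small sketch of the rename at a call site (`Explode` is a placeholder event type):
```rust
use bevy::prelude::*;

#[derive(Event)]
struct Explode;

// Before: `trigger.entity()`
fn on_explode(trigger: Trigger<Explode>) {
    let target = trigger.target();
    info!("{target} is exploding");
}
```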
# Objective
Fixes #16610, related to #16702
## Solution
Upgrade typos and its configuration
## Testing
- Did you test these changes? If so, how? No
- Are there any parts that need more testing? No
- How can other people (reviewers) test your changes? Is there anything
specific they need to know? No
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test? Not applicable
# Objective
Fixes #16706
## Solution
- Added new method: `try_resource_scope` which returns `None` if the
requested resource doesn't exist.
- Changed the `resource_scope` test to use `try_resource_scope` as well
to test for the `None` case.
---
## Showcase
```rust
world.try_resource_scope::<MyResource, _>(|world, mut my_resource| {
    // do something with the resource if it exists
});
```
# Objective
Fixes typos in bevy project, following suggestion in
https://github.com/bevyengine/bevy-website/pull/1912#pullrequestreview-2483499337
## Solution
I used https://github.com/crate-ci/typos to find them.
I included only the ones that feel undebatable to me, but I am not a
game engine expert, so maybe some terms are expected.
I left out the following typos:
- `reparametrize` => `reparameterize`: There are a lot of occurrences, I
believe this was expected
- `semicircles` => `hemicircles`: 2 occurrences, may mean something
specific in geometry
- `invertation` => `inversion`: may mean something specific
- `unparented` => `parentless`: may mean something specific
- `metalness` => `metallicity`: may mean something specific
## Testing
- Did you test these changes? If so, how? I did not test the changes;
most changes are related to raw text. I expect the others to be tested
by the CI.
- Are there any parts that need more testing? I do not think so.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know? To me there is nothing to test
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
---
## Migration Guide
> This section is optional. If there are no breaking changes, you can
delete this section.
(kept in case I include the `reparameterize` change here)
- If this PR is a breaking change (relative to the last release of
Bevy), describe how a user might need to migrate their code to support
these changes
- Simply adding new functionality is not a breaking change.
- Fixing behavior that was definitely a bug, rather than a questionable
design choice is not a breaking change.
## Questions
- [x] Should I include the above typos? No
(https://github.com/bevyengine/bevy/pull/16702#issuecomment-2525271152)
- [ ] Should I add `typos` to the CI? (I will check how to configure it
properly)
This project looks awesome, I really enjoy reading the progress made,
thanks to everyone involved.
# Objective
This PR updates breakout to use the new 0.15 Required Components feature
instead of the Bundle.
It also adds more information in the comment about where to find more info
about Required Components.
## Solution
Replace `#[derive(Bundle)]` with a new `Wall` component and the
`#[require()]` macro to include the other components.
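Roughly, the new pattern looks like this (a sketch; the exact components `Wall` requires in the example may differ):
```rust
use bevy::prelude::*;

// Stand-in for breakout's collider marker component.
#[derive(Component, Default)]
struct Collider;

// The bundle is replaced by a component that requires what the bundle used to carry.
#[derive(Component)]
#[require(Sprite, Transform, Collider)]
struct Wall;
```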
## Testing
Tested with `cargo test`, and also tested the game manually with `cargo
run --example breakout`. It looks to me like it works as it did
before the changes. Tested on Arch Linux, Wayland.
---------
Co-authored-by: Arnav Mummineni <45217840+RCoder01@users.noreply.github.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
This commit adds support for *multidraw*, which is a feature that allows
multiple meshes to be drawn in a single drawcall. `wgpu` currently
implements multidraw on Vulkan, so this feature is only enabled there.
Multiple meshes can be drawn at once if they're in the same vertex and
index buffers and are otherwise placed in the same bin. (Thus, for
example, at present the materials and textures must be identical, but
see #16368.) Multidraw is a significant performance improvement during
the draw phase because it reduces the number of rebindings, as well as
the number of drawcalls.
This feature is currently only enabled when GPU culling is used: i.e.
when `GpuCulling` is present on a camera. Therefore, if you run for
example `scene_viewer`, you will not see any performance improvements,
because `scene_viewer` doesn't add the `GpuCulling` component to its
camera.
Additionally, the multidraw feature is only implemented for opaque 3D
meshes and not for shadows or 2D meshes. I plan to make GPU culling the
default and to extend the feature to shadows in the future. Also, in the
future I suspect that polyfilling multidraw on APIs that don't support
it will be fruitful, as even without driver-level support use of
multidraw allows us to avoid expensive `wgpu` rebindings.
# Objective
The "mehses" typo introduced in a review comment
[here](https://github.com/bevyengine/bevy/pull/16657#discussion_r1870834999)
hurts my soul, it was merged right as I was about to comment about it :(
## Solution
Fix it :D
(also, why didn't the CI typo checker catch this?)
# Objective
- Remove `derive_more`'s error derivation and replace it with
`thiserror`
## Solution
- Added `derive_more`'s `error` feature to `deny.toml` to prevent it
sneaking back in.
- Reverted to `thiserror` error derivation
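For reference, a sketch of the `thiserror`-style derivation this reverts to (the error type is illustrative, not a specific Bevy type):
```rust
use thiserror::Error;

#[derive(Error, Debug)]
pub enum TextureLoadError {
    #[error("unsupported texture format: {0}")]
    UnsupportedFormat(String),
    #[error("failed to read texture bytes")]
    Io(#[from] std::io::Error),
}
```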
## Notes
Merge conflicts were too numerous to revert the individual changes, so
this reversion was done manually. Please scrutinise carefully during
review.
## Objective
I was resolving a conflict between #16132 and my PR #15929 and thought
the `clone_entity` commands made more sense in `EntityCommands`.
## Solution
Moved `Commands::clone_entity` to `EntityCommands::clone`, moved
`Commands::clone_entity_with` to `EntityCommands::clone_with`.
## Testing
Ran the two tests that used the old methods.
## Showcase
```rust
// Create a new entity and keep its EntityCommands.
let mut entity = commands.spawn((ComponentA(10), ComponentB(20)));
// Create a clone of the first entity
let mut entity_clone = entity.clone();
```
The only potential downside is that the method name is now the same as
the one from the `Clone` trait. `EntityCommands` doesn't implement
`Clone` though, so there's no actual conflict.
Maybe I'm biased because this'll work better with my PR, but I think the
UX is nicer regardless.
# Objective
Volumetric fog was broken by #13746.
Looks like this particular shader just got missed. I don't see any other
instances of `unpack_offset_and_counts` in the codebase.
```
2024-12-06T03:18:42.297494Z ERROR bevy_render::render_resource::pipeline_cache: failed to process shader:
error: no definition in scope for identifier: 'bevy_pbr::clustered_forward::unpack_offset_and_counts'
┌─ crates/bevy_pbr/src/volumetric_fog/volumetric_fog.wgsl:312:29
│
312 │ let offset_and_counts = bevy_pbr::clustered_forward::unpack_offset_and_counts(cluster_index);
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ unknown identifier
│
= no definition in scope for identifier: 'bevy_pbr::clustered_forward::unpack_offset_and_counts'
```
## Solution
Use `unpack_clusterable_object_index_ranges` to get the indices for
point/spot lights.
## Testing
`cargo run --example volumetric_fog`
`cargo run --example fog_volumes`
`cargo run --example scrolling_fog`
# Objective
The weekly beta compile test workflow can be useful to people who have
forked the repository.
However, there are a few small issues with how this workflow is
currently set up:
* Scheduled workflows run on the base/default branch, with no way
(currently) to change this. On forks, the base/default branch is usually
kept in sync with the main Bevy repository, meaning that running this
workflow on forks would just be a waste of resources. (See
[here](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#schedule)
- "Scheduled workflows will only run on the default branch.")
* Even if there was a way to change the branch that a scheduled workflow
runs on, forks default to not having an issue tracker.
* Even in the event that a fork's issue tracker is enabled, most users
probably don't want to receive automated issues in the event of a
compilation failure.
Because of these reasons, this workflow is irrelevant for 99% of forks.
## Solution
We now skip all jobs on the `Weekly beta compile test` workflow, if we
detect we're running this workflow on a fork.
## Testing
Testing? Who needs testing? (Seriously though, I made sure the syntax is
correct.)
# Objective
- Contributes to #15460
## Solution
- Added the following features:
- `std` (default)
- `async_executor` (default)
- `edge_executor`
- `critical-section`
- `portable-atomic`
- Added [`edge-executor`](https://crates.io/crates/edge-executor) as a
`no_std` alternative to `async-executor`.
- Updated the `single_threaded_task_pool` to work in `no_std`
environments by gating its reliance on `thread_local`.
## Testing
- Added to `compile-check-no-std` CI command
## Notes
- In previous iterations of this PR, a custom `async-executor`
alternative was vendored in. This raised concerns around maintenance and
testing. In this iteration, an existing version of that same vendoring
is now used, but _only_ in `no_std` contexts. For existing `std`
contexts, the original `async-executor` is used.
- Due to the way statics work, certain `TaskPool` operations have added
restrictions around `Send`/`Sync` in `no_std`. This is because there
isn't a straightforward way to create a thread-local in `no_std`. If
these added constraints pose an issue we can revisit this at a later
date.
- If a user enables both the `async_executor` and `edge_executor`
features, we will default to using `async-executor`. Since enabling
`async_executor` requires `std`, we can safely assume we are in an `std`
context and use the original library.
---------
Co-authored-by: Mike <2180432+hymm@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
This adds a few minor items which were left out of the previous PR:
- Added synchronization from bevy_input_focus to bevy_a11y.
- Initialize InputFocusVisible resource.
- Make `input_focus` available from `bevy` module.
I've tested this using VoiceOver on Mac OS. It works, but it needs
considerable polish.
# Objective
- Fixes#16498
## Solution
- Trivially swaps ordering of hooks and observers for all call sites
where they are triggered for `on_replace` or `on_remove`
## Testing
- Just CI
---
## Migration Guide
The order of hooks and observers for `on_replace` and `on_remove` has
been swapped. Observers are now run before hooks. This is a more natural
ordering where the removal ordering is inverted compared to the
insertion ordering.
# Objective
Error handling in bevy is hard. See for reference
https://github.com/bevyengine/bevy/issues/11562,
https://github.com/bevyengine/bevy/issues/10874 and
https://github.com/bevyengine/bevy/issues/12660. The goal of this PR is
to make it better, by allowing users to optionally return `Result` from
systems as outlined by Cart in
<https://github.com/bevyengine/bevy/issues/14275#issuecomment-2223708314>.
## Solution
This PR introduces a new `ScheduleSystem` type to represent systems that
can be added to schedules. Instances of this type contain either an
infallible `BoxedSystem<(), ()>` or a fallible `BoxedSystem<(),
Result>`. `ScheduleSystem` implements `System<In = (), Out = Result>` and
replaces all uses of `BoxedSystem` in schedules. The async executor now
receives a result after executing a system, which for infallible systems
is always `Ok(())`. Currently it ignores this result, but more useful
error handling could also be implemented.
Aliases for `Error` and `Result` have been added to the `bevy_ecs`
prelude, as well as const `OK` which new users may find more friendly
than `Ok(())`.
## Testing
- Currently there are no actual semantic changes that really require
new tests, but I added a basic one just to make sure we don't break
stuff in the future.
- The behavior of existing systems is totally unchanged, including
logging.
- All of the existing systems tests pass, and I have not noticed
anything strange while playing with the examples
## Showcase
The following minimal example prints "hello world" once, then completes.
```rust
use bevy::prelude::*;

fn main() {
    App::new().add_systems(Update, hello_world_system).run();
}

fn hello_world_system() -> Result {
    println!("hello world");
    Err("string")?;
    println!("goodbye world");
    OK
}
```
## Migration Guide
This change should be pretty much non-breaking, except for users who
have implemented their own custom executors. Those users should use
`ScheduleSystem` in place of `BoxedSystem<(), ()>` and import the
`System` trait where needed. They can choose to do whatever they wish
with the result.
## Current Work
+ [x] Fix tests & doc comments
+ [x] Write more tests
+ [x] Add examples
+ [X] Draft release notes
## Draft Release Notes
As of this release, systems can now return results.
First a bit of background: Bevy has historically expected systems to
return the empty type `()`. While this makes sense in the context of the
ecs, it's at odds with how error handling is typically done in rust:
returning `Result::Error` to indicate failure, and using the
short-circuiting `?` operator to propagate that error up the call stack
to where it can be properly handled. Users of functional languages will
tell you this is called "monadic error handling".
Not being able to return `Results` from systems left bevy users with a
quandary. They could add custom error handling logic to every system, or
manually pipe every system into an error handler, or perhaps sidestep
the issue with some combination of fallible assignments, logging, macros,
and early returns. Often, users would just litter their systems with
unwraps and possible panics.
While any one of these approaches might be fine for a particular user,
each of them has their own drawbacks, and none makes good use of the
language. Serious issues could also arise when two different crates
used by the same project made different choices about error handling.
Now, by returning results, systems can defer error handling to the
application itself. It looks like this:
```rust
// Previous, handling internally
app.add_systems(Update, my_system);
fn my_system(query: Query<&Window>) {
    let Ok(window) = query.get_single() else {
        return;
    };
    // ... do something to the window here
}

// Previous, handling externally
app.add_systems(Update, my_system.pipe(my_error_handler));
fn my_system(query: Query<&Window>) -> Result<(), impl Error> {
    let window = query.get_single()?;
    // ... do something to the window here
    Ok(())
}

// Previous, panicking
app.add_systems(Update, my_system);
fn my_system(query: Query<&Window>) {
    let window = query.single();
    // ... do something to the window here
}

// Now
app.add_systems(Update, my_system);
fn my_system(query: Query<&Window>) -> Result {
    let window = query.get_single()?;
    // ... do something to the window here
    Ok(())
}
```
There are currently some limitations. Systems must either return `()` or
`Result<(), Box<dyn Error + Send + Sync + 'static>>`, with no
in-between. Results are also ignored by default, and though implementing
a custom handler is possible, it involves writing your own custom ecs
executor (which is *not* recommended).
Systems should return errors when they cannot perform their normal
behavior. In turn, errors returned to the executor while running the
schedule will (eventually) be treated as unexpected. Users and library
authors should prefer to return errors for anything that disrupts the
normal expected behavior of a system, and should only handle expected
cases internally.
We have big plans for improving error handling further:
+ Allowing users to change the error handling logic of the default
executors.
+ Adding source tracking and optional backtraces to errors.
+ Possibly adding tracing-levels (Error/Warn/Info/Debug/Trace) to
errors.
+ Generally making the default error logging more helpful and
intelligent.
+ Adding monadic system combinators for fallible systems.
+ Possibly removing all panicking variants from our api.
---------
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
- This fixes raycast picking with lyon
- reverse winding of 2D meshes currently results in them being rendered
but not pickable as the raycast passes through the backface and would
only hit "from below"
## Solution
- Disables backface culling for Mesh2d
## Testing
- Tested picking with bevy_prototype_lyon
- Could probably use testing with Mesh3d (should not be affected) and
SimplifiedMesh (no experience with that, could have the same issue if
used for 2D?)
---------
Co-authored-by: Aevyrie <aevyrie@gmail.com>
The bindless PR (#16368) broke some examples:
* `specialized_mesh_pipeline` and `custom_shader_instancing` failed
because they expect to be able to render a mesh with no material, by
overriding enough of the render pipeline to be able to do so. This PR
fixes the issue by restoring the old behavior in which we extract meshes
even if they have no material.
* `texture_binding_array` broke because it doesn't implement
`AsBindGroup::unprepared_bind_group`. This was tricky to fix because
there's a very good reason why `texture_binding_array` doesn't implement
that method: there's no sensible way to do so with `wgpu`'s current
bindless API, due to its multiple levels of borrowed references. To fix
the example, I split `MaterialBindGroup` into
`MaterialBindlessBindGroup` and `MaterialNonBindlessBindGroup`, and
allow direct custom implementations of `AsBindGroup::as_bind_group` for
the latter type of bind groups. To opt in to the new behavior, return
the `AsBindGroupError::CreateBindGroupDirectly` error from your
`AsBindGroup::unprepared_bind_group` implementation, and Bevy will call
your custom `AsBindGroup::as_bind_group` method as before.
## Migration Guide
* Bevy will now unconditionally call
`AsBindGroup::unprepared_bind_group` for your materials, so you must no
longer panic in that function. Instead, return the new
`AsBindGroupError::CreateBindGroupDirectly` error, and Bevy will fall
back to calling `AsBindGroup::as_bind_group` as before.
# Objective
- Contributes to #15460
## Solution
- Added the following new features:
- `std` (default)
- `alloc`
- `encase` (default)
- `libm`
## Testing
- Added to `compile-check-no-std` CI command
## Notes
- `ColorCurve` requires `alloc` due to how the underlying `EvenCore`
type works.
- `Srgba::to_hex` requires `alloc` to return a `String`.
- This was otherwise a _very_ simple change
# Objective
- Allow bevy_sprite_picking backend to pass through transparent sections
of the sprite.
- Fixes#14929
## Solution
- After sprite picking detects the cursor is within a sprite's rect,
check the pixel at that location on the texture and check that it meets
an optional transparency cutoff. This change was originally created for
mod_picking on bevy 0.14
(https://github.com/aevyrie/bevy_mod_picking/pull/373)
## Testing
- Ran Sprite Picking example to check it was working both with
transparency enabled and disabled
- ModPicking version is currently in use in my own isometric game where
this has been an extremely noticeable issue
## Showcase
![Sprite Picking
Text](https://github.com/user-attachments/assets/76568c0d-c359-422b-942d-17c84d3d3009)
## Migration Guide
Sprite picking now ignores transparent regions (with an alpha value less
than or equal to 0.1). To configure this, modify the
`SpriteBackendSettings` resource.
---------
Co-authored-by: andriyDev <andriydzikh@gmail.com>