# Objective
- Standard Material is starting to run out of samplers (it currently uses
13 with no additional features enabled; I think in 0.13 it was 12).
- This change adds a new feature switch, modelled on the other ones
which add features to Standard Material, to turn off the new anisotropy
feature by default.
## Solution
- A new cargo feature plus a texture define
## Testing
- Anisotropy example still works fine
- Other samples work fine
- Standard Material now takes 12 samplers by default on my Mac instead
of 13
## Migration Guide
- Add the `pbr_anisotropy_texture` feature if you are using that texture in
any standard materials.
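For example, in `Cargo.toml` (the version shown is illustrative):
```toml
[dependencies]
bevy = { version = "0.14", features = ["pbr_anisotropy_texture"] }
```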
---------
Co-authored-by: John Payne <20407779+johngpayne@users.noreply.github.com>
# Objective
The bounds for query iterators are quite intimidating.
## Solution
With Rust 1.79, [associated type
bounds](https://github.com/rust-lang/rust/pull/122055/) stabilized,
which can simplify the bounds slightly.
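For illustration only (not Bevy's actual bounds), the stabilized syntax lets a nested bound move inline:
```rust
// Before 1.79: the bound on the associated type needs its own where-clause.
fn sum_lengths<I>(iter: I) -> usize
where
    I: Iterator,
    I::Item: AsRef<str>,
{
    iter.map(|s| s.as_ref().len()).sum()
}

// With associated type bounds, the same constraint can be written inline.
fn sum_lengths_short<I: Iterator<Item: AsRef<str>>>(iter: I) -> usize {
    iter.map(|s| s.as_ref().len()).sum()
}
```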
# Objective
- https://github.com/bevyengine/bevy/pull/13136 changed the default
value of `-Zshare-generics=n` to `-Zshare-generics=y` on Windows
- New users are encouraged to use dynamic builds and LLD when setting up
Bevy
- New users are also encouraged to check out `fast_config.toml`
- https://github.com/bevyengine/bevy/issues/1126 means that running
dynamic builds, using LLD and sharing generics on Windows results in a
cryptic error message
As a result, a new Windows user following all recommendations for faster
compile times is actually not able to compile Bevy at all.
## Solution
- Set `-Zshare-generics=n` on Windows with a comment saying this is for
dynamic linking
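For reference, the corresponding `.cargo/config.toml` entry might look roughly like this (the target key and comment wording are assumptions, not the exact text in the repo):
```toml
[target.x86_64-pc-windows-msvc]
rustflags = [
    # Disabled on Windows due to a bug with dynamic linking (bevy#1126)
    "-Zshare-generics=n",
]
```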
## Testing
I verified that https://github.com/bevyengine/bevy/issues/1126 is still
in place on the current nightly (1.80)
## Additional Info
Maybe the website should mention this as well? The relevant snippet
there looks like this:
```toml
# /path/to/project/.cargo/config.toml
[target.x86_64-unknown-linux-gnu]
rustflags = [
    # (Nightly) Make the current crate share its generic instantiations
    "-Zshare-generics=y",
]
```
so it kinda implies it's only for Linux? Which is not quite true; this
setting works on macOS as well, AFAIK.
# Objective
The Bevy API around manipulating hierarchies removes `Children` if the
operation results in an entity having no children. This means that
`Children` is guaranteed to hold actual children. However, the following
code unexpectedly inserts empty `Children`:
```rust
commands.entity(entity).with_children(|_| {});
```
This was discovered by @Jondolf:
https://discord.com/channels/691052431525675048/1124043933886976171/1257660865625325800
## Solution
- `with_children` is now a no-op when no children were passed
## Testing
- Added a regression test
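A minimal sketch of the behavior covered by that test (the actual test in the PR may be structured differently):
```rust
use bevy::prelude::*;

fn spawn_with_empty_closure(mut commands: Commands) {
    let entity = commands.spawn_empty().id();
    // Passing an empty closure should no longer insert an empty `Children` component.
    commands.entity(entity).with_children(|_| {});
}

fn main() {
    let mut app = App::new();
    app.add_systems(Update, spawn_with_empty_closure);
    app.update(); // commands are applied when the schedule finishes

    // No entity should have gained a `Children` component.
    let mut query = app.world_mut().query::<&Children>();
    assert_eq!(query.iter(app.world()).count(), 0);
}
```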
# Objective
The `AssetReader` trait allows customizing the behavior of fetching
bytes for an `AssetPath`, and expects implementors to return `dyn
AsyncRead + AsyncSeek`. This gives implementors of `AssetLoader` great
flexibility to tightly integrate their asset loading behavior with the
asynchronous task system.
However, almost all implementors of `AssetLoader` don't use the async
functionality at all, and just call `AsyncReadExt::read_to_end(&mut
Vec<u8>)`. This is incredibly inefficient, as this method repeatedly
calls `poll_read` on the trait object, filling the vector 32 bytes at a
time. At my work we have assets that are hundreds of megabytes, which
makes this a meaningful overhead.
## Solution
Turn the `Reader` type alias into an actual trait, with a provided
method `read_to_end`. This provided method should be more efficient than
the existing extension method, as the compiler will know the underlying
type of `Reader` when generating this function, which removes the
repeated dynamic dispatches and allows the compiler to make further
optimizations after inlining. Individual implementors are able to
override the provided implementation -- for simple asset readers that
just copy bytes from one buffer to another, this allows removing a large
amount of overhead from the provided implementation.
Now that `Reader` is an actual trait, I also improved the ergonomics for
implementing `AssetReader`. Currently, implementors are expected to box
their reader and return it as a trait object, which adds unnecessary
boilerplate to implementations. This PR changes that trait method to
return a pseudo trait alias, which allows implementors to return `impl
Reader` instead of `Box<dyn Reader>`. Now, the boilerplate for boxing
occurs in `ErasedAssetReader`.
## Testing
I made identical changes to my company's fork of bevy. Our app, which
makes heavy use of `read_to_end` for asset loading, still worked
properly after this. I am not aware if we have a more systematic way of
testing asset loading for correctness.
---
## Migration Guide
The trait method `bevy_asset::io::AssetReader::read` (and `read_meta`)
now return an opaque type instead of a boxed trait object. Implementors
of these methods should change the type signatures appropriately:
```rust
impl AssetReader for MyReader {
    // Before
    async fn read<'a>(&'a self, path: &'a Path) -> Result<Box<Reader<'a>>, AssetReaderError> {
        let reader = /* construct a reader */;
        Ok(Box::new(reader) as Box<Reader<'a>>)
    }

    // After
    async fn read<'a>(&'a self, path: &'a Path) -> Result<impl Reader + 'a, AssetReaderError> {
        // construct and return the reader directly
    }
}
```
`bevy::asset::io::Reader` is now a trait, rather than a type alias for a
trait object. Implementors of `AssetLoader::load` will need to adjust
the method signature accordingly:
```rust
impl AssetLoader for MyLoader {
    async fn load<'a>(
        &'a self,
        // Before:
        reader: &'a mut bevy::asset::io::Reader,
        // After:
        reader: &'a mut dyn bevy::asset::io::Reader,
        _: &'a Self::Settings,
        load_context: &'a mut LoadContext<'_>,
    ) -> Result<Self::Asset, Self::Error> {
        // ...
    }
}
```
Additionally, implementors of `AssetReader` that return a type
implementing `futures_io::AsyncRead` and `AsyncSeek` might need to
explicitly implement `bevy::asset::io::Reader` for that type.
```rust
impl bevy::asset::io::Reader for MyAsyncReadAndSeek {}
```
# Objective
Add ability to de-register events from the EventRegistry (and the
associated World).
The initial reasoning relates to retaining support for Event hot
reloading in `dexterous_developer`.
## Solution
Add a `deregister_events<T: Event>(&mut world)` method to the
`EventRegistry` struct.
## Testing
Added an automated test that verifies the event registry adds and
removes `Events<T>` from the world.
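Based on the description above, usage presumably looks roughly like this (paths and call shapes are assumptions):
```rust
use bevy::ecs::event::EventRegistry;
use bevy::prelude::*;

#[derive(Event)]
struct MyEvent;

fn main() {
    let mut world = World::new();

    // Registering adds an `Events<MyEvent>` resource to the world.
    EventRegistry::register_event::<MyEvent>(&mut world);
    assert!(world.contains_resource::<Events<MyEvent>>());

    // De-registering removes it again.
    EventRegistry::deregister_events::<MyEvent>(&mut world);
    assert!(!world.contains_resource::<Events<MyEvent>>());
}
```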
---------
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
# Objective
It's not always obvious what the default value for `RenderLayers`
represents. It is documented, but since it's an implementation of a
trait method the documentation may or may not be shown depending on the
IDE.
## Solution
Add documentation to the `none` method that explicitly calls out the
difference.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Also bumps `accesskit_winit` to 0.22 and fixes one breaking change.
# Objective
- `accesskit` has been updated recently to 0.16!
## Solution
- Update `accesskit`, as well as `accesskit_winit`.
- [`accesskit`
changelog](552032c839/common/CHANGELOG.md (0160-2024-06-29))
- [`accesskit_winit`
changelog](552032c839/platforms/winit/CHANGELOG.md (0220-2024-06-29))
- Fix one breaking change where `Role::StaticText` has been renamed to
`Role::Label`.
## Testing
- The test suite should cover most things.
- It would be good to test this with an example, but I don't know how.
---
## Changelog
- Update `accesskit` to 0.16 and `accesskit_winit` to 0.22.
## Migration Guide
`accesskit`'s `Role::StaticText` variant has been renamed to
`Role::Label`.
# Objective
- Though Rodio will eventually be replaced with Kira for `bevy_audio`,
we should not let it languish.
## Solution
- Bump Rodio to 0.19.
- This is [the
changelog](27f2b42406/CHANGELOG.md (version-0190-2024-06-29)).
No apparent breaking changes, only 1 feature and 1 fix.
## Testing
- Run an example that uses audio, on both native and WASM.
---
## Changelog
- Bumped Rodio to 0.19.
# Objective
The `BuildChildren` and `BuildWorldChildren` traits are mostly
identical, so I decided to try and merge them. I'm not sure of the
history, maybe they were added before GATs existed.
## Solution
- Add an associated type to `BuildChildren` which reflects the prior
differences between the `BuildChildren` and `BuildWorldChildren` traits.
- Add `ChildBuild` trait that is the bounds for
`BuildChildren::Builder`, with impls for `ChildBuilder` and
`WorldChildBuilder`.
- Remove `BuildWorldChildren` trait and replace it with an impl of
`BuildChildren` for `EntityWorldMut`.
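Put together, the shape implied by these bullets is roughly the following (method set and exact signatures are simplified assumptions):
```rust
// Rough sketch, not the actual trait definitions.
pub trait ChildBuild {
    // ...spawning methods shared by `ChildBuilder` and `WorldChildBuilder`
}

pub trait BuildChildren {
    // The builder handed to `with_children` closures:
    // `ChildBuilder` for `EntityCommands`, `WorldChildBuilder` for `EntityWorldMut`.
    type Builder<'a>: ChildBuild;

    fn with_children(&mut self, f: impl FnOnce(&mut Self::Builder<'_>)) -> &mut Self;
    // ...plus the rest of the child-manipulation methods
}
```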
## Testing
I ran several of the examples that use entity hierarchies, mainly UI.
---
## Changelog
n/a
## Migration Guide
n/a
# Objective
- Add the `AccumulatedMouseMotion` and `AccumulatedMouseScroll`
resources to make it simpler to track mouse motion/scroll changes
- Closes #13915
## Solution
- Created two resources, `AccumulatedMouseMotion` and
`AccumulatedMouseScroll`, and a method that tracks the `MouseMotion` and
`MouseWheel` events and accumulates their deltas every frame.
- Also modified the mouse input example to show how to use the
resources.
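A usage sketch of the new resources (the `delta` field name and import path are assumed from the description):
```rust
use bevy::input::mouse::AccumulatedMouseMotion;
use bevy::prelude::*;

// Logs the total mouse motion accumulated over the current frame.
fn log_mouse_motion(motion: Res<AccumulatedMouseMotion>) {
    if motion.delta != Vec2::ZERO {
        info!("mouse moved by {:?} this frame", motion.delta);
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Update, log_mouse_motion)
        .run();
}
```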
## Testing
- Tested the changes by modifying an existing example to use the newly
added resources, and moving/scrolling my trackpad around a ton.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
# Objective
Fixes #13995.
## Solution
Override the default `Ctrl+C` handler with one that sends `AppExit`
event to every app with `TerminalCtrlCHandlerPlugin`.
## Testing
Tested by running the `3d_scene` example and hitting `Ctrl+C` in the
terminal.
---
## Changelog
Handles `Ctrl+C` in the terminal gracefully.
## Migration Guide
If you are overriding the `Ctrl+C` handler then you should call
`TerminalCtrlCHandlerPlugin::gracefully_exit` from your handler. It will
tell the app to exit.
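A hypothetical sketch of that migration; the third-party `ctrlc` crate here stands in for "your own handler", and only `TerminalCtrlCHandlerPlugin::gracefully_exit` comes from the guide above:
```rust
use bevy::app::TerminalCtrlCHandlerPlugin;

fn install_custom_handler() {
    ctrlc::set_handler(|| {
        // ...your own cleanup...

        // Then ask Bevy to exit instead of killing the process outright.
        TerminalCtrlCHandlerPlugin::gracefully_exit();
    })
    .expect("failed to set Ctrl+C handler");
}
```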
Part of #13681
# Objective
gLTF assets shouldn't be duplicated between the `Assets` resource and node
children.
Also changed `asset_label` to be a method as [per previous PR
comment](https://github.com/bevyengine/bevy/pull/13558).
## Solution
- Made `GltfNode` children be `Handle`s instead of asset copies.
## Testing
- Added tests that actually test loading and hierarchy, as the previous ones
unit-tested only one function, which makes little sense.
- Made circular nodes an actual loading failure instead of a warning
no-op. You [_MUST NOT_ have cycles in
gLTF](https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#nodes-and-hierarchy)
according to the spec.
- IMO this is a bugfix, not a breaking change. But in the extremely
unlikely event that you relied on the invalid behavior of loading gLTF
with cyclic children, you will not be able to do that anymore. You
should fix your gLTF file, as it's not valid according to the gLTF spec. For
it to have worked for someone, it had to be Bevy with the bevy_animation flag off.
---
## Changelog
### Changed
- `GltfNode.children` are now `Vec<Handle<GltfNode>>` instead of
`Vec<GltfNode>`
- Having children cycles between gLTF nodes in a gLTF document is now an
explicit asset loading failure.
## Migration Guide
If accessing children, use `Assets<GltfNode>` resource to get the actual
child object.
#### Before
```rs
fn gltf_print_first_node_children_system(gltf_component_query: Query<Handle<Gltf>>, gltf_assets: Res<Assets<Gltf>>, gltf_nodes: Res<Assets<GltfNode>>) {
    for gltf_handle in gltf_component_query.iter() {
        let gltf_root = gltf_assets.get(gltf_handle).unwrap();
        let first_node_handle = gltf_root.nodes.get(0).unwrap();
        let first_node = gltf_nodes.get(first_node_handle).unwrap();
        let first_child = first_node.children.get(0).unwrap();
        println!("First node's child node name is {:?}", first_child.name);
    }
}
```
#### After
```rs
fn gltf_print_first_node_children_system(gltf_component_query: Query<Handle<Gltf>>, gltf_assets: Res<Assets<Gltf>>, gltf_nodes: Res<Assets<GltfNode>>) {
    for gltf_handle in gltf_component_query.iter() {
        let gltf_root = gltf_assets.get(gltf_handle).unwrap();
        let first_node_handle = gltf_root.nodes.get(0).unwrap();
        let first_node = gltf_nodes.get(first_node_handle).unwrap();
        let first_child_handle = first_node.children.get(0).unwrap();
        let first_child = gltf_nodes.get(first_child_handle).unwrap();
        println!("First node's child node name is {:?}", first_child.name);
    }
}
```
# Objective
Allow combining render layers with a more-ergonomic syntax than
`RenderLayers::from_iter(a.iter().chain(b.iter()))`.
## Solution
Add the `or` operation (and corresponding `const` method) to allow
computing the union of a set of render layers. While we're here, also
added `and` and `xor` operations; someone might find them useful.
## Testing
Added a simple unit test.
# Objective
We're able to reflect types sooooooo... why not functions?
The goal of this PR is to make functions callable within a dynamic
context, where type information is not readily available at compile
time.
For example, if we have a function:
```rust
fn add(left: i32, right: i32) -> i32 {
    left + right
}
```
And two `Reflect` values we've already validated are `i32` types:
```rust
let left: Box<dyn Reflect> = Box::new(2_i32);
let right: Box<dyn Reflect> = Box::new(2_i32);
```
We should be able to call `add` with these values:
```rust
// ?????
let result: Box<dyn Reflect> = add.call_dynamic(left, right);
```
And ideally this wouldn't just work for functions, but methods and
closures too!
Right now, users have two options:
1. Manually parse the reflected data and call the function themselves
2. Rely on registered type data to handle the conversions for them
For a small function like `add`, this isn't too bad. But what about for
more complex functions? What about for many functions?
At worst, this process is error-prone. At best, it's simply tedious.
And this is assuming we know the function at compile time. What if we
want to accept a function dynamically and call it with our own
arguments?
It would be much nicer if `bevy_reflect` could alleviate some of the
problems here.
## Solution
Added function reflection!
This adds a `DynamicFunction` type to wrap a function dynamically. This
can be called with an `ArgList`, which is a dynamic list of
`Reflect`-containing `Arg` arguments. It returns a `FunctionResult`
which indicates whether or not the function call succeeded, returning a
`Reflect`-containing `Return` type if it did succeed.
Many functions can be converted into this `DynamicFunction` type thanks
to the `IntoFunction` trait.
Taking our previous `add` example, this might look something like
(explicit types added for readability):
```rust
fn add(left: i32, right: i32) -> i32 {
    left + right
}
let mut function: DynamicFunction = add.into_function();
let args: ArgList = ArgList::new().push_owned(2_i32).push_owned(2_i32);
let result: Return = function.call(args).unwrap();
let value: Box<dyn Reflect> = result.unwrap_owned();
assert_eq!(value.take::<i32>().unwrap(), 4);
```
And it also works on closures:
```rust
let add = |left: i32, right: i32| left + right;
let mut function: DynamicFunction = add.into_function();
let args: ArgList = ArgList::new().push_owned(2_i32).push_owned(2_i32);
let result: Return = function.call(args).unwrap();
let value: Box<dyn Reflect> = result.unwrap_owned();
assert_eq!(value.take::<i32>().unwrap(), 4);
```
As well as methods:
```rust
#[derive(Reflect)]
struct Foo(i32);
impl Foo {
    fn add(&mut self, value: i32) {
        self.0 += value;
    }
}
let mut foo = Foo(2);
let mut function: DynamicFunction = Foo::add.into_function();
let args: ArgList = ArgList::new().push_mut(&mut foo).push_owned(2_i32);
function.call(args).unwrap();
assert_eq!(foo.0, 4);
```
### Limitations
While this does cover many functions, it is far from a perfect system
and has quite a few limitations. Here are a few of the limitations when
using `IntoFunction`:
1. The lifetime of the return value is only tied to the lifetime of the
first argument (useful for methods). This means you can't have a
function like `(a: i32, b: &i32) -> &i32` without creating the
`DynamicFunction` manually.
2. Only 15 arguments are currently supported. If the first argument is a
(mutable) reference, this number increases to 16.
3. Manual implementations of `Reflect` will need to implement the new
`FromArg`, `GetOwnership`, and `IntoReturn` traits in order to be used
as arguments/return types.
And some limitations of `DynamicFunction` itself:
1. All arguments share the same lifetime, or rather, they will shrink to
the shortest lifetime.
2. Closures that capture their environment may need to have their
`DynamicFunction` dropped before accessing those variables again (there
is a `DynamicFunction::call_once` to make this a bit easier)
3. All arguments and return types must implement `Reflect`. While not a
big surprise coming from `bevy_reflect`, this implementation could
actually still work by swapping `Reflect` out with `Any`. Of course,
that makes working with the arguments and return values a bit harder.
4. Generic functions are not supported (unless they have been manually
monomorphized)
And some general reflection gotchas:
1. `&str` does not implement `Reflect`. Rather, `&'static str`
implements `Reflect` (the same is true for `&Path` and similar types).
This means that `&'static str` is considered an "owned" value for the
sake of generating arguments. Additionally, arguments and return types
containing `&str` will assume it's `&'static str`, which is almost never
the desired behavior. In these cases, the only solution (I believe) is
to use `&String` instead.
### Followup Work
This PR is the first of two PRs I intend to work on. The second PR will
aim to integrate this new function reflection system into the existing
reflection traits and `TypeInfo`. The goal would be to register and call
a reflected type's methods dynamically.
I chose not to do that in this PR since the diff is already quite large.
I also want the discussion for both PRs to be focused on their own
implementation.
Another followup I'd like to do is investigate allowing common container
types as a return type, such as `Option<&[mut] T>` and `Result<&[mut] T,
E>`. This would allow even more functions to opt into this system. I
chose to not include it in this one, though, for the same reasoning as
previously mentioned.
### Alternatives
One alternative I had considered was adding a macro to convert any
function into a reflection-based counterpart. The idea would be that a
struct that wraps the function would be created and users could specify
which arguments and return values should be `Reflect`. It could then be
called via a new `Function` trait.
I think that could still work, but it will be a fair bit more involved,
requiring some slightly more complex parsing. And it of course is a bit
more work for the user, since they need to create the type via macro
invocation.
It also makes registering these functions onto a type a bit more
complicated (depending on how it's implemented).
For now, I think this is a fairly simple, yet powerful solution that
provides the least amount of friction for users.
---
## Showcase
Bevy now adds support for storing and calling functions dynamically
using reflection!
```rust
// 1. Take a standard Rust function
fn add(left: i32, right: i32) -> i32 {
    left + right
}
// 2. Convert it into a type-erased `DynamicFunction` using the `IntoFunction` trait
let mut function: DynamicFunction = add.into_function();
// 3. Define your arguments from reflected values
let args: ArgList = ArgList::new().push_owned(2_i32).push_owned(2_i32);
// 4. Call the function with your arguments
let result: Return = function.call(args).unwrap();
// 5. Extract the return value
let value: Box<dyn Reflect> = result.unwrap_owned();
assert_eq!(value.take::<i32>().unwrap(), 4);
```
## Changelog
#### TL;DR
- Added support for function reflection
- Added a new `Function Reflection` example:
ba727898f2/examples/reflection/function_reflection.rs (L1-L157)
#### Details
Added the following items:
- `ArgError` enum
- `ArgId` enum
- `ArgInfo` struct
- `ArgList` struct
- `Arg` enum
- `DynamicFunction` struct
- `FromArg` trait (derived with `derive(Reflect)`)
- `FunctionError` enum
- `FunctionInfo` struct
- `FunctionResult` alias
- `GetOwnership` trait (derived with `derive(Reflect)`)
- `IntoFunction` trait (with blanket implementation)
- `IntoReturn` trait (derived with `derive(Reflect)`)
- `Ownership` enum
- `ReturnInfo` struct
- `Return` enum
---------
Co-authored-by: Periwink <charlesbour@gmail.com>
# Objective
- When no wgpu backend is selected, there should be a clear explanation.
- Fix a regression in 0.14 when not using default features. I hit this
compile failure when trying to build bevy_framepace for 0.14.0-rc.4
```
error[E0432]: unresolved import `crate::core_3d::DEPTH_TEXTURE_SAMPLING_SUPPORTED`
--> /Users/aevyrie/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_core_pipeline-0.14.0-rc.4/src/dof/mod.rs:59:19
|
59 | Camera3d, DEPTH_TEXTURE_SAMPLING_SUPPORTED,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `DEPTH_TEXTURE_SAMPLING_SUPPORTED` in `core_3d`
|
note: found an item that was configured out
--> /Users/aevyrie/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_core_pipeline-0.14.0-rc.4/src/core_3d/mod.rs:53:11
|
53 | pub const DEPTH_TEXTURE_SAMPLING_SUPPORTED: bool = false;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: found an item that was configured out
--> /Users/aevyrie/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_core_pipeline-0.14.0-rc.4/src/core_3d/mod.rs:63:11
|
63 | pub const DEPTH_TEXTURE_SAMPLING_SUPPORTED: bool = true;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
## Solution
- Ensure that `DEPTH_TEXTURE_SAMPLING_SUPPORTED` is either `true` or
`false`; it shouldn't be completely missing.
## Testing
- Building on WASM without default features, which now seemingly no
longer includes webgl, will panic on startup with a message saying that
no wgpu backend was selected. This is much more helpful than the compile
time failure:
```
No wgpu backend feature that is implemented for the target platform was enabled
```
- I can see an argument for making this a compile-time failure; however,
the current failure mode is very confusing for novice users and
provides no clues for how to fix it. If we want this to fail at compile
time, we should do it in a way that fails with a helpful message,
similar to what this PR achieves.
# Objective
- Fixes #14059
- `morphed_skinned_mesh_layout` is the same as
`morphed_skinned_motion_mesh_layout`, but it shouldn't have the skin/morph
data from the previous frame, as that data is only used for motion
## Solution
- Remove the extra entries
## Testing
- Ran with the glTF file reproducing #14059; it works
# Objective / Solution
Make it possible to construct `Material2dBindGroupId` for custom 2D
material pipelines by making the inner field public.
---
The 3D variant (`MaterialBindGroupId`) had this done in
e79b9b62ce
# Objective
- Some people have asked how to do image masking in UI. It's pretty easy
to do using a `UiMaterial` assuming you know how to write shaders.
## Solution
- Update the ui_material example to show the bevy banner slowly being
revealed like a progress bar
## Notes
I'm not entirely sure if we want this or not. People who would be
comfortable using this for their own games have probably already figured
out how to do it, and for people who aren't familiar with shaders this
isn't really enough to make an actual slider/progress bar.
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- Fixes a correctness error introduced in
https://github.com/bevyengine/bevy/pull/14013 ...
## Solution
I've been playing around a lot with the access code and I realized
that I introduced a soundness error when trying to simplify the code.
When we have an `Or<(With<A>, With<B>)>` filter, we cannot call
```
let mut intermediate = FilteredAccess::default();
$name::update_component_access($name, &mut intermediate);
_new_access.append_or(&intermediate);
```
because that's just equivalent to adding the new components as `Or`
clauses.
For example if the existing `filter_sets` was `vec![With<C>]`, we would
then get `vec![With<C>, With<A>, With<B>]` which translates to `A or B
or C`.
Instead what we want is `(C and A) or (C and B)`, so we need to have
each new `Or` clause compose with the existing access like so:
```
let mut intermediate = _access.clone();
// if we previously had a With<C> in the filter_set, this will become `With<C> AND With<A>`
$name::update_component_access($name, &mut intermediate);
_new_access.append_or(&intermediate);
```
## Testing
- Added a unit test that is broken in main, but passes in this PR
# Objective
`sickle_ui` needs `PartialEq` on components to turn them into animatable
style attributes.
## Solution
All properties of `Outline` are already `PartialEq`; add the derive on
`Outline` as well.
## Testing
- used `sickle_ui` to test if it can be made animatable
As reported in #14004, many third-party plugins, such as Hanabi, enqueue
entities that don't have meshes into render phases. However, the
introduction of indirect mode added a dependency on mesh-specific data,
breaking this workflow. This is because GPU preprocessing requires that
the render phases manage indirect draw parameters, which don't apply to
objects that aren't meshes. The existing code skips over binned entities
that don't have indirect draw parameters, which causes the rendering to
be skipped for such objects.
To support this workflow, this commit adds a new field,
`non_mesh_items`, to `BinnedRenderPhase`. This field contains a simple
list of (bin key, entity) pairs. After drawing batchable and unbatchable
objects, the non-mesh items are drawn one after another. Bevy itself
doesn't enqueue any items into this list; it exists solely for the
application and/or plugins to use.
Additionally, this commit switches the asset ID in the standard bin keys
to be an untyped asset ID rather than that of a mesh. This allows more
flexibility, allowing bins to be keyed off any type of asset.
This patch adds a new example, `custom_phase_item`, which simultaneously
serves to demonstrate how to use this new feature and to act as a
regression test so this doesn't break again.
Fixes #14004.
## Changelog
### Added
* `BinnedRenderPhase` now contains a `non_mesh_items` field for plugins
to add custom items to.
# Objective
`StaticSystemParam` should delegate all `SystemParam` methods to the
inner param, but it looks like it was missed when the new `queue()`
method was added in #10839.
## Solution
Implement `StaticSystemParam::queue()` to delegate to the inner param.
The comment was incorrect: we are already looking at the pyramid
texture, so we do not need to transform the size in any way. Doing that
produced a mip that was too fine to be selected in certain cases, so the
2x2 pixel footprint did not actually fully cover the cluster sphere;
sometimes this could lead to a non-conservative depth value being
computed, which caused the cluster to be incorrectly marked as
invisible.
This change updates meshopt-rs to 0.3 to take advantage of the newly
added sparse simplification mode: by default, the simplifier assumes that
the entire mesh is simplified and runs a set of calculations that are
O(vertex count), but in our case we simplify many small mesh subsets,
which is inefficient.
Sparse mode instead assumes that the simplified subset is only using a
portion of the vertex buffer, and optimizes accordingly. This changes
the meaning of the error (as it becomes relative to the subset, in our
case a meshlet group); to ensure consistent error selection, we also use
the ErrorAbsolute mode which allows us to operate in mesh coordinate
space.
Additionally, meshopt 0.3 runs optimizeMeshlet automatically as part of
`build_meshlets` so we no longer need to call it ourselves.
This reduces the time to build meshlet representation for Stanford Bunny
mesh from ~1.65s to ~0.45s (3.7x) in optimized builds.
# Objective
Implement `BuildChildrenTransformExt` for `EntityWorldMut`, which is
useful when working directly with a mutable `World` ref.
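A usage sketch of what this enables (the trait's import path is an assumption; the method name is from the Testing section below):
```rust
use bevy::prelude::*;
use bevy::transform::commands::BuildChildrenTransformExt;

// Re-parents `child` while preserving its global transform, directly on a `&mut World`.
fn reparent_preserving_world_position(world: &mut World, child: Entity, new_parent: Entity) {
    // Previously only available via `EntityCommands`; now callable on `EntityWorldMut` too.
    world.entity_mut(child).set_parent_in_place(new_parent);
}
```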
## Solution
I realize this isn't the most optimal implementation in that it doesn't
reuse the existing entity location for the child, but it is terse and
reuses the existing code. I can address that if needed.
## Testing
I only tested locally. There are no tests for `set_parent_in_place` and
`remove_parent_in_place` currently, but I can add some.
---
## Changelog
`BuildChildrenTransformExt` implemented for `EntityWorldMut`.
# Objective
When using combinators such as `EntityCommand::with_entity` to build
commands, it can be easy to forget to apply that command, leading to
dead code. In many cases this doesn't even lead to an unused variable
warning, which can make these mistakes difficult to track down.
## Solution
Annotate the method with `#[must_use]`
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- #14017 changed how `UiImage` and `BackgroundColor` work
- one change was missed in example `color_grading`, another in the
mobile example
## Solution
- Change it in the examples
# Objective
- After #13908, imported shaders are collected
- This is done with a hashset, so the order is random between executions
## Solution
- Don't use a hashset, and keep the order in which they were found in the example
# Objective
- Second part of #13900
- based on #13905
## Solution
- `check_dir_light_mesh_visibility` defers setting the entity's
`ViewVisibility` so that Bevy can schedule it to run in parallel with
`check_point_light_mesh_visibility`.
- Reduce `HashMap` lookups for directional light checking as much as
possible
- Use `par_iter` to parallelize the checking process within each system.
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
# Objective
The trait method `with_entity` is used to add an `EntityCommand` to the
command queue. Currently this method returns `WithEntity<C>` which pairs
a command with an `Entity`. By replacing this explicit type with an
opaque type, implementors can override this default implementation by
returning a custom command or closure that does the same thing with a
lower memory footprint.
## Solution
Return an opaque type from the method. As a bonus, this file is now
cleaner without the `WithEntity` boilerplate.
# Objective
- #4972 introduced a benchmark to measure change detection performance
- However, it uses `iter_batched`, which causes a lot of overhead by cloning
data for each routine closure (it feels like a bug in `iter_batched`) and
constructs a new query in every iteration. This overhead masks the real
change detection throughput we want to measure. Instead of evaluating raw
change detection, the benchmark ends up dominated by data cloning and
allocation costs.
## Solution
- Use `iter_batched_ref` to reduce the benchmark overhead
- Use a cached query to better reflect real-world usage scenarios.
- Add more benchmarks
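A sketch of the harness pattern described above (illustrative only, not the actual benchmark code; the component and sizes are made up):
```rust
use bevy_ecs::prelude::*;
use criterion::{BatchSize, Criterion};

#[derive(Component)]
struct Value(f32);

fn bench_change_detection(c: &mut Criterion) {
    c.bench_function("change_detection", |b| {
        // `iter_batched_ref` hands the routine a `&mut` to the setup output,
        // so the world and query are built once per batch instead of cloned per iteration.
        b.iter_batched_ref(
            || {
                // Setup: build the world and a cached query.
                let mut world = World::new();
                world.spawn_batch((0..10_000).map(|i| Value(i as f32)));
                let query = world.query::<&mut Value>();
                (world, query)
            },
            |(world, query)| {
                // Routine: mutate through the cached query to trigger change detection.
                for mut value in query.iter_mut(world) {
                    value.0 += 1.0;
                }
            },
            BatchSize::LargeInput,
        );
    });
}
```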
---
## Changelog
# Objective
Numerous people have been confused that Bevy runs slowly, when the
reason is that the `llvmpipe` software renderer is being used.
## Solution
Printing a warning could reduce the confusion.
# Objective
- Fixes #13728
## Solution
- Add a new feature, `smaa_luts`. If enabled, it also enables `ktx2` and
`zstd`. If not, it doesn't load the files but uses placeholders instead.
- Add all the needed resources in the same places as the systems that use
them.
# Objective
Fixes https://github.com/bevyengine/bevy/issues/13993
PR inspired by https://github.com/bevyengine/bevy/pull/14007 to
accomplish the same thing, but maybe in a clearer fashion.
@Gingeh feel free to take my changes and add them to your PR, I don't
want to steal any credit
---------
Co-authored-by: Gingeh <39150378+Gingeh@users.noreply.github.com>
Co-authored-by: Bob Gardner <rgardner@inworld.ai>
Co-authored-by: Martín Maita <47983254+mnmaita@users.noreply.github.com>
# Objective
In Bevy 0.13, `BackgroundColor` simply tinted the image of any
`UiImage`. This was confusing: in every other case (e.g. Text), this
added a solid square behind the element. #11165 changed this, but
removed `BackgroundColor` from `ImageBundle` to avoid confusion, since
the semantic meaning had changed.
However, this resulted in a serious UX downgrade / inconsistency, as
this behavior was no longer part of the bundle (unlike for `TextBundle`
or `NodeBundle`), leaving users with a relatively frustrating upgrade
path.
Additionally, adding both `BackgroundColor` and `UiImage` resulted in a
bizarre effect, where the background color was seemingly ignored as it
was covered by a solid white placeholder image.
Fixes #13969.
## Solution
Per @viridia's design:
> - if you don't specify a background color, it's transparent.
> - if you don't specify an image color, it's white (because it's a
multiplier).
> - if you don't specify an image, no image is drawn.
> - if you specify both a background color and an image color, they are
independent.
> - the background color is drawn behind the image (in whatever pixels
are transparent)
As laid out by @benfrankel, this involves:
1. Changing the default `UiImage` to use a transparent texture but a
pure white tint.
2. Adding `UiImage::solid_color` to quickly set placeholder images.
3. Changing the default `BorderColor` and `BackgroundColor` to
transparent.
4. Removing the default overrides for these values in the other assorted
UI bundles.
5. Adding `BackgroundColor` back to `ImageBundle` and `ButtonBundle`.
6. Adding a 1x1 `Image::transparent`, which can be accessed from
`Assets<Image>` via the `TRANSPARENT_IMAGE_HANDLE` constant.
Huge thanks to everyone who helped out with the design in the linked
issue and [the Discord
thread](https://discord.com/channels/691052431525675048/1255209923890118697/1255209999278280844):
this was very much a joint design.
@cart helped me figure out how to set the UiImage's default texture to a
transparent 1x1 image, which is a much nicer fix.
## Testing
I've checked the examples modified by this PR, and the `ui` example as
well just to be sure.
## Migration Guide
- `BackgroundColor` no longer tints the color of images in `ImageBundle`
or `ButtonBundle`. Set `UiImage::color` to tint images instead.
- The default texture for `UiImage` is now a transparent white square.
Use `UiImage::solid_color` to quickly draw debug images.
- The default value for `BackgroundColor` and `BorderColor` is now
transparent. Set the color to white manually to return to previous
behavior.
# Objective
- Fixes #13811 (probably, I lost my test code...)
## Solution
- Turns out that `Queue` and `PrepareAssets` are _not_ ordered relative to
each other. We should probably either rethink our system sets (again), or
improve the documentation here. For reference, I've included the current
ordering below.
- The `prepare_meshlet_meshes_X` systems need to run after
`prepare_assets::<PreparedMaterial<M>>`, and have also been moved to
`QueueMeshes`.
```rust
schedule.configure_sets(
    (
        ExtractCommands,
        ManageViews,
        Queue,
        PhaseSort,
        Prepare,
        Render,
        Cleanup,
    )
        .chain(),
);

schedule.configure_sets((ExtractCommands, PrepareAssets, Prepare).chain());
schedule.configure_sets(QueueMeshes.in_set(Queue).after(prepare_assets::<GpuMesh>));

schedule.configure_sets(
    (PrepareResources, PrepareResourcesFlush, PrepareBindGroups)
        .chain()
        .in_set(Prepare),
);
```
## Testing
- Ambiguity checker to make sure I don't have ambiguous system ordering
# Objective
#12469 changed the `Debug` impl for `Entity`, making sure it's actually
accurate for debugging. To ensure that it can still be readily logged
in error messages and inspectors, this PR added a more concise and
human-friendly `Display` impl.
However, users found this form too verbose: the `to_bits` information
was unhelpful and too long. Fixes #13980.
## Solution
- Don't include `Entity::to_bits` in the `Display` implementation for
`Entity`. This information can readily be accessed and logged for users
who need it.
- Also clean up the implementation of `Display` for `DebugName`,
introduced in https://github.com/bevyengine/bevy/pull/13760, to simply
use the new `Display` impl (since this was the desired format there).
## Testing
I've updated an existing test to verify the output of `Entity::display`.
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
# Objective
- Fix issue #13821
## Solution
- Rewrote the test to ensure that it actually tests the functionality
correctly. Then made the `par_read` function correctly change the values
of `self.reader.last_event_count`.
## Testing
- Rewrote the test for par_read to run the system schedule twice,
checking the output each time
---------
Co-authored-by: Martín Maita <47983254+mnmaita@users.noreply.github.com>
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/14003
## Solution
- Expose an API to perform updates for a specific sub-app, so we can
avoid mutably borrowing the app twice.
## Testing
- I have tested the API by modifying the code in the `many_lights`
example with the following changes:
```rust
impl Plugin for LogVisibleLights {
    fn build(&self, app: &mut App) {
        let Some(render_app) = app.get_sub_app_mut(RenderApp) else {
            return;
        };

        render_app.add_systems(Render, print_visible_light_count.in_set(RenderSet::Prepare));
    }

    fn finish(&self, app: &mut App) {
        app.update_sub_app_by_label(RenderApp);
    }
}
```
---
## Changelog
- add the `update_sub_app_by_label` API to `App` and `SubApps`.
---------
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
# Objective
- When writing "in game" debugging tools, quite often you need the name
of an entity (for example an entity tree). DebugName is the usual way of
doing that.
- A recent change to Entity's Debug implementation meant it was no
longer a minimal {index}v{generation} but instead a more verbose auto
generated Debug.
- This made DebugName's Debug implementation also verbose
## Solution
- I changed `DebugName` to derive `Debug` automatically and added a new
`Display` implementation for it, which shows the preferred name for an
entity: if the entity has a `Name` component, it's the contents of that;
otherwise it is `{index}v{generation}` (though this does not use the
`Display` impl of `Entity`, as that is more verbose than this).
## Testing
- I've added a new test in `name.rs` which tests the `Display`
implementation for `DebugName` by using `to_string`.
---
## Migration Guide
- In code which uses `DebugName`, you should now use the `Display`
implementation rather than the `Debug` implementation (i.e. `{}` instead of
`{:?}` if you were printing it out).
---------
Co-authored-by: John Payne <20407779+johngpayne@users.noreply.github.com>
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
Co-authored-by: Andres O. Vela <andresovela@users.noreply.github.com>