# Objective
- Allow bevy_sprite_picking backend to pass through transparent sections
of the sprite.
- Fixes #14929
## Solution
- After sprite picking detects that the cursor is within a sprite's rect,
check the pixel at that location in the texture against an optional
transparency cutoff. This change was originally created for mod_picking
on Bevy 0.14 (https://github.com/aevyrie/bevy_mod_picking/pull/373)
## Testing
- Ran Sprite Picking example to check it was working both with
transparency enabled and disabled
- ModPicking version is currently in use in my own isometric game where
this has been an extremely noticeable issue
## Showcase
![Sprite Picking
Text](https://github.com/user-attachments/assets/76568c0d-c359-422b-942d-17c84d3d3009)
## Migration Guide
Sprite picking now ignores transparent regions (with an alpha value less
than or equal to 0.1). To configure this, modify the
`SpriteBackendSettings` resource.
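For example, a rough sketch of opting into a different cutoff might look like the following (the field name here is a placeholder; check the `SpriteBackendSettings` docs for the actual one):
```rust
// Hypothetical field name, shown only to illustrate configuring the resource.
app.insert_resource(SpriteBackendSettings {
    alpha_cutoff: 0.2,
    ..Default::default()
});
```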
---------
Co-authored-by: andriyDev <andriydzikh@gmail.com>
This patch adds the infrastructure necessary for Bevy to support
*bindless resources*, by adding a new `#[bindless]` attribute to
`AsBindGroup`.
Classically, only a single texture (or sampler, or buffer) can be
attached to each shader binding. This means that switching materials
requires breaking a batch and issuing a new drawcall, even if the mesh
is otherwise identical. This adds significant overhead not only in the
driver but also in `wgpu`, as switching bind groups increases the amount
of validation work that `wgpu` must do.
*Bindless resources* are the typical solution to this problem. Instead
of switching bindings between each texture, the renderer instead
supplies a large *array* of all textures in the scene up front, and the
material contains an index into that array. This pattern is repeated for
buffers and samplers as well. The renderer now no longer needs to switch
binding descriptor sets while drawing the scene.
Unfortunately, as things currently stand, this approach won't quite work
for Bevy. Two aspects of `wgpu` conspire to make this ideal approach
unacceptably slow:
1. In the DX12 backend, all binding arrays (bindless resources) must
have a constant size declared in the shader, and all textures in an
array must be bound to actual textures. Changing the size requires a
recompile.
2. Changing even one texture incurs revalidation of all textures, a
process that takes time that's linear in the total size of the binding
array.
This means that declaring a large array of textures big enough to
encompass the entire scene is presently unacceptably slow. For example,
if you declare 4096 textures, then `wgpu` will have to revalidate all
4096 textures if even a single one changes. This process can take
multiple frames.
To work around this problem, this PR groups bindless resources into
small *slabs* and maintains a free list for each. The size of each slab
for the bindless arrays associated with a material is specified via the
`#[bindless(N)]` attribute. For instance, consider the following
declaration:
```rust
#[derive(AsBindGroup)]
#[bindless(16)]
struct MyMaterial {
    #[buffer(0)]
    color: Vec4,
    #[texture(1)]
    #[sampler(2)]
    diffuse: Handle<Image>,
}
```
The `#[bindless(N)]` attribute specifies that, if bindless arrays are
supported on the current platform, each resource becomes a binding array
of N instances of that resource. So, for `MyMaterial` above, the `color`
attribute is exposed to the shader as `binding_array<vec4<f32>, 16>`,
the `diffuse` texture is exposed to the shader as
`binding_array<texture_2d<f32>, 16>`, and the `diffuse` sampler is
exposed to the shader as `binding_array<sampler, 16>`. Inside the
material's vertex and fragment shaders, the applicable index is
available via the `material_bind_group_slot` field of the `Mesh`
structure. So, for instance, you can access the current color like so:
```wgsl
// `uniform` binding arrays are a non-sequitur, so `uniform` is automatically promoted
// to `storage` in bindless mode.
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 4>;
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
    let color = material_color[mesh[in.instance_index].material_bind_group_slot];
    ...
}
```
Note that portable shader code can't guarantee that the current platform
supports bindless textures. Indeed, bindless mode is only available in
Vulkan and DX12. The `BINDLESS` shader definition is available for your
use to determine whether you're on a bindless platform or not. Thus a
portable version of the shader above would look like:
```wgsl
#ifdef BINDLESS
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 4>;
#else // BINDLESS
@group(2) @binding(0) var<uniform> material_color: Color;
#endif // BINDLESS
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
#ifdef BINDLESS
    let color = material_color[mesh[in.instance_index].material_bind_group_slot];
#else // BINDLESS
    let color = material_color;
#endif // BINDLESS
    ...
}
```
Importantly, this PR *doesn't* update `StandardMaterial` to be bindless.
So, for example, `scene_viewer` will currently not run any faster. I
intend to update `StandardMaterial` to use bindless mode in a follow-up
patch.
A new example, `shaders/shader_material_bindless`, has been added to
demonstrate how to use this new feature.
Here's a Tracy profile of `submit_graph_commands` of this patch and an
additional patch (not submitted yet) that makes `StandardMaterial` use
bindless. Red is those patches; yellow is `main`. The scene was Bistro
Exterior with a hack that forces all textures to opaque. You can see a
1.47x mean speedup.
![Screenshot 2024-11-12
161713](https://github.com/user-attachments/assets/4334b362-42c8-4d64-9cfb-6835f019b95c)
## Migration Guide
* `RenderAssets::prepare_asset` now takes an `AssetId` parameter.
* Bin keys now have Bevy-specific material bind group indices instead of
`wgpu` material bind group IDs, as part of the bindless change. Use the
new `MaterialBindGroupAllocator` to map from bind group index to bind
group ID.
# Objective
Fixes #15941
## Solution
Created https://crates.io/crates/variadics_please, moved the code
there, and updated references.
`bevy_utils/macros` is deleted.
## Testing
cargo check
## Migration Guide
Use `variadics_please::{all_tuples, all_tuples_with_size}` instead of
`bevy::utils::{all_tuples, all_tuples_with_size}`.
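For a typical call site the migration is just an import swap; here's an illustrative sketch (the `impl_my_trait` macro name is hypothetical):
```rust
// Before:
// use bevy::utils::all_tuples;
// After:
use variadics_please::all_tuples;

// Usage is unchanged, e.g. invoking a helper macro for tuples of length 1..=15:
// all_tuples!(impl_my_trait, 1, 15, T);
```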
# Objective
Fixes #1515
This PR implements a flexible entity cloning system. The primary use
case for it is to clone dynamically-generated entities.
Example:
```rs
#[derive(Component, Clone)]
pub struct Projectile;

#[derive(Component, Clone)]
pub struct Damage {
    value: f32,
}

fn player_input(
    mut commands: Commands,
    projectiles: Query<Entity, With<Projectile>>,
    input: Res<ButtonInput<KeyCode>>,
) {
    // Fire a projectile
    if input.just_pressed(KeyCode::KeyF) {
        commands.spawn((Projectile, Damage { value: 10.0 }));
    }

    // Triplicate all active projectiles
    if input.just_pressed(KeyCode::KeyT) {
        for projectile in projectiles.iter() {
            // To triplicate a projectile we need to create 2 more clones
            for _ in 0..2 {
                commands.clone_entity(projectile);
            }
        }
    }
}
```
## Solution
### Commands
Add a `clone_entity` command to create a clone of an entity with all
components that can be cloned. Components that can't be cloned will be
ignored.
```rs
commands.clone_entity(entity)
```
If there is a need to configure the cloning process (like set to clone
recursively), there is a second command:
```rs
commands.clone_entity_with(entity, |builder| {
    builder.recursive(true);
});
```
Both of these commands return `EntityCommands` of the cloned entity, so
the copy can be modified afterwards.
### Builder
All these commands use `EntityCloneBuilder` internally. If there is a
need to clone an entity using `World` instead, it is also possible:
```rs
let entity = world.spawn(Component).id();
let entity_clone = world.spawn_empty().id();
EntityCloneBuilder::new(&mut world).clone_entity(entity, entity_clone);
```
Builder has methods to `allow` or `deny` certain components during
cloning if required and can be extended by implementing traits on it.
This PR includes two `EntityCloneBuilder` extensions:
`CloneEntityWithObserversExt` to configure adding cloned entity to
observers of the original entity, and `CloneEntityRecursiveExt` to
configure cloning an entity recursively.
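As a rough sketch of the filtering API described above (method signatures assumed; reusing the `Damage` component from the earlier example):
```rs
let mut builder = EntityCloneBuilder::new(&mut world);
// Skip `Damage` when cloning; `deny` is the filtering method named above.
builder.deny::<Damage>();
builder.clone_entity(entity, entity_clone);
```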
### Clone implementations
By default, all components that implement either `Clone` or `Reflect`
will be cloned (with the `Clone`-based implementation preferred in case
the component implements both).
This can be overridden on a per-component basis:
```rs
impl Component for SomeComponent {
    const STORAGE_TYPE: StorageType = StorageType::Table;

    fn get_component_clone_handler() -> ComponentCloneHandler {
        // Don't clone this component
        ComponentCloneHandler::Ignore
    }
}
```
### `ComponentCloneHandlers`
Clone implementation specified in `get_component_clone_handler` will get
registered in `ComponentCloneHandlers` (stored in
`bevy_ecs::component::Components`) at component registration time.
The clone handler implementation provided by a component can be
overridden after registration like so:
```rs
let component_id = world.components().component_id::<Component>().unwrap();
world.get_component_clone_handlers_mut()
    .set_component_handler(component_id, ComponentCloneHandler::Custom(component_clone_custom));
```
The default clone handler for all components that do not explicitly
define one (or don't derive `Component`) is
`component_clone_via_reflect` if `bevy_reflect` feature is enabled, and
`component_clone_ignore` (noop) otherwise.
The default handler can be overridden using
`ComponentCloneHandlers::set_default_handler`.
### Handlers
Component clone handlers can be used to modify component cloning
behavior. The general signature for a handler that can be used in
`ComponentCloneHandler::Custom` is as follows:
```rs
pub fn component_clone_custom(
    world: &mut DeferredWorld,
    entity_cloner: &EntityCloner,
) {
    // implementation
}
```
The `EntityCloner` implementation (used internally by
`EntityCloneBuilder`) assumes that after calling this custom handler,
the `target` entity has the desired version of the component from the
`source` entity.
### Builder handler overrides
Besides component-defined and world-overridden handlers,
`EntityCloneBuilder` also has a way to override handlers locally. It is
mainly used to allow configuration methods like `recursive` and
`add_observers`.
```rs
// From observer clone handler implementation
impl CloneEntityWithObserversExt for EntityCloneBuilder<'_> {
    fn add_observers(&mut self, add_observers: bool) -> &mut Self {
        if add_observers {
            self.override_component_clone_handler::<ObservedBy>(ComponentCloneHandler::Custom(
                component_clone_observed_by,
            ))
        } else {
            self.remove_component_clone_handler_override::<ObservedBy>()
        }
    }
}
```
## Testing
Includes some basic functionality tests and doctests.
Performance-wise this feature is the same as calling `clone` followed by
`insert` for every entity component. There is also some inherent
overhead due to every component clone handler having to access component
data through `World`, but this can be reduced without breaking current
public API in a later PR.
# Objective
- Fixes #16078
## Solution
- Rename things to clarify that we _want_ unclipped depth for
directional light shadow views, and need some way of disabling the GPU's
builtin depth clipping
- Use DEPTH_CLIP_CONTROL instead of the fragment shader emulation on
supported platforms
- Pass only the clip position depth instead of the whole clip position
between vertex->fragment shader (no idea if this helps performance or
not, compiler might optimize it anyways)
- Meshlets
- HW raster always uses DEPTH_CLIP_CONTROL since it targets a more
limited set of platforms
- SW raster was not handling DEPTH_CLAMP_ORTHO correctly, it ended up
pretty much doing nothing.
- This PR made me realize that SW raster technically should have depth
clipping for all views that are not directional light shadows, but I
decided not to bother writing it. I'm not sure that it ever matters in
practice. If proven otherwise, I can add it.
## Testing
- Did you test these changes? If so, how?
- Lighting example. Both opaque (no fragment shader) and alpha masked
geometry (fragment shader emulation) are working with
depth_clip_control, and both work when it's turned off. Also tested
meshlet example.
- Are there any parts that need more testing?
- Performance. I can't figure out a good test scene.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Toggle depth_clip_control_supported in prepass/mod.rs line 323 to turn
this PR on or off.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- Native
---
## Migration Guide
- `MeshPipelineKey::DEPTH_CLAMP_ORTHO` is now
`MeshPipelineKey::UNCLIPPED_DEPTH_ORTHO`
- The `DEPTH_CLAMP_ORTHO` shaderdef has been renamed to
`UNCLIPPED_DEPTH_ORTHO_EMULATION`
- `clip_position_unclamped: vec4<f32>` is now `unclipped_depth: f32`
# Objective
Avoid a premature normalize operation and get better smooth normals as a
result.
## Inspiration
@IceSentry suggested `face_normal()` could have its normalize removed
based on [this article](https://iquilezles.org/articles/normals/) in PR
#16039.
## Solution
I did not want to change `face_normal()` to return a vector that's not
normalized. The name "normal" implies it'll be normalized. Instead I
added the `face_area_normal()` function, whose result is not normalized.
Its magnitude is equal two times the triangle's area. I've noted why
this is the case in its doc comment.
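To illustrate why the magnitude works out that way, here's a minimal sketch (not the exact in-engine implementation):
```rust
use bevy::math::Vec3;

// The cross product of two triangle edges points along the face normal and has
// length |AB| * |AC| * sin(theta) = 2 * (triangle area), so it doubles as an
// area-weighted normal with no extra work.
fn face_area_normal(a: Vec3, b: Vec3, c: Vec3) -> Vec3 {
    (b - a).cross(c - a)
}

fn face_normal(a: Vec3, b: Vec3, c: Vec3) -> Vec3 {
    face_area_normal(a, b, c).normalize()
}
```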
I changed `compute_smooth_normals()` from computing normals from
adjacent faces with equal weight to use the area of the faces as a
weight. This has the benefit of being cheaper computationally and
hopefully produces better normals.
The `compute_flat_normals()` is unchanged and still uses
`face_normal()`.
## Testing
One test was added which shows the bigger triangle having an effect on
the normal, but the previous test that uses the same size triangles is
unchanged.
**WARNING:** No visual test has been done yet. No example exists that
demonstrates `compute_smooth_normals()`. Perhaps there's a good model
to demonstrate what the differences are. I would love to have some input
on this.
I'd suggest @IceSentry and @stepancheg to review this PR.
## Further Considerations
It's possible weighting normals by their area is not definitely better
than unweighted. It's possible there may be aesthetic reasons to prefer
one over the other. In such a case, we could offer two variants:
weighted or unweighted. Or we could offer another function perhaps like
this: `compute_smooth_normals_with_weights(|normal, area| 1.0)` which
would restore the original unweighted sum of normals.
---
## Showcase
Smooth normal calculation now weights adjacent face normals by their
area.
# Objective
- Exposes raw winit events making Bevy even more modular and powerful
for custom plugin developers (e.g. a custom render plugin).
XRef: https://github.com/bevyengine/bevy/issues/5977
It doesn't quite close the issue as sending events is not supported (or
not very useful to be precise). I would think that supporting that
requires some extra considerations by someone a bit more familiar with
the `bevy_winit` crate. That said, this PR could be a nice step forward.
## Solution
Emit `RawWinitWindowEvent` objects for each received event.
## Testing
I verified that the events are emitted using a basic test app. I don't
think it makes sense to solidify this behavior in one of the examples.
---
## Showcase
My example usage for a custom `egui_winit` integration:
```rust
for ev in winit_events.read() {
    if ev.window_id == window.id {
        let _ = egui_winit.on_window_event(&window, &ev.event);
    }
}
```
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
- Keep Taffy version up to date
Taffy 0.6 doesn't include a huge amount relevant to Bevy. But it does:
- Add the `box_sizing` style
- Expose the computed `margin` in layout
- Traitify the `Style` struct, which opens up the possibility of using
Bevy's `Style` struct directly (although Bevy currently does some style
resolution at conversion time which would no longer be cached if it was
used directly).
- Have a few bug fixes in the layout algorithms
## Solution
- Upgrade Taffy to `0.6.0`
## Testing
- I've run the `grid` example. All looks good.
- More testing is probably warranted. We have had regressions from Taffy
upgrades before
- Having said that, most of the algorithm changes this cycle were driven
by fixing WPT tests run through the new Servo integration. So they're
possibly less likely than usual to cause regressions.
## Breaking changes
The only "breaking" change is adding a field to `Style`. Probably
doesn't bear mentioning?
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Contributes to #15460
## Solution
- Added two new features, `std` (default) and `alloc`, gating `std` and
`alloc` behind them respectively.
- Added missing `f32` functions to `std_ops` as required. These `f32`
methods have been added to the `clippy.toml` deny list to aid in
`no_std` development.
## Testing
- CI
- `cargo clippy -p bevy_math --no-default-features --features libm
--target "x86_64-unknown-none"`
- `cargo test -p bevy_math --no-default-features --features libm`
- `cargo test -p bevy_math --no-default-features --features "libm,
alloc"`
- `cargo test -p bevy_math --no-default-features --features "libm,
alloc, std"`
- `cargo test -p bevy_math --no-default-features --features "std"`
## Notes
The following items require the `alloc` feature to be enabled:
- `CubicBSpline`
- `CubicBezier`
- `CubicCardinalSpline`
- `CubicCurve`
- `CubicGenerator`
- `CubicHermite`
- `CubicNurbs`
- `CyclicCubicGenerator`
- `RationalCurve`
- `RationalGenerator`
- `BoxedPolygon`
- `BoxedPolyline2d`
- `BoxedPolyline3d`
- `SampleCurve`
- `SampleAutoCurve`
- `UnevenSampleCurve`
- `UnevenSampleAutoCurve`
- `EvenCore`
- `UnevenCore`
- `ChunkedUnevenCore`
This requirement could be relaxed in certain cases, but I had erred on
the side of gating rather than modifying. Since `no_std` is a new set of
platforms we are adding support to, and the `alloc` feature is enabled
by default, this is not a breaking change.
---------
Co-authored-by: Benjamin Brienen <benjamin.brienen@outlook.com>
Co-authored-by: Matty <2975848+mweatherley@users.noreply.github.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
~Blocked on #13417~
Motivation is the same as in #13417. If users can sort `QueryIter`, it
only makes sense to also allow them to use this functionality on
`QueryManyIter`.
## Solution
Also implement the sorts on `QueryManyIter`.
The implementation of the sorts themselves are mostly the same as with
`QueryIter` in #13417.
They differ in that they re-use the `entity_iter` passed to the
`iter_many`, and internally call `iter_many_unchecked_manual` on the
lens `QueryState` with it.
These methods also return a different struct, `QuerySortedManyIter`,
because there is no longer a guarantee of unique entities.
`QuerySortedManyIter` implements the various `Iterator` traits for
read-only iteration, as `QueryManyIter` does + `DoubleEndedIterator`.
For mutable iteration, there is both a `fetch_next` and a
`fetch_next_back` method. However, they only become available after the
user calls `collect_inner` on `QuerySortedManyIter` first. This collects
the inner `entity_iter` (this is the sorted one, **not** the original
one the user passed) to drop all query lens items to avoid aliasing.
When TAITs are available this `collect_inner` could be hidden away,
until then it is unfortunately not possible to elide this without either
regressing read-only iteration, or introducing a whole new type, mostly
being a copy of `QuerySortedIter`.
As a follow-up we could add an `entities_all_unique` method to check
whether the entity list consists of only unique entities, and then
return a `QuerySortedIter` from it (under opaque impl Trait if need be),
*allowing mutable `Iterator` trait iteration* over what was originally
an `iter_many` call.
Such a method can also be added to `QueryManyIter`, albeit needing a
separate, new return type.
## Testing
I've switched the third example/doc test under `sort` out for one that
shows the collect_inner/fetch_next_back functionality, otherwise the
examples are the same as in #13417, adjusted to use `iter_many` instead
of `iter`.
The `query-iter-many-sorts` test checks for equivalence to the
underlying sorts.
The test after shows that these sorts *do not* panic after
`fetch`/`fetch_next` calls.
## Changelog
Added `sort`, `sort_unstable`, `sort_by`, `sort_unstable_by`,
`sort_by_key`, `sort_by_cached_key` to `QueryManyIter`.
Added `QuerySortedManyIter`.
# Objective
- Don't depend on wgpu if we don't have to
## Solution
- Works towards this, but doesn't fully accomplish it. The remaining
types stopping us from doing this need to be moved upstream; I will PR
this.
## Testing
- 3d_scene runs
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Add support for multiple box shadows on a single `Node`.
## Solution
* Rename `BoxShadow` to `ShadowStyle` and remove its `Component` derive.
* Create a new `BoxShadow` component that newtypes a `Vec<ShadowStyle>`.
* Add a `new` constructor method to `BoxShadow` for single shadows.
* Change `extract_shadows` to iterate through a list of shadows per
node.
Render order is determined implicitly from the order of the shadows
stored in the `BoxShadow` component, back-to-front.
It might be more efficient to use a `SmallVec<[ShadowStyle; 1]>` for the
list of shadows, but I'm not sure if the extra friction is worth it.
## Testing
Added a node with four differently coloured shadows to the `box_shadow`
example.
---
## Showcase
```
cargo run --example box_shadow
```
<img width="460" alt="four-shadow"
src="https://github.com/user-attachments/assets/2f728c47-33b4-42e1-96ba-28a774b94b24">
## Migration Guide
Bevy UI now supports multiple shadows per node. A new struct
`ShadowStyle` is used to set the style for each shadow. And the
`BoxShadow` component is changed to a tuple struct wrapping a vector
containing a list of `ShadowStyle`s. To spawn a node with a single
shadow you can use the `new` constructor function:
```rust
commands.spawn((
    Node::default(),
    BoxShadow::new(
        Color::BLACK.with_alpha(0.8),
        Val::Percent(offset.x),
        Val::Percent(offset.y),
        Val::Percent(spread),
        Val::Px(blur),
    )
));
```
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Fixes: #15663
## Solution
- Add an `is_forward_plane` method to `Aabb`, and a `contains_aabb`
method to `Frustum`.
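A usage sketch of the new method; the exact parameters (e.g. the world-from-local transform) are assumed here:
```rust
// Hedged sketch: `contains_aabb` reports when the AABB is fully inside the frustum.
if frustum.contains_aabb(&aabb, &world_from_local) {
    // The entire AABB is inside the frustum.
}
```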
## Test
- I have created a frustum with an offset along with three unit tests to
evaluate the `contains_aabb` algorithm.
## Explanation for the Test Cases
- To facilitate the code review, I will explain how the frustum is
created. Initially, we create a frustum without any offset and then
create a cuboid that is just contained within it.
<img width="714" alt="image"
src="https://github.com/user-attachments/assets/a9ac53a2-f8a3-4e09-b20b-4ee71b27a099">
- Secondly, we move the cuboid by 2 units along both the x-axis and the
y-axis to make it more general.
## Reference
- [Frustum
Culling](https://learnopengl.com/Guest-Articles/2021/Scene/Frustum-Culling#)
- [AABB Plane
intersection](https://gdbooks.gitbooks.io/3dcollisions/content/Chapter2/static_aabb_plane.html)
---------
Co-authored-by: IQuick 143 <IQuick143cz@gmail.com>
# Objective
- I got tired of calling `enable_state_scoped_entities`, and thought it
would make more sense to define that at the place where the state is
defined
## Solution
- Add a derive attribute `#[states(scoped_entities)]` when deriving
`States` or `SubStates` that enables it automatically when adding the
state
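A sketch of what this might look like on a state type (derive list abbreviated; the attribute is the one added by this PR):
```rust
#[derive(States, Default, Debug, Clone, PartialEq, Eq, Hash)]
#[states(scoped_entities)]
enum GameState {
    #[default]
    Menu,
    InGame,
}
```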
## Testing
- Ran the examples using it, they still work
# Objective
There is currently no way of getting `QueryState` from `&World`, so it
is hard to, for example, iterate over all entities with a component
while only having `&World`.
## Solution
Add `try_new` function to `QueryState` that internally uses
`WorldQuery`'s `get_state`.
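A rough usage sketch (the return type is assumed to be an `Option` here):
```rust
// Hedged sketch: build a QueryState from a shared World reference.
if let Some(mut state) = QueryState::<&Transform>::try_new(&world) {
    for transform in state.iter(&world) {
        // ...
    }
}
```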
## Testing
No testing
# Objective
Combine the `Option<_>` state in `FunctionSystem` into a single `Option`
to provide clarity and save space.
## Solution
Simplifies `FunctionSystem`'s layout by using a single
`Option<FunctionSystemState>` for state that must be initialized before
running, and saves a byte by removing the need to store an enum tag.
Additionally, calling `System::run` on an uninitialized `System` will
now give a more descriptive message prior to verifying the `WorldId`.
## Testing
Ran CI checks locally.
# Objective
See title.
## Solution
Move `bevy_animation` import to where it is used.
## Testing
Compiled with and without `bevy_animation` feature enabled.
# Objective
- Avoid recreating the monitor every loop (temp fix until it's done
properly on winit side)
- Add a new `WinitSettings` preset for mobile that makes the winit loop
wait more and recommend its usage
Co-authored by: @BenjaminBrienen
# Objective
Fixes #16494. Closes #16539, which this replaces. Suggestions alone
weren't enough, so now we have a new PR!
---------
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
- In 0.14, `ZIndex` and `GlobalZIndex` were split from a shared enum into
separate components. There have been a few people confused by the
behavior of `ZIndex` when they really needed `GlobalZIndex`.
## Solution
- Update ZIndex docs to improve discoverability of GlobalZIndex.
---------
Co-authored-by: Benjamin Brienen <benjamin.brienen@outlook.com>
# Objective
Fixes #16531
I also added change detection when creating the pipeline, which
technically isn't needed but it felt weird leaving it as is.
## Solution
Remove the pipeline if CAS is disabled. The uniform was already being
removed, which caused flickering / weirdness.
## Testing
Tested the anti_alias example by toggling CAS a bunch on/off.
# Objective
Animating component fields requires too much boilerplate at the moment:
```rust
#[derive(Reflect)]
struct FontSizeProperty;

impl AnimatableProperty for FontSizeProperty {
    type Component = TextFont;
    type Property = f32;

    fn get_mut(component: &mut Self::Component) -> Option<&mut Self::Property> {
        Some(&mut component.font_size)
    }
}

animation_clip.add_curve_to_target(
    animation_target_id,
    AnimatableKeyframeCurve::new(
        [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
            .into_iter()
            .zip([24.0, 80.0, 24.0, 80.0, 24.0, 80.0, 24.0]),
    )
    .map(AnimatableCurve::<FontSizeProperty, _>::from_curve)
    .expect("should be able to build translation curve because we pass in valid samples"),
);
```
## Solution
This adds `AnimatedField` and an `animated_field!` macro, enabling the
following:
```rust
animation_clip.add_curve_to_target(
    animation_target_id,
    AnimatableCurve::new(
        animated_field!(TextFont::font_size),
        AnimatableKeyframeCurve::new(
            [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
                .into_iter()
                .zip([24.0, 80.0, 24.0, 80.0, 24.0, 80.0, 24.0]),
        )
        .expect(
            "should be able to build translation curve because we pass in valid samples",
        ),
    ),
);
```
This required reworking the internals a bit, namely stripping out a lot
of the `Reflect` usage, as that implementation was fundamentally
incompatible with the `AnimatedField` pattern. `Reflect` was being used
in this context just to downcast traits. But we can get downcasting
behavior without the `Reflect` requirement by implementing `Downcast`
for `AnimationCurveEvaluator`.
This also reworks "evaluator identity" to support either a (Component /
Field) pair, or a TypeId. This allows properties to reuse evaluators,
even if they have different accessor methods. The "contract" here is
that for a given (Component / Field) pair, the accessor will return the
same value. Fields are identified by their Reflect-ed field index. The
(TypeId, usize) is prehashed and cached to optimize for lookup speed.
This removes the built-in hard-coded TranslationCurve / RotationCurve /
ScaleCurve in favor of AnimatableField.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
The documentation for `bevy_app::App.world()` (and its mut variant)
could confuse some into thinking that this is the only World that the
App will contain.
## Solution
Clarify the documentation for `bevy_app::App.world()` (and its mut
variant), to say that it returns the main subapp's world. This helps
imply that Apps can contain more than one world (albeit, only one per
SubApp).
## Testing
This is a documentation change, with no changes to doctests. Thus,
testing is not necessary beyond ensuring the link syntax is correct.
# Objective
Fixes #16521
## Solution
If an empty span is encountered (such as the default `Text` value), we
skip it entirely when updating buffers. This prevents unnecessarily
bailing when the font doesn't exist (ex: when the default font is
disabled)
This reverts commit 3476a9f3a658fe7c8326ca0e618ad543914ca7f4.
# Objective
#16517 makes it impossible to select a libc version, now that the
"breaking" libc release was yanked
## Solution
Revert the version bump so we can select a version of sysinfo that
builds (the previous version)
# Objective
- Fixes #16363
- Ensure that someone using minimum version doesn't get the bugs that
were fixed in the 23.0.1 patch
## Solution
- Use wgpu 23.0.1
# Objective
In the [*Similar parameters* section of
`Query`](https://dev-docs.bevyengine.org/bevy/ecs/prelude/struct.Query.html#similar-parameters),
the doc link for `Single` actually links to `Query::single`, and
`Option<Single>` just links to `Option`. They should both link to
`Single`!
The first link is broken because there is a reference-style link defined
for `single`, but not for `Single`, and rustdoc treats the link as
case-insensitive for some reason.
## Solution
Fix the links!
## Testing
I built the docs locally with `cargo doc` and tested the links.
# Objective
- Refactor
## Solution
- Refactor
## Testing
- Ran 3d_scene
---
## Migration Guide
`RenderCreation::Manual` variant fields are now wrapped in a struct
called `RenderResources`
# Objective
`TemporaryRenderEntity` currently uses `SparseSet` storage, but doesn't
seem to fit the criteria for a component that would benefit from this.
Typical usage of `TemporaryRenderEntity` (and all current usages of it
in engine as far as I can tell) would be to spawn an entity with it once
and then iterate over it once to despawn that entity.
`SparseSet` is said to be useful for insert/removal perf at the cost of
iteration perf.
## Solution
Use the default table storage
## Testing
Possibly this could show up in stress tests like `many_buttons`. I
didn't do any benchmarking.
# Objective
When using a rect for a UI image, its content size is still equal to the
size of the full image instead of the size of the rect.
## Solution
Use the rect size if it is present.
## Testing
I tested it using all 4 possible combinations of having a rect and
texture atlas or not. See the showcase section.
---
## Showcase
<details>
<summary>Click to view showcase</summary>
```rust
use bevy::prelude::*;
fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(ImagePlugin::default_nearest()))
        .add_systems(Startup, create_ui)
        .run();
}

fn create_ui(
    mut commands: Commands,
    assets: Res<AssetServer>,
    mut texture_atlas_layouts: ResMut<Assets<TextureAtlasLayout>>,
    mut ui_scale: ResMut<UiScale>,
) {
    let texture = assets.load("textures/fantasy_ui_borders/numbered_slices.png");
    let layout = TextureAtlasLayout::from_grid(UVec2::splat(16), 3, 3, None, None);
    let texture_atlas_layout = texture_atlas_layouts.add(layout);
    ui_scale.0 = 2.;

    commands.spawn(Camera2d::default());
    commands
        .spawn(Node {
            display: Display::Flex,
            align_items: AlignItems::Center,
            ..default()
        })
        .with_children(|parent| {
            // nothing
            parent.spawn(ImageNode::new(texture.clone()));
            // with rect
            parent.spawn(ImageNode::new(texture.clone()).with_rect(Rect::new(0., 0., 16., 16.)));
            // with rect and texture atlas
            parent.spawn(
                ImageNode::from_atlas_image(
                    texture.clone(),
                    TextureAtlas {
                        layout: texture_atlas_layout.clone(),
                        index: 1,
                    },
                )
                .with_rect(Rect::new(0., 0., 8., 8.)),
            );
            // with texture atlas
            parent.spawn(ImageNode::from_atlas_image(
                texture.clone(),
                TextureAtlas {
                    layout: texture_atlas_layout.clone(),
                    index: 2,
                },
            ));
        });
}
```
Before this change:
<img width="529" alt="Screenshot 2024-11-21 at 11 55 45"
src="https://github.com/user-attachments/assets/23196003-08ca-4049-8409-fe349bd5aa54">
After the change:
<img width="400" alt="Screenshot 2024-11-21 at 11 54 54"
src="https://github.com/user-attachments/assets/e2cd6ebf-859c-40a1-9fc4-43bb28b024e5">
</details>
# Objective
Original motivation was a bundle I am migrating that is `Copy` which
needs to be synced to the render world. It probably doesn't actually
*need* to be `Copy`, so this isn't critical or anything.
I am continuing to use this bundle while bundles still exist to give
users an easier migration path.
## Solution
These ZSTs might as well be `Copy`. Add `Copy` derives.
PR #15164 made Bevy consider the center of the mesh to be the center of
the axis-aligned bounding box (AABB). Unfortunately, this breaks
crossfading in many cases. LODs may have different AABBs and so the
center of the AABB may differ for different LODs of the same mesh. The
crossfading, however, relies on all LODs having *precisely* the same
position.
To address this problem, this PR adds a new field, `use_aabb`, to
`VisibilityRange`, which makes the AABB center point behavior opt-in.
@BenjaminBrienen first noticed this issue when reviewing PR #16286. That
PR contains a video showing the effects of this regression on the
`visibility_range` example. This commit fixes that example.
## Migration Guide
* The `VisibilityRange` component now has an extra field, `use_aabb`.
Generally, you can safely set it to false.
We have an early-out to avoid updating `RenderVisibilityRanges` when a
`VisibilityRange` component is *modified*, but not when one is
*removed*. This means that removing `VisibilityRange` from an entity
might not update the rendering.
This PR fixes the issue by adding a check for removed
`VisibilityRange`s.
# Objective
- Fixes #16472.
## Solution
- Add flags to `SpritePlugin` and `UiPlugin` to disable their picking
backends.
## Testing
- The change is pretty trivial, so not much to test!
---
## Migration Guide
- `UiPlugin` now contains an extra `add_picking` field if
`bevy_ui_picking_backend` is enabled.
- `SpritePlugin` is no longer a unit struct, and has one field if
`bevy_sprite_picking_backend` is enabled (otherwise no fields).
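For example, opting out of the UI picking backend might look like this sketch (assuming `UiPlugin` implements `Default` and keeps its other defaults):
```rust
app.add_plugins(DefaultPlugins.set(UiPlugin {
    // Field named in the migration guide above.
    add_picking: false,
    ..default()
}));
```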
# Objective
- Fixes #16469.
## Solution
- Make the picking backend features not enabled by default in each
sub-crate.
- Make features in `bevy_internal` to set the backend features
- Make the root `bevy` crate set the features by default.
## Testing
- The mesh and sprite picking examples still work correctly.
I'm not sure why, but somehow `#[derive(Reflect)]` on a tuple struct
with a boxed trait object can result in linker errors when dynamic
linking is used on Windows using `rust-lld`:
```
= note: rust-lld: error: <root>: undefined symbol: bevy_animation::_::_$LT$impl$u20$bevy_reflect..reflect..PartialReflect$u20$for$u20$bevy_animation..AnimationEventFn$GT$::reflect_partial_eq::hc4cce1dc55e42e0b
rust-lld: error: <root>: undefined symbol: bevy_animation::_::_$LT$impl$u20$bevy_reflect..from_reflect..FromReflect$u20$for$u20$bevy_animation..AnimationEventFn$GT$::from_reflect::hc2b1d575b8491092
rust-lld: error: <root>: undefined symbol: bevy_animation::_::_$LT$impl$u20$bevy_reflect..tuple_struct..TupleStruct$u20$for$u20$bevy_animation..AnimationEventFn$GT$::clone_dynamic::hab42a4edc8d6b5c2
rust-lld: error: <root>: undefined symbol: bevy_animation::_::_$LT$impl$u20$bevy_reflect..tuple_struct..TupleStruct$u20$for$u20$bevy_animation..AnimationEventFn$GT$::field::h729a3d6dd6a27a43
rust-lld: error: <root>: undefined symbol: bevy_animation::_::_$LT$impl$u20$bevy_reflect..tuple_struct..TupleStruct$u20$for$u20$bevy_animation..AnimationEventFn$GT$::field_mut::hde1c34846d77344b
rust-lld: error: <root>: undefined symbol: bevy_animation::_::_$LT$impl$u20$bevy_reflect..type_registry..GetTypeRegistration$u20$for$u20$bevy_animation..AnimationEventFn$GT$::get_type_registration::hb96eb543e403a132
rust-lld: error: <root>: undefined symbol: bevy_animation::_::_$LT$impl$u20$bevy_reflect..type_registry..GetTypeRegistration$u20$for$u20$bevy_animation..AnimationEventFn$GT$::register_type_dependencies::hcf1a4b69bcfea6ae
rust-lld: error: undefined symbol: bevy_animation::_::_$LT$impl$u20$bevy_reflect..reflect..PartialReflect$u20$for$u20$bevy_animation..AnimationEventFn$GT$::reflect_partial_eq::hc4cce1dc55e42e0b
```
etc.
Adding `#[reflect(opaque)]` to the `Reflect` derive fixes the problem,
and that's what this patch does. I think that adding
`#[reflect(opaque)]` is harmless, as there's little that reflection
allows with a boxed trait object anyhow.
# Objective
- Fixes#16406 even more. The previous implementation did not take into
account the depth of the requiree when setting the depth relative to the
required_by component.
## Solution
- Add the depth of the requiree!
## Testing
- Added a test.
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
We switch back and forth between logical and physical coordinates all
over the place. Systems have to query for cameras and the UiScale when
they shouldn't need to. It's confusing and fragile and new scale factor
bugs get found constantly.
## Solution
* Use physical coordinates wherever possible in `bevy_ui`.
* Store physical coords in `ComputedNode` and tear out all the unneeded
scale factor calculations and queries.
* Add an `inverse_scale_factor` field to `ComputedNode` and set nodes
changed when their scale factor changes.
## Migration Guide
`ComputedNode`'s fields and methods now use physical coordinates.
`ComputedNode` has a new field `inverse_scale_factor`. Multiplying the
physical coordinates by the `inverse_scale_factor` will give the logical
values.
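For example, a small sketch of recovering logical values (assuming `size()` returns the physical size and `inverse_scale_factor` is the field named above):
```rust
// Physical size back to logical units using the new field.
let logical_size = computed_node.size() * computed_node.inverse_scale_factor;
```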
---------
Co-authored-by: atlv <email@atlasdostal.com>
# Objective
Needing to derive `AnimationEvent` for `Event` is unnecessary, and the
trigger logic coupled to it feels like we're coupling "event producer"
logic with the event itself, which feels wrong. It also comes with a
bunch of complexity, which is again unnecessary. We can have the
flexibility of "custom animation event trigger logic" without this
coupling and complexity.
The current `animation_events` example is also needlessly complicated,
due to it needing to work around system ordering issues. The docs
describing it are also slightly wrong. We can make this all a non-issue
by solving the underlying ordering problem.
Related to this, we use the `bevy_animation::Animation` system set to
solve PostUpdate animation order-of-operations issues. If we move this
to bevy_app as part of our "core schedule", we can cut out needless
`bevy_animation` crate dependencies in these instances.
## Solution
- Remove `AnimationEvent`, the derive, and all other infrastructure
associated with it (such as the `bevy_animation/derive` crate)
- Replace all instances of `AnimationEvent` traits with `Event + Clone`
- Store and use functions for custom animation trigger logic (ex:
`clip.add_event_fn()`). For "normal" cases users don't need to think
about this and should use the simpler `clip.add_event()` (see the sketch
after this list)
- Run the `Animation` system set _before_ updating text
- Move `bevy_animation::Animation` to `bevy_app::Animation`. Remove
unnecessary `bevy_animation` dependency from `bevy_ui`
- Adjust `animation_events` example to use the simpler `clip.add_event`
API, as the workarounds are no longer necessary
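A minimal sketch of the simpler path (the event type and timing are illustrative):
```rust
#[derive(Event, Clone)]
struct FootstepEvent;

// Trigger a plain `Event + Clone` 0.5 seconds into the clip; no
// `AnimationEvent` derive is needed anymore.
clip.add_event(0.5, FootstepEvent);
```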
This is polishing work that will land in 0.15, and I think it is simple
enough and valuable enough to land in 0.15 with it, in the interest of
making the feature as compelling as possible.
# Objective
A new user is likely to try `Query<Component>` instead of
`Query<&Component>`. The error message should guide them to the right
solution.
## Solution
Add a note to the on_unimplemented message for `QueryData` recommending
`&T` and `&mut T`.
The full error message now looks like:
```
error[E0277]: `A` is not valid to request as data in a `Query`
--> crates\bevy_ecs\src\query\world_query.rs:260:18
|
260 | fn system(query: Query<A>) {}
| ^^^^^^^^ invalid `Query` data
|
= help: the trait `fetch::QueryData` is not implemented for `A`
= note: if `A` is a component type, try using `&A` or `&mut A`
= help: the following other types implement trait `fetch::QueryData`:
&'__w mut T
&Archetype
&T
()
(F,)
(F0, F1)
(F0, F1, F2)
(F0, F1, F2, F3)
and 41 others
note: required by a bound in `system::query::Query`
--> crates\bevy_ecs\src\system\query.rs:362:37
|
362 | pub struct Query<'world, 'state, D: QueryData, F: QueryFilter = ()> {
| ^^^^^^^^^ required by this bound in `Query`
```
Alternative to #16450
# Objective
detailed_trace! in its current form does not work (and breaks CI)
## Solution
Fix detailed_trace by checking for the feature properly, adding it to
the correct crates, and removing it from the incorrect crates
# Objective
- Fixes#16406
- Fixes an issue where registering a "deeper" required component, then a
"shallower" required component, would result in the wrong required
constructor being used for the root component.
## Solution
- Make `register_required_components` add any "parent" of a component as
`required_by` to the new "child".
- Assign the depth of the `requiree` plus 1 as the depth of a new
runtime required component.
## Testing
- Added two new tests.
# Objective
- Add methods to facilitate `TextFont` component creation and insertion.
## Solution
- Added `from_font` and `from_font_size` which return a new `TextFont`
with said attributes provided as parameters.
- Added `with_font` and `with_font_size` which return an existing
`TextFont` modifying said attributes with the values provided as
parameters.
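A small sketch of how these compose (assuming an `asset_server: Res<AssetServer>` in scope; the font path and size are illustrative):
```rust
let font: Handle<Font> = asset_server.load("fonts/FiraSans-Bold.ttf");

// Build a TextFont from a font handle, then adjust its size with the new methods.
let text_font = TextFont::from_font(font).with_font_size(24.0);
```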
## Testing
- CI Checks.
- Tested methods locally by changing values and running the `text_debug`
example.
# Objective
Run this without this PR:
`cargo build -p bevy_hierarchy --no-default-features`
You'll get:
```
error[E0432]: unresolved import `bevy_reflect`
--> crates/bevy_hierarchy/src/events.rs:2:5
|
2 | use bevy_reflect::Reflect;
| ^^^^^^^^^^^^ use of undeclared crate or module `bevy_reflect`
For more information about this error, try `rustc --explain E0432`.
error: could not compile `bevy_hierarchy` (lib) due to 1 previous error
warning: build failed, waiting for other jobs to finish...
```
Because of this line:
```rs
use bevy_reflect::Reflect;
#[derive(Event, Debug, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "reflect", derive(Reflect), reflect(Debug, PartialEq))]
pub enum HierarchyEvent { .. }
```
## Solution
use FQN: `derive(bevy_reflect::Reflect)`
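With the fully qualified path, the snippet above becomes (sketch):
```rs
#[derive(Event, Debug, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "reflect", derive(bevy_reflect::Reflect), reflect(Debug, PartialEq))]
pub enum HierarchyEvent { .. }
```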
## Testing
`cargo build -p bevy_hierarchy --no-default-features`
# Objective
It looks like this file was created based on the `ui_texture_slice`
rendering code and some variable names weren't updated.
## Solution
Rename "texture slice" variable names to "box shadow".
# Objective
#16222 regressed the user experience of actually using gamepads:
```rust
// Before 16222
gamepad.just_pressed(GamepadButton::South)
// After 16222
gamepad.digital.just_pressed(GamepadButton::South)
// Before 16222
gamepad.get(GamepadButton::RightTrigger2)
// After 16222
gamepad.analog.get(GamepadButton::RightTrigger2)
```
Users shouldn't need to think about "digital vs analog" when checking if
a button is pressed. This abstraction was intentional and I strongly
believe it is in our users' best interest. Buttons and Axes are _both_
digital and analog, and this is largely an implementation detail. I
don't think reverting this will be controversial.
## Solution
- Revert most of #16222
- Add the `Into<T>` from #16222 to the internals
- Expose read/write `digital` and `analog` accessors on gamepad, in the
interest of enabling the mocking scenarios covered in #16222 (and
allowing the minority of users that care about the "digital" vs "analog"
distinction in this context to make that distinction)
---------
Co-authored-by: Hennadii Chernyshchyk <genaloner@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
# Objective
- bevy_render (poorly) implements gcd (which should be in bevy_math, but
there's not enough justification to have it there either anyway, since
it's just one usage)
## Solution
- hardcoded LUT replacement for the one usage
## Testing
- Verified that the alternative implementation of 4/gcd(4,x) agrees with
the original for 0..200
# Objective
https://github.com/AccessKit/accesskit/pull/475 changed how text content
should be set for AccessKit nodes with a role of `Label`. This was
unfortunately missing from #16234.
## Solution
When building an `accesskit::Node` with `Role::Label`, calls `set_value`
instead of `set_label` on the node to set its content.
## Testing
I can't test this right now on my Windows machine due to a compilation
error with wgpu-hal that I have no idea how to resolve.
**NOTE: This is based on, and should be merged alongside,
https://github.com/bevyengine/bevy/pull/15482.** I'll leave this in
draft until that PR is merged.
# Objective
Equivalent of https://github.com/bevyengine/bevy/pull/15482 but for
serialization. See that issue for the motivation.
Also part of this tracking issue:
https://github.com/bevyengine/bevy/issues/15518
This PR is non-breaking, just like the deserializer PR (because the new
type parameter `P` has a default `P = ()`).
## Solution
Identical solution to the deserializer PR.
## Testing
Added unit tests and a very comprehensive doc test outlining a clear
example and use case.
# Objective
Fixes #16406.
Currently, the `#[require(...)]` attribute internally registers
component requirements using `register_required_components_manual`. This
is done recursively in a way where every requirement in the "inheritance
tree" is added into a flat `RequiredComponents` hash map with component
constructors and inheritance depths stored.
However, this does not consider runtime requirements: if a plugin has
already registered `C` as required by `B`, and a component `A` requires
`B` through the macro attribute, spawning an entity with `A` won't add
`C`. The `required_by` hash set for `C` doesn't have `A`, and the
`RequiredComponents` of `A` don't have `C`.
Intuitively, I would've thought that the macro attribute's requirements
were always added *before* runtime requirements, and in that case I
believe this shouldn't have been an issue. But the macro requirements
are based on `Component::register_required_components`, which in a lot
of cases (I think) is only called *after* the first time a bundle with
the component is inserted. So if a runtime requirement is defined
*before* this (as is often the case, during `Plugin::build`), the macro
may not take it into account.
## Solution
Register requirements inherited from the `required` component in
`register_required_components_manual_unchecked`.
## Testing
I added a test, essentially the same as in #16406, and it now passes. I
also ran some of the tests in #16409, and they seem to work as expected.
All the existing tests for required components pass.
# Objective
The export of `DynamicComponentFetch` seems to have been missed in
#15593. `TryFromFilteredError`, which is returned by `impl
TryFrom<FilteredEntityMut/Ref> for EntityRef/Mut`, also seems to have
been missing.
## Solution
Export both of them.
# Objective
MSRV in the standalone crates should be accurate
## Solution
Determine the msrv of each crate and set it
## Testing
Adding better msrv checks to the CI is a next-step.
# Objective
- Fix part of #15920
## Solution
- Keep track of the last written amount of bytes, and bind only that
much of the buffer.
## Testing
- Did you test these changes? If so, how? No
---
## Migration Guide
- Fixed a bug with StorageBuffer and DynamicStorageBuffer binding data
from the previous frame(s) due to caching GPU buffers between frames.
# Objective
When adding custom BRP methods one might need to:
- Run custom systems in the `RemoteLast` schedule.
- Order those systems before/after request processing and cleanup.
For example in `bevy_remote_inspector` we need a way to perform some
preparation _before_ request processing. And to perform cleanup
_between_ request processing and watcher cleanup.
## Solution
- Make `RemoteLast` public
- Add `RemoteSet` with `ProcessRequests` and `Cleanup` variants.
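For example, a sketch of ordering a custom system between request processing and watcher cleanup (the system name is illustrative):
```rust
app.add_systems(
    RemoteLast,
    my_cleanup_system
        .after(RemoteSet::ProcessRequests)
        .before(RemoteSet::Cleanup),
);
```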
I didn't mean to make this item private, fixing it for the 0.15 release
to be consistent with 0.14.
(maintainers: please make sure this gets merged into the 0.15 release
branch as well as main)
# Objective
- Fixes #16152
## Solution
- Put `bevy_window` and `bevy_a11y` behind the `bevy_window` feature.
They were the only difference
- Add `ScheduleRunnerPlugin` to the `DefaultPlugins` when `bevy_window`
is disabled
- Remove `HeadlessPlugins`
- Update the `headless` example
# Objective
- Fixes #16285
- Inactive cameras keep the `ViewUniformOffset` component from when they
were active, so they still match some queries trying to render to
them
## Solution
- Remove component `ViewUniformOffset` from cameras that are inactive
## Testing
- Ran example `render_primitives` and switched camera
Tweaks picking docs slightly for formatting and to add additional
context about the ordering of `Over` and `Out` events. Also shifts `Out`
to trigger before `Over` in the global event ordering.
Because of how focus is tracked, we must send all `Over` and `Out`
events at the same time, in a block. Originally I had `Over` precede
`Out` in the global event order, because this seemed natural. However,
the effect of this, when a pointer moves between entities, is to have
the new entity receive `Over` before the old entity received `Out`,
which several users found confusing.
The new ordering (out before over globally, over before out locally per
entity) should make it much easier to write hover state cleanup code.
# Objective
- When picking sprites, the pointer is offset from the mouse, causing
you to pick sprites you're not mousing over!
## Solution
- Shift over the cursor by the minimum of the viewport.
## Testing
- I was already using the bevy_mod_picking PR for my project, so it
seems to work!
- I tested this on the sprite_example (making the camera only render to
part of the viewport), and it also works there.
## Notes
- This is just https://github.com/aevyrie/bevy_mod_picking/pull/365 but
in Bevy form.
- We don't need to renormalize the viewport in any way since the
viewport is specified in pixels, so all that matters is that the origin
is correct.
Co-authored-by: johanhelsing <johanhelsing@gmail.com>
# Objective
PCSS still has some fundamental issues (#16155). We should resolve them
before "releasing" the feature.
## Solution
1. Rename the already-optional `pbr_pcss` cargo feature to
`experimental_pbr_pcss` to better communicate its state to developers.
2. Adjust the description of the `experimental_pbr_pcss` cargo feature
to better communicate its state to developers.
3. Gate PCSS-related light component fields behind that cargo feature,
to prevent surfacing them to developers by default.
# Objective
UI Anti-aliasing is incorrectly implemented. It always uses an edge
radius of 0.25 logical pixels, and ignores the physical resolution. For
low dpi screens 0.25 is too low and on higher dpi screens the
physical edge radius is much too large, resulting in visual artifacts.
## Solution
Multiply the distance by the scale factor in the `antialias` function so
that the edge radius stays constant in physical pixels.
## Testing
To see the problem really clearly run the button example with `UiScale`
set really high. With `UiScale(25.)` on main if you examine the button's
border you can see a thick gradient fading away from the edges:
<img width="127" alt="edgg"
src="https://github.com/user-attachments/assets/7c852030-c0e8-4aef-8d3e-768cb2464cab">
With this PR the edges are sharp and smooth at all scale factors:
<img width="127" alt="edge"
src="https://github.com/user-attachments/assets/b3231140-1bbc-4a4f-a1d3-dde21f287988">
# Objective
Text2d doesn't respond to changes to the window scale factor.
Fixes #16223
## Solution
In `update_text2d_layout`, store the previous scale factor in a `Local`
instead and check against the current scale factor to detect changes.
It seems like previously the text wasn't updated because of a bug where
the `WindowScaleFactorChanged` event isn't emitted after changes to the
scale factor. That needs to be looked into, but this will work
for now.
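A simplified sketch of the change-detection pattern described above (not the full system; other parameters elided):
```rust
fn update_text2d_layout(
    mut last_scale_factor: Local<f32>,
    windows: Query<&Window>,
    // ...other parameters elided
) {
    let scale_factor = windows.single().scale_factor();
    let scale_factor_changed = *last_scale_factor != scale_factor;
    *last_scale_factor = scale_factor;

    // Re-layout text when `scale_factor_changed` is true...
}
```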
## Testing
Really simple app that draws a big message in the middle of the window:
```
use bevy::prelude::*;
fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .run();
}

fn setup(mut commands: Commands) {
    commands.spawn(Camera2d);
    commands.spawn((
        Text2d::new("Hello"),
        TextFont {
            font_size: 400.,
            ..Default::default()
        },
    ));
}
```
Looks fine:
<img width="500" alt="hello1"
src="https://github.com/user-attachments/assets/5320746b-687e-4682-9e4c-bc43ab7ff9d3">
On main, after changing the monitor's scale factor:
<img width="500" alt="hello2"
src="https://github.com/user-attachments/assets/486cea16-fc44-4d66-9468-6f68905d4196">
With this PR the text maintains the same size and position after the
scale factor is changed.
# Objective
Use the fully qualified name for `Component` in the `require` attribute
Fixes #16377
## Solution
Use the fully qualified name for `Component` in the `require` attribute,
i.e., `<#ident as #bevy_ecs_path::component::Component>`
## Testing
- Did you test these changes? If so, how?
`cargo run -p ci -- lints`
`cargo run -p ci -- compile`
`cargo run -p ci -- test`
- Are there any parts that need more testing?
no
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
try to compile
```rust
#[derive(::bevy::ecs::component::Component, Default)]
pub struct A;
#[derive(::bevy::ecs::component::Component)]
#[require(A)]
pub struct B;
```
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
Mac only
---
Co-authored-by: Volodymyr Enhelhardt <volodymyr.enhelhardt@ambr.net>
# Objective
We currently use special "floating" constructors for `EasingCurve`,
`FunctionCurve`, and `ConstantCurve` (ex: `easing_curve`). This erases
the type being created (and in general "what is happening"
structurally), for very minimal ergonomics improvements. With rare
exceptions, we prefer normal `X::new()` constructors over floating `x()`
constructors in Bevy. I don't think this use case merits special casing
here.
## Solution
Add `EasingCurve::new()`, use normal constructors everywhere, and remove
the floating constructors.
I think this should land in 0.15 in the interest of not breaking people
later.
# Objective
Fixes #16266
## Solution
Added an `UnregisterSystem` command struct and
`Commands::unregister_system`. Also renamed `World::remove_system` and
`World::remove_system_cached` to `World::unregister_*`
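A quick sketch of the renamed API in use (the system itself is illustrative):
```rust
fn my_one_shot_system() { /* ... */ }

// In a system with `mut commands: Commands`:
let id = commands.register_system(my_one_shot_system);
// ...later, using the new name from this PR:
commands.unregister_system(id);
```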
## Testing
It's a fairly simple change, but I tested locally to ensure it actually
works.
---------
Co-authored-by: Benjamin Brienen <benjamin.brienen@outlook.com>
# Objective
Fixes #16316
## Solution
Tweaked a few crates' Cargo files until I was able to build and test
`bevy_ui` via `cargo test --package bevy_ui`
## Testing
- ran `cargo test --package bevy_ui` successfully
- CI should catch anything amiss (Hopefully?)
# Objective
`ButtonBundle` has an `ImageNode` component (renamed from `UiImage`).
This wasn't a problem in 0.14, but in 0.15 `requires` pulls in
`ContentSize` and `NodeImageSize`, which means that by default
`ButtonBundle` nodes are given a measure func based on the size of the
image belonging to `TRANSPARENT_IMAGE_HANDLE`, which is 1x1.
This doesn't make sense; the behaviour for default image nodes should
either be to go to zero size or to not add a measure func.
## Solution
Check if an image has the `TRANSPARENT_IMAGE_HANDLE` and, if it does,
remove its measure func.
Possibly a zero-sized measure would make more sense, but that would
break existing code.
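A hedged sketch of the check, not the actual `bevy_ui` internals; the helper name is hypothetical and the `ImageNode::image` field and prelude re-exports are assumed:
```rust
use bevy::prelude::*;

// The 1x1 placeholder bound to the transparent handle should not produce a
// measure func, so only "real" images drive layout.
fn image_drives_layout(node: &ImageNode, transparent_handle: &Handle<Image>) -> bool {
    node.image != *transparent_handle
}
```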
## Testing
Used `ButtonBundle` in the 0.15 `button` example: before this change the
border doesn't render; after it, the border renders correctly.
# Objective
- Fix a bug where `UiSurface::set_camera_children` (and sometimes
`UiSurface::update_children`) will panic if you remove and add a `Node`
component in a single tick. This is more likely to happen now because of
`remove_with_requires`.
## Solution
- Filter out entities that still have `Node` when cleaning up entities
from `RemovedComponents<Node>` (see the sketch below).
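A hedged sketch of the filtering, written as a standalone system rather than the actual `bevy_ui` cleanup code:
```rust
use bevy::prelude::*;

// Entities that removed and re-added `Node` within one tick still show up in
// `RemovedComponents<Node>`; skip them so their layout state isn't torn down.
fn cleanup_removed_nodes(
    mut removed: RemovedComponents<Node>,
    still_nodes: Query<(), With<Node>>,
) {
    for entity in removed.read() {
        if still_nodes.contains(entity) {
            continue;
        }
        // ... clean up the UI layout data associated with `entity` here ...
    }
}
```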
## Testing
- Not tested (the Rust compiler refused to cooperate when I tried to
patch this into my project); correct by inspection.
# Objective
- Allow configuring `on_thread_spawn` and `on_thread_destroy` when
using Bevy's `TaskPoolPlugin`.
## Solution
- In `TaskPoolThreadAssignmentPolicy`, two options `on_thread_spawn` and
`on_thread_destroy` are added, which are passed to the task pool
builder's corresponding methods when the task pool is created (see the
sketch below).
- Because these two options do not implement `Debug`, `Debug` is
implemented manually for `TaskPoolThreadAssignmentPolicy`.
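A hedged configuration sketch; the `bevy::core` path, the callback type (`Arc<dyn Fn() + Send + Sync>` is assumed), and the surrounding field values are illustrative only:
```rust
use std::sync::Arc;

use bevy::core::{TaskPoolOptions, TaskPoolPlugin, TaskPoolThreadAssignmentPolicy};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(TaskPoolPlugin {
            task_pool_options: TaskPoolOptions {
                compute: TaskPoolThreadAssignmentPolicy {
                    min_threads: 1,
                    max_threads: usize::MAX,
                    percent: 1.0,
                    // New in this PR: run once on every compute-pool thread.
                    on_thread_spawn: Some(Arc::new(|| println!("compute thread started"))),
                    on_thread_destroy: Some(Arc::new(|| println!("compute thread stopped"))),
                },
                ..TaskPoolOptions::default()
            },
        }))
        .run();
}
```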
---
## Changelog
### Added
- `on_thread_spawn` and `on_thread_destroy` options to the
`TaskPoolPlugin`, allowing users to customize them as needed.
## Migration Guide
- `TaskPoolThreadAssignmentPolicy` now has two additional fields:
`on_thread_spawn` and `on_thread_destroy`. Please consider defaulting
them to `None`.
---------
Co-authored-by: François Mockers <mockersf@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
_If I understand it correctly_, we were checking mesh visibility, as
well as re-rendering point and spot light shadow maps for each view.
This makes it so that M views and N lights produce M x N complexity.
This PR aims to fix that, as well as introduce a stress test for this
specific scenario.
## Solution
- Keep track of what lights have already had mesh visibility calculated
and do not calculate it again;
- Reuse shadow depth textures and attachments across all views, and only
render shadow maps the _first_ time a light is encountered in a
view;
- Directional lights remain unaltered, since their shadow map cascades
are view-dependent;
- Add a new `many_cameras_lights` stress test example to verify the
solution
## Showcase
- 110% speed up on the stress test
- 83% reduction of memory usage in the stress test
### Before (5.35 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 25 57"
src="https://github.com/user-attachments/assets/136b0785-e9a4-44df-9a22-f99cc465e126">
### After (11.34 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 24 35"
src="https://github.com/user-attachments/assets/b8dd858f-5e19-467f-8344-2b46ca039630">
## Testing
- Did you test these changes? If so, how?
- On my game project, where I have two cameras and many shadow-casting
lights, I managed to get pretty much double the FPS.
- Also included a stress test, see the comparison above
- Are there any parts that need more testing?
- Yes, I would like help verifying that this fix is indeed correct, and
that we were really re-rendering the shadow maps by mistake and it's
indeed okay to not do that
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the `many_cameras_lights` example
- On the `main` branch, cherry pick the commit with the example (`git
cherry-pick --no-commit 1ed4ace01`) and run it
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- macOS
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Should compile successfully with any combination of features
## Solution
Add the missing import on the right cfg.
## Testing
Tested building locally.
Fixes https://github.com/bevyengine/bevy/issues/16352
# Objective
Glam has some common and useful types and helpers that are not in the
prelude of `bevy_math`. This includes shorthand constructors like
`vec3`, or even `Vec3A`, the aligned version of `Vec3`.
```rust
// The "normal" way to create a 3D vector
let vec = Vec3::new(2.0, 1.0, -3.0);
// Shorthand version
let vec = vec3(2.0, 1.0, -3.0);
```
## Solution
Add the following types and methods to the prelude:
- `vec2`, `vec3`, `vec3a`, `vec4`
- `uvec2`, `uvec3`, `uvec4`
- `ivec2`, `ivec3`, `ivec4`
- `bvec2`, `bvec3`, `bvec3a`, `bvec4`, `bvec4a`
- `mat2`, `mat3`, `mat3a`, `mat4`
- `quat` (not sure if anyone uses this, but for consistency)
- `Vec3A`
- `BVec3A`, `BVec4A`
- `Mat3A`
I did not add the u16, i16, or f64 variants like `dvec2`, since there
are currently no existing types like those in the prelude.
The shorthand constructors are currently used a lot in some places in
Bevy, and not at all in others. In a follow-up, we might want to
consider if we have a preference for the shorthand, and make a PR to
change the codebase to use it more consistently.
# Objective
- Fixes: #15603
## Solution
- Add an unsafe `get_mut_by_id_unchecked` to `EntityMut` that borrows
`&self` instead of `&mut self`, thereby allowing access to multiple
components simultaneously (see the sketch below).
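A hedged sketch of why the `&self` receiver matters; the helper function is hypothetical and the exact return type of `get_mut_by_id_unchecked` is left unused here:
```rust
use bevy_ecs::component::ComponentId;
use bevy_ecs::world::EntityMut;

// Hypothetical helper: fetch two different components mutably through the
// same `EntityMut`, which a `&mut self` receiver would not allow.
fn mutate_two(entity: &EntityMut, id_a: ComponentId, id_b: ComponentId) {
    assert_ne!(id_a, id_b);
    // SAFETY: the ids are distinct, so the two untyped mutable borrows never
    // alias the same component data.
    unsafe {
        let _a = entity.get_mut_by_id_unchecked(id_a);
        let _b = entity.get_mut_by_id_unchecked(id_b);
        // ... write to both components through the returned values ...
    }
}
```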
## Testing
- a unit test function `get_mut_by_id_unchecked` was added.
---------
Co-authored-by: Mike <mike.hsu@gmail.com>
**NOTE: Also see https://github.com/bevyengine/bevy/pull/15548 for the
serializer equivalent**
# Objective
The current `ReflectDeserializer` and `TypedReflectDeserializer` use the
`TypeRegistration` and/or `ReflectDeserialize` of a given type in order
to determine how to deserialize a value of that type. However, there is
currently no way to statefully override deserialization of a given type
when using these two deserializers - that is, to have some local data in
the same scope as the `ReflectDeserializer`, and make use of that data
when deserializing.
The motivating use case for this came up when working on
[`bevy_animation_graph`](https://github.com/aecsocket/bevy_animation_graph/tree/feat/dynamic-nodes),
when loading an animation graph asset. The `AnimationGraph` stores
`Vec<Box<dyn NodeLike>>`s which we have to load in. Those `Box<dyn
NodeLike>`s may store `Handle`s, e.g. `Handle<AnimationClip>`, and I want
to trigger a `load_context.load()` for each such handle when it is
deserialized.
```rs
#[derive(Reflect)]
struct Animation {
clips: Vec<Handle<AnimationClip>>,
}
```
```rs
(
clips: [
"animation_clips/walk.animclip.ron",
"animation_clips/run.animclip.ron",
"animation_clips/jump.animclip.ron",
],
)
```
Currently, if this were deserialized from an asset loader, it would be
deserialized as a `Vec` of `Handle::default()`s, which isn't useful,
since we also need to `load_context.load()` those handles for them to be
used. With this processor field, a processor can detect when `Handle<T>`s
are being loaded and then actually load them in.
## Solution
```rs
trait ReflectDeserializerProcessor {
fn try_deserialize<'de, D>(
&mut self,
registration: &TypeRegistration,
deserializer: D,
) -> Result<Result<Box<dyn PartialReflect>, D>, D::Error>
where
D: serde::Deserializer<'de>;
}
```
```diff
- pub struct ReflectDeserializer<'a> {
+ pub struct ReflectDeserializer<'a, P = ()> { // also for TypedReflectDeserializer
registry: &'a TypeRegistry,
+ processor: Option<&'a mut P>,
}
```
```rs
impl<'a, P: ReflectDeserializerProcessor> ReflectDeserializer<'a, P> { // also for TypedReflectDeserializer
pub fn with_processor(registry: &'a TypeRegistry, processor: &'a mut P) -> Self {
Self {
registry,
processor: Some(processor),
}
}
}
```
This does not touch the existing `fn new`s.
This `processor` field is also added to all internal visitor structs.
When `TypedReflectDeserializer` runs, it will first try to deserialize a
value of this type by passing the `TypeRegistration` and deserializer to
the processor, and fall back to the default logic if the processor
declines. The processor runs earliest and takes priority over all other
deserialization logic.
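For example, a minimal processor that always declines, falling through to the default logic, might look like this (the `bevy_reflect::serde` import path is an assumption):
```rs
use bevy_reflect::serde::ReflectDeserializerProcessor;
use bevy_reflect::{PartialReflect, TypeRegistration};

struct NoOpProcessor;

impl ReflectDeserializerProcessor for NoOpProcessor {
    fn try_deserialize<'de, D>(
        &mut self,
        _registration: &TypeRegistration,
        deserializer: D,
    ) -> Result<Result<Box<dyn PartialReflect>, D>, D::Error>
    where
        D: serde::Deserializer<'de>,
    {
        // Hand the deserializer back untouched so the default reflect-driven
        // deserialization logic runs for this value.
        Ok(Err(deserializer))
    }
}
```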
## Testing
Added unit tests to `bevy_reflect::serde::de`. Also using almost exactly
the same implementation in [my fork of
`bevy_animation_graph`](https://github.com/aecsocket/bevy_animation_graph/tree/feat/dynamic-nodes).
## Migration Guide
(Since I added `P = ()`, I don't think this is actually a breaking
change anymore, but I'll leave this in)
`bevy_reflect`'s `ReflectDeserializer` and `TypedReflectDeserializer`
now take a `ReflectDeserializerProcessor` as the type parameter `P`,
which allows you to customize deserialization for specific types when
they are found. However, the rest of the API surface (`new`) remains the
same.
<details>
<summary>Original implementation</summary>
Add `ReflectDeserializerProcessor`:
```rs
struct ReflectDeserializerProcessor<'p> {
    pub can_deserialize: Box<dyn FnMut(&TypeRegistration) -> bool + 'p>,
    pub deserialize: Box<
        dyn FnMut(
                &TypeRegistration,
                &mut dyn erased_serde::Deserializer,
            ) -> Result<Box<dyn PartialReflect>, erased_serde::Error>
            + 'p,
    >,
}
```
Along with `ReflectDeserializer::new_with_processor` and
`TypedReflectDeserializer::new_with_processor`. This does not touch the
public API of the existing `new` fns.
This is stored as an `Option<&mut ReflectDeserializerProcessor>` on the
deserializer and any of the private `-Visitor` structs, and when we
attempt to deserialize a value, we first pass it through this processor.
Also added a very comprehensive doc test to
`ReflectDeserializerProcessor`, which is actually a scaled down version
of the code for the `bevy_animation_graph` loader. This should give
users a good motivating example for when and why to use this feature.
### Why `Box<dyn ..>`?
When I originally implemented this, I added a type parameter to
`ReflectDeserializer` to determine the processor used, with `()` being
"no processor". However when using this, I kept running into rustc
errors where it failed to validate certain type bounds and led to
overflows. I then switched to a dynamic dispatch approach.
The dynamic dispatch should not be that expensive, nor should it be a
performance regression, since it's only used if there is `Some`
processor. (Note: I have not benchmarked this, I am just speculating.)
Also, it means that we don't infect the rest of the code with an extra
type parameter, which is nicer to maintain.
### Why the `'p` on `ReflectDeserializerProcessor<'p>`?
Without a lifetime here, the `Box`es would automatically become `Box<dyn
FnMut(..) + 'static>`. This makes them practically useless, since any
local data you would want to pass in must then be `'static`. In the
motivating example, you couldn't pass in that `&mut LoadContext` to the
function.
This means that the `'p` infects the rest of the Visitor types, but this
is acceptable IMO. This PR also elides the lifetimes in the `impl<'de>
Visitor<'de> for -Visitor` blocks where possible.
### Future possibilities
I think it's technically possible to turn the processor into a trait,
and make the deserializers generic over that trait. This would also open
the door to an API like:
```rs
type Seed;
fn seed_deserialize(&mut self, r: &TypeRegistration) -> Option<Self::Seed>;
fn deserialize(&mut self, r: &TypeRegistration, d: &mut dyn erased_serde::Deserializer, s: Self::Seed) -> ...;
```
A similar processor system should also be added to the serialization
side, but that's for another PR. Ideally, both PRs will be in the same
release, since one isn't very useful without the other.
## Testing
Added unit tests to `bevy_reflect::serde::de`. Also using almost exactly
the same implementation in [my fork of
`bevy_animation_graph`](https://github.com/aecsocket/bevy_animation_graph/tree/feat/dynamic-nodes).
## Migration Guide
`bevy_reflect`'s `ReflectDeserializer` and `TypedReflectDeserializer`
now take a second lifetime parameter `'p` for storing the
`ReflectDeserializerProcessor` field lifetimes. However, the rest of the
API surface (`new`) remains the same, so if you are not storing these
deserializers or referring to them with lifetimes, you should not have
to make any changes.
</details>
# Objective
`glam` has opted to rename `Vec2::angle_between` to `Vec2::angle_to`
because of the difference in semantics compared to `Vec3::angle_between`
and others, which return an unsigned angle in `[0, PI]`, whereas
`Vec2::angle_between` returns a signed angle in `[-PI, PI]`.
We should follow suit for `Rot2` in 0.15 to avoid further confusion.
Links:
-
https://github.com/bitshifter/glam-rs/issues/514#issuecomment-2143202294
- https://github.com/bitshifter/glam-rs/pull/524
## Migration Guide
`Rot2::angle_between` has been deprecated; use `Rot2::angle_to` instead.
The semantics of `Rot2::angle_between` will change in the future.
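A minimal migration sketch, assuming `Rot2::degrees` and the new `Rot2::angle_to` added here:
```rust
use bevy_math::Rot2;

fn main() {
    let a = Rot2::degrees(30.0);
    let b = Rot2::degrees(-45.0);

    // Before (deprecated): signed angle in [-PI, PI].
    // let angle = a.angle_between(b);

    // After: same signed result under the less confusable name.
    let angle = a.angle_to(b);
    println!("signed angle: {angle}");
}
```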
---------
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
Fixes #15940
## Solution
Remove the `pub use` and fix the compile errors.
Make `bevy_image` available as `bevy::image`.
## Testing
Feature Frenzy would be good here! Maybe I'll learn how to use it if I
have some time this weekend, or maybe a reviewer can use it.
## Migration Guide
Use `bevy_image` (available as `bevy::image`) instead of
`bevy_render::texture` items, as in the sketch below.
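A minimal import-migration sketch; the old path is shown commented out, and the example assumes `bevy::image` is enabled by the default features:
```rust
// Before:
// use bevy::render::texture::Image;

// After:
use bevy::image::Image;

fn main() {
    // `Image` itself is unchanged; only the import path moves.
    let _placeholder = Image::default();
}
```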
---------
Co-authored-by: chompaa <antony.m.3012@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Currently, when you attempt to change `cursor_grab_mode`, the new value
is cached whether the cursor grab succeeded or failed. This change
handles the `Result` returned by `set_cursor_grab` and changes
`cursor_grab_mode` back to the cached value in case of an error.
- Creates a way to handle #16237 and #16238
## Solution
- I changed the signature of `winit_windows`' `attempt_grab` to return the
`Result<(), ExternalError>` that winit's `set_cursor_grab` returns. The
system that calls `attempt_grab` now checks whether an error was
returned, and if so it sets the grab mode back to the cached version
(similar to what `hit_test` does a few lines down).
## Testing
- I tested using this system that previously would not correctly lock
the mouse on Ubuntu/x11
```rust
pub fn lock_mouse(mut primary_window: Query<&mut Window, With<PrimaryWindow>>) {
let window = &mut primary_window.single_mut();
if window.focused {
window.cursor_options.grab_mode = CursorGrabMode::Confined;
} else {
window.cursor_options.grab_mode = CursorGrabMode::None;
}
}
```
- I only tested on Ubuntu with x11
# Objective
Fixes #15928
## Solution
Return an error instead of panicking.
## Testing
I don't know if we need to add a test for this. It is pretty
straightforward.
# Objective
- wgpu 0.20 stopped zero-initializing workgroup variables by default. This
broke some applications (cough foresight cough) and we now work around
it. wgpu exposes a compilation option that zero-initializes workgroup
memory by default, but Bevy does not expose it.
## Solution
- Expose the compilation option wgpu gives us.
## Testing
- Ran the examples 3d_scene, compute_shader_game_of_life, gpu_readback,
lines, and specialized_mesh_pipeline; they all work
- Confirmed the fix for our own problems
---
## Migration Guide
- Add `zero_initialize_workgroup_memory: false,` to
`ComputePipelineDescriptor` or `RenderPipelineDescriptor` structs to
preserve the 0.14 behavior; add `zero_initialize_workgroup_memory:
true,` to restore the Bevy 0.13 behavior.