# Objective
- Fixes #14966
## Solution
- Added a _Performance_ section to the documentation for
`LogPlugin::filter` explaining that long filter strings can degrade
performance and recommending `LogPlugin::level` instead where possible
(see the sketch below).
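A minimal sketch of the recommendation, using the public `level` and `filter` fields on `LogPlugin`:
```rust
use bevy::log::{Level, LogPlugin};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(LogPlugin {
            // Prefer the coarse global level over a long per-module filter string.
            level: Level::WARN,
            // Keep the filter itself short and targeted.
            filter: "wgpu=error".into(),
            ..default()
        }))
        .run();
}
```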
## Testing
- CI passed locally.
# Objective
- Ensure consistency in instantiating 2D primitive shapes in the
examples.
## Solution
Replaced the existing instantiation of the 2D `Circle` in the
`2d_shapes.rs` file with the `new` method (a before/after sketch
follows).
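A small before/after sketch of the change, assuming the `Circle` primitive's public `radius` field and `new` constructor:
```rust
use bevy::math::primitives::Circle;

fn main() {
    // Before: struct literal
    let a = Circle { radius: 50.0 };

    // After: dedicated constructor, consistent with the other primitives
    let b = Circle::new(50.0);

    assert_eq!(a.radius, b.radius);
}
```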
## Testing
- None: Should be straightforward enough not to warrant a test (I will
eat my words if I am wrong).
---
# Objective
- Currently we have the `ColorRange` trait to interpolate linearly
between two colors
- It would be cool to have:
1. linear interpolation between n colors where `n >= 1`
2. other kinds of interpolation
## Solution
1. Implement `ColorGradient` which takes `n >= 1` colors and linearly
interpolates between consecutive pairs of them
2. Implement `Curve` integration for this `ColorGradient`, which yields
a curve struct. After that we can apply all of the cool curve adaptors
like `.reparametrize()` and `.map()` to the gradient
## Testing
- Added doc tests
- Added tests
## Showcase
```rust
// let gradient = ColorGradient::new(vec![]).unwrap(); // panic! 💥
let gradient = ColorGradient::new([basic::RED, basic::LIME, basic::BLUE]).expect("non-empty");
let curve = gradient.to_curve();
let brighter_curve = curve.map(|c| c.mix(&basic::WHITE, 0.5));
```
---
Kind of related to
https://github.com/bevyengine/bevy/pull/14971#discussion_r1736337631
---------
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
Co-authored-by: Matty <weatherleymatthew@gmail.com>
# Objective
Fixes https://github.com/bevyengine/bevy/issues/14183
## Solution
Reimplement the UI texture atlas slicer using a shader.
The problems with #14183 could be fixed more simply by hacking around
with the coordinates and scaling but that way is very fragile and might
get broken again the next time we make changes to the layout
calculations. A shader-based solution is more robust: it's impossible
for gaps to appear between the image slices with these changes, as we're
only drawing a single quad.
I've not tried any benchmarks yet, but it should be much more efficient
as well, in the worst cases even hundreds or thousands of times faster.
Maybe this could have used the `UiMaterialPipeline`. I wrote the shader
first and used fat vertices, and then realised it wouldn't work that way
with a `UiMaterial`. If it were rewritten so that it puts all the slice
geometry in a uniform buffer, then it might work? Adding the uniform
buffer would probably make the shader more complicated though, so I
don't know if it's even worth it. Instancing is another alternative.
## Testing
The examples are working and the behaviour seems to match the old API,
but I've not used the texture atlas slicing API for anything before; I
reviewed the PR but that was back in January.
Needs a review by someone who knows the rendering pipeline and WGSL
really well, because I don't really have any idea what I'm doing.
# Objective
Minor cosmetic improvements and code cleanup for this example
## Solution
- Use the default font and shorter constructor where the full font
is not needed
- Use 12px padding for instruction text to match other examples
- Clean up formatting of instruction text
- Fix one instance of a font handle not being reused
<details>
<summary>Before / After</summary>
*Question mark boxes are expected. We don't include a font with the
glyphs I'm testing the IME with*
<img width="1280" alt="Screenshot 2024-09-02 at 11 14 17 AM"
src="https://github.com/user-attachments/assets/e0a904a3-c01b-4c18-b71f-41fbc1d09169">
<img width="1280" alt="Screenshot 2024-09-02 at 11 13 14 AM"
src="https://github.com/user-attachments/assets/4b2a2d31-6b9f-4b9a-a245-4258a6aa9519">
</details>
## Testing
Tested all functionality of the example (macOS)
# Objective
Thought I had found all of these... noticed some `10px` in #15013 and
did another sweep.
Continuation of #8478, #13583.
## Solution
- Position example text (and other elements) 12px from the edge of the
screen
# Objective
The `BorderRadius` component shouldn't be required to draw borders for
nodes with sharp corners.
## Solution
Make `BorderRadius` optional in `extract_uinode_borders`'s UI node
query.
# Objective
The `Parent` component holds a reference to the parent entity of the
entity it is inserted onto. The `with_child` function fails to insert
this component onto the child entity that it spawns, causing buggy
behaviour when it is used instead of the other child-spawning
functions.
## Solution
Ensure `with_child` inserts the `Parent` component, the same as all the
other child-spawning functions.
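A minimal usage sketch (names here are illustrative), assuming `with_child` takes a single child `Bundle`:
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Spawn a parent and a single child in one call.
    commands
        .spawn(SpatialBundle::default())
        .with_child(SpatialBundle::default());
}

// With the fix, the child also carries a `Parent` component, so
// hierarchy-aware systems (e.g. transform propagation) can see it.
fn report(children: Query<(Entity, &Parent)>) {
    for (entity, parent) in &children {
        info!("{:?} is a child of {:?}", entity, parent.get());
    }
}
```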
## Testing
Checked before/after with a bevy_ui layout where this patch fixed buggy
behaviour I was seeing in parent/child UI nodes.
# Objective
- Fixes #14969
## Solution
- Added `Deserialize` to the list of reflected traits for `SmolStr`
## Testing
- CI passed locally.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
This commit adds support for *masks* to the animation graph. A mask is a
set of animation targets (bones) that neither a node nor its descendants
are allowed to animate. Animation targets can be assigned one or more
*mask group*s, which are specific to a single graph. If a node masks out
any mask group that an animation target belongs to, animation curves for
that target will be ignored during evaluation.
The canonical use case for masks is to support characters holding
objects. Typically, character animations will contain hand animations in
the case that the character's hand is empty. (For example, running
animations may close a character's fingers into a fist.) However, when
the character is holding an object, the animation must be altered so
that the hand grips the object.
Bevy currently has no convenient way to handle this. The only workaround
that I can see is to have entirely separate animation clips for
characters' hands and bodies and keep them in sync, which is burdensome
and doesn't match artists' expectations from other engines, which all
effectively have support for masks. However, with mask group support,
this task is simple. We assign each hand to a mask group and parent all
character animations to a node. When a character grasps an object in
hand, we position the fingers as appropriate and then enable the mask
group for that hand in that node. This allows the character's animations
to run normally, while the object remains correctly attached to the
hand.
Note that even with this PR, we won't have support for running separate
animations for a character's hand and the rest of the character. This is
because we're missing additive blending: there's no way to combine the
two masked animations together properly. I intend that to be a follow-up
PR.
The major engines all have support for masks, though the workflow varies
from engine to engine:
* Unity has support for masks [essentially as implemented here], though
with layers instead of a tree. However, when using the Mecanim
("Humanoid") feature, precise control over bones is lost in favor of
predefined muscle groups.
* Unreal has a feature named [*layered blend per bone*]. This allows for
separate blend weights for different bones, effectively achieving masks.
I believe that the combination of blend nodes and masks makes Bevy's
animation graph as expressible as that of Unreal, once we have support
for additive blending, though you may have to use more nodes than you
would in Unreal. Moreover, separating out the concepts of "blend weight"
and "which bones this node applies to" seems like a cleaner design than
what Unreal has.
* Godot's `AnimationTree` has the notion of [*blend filters*], which are
essentially the same as masks as implemented in this PR.
Additionally, this patch fixes a bug with weight evaluation whereby
weights weren't properly propagated down to grandchildren, because the
weight evaluation for a node only checked its parent's weight, not its
evaluated weight. I considered submitting this as a separate PR, but
given that this PR refactors that code entirely to support masks and
weights under a unified "evaluated node" concept, I simply included the
fix here.
A new example, `animation_masks`, has been added. It demonstrates how to
toggle masks on and off for specific portions of a skin.
This is part of #14395, but I'm going to defer closing that issue until
we have additive blending.
[essentially as implemented here]:
https://docs.unity3d.com/560/Documentation/Manual/class-AvatarMask.html
[*layered blend per bone*]:
https://dev.epicgames.com/documentation/en-us/unreal-engine/using-layered-animations-in-unreal-engine
[*blend filters*]:
https://docs.godotengine.org/en/stable/tutorials/animation/animation_tree.html
## Migration Guide
* The serialized format of animation graphs has changed with the
addition of animation masks. To upgrade animation graph RON files, add
`mask` and `mask_groups` fields as appropriate. (They can be safely set
to zero.)
# Objective
`resolve_outlines_system` wasn't updated when multi-window support was
added and it always uses the size of the primary window when resolving
viewport coords, regardless of the layout's camera target.
Fixes #14945
## Solution
It's awkward to get the viewport size of the target for an individual
node without walking the tree or adding extra fields to `Node`, so I
removed `resolve_outlines_system`; the outline values are now updated
in `ui_layout_system` instead.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Make the documentation for `SystemParamBuilder` nicer by combining the
tuple implementations into a single line of documentation.
## Solution
Use `#[doc(fake_variadic)]` for `SystemParamBuilder` tuple impls.
![image](https://github.com/user-attachments/assets/b4665861-c405-467f-b30b-82b4b1d99bf7)
(This got missed originally because #14050 and #14703 were open at the
same time.)
Adds a new `Handle<Storage>` asset type that can be used as a render
asset, particularly for use with `AsBindGroup`.
Closes: #13658
# Objective
Allow users to create storage buffers in the main world without having
to access the `RenderDevice`. While this resource is technically
available, it's bad form to use in the main world and requires mixing
rendering details with main world code. Additionally, this makes storage
buffers easier to use with `AsBindGroup`, particularly in the following
scenarios:
- Sharing the same buffers between a compute stage and material shader.
We already have examples of this for storage textures (see game of life
example) and these changes allow a similar pattern to be used with
storage buffers.
- Preventing repeated GPU upload (see the previous, easier-to-use `Vec`
`AsBindGroup` option).
- Allow initializing custom materials using `Default`. Previously, the
lack of a `Default` implementation for the raw `wgpu::Buffer` type made
implementing an `AsBindGroup + Default` bound difficult in the presence
of buffers.
## Solution
Adds a new `Handle<Storage>` asset type that is prepared into a
`GpuStorageBuffer` render asset. This asset can either be initialized
with a `Vec<u8>` of properly aligned data or with a size hint. Users can
modify the underlying `wgpu::BufferDescriptor` to provide additional
usage flags.
## Migration Guide
The `AsBindGroup` `storage` attribute has been modified to reference the
new `Handle<Storage>` asset instead. Usages of `Vec` should be converted
into assets.
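A hedged sketch of that migration, using the `Storage` name from this description (the exact type name and attribute arguments may differ in the final API):
```rust
use bevy::prelude::*;
use bevy::render::render_resource::AsBindGroup;

#[derive(Asset, TypePath, AsBindGroup, Clone, Default)]
struct CustomMaterial {
    // Previously something like: #[storage(0)] values: Vec<Vec4>
    // Now the attribute points at a storage buffer asset handle instead.
    #[storage(0)]
    values: Handle<Storage>,
}
```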
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
- Avoid cloning the `CosmicBuffer` every time you create a new text
measurement.
## Solution
- Inject a buffer query when calculating layout so existing buffers can
be reused.
## Testing
- I tested the `text`, `text_debug`, and `text_wrap_debug` examples.
- I did not do a performance test.
Fixes #14993 (maybe). Adds a system ordering constraint that was missed
in the refactor in #14833. The theory here is that running
single-threaded forces a topology that causes the prepare system to run
before `prepare_windows` in a way that causes issues. For whatever
reason, this appears to be unlikely when multi-threading is enabled.
# Objective
* Fixes https://github.com/bevyengine/bevy/issues/14889
## Solution
Exposes `bevy_animation::{animatable, graph, transition}` to the world.
## Testing
- Did you test these changes? If so, how?
- These changes do not need testing, as they do not modify/add/remove
any functionality.
- ~~Are there any parts that need more testing?~~
- ~~How can other people (reviewers) test your changes? Is there
anything specific they need to know?~~
- ~~If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?~~
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
The `reflect` module in `bevy_state` is gated behind the `bevy_reflect`
feature, but the type exports from that module in the crate prelude are
erroneously gated behind the `bevy_app` feature, causing a compile error
when the `bevy_reflect` feature is disabled, but the `bevy_app` feature
is enabled.
## Solution
Change the feature gate to `bevy_reflect`.
## Testing
- Discovered by depending on `bevy_state` with `default-features =
false, features = ["bevy_app"]`
- Tested by running `cargo check -p bevy_state --no-default-features
--features bevy_app`
# Objective
- Fixes #14974
## Solution
- Replace all* instances of `NonZero*` with `NonZero<*>`
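A small illustration of the replacement, using the standard library's generic `NonZero<T>`:
```rust
use core::num::NonZero;

fn main() {
    // Before: the alias form, e.g. `core::num::NonZeroU32::new(4)`.
    // After: the generic form.
    let n: NonZero<u32> = NonZero::new(4).unwrap();
    assert_eq!(n.get(), 4);
}
```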
## Testing
- CI passed locally.
---
## Notes
Within the `bevy_reflect` implementations for `std` types,
`impl_reflect_value!()` will continue to use the type aliases instead,
as it inappropriately parses the concrete type parameter as a generic
argument. If the `ZeroablePrimitive` trait was stable, or the macro
could be modified to accept a finite list of types, then we could fully
migrate.
# Objective
`bevy_animation` imports a lot of items - and it uses a very
inconsistent code style to do so.
## Solution
Changes the offending `use` statements to be more consistent across the
crate.
## Testing
- Did you test these changes? If so, how?
- No testing is needed beyond lint checks, and those finished
successfully.
- ~~Are there any parts that need more testing?~~
- ~~How can other people (reviewers) test your changes? Is there
anything specific they need to know?~~
- ~~If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?~~
# Objective
- There's one occurrence of `CowArc::Borrow` that wraps a `&'static str`
## Solution
- Replaces it with `CowArc::Static`. I don't think this change is
important but I can't unsee it:)
## Testing
- `cargo check` compiles fine
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/14961
## Solution
- Check that the archetypes don't contain any other observed components
before unsetting their flags
## Testing
- I added a regression test: `observer_despawn_archetype_flags`
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/14975
## Solution
- Replace usages of `bevy_utils::CowArc` with `atomicow::CowArc`
- Remove bevy_utils::CowArc
## Testing
- `bevy_asset` test suite continues to pass.
---
## Migration Guide
`bevy_utils::CowArc` has moved to a new crate called
[atomicow](https://crates.io/crates/atomicow).
# Objective
- Add gizmos integration for the new `Curve` things in the math lib
## Solution
- Add the following methods
- `curve_2d(curve, sample_times, color)`
- `curve_3d(curve, sample_times, color)`
- `curve_gradient_2d(curve, sample_times_with_colors)`
- `curve_gradient_3d(curve, sample_times_with_colors)`
## Testing
- I added examples of the 2D and 3D variants of the gradient curve
gizmos to the gizmos examples.
## Showcase
### 2D
![image](https://github.com/user-attachments/assets/01a75706-a7b4-4fc5-98d5-18018185c877)
```rust
let domain = Interval::EVERYWHERE;
let curve = function_curve(domain, |t| Vec2::new(t, (t / 25.0).sin() * 100.0));
let resolution = ((time.elapsed_seconds().sin() + 1.0) * 50.0) as usize;
let times_and_colors = (0..=resolution)
    .map(|n| n as f32 / resolution as f32)
    .map(|t| (t - 0.5) * 600.0)
    .map(|t| (t, TEAL.mix(&HOT_PINK, (t + 300.0) / 600.0)));
gizmos.curve_gradient_2d(curve, times_and_colors);
```
### 3D
![image](https://github.com/user-attachments/assets/3fd23983-1ec9-46cd-baed-5b5e2dc935d0)
```rust
let domain = Interval::EVERYWHERE;
let curve = function_curve(domain, |t| {
    (Vec2::from((t * 10.0).sin_cos())).extend(t - 6.0)
});
let resolution = ((time.elapsed_seconds().sin() + 1.0) * 100.0) as usize;
let times_and_colors = (0..=resolution)
    .map(|n| n as f32 / resolution as f32)
    .map(|t| t * 5.0)
    .map(|t| (t, TEAL.mix(&HOT_PINK, t / 5.0)));
gizmos.curve_gradient_3d(curve, times_and_colors);
```
# Objective
With the current implementation of `Plane3d` gizmos, it's really hard to
get a good feeling for big planes. Usually I tend to add more axes as a
user, but that doesn't scale well and is pretty wasteful. It's hard to
recognize the plane in the distance here, especially if there were
other rendered objects in the scene.
![image](https://github.com/user-attachments/assets/b65b7015-c08c-46d7-aa27-c7c0d49b2021)
## Solution
- Since we got grid gizmos in the meantime, I went ahead and just
reused them here.
## Testing
I added an instance of the new `Plane3d` to the `3d_gizmos.rs` example.
If you want to look at it you need to look around a bit. I didn't
position it in the center since that was too crowded already.
---
## Showcase
![image](https://github.com/user-attachments/assets/e4982afe-7296-416c-9801-7dd85cd975c1)
## Migration Guide
The optional builder methods on
```rust
gizmos.primitive_3d(&Plane3d { }, ...);
```
changed from
- `segment_length`
- `segment_count`
- `axis_count`
to
- `cell_count`
- `spacing`
# Objective
This moves the default `LogPlugin` filter to be a public constant so
that it can be updated and referenced from outside code without changes
across releases:
```rust
fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(bevy::log::LogPlugin {
            filter: format!("{},mylogs=error", bevy::log::LogPlugin::DEFAULT_FILTER),
            ..default()
        }))
        .run();
}
```
## Testing
Tested with `cargo run -p ci`
# Objective
Allow `SystemParamBuilder` implementations for custom system parameters
created using `#[derive(SystemParam)]`.
## Solution
Extend the derive macro to accept a `#[system_param(builder)]`
attribute. When present, emit a builder type with a field corresponding
to each field of the param.
## Example
```rust
#[derive(SystemParam)]
#[system_param(builder)]
struct CustomParam<'w, 's> {
    query: Query<'w, 's, ()>,
    local: Local<'s, usize>,
}

let system = (CustomParamBuilder {
    local: LocalBuilder(100),
    query: QueryParamBuilder::new(|builder| {
        builder.with::<A>();
    }),
},)
    .build_state(&mut world)
    .build_system(|param: CustomParam| *param.local + param.query.iter().count());
```
# Objective
- Fixes #14841
## Solution
- Compute BufferSlice size manually and use it for comparison in
`TrackedRenderPass`
## Testing
- Gizmo example does not crash with #14721 (without system ordering),
and `slice` computes correct size there
---
## Migration Guide
- `TrackedRenderPass::set_vertex_buffer` function has been modified to
update vertex buffers when the same buffer with the same offset is
provided, but its size has changed. Some existing code may rely on the
previous behavior, which did not update the vertex buffer in this
scenario.
---------
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
- Fixes #14860
## Solution
- Added a line of documentation to `FromWorld`'s trait definition
mentioning the `Default` blanket implementation.
- Added custom documentation to the `from_world` method for the
`Default` blanket implementation. This ensures when inspecting the
`from_world` function within an IDE, the tooltip will explicitly state
the `default()` method will be used for any `Default` types.
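A small illustration of the blanket behaviour being documented here:
```rust
use bevy::ecs::world::{FromWorld, World};

#[derive(Default)]
struct Config {
    volume: f32,
}

fn main() {
    let mut world = World::new();
    // Because `Config: Default`, the blanket impl is equivalent to `Config::default()`.
    let config = Config::from_world(&mut world);
    assert_eq!(config.volume, 0.0);
}
```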
## Testing
- CI passes.
# Objective
Since https://github.com/bevyengine/bevy/pull/14731 is merged, a few
utility methods for 2D arcs are unblocked. The 2D counterparts to
`long_arc_3d_between` and `short_arc_3d_between` are missing. Since
`arc_2d` can be a bit hard to use, this PR is trying to plug some holes
in the `arcs` API.
## Solution
Implement
- `long_arc_2d_between(center, from, to, color)`
- `short_arc_2d_between(center, from, to, color)`
## Testing
- There are new doc tests
- The `2d_gizmos` example has been extended a bit to include a few more
arcs which can easily be checked with respect to the grid
---
## Showcase
![image](https://github.com/user-attachments/assets/b90ad8b1-86c2-4304-a481-4f9a5246c457)
Code related to the screenshot (from outer = first line to inner = last
line)
```rust
my_gizmos.arc_2d(Isometry2d::IDENTITY, FRAC_PI_2, 80.0, ORANGE_RED);
my_gizmos.short_arc_2d_between(Vec2::ZERO, Vec2::X * 40.0, Vec2::Y * 40.0, ORANGE_RED);
my_gizmos.long_arc_2d_between(Vec2::ZERO, Vec2::X * 20.0, Vec2::Y * 20.0, ORANGE_RED);
```
# Objective
When building a system from `SystemParamBuilder`s and defining the
system as a closure, the compiler should be able to infer the parameter
types from the builder types.
## Solution
Create methods for each arity that take an argument that implements both
`SystemParamFunction` as well as `FnMut(SystemParamItem<P>,...)`. The
explicit `FnMut` constraint will allow the compiler to infer the
necessary higher-ranked lifetimes along with the parameter types.
I wanted to show that this was possible, but I can't tell whether it's
worth the complexity. It requires a separate method for each arity,
which pollutes the docs a bit:
![SystemState build_system
docs](https://github.com/user-attachments/assets/5069b749-7ec7-47e3-a5e4-1a4c78129f78)
## Example
```rust
let system = (LocalBuilder(0u64), ParamBuilder::local::<u64>())
    .build_state(&mut world)
    .build_system(|a, b| *a + *b + 1);
```
# Objective
- Solves the last bullet in, and closes, #14319
- Make better use of the `Isometry` types
- Prevent issues like #14655
- Probably simplify and clean up a lot of code through the use of Gizmos
as well (i.e. the 3D gizmos for cylinders, circles & lines don't connect
well, probably due to wrong rotations)
## Solution
- go through the `bevy_gizmos` crate and give all methods a slight
workover
## Testing
- For all the changed examples I ran `git switch main && cargo rr
--example <X> && git switch <BRANCH> && cargo rr --example <X>` and
compared the visual results
- Check if all doc tests are still compiling
- Check the docs in general and update them !!!
---
## Migration Guide
The gizmo methods' function signatures change as follows (a short
before/after sketch follows the list):
- 2D
- if it took `position` & `rotation_angle` before ->
`Isometry2d::new(position, Rot2::radians(rotation_angle))`
- if it just took `position` before ->
`Isometry2d::from_translation(position)`
- 3D
- if it took `position` & `rotation` before ->
`Isometry3d::new(position, rotation)`
- if it just took `position` before ->
`Isometry3d::from_translation(position)`
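A hedged before/after sketch of the migration, using `circle_2d` and `rect_2d` as examples of the old signatures:
```rust
// Before (old signatures):
// gizmos.circle_2d(position, radius, color);
// gizmos.rect_2d(position, rotation_angle, size, color);

// After:
gizmos.circle_2d(Isometry2d::from_translation(position), radius, color);
gizmos.rect_2d(
    Isometry2d::new(position, Rot2::radians(rotation_angle)),
    size,
    color,
);
```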
# Objective
Sending events tends to be low-frequency, so ergonomics can be
prioritized over efficiency.
Add `Commands::send_event` to send any type of event without needing a
writer in hand.
I don't know how we feel about this kind of ergonomic thing; I add
this to all my projects and find it useful. Adding `mut
this_particular_event_writer: EventWriter<ThisParticularEvent>` every
time I want to send something is unnecessarily cumbersome.
It also simplifies the "send and receive in the same system" pattern
significantly.
basic example before:
```rs
fn my_func(
    q: Query<(Entity, &State)>,
    mut damage_event_writer: EventWriter<DamageEvent>,
    mut heal_event_writer: EventWriter<HealEvent>,
) {
    for (entity, state) in q.iter() {
        if let Some(damage) = state.get_damage() {
            damage_event_writer.send(DamageEvent { entity, damage });
        }
        if let Some(heal) = state.get_heal() {
            heal_event_writer.send(HealEvent { entity, heal });
        }
    }
}
```
basic example after:
```rs
use bevy::ecs::event::SendEventEx;

fn my_func(
    mut commands: Commands,
    q: Query<(Entity, &State)>,
) {
    for (entity, state) in q.iter() {
        if let Some(damage) = state.get_damage() {
            commands.send_event(DamageEvent { entity, damage });
        }
        if let Some(heal) = state.get_heal() {
            commands.send_event(HealEvent { entity, heal });
        }
    }
}
```
send/receive in the same system before:
```rs
fn send_and_receive_param_set(
    mut param_set: ParamSet<(EventReader<DebugEvent>, EventWriter<DebugEvent>)>,
) {
    // We must collect the events to resend, because we can't access the writer while we're iterating over the reader.
    let mut events_to_resend = Vec::new();
    // This is p0, as the first parameter in the `ParamSet` is the reader.
    for event in param_set.p0().read() {
        if event.resend_from_param_set {
            events_to_resend.push(event.clone());
        }
    }
    // This is p1, as the second parameter in the `ParamSet` is the writer.
    for mut event in events_to_resend {
        event.times_sent += 1;
        param_set.p1().send(event);
    }
}
```
after:
```rs
use bevy::ecs::event::SendEventEx;
fn send_via_commands_and_receive(
    mut reader: EventReader<DebugEvent>,
    mut commands: Commands,
) {
    for event in reader.read() {
        if event.resend_via_commands {
            commands.send_event(DebugEvent {
                times_sent: event.times_sent + 1,
                ..event.clone()
            });
        }
    }
}
```
---------
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
## Introduction
This is the first step in my [Next Generation Scene / UI
Proposal](https://github.com/bevyengine/bevy/discussions/14437).
Fixes https://github.com/bevyengine/bevy/issues/7272 and #14800.
Bevy's current Bundles as the "unit of construction" hamstring the UI
user experience and have been a pain point in the Bevy ecosystem
generally when composing scenes:
* They are an additional _object defining_ concept, which must be
learned separately from components. Notably, Bundles _are not present at
runtime_, which is confusing and limiting.
* They can completely erase the _defining component_ during Bundle init.
For example, `ButtonBundle { style: Style::default(), ..default() }`
_makes no mention_ of the `Button` component symbol, which is what makes
the Entity a "button"!
* They are not capable of representing "dependency inheritance" without
completely non-viable / ergonomically crushing nested bundles. This
limitation is especially painful in UI scenarios, but it applies to
everything across the board.
* They introduce a bunch of additional nesting when defining scenes,
making them ugly to look at
* They introduce component name "stutter": `SomeBundle { component_name:
ComponentName::new() }`
* They require copious sprinklings of `..default()` when spawning them
in Rust code, due to the additional layer of nesting
**Required Components** solve this by allowing you to define which
components a given component needs, and how to construct those
components when they aren't explicitly provided.
This is what a `ButtonBundle` looks like with Bundles (the current
approach):
```rust
#[derive(Component, Default)]
struct Button;
#[derive(Bundle, Default)]
struct ButtonBundle {
    pub button: Button,
    pub node: Node,
    pub style: Style,
    pub interaction: Interaction,
    pub focus_policy: FocusPolicy,
    pub border_color: BorderColor,
    pub border_radius: BorderRadius,
    pub image: UiImage,
    pub transform: Transform,
    pub global_transform: GlobalTransform,
    pub visibility: Visibility,
    pub inherited_visibility: InheritedVisibility,
    pub view_visibility: ViewVisibility,
    pub z_index: ZIndex,
}

commands.spawn(ButtonBundle {
    style: Style {
        width: Val::Px(100.0),
        height: Val::Px(50.0),
        ..default()
    },
    focus_policy: FocusPolicy::Block,
    ..default()
})
```
And this is what it looks like with Required Components:
```rust
#[derive(Component)]
#[require(Node, UiImage)]
struct Button;
commands.spawn((
    Button,
    Style {
        width: Val::Px(100.0),
        height: Val::Px(50.0),
        ..default()
    },
    FocusPolicy::Block,
));
```
With Required Components, we mention only the most relevant components.
Every component required by `Node` (ex: `Style`, `FocusPolicy`, etc) is
automatically brought in!
### Efficiency
1. At insertion/spawn time, Required Components (including recursive
required components) are initialized and inserted _as if they were
manually inserted alongside the given components_. This means that this
is maximally efficient: there are no archetype or table moves.
2. Required components are only initialized and inserted if they were
not manually provided by the developer. For the code example in the
previous section, because `Style` and `FocusPolicy` are inserted
manually, they _will not_ be initialized and inserted as part of the
required components system. Efficient!
3. The "missing required components _and_ constructors needed for an
insertion" are cached in the "archetype graph edge", meaning they aren't
computed per-insertion. When a component is inserted, the "missing
required components" list is iterated (and that graph edge (AddBundle)
is actually already looked up for us during insertion, because we need
that for "normal" insert logic too).
### IDE Integration
The `#[require(SomeComponent)]` macro has been written in such a way
that Rust Analyzer can provide type-inspection-on-hover and `F12` /
go-to-definition for required components.
### Custom Constructors
The `require` syntax expects a `Default` constructor by default, but it
can be overridden with a custom constructor:
```rust
#[derive(Component)]
#[require(
    Node,
    Style(button_style),
    UiImage
)]
struct Button;

fn button_style() -> Style {
    Style {
        width: Val::Px(100.0),
        ..default()
    }
}
```
### Multiple Inheritance
You may have noticed by now that this behaves a bit like "multiple
inheritance". One of the problems that this presents is that it is
possible to have duplicate requires for a given type at different levels
of the inheritance tree:
```rust
#[derive(Component)]
struct X(usize);

#[derive(Component)]
#[require(X(x1))]
struct Y;

fn x1() -> X {
    X(1)
}

#[derive(Component)]
#[require(
    Y,
    X(x2),
)]
struct Z;

fn x2() -> X {
    X(2)
}
// What version of X is inserted for Z?
commands.spawn(Z);
```
This is allowed (and encouraged), although this doesn't appear to occur
much in practice. First: only one version of `X` is initialized and
inserted for `Z`. In the case above, I think we can all probably agree
that it makes the most sense to use the `x2` constructor for `X`,
because `Y`'s `x1` constructor exists "beneath" `Z` in the inheritance
hierarchy; `Z`'s constructor is "more specific".
The algorithm is simple and predictable:
1. Use all of the constructors (including default constructors) directly
defined in the spawned component's require list
2. In the order the requires are defined in `#[require()]`, recursively
visit the require list of each of the components in the list (this is a
Depth First Search). When a constructor is found, it will only be
used if one has not already been found.
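Applied to the `X`/`Y`/`Z` example above, these rules mean the `x2` constructor wins. A minimal check (assuming `X`'s tuple field is accessible and spawning directly on a `World`):
```rust
let mut world = World::new();
let entity = world.spawn(Z).id();

// Z's own require list supplies `x2`, which takes precedence over Y's `x1`.
assert_eq!(world.get::<X>(entity).unwrap().0, 2);
```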
From a user perspective, just think about this as the following:
1. Specifying a required component constructor for `Foo` directly on a
spawned component `Bar` will result in that constructor being used (and
overriding existing constructors lower in the inheritance tree). This is
the classic "inheritance override" behavior people expect.
2. For cases where "multiple inheritance" results in constructor
clashes, Components should be listed in "importance order". List a
component earlier in the requirement list to initialize its inheritance
tree earlier.
Required Components _does_ generally result in a model where component
values are decoupled from each other at construction time. Notably, some
existing Bundle patterns use bundle constructors to initialize multiple
components with shared state. I think (in general) moving away from this
is necessary:
1. It allows Required Components (and the Scene system more generally)
to operate according to simple rules
2. The "do arbitrary init value sharing in Bundle constructors" approach
_already_ causes data consistency problems, and those problems would be
exacerbated in the context of a Scene/UI system. For cases where shared
state is truly necessary, I think we are better served by observers /
hooks.
3. If a situation _truly_ needs shared state constructors (which should
be rare / generally discouraged), Bundles are still there if they are
needed.
## Next Steps
* **Require Construct-ed Components**: I have already implemented this
(as defined in the [Next Generation Scene / UI
Proposal](https://github.com/bevyengine/bevy/discussions/14437)). However
I've removed `Construct` support from this PR, as that has not landed
yet. Adding this back in requires relatively minimal changes to the
current impl, and can be done as part of a future Construct pr.
* **Port Built-in Bundles to Required Components**: This isn't something
we should do right away. It will require rethinking our public
interfaces, which IMO should be done holistically after the rest of Next
Generation Scene / UI lands. I think we should merge this PR first and
let people experiment _inside their own code with their own Components_
while we wait for the rest of the new scene system to land.
* **_Consider_ Automatic Required Component Removal**: We should
evaluate _if_ automatic Required Component removal should be done. Ex:
if all components that explicitly require a component are removed,
automatically remove that component. This issue has been explicitly
deferred in this PR, as I consider the insertion behavior to be
desirable on its own (and viable on its own). I am also doubtful that we
can find a design that has behavior we actually want. Aka: can we
_really_ distinguish between a component that is "only there because it
was automatically inserted" and "a component that was necessary / should
be kept". See my [discussion response
here](https://github.com/bevyengine/bevy/discussions/14437#discussioncomment-10268668)
for more details.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
Co-authored-by: Pascal Hertleif <killercup@gmail.com>
# Objective
The Android example on Adreno 642L currently crashes on startup.
Previous PRs #14176 and #13323 have addressed this specific crash
occurring on some Adreno GPUs; that fix works as it should but isn't
applied when the GPU name contains a suffix, as in the case of
`642L`.
## Solution
- Amending the logic to filter out any parts of the GPU name not
containing digits, thus enabling the fix on `642L` (a small sketch of
the idea follows).
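A hedged sketch of the idea (not the actual renderer code): keep only the digits of the reported model name, so `642L` still matches the `642` workaround.
```rust
// Hypothetical helper, for illustration only.
fn adreno_model_number(model_name: &str) -> u32 {
    model_name
        .chars()
        .filter(|c| c.is_ascii_digit())
        .collect::<String>()
        .parse()
        .unwrap_or(0)
}

fn main() {
    assert_eq!(adreno_model_number("642L"), 642);
    assert_eq!(adreno_model_number("730"), 730);
}
```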
## Testing
- Ran the Android example on a Nothing Phone 1. Before this change it
crashed; after, it works as intended.
---------
Co-authored-by: Sam Pettersson <sam.pettersson@geoguessr.com>
# Objective
- `Curve<T>` was meant to be object safe, but one of the latest commits
made it not object safe.
- When trying to use `Curve<T>` as `&dyn Curve<T>` this compile error is
raised:
```
error[E0038]: the trait `curve::Curve` cannot be made into an object
--> crates/bevy_math/src/curve/mod.rs:1025:20
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> crates/bevy_math/src/curve/mod.rs:60:8
|
23 | pub trait Curve<T> {
| ----- this trait cannot be made into an object...
...
60 | fn sample_iter(&self, iter: impl IntoIterator<Item = f32>) -> impl Iterator<Item = Option<T>> {
| ^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...because method `sample_iter` references an `impl Trait` type in its return type
| |
| ...because method `sample_iter` has generic type parameters
...
```
## Solution
- Making `Curve<T>` object safe again by adding `Self: Sized` to newly
added methods.
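A minimal sketch of the pattern (not Bevy's actual trait definition): provided methods that use `impl Trait` get a `where Self: Sized` bound, which excludes them from the vtable and keeps the trait object safe.
```rust
trait Curve<T> {
    fn sample(&self, t: f32) -> Option<T>;

    // Excluded from trait objects by the `Self: Sized` bound.
    fn sample_iter(
        &self,
        iter: impl IntoIterator<Item = f32>,
    ) -> impl Iterator<Item = Option<T>>
    where
        Self: Sized,
    {
        iter.into_iter().map(|t| self.sample(t))
    }
}

// Compiles again: the trait can be used as an object.
fn sample_midpoint(curve: &dyn Curve<f32>) -> Option<f32> {
    curve.sample(0.5)
}
```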
## Testing
- Added a new test that ensures the `Curve<T>` trait can be made into an
object.
# Objective
- Fixes #14348
- Fixes #14528
- Less complex (but also likely less performant) alternative to #14611
## Solution
- Add an `is_dense` flag to `QueryIter` indicating whether it can
perform dense iteration or not;
- Check this flag any time iteration over a query is performed.
---
It would be nice if someone could try benching this change to see if it
actually matters.
~~Note that this is not 100% ready for merging, since there are a bunch
of safety comments on the use of the various `IS_DENSE` checks that
still need to be updated.~~ This is ready modulo benchmarks
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
When handles for loading assets are dropped, we currently wait until
the load is completed before dropping the handle. Drop asset-load tasks
immediately instead.
## Solution
- track tasks for loading assets and drop them immediately when all
handles are dropped.
~~- use `join_all` in `gltf_loader.rs` to allow it to yield and be
dropped.~~
This doesn't cover all the load APIs - for those it doesn't cover, the
task will still be detached and will still complete before the result
is discarded.
separated out from #13170
# Objective
Allow dynamic systems to take lists of system parameters whose length is
not known at compile time.
This can be used for building a system that runs a script defined at
runtime, where the script needs a variable number of query parameters.
It can also be used for building a system that collects a list of
plugins at runtime, and provides a parameter to each one.
This is most useful today with `Vec<Query<FilteredEntityMut>>`. It will
be even more useful with `Vec<DynSystemParam>` if #14817 is merged,
since the parameters in the list can then be of different types.
## Solution
Implement `SystemParam` and `SystemParamBuilder` for `Vec` and
`ParamSet<Vec>`.
## Example
```rust
let system = (vec![
    QueryParamBuilder::new_box(|builder| {
        builder.with::<B>().without::<C>();
    }),
    QueryParamBuilder::new_box(|builder| {
        builder.with::<C>().without::<B>();
    }),
],)
    .build_state(&mut world)
    .build_system(|params: Vec<Query<&mut A>>| {
        let mut count: usize = 0;
        params
            .into_iter()
            .for_each(|mut query| count += query.iter_mut().count());
        count
    });
```
# Objective
- Fixes #14902
- > #14686 introduced a name clash when using `use bevy::prelude::*;`
## Solution
- Renamed `bevy::picking::events::Drop` to
`bevy::picking::events::DragDrop`
## Testing
- Not being used in tests or examples, so I just compiled.
---
## Migration Guide
- Rename `Drop` to `DragDrop`
- `bevy::picking::events::Drop` is now `bevy::picking::events::DragDrop`
# Objective
This is a value that is, and will be, used as a domain of curves pretty
often. By adding it as a dedicated constant we can get rid of some
`unwrap`s and function calls.
## Solution
added `Interval::UNIT`
## Testing
I replaced all occurrences of `interval(0.0, 1.0).unwrap()` with the new
`Interval::UNIT` constant in tests and doc tests.
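A small before/after illustration, assuming `interval` and `Interval` from `bevy_math`'s curve module:
```rust
use bevy_math::curve::{interval, Interval};

fn main() {
    // Before: fallible constructor plus unwrap
    let old = interval(0.0, 1.0).unwrap();

    // After: dedicated constant
    let new = Interval::UNIT;

    assert_eq!(old, new);
}
```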