# Objective
`resolve_outlines_system` wasn't updated when multi-window support was
added, so it always uses the size of the primary window when resolving
viewport coordinates, regardless of the layout's camera target.
Fixes #14945
## Solution
It's awkward to get the viewport size of the target for an individual
node without walking the tree or adding extra fields to `Node`, so I
removed `resolve_outlines_system` and instead the outline values are
updated in `ui_layout_system`.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Make the documentation for `SystemParamBuilder` nicer by combining the
tuple implementations into a single line of documentation.
## Solution
Use `#[doc(fake_variadic)]` for `SystemParamBuilder` tuple impls.
![image](https://github.com/user-attachments/assets/b4665861-c405-467f-b30b-82b4b1d99bf7)
(This got missed originally because #14050 and #14703 were open at the
same time.)
Adds a new `Handle<Storage>` asset type that can be used as a render
asset, particularly for use with `AsBindGroup`.
Closes: #13658
# Objective
Allow users to create storage buffers in the main world without having
to access the `RenderDevice`. While this resource is technically
available, it's bad form to use in the main world and requires mixing
rendering details with main world code. Additionally, this makes storage
buffers easier to use with `AsBindGroup`, particularly in the following
scenarios:
- Sharing the same buffers between a compute stage and material shader.
We already have examples of this for storage textures (see game of life
example) and these changes allow a similar pattern to be used with
storage buffers.
- Preventing repeated gpu upload (see the previous easier to use `Vec`
`AsBindGroup` option).
- Allow initializing custom materials using `Default`. Previously, the
lack of a `Default` implementation for the raw `wgpu::Buffer` type made
implementing an `AsBindGroup + Default` bound difficult in the presence
of buffers.
## Solution
Adds a new `Handle<Storage>` asset type that is prepared into a
`GpuStorageBuffer` render asset. This asset can either be initialized
with a `Vec<u8>` of properly aligned data or with a size hint. Users can
modify the underlying `wgpu::BufferDescriptor` to provide additional
usage flags.
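As a rough illustration of the intended usage (a hedged sketch that uses the `Storage` naming from this description; the exact asset type name and attribute syntax in the final API may differ):
```rust
use bevy::prelude::*;
use bevy::render::render_resource::AsBindGroup;

// Hypothetical material: the storage buffer is referenced through an asset
// handle rather than a raw `wgpu::Buffer`, so `Default` can simply be derived.
#[derive(Asset, TypePath, AsBindGroup, Clone, Default)]
struct CustomMaterial {
    #[storage(0, read_only)]
    values: Handle<Storage>,
}
```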
## Migration Guide
The `AsBindGroup` `storage` attribute has been modified to reference the
new `Handle<Storage>` asset instead. Usages of `Vec` should be converted
into assets instead.
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
- Avoid cloning the `CosmicBuffer` every time you create a new text
measurement.
## Solution
- Inject a buffer query when calculating layout so existing buffers can
be reused.
## Testing
- I tested the `text`, `text_debug`, and `text_wrap_debug` examples.
- I did not do a performance test.
Fixes #14993 (maybe). Adds a system ordering constraint that was missed
in the refactor in #14833. The theory here is that the single-threaded
executor forces a topology that causes the prepare system to run before
`prepare_windows` in a way that causes issues. For whatever reason, this
appears to be unlikely when multi-threading is enabled.
# Objective
* Fixes https://github.com/bevyengine/bevy/issues/14889
## Solution
Exposes `bevy_animation::{animatable, graph, transition}` to the world.
## Testing
- These changes do not need testing, as they do not modify, add, or
remove any functionality.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
The `reflect` module in `bevy_state` is gated behind the `bevy_reflect`
feature, but the type exports from that module in the crate prelude are
erroneously gated behind the `bevy_app` feature, causing a compile error
when the `bevy_reflect` feature is disabled, but the `bevy_app` feature
is enabled.
## Solution
Change the feature gate to `bevy_reflect`.
## Testing
- Discovered by depending on `bevy_state` with `default-features =
false, features = ["bevy_app"]`
- Tested by running `cargo check -p bevy_state --no-default-features
--features bevy_app`
# Objective
- Fixes #14974
## Solution
- Replace all* instances of `NonZero*` with `NonZero<*>`
## Testing
- CI passed locally.
---
## Notes
Within the `bevy_reflect` implementations for `std` types,
`impl_reflect_value!()` will continue to use the type aliases instead,
as it inappropriately parses the concrete type parameter as a generic
argument. If the `ZeroablePrimitive` trait was stable, or the macro
could be modified to accept a finite list of types, then we could fully
migrate.
# Objective
`bevy_animation` imports a lot of items - and it uses a very
inconsistent code style to do so.
## Solution
Changes the offending `use` statements to be more consistent across the
crate.
## Testing
- No testing is needed beyond lint checks, and those finished
successfully.
# Objective
- There's one occurrence of `CowArc::Borrow` that wraps `&'static str`.
## Solution
- Replaces it with `CowArc::Static`. I don't think this change is
important, but I can't unsee it. :)
## Testing
- `cargo check` compiles fine
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/14961
## Solution
- Check that the archetypes don't contain any other observed components
before unsetting their flags
## Testing
- I added a regression test: `observer_despawn_archetype_flags`
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/14975
## Solution
- Replace usages of `bevy_utils::CowArc` with `atomicow::CowArc`
- Remove `bevy_utils::CowArc`
## Testing
- `bevy_asset` test suite continues to pass.
---
## Migration Guide
`bevy_utils::CowArc` has moved to a new crate called
[atomicow](https://crates.io/crates/atomicow).
# Objective
- Add gizmos integration for the new `Curve` things in the math lib
## Solution
- Add the following methods
- `curve_2d(curve, sample_times, color)`
- `curve_3d(curve, sample_times, color)`
- `curve_gradient_2d(curve, sample_times_with_colors)`
- `curve_gradient_3d(curve, sample_times_with_colors)`
## Testing
- I added examples of the 2D and 3D variants of the gradient curve
gizmos to the gizmos examples.
## Showcase
### 2D
![image](https://github.com/user-attachments/assets/01a75706-a7b4-4fc5-98d5-18018185c877)
```rust
let domain = Interval::EVERYWHERE;
let curve = function_curve(domain, |t| Vec2::new(t, (t / 25.0).sin() * 100.0));
let resolution = ((time.elapsed_seconds().sin() + 1.0) * 50.0) as usize;
let times_and_colors = (0..=resolution)
    .map(|n| n as f32 / resolution as f32)
    .map(|t| (t - 0.5) * 600.0)
    .map(|t| (t, TEAL.mix(&HOT_PINK, (t + 300.0) / 600.0)));
gizmos.curve_gradient_2d(curve, times_and_colors);
```
### 3D
![image](https://github.com/user-attachments/assets/3fd23983-1ec9-46cd-baed-5b5e2dc935d0)
```rust
let domain = Interval::EVERYWHERE;
let curve = function_curve(domain, |t| {
    (Vec2::from((t * 10.0).sin_cos())).extend(t - 6.0)
});
let resolution = ((time.elapsed_seconds().sin() + 1.0) * 100.0) as usize;
let times_and_colors = (0..=resolution)
    .map(|n| n as f32 / resolution as f32)
    .map(|t| t * 5.0)
    .map(|t| (t, TEAL.mix(&HOT_PINK, t / 5.0)));
gizmos.curve_gradient_3d(curve, times_and_colors);
```
# Objective
With the current implementation of `Plane3d` gizmos, it's really hard to
get a good feeling for big planes. Usually I tend to add more axes as a
user, but that doesn't scale well and is pretty wasteful. It's hard to
recognize the plane in the distance here, especially if there were other
rendered objects in the scene.
![image](https://github.com/user-attachments/assets/b65b7015-c08c-46d7-aa27-c7c0d49b2021)
## Solution
- Since we got grid gizmos in the mean time, I went ahead and just
reused them here.
## Testing
I added an instance of the new `Plane3d` gizmo to the `3d_gizmos.rs`
example. If you want to look at it you need to look around a bit; I
didn't position it in the center since that was too crowded already.
---
## Showcase
![image](https://github.com/user-attachments/assets/e4982afe-7296-416c-9801-7dd85cd975c1)
## Migration Guide
The optional builder methods on
```rust
gizmos.primitive_3d(&Plane3d { }, ...);
```
changed from
- `segment_length`
- `segment_count`
- `axis_count`
to
- `cell_count`
- `spacing`
# Objective
This moves the default `LogPlugin` filter to be a public constant so
that it can be updated and referenced from outside code without changes
across releases:
```
fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(bevy::log::LogPlugin {
            filter: format!("{},mylogs=error", bevy::log::LogPlugin::DEFAULT_FILTER),
            ..default()
        }))
        .run();
}
```
## Testing
Tested with `cargo run -p ci`
# Objective
Allow `SystemParamBuilder` implementations for custom system parameters
created using `#[derive(SystemParam)]`.
## Solution
Extend the derive macro to accept a `#[system_param(builder)]`
attribute. When present, emit a builder type with a field corresponding
to each field of the param.
## Example
```rust
#[derive(SystemParam)]
#[system_param(builder)]
struct CustomParam<'w, 's> {
    query: Query<'w, 's, ()>,
    local: Local<'s, usize>,
}

let system = (CustomParamBuilder {
    local: LocalBuilder(100),
    query: QueryParamBuilder::new(|builder| {
        builder.with::<A>();
    }),
},)
    .build_state(&mut world)
    .build_system(|param: CustomParam| *param.local + param.query.iter().count());
```
# Objective
- Fixes#14841
## Solution
- Compute the `BufferSlice` size manually and use it for comparison in
`TrackedRenderPass`
## Testing
- Gizmo example does not crash with #14721 (without system ordering),
and `slice` computes correct size there
---
## Migration Guide
- The `TrackedRenderPass::set_vertex_buffer` function has been modified to
update the vertex buffer when the same buffer with the same offset is
provided but its size has changed. Some existing code may rely on the
previous behavior, which did not update the vertex buffer in this
scenario.
---------
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
- Fixes#14860
## Solution
- Added a line of documentation to `FromWorld`'s trait definition
mentioning the `Default` blanket implementation.
- Added custom documentation to the `from_world` method for the
`Default` blanket implementation. This ensures that when inspecting the
`from_world` function within an IDE, the tooltip will explicitly state
that the `default()` method will be used for any `Default` types (see
the sketch below).
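For context, a minimal illustration of the relationship being documented, using plain `bevy_ecs` (the `Settings` type is made up for the example):
```rust
use bevy_ecs::prelude::*;
use bevy_ecs::world::FromWorld;

// `Settings` implements `Default`, so the blanket impl provides `FromWorld`
// for free and `from_world` simply returns `Settings::default()`.
#[derive(Default, Debug, PartialEq)]
struct Settings {
    volume: f32,
}

fn main() {
    let mut world = World::new();
    assert_eq!(Settings::from_world(&mut world), Settings::default());
}
```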
## Testing
- CI passes.
# Objective
Since https://github.com/bevyengine/bevy/pull/14731 is merged, a few
utility methods for 2D arcs are unblocked. In 2D, the counterparts to
`long_arc_3d_between` and `short_arc_3d_between` are missing. Since
`arc_2d` can be a bit hard to use, this PR tries to plug some holes
in the `arcs` API.
## Solution
Implement
- `long_arc_2d_between(center, from, to, color)`
- `short_arc_2d_between(center, from, to, color)`
## Testing
- There are new doc tests
- The `2d_gizmos` example has been extended a bit to include a few more
arcs which can easily be checked with respect to the grid
---
## Showcase
![image](https://github.com/user-attachments/assets/b90ad8b1-86c2-4304-a481-4f9a5246c457)
Code related to the screenshot (from outer = first line to inner = last
line)
```rust
my_gizmos.arc_2d(Isometry2d::IDENTITY, FRAC_PI_2, 80.0, ORANGE_RED);
my_gizmos.short_arc_2d_between(Vec2::ZERO, Vec2::X * 40.0, Vec2::Y * 40.0, ORANGE_RED);
my_gizmos.long_arc_2d_between(Vec2::ZERO, Vec2::X * 20.0, Vec2::Y * 20.0, ORANGE_RED);
```
# Objective
When building a system from `SystemParamBuilder`s and defining the
system as a closure, the compiler should be able to infer the parameter
types from the builder types.
## Solution
Create methods for each arity that take an argument implementing both
`SystemParamFunction` and `FnMut(SystemParamItem<P>, ...)`. The
explicit `FnMut` constraint allows the compiler to infer the
necessary higher-ranked lifetimes along with the parameter types.
I wanted to show that this was possible, but I can't tell whether it's
worth the complexity. It requires a separate method for each arity,
which pollutes the docs a bit:
![SystemState build_system
docs](https://github.com/user-attachments/assets/5069b749-7ec7-47e3-a5e4-1a4c78129f78)
## Example
```rust
let system = (LocalBuilder(0u64), ParamBuilder::local::<u64>())
    .build_state(&mut world)
    .build_system(|a, b| *a + *b + 1);
```
# Objective
- Solves the last bullet in, and closes, #14319
- Make better use of the `Isometry` types
- Prevent issues like #14655
- Probably simplify and clean up a lot of code through the use of gizmos
as well (i.e. the 3D gizmos for cylinders, circles & lines don't connect
well, probably due to wrong rotations)
## Solution
- Go through the `bevy_gizmos` crate and give all methods a slight
rework
## Testing
- For all the changed examples I ran `git switch main && cargo rr
--example <X> && git switch <BRANCH> && cargo rr --example <X>` and
compared the visual results
- Checked that all doc tests still compile
- Checked the docs in general and updated them
---
## Migration Guide
The gizmo methods' function signatures change as follows:
- 2D
- if it took `position` & `rotation_angle` before ->
`Isometry2d::new(position, Rot2::radians(rotation_angle))`
- if it just took `position` before ->
`Isometry2d::from_translation(position)`
- 3D
- if it took `position` & `rotation` before ->
`Isometry3d::new(position, rotation)`
- if it just took `position` before ->
`Isometry3d::from_translation(position)`
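For example, applying the 2D rule above to a call that previously took `position` & `rotation_angle` (using `rect_2d` as an illustrative method; the same pattern applies to the other affected methods):
```rust
use bevy::prelude::*;

fn draw(mut gizmos: Gizmos) {
    let (position, rotation_angle, size) = (Vec2::new(10.0, 20.0), 0.5, Vec2::splat(40.0));

    // Before: gizmos.rect_2d(position, rotation_angle, size, Color::WHITE);
    // After:
    gizmos.rect_2d(
        Isometry2d::new(position, Rot2::radians(rotation_angle)),
        size,
        Color::WHITE,
    );
}
```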
# Objective
Sending events tends to be low-frequency, so ergonomics can be
prioritized over efficiency.
Add `Commands::send_event` to send any type of event without needing a
writer in hand.
I don't know how we feel about these kinds of ergonomic things; I add
this to all my projects and find it useful. Adding `mut
this_particular_event_writer: EventWriter<ThisParticularEvent>` every
time I want to send something is unnecessarily cumbersome.
It also simplifies the "send and receive in the same system" pattern
significantly.
Basic example before:
```rs
fn my_func(
    q: Query<(Entity, &State)>,
    mut damage_event_writer: EventWriter<DamageEvent>,
    mut heal_event_writer: EventWriter<HealEvent>,
) {
    for (entity, state) in q.iter() {
        if let Some(damage) = state.get_damage() {
            damage_event_writer.send(DamageEvent { entity, damage });
        }
        if let Some(heal) = state.get_heal() {
            heal_event_writer.send(HealEvent { entity, heal });
        }
    }
}
```
Basic example after:
```rs
use bevy::ecs::event::SendEventEx;

fn my_func(
    mut commands: Commands,
    q: Query<(Entity, &State)>,
) {
    for (entity, state) in q.iter() {
        if let Some(damage) = state.get_damage() {
            commands.send_event(DamageEvent { entity, damage });
        }
        if let Some(heal) = state.get_heal() {
            commands.send_event(HealEvent { entity, heal });
        }
    }
}
```
Send/receive in the same system before:
```rs
fn send_and_receive_param_set(
    mut param_set: ParamSet<(EventReader<DebugEvent>, EventWriter<DebugEvent>)>,
) {
    // We must collect the events to resend, because we can't access the writer while we're iterating over the reader.
    let mut events_to_resend = Vec::new();
    // This is p0, as the first parameter in the `ParamSet` is the reader.
    for event in param_set.p0().read() {
        if event.resend_from_param_set {
            events_to_resend.push(event.clone());
        }
    }
    // This is p1, as the second parameter in the `ParamSet` is the writer.
    for mut event in events_to_resend {
        event.times_sent += 1;
        param_set.p1().send(event);
    }
}
}
```
After:
```rs
use bevy::ecs::event::SendEventEx;
fn send_via_commands_and_receive(
    mut reader: EventReader<DebugEvent>,
    mut commands: Commands,
) {
    for event in reader.read() {
        if event.resend_via_commands {
            commands.send_event(DebugEvent {
                times_sent: event.times_sent + 1,
                ..event.clone()
            });
        }
    }
}
```
---------
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
## Introduction
This is the first step in my [Next Generation Scene / UI
Proposal](https://github.com/bevyengine/bevy/discussions/14437).
Fixes https://github.com/bevyengine/bevy/issues/7272 and #14800.
Bevy's current Bundles as the "unit of construction" hamstring the UI
user experience and have been a pain point in the Bevy ecosystem
generally when composing scenes:
* They are an additional _object defining_ concept, which must be
learned separately from components. Notably, Bundles _are not present at
runtime_, which is confusing and limiting.
* They can completely erase the _defining component_ during Bundle init.
For example, `ButtonBundle { style: Style::default(), ..default() }`
_makes no mention_ of the `Button` component symbol, which is what makes
the Entity a "button"!
* They are not capable of representing "dependency inheritance" without
completely non-viable / ergonomically crushing nested bundles. This
limitation is especially painful in UI scenarios, but it applies to
everything across the board.
* They introduce a bunch of additional nesting when defining scenes,
making them ugly to look at
* They introduce component name "stutter": `SomeBundle { component_name:
ComponentName::new() }`
* They require copious sprinklings of `..default()` when spawning them
in Rust code, due to the additional layer of nesting
**Required Components** solve this by allowing you to define which
components a given component needs, and how to construct those
components when they aren't explicitly provided.
This is what a `ButtonBundle` looks like with Bundles (the current
approach):
```rust
#[derive(Component, Default)]
struct Button;
#[derive(Bundle, Default)]
struct ButtonBundle {
    pub button: Button,
    pub node: Node,
    pub style: Style,
    pub interaction: Interaction,
    pub focus_policy: FocusPolicy,
    pub border_color: BorderColor,
    pub border_radius: BorderRadius,
    pub image: UiImage,
    pub transform: Transform,
    pub global_transform: GlobalTransform,
    pub visibility: Visibility,
    pub inherited_visibility: InheritedVisibility,
    pub view_visibility: ViewVisibility,
    pub z_index: ZIndex,
}

commands.spawn(ButtonBundle {
    style: Style {
        width: Val::Px(100.0),
        height: Val::Px(50.0),
        ..default()
    },
    focus_policy: FocusPolicy::Block,
    ..default()
})
```
And this is what it looks like with Required Components:
```rust
#[derive(Component)]
#[require(Node, UiImage)]
struct Button;
commands.spawn((
    Button,
    Style {
        width: Val::Px(100.0),
        height: Val::Px(50.0),
        ..default()
    },
    FocusPolicy::Block,
));
```
With Required Components, we mention only the most relevant components.
Every component required by `Node` (ex: `Style`, `FocusPolicy`, etc) is
automatically brought in!
### Efficiency
1. At insertion/spawn time, Required Components (including recursive
required components) are initialized and inserted _as if they were
manually inserted alongside the given components_. This means that this
is maximally efficient: there are no archetype or table moves.
2. Required components are only initialized and inserted if they were
not manually provided by the developer. For the code example in the
previous section, because `Style` and `FocusPolicy` are inserted
manually, they _will not_ be initialized and inserted as part of the
required components system. Efficient!
3. The "missing required components _and_ constructors needed for an
insertion" are cached in the "archetype graph edge", meaning they aren't
computed per-insertion. When a component is inserted, the "missing
required components" list is iterated (and that graph edge (AddBundle)
is actually already looked up for us during insertion, because we need
that for "normal" insert logic too).
### IDE Integration
The `#[require(SomeComponent)]` macro has been written in such a way
that Rust Analyzer can provide type-inspection-on-hover and `F12` /
go-to-definition for required components.
### Custom Constructors
The `require` syntax expects a `Default` constructor by default, but it
can be overridden with a custom constructor:
```rust
#[derive(Component)]
#[require(
    Node,
    Style(button_style),
    UiImage
)]
struct Button;

fn button_style() -> Style {
    Style {
        width: Val::Px(100.0),
        ..default()
    }
}
```
### Multiple Inheritance
You may have noticed by now that this behaves a bit like "multiple
inheritance". One of the problems that this presents is that it is
possible to have duplicate requires for a given type at different levels
of the inheritance tree:
```rust
#[derive(Component)]
struct X(usize);

#[derive(Component)]
#[require(X(x1))]
struct Y;

fn x1() -> X {
    X(1)
}

#[derive(Component)]
#[require(
    Y,
    X(x2),
)]
struct Z;

fn x2() -> X {
    X(2)
}
// What version of X is inserted for Z?
commands.spawn(Z);
```
This is allowed (and encouraged), although this doesn't appear to occur
much in practice. First: only one version of `X` is initialized and
inserted for `Z`. In the case above, I think we can all probably agree
that it makes the most sense to use the `x2` constructor for `X`,
because `Y`'s `x1` constructor exists "beneath" `Z` in the inheritance
hierarchy; `Z`'s constructor is "more specific".
The algorithm is simple and predictable:
1. Use all of the constructors (including default constructors) directly
defined in the spawned component's require list
2. In the order the requires are defined in `#[require()]`, recursively
visit the require list of each of the components in the list (this is a
depth-first search). When a constructor is found, it will only be used
if one has not already been found.
From a user perspective, just think about this as the following:
1. Specifying a required component constructor for `Foo` directly on a
spawned component `Bar` will result in that constructor being used (and
overriding existing constructors lower in the inheritance tree). This is
the classic "inheritance override" behavior people expect.
2. For cases where "multiple inheritance" results in constructor
clashes, Components should be listed in "importance order". List a
component earlier in the requirement list to initialize its inheritance
tree earlier.
Required Components _does_ generally result in a model where component
values are decoupled from each other at construction time. Notably, some
existing Bundle patterns use bundle constructors to initialize multiple
components with shared state. I think (in general) moving away from this
is necessary:
1. It allows Required Components (and the Scene system more generally)
to operate according to simple rules
2. The "do arbitrary init value sharing in Bundle constructors" approach
_already_ causes data consistency problems, and those problems would be
exacerbated in the context of a Scene/UI system. For cases where shared
state is truly necessary, I think we are better served by observers /
hooks.
3. If a situation _truly_ needs shared state constructors (which should
be rare / generally discouraged), Bundles are still there if they are
needed.
## Next Steps
* **Require Construct-ed Components**: I have already implemented this
(as defined in the [Next Generation Scene / UI
Proposal](https://github.com/bevyengine/bevy/discussions/14437)). However,
I've removed `Construct` support from this PR, as that has not landed
yet. Adding this back in requires relatively minimal changes to the
current impl, and can be done as part of a future Construct PR.
* **Port Built-in Bundles to Required Components**: This isn't something
we should do right away. It will require rethinking our public
interfaces, which IMO should be done holistically after the rest of Next
Generation Scene / UI lands. I think we should merge this PR first and
let people experiment _inside their own code with their own Components_
while we wait for the rest of the new scene system to land.
* **_Consider_ Automatic Required Component Removal**: We should
evaluate _if_ automatic Required Component removal should be done. Ex:
if all components that explicitly require a component are removed,
automatically remove that component. This issue has been explicitly
deferred in this PR, as I consider the insertion behavior to be
desirable on its own (and viable on its own). I am also doubtful that we
can find a design that has behavior we actually want. Aka: can we
_really_ distinguish between a component that is "only there because it
was automatically inserted" and "a component that was necessary / should
be kept". See my [discussion response
here](https://github.com/bevyengine/bevy/discussions/14437#discussioncomment-10268668)
for more details.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
Co-authored-by: Pascal Hertleif <killercup@gmail.com>
# Objective
The Android example on Adreno 642L currently crashes on startup.
Previous PRs #14176 and #13323 have addressed this specific crash
occurring on some Adreno GPUs. That fix works as it should, but isn't
applied when the GPU name contains a suffix, as in the case of
`642L`.
## Solution
- Amend the logic to filter out any parts of the GPU name not
containing digits, thus enabling the fix on `642L` (see the sketch
below).
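A minimal sketch of the kind of digit filtering described (a hypothetical helper, not the exact code used in the driver workaround):
```rust
// Extract the numeric model from an Adreno GPU name such as
// "Adreno (TM) 642L" so the workaround also matches suffixed variants.
fn adreno_model_number(gpu_name: &str) -> Option<u32> {
    gpu_name.split_whitespace().find_map(|word| {
        // Keep only the digit characters of each word ("642L" -> "642").
        let digits: String = word.chars().filter(char::is_ascii_digit).collect();
        digits.parse().ok()
    })
}

fn main() {
    assert_eq!(adreno_model_number("Adreno (TM) 642L"), Some(642));
    assert_eq!(adreno_model_number("Adreno (TM) 730"), Some(730));
}
```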
## Testing
- Ran the Android example on a Nothing Phone 1. Before this change it
crashed; after, it works as intended.
---------
Co-authored-by: Sam Pettersson <sam.pettersson@geoguessr.com>
# Objective
- `Curve<T>` was meant to be object safe, but one of the latest commits
made it not object safe.
- When trying to use `Curve<T>` as `&dyn Curve<T>` this compile error is
raised:
```
error[E0038]: the trait `curve::Curve` cannot be made into an object
--> crates/bevy_math/src/curve/mod.rs:1025:20
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> crates/bevy_math/src/curve/mod.rs:60:8
|
23 | pub trait Curve<T> {
| ----- this trait cannot be made into an object...
...
60 | fn sample_iter(&self, iter: impl IntoIterator<Item = f32>) -> impl Iterator<Item = Option<T>> {
| ^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...because method `sample_iter` references an `impl Trait` type in its return type
| |
| ...because method `sample_iter` has generic type parameters
...
```
## Solution
- Make `Curve<T>` object safe again by adding a `Self: Sized` bound to the
newly added methods (see the sketch below).
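A minimal sketch of the pattern (not Bevy's actual trait, just an illustration of how a `Self: Sized` bound keeps a trait object safe):
```rust
trait Curve<T> {
    // Object-safe: callable through `&dyn Curve<T>`.
    fn sample(&self, t: f32) -> Option<T>;

    // Uses generics and `impl Trait`, so it can't go in the vtable; the
    // `Self: Sized` bound excludes it from the object-safety check.
    fn sample_iter(&self, iter: impl IntoIterator<Item = f32>) -> impl Iterator<Item = Option<T>>
    where
        Self: Sized,
    {
        iter.into_iter().map(|t| self.sample(t))
    }
}

// Compiles only if `Curve<T>` can be made into an object.
fn assert_object_safe<T>(_: &dyn Curve<T>) {}
```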
## Testing
- Added a new test that ensures the `Curve<T>` trait can be made into an
object.
# Objective
- Fixes #14348
- Fixes #14528
- Less complex (but also likely less performant) alternative to #14611
## Solution
- Add an `is_dense` flag to `QueryIter` indicating whether it is
dense, that is, whether it can perform dense iteration;
- Check this flag any time iteration over a query is performed.
---
It would be nice if someone could try benching this change to see if it
actually matters.
~Note that this is not 100% ready for merging, since there are a bunch of
safety comments on the use of the various `IS_DENSE` checks that
still need to be updated.~ This is ready modulo benchmarks.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
When handles for loading assets are dropped, we currently wait until
the load is completed before dropping the handle. Drop asset-load tasks
immediately instead.
## Solution
- Track tasks for loading assets and drop them immediately when all
handles are dropped.
~~- use `join_all` in `gltf_loader.rs` to allow it to yield and be
dropped.~~
This doesn't cover all the load APIs; for those it doesn't cover, the
task will still be detached and will still complete before the result is
discarded.
Separated out from #13170.
# Objective
Allow dynamic systems to take lists of system parameters whose length is
not known at compile time.
This can be used for building a system that runs a script defined at
runtime, where the script needs a variable number of query parameters.
It can also be used for building a system that collects a list of
plugins at runtime, and provides a parameter to each one.
This is most useful today with `Vec<Query<FilteredEntityMut>>`. It will
be even more useful with `Vec<DynSystemParam>` if #14817 is merged,
since the parameters in the list can then be of different types.
## Solution
Implement `SystemParam` and `SystemParamBuilder` for `Vec` and
`ParamSet<Vec>`.
## Example
```rust
let system = (vec![
    QueryParamBuilder::new_box(|builder| {
        builder.with::<B>().without::<C>();
    }),
    QueryParamBuilder::new_box(|builder| {
        builder.with::<C>().without::<B>();
    }),
],)
    .build_state(&mut world)
    .build_system(|params: Vec<Query<&mut A>>| {
        let mut count: usize = 0;
        params
            .into_iter()
            .for_each(|mut query| count += query.iter_mut().count());
        count
    });
```
# Objective
- Fixes #14902
- > #14686 introduced a name clash when using `use bevy::prelude::*;`
## Solution
- Renamed `bevy::picking::events::Drop` to
`bevy::picking::events::DragDrop`
## Testing
- Not being used in tests or examples, so I just compiled.
---
## Migration Guide
- Rename `Drop` to `DragDrop`
- `bevy::picking::events::Drop` is now `bevy::picking::events::DragDrop`
# Objective
This is a value that is, and will be, used as a domain of curves pretty
often. By adding it as a dedicated constant we can get rid of some
`unwrap`s and function calls.
## Solution
added `Interval::UNIT`
## Testing
I replaced all occurrences of `interval(0.0, 1.0).unwrap()` with the new
`Interval::UNIT` constant in tests and doc tests.
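A small before/after sketch (the import path is assumed here and may differ):
```rust
use bevy_math::curve::interval::{interval, Interval};

fn main() {
    // Before: building the unit interval required a fallible call.
    let old = interval(0.0, 1.0).unwrap();
    // After: the dedicated constant avoids the `unwrap` entirely.
    assert_eq!(old, Interval::UNIT);
}
```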
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.23.6 to
1.24.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/releases">crate-ci/typos's
releases</a>.</em></p>
<blockquote>
<h2>v1.24.1</h2>
<h2>[1.24.1] - 2024-08-23</h2>
<h3>Fixes</h3>
<ul>
<li>Remove unverified varcon (locale data) entries</li>
</ul>
<h2>v1.24.0</h2>
<h2>[1.24.0] - 2024-08-23</h2>
<h3>Features</h3>
<ul>
<li>Update varcon (locale data) to version 2020.12.07</li>
</ul>
<h2>v1.23.7</h2>
<h2>[1.23.7] - 2024-08-22</h2>
<h3>Fixes</h3>
<ul>
<li><em>(config)</em> Respect <code>--locale</code> /
<code>default.locale</code> again after it was broken in 1.16.24</li>
</ul>
</blockquote>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
# Objective
- I needed to run a system whenever a specific condition became true
after previously being false.
- Other users might also need to run a system when a condition changes,
regardless of whether it became true or false.
## Solution
- This adds two run conditions to `common_conditions`:
- `condition_changed`, which returns true whenever the inner condition changes
- `condition_became_true`, which returns true whenever the inner condition
becomes true after previously being false (see the sketch below for the
intended semantics)
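To make the intended semantics concrete, here's a small standalone illustration of the "became true" behavior (not Bevy's actual implementation, which wraps this state in a run condition):
```rust
// Tracks the previous value and fires only on a false -> true transition.
struct BecameTrue {
    previous: bool,
}

impl BecameTrue {
    fn update(&mut self, current: bool) -> bool {
        let became_true = current && !self.previous;
        self.previous = current;
        became_true
    }
}

fn main() {
    let mut tracker = BecameTrue { previous: false };
    assert!(!tracker.update(false));
    assert!(tracker.update(true)); // false -> true: fires once
    assert!(!tracker.update(true)); // stays true: does not fire again
    assert!(!tracker.update(false)); // back to false: does not fire
    assert!(tracker.update(true)); // fires again on the next transition
}
```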
## Testing
- I added a doctest for each function
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
# Objective
- Allow not only inserting `Data` into `EmbeddedAssetRegistry` (and `Dir`
in turn), but also removing it again.
- This way, when the registry is used to embed asset data from *somewhere*
without loading it through the conventional `AssetServer` route (which I
observed takes ownership of the `Data`), the `Data` does not need to stay
in the `EmbeddedAssetRegistry`'s memory for the lifetime of the
application.
## Solution
- Added the `remove_asset` functions to `EmbeddedAssetRegistry` and
`Dir`
## Testing
- Added a module unit test
- Does this require changes if built with the `embedded_watcher` feature?
# Objective
Fixes #14883
## Solution
Pretty simple update to `EntityCommands` methods to consume `self` and
return it rather than taking `&mut self`. The things probably worth
noting:
* I added `#[allow(clippy::should_implement_trait)]` to the `add` method
because it causes a linting conflict with `std::ops::Add`.
* `despawn` and `log_components` now return `Self`. I'm not sure if
that's exactly the desired behavior so I'm happy to adjust if that seems
wrong.
## Testing
Tested with `cargo run -p ci`. I think that should be sufficient to call
things good.
## Migration Guide
The most likely migration needed is changing code from this:
```
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
    entity.insert(DepthPrepass);
}
if normal_prepass {
    entity.insert(NormalPrepass);
}
if motion_vector_prepass {
    entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
    entity.insert(DeferredPrepass);
}
```
to this:
```
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
    entity = entity.insert(DepthPrepass);
}
if normal_prepass {
    entity = entity.insert(NormalPrepass);
}
if motion_vector_prepass {
    entity = entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
    entity.insert(DeferredPrepass);
}
```
as can be seen in several of the example code updates here. There will
probably also be instances where mutable `EntityCommands` vars no longer
need to be mutable.
# Objective
Based on the discussion in #14864, I wanted to experiment with the core
`GenericTypeCell` type, whose `get_or_insert` method accounted for 2% of
the final binary size of the `3d_scene` example. The reason for this
large percentage is likely because the type is fundamental to the rest
of Bevy while having 3 generic parameters (the type stored `T`, the type
to retrieve `G`, and the function used to insert a new value `F`).
- Acts on #14864
## Solution
- Split `get_or_insert` into smaller functions with minimised
parameterisation. These new functions are private so as to preserve the
public-facing API, but could be exposed if desired (see the sketch below
for the general pattern).
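For illustration, here is the general shape of that refactor on a toy type (not the actual `GenericTypeCell` code): the generic entry point stays thin and delegates to a helper that is only parameterized by what it truly needs.
```rust
use std::any::{Any, TypeId};

struct TypeCell {
    entries: Vec<(TypeId, Box<dyn Any>)>,
}

impl TypeCell {
    // Instantiated once per stored type, but kept deliberately small.
    fn get_or_insert<G: Any>(&mut self, insert: impl FnOnce() -> G) -> &G {
        let index = match self.position(TypeId::of::<G>()) {
            Some(index) => index,
            None => {
                self.entries.push((TypeId::of::<G>(), Box::new(insert())));
                self.entries.len() - 1
            }
        };
        self.entries[index].1.downcast_ref::<G>().unwrap()
    }

    // Non-generic helper: monomorphized only once, no matter how many
    // different `G` types call `get_or_insert`.
    fn position(&self, key: TypeId) -> Option<usize> {
        self.entries.iter().position(|(id, _)| *id == key)
    }
}

fn main() {
    let mut cell = TypeCell { entries: Vec::new() };
    assert_eq!(*cell.get_or_insert(|| 42u32), 42);
    assert_eq!(*cell.get_or_insert(|| 7u32), 42); // already present, not replaced
}
```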
## Testing
- Ran CI locally.
- Used `cargo bloat --release --example 3d_scene -n 100000
--message-format json > out.json` and @cart's [bloat
analyzer](https://gist.github.com/cart/722756ba3da0e983d207633e0a48a8ab)
to measure a 428KiB reduction in binary size when compiling on Windows
10.
- ~I have _not_ benchmarked to determine if this improves/hurts
performance.~ See
[below](https://github.com/bevyengine/bevy/pull/14865#issuecomment-2306083606).
## Notes
In my opinion this seems like a good test-case for the concept of
debloating generics within the Bevy codebase. I believe the performance
impact here is negligible in either direction (at runtime and compile
time), but the binary reduction is measurable and quite significant for
a relatively minor change in code.
---------
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
# Objective
Citing @mweatherley
> As mentioned before, a multi-sampling function in the API which takes
an iterator is probably something we want (e.g. `sample_iter(iter: impl
IntoIterator<Item = f32>) -> impl IntoIterator<Item = T> { //... }`, but
there are some design choices to be made on the details (e.g. does this
filter out points that aren't in the domain? does it do sorting? etc.)
## Solution
I think the most flexible solution for end users is to expose all the
`sample_...` functions with an `iter` equivalent, so we'll have
- `sample_iter`
- `sample_iter_unchecked`
- `sample_iter_clamped`
Answering some questions from the original idea:
> does this filter out points that aren't in the domain?
With these methods the user has the choice to just sample, or, if they
want to filter out invalid samples, to use `sample_iter` and then apply
`filter_map` to the returned iterator themselves (see the sketch below).
> does it do sorting?
I think it's the same thing. If the user wants it, they need to do it
themselves by either collecting and sorting a `Vec` or using
`itertools`. I think there is a legitimate use case for "please sample me
this collection of points that are unordered", and we would destroy it if
we took away too much agency from users by sorting for them.
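A hedged sketch of that usage, assuming the `sample_iter` signature quoted in the object-safety fix earlier in these notes and the `function_curve` / `Interval::UNIT` helpers mentioned elsewhere (import paths may differ):
```rust
use bevy_math::curve::{function_curve, Curve, Interval};

fn main() {
    // A curve defined on the unit interval [0, 1].
    let curve = function_curve(Interval::UNIT, |t| t * 2.0);

    // `sample_iter` yields `None` for times outside the domain; callers who
    // want filtering apply `filter_map` themselves, as described above.
    let samples: Vec<f32> = curve
        .sample_iter([-0.5, 0.25, 0.75, 2.0])
        .filter_map(|sample| sample)
        .collect();

    assert_eq!(samples, vec![0.5, 1.5]);
}
```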
## Testing
- Added a test which covers all three methods
# Objective
Add `bevy_picking` sprite backend as part of the `bevy_mod_picking`
upstreamening (#12365).
## Solution
More or less a copy/paste from `bevy_mod_picking`, with the changes
[here](https://github.com/aevyrie/bevy_mod_picking/pull/354). I'm
putting that link here since those changes haven't yet made it through
review, so should probably be reviewed on their own.
## Testing
I couldn't find any sprite-backend-specific tests in `bevy_mod_picking`
and unfortunately I'm not familiar enough with Bevy's testing patterns
to write tests for code that relies on windowing and input. I'm willing
to break the pointer hit system into testable blocks and add some more
modular tests if that's deemed important enough to block, otherwise I
can open an issue for adding tests as follow-up.
## Follow-up work
- More docs/tests
- Ignore pick events on transparent sprite pixels with potential opt-out
---------
Co-authored-by: Aevyrie <aevyrie@gmail.com>
# Objective
`arc_2d` wasn't actually doing what the docs were saying. The arc wasn't
offset by what was previously `direction_angle` but by `direction_angle
- arc_angle / 2.0`. This meant that the arc's center was lying on the
`Vec2::Y` axis and was then offset. This was probably done to fit the
behavior of the `Arc2D` primitive. I would argue that this isn't
desirable for the plain `arc_2d` gizmo method since
- a) the docs get longer to explain the weird centering
- b) the mental model the user has to know gets bigger with more
implicit assumptions
given the code
```rust
my_gizmos.arc_2d(Vec2::ZERO, 0.0, FRAC_PI_2, 75.0, ORANGE_RED);
```
we get
![image](https://github.com/user-attachments/assets/84894c6d-42e4-451b-b3e2-811266486ede)
where after the fix with
```rust
my_gizmos.arc_2d(Isometry2d::IDENTITY, FRAC_PI_2, 75.0, ORANGE_RED);
```
we get
![image](https://github.com/user-attachments/assets/16b0aba0-f7b5-4600-ac49-a22be0315c40)
To get the same result with the previous implementation you would have
to randomly add `arc_angle / 2.0` to the `direction_angle`.
```rust
my_gizmos.arc_2d(Vec2::ZERO, FRAC_PI_4, FRAC_PI_2, 75.0, ORANGE_RED);
```
This makes constructing helper functions similar to those that already
exist in 3D, like
- `long_arc_2d_between`
- `short_arc_2d_between`
much harder.
## Solution
- Make the arc really start at `Vec2::Y * radius` in the counter-clockwise
direction, offset by an angle, as the docs state
- Use `Isometry2d` instead of `position : Vec2` and `direction_angle :
f32` to reduce the chance of messing up rotation/translation
- Adjust the docs for the changes above
- Adjust the gizmo rendering of some primitives
## Testing
- check `2d_gizmos.rs` and `render_primitives.rs` examples
## Migration Guide
- Users have to adjust their usages of `arc_2d`:
- before:
```rust
arc_2d(
    pos,
    angle,
    arc_angle,
    radius,
    color
)
```
- after:
```rust
arc_2d(
    // this `+ arc_angle * 0.5` quirk is only needed if you want to preserve
    // the previous behavior with the new API.
    // feel free to try to fix this though, since your current calls to this
    // function most likely involve some computations to counteract that
    // quirk in the first place
    Isometry2d::new(pos, Rot2::radians(angle + arc_angle * 0.5)),
    arc_angle,
    radius,
    color
)
```
# Objective
- Faster meshlet rasterization path for small triangles
- Avoid having to allocate and write out a triangle buffer
- Refactor gpu_scene.rs
## Solution
- Replace the 32bit visbuffer texture with a 64bit visbuffer buffer,
where the left 32 bits encode depth, and the right 32 bits encode the
existing cluster + triangle IDs. Can't use 64bit textures, wgpu/naga
doesn't support atomic ops on textures yet.
- Instead of writing out a buffer of packed cluster + triangle IDs (per
triangle) to raster, the culling pass now writes out a buffer of just
cluster IDs (per cluster, so less memory allocated, cheaper to write
out).
- Clusters for software raster are allocated from the left side
- Clusters for hardware raster are allocated in the same buffer, from
the right side
- The buffer size is fixed at MeshletPlugin build time, and should be
set to a reasonable value for your scene (no warning on overflow, and no
good way to determine what value you need outside of renderdoc - I plan
to fix this in a future PR adding a meshlet stats overlay)
- Currently I don't have a heuristic for software vs hardware raster
selection for each cluster. The existing code is just a placeholder. I
need to profile on a release scene and come up with a heuristic,
probably in a future PR.
- The culling shader is getting pretty hard to follow at this point, but
I don't want to spend time improving it as the entire shader/pass is
getting rewritten/replaced in the near future.
- Software raster is a compute workgroup per-cluster. Each workgroup
loads and transforms the <=64 vertices of the cluster, and then
rasterizes the <=64 triangles of the cluster.
- Two variants are implemented: Scanline for clusters with any larger
triangles (still smaller than hardware is good at), and brute-force for
very very tiny triangles
- Once the shader determines that a pixel should be filled in, it does
an atomicMax() on the visbuffer to store the results, copying how Nanite
works
- On devices with a low max workgroups per dispatch limit, an extra
compute pass is inserted before software raster to convert from a 1d to
2d dispatch (I don't think 3d would ever be necessary).
- I haven't implemented the top-left rule or subpixel precision yet, I'm
leaving that for a future PR since I get usable results without it for
now
- Resources used:
https://kristoffer-dyrkorn.github.io/triangle-rasterizer and chapters
6-8 of
https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index
- Hardware raster now spawns 64*3 vertex invocations per meshlet,
instead of the actual meshlet vertex count. Extra invocations just
early-exit.
- While this is slower than the existing system, hardware draws should
be rare now that software raster is usable, and it saves a ton of memory
using the unified cluster ID buffer. This would be fixed if wgpu had
support for mesh shaders.
- Instead of writing to a color+depth attachment, the hardware raster
pass also does the same atomic visbuffer writes that software raster
uses.
- We have to bind a dummy render target anyways, as wgpu doesn't
currently support render passes without any attachments
- Material IDs are no longer written out during the main rasterization
passes.
- If we had async compute queues, we could overlap the software and
hardware raster passes.
- New material and depth resolve passes run at the end of the visbuffer
node, and write out view depth and material ID depth textures
### Misc changes
- Fixed cluster culling importing, but never actually using, the previous
view uniforms when doing occlusion culling
- Fixed incorrectly adding the LOD error twice when building the meshlet
mesh
- Split up the gpu_scene module into meshlet_mesh_manager, instance_manager,
and resource_manager
- resource_manager is still too complex and inefficient (extract and
prepare are way too expensive). I plan on improving this in a future PR,
but for now ResourceManager is mostly a 1:1 port of the leftover
MeshletGpuScene bits.
- Material draw passes have been renamed to the more accurate material
shade pass, as well as some other misc renaming (in the future, these
will be compute shaders even, and not actual draw calls)
---
## Migration Guide
- TBD (ask me at the end of the release for meshlet changes as a whole)
---------
Co-authored-by: vero <email@atlasdostal.com>
# Objective
When using instancing, 2 `VertexBufferLayout`s are needed, one for
per-vertex and one for per-instance data. Shader locations of all
attributes must not overlap, so one of the layouts needs to start its
locations at an offset. However,
`VertexBufferLayout::from_vertex_formats` will always start locations at
0, requiring manual adjustment, which is currently pretty verbose.
## Solution
Add `VertexBufferLayout::offset_locations`, which adds an offset to all
attribute locations.
Code using this method looks like this:
```rust
VertexState {
    shader: BACKBUFFER_SHADER_HANDLE.typed(),
    shader_defs: Vec::new(),
    entry_point: "vertex".into(),
    buffers: vec![
        VertexBufferLayout::from_vertex_formats(
            VertexStepMode::Vertex,
            [VertexFormat::Float32x2],
        ),
        VertexBufferLayout::from_vertex_formats(
            VertexStepMode::Instance,
            [VertexFormat::Float32x2, VertexFormat::Float32x3],
        )
        .offset_locations(1),
    ],
}
```
Alternative solutions include:
- Pass the starting location to `from_vertex_formats` – this is a bit
simpler than my solution here, but most calls don't need an offset, so
they'd always pass 0 there.
- Do nothing and make the user hand-write this.
---
## Changelog
- Add `VertexBufferLayout::offset_locations` to simplify buffer layout
construction when using instancing.
---------
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Adding more features to `AsBindGroup` proc macro means making the trait
arguments uglier. Downstream implementors of the trait without the proc
macro might want to do different things than our default arguments.
## Solution
Make `AsBindGroup` take an associated `Param` type.
## Migration Guide
`AsBindGroup` now allows the user to specify a `SystemParam` to be used
for creating bind groups.