# Objective
This moves the default `LogPlugin` filter to be a public constant so
that it can be updated and referenced from outside code without changes
across releases:
```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(bevy::log::LogPlugin {
            filter: format!("{},mylogs=error", bevy::log::LogPlugin::DEFAULT_FILTER),
            ..default()
        }))
        .run();
}
```
## Testing
Tested with `cargo run -p ci`
# Objective
Allow `SystemParamBuilder` implementations for custom system parameters
created using `#[derive(SystemParam)]`.
## Solution
Extend the derive macro to accept a `#[system_param(builder)]`
attribute. When present, emit a builder type with a field corresponding
to each field of the param.
## Example
```rust
#[derive(SystemParam)]
#[system_param(builder)]
struct CustomParam<'w, 's> {
    query: Query<'w, 's, ()>,
    local: Local<'s, usize>,
}

let system = (CustomParamBuilder {
    local: LocalBuilder(100),
    query: QueryParamBuilder::new(|builder| {
        builder.with::<A>();
    }),
},)
    .build_state(&mut world)
    .build_system(|param: CustomParam| *param.local + param.query.iter().count());
```
# Objective
- Fixes #14841
## Solution
- Compute BufferSlice size manually and use it for comparison in
`TrackedRenderPass`
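A minimal sketch of the size computation this implies, as standalone arithmetic (illustrative only, assuming `wgpu`-style `offset`/optional-`size` slice semantics rather than the actual `TrackedRenderPass` internals):

```rust
/// Effective byte length of a buffer slice: an unspecified size means
/// "from `offset` to the end of the buffer".
fn slice_size(buffer_size: u64, offset: u64, size: Option<u64>) -> u64 {
    size.unwrap_or(buffer_size - offset)
}
```

With this, two slices of the same buffer at the same offset can still differ in size, which is exactly the case the comparison now has to catch.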
## Testing
- Gizmo example does not crash with #14721 (without system ordering),
and `slice` computes correct size there
---
## Migration Guide
- The `TrackedRenderPass::set_vertex_buffer` function now rebinds the vertex
buffer when the same buffer is provided at the same offset but with a
different size. Some existing code may rely on the previous behavior, which
did not update the vertex buffer in this scenario.
---------
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
- Fixes #14860
## Solution
- Added a line of documentation to `FromWorld`'s trait definition
mentioning the `Default` blanket implementation.
- Added custom documentation to the `from_world` method for the
`Default` blanket implementation. This ensures when inspecting the
`from_world` function within an IDE, the tooltip will explicitly state
the `default()` method will be used for any `Default` types.
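To illustrate the relationship being documented, a minimal sketch (`MyResource` is a made-up type):

```rust
use bevy::ecs::world::{FromWorld, World};

#[derive(Default)]
struct MyResource(u32);

fn init(world: &mut World) -> MyResource {
    // Because `MyResource: Default`, the blanket implementation makes this
    // equivalent to calling `MyResource::default()`.
    MyResource::from_world(world)
}
```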
## Testing
- CI passes.
# Objective
Now that https://github.com/bevyengine/bevy/pull/14731 is merged, a few
utility methods for 2D arcs are unblocked. In 2D, the counterparts to
`long_arc_3d_between` and `short_arc_3d_between` are missing. Since
`arc_2d` can be a bit hard to use, this PR tries to plug some holes
in the `arcs` API.
## Solution
Implement
- `long_arc_2d_between(center, from, to, color)`
- `short_arc_2d_between(center, from, to, color)`
## Testing
- There are new doc tests
- The `2d_gizmos` example has been extended a bit to include a few more
arcs which can easily be checked with respect to the grid
---
## Showcase
![image](https://github.com/user-attachments/assets/b90ad8b1-86c2-4304-a481-4f9a5246c457)
Code related to the screenshot (from outer = first line to inner = last
line)
```rust
my_gizmos.arc_2d(Isometry2d::IDENTITY, FRAC_PI_2, 80.0, ORANGE_RED);
my_gizmos.short_arc_2d_between(Vec2::ZERO, Vec2::X * 40.0, Vec2::Y * 40.0, ORANGE_RED);
my_gizmos.long_arc_2d_between(Vec2::ZERO, Vec2::X * 20.0, Vec2::Y * 20.0, ORANGE_RED);
```
# Objective
When building a system from `SystemParamBuilder`s and defining the
system as a closure, the compiler should be able to infer the parameter
types from the builder types.
## Solution
Create methods for each arity that take an argument that implements both
`SystemParamFunction` and `FnMut(SystemParamItem<P>,...)`. The
explicit `FnMut` constraint will allow the compiler to infer the
necessary higher-ranked lifetimes along with the parameter types.
I wanted to show that this was possible, but I can't tell whether it's
worth the complexity. It requires a separate method for each arity,
which pollutes the docs a bit:
![SystemState build_system
docs](https://github.com/user-attachments/assets/5069b749-7ec7-47e3-a5e4-1a4c78129f78)
## Example
```rust
let system = (LocalBuilder(0u64), ParamBuilder::local::<u64>())
    .build_state(&mut world)
    .build_system(|a, b| *a + *b + 1);
```
# Objective
- Solves the last bullet in, and closes, #14319
- Make better use of the `Isometry` types
- Prevent issues like #14655
- Probably simplify and clean up a lot of code through the use of Gizmos
as well (e.g. the 3D gizmos for cylinders, circles & lines don't connect
well, probably due to wrong rotations)
## Solution
- Go through the `bevy_gizmos` crate and give all methods a slight overhaul
## Testing
- For all the changed examples I ran `git switch main && cargo rr
--example <X> && git switch <BRANCH> && cargo rr --example <X>` and
compared the visual results
- Checked that all doc tests still compile
- Checked the docs in general and updated them
---
## Migration Guide
The gizmo method signatures change as follows (see the sketch after this list):
- 2D
- if it took `position` & `rotation_angle` before ->
`Isometry2d::new(position, Rot2::radians(rotation_angle))`
- if it just took `position` before ->
`Isometry2d::from_translation(position)`
- 3D
- if it took `position` & `rotation` before ->
`Isometry3d::new(position, rotation)`
- if it just took `position` before ->
`Isometry3d::from_translation(position)`
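A minimal before/after sketch of that pattern, using `circle_2d` and `rect_2d` as illustrative methods (the calls shown are indicative, not an exhaustive list):

```rust
use bevy::color::palettes::css::RED;
use bevy::prelude::*;

fn draw(mut gizmos: Gizmos) {
    let position = Vec2::new(10.0, 20.0);
    let rotation_angle = 0.5;

    // Before: gizmos.circle_2d(position, 40.0, RED);
    gizmos.circle_2d(Isometry2d::from_translation(position), 40.0, RED);

    // Before: gizmos.rect_2d(position, rotation_angle, Vec2::splat(80.0), RED);
    gizmos.rect_2d(
        Isometry2d::new(position, Rot2::radians(rotation_angle)),
        Vec2::splat(80.0),
        RED,
    );
}
```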
# Objective
Sending events tends to be low-frequency, so ergonomics can be
prioritized over efficiency.
Add `Commands::send_event` to send any type of event without needing a
writer in hand.
I don't know how we feel about these kinds of ergonomic things; I add
this to all my projects and find it useful. Adding `mut
this_particular_event_writer: EventWriter<ThisParticularEvent>` every
time I want to send something is unnecessarily cumbersome.
It also simplifies the "send and receive in the same system" pattern
significantly.
basic example before:
```rs
fn my_func(
    q: Query<(Entity, &State)>,
    mut damage_event_writer: EventWriter<DamageEvent>,
    mut heal_event_writer: EventWriter<HealEvent>,
) {
    for (entity, state) in q.iter() {
        if let Some(damage) = state.get_damage() {
            damage_event_writer.send(DamageEvent { entity, damage });
        }
        if let Some(heal) = state.get_heal() {
            heal_event_writer.send(HealEvent { entity, heal });
        }
    }
}
```
basic example after:
```rs
use bevy::ecs::event::SendEventEx;

fn my_func(
    mut commands: Commands,
    q: Query<(Entity, &State)>,
) {
    for (entity, state) in q.iter() {
        if let Some(damage) = state.get_damage() {
            commands.send_event(DamageEvent { entity, damage });
        }
        if let Some(heal) = state.get_heal() {
            commands.send_event(HealEvent { entity, heal });
        }
    }
}
```
send/receive in the same system before:
```rs
fn send_and_receive_param_set(
    mut param_set: ParamSet<(EventReader<DebugEvent>, EventWriter<DebugEvent>)>,
) {
    // We must collect the events to resend, because we can't access the writer while we're iterating over the reader.
    let mut events_to_resend = Vec::new();
    // This is p0, as the first parameter in the `ParamSet` is the reader.
    for event in param_set.p0().read() {
        if event.resend_from_param_set {
            events_to_resend.push(event.clone());
        }
    }
    // This is p1, as the second parameter in the `ParamSet` is the writer.
    for mut event in events_to_resend {
        event.times_sent += 1;
        param_set.p1().send(event);
    }
}
```
after:
```rs
use bevy::ecs::event::SendEventEx;
fn send_via_commands_and_receive(
    mut reader: EventReader<DebugEvent>,
    mut commands: Commands,
) {
    for event in reader.read() {
        if event.resend_via_commands {
            commands.send_event(DebugEvent {
                times_sent: event.times_sent + 1,
                ..event.clone()
            });
        }
    }
}
```
---------
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
## Introduction
This is the first step in my [Next Generation Scene / UI
Proposal](https://github.com/bevyengine/bevy/discussions/14437).
Fixes https://github.com/bevyengine/bevy/issues/7272 and #14800.
Bevy's current Bundles as the "unit of construction" hamstring the UI
user experience and have been a pain point in the Bevy ecosystem
generally when composing scenes:
* They are an additional _object defining_ concept, which must be
learned separately from components. Notably, Bundles _are not present at
runtime_, which is confusing and limiting.
* They can completely erase the _defining component_ during Bundle init.
For example, `ButtonBundle { style: Style::default(), ..default() }`
_makes no mention_ of the `Button` component symbol, which is what makes
the Entity a "button"!
* They are not capable of representing "dependency inheritance" without
completely non-viable / ergonomically crushing nested bundles. This
limitation is especially painful in UI scenarios, but it applies to
everything across the board.
* They introduce a bunch of additional nesting when defining scenes,
making them ugly to look at
* They introduce component name "stutter": `SomeBundle { component_name:
ComponentName::new() }`
* They require copious sprinklings of `..default()` when spawning them
in Rust code, due to the additional layer of nesting
**Required Components** solve this by allowing you to define which
components a given component needs, and how to construct those
components when they aren't explicitly provided.
This is what a `ButtonBundle` looks like with Bundles (the current
approach):
```rust
#[derive(Component, Default)]
struct Button;

#[derive(Bundle, Default)]
struct ButtonBundle {
    pub button: Button,
    pub node: Node,
    pub style: Style,
    pub interaction: Interaction,
    pub focus_policy: FocusPolicy,
    pub border_color: BorderColor,
    pub border_radius: BorderRadius,
    pub image: UiImage,
    pub transform: Transform,
    pub global_transform: GlobalTransform,
    pub visibility: Visibility,
    pub inherited_visibility: InheritedVisibility,
    pub view_visibility: ViewVisibility,
    pub z_index: ZIndex,
}

commands.spawn(ButtonBundle {
    style: Style {
        width: Val::Px(100.0),
        height: Val::Px(50.0),
        ..default()
    },
    focus_policy: FocusPolicy::Block,
    ..default()
})
```
And this is what it looks like with Required Components:
```rust
#[derive(Component)]
#[require(Node, UiImage)]
struct Button;

commands.spawn((
    Button,
    Style {
        width: Val::Px(100.0),
        height: Val::Px(50.0),
        ..default()
    },
    FocusPolicy::Block,
));
```
With Required Components, we mention only the most relevant components.
Every component required by `Node` (ex: `Style`, `FocusPolicy`, etc) is
automatically brought in!
### Efficiency
1. At insertion/spawn time, Required Components (including recursive
required components) are initialized and inserted _as if they were
manually inserted alongside the given components_. This means that this
is maximally efficient: there are no archetype or table moves.
2. Required components are only initialized and inserted if they were
not manually provided by the developer. For the code example in the
previous section, because `Style` and `FocusPolicy` are inserted
manually, they _will not_ be initialized and inserted as part of the
required components system. Efficient!
3. The "missing required components _and_ constructors needed for an
insertion" are cached in the "archetype graph edge", meaning they aren't
computed per-insertion. When a component is inserted, the "missing
required components" list is iterated (and that graph edge (AddBundle)
is actually already looked up for us during insertion, because we need
that for "normal" insert logic too).
### IDE Integration
The `#[require(SomeComponent)]` macro has been written in such a way
that Rust Analyzer can provide type-inspection-on-hover and `F12` /
go-to-definition for required components.
### Custom Constructors
The `require` syntax expects a `Default` constructor by default, but it
can be overridden with a custom constructor:
```rust
#[derive(Component)]
#[require(
    Node,
    Style(button_style),
    UiImage
)]
struct Button;

fn button_style() -> Style {
    Style {
        width: Val::Px(100.0),
        ..default()
    }
}
```
### Multiple Inheritance
You may have noticed by now that this behaves a bit like "multiple
inheritance". One of the problems that this presents is that it is
possible to have duplicate requires for a given type at different levels
of the inheritance tree:
```rust
#[derive(Component)]
struct X(usize);

#[derive(Component)]
#[require(X(x1))]
struct Y;

fn x1() -> X {
    X(1)
}

#[derive(Component)]
#[require(
    Y,
    X(x2),
)]
struct Z;

fn x2() -> X {
    X(2)
}

// What version of X is inserted for Z?
commands.spawn(Z);
```
This is allowed (and encouraged), although this doesn't appear to occur
much in practice. First: only one version of `X` is initialized and
inserted for `Z`. In the case above, I think we can all probably agree
that it makes the most sense to use the `x2` constructor for `X`,
because `Y`'s `x1` constructor exists "beneath" `Z` in the inheritance
hierarchy; `Z`'s constructor is "more specific".
The algorithm is simple and predictable:
1. Use all of the constructors (including default constructors) directly
defined in the spawned component's require list
2. In the order the requires are defined in `#[require()]`, recursively
visit the require list of each of the components in the list (this is a
Depth First Search). When a constructor is found, it will only be
used if one has not already been found.
From a user perspective, just think about this as the following:
1. Specifying a required component constructor for `Foo` directly on a
spawned component `Bar` will result in that constructor being used (and
overriding existing constructors lower in the inheritance tree). This is
the classic "inheritance override" behavior people expect.
2. For cases where "multiple inheritance" results in constructor
clashes, Components should be listed in "importance order". List a
component earlier in the requirement list to initialize its inheritance
tree earlier.
Required Components _does_ generally result in a model where component
values are decoupled from each other at construction time. Notably, some
existing Bundle patterns use bundle constructors to initialize multiple
components with shared state. I think (in general) moving away from this
is necessary:
1. It allows Required Components (and the Scene system more generally)
to operate according to simple rules
2. The "do arbitrary init value sharing in Bundle constructors" approach
_already_ causes data consistency problems, and those problems would be
exacerbated in the context of a Scene/UI system. For cases where shared
state is truly necessary, I think we are better served by observers /
hooks.
3. If a situation _truly_ needs shared state constructors (which should
be rare / generally discouraged), Bundles are still there if they are
needed.
## Next Steps
* **Require Construct-ed Components**: I have already implemented this
(as defined in the [Next Generation Scene / UI
Proposal](https://github.com/bevyengine/bevy/discussions/14437)). However
I've removed `Construct` support from this PR, as that has not landed
yet. Adding this back in requires relatively minimal changes to the
current impl, and can be done as part of a future Construct pr.
* **Port Built-in Bundles to Required Components**: This isn't something
we should do right away. It will require rethinking our public
interfaces, which IMO should be done holistically after the rest of Next
Generation Scene / UI lands. I think we should merge this PR first and
let people experiment _inside their own code with their own Components_
while we wait for the rest of the new scene system to land.
* **_Consider_ Automatic Required Component Removal**: We should
evaluate _if_ automatic Required Component removal should be done. Ex:
if all components that explicitly require a component are removed,
automatically remove that component. This issue has been explicitly
deferred in this PR, as I consider the insertion behavior to be
desirable on its own (and viable on its own). I am also doubtful that we
can find a design that has behavior we actually want. Aka: can we
_really_ distinguish between a component that is "only there because it
was automatically inserted" and "a component that was necessary / should
be kept". See my [discussion response
here](https://github.com/bevyengine/bevy/discussions/14437#discussioncomment-10268668)
for more details.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
Co-authored-by: Pascal Hertleif <killercup@gmail.com>
# Objective
The Android example currently crashes on startup on the Adreno 642L.
Previous PRs #14176 and #13323 have addressed this specific crash
occurring on some Adreno GPUs. That fix works as it should, but isn't
applied when the GPU name contains a suffix, as in the case of
`642L`.
## Solution
- Amend the logic to filter out any parts of the GPU name that don't
contain digits, thus enabling the fix on `642L`.
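A rough sketch of the idea as a standalone helper (illustrative only, not the actual Bevy code, which lives in the renderer setup path):

```rust
/// Extract the numeric model from an Adreno GPU name such as "Adreno (TM) 642L",
/// ignoring non-digit characters so suffixes like "L" don't defeat the check.
fn adreno_model(name: &str) -> Option<u32> {
    let digits: String = name.chars().filter(char::is_ascii_digit).collect();
    digits.parse().ok()
}
```

With this, "642L" and "642" both resolve to 642, so the comparison gating the workaround treats them the same.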
## Testing
- Ran the Android example on a Nothing Phone 1. Before this change it
crashed, after it works as intended.
---------
Co-authored-by: Sam Pettersson <sam.pettersson@geoguessr.com>
# Objective
- `Curve<T>` was meant to be object safe, but one of the latest commits
made it not object safe.
- When trying to use `Curve<T>` as `&dyn Curve<T>` this compile error is
raised:
```
error[E0038]: the trait `curve::Curve` cannot be made into an object
--> crates/bevy_math/src/curve/mod.rs:1025:20
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> crates/bevy_math/src/curve/mod.rs:60:8
|
23 | pub trait Curve<T> {
| ----- this trait cannot be made into an object...
...
60 | fn sample_iter(&self, iter: impl IntoIterator<Item = f32>) -> impl Iterator<Item = Option<T>> {
| ^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...because method `sample_iter` references an `impl Trait` type in its return type
| |
| ...because method `sample_iter` has generic type parameters
...
```
## Solution
- Make `Curve<T>` object safe again by adding a `Self: Sized` bound to the
newly added methods.
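A minimal sketch of the pattern on a simplified trait (not the real `Curve` definition):

```rust
trait Curve<T> {
    fn sample(&self, t: f32) -> Option<T>;

    // The `Self: Sized` bound keeps this generic method out of the vtable,
    // so `dyn Curve<T>` stays object safe.
    fn sample_iter(&self, iter: impl IntoIterator<Item = f32>) -> impl Iterator<Item = Option<T>>
    where
        Self: Sized,
    {
        iter.into_iter().map(|t| self.sample(t))
    }
}

// This compiles only because the generic method is bounded by `Self: Sized`:
fn assert_object_safe(_curve: &dyn Curve<f32>) {}
```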
## Testing
- Added a new test that ensures the `Curve<T>` trait can be made into an
object.
# Objective
- Fixes #14348
- Fixes #14528
- Less complex (but also likely less performant) alternative to #14611
## Solution
- Add an `is_dense` flag to `QueryIter` indicating whether it can perform
dense iteration or not;
- Check this flag any time iteration over a query is performed.
---
It would be nice if someone could try benching this change to see if it
actually matters.
~~Note that this is not 100% ready for merging, since there are a bunch of
safety comments on the uses of the various `IS_DENSE` checks that
still need to be updated.~~ This is ready modulo benchmarks.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
When handles for loading assets are dropped, we currently wait until the
load is completed before dropping the handle. Instead, drop asset-load
tasks immediately.
## Solution
- track tasks for loading assets and drop them immediately when all
handles are dropped.
~~- use `join_all` in `gltf_loader.rs` to allow it to yield and be
dropped.~~
This doesn't cover all the load APIs; for those it doesn't cover, the task
will still be detached and will still complete before the result is
discarded.
separated out from #13170
# Objective
Allow dynamic systems to take lists of system parameters whose length is
not known at compile time.
This can be used for building a system that runs a script defined at
runtime, where the script needs a variable number of query parameters.
It can also be used for building a system that collects a list of
plugins at runtime, and provides a parameter to each one.
This is most useful today with `Vec<Query<FilteredEntityMut>>`. It will
be even more useful with `Vec<DynSystemParam>` if #14817 is merged,
since the parameters in the list can then be of different types.
## Solution
Implement `SystemParam` and `SystemParamBuilder` for `Vec` and
`ParamSet<Vec>`.
## Example
```rust
let system = (vec![
    QueryParamBuilder::new_box(|builder| {
        builder.with::<B>().without::<C>();
    }),
    QueryParamBuilder::new_box(|builder| {
        builder.with::<C>().without::<B>();
    }),
],)
    .build_state(&mut world)
    .build_system(|params: Vec<Query<&mut A>>| {
        let mut count: usize = 0;
        params
            .into_iter()
            .for_each(|mut query| count += query.iter_mut().count());
        count
    });
```
# Objective
- Fixes #14902
- #14686 introduced a name clash when using `use bevy::prelude::*;`
## Solution
- Renamed `bevy::picking::events::Drop` to
`bevy::picking::events::DragDrop`
## Testing
- Not being used in tests or examples, so I just compiled.
---
## Migration Guide
- Rename `Drop` to `DragDrop`
- `bevy::picking::events::Drop` is now `bevy::picking::events::DragDrop`
# Objective
This is a value that is, and will be, used as the domain of curves pretty
often. By adding it as a dedicated constant we can get rid of some
`unwrap`s and function calls.
## Solution
Added `Interval::UNIT`.
## Testing
I replaced all occurrences of `interval(0.0, 1.0).unwrap()` with the new
`Interval::UNIT` constant in tests and doc tests.
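A small before/after sketch of what this replaces (the import path is assumed to be the `bevy_math` curve module):

```rust
use bevy_math::curve::{interval, Interval};

// Before: a fallible constructor call for a value that can never fail.
let before = interval(0.0, 1.0).unwrap();

// After: a constant.
let after = Interval::UNIT;
assert_eq!(before, after);
```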
Bumps [crate-ci/typos](https://github.com/crate-ci/typos) from 1.23.6 to
1.24.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/releases">crate-ci/typos's
releases</a>.</em></p>
<blockquote>
<h2>v1.24.1</h2>
<h2>[1.24.1] - 2024-08-23</h2>
<h3>Fixes</h3>
<ul>
<li>Remove unverified varcon (locale data) entries</li>
</ul>
<h2>v1.24.0</h2>
<h2>[1.24.0] - 2024-08-23</h2>
<h3>Features</h3>
<ul>
<li>Update varcon (locale data) to version 2020.12.07</li>
</ul>
<h2>v1.23.7</h2>
<h2>[1.23.7] - 2024-08-22</h2>
<h3>Fixes</h3>
<ul>
<li><em>(config)</em> Respect <code>--locale</code> /
<code>default.locale</code> again after it was broken in 1.16.24</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/crate-ci/typos/blob/master/CHANGELOG.md">crate-ci/typos's
changelog</a>.</em></p>
<blockquote>
<h2>[1.24.1] - 2024-08-23</h2>
<h3>Fixes</h3>
<ul>
<li>Remove unverified varcon (locale data) entries</li>
</ul>
<h2>[1.24.0] - 2024-08-23</h2>
<h3>Features</h3>
<ul>
<li>Update varcon (locale data) to version 2020.12.07</li>
</ul>
<h2>[1.23.7] - 2024-08-22</h2>
<h3>Fixes</h3>
<ul>
<li><em>(config)</em> Respect <code>--locale</code> /
<code>default.locale</code> again after it was broken in 1.16.24</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b86466d268"><code>b86466d</code></a>
chore: Release</li>
<li><a
href="611cbc4c70"><code>611cbc4</code></a>
docs: Update changelog</li>
<li><a
href="8bfb5febe2"><code>8bfb5fe</code></a>
chore: Release</li>
<li><a
href="7b6170d8e1"><code>7b6170d</code></a>
Merge pull request <a
href="https://redirect.github.com/crate-ci/typos/issues/1086">#1086</a>
from epage/verified</li>
<li><a
href="17b4d0267e"><code>17b4d02</code></a>
fix(vars): Drop unverified entries</li>
<li><a
href="21281cc0ae"><code>21281cc</code></a>
chore: Release</li>
<li><a
href="b7592dd937"><code>b7592dd</code></a>
chore: Release</li>
<li><a
href="04d3f682be"><code>04d3f68</code></a>
docs: Update changelog</li>
<li><a
href="6ae8520cf3"><code>6ae8520</code></a>
chore: Release</li>
<li><a
href="f0be796065"><code>f0be796</code></a>
chore: Release</li>
<li>Additional commits viewable in <a
href="https://github.com/crate-ci/typos/compare/v1.23.6...v1.24.1">compare
view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=crate-ci/typos&package-manager=github_actions&previous-version=1.23.6&new-version=1.24.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
# Objective
- I needed to run a system whenever a specific condition became true
after being previously false.
- Other users might also need to run a system when a condition changes,
regardless of whether it became true or false.
## Solution
- This adds two run conditions to `common_conditions`:
- `condition_changed`, which returns true whenever the inner condition changes
- `condition_became_true`, which returns true whenever the inner condition
becomes true after previously being false
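A usage sketch, assuming both conditions are exported from `common_conditions` (and therefore the `bevy_ecs` prelude) like the existing run conditions:

```rust
use bevy::prelude::*;

#[derive(Resource, Default)]
struct Paused(bool);

fn is_paused(paused: Res<Paused>) -> bool {
    paused.0
}

fn on_pause() {
    println!("the game was just paused");
}

fn main() {
    App::new()
        .init_resource::<Paused>()
        // Runs only on the frame where `is_paused` flips from false to true.
        .add_systems(Update, on_pause.run_if(condition_became_true(is_paused)))
        .run();
}
```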
## Testing
- I added a doctest for each function
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
# Objective
- Allow not only inserting `Data` into `EmbeddedAssetRegistry` (and `Dir`
in turn) but now also removing it again.
- This way, when the registry is used to embed asset data from *somewhere*
but the data is not loaded through the conventional means via `AssetServer`
(which I observed takes ownership of the `Data`), the `Data` does not need
to stay in the `EmbeddedAssetRegistry`'s memory throughout the lifetime of
the application.
## Solution
- added the `remove_asset` functions in `EmbeddedAssetRegistry` and
`Dir`
## Testing
- added a module unittest
- does this require changes if build with feature `embedded_watcher`?
# Objective
Fixes #14883
## Solution
Pretty simple update to `EntityCommands` methods to consume `self` and
return it rather than taking `&mut self`. The things probably worth
noting:
* I added `#[allow(clippy::should_implement_trait)]` to the `add` method
because it causes a linting conflict with `std::ops::Add`.
* `despawn` and `log_components` now return `Self`. I'm not sure if
that's exactly the desired behavior so I'm happy to adjust if that seems
wrong.
## Testing
Tested with `cargo run -p ci`. I think that should be sufficient to call
things good.
## Migration Guide
The most likely migration needed is changing code from this:
```rust
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
    entity.insert(DepthPrepass);
}
if normal_prepass {
    entity.insert(NormalPrepass);
}
if motion_vector_prepass {
    entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
    entity.insert(DeferredPrepass);
}
```
to this:
```rust
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
    entity = entity.insert(DepthPrepass);
}
if normal_prepass {
    entity = entity.insert(NormalPrepass);
}
if motion_vector_prepass {
    entity = entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
    entity.insert(DeferredPrepass);
}
```
as can be seen in several of the example code updates here. There will
probably also be instances where mutable `EntityCommands` vars no longer
need to be mutable.
# Objective
Based on the discussion in #14864, I wanted to experiment with the core
`GenericTypeCell` type, whose `get_or_insert` method accounted for 2% of
the final binary size of the `3d_scene` example. The reason for this
large percentage is likely because the type is fundamental to the rest
of Bevy while having 3 generic parameters (the type stored `T`, the type
to retrieve `G`, and the function used to insert a new value `F`).
- Acts on #14864
## Solution
- Split `get_or_insert` into smaller functions with minimised
parameterisation. These new functions are private so as to preserve the
public-facing API, but could be exposed if desired.
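A general illustration of this kind of generics debloating, on a made-up `Registry` type rather than `GenericTypeCell` itself:

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

#[derive(Default)]
struct Registry {
    values: HashMap<TypeId, Box<dyn Any>>,
}

impl Registry {
    // Thin generic shell: monomorphized per `G`, but contains almost no code.
    fn get_or_insert<G: Any>(&mut self, f: impl FnOnce() -> G) -> &G {
        let mut f = Some(f);
        let mut make = || Box::new((f.take().expect("called at most once"))()) as Box<dyn Any>;
        self.get_or_insert_erased(TypeId::of::<G>(), &mut make)
            .downcast_ref::<G>()
            .expect("value stored under this TypeId is a G")
    }

    // Type-erased body: compiled once and shared by every instantiation above.
    fn get_or_insert_erased(
        &mut self,
        id: TypeId,
        make: &mut dyn FnMut() -> Box<dyn Any>,
    ) -> &dyn Any {
        let boxed = self.values.entry(id).or_insert_with(|| make());
        &**boxed
    }
}
```

Only the thin wrapper is monomorphized per type; the map lookup and insertion logic exists once in the binary.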
## Testing
- Ran CI locally.
- Used `cargo bloat --release --example 3d_scene -n 100000
--message-format json > out.json` and @cart's [bloat
analyzer](https://gist.github.com/cart/722756ba3da0e983d207633e0a48a8ab)
to measure a 428KiB reduction in binary size when compiling on Windows
10.
- ~I have _not_ benchmarked to determine if this improves/hurts
performance.~ See
[below](https://github.com/bevyengine/bevy/pull/14865#issuecomment-2306083606).
## Notes
In my opinion this seems like a good test-case for the concept of
debloating generics within the Bevy codebase. I believe the performance
impact here is negligible in either direction (at runtime and compile
time), but the binary reduction is measurable and quite significant for
a relatively minor change in code.
---------
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
# Objective
Citing @mweatherley
> As mentioned before, a multi-sampling function in the API which takes
an iterator is probably something we want (e.g. `sample_iter(iter: impl
IntoIterator<Item = f32>) -> impl IntoIterator<Item = T> { //... }`, but
there are some design choices to be made on the details (e.g. does this
filter out points that aren't in the domain? does it do sorting? etc.)
## Solution
I think the most flexible solution for end users is to expose all the
`sample_...` functions with an `iter` equivalent, so we'll have
- `sample_iter`
- `sample_iter_unchecked`
- `sample_iter_clamped`
Answering some questions from the original idea:
> does this filter out points that aren't in the domain?
With these methods the user has the choice to just sample, or, if they want
to filter out invalid points, to use `sample_iter` and then apply `filter_map`
to the returned iterator themselves.
> does it do sorting?
I think it's much the same thing. If the user wants it, they need to do it
themselves by either collecting and sorting a `Vec` or using
`itertools`. I think there is a legit use case for "please sample me
this collection of points that are unordered", and we would destroy it if
we took away too much agency from users by sorting for them.
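For example, a small sketch of the filter-it-yourself approach mentioned above (the import path is assumed to be the `bevy_math` curve module):

```rust
use bevy_math::curve::Curve;

// Sample a batch of points and keep only the ones inside the curve's domain.
fn samples_in_domain<T>(
    curve: &impl Curve<T>,
    points: impl IntoIterator<Item = f32>,
) -> Vec<T> {
    curve
        .sample_iter(points) // yields `Option<T>`: `None` for out-of-domain inputs
        .flatten()           // drop the `None`s, keep the samples
        .collect()
}
```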
## Testing
- Added a test which covers all three methods
# Objective
Add `bevy_picking` sprite backend as part of the `bevy_mod_picking`
upstreamening (#12365).
## Solution
More or less a copy/paste from `bevy_mod_picking`, with the changes
[here](https://github.com/aevyrie/bevy_mod_picking/pull/354). I'm
putting that link here since those changes haven't yet made it through
review, so should probably be reviewed on their own.
## Testing
I couldn't find any sprite-backend-specific tests in `bevy_mod_picking`
and unfortunately I'm not familiar enough with Bevy's testing patterns
to write tests for code that relies on windowing and input. I'm willing
to break the pointer hit system into testable blocks and add some more
modular tests if that's deemed important enough to block, otherwise I
can open an issue for adding tests as follow-up.
## Follow-up work
- More docs/tests
- Ignore pick events on transparent sprite pixels with potential opt-out
---------
Co-authored-by: Aevyrie <aevyrie@gmail.com>
# Objective
`arc_2d` wasn't actually doing what the docs were saying. The arc wasn't
offset by what was previously `direction_angle` but by `direction_angle
- arc_angle / 2.0`. This meant that the arc's center was lying on the
`Vec2::Y` axis and then it was offset. This was probably done to fit the
behavior of the `Arc2D` primitive. I would argue that this isn't
desirable for the plain `arc_2d` gizmo method since
- a) the docs get longer to explain the weird centering
- b) the mental model the user has to know gets bigger with more
implicit assumptions
given the code
```rust
my_gizmos.arc_2d(Vec2::ZERO, 0.0, FRAC_PI_2, 75.0, ORANGE_RED);
```
we get
![image](https://github.com/user-attachments/assets/84894c6d-42e4-451b-b3e2-811266486ede)
where after the fix with
```rust
my_gizmos.arc_2d(Isometry2d::IDENTITY, FRAC_PI_2, 75.0, ORANGE_RED);
```
we get
![image](https://github.com/user-attachments/assets/16b0aba0-f7b5-4600-ac49-a22be0315c40)
To get the same result with the previous implementation you would have
to randomly add `arc_angle / 2.0` to the `direction_angle`.
```rust
my_gizmos.arc_2d(Vec2::ZERO, FRAC_PI_4, FRAC_PI_2, 75.0, ORANGE_RED);
```
This makes constructing helper functions similar to those that already exist
in 3D, like
- `long_arc_2d_between`
- `short_arc_2d_between`
much harder.
## Solution
- Make the arc really start at `Vec2::Y * radius` in counter-clockwise
direction + offset by an angle, as the docs state
- Use `Isometry2d` instead of `position: Vec2` and `direction_angle:
f32` to reduce the chance of messing up rotation/translation
- Adjust the docs for the changes above
- Adjust the gizmo rendering of some primitives
## Testing
- check `2d_gizmos.rs` and `render_primitives.rs` examples
## Migration Guide
- users have to adjust their usages of `arc_2d`:
- before:
```rust
arc_2d(
    pos,
    angle,
    arc_angle,
    radius,
    color
)
```
- after:
```rust
arc_2d(
    // this `+ arc_angle * 0.5` quirk is only needed if you want to preserve
    // the previous behavior with the new API.
    // feel free to try to fix this though, since your current calls to this
    // function most likely involve some computations to counteract that
    // quirk in the first place
    Isometry2d::new(pos, Rot2::radians(angle + arc_angle * 0.5)),
    arc_angle,
    radius,
    color
)
```
# Objective
- Faster meshlet rasterization path for small triangles
- Avoid having to allocate and write out a triangle buffer
- Refactor gpu_scene.rs
## Solution
- Replace the 32bit visbuffer texture with a 64bit visbuffer buffer,
where the left 32 bits encode depth, and the right 32 bits encode the
existing cluster + triangle IDs (see the packing sketch after this list).
We can't use 64bit textures, as wgpu/naga doesn't support atomic ops on
textures yet.
- Instead of writing out a buffer of packed cluster + triangle IDs (per
triangle) to raster, the culling pass now writes out a buffer of just
cluster IDs (per cluster, so less memory allocated, cheaper to write
out).
- Clusters for software raster are allocated from the left side
- Clusters for hardware raster are allocated in the same buffer, from
the right side
- The buffer size is fixed at MeshletPlugin build time, and should be
set to a reasonable value for your scene (no warning on overflow, and no
good way to determine what value you need outside of renderdoc - I plan
to fix this in a future PR adding a meshlet stats overlay)
- Currently I don't have a heuristic for software vs hardware raster
selection for each cluster. The existing code is just a placeholder. I
need to profile on a release scene and come up with a heuristic,
probably in a future PR.
- The culling shader is getting pretty hard to follow at this point, but
I don't want to spend time improving it as the entire shader/pass is
getting rewritten/replaced in the near future.
- Software raster is a compute workgroup per-cluster. Each workgroup
loads and transforms the <=64 vertices of the cluster, and then
rasterizes the <=64 triangles of the cluster.
- Two variants are implemented: Scanline for clusters with any larger
triangles (still smaller than hardware is good at), and brute-force for
very very tiny triangles
- Once the shader determines that a pixel should be filled in, it does
an atomicMax() on the visbuffer to store the results, copying how Nanite
works
- On devices with a low max workgroups per dispatch limit, an extra
compute pass is inserted before software raster to convert from a 1d to
2d dispatch (I don't think 3d would ever be necessary).
- I haven't implemented the top-left rule or subpixel precision yet, I'm
leaving that for a future PR since I get usable results without it for
now
- Resources used:
https://kristoffer-dyrkorn.github.io/triangle-rasterizer and chapters
6-8 of
https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index
- Hardware raster now spawns 64*3 vertex invocations per meshlet,
instead of the actual meshlet vertex count. Extra invocations just
early-exit.
- While this is slower than the existing system, hardware draws should
be rare now that software raster is usable, and it saves a ton of memory
using the unified cluster ID buffer. This would be fixed if wgpu had
support for mesh shaders.
- Instead of writing to a color+depth attachment, the hardware raster
pass also does the same atomic visbuffer writes that software raster
uses.
- We have to bind a dummy render target anyways, as wgpu doesn't
currently support render passes without any attachments
- Material IDs are no longer written out during the main rasterization
passes.
- If we had async compute queues, we could overlap the software and
hardware raster passes.
- New material and depth resolve passes run at the end of the visbuffer
node, and write out view depth and material ID depth textures
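As a small illustration of the 64-bit visbuffer packing from the first bullet above (the bit layout is as described; the helper itself is illustrative, not shader code):

```rust
// Depth in the high 32 bits means a single atomic max on the packed value
// keeps the winning fragment's depth together with its cluster/triangle IDs.
fn pack_visbuffer(depth_bits: u32, packed_ids: u32) -> u64 {
    ((depth_bits as u64) << 32) | packed_ids as u64
}

fn unpack_visbuffer(value: u64) -> (u32, u32) {
    ((value >> 32) as u32, value as u32)
}
```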
### Misc changes
- Fixed cluster culling importing, but never actually using the previous
view uniforms when doing occlusion culling
- Fixed incorrectly adding the LOD error twice when building the meshlet
mesh
- Splitup gpu_scene module into meshlet_mesh_manager, instance_manager,
and resource_manager
- resource_manager is still too complex and inefficient (extract and
prepare are way too expensive). I plan on improving this in a future PR,
but for now ResourceManager is mostly a 1:1 port of the leftover
MeshletGpuScene bits.
- Material draw passes have been renamed to the more accurate material
shade pass, as well as some other misc renaming (in the future, these
will be compute shaders even, and not actual draw calls)
---
## Migration Guide
- TBD (ask me at the end of the release for meshlet changes as a whole)
---------
Co-authored-by: vero <email@atlasdostal.com>
# Objective
When using instancing, 2 `VertexBufferLayout`s are needed, one for
per-vertex and one for per-instance data. Shader locations of all
attributes must not overlap, so one of the layouts needs to start its
locations at an offset. However,
`VertexBufferLayout::from_vertex_formats` will always start locations at
0, requiring manual adjustment, which is currently pretty verbose.
## Solution
Add `VertexBufferLayout::offset_locations`, which adds an offset to all
attribute locations.
Code using this method looks like this:
```rust
VertexState {
    shader: BACKBUFFER_SHADER_HANDLE.typed(),
    shader_defs: Vec::new(),
    entry_point: "vertex".into(),
    buffers: vec![
        VertexBufferLayout::from_vertex_formats(
            VertexStepMode::Vertex,
            [VertexFormat::Float32x2],
        ),
        VertexBufferLayout::from_vertex_formats(
            VertexStepMode::Instance,
            [VertexFormat::Float32x2, VertexFormat::Float32x3],
        )
        .offset_locations(1),
    ],
}
```
Alternative solutions include:
- Pass the starting location to `from_vertex_formats` – this is a bit
simpler than my solution here, but most calls don't need an offset, so
they'd always pass 0 there.
- Do nothing and make the user hand-write this.
---
## Changelog
- Add `VertexBufferLayout::offset_locations` to simplify buffer layout
construction when using instancing.
---------
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Adding more features to the `AsBindGroup` proc macro means making the trait
arguments uglier. Downstream implementors of the trait without the proc
macro might want to do different things than our default arguments.
## Solution
Make `AsBindGroup` take an associated `Param` type.
## Migration Guide
`AsBindGroup` now allows the user to specify a `SystemParam` to be used
for creating bind groups.
# Objective
Closes #7622.
I was working on adding support for reflecting generic functions and
found that I wanted to use an argument's `TypeId` for hashing and
comparison, but its `TypePath` for debugging and error messaging.
While I could just keep them separate, or place them in a tuple or a local
struct or something, I think I see an opportunity to make a dedicated
type for this.
Additionally, we can use this type to clean up some duplication amongst
the type info structs in a manner similar to #7622.
## Solution
Added the `Type` type. This should be seen as the most basic
representation of a type apart from `TypeId`. It stores both the
`TypeId` of the type as well as its `TypePathTable`.
The `Hash` and `PartialEq` implementations rely on the `TypeId`, while
the `Debug` implementation relies on the `TypePath`.
This makes it especially useful as a key in a `HashMap` since we get the
speed of the `TypeId` hashing/comparisons with the readability of
`TypePath`.
With this type, we're able to reduce the duplication across the type
info structs by removing individual fields for `TypeId` and
`TypePathTable`, replacing them with a single `Type` field. Similarly,
we can remove many duplicate methods and replace it with a macro that
delegates to the stored `Type`.
### Caveats
It should be noted that this type is currently 3x larger than `TypeId`.
On my machine, it's 48 bytes compared to `TypeId`'s 16. While this
doesn't matter for `TypeInfo` since it would contain that data
regardless, it is something to keep in mind when using elsewhere.
## Testing
All tests should pass as normal:
```
cargo test --package bevy_reflect
```
---
## Showcase
`bevy_reflect` now exports a `Type` struct. This type contains both the
`TypeId` and the `TypePathTable` of the given type, allowing it to be
used like `TypeId` but have the debuggability of `TypePath`.
```rust
// We can create this for any type implementing `TypePath`:
let ty = Type::of::<String>();
// It has `Hash` and `Eq` impls powered by `TypeId`, making it useful for maps:
let mut map = HashMap::<Type, i32>::new();
map.insert(ty, 25);
// And it has a human-readable `Debug` representation:
let debug = format!("{:?}", map);
assert_eq!(debug, "{alloc::string::String: 25}");
```
## Migration Guide
Certain type info structs now only return their item types as `Type`
instead of exposing direct methods on them.
The following methods have been removed:
- `ArrayInfo::item_type_path_table`
- `ArrayInfo::item_type_id`
- `ArrayInfo::item_is`
- `ListInfo::item_type_path_table`
- `ListInfo::item_type_id`
- `ListInfo::item_is`
- `SetInfo::value_type_path_table`
- `SetInfo::value_type_id`
- `SetInfo::value_is`
- `MapInfo::key_type_path_table`
- `MapInfo::key_type_id`
- `MapInfo::key_is`
- `MapInfo::value_type_path_table`
- `MapInfo::value_type_id`
- `MapInfo::value_is`
Instead, access the `Type` directly using one of the new methods:
- `ArrayInfo::item_ty`
- `ListInfo::item_ty`
- `SetInfo::value_ty`
- `MapInfo::key_ty`
- `MapInfo::value_ty`
For example:
```rust
// BEFORE
let type_id = array_info.item_type_id();
// AFTER
let type_id = array_info.item_ty().id();
```
# Objective
I tried writing something like this in my project
```rust
.observe(|e: Trigger<OnAdd, Skeleton>| {
    panic!("Skeletoned! {e:?}");
});
```
and it didn't compile.
Having the `Debug` trait defined on the `Trigger` event will ease debugging the
observers a little bit.
## Solution
Add a bespoke `Debug` implementation when both the bundle and the event
have `Debug` implemented for them.
## Testing
I've added `println!("{trigger:#?}");` to the [observers
example](938d810766/examples/ecs/observers.rs (L124))
and it compiled!
Caveats with this PR are:
- removing this implementation, if for any reason we need to, will
be a breaking change
- the implementation is manually generated, which adds potential toil
when changing the `Trigger` structure
## Showcase
Log output:
```rust
on_add_mine: Trigger {
    event: OnAdd,
    propagate: false,
    trigger: ObserverTrigger {
        observer: 2v1#4294967298,
        event_type: ComponentId(
            0,
        ),
        entity: 454v1#4294967750,
    },
    _marker: PhantomData<observers::Mine>,
}
```
Thank you for maintaining this engine! 🧡
# Objective
- Gizmo rendering on WebGPU has been fixed by #14653, but gizmo joints
still cause an error
(https://github.com/bevyengine/bevy/issues/14696#issuecomment-2283689669)
when enabled.
## Solution
- Applies the same fix as #14653 to Gizmo joints.
I'm a noob and just copied their solution; please correct me if I did
something wrong.
## Testing
- Tested 2d-gizmos and 3d-gizmos examples in WebGPU on Chrome. No
rendering errors, and the gizmo joints are apparently rendered ok.
# Objective
Support building systems with parameters whose types can be determined
at runtime.
## Solution
Create a `DynSystemParam` type that can be built using a
`SystemParamBuilder` of any type and then downcast to the appropriate
type dynamically.
## Example
```rust
let system = (
    DynParamBuilder::new(LocalBuilder(3_usize)),
    DynParamBuilder::new::<Query<()>>(QueryParamBuilder::new(|builder| {
        builder.with::<A>();
    })),
    DynParamBuilder::new::<&Entities>(ParamBuilder),
)
    .build_state(&mut world)
    .build_system(
        |mut p0: DynSystemParam, mut p1: DynSystemParam, mut p2: DynSystemParam| {
            let local = p0.downcast_mut::<Local<usize>>().unwrap();
            let query_count = p1.downcast_mut::<Query<()>>().unwrap();
            let entities = p2.downcast_mut::<&Entities>().unwrap();
        },
    );
```
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Periwink <charlesbour@gmail.com>
# Objective
- Remove the `wgpu_trace` feature while still making it easy/possible to
record wgpu traces for debugging.
- Close#14725.
- Get a taste of the bevy codebase. :P
## Solution
This PR performs the above objective by removing the `wgpu_trace`
feature from all `Cargo.toml` files.
However, wgpu traces are still useful for debugging - but to record
them, you need to pass in a directory path to store the traces in. To
avoid forcing users into manually creating the renderer,
`bevy_render::settings::WgpuSettings` now has a `trace_path` field, so
that all of Bevy's automatic initialization can happen while still
allowing for tracing.
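A sketch of what enabling tracing with this field could look like (the `Option<PathBuf>` type of `trace_path` is assumed from the description; `docs/debugging.md` has the authoritative steps):

```rust
use std::path::PathBuf;

use bevy::prelude::*;
use bevy::render::settings::{RenderCreation, WgpuSettings};
use bevy::render::RenderPlugin;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(RenderPlugin {
            render_creation: RenderCreation::Automatic(WgpuSettings {
                // Directory the wgpu trace will be written into.
                trace_path: Some(PathBuf::from("wgpu_trace")),
                ..default()
            }),
            ..default()
        }))
        .run();
}
```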
## Testing
- Did you test these changes? If so, how?
- I have tested these changes, but only via running `cargo run -p ci`. I
am hoping the Github Actions workflows will catch anything I missed.
- Are there any parts that need more testing?
- I do not believe so.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- If you want to test these changes, I have updated the debugging guide
(`docs/debugging.md`) section on WGPU Tracing.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- I ran the above command on a Windows 10 64-bit (x64) machine, using
the `stable-x86_64-pc-windows-msvc` toolchain. I do not have anything
set up for other platforms or targets (though I can't imagine this needs
testing on other platforms).
---
## Migration Guide
1. The `bevy/wgpu_trace`, `bevy_render/wgpu_trace`, and
`bevy_internal/wgpu_trace` features no longer exist. Remove them from
your `Cargo.toml`, CI, tooling, and what-not.
2. Follow the instructions in the updated `docs/debugging.md` file in
the repository, under the WGPU Tracing section.
Because of the changes made, you can now generate traces to any path,
rather than the hardcoded `%WorkspaceRoot%/wgpu_trace` (where
`%WorkspaceRoot%` is... the root of your crate's workspace) folder.
Do note that WGPU has not yet restored tracing functionality. However,
once it does, the above should be sufficient to generate new traces.
---------
Co-authored-by: TrialDragon <31419708+TrialDragon@users.noreply.github.com>
# Objective
Add a convenience constructor to make simple animation graphs easier to
build.
I've had some notes about attempting this since #11989 that I just
remembered after seeing #14852.
This partially addresses #14852, but I don't really know animation well
enough to write all of the documentation it's asking for.
## Solution
Add `AnimationGraph::from_clips` and use it to simplify `animated_fox`.
Do some other little bits of incidental cleanup and documentation.
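As a rough usage sketch (the exact return shape of `from_clips` is assumed here, not taken from the docs):

```rust
use bevy::animation::graph::AnimationGraph;
use bevy::animation::AnimationClip;
use bevy::asset::Handle;

// Build a graph that plays the given clips, discarding the node indices.
fn graph_from(clips: Vec<Handle<AnimationClip>>) -> AnimationGraph {
    let (graph, _node_indices) = AnimationGraph::from_clips(clips);
    graph
}
```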
## Testing
I ran `cargo run --example animated_fox`.
# Objective
- Fixes #14658.
## Solution
- Added `on_unimplemented` Diagnostic for `IntoObserverSystem` calling
out argument ordering in a `note`
- Added an example to the documentation on `App::observe` to provide
some explanation to users.
## Testing
- Ran CI locally
- Deliberately introduced a parameter order error in the
`ecs/observers.rs` example as a test.
---
## Showcase
<details>
<summary>Error Before</summary>
```
error[E0277]: the trait bound `{closure@examples/ecs/observers.rs:19:13: 22:37}: IntoObserverSystem<_, _, _>` is not satisfied
--> examples/ecs/observers.rs:19:13
|
18 | .observe(
| ------- required by a bound introduced by this call
19 | / |mines: Query<&Mine>,
20 | | trigger: Trigger<ExplodeMines>,
21 | | index: Res<SpatialIndex>,
22 | | mut commands: Commands| {
... |
34 | | }
35 | | },
| |_____________^ the trait `bevy::prelude::IntoSystem<bevy::prelude::Trigger<'static, _, _>, (), _>` is not implemented for closure `{closure@examples/ecs/observers.rs:19:13: 22:37}`, which is required by `{closure@examples/ecs/observers.rs:19:13: 22:37}: IntoObserverSystem<_, _, _>`
|
= note: required for `{closure@examples/ecs/observers.rs:19:13: 22:37}` to implement `IntoObserverSystem<_, _, _>`
note: required by a bound in `bevy::prelude::App::observe`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_app\src\app.rs:995:24
|
993 | pub fn observe<E: Event, B: Bundle, M>(
| ------- required by a bound in this associated function
994 | &mut self,
995 | observer: impl IntoObserverSystem<E, B, M>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `App::observe`
For more information about this error, try `rustc --explain E0277`.
error: could not compile `bevy` (example "observers") due to 1 previous error
```
</details>
<details>
<summary>Error After</summary>
```
error[E0277]: `{closure@examples/ecs/observers.rs:19:13: 22:37}` cannot become an `ObserverSystem`
--> examples/ecs/observers.rs:19:13
|
18 | .observe(
| ------- required by a bound introduced by this call
19 | / |mines: Query<&Mine>,
20 | | trigger: Trigger<ExplodeMines>,
21 | | index: Res<SpatialIndex>,
22 | | mut commands: Commands| {
... |
34 | | }
35 | | },
| |_____________^ the trait `IntoObserverSystem` is not implemented
|
= help: the trait `bevy::prelude::IntoSystem<bevy::prelude::Trigger<'static, _, _>, (), _>` is not implemented for closure `{closure@examples/ecs/observers.rs:19:13: 22:37}`, which is required by `{closure@examples/ecs/observers.rs:19:13: 22:37}: IntoObserverSystem<_, _, _>`
= note: for function `ObserverSystem`s, ensure the first argument is a `Trigger<T>` and any subsequent ones are `SystemParam`
= note: required for `{closure@examples/ecs/observers.rs:19:13: 22:37}` to implement `IntoObserverSystem<_, _, _>`
note: required by a bound in `bevy::prelude::App::observe`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_app\src\app.rs:1025:24
|
1023 | pub fn observe<E: Event, B: Bundle, M>(
| ------- required by a bound in this associated function
1024 | &mut self,
1025 | observer: impl IntoObserverSystem<E, B, M>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `App::observe`
For more information about this error, try `rustc --explain E0277`.
error: could not compile `bevy` (example "observers") due to 1 previous error
```
</details>
# Objective
- The examples use a more verbose way than necessary to initialize the
image
- The order of the camera doesn't need to be specified; at least I
didn't see a difference in my testing
## Solution
- Use `Image::new_fill()` to fill the image instead of abusing
`resize()`
- Remove the camera ordering
# Objective
Rewrite screenshotting to be able to accept any `RenderTarget`.
Closes#12478
## Solution
Previously, screenshotting relied on setting a variety of state on the
requested window. When extracted, the window's `swap_chain_texture_view`
property would be swapped out with a texture_view created that frame for
the screenshot pipeline to write back to the cpu.
Besides being tightly coupled to the window in a way that prevented
screenshotting other render targets, this approach had the drawback of
relying on the implicit state of `swap_chain_texture_view` being
returned from a `NormalizedRenderTarget` when view targets were
prepared. Because that property is set every frame for windows, that wasn't a
problem, but it poses a problem for render target images. Namely, to do the
equivalent trick, we'd have to replace the `GpuImage`'s texture view,
and somehow restore it later.
As such, this PR creates a new `prepare_view_textures` system which runs
before `prepare_view_targets` that allows a new `prepare_screenshots`
system to be sandwiched between and overwrite the render targets texture
view if a screenshot has been requested that frame for the given target.
Additionally, screenshotting itself has been changed to use a component
+ observer pattern. We now spawn a `Screenshot` component into the
world, whose lifetime is tracked with a series of marker components.
When the screenshot is read back to the CPU, we send the image over a
channel back to the main world where an observer fires on the screenshot
entity before being despawned the next frame. This allows the user to
access resources in their save callback that might be useful (e.g.
uploading the screenshot over the network, etc.).
## Testing
![image](https://github.com/user-attachments/assets/48f19aed-d9e1-4058-bb17-82b37f992b7b)
TODO:
- [x] Web
- [ ] Manual texture view
---
## Showcase
render to texture example:
<img
src="https://github.com/user-attachments/assets/612ac47b-8a24-4287-a745-3051837963b0"
width=200/>
web saving still works:
<img
src="https://github.com/user-attachments/assets/e2a15b17-1ff5-4006-ab2a-e5cc74888b9c"
width=200/>
## Migration Guide
`ScreenshotManager` has been removed. To take a screenshot, spawn a
`Screenshot` entity with the specified render target and provide an
observer targeting the `ScreenshotCaptured` event. See the
`window/screenshot` example for a demonstration.
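A minimal sketch of that flow, following the names used in this description and the `window/screenshot` example (treat the exact import path as indicative):

```rust
use bevy::prelude::*;
use bevy::render::view::screenshot::{save_to_disk, Screenshot};

fn take_screenshot(mut commands: Commands) {
    // Spawn a screenshot entity for the primary window and save the captured
    // image to disk once the `ScreenshotCaptured` event fires for it.
    commands
        .spawn(Screenshot::primary_window())
        .observe(save_to_disk("screenshot.png"));
}
```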
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
# Objective
`ParamSetBuilder` is supposed to be used as a tuple constructor, but the
field was not marked `pub` so it's not actually usable outside of its
module.
## Solution
Mark the field `pub`.
Realize one advantage of doc tests over unit tests is that they test the
public API.
Add a doc test example that uses the field so that this would have been
caught.
# Objective
There is a tiny seam at the top of the annulus caused by normal
floating-point error in calculating the coordinates. When generating the
last pair of triangles, given `n == i` then `(TAU / n) * i` does not
equal `TAU` exactly.
Fixes https://github.com/komadori/bevy_mod_outline/issues/42
## Solution
This can be fixed by changing the calculation so that `(TAU / n) * (i %
n) == 0.0`, which is equivalent for trigonometric purposes.
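As a tiny sketch of the idea (illustrative, not the actual mesh-building code):

```rust
use std::f32::consts::TAU;

// Wrapping the index with `i % n` makes the final vertex angle exactly 0.0
// (equivalent to TAU for sin/cos), so the first and last ring vertices
// coincide bit-for-bit and the seam disappears.
fn ring_angle(i: u32, n: u32) -> f32 {
    (TAU / n as f32) * ((i % n) as f32)
}
```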
## Testing
Added the unit test
`bevy_render::mesh::primitives::dim2::tests::test_annulus`.
Fixes #14825
Edit: After feedback, these are the updated methods:
- `toggle_inherited_visible(&mut self)`
- Toggles between `Visibility::Inherited` and `Visibility::Visible`. If
the value is `Visibility::Hidden`, it remains unaffected.
- `toggle_inherited_hidden(&mut self)`
- Toggles between `Visibility::Inherited` and `Visibility::Hidden`. If
the value is `Visibility::Visible`, it remains unaffected.
- `toggle_visible_hidden(&mut self)`
- Toggles between `Visibility::Visible` and `Visibility::Hidden`. If the
value is `Visibility::Inherited`, it remains unaffected.
---------
Co-authored-by: Chris Russell <8494645+chescock@users.noreply.github.com>
# Objective
- There is a flaw in the implementation of `FogVolume`'s
`density_texture_offset` from #14868. Because of the way I am wrapping
the UVW coordinates in the volumetric fog shader, a seam is visible when
the 3d texture is wrapping around from one side to the other:
![density_texture_offset_seam](https://github.com/user-attachments/assets/89527ef2-5e1b-4b90-8e73-7a3e607697d4)
## Solution
- This PR fixes the issue by removing the wrapping from the shader and
instead leaving it to the user to configure the 3d noise texture to use
`ImageAddressMode::Repeat` if they want it to repeat. Using
`ImageAddressMode::Repeat` is the proper solution to avoid the obvious
seam:
![density_texture_seam_fixed](https://github.com/user-attachments/assets/06e871a6-2db1-4501-b425-4141605f9b26)
- The sampler cannot be implicitly configured to use
`ImageAddressMode::Repeat` because that's not always desirable. For
example, the `fog_volumes` example wouldn't work properly because the
texture from the edges of the volume would overflow to the other sides,
which would be bad in this instance (but it's good in the case of the
`scrolling_fog` example). So leaving it to the user to decide on their
own whether they want the density texture to repeat seems to be the best
solution.
## Testing
- The `scrolling_fog` example still looks the same, it was just changed
to explicitly declare that the density texture should be repeating when
loading the asset. The `fog_volumes` example is unaffected.
<details>
<summary>Minimal reproduction example on current main</summary>
<pre>
use bevy::core_pipeline::experimental::taa::{TemporalAntiAliasBundle,
TemporalAntiAliasPlugin};
use bevy::pbr::{FogVolume, VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;
fn main() {
App::new()
.add_plugins((DefaultPlugins, TemporalAntiAliasPlugin))
.add_systems(Startup, setup)
.run();
}
fn setup(mut commands: Commands, assets: Res<AssetServer>) {
commands.spawn((
Camera3dBundle {
transform: Transform::from_xyz(3.5, -1.0, 0.4)
.looking_at(Vec3::new(0.0, 0.0, 0.4), Vec3::Y),
msaa: Msaa::Off,
..default()
},
TemporalAntiAliasBundle::default(),
VolumetricFogSettings {
ambient_intensity: 0.0,
jitter: 0.5,
..default()
},
));
commands.spawn((
DirectionalLightBundle {
transform: Transform::from_xyz(-6.0, 5.0, -9.0)
.looking_at(Vec3::new(0.0, 0.0, 0.0), Vec3::Y),
directional_light: DirectionalLight {
illuminance: 32_000.0,
shadows_enabled: true,
..default()
},
..default()
},
VolumetricLight,
));
commands.spawn((
SpatialBundle {
visibility: Visibility::Visible,
transform: Transform::from_xyz(0.0, 0.0,
0.0).with_scale(Vec3::splat(3.0)),
..default()
},
FogVolume {
density_texture: Some(assets.load("volumes/fog_noise.ktx2")),
density_texture_offset: Vec3::new(0.0, 0.0, 0.4),
scattering: 1.0,
..default()
},
));
}
</pre>
</details>
# Objective
This example previously had a needlessly complex state machine
that tracked moves between its previous orientation and the new one that
was randomly generated. Using `smooth_nudge` simplifies the example in
addition to making good use of the new API.
## Solution
Use `smooth_nudge` to transition between the current transform and the
new one. This does away with the need to keep track of the move's
starting position and progress. It also just sort of looks nicer.
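A rough sketch of the `smooth_nudge` pattern, independent of the example's specifics (`TargetTransform` and the decay rate are made up here):

```rust
use bevy::math::StableInterpolate;
use bevy::prelude::*;

#[derive(Component)]
struct TargetTransform(Transform);

// Ease each entity toward its target every frame; no start position or
// progress tracking needed.
fn ease_toward_target(time: Res<Time>, mut query: Query<(&mut Transform, &TargetTransform)>) {
    let decay_rate = 4.0; // higher = snappier
    let dt = time.delta().as_secs_f32();
    for (mut transform, target) in &mut query {
        transform
            .translation
            .smooth_nudge(&target.0.translation, decay_rate, dt);
        transform
            .rotation
            .smooth_nudge(&target.0.rotation, decay_rate, dt);
    }
}
```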
## Testing
Run the `align` example:
`cargo run --example align`
# Objective
- Fixes #14873; see that issue for a whole lot of context
## Solution
- Add a blessed system set for this stuff. See [this Discord
discussion](https://discord.com/channels/691052431525675048/749335865876021248/1276262931327094908).
Note that the gizmo systems,
[LWIM](https://github.com/Leafwing-Studios/leafwing-input-manager/pull/522/files#diff-9b59ee4899ad0a5d008889ea89a124a7291316532e42f9f3d6ae842b906fb095R154)
and now a new plugin I'm working on are all already ordering against
`run_fixed_main_schedule`, so having a dedicated system set should be
more robust and hopefully also more discoverable.
---
## ~~Showcase~~
~~I can add a little video of a smooth camera later if this gets merged
:)~~
Apparently a release note is not needed, so I'll leave it out. See the
changes in the fixed timestep example for usage showcase and the video
in #14873 for a more or less accurate video of the effect (it does not
use the same solution though, so it is not quite the same)
## Migration Guide
[run_fixed_main_schedule](https://docs.rs/bevy/latest/bevy/time/fn.run_fixed_main_schedule.html)
is no longer public. If you used to order against it, use the new
dedicated `RunFixedMainLoopSystem` system set instead. You can replace
your usage of `run_fixed_main_schedule` one for one by
`RunFixedMainLoopSystem::FixedMainLoop`, but it is now more idiomatic to
place your systems in either
`RunFixedMainLoopSystem::BeforeFixedMainLoop` or
`RunFixedMainLoopSystem::AfterFixedMainLoop`
Old:
```rust
app.add_systems(
    RunFixedMainLoop,
    some_system.before(run_fixed_main_schedule)
);
```
New:
```rust
app.add_systems(
    RunFixedMainLoop,
    some_system.in_set(RunFixedMainLoopSystem::BeforeFixedMainLoop)
);
```
---------
Co-authored-by: Tau Gärtli <git@tau.garden>