# Objective
`ButtonBundle` has an `ImageNode` component (renamed from `UiImage`).
This wasn't a problem in 0.14, but in 0.15 `ImageNode`'s required
components pull in `ContentSize` and `NodeImageSize`, which means that
by default `ButtonBundle` nodes are given a measure func based on the
size of the image belonging to `TRANSPARENT_IMAGE_HANDLE`, which is 1x1.
This doesn't make sense; default image nodes should either go to zero
size or not be given a measure func at all.
## Solution
Check whether an image node's handle is `TRANSPARENT_IMAGE_HANDLE` and,
if it is, remove its measure func.
Possibly a zero-sized measure would make more sense, but that would
break existing code.
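For illustration, the check amounts to something like this sketch
(hypothetical code, not the actual `bevy_ui` change; the import path for
`TRANSPARENT_IMAGE_HANDLE` is an assumption):
```rust
use bevy::image::TRANSPARENT_IMAGE_HANDLE;
use bevy::prelude::*;
use bevy::ui::ContentSize;

// If a node still uses the default transparent placeholder image, clear its
// measure func so layout isn't constrained to the 1x1 placeholder size.
fn clear_placeholder_measures(mut nodes: Query<(&ImageNode, &mut ContentSize)>) {
    for (image_node, mut content_size) in &mut nodes {
        if image_node.image == TRANSPARENT_IMAGE_HANDLE {
            // Resetting ContentSize drops any measure func that was set.
            *content_size = ContentSize::default();
        }
    }
}
```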
## Testing
Used `ButtonBundle` in the 0.15 `button` example: before this change the
border doesn't render, after it the border renders as expected.
# Objective
- Fix bug where `UiSurface::set_camera_children` (and
`UiSurface::update_children` sometimes) will panic if you remove and add
a `Node` component in a single tick. This is more likely to happen now
because of `remove_with_requires`.
## Solution
- Filter out entities with `Node` when cleaning up entities from
`RemovedComponents<Node>`.
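A minimal sketch of that filter (illustrative, not the exact `UiSurface`
cleanup code):
```rust
use bevy::prelude::*;

// Only treat an entity as removed if it doesn't currently have a Node again,
// which protects against a Node being removed and re-added in the same tick.
fn cleanup_removed_nodes(
    mut removed: RemovedComponents<Node>,
    nodes: Query<(), With<Node>>,
) {
    for entity in removed.read() {
        if nodes.contains(entity) {
            // The entity got a Node back this tick; skip cleanup.
            continue;
        }
        // ... clean up the layout data stored for `entity` here ...
    }
}
```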
## Testing
- Not tested (the Rust compiler refused to cooperate when I tried to
patch this into my project); correct by inspection.
# Objective
- Allow configuring `on_thread_spawn` and `on_thread_destroy` when using
Bevy's `TaskPoolPlugin`.
## Solution
- In `TaskPoolThreadAssignmentPolicy`, two new options, `on_thread_spawn`
and `on_thread_destroy`, are added; they are passed to the two new
builder methods of the same names when the corresponding task pool is
created (see the sketch after this list).
- Because these two options cannot derive `Debug`, `Debug` is implemented
manually for `TaskPoolThreadAssignmentPolicy`.
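A sketch of how the new options could be set; hedged: the exact callback
type and a `Default` impl for the policy are assumptions here, not taken
from the PR:
```rust
use std::sync::Arc;

use bevy::core::{TaskPoolOptions, TaskPoolPlugin, TaskPoolThreadAssignmentPolicy};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(TaskPoolPlugin {
            task_pool_options: TaskPoolOptions {
                compute: TaskPoolThreadAssignmentPolicy {
                    // New hooks from this PR, run on the compute pool's threads.
                    on_thread_spawn: Some(Arc::new(|| println!("compute thread spawned"))),
                    on_thread_destroy: Some(Arc::new(|| println!("compute thread destroyed"))),
                    // Assumes the policy implements Default for its other fields.
                    ..Default::default()
                },
                ..Default::default()
            },
        }))
        .run();
}
```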
---
## Changelog
### Added
- `on_thread_spawn` and `on_thread_destroy` options to the
`TaskPoolPlugin`, allowing users to customize them as needed.
## Migration Guide
- `TaskPoolThreadAssignmentPolicy` now has two additional fields:
`on_thread_spawn` and `on_thread_destroy`. Please consider defaulting
them to `None`.
---------
Co-authored-by: François Mockers <mockersf@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
_If I understand it correctly_, we were checking mesh visibility, as
well as re-rendering point and spot light shadow maps for each view.
This makes it so that M views and N lights produce M x N complexity.
This PR aims to fix that, as well as introduce a stress test for this
specific scenario.
## Solution
- Keep track of what lights have already had mesh visibility calculated
and do not calculate it again;
- Reuse shadow depth textures and attachments across all views, and only
render shadow maps the _first_ time a light is encountered in a view;
- Directional lights remain unaltered, since their shadow map cascades
are view-dependent;
- Add a new `many_cameras_lights` stress test example to verify the
solution
## Showcase
110% speed up on the stress test
83% reduction of memory usage in stress test
### Before (5.35 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 25 57"
src="https://github.com/user-attachments/assets/136b0785-e9a4-44df-9a22-f99cc465e126">
### After (11.34 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 24 35"
src="https://github.com/user-attachments/assets/b8dd858f-5e19-467f-8344-2b46ca039630">
## Testing
- Did you test these changes? If so, how?
- On my game project, where I have two cameras and many shadow-casting
lights, I managed to get pretty much double the FPS.
- Also included a stress test, see the comparison above
- Are there any parts that need more testing?
- Yes, I would like help verifying that this fix is indeed correct, and
that we were really re-rendering the shadow maps by mistake and it's
indeed okay to not do that
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the `many_cameras_lights` example
- On the `main` branch, cherry pick the commit with the example (`git
cherry-pick --no-commit 1ed4ace01`) and run it
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- macOS
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Should compile successfully with any combination of features
## Solution
Add the missing import on the right cfg.
## Testing
Tested building locally.
Fixes https://github.com/bevyengine/bevy/issues/16352
# Objective
Glam has some common and useful types and helpers that are not in the
prelude of `bevy_math`. This includes shorthand constructors like
`vec3`, or even `Vec3A`, the aligned version of `Vec3`.
```rust
// The "normal" way to create a 3D vector
let vec = Vec3::new(2.0, 1.0, -3.0);
// Shorthand version
let vec = vec3(2.0, 1.0, -3.0);
```
## Solution
Add the following types and methods to the prelude:
- `vec2`, `vec3`, `vec3a`, `vec4`
- `uvec2`, `uvec3`, `uvec4`
- `ivec2`, `ivec3`, `ivec4`
- `bvec2`, `bvec3`, `bvec3a`, `bvec4`, `bvec4a`
- `mat2`, `mat3`, `mat3a`, `mat4`
- `quat` (not sure if anyone uses this, but for consistency)
- `Vec3A`
- `BVec3A`, `BVec4A`
- `Mat3A`
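For illustration, after this change the shorthand constructors and
aligned types come straight from the prelude:
```rust
use bevy_math::prelude::*;

fn main() {
    // No extra glam imports needed for the aligned type or the shorthands.
    let a: Vec3A = vec3a(2.0, 1.0, -3.0);
    let b = uvec2(4, 8);
    let q = quat(0.0, 0.0, 0.0, 1.0);
    println!("{a} {b} {q}");
}
```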
I did not add the u16, i16, or f64 variants like `dvec2`, since there
are currently no existing types like those in the prelude.
The shorthand constructors are currently used a lot in some places in
Bevy, and not at all in others. In a follow-up, we might want to
consider if we have a preference for the shorthand, and make a PR to
change the codebase to use it more consistently.
# Objective
- Fixes: #15603
## Solution
- Add an unsafe `get_mut_by_id_unchecked` to `EntityMut` that borrows
`&self` instead of `&mut self`, thereby allowing access to multiple
components simultaneously.
## Testing
- a unit test function `get_mut_by_id_unchecked` was added.
---------
Co-authored-by: Mike <mike.hsu@gmail.com>
**NOTE: Also see https://github.com/bevyengine/bevy/pull/15548 for the
serializer equivalent**
# Objective
The current `ReflectDeserializer` and `TypedReflectDeserializer` use the
`TypeRegistration` and/or `ReflectDeserialize` of a given type in order
to determine how to deserialize a value of that type. However, there is
currently no way to statefully override deserialization of a given type
when using these two deserializers - that is, to have some local data in
the same scope as the `ReflectDeserializer`, and make use of that data
when deserializing.
The motivating use case for this came up when working on
[`bevy_animation_graph`](https://github.com/aecsocket/bevy_animation_graph/tree/feat/dynamic-nodes),
when loading an animation graph asset. The `AnimationGraph` stores
`Vec<Box<dyn NodeLike>>`s which we have to load in. Those `Box<dyn
NodeLike>`s may store handles such as `Handle<AnimationClip>`. I want to
trigger a `load_context.load()` for each such handle when the asset is
loaded.
```rs
#[derive(Reflect)]
struct Animation {
    clips: Vec<Handle<AnimationClip>>,
}
```
```rs
(
    clips: [
        "animation_clips/walk.animclip.ron",
        "animation_clips/run.animclip.ron",
        "animation_clips/jump.animclip.ron",
    ],
)
```
Currently, if this were deserialized from an asset loader, this would be
deserialized as a vec of `Handle::default()`s, which isn't useful since
we also need to `load_context.load()` those handles for them to be used.
With this processor field, a processor can detect when `Handle<T>`s are
being loaded, then actually load them in.
## Solution
```rs
trait ReflectDeserializerProcessor {
    fn try_deserialize<'de, D>(
        &mut self,
        registration: &TypeRegistration,
        deserializer: D,
    ) -> Result<Result<Box<dyn PartialReflect>, D>, D::Error>
    where
        D: serde::Deserializer<'de>;
}
```
```diff
- pub struct ReflectDeserializer<'a> {
+ pub struct ReflectDeserializer<'a, P = ()> { // also for TypedReflectDeserializer
      registry: &'a TypeRegistry,
+     processor: Option<&'a mut P>,
  }
```
```rs
impl<'a, P: ReflectDeserializerProcessor> ReflectDeserializer<'a, P> { // also for TypedReflectDeserializer
    pub fn with_processor(registry: &'a TypeRegistry, processor: &'a mut P) -> Self {
        Self {
            registry,
            processor: Some(processor),
        }
    }
}
```
This does not touch the existing `fn new`s.
This `processor` field is also added to all internal visitor structs.
When `TypedReflectDeserializer` runs, it will first try to deserialize a
value of this type by passing the `TypeRegistration` and deserializer to
the processor, and fall back to the default logic if the processor
declines. This processor runs the earliest and takes priority over all
other deserialization logic.
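For illustration, a processor that never overrides anything (and so
always falls back to the default logic) could look like this; the trait
shape is taken from above, while the import paths are assumptions:
```rs
use bevy_reflect::serde::ReflectDeserializerProcessor;
use bevy_reflect::{PartialReflect, TypeRegistration};

struct NoOpProcessor;

impl ReflectDeserializerProcessor for NoOpProcessor {
    fn try_deserialize<'de, D>(
        &mut self,
        _registration: &TypeRegistration,
        deserializer: D,
    ) -> Result<Result<Box<dyn PartialReflect>, D>, D::Error>
    where
        D: serde::Deserializer<'de>,
    {
        // Handing the deserializer back (the inner `Err`) tells the reflect
        // deserializer to fall back to its default logic for this type.
        Ok(Err(deserializer))
    }
}
```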
## Testing
Added unit tests to `bevy_reflect::serde::de`. Also using almost exactly
the same implementation in [my fork of
`bevy_animation_graph`](https://github.com/aecsocket/bevy_animation_graph/tree/feat/dynamic-nodes).
## Migration Guide
(Since I added `P = ()`, I don't think this is actually a breaking
change anymore, but I'll leave this in)
`bevy_reflect`'s `ReflectDeserializer` and `TypedReflectDeserializer`
now take a `ReflectDeserializerProcessor` as the type parameter `P`,
which allows you to customize deserialization for specific types when
they are found. However, the rest of the API surface (`new`) remains the
same.
<details>
<summary>Original implementation</summary>
Add `ReflectDeserializerProcessor`:
```rs
struct ReflectDeserializerProcessor<'p> {
    pub can_deserialize: Box<dyn FnMut(&TypeRegistration) -> bool + 'p>,
    pub deserialize: Box<
        dyn FnMut(
            &TypeRegistration,
            &mut dyn erased_serde::Deserializer,
        ) -> Result<Box<dyn PartialReflect>, erased_serde::Error>
            + 'p,
    >,
}
```
Along with `ReflectDeserializer::new_with_processor` and
`TypedReflectDeserializer::new_with_processor`. This does not touch the
public API of the existing `new` fns.
This is stored as an `Option<&mut ReflectDeserializerProcessor>` on the
deserializer and any of the private `-Visitor` structs, and when we
attempt to deserialize a value, we first pass it through this processor.
Also added a very comprehensive doc test to
`ReflectDeserializerProcessor`, which is actually a scaled down version
of the code for the `bevy_animation_graph` loader. This should give
users a good motivating example for when and why to use this feature.
### Why `Box<dyn ..>`?
When I originally implemented this, I added a type parameter to
`ReflectDeserializer` to determine the processor used, with `()` being
"no processor". However when using this, I kept running into rustc
errors where it failed to validate certain type bounds and led to
overflows. I then switched to a dynamic dispatch approach.
The dynamic dispatch should not be that expensive, nor should it be a
performance regression, since it's only used if there is `Some`
processor. (Note: I have not benchmarked this, I am just speculating.)
Also, it means that we don't infect the rest of the code with an extra
type parameter, which is nicer to maintain.
### Why the `'p` on `ReflectDeserializerProcessor<'p>`?
Without a lifetime here, the `Box`es would automatically become `Box<dyn
FnMut(..) + 'static>`. This makes them practically useless, since any
local data you would want to pass in must then be `'static`. In the
motivating example, you couldn't pass in that `&mut LoadContext` to the
function.
This means that the `'p` infects the rest of the Visitor types, but this
is acceptable IMO. This PR also elides the lifetimes in the `impl<'de>
Visitor<'de> for -Visitor` blocks where possible.
### Future possibilities
I think it's technically possible to turn the processor into a trait,
and make the deserializers generic over that trait. This would also open
the door to an API like:
```rs
type Seed;
fn seed_deserialize(&mut self, r: &TypeRegistration) -> Option<Self::Seed>;
fn deserialize(&mut self, r: &TypeRegistration, d: &mut dyn erased_serde::Deserializer, s: Self::Seed) -> ...;
```
A similar processor system should also be added to the serialization
side, but that's for another PR. Ideally, both PRs will be in the same
release, since one isn't very useful without the other.
## Testing
Added unit tests to `bevy_reflect::serde::de`. Also using almost exactly
the same implementation in [my fork of
`bevy_animation_graph`](https://github.com/aecsocket/bevy_animation_graph/tree/feat/dynamic-nodes).
## Migration Guide
`bevy_reflect`'s `ReflectDeserializer` and `TypedReflectDeserializer`
now take a second lifetime parameter `'p` for storing the
`ReflectDeserializerProcessor` field lifetimes. However, the rest of the
API surface (`new`) remains the same, so if you are not storing these
deserializers or referring to them with lifetimes, you should not have
to make any changes.
</details>
# Objective
`glam` has opted to rename `Vec2::angle_between` to `Vec2::angle_to`
because of the difference in semantics compared to `Vec3::angle_between`
and others, which return an unsigned angle in `[0, PI]`, whereas
`Vec2::angle_between` returns a signed angle in `[-PI, PI]`.
We should follow suit for `Rot2` in 0.15 to avoid further confusion.
Links:
-
https://github.com/bitshifter/glam-rs/issues/514#issuecomment-2143202294
- https://github.com/bitshifter/glam-rs/pull/524
## Migration Guide
`Rot2::angle_between` has been deprecated; use `Rot2::angle_to` instead.
The semantics of `Rot2::angle_between` will change in the future.
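For illustration, the call-site change is just a rename:
```diff
- let angle = rotation_a.angle_between(rotation_b);
+ let angle = rotation_a.angle_to(rotation_b);
```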
---------
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
Fixes #15940
## Solution
Remove the `pub use` and fix the compile errors.
Make `bevy_image` available as `bevy::image`.
## Testing
Feature Frenzy would be good here! Maybe I'll learn how to use it if I
have some time this weekend, or maybe a reviewer can use it.
## Migration Guide
Use `bevy_image` instead of `bevy_render::texture` items.
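For example, assuming `Image` is among the moved items:
```diff
- use bevy::render::texture::Image;
+ use bevy::image::Image;
```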
---------
Co-authored-by: chompaa <antony.m.3012@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Currently, when you attempt to change `cursor_grab_mode`, the new value
is cached whether the cursor grab succeeded or failed. This change
handles the `Result` returned by `set_cursor_grab` and reverts
`cursor_grab_mode` to the cached version in case of an error.
- Creates a way to handle #16237 and #16238
## Solution
- I changed the signature of `winit_windows`' `attempt_grab` to return
the `Result<(), ExternalError>` that winit's `set_cursor_grab` returns.
The system that calls `attempt_grab` now checks whether an error is
returned, and if there is one it sets the `grab_mode` back to the cached
version (similar to what `hit_test` does a few lines down).
## Testing
- I tested using this system that previously would not correctly lock
the mouse on Ubuntu/x11
```
pub fn lock_mouse(mut primary_window: Query<&mut Window, With<PrimaryWindow>>) {
    let window = &mut primary_window.single_mut();
    if window.focused {
        window.cursor_options.grab_mode = CursorGrabMode::Confined;
    } else {
        window.cursor_options.grab_mode = CursorGrabMode::None;
    }
}
```
- I only tested on Ubuntu with x11
# Objective
Fixes #15928
## Solution
Return an error instead of panicking.
## Testing
I don't know if we need to add a test for this. It is pretty
straightforward.
# Objective
- wgpu 0.20 made workgroup variables no longer zero-initialized by
default. This broke some applications (cough Foresight cough) and now we
work around it. wgpu exposes a compilation option that zero-initializes
workgroup memory by default, but Bevy does not expose it.
## Solution
- expose the compilation option wgpu gives us
## Testing
- ran examples: 3d_scene, compute_shader_game_of_life, gpu_readback,
lines, specialized_mesh_pipeline. they all work
- confirmed fix for our own problems
---
## Migration Guide
- Add `zero_initialize_workgroup_memory: false,` to
`ComputePipelineDescriptor` or `RenderPipelineDescriptor` structs to
preserve the 0.14 behaviour, or `zero_initialize_workgroup_memory:
true,` to restore the Bevy 0.13 behaviour (see the sketch below).
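A sketch of what that looks like on a pipeline descriptor (the label and
surrounding fields are placeholders):
```diff
 RenderPipelineDescriptor {
     label: Some("custom_pipeline".into()),
     // ...other descriptor fields as before...
+    // `false` keeps the 0.14 behaviour; `true` restores the 0.13 behaviour.
+    zero_initialize_workgroup_memory: false,
 }
```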
# Objective
- Fixes #16235
## Solution
- Both Bevy and AccessKit export a `Node` struct; to reduce confusion,
Bevy will no longer re-export AccessKit from `bevy_a11y`.
## Testing
- Tested locally
## Migration Guide
```diff
# main.rs
- use bevy_a11y::{
-     accesskit::{Node, Rect, Role},
-     AccessibilityNode,
- };
+ use bevy_a11y::AccessibilityNode;
+ use accesskit::{Node, Rect, Role};
# Cargo.toml
+ accesskit = "0.17"
```
- Users will need to add `accesskit = "0.17"` to the dependencies
section of their `Cargo.toml` file and update their `accesskit` use
statements to come directly from the external crate instead of
`bevy_a11y`.
- Make sure to keep the versions of `accesskit` aligned with the
versions Bevy uses.
# Objective
- Fixed issue where `thiserror` `#[error(...)]` attributes were
improperly converted to `derive_more` `#[display(...)]` equivalents in
certain cases with a tuple struct/enum variant.
## Solution
- Used `re/#\[display\(.*\{[0-9]+\}.*\)\]/` to find occurrences of using
`{0}` where `{_0}` was intended (checked for other field indexes too) and
updated accordingly.
## Testing
- `cargo check`
- CI
## Notes
This was discovered by @dtolnay in [this
comment](https://github.com/bevyengine/bevy/pull/15772#discussion_r1833730555).
# Objective
`AudioPlayer::<AudioSource>(assets.load("audio.mp3"))` is awkward and
complicated to type because the `AudioSource` generic type cannot be
elided. This is especially annoying because `AudioSource` is used in the
majority of cases. Most users don't need to think about it.
## Solution
Add an `AudioPlayer::new()` function that is hard-coded to
`AudioSource`, allowing `AudioPlayer::new(assets.load("audio.mp3"))`.
Prefer using that in the relevant places.
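Both forms spawn the same component; a quick comparison:
```rust
use bevy::prelude::*;

fn play_sound(mut commands: Commands, asset_server: Res<AssetServer>) {
    // Before: the AudioSource generic cannot be elided.
    commands.spawn(AudioPlayer::<AudioSource>(asset_server.load("audio.mp3")));
    // After: AudioPlayer::new is hard-coded to AudioSource.
    commands.spawn(AudioPlayer::new(asset_server.load("audio.mp3")));
}
```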
# Objective
In the existing implementation, additive blending effectively treats the
node with least index specially by basically forcing its weight to be
`1.0` regardless of what its computed weight would be (based on the
weights in the `AnimationGraph` and `AnimationPlayer`).
Arguably this makes some amount of sense, because the "base" animation
is often one which was not authored to be used additively, meaning that
its sampled values are interpreted absolutely rather than as deltas.
However, this also leads to strange behavior with respect to animation
masks: if the "base" animation is masked out on some target, then the
next node is treated as the "base" animation, despite the fact that it
would normally be interpreted additively, and the weight of that
animation is thrown away as a result.
This is all kind of weird and revolves around special treatment (if the
behavior is even really intentional in the first place). From a
mathematical standpoint, there is nothing special about how the "base"
animation must be treated other than having a weight of 1.0 under an
`Add` node, which is something that the user can do without relying on
some bizarre corner-case behavior of the animation system — this is the
only present situation under which weights are discarded.
This PR changes this behavior so that the weight of every node is
incorporated. In other words, for an animation graph that looks like
this:
```text
┌───────────────┐
│Base clip ┼──┐
│ 0.5 │ │
└───────────────┘ │
┌───────────────┐ │ ┌───────────────┐ ┌────┐
│Additive clip 1┼──┼─►┤Additive blend ┼────►│Root│
│ 0.1 │ │ │ 1.0 │ └────┘
└───────────────┘ │ └───────────────┘
┌───────────────┐ │
│Additive clip 2┼──┘
│ 0.2 │
└───────────────┘
```
Previously, the result would have been
```text
base_clip + 0.1 * additive_clip_1 + 0.2 * additive_clip_2
```
whereas now it would be
```text
0.5 * base_clip + 0.1 * additive_clip_1 + 0.2 * additive_clip_2
```
and in the scenario where `base_clip` is masked out:
```text
additive_clip_1 + 0.2 * additive_clip_2
```
vs.
```text
0.1 * additive_clip_1 + 0.2 * additive_clip_2
```
## Solution
For background, the way that the additive blending procedure works is
something like this:
- During graph traversal, the node values and weights of the children
are pushed onto the evaluator `stack`. The traversal order guarantees
that the item with least node index will be on top.
- Once we reach the `Add` node itself, we start popping off the `stack`
and into the evaluator's `blend_register`, which is an accumulator
holding up to one weight-value pair:
- If the `blend_register` is empty, it is filled using data from the top
of the `stack`.
- Otherwise, the `blend_register` is combined with data popped from the
`stack` and updated.
In the example above, the additive blending steps would look like this
(with the pre-existing implementation):
1. The `blend_register` is empty, so we pop `(base_clip, 0.5)` from the
top of the `stack` and put it in. Now the value of the `blend_register`
is `(base_clip, 0.5)`.
2. The `blend_register` is non-empty: we pop `(additive_clip_1, 0.1)`
from the top of the `stack` and combine it additively with the value in
the `blend_register`, forming `(base_clip + 0.1 * additive_clip_1, 0.6)`
in the `blend_register` (the carried weight value goes unused).
3. The `blend_register` is non-empty: we pop `(additive_clip_2, 0.2)`
from the top of the `stack` and combine it additively with the value in
the `blend_register`, forming `(base_clip + 0.1 * additive_clip_1 + 0.2
* additive_clip_2, 0.8)` in the `blend_register`.
The solution in this PR changes step 1: the `base_clip` is multiplied by
its weight as it is added to the `blend_register` in the first place,
yielding `0.5 * base_clip + 0.1 * additive_clip_1 + 0.2 *
additive_clip_2` as the final result.
### Note for reviewers
It might be tempting to look at the code, which contains a segment that
looks like this:
```rust
if additive {
    current_value = A::blend(
        [
            BlendInput {
                weight: 1.0, // <--
                value: current_value,
                additive: true,
            },
            BlendInput {
                weight: weight_to_blend,
                value: value_to_blend,
                additive: true,
            },
        ]
        .into_iter(),
    );
}
```
and conclude that the explicit value of `1.0` is responsible for
overwriting the weight of the base animation. This is incorrect.
Rather, this additive blend has to be written this way because it is
multiplying the *existing value in the blend register* by 1 (i.e. not
doing anything) before adding the next value to it. Changing this to
another quantity (e.g. the existing weight) would cause the value in the
blend register to be spuriously multiplied down.
## Testing
Tested on `animation_masks` example. Checked `morph_weights` example as
well.
## Migration Guide
I will write a migration guide later if this change is not included in
0.15.
# Objective
After #12929 we no longer have methods to get a component or its ticks
for a previously obtained table column.
It's possible to use a lower-level API by indexing the slice, but then
it isn't possible to construct `ComponentTicks`.
## Solution
Make `ComponentTicks` fields public. They don't hold any invariants and
you can't get a mutable reference to the struct in Bevy.
I also removed the getters since they are no longer needed.
## Testing
- I tested the compilation
---
## Migration Guide
- Instead of using `ComponentTicks::last_changed_tick` and
`ComponentTicks::added_tick` methods, access fields directly.
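A sketch of the migration, assuming the public fields are named `added`
and `changed`:
```rust
use bevy_ecs::component::{ComponentTicks, Tick};

fn added_and_changed(ticks: &ComponentTicks) -> (Tick, Tick) {
    // Previously: (ticks.added_tick(), ticks.last_changed_tick())
    (ticks.added, ticks.changed)
}
```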
# Objective
Re-enable some tests in `entity_ref.rs` that are marked as `#[ignore]`,
but that pass after #14561.
## Solution
Remove `#[ignore]` from those tests.
# Objective
Automatic image sizing for image nodes isn't working because the
`ContentSize` requirement for `UiImage` got lost in a merge again.
Fixes #16239, fixes #16240.
Fixes the missing images seen in #16241.
## Solution
Require `ContentSize` for `UiImage`.
# Objective
- Fixes #16254
- Fix building for Wasm without the `custom_cursor` feature
## Solution
- Properly gate `CustomCursor::Url`, which only exists on Wasm, and only
when `custom_cursor` is enabled
## Testing
- `cargo check --target wasm32-unknown-unknown -p bevy_winit`
# Objective
- Attempts to fix #16042
## Solution
- Added a new `RemoteSystem` `SystemSet` for the BRP systems.
- Changed the schedule on which these systems run from `Update` to
`Last`.
## Testing
- I did not test these changes and would appreciate a hand in doing so.
I assume it would be good to test that you can order against these
systems easily now.
---
## Migration Guide
- `process_remote_requests`, `process_ongoing_watching_requests` and
`remove_closed_watching_requests` now run in the `Last` schedule. Use
the `RemoteSystem` `SystemSet` if you need to order your systems against
them, as in the sketch below.
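A hedged ordering sketch (assumes `RemoteSystem` is a unit-struct
`SystemSet` exported from `bevy_remote`, with the `bevy_remote` feature
enabled):
```rust
use bevy::prelude::*;
use bevy_remote::{RemotePlugin, RemoteSystem};

fn main() {
    App::new()
        .add_plugins((DefaultPlugins, RemotePlugin::default()))
        // The BRP systems now run in Last, so order against them there.
        .add_systems(Last, after_brp.after(RemoteSystem))
        .run();
}

fn after_brp() {}
```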
# Objective
Addressing a suggestion I made in Discord: store gamepad name as a
`Name` component.
Advantages:
- Will be nicely displayed in inspector / editor.
- Easier to spawn in tests, just `world.spawn(Gamepad::default())`.
## Solution
The `Gamepad` component now stores only vendor and product IDs, and
`Name` stores the gamepad name.
Since `GamepadInfo` is no longer necessary, I removed it and merged its
fields into the connection event.
## Testing
- Run unit tests.
---
## Migration Guide
- `GamepadInfo` no longer exists:
  - The name is now accessible via the `Name` component.
  - Other information is available on the `Gamepad` component directly.
- `GamepadConnection::Connected` now stores all info fields directly.
# Objective
Use the same pattern when creating `TransparentUi` items, where the
`sort_key` is the `UiNode` stack index plus some offset.
## Solution
Refactored to follow same pattern.
## Testing
Ran a few UI examples.
## Doubts
Maybe `stack_z_offsets::BACKGROUND_COLOR` should be renamed. It is used
for `ExtractedUiNode`, which is not only used for "background color":
it's also used to render borders, images and text (I think).
In `bevy_mod_picking` events are driven by several interlocking state
machines, which read and write events, and share state in a few common
resources. When I merged these state machines into one to make event
ordering work properly, I combined this state and hid it in a `Local`.
This PR exposes the state in a resource again. Also adds a simple little
API for it. Useful for adding debug UI.
# Objective
Exposes a means to create an asset directory (and its parent
directories). I wasn't sure whether we also want a variant that creates
the directory without its parents (i.e. `mkdir` instead of `mkdir -p`).
Fixes https://github.com/bevyengine/bevy_editor_prototypes/issues/144
# Objective
- Bumps accesskit and accesskit_winit dependencies
## Solution
- Fixes several breaking API changes introduced in accesskit 0.23.
## Testing
- Tested with the ui example and seems to work comparably
# Objective
Closes #16221.
## Solution
- Make `Gamepad` fields public and remove delegates / getters.
- Move `impl Into` to `Axis` methods (delegates for `Axis` used `impl
Into` to allow passing both `GamepadAxis` and `GamepadButton`).
- Improve docs.
## Testing
- I ran the tests.
Not sure if the migration guide is needed, since it's a feature from RC,
but I wrote it just in case.
---
## Migration Guide
- `Gamepad` fields are now public.
- Instead of using `Gamepad` delegates like `Gamepad::just_pressed`,
call these methods directly on the fields.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
GlobalTransform's current methods make it unintuitive, long and clunky
to access just the rotation or just the scale.
## Solution
Add dedicated `just_rotation()` and `scale()` methods to access just
these properties.
I'm not sure about the naming; I chose `just_rotation()` to try to
indicate that there is some waste, since it also computes the other
fields.
## Testing
- Did you test these changes? If so, how?
I tried logging the methods with a rotating and scaling cube and the
values were correct.
- Are there any parts that need more testing?
My methods are based on existing bevy/glam methods, so they should be
correct from the get-go.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
Probably the easiest is using the 3d_rotations example, adding scaling
to it and then logging the methods I added
---
## Showcase
```rust
fn log(gt_query: Query<&GlobalTransform>) {
    for global_transform in gt_query.iter() {
        println!("{} {}", global_transform.just_rotation(), global_transform.scale());
    }
}
```
---------
Co-authored-by: Sigma-dev <antonin.programming@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- fix formatting issue in "mesh_view_binding.wgsl"
_Note: as the naga-oil preprocessor matches the whole line when finding
an `#endif`, this is just for external formatting tools and
consistency._
## Solution
Trivial change: add `//` before the closing comment of the `#endif`.
# Objective
GPU-based mesh uniform construction in the `GpuPreprocessNode` is
currently part of `Core3d`. The node iterates all views and schedules
the uniform construction for each, so:
- when there are multiple 3d cameras, it runs multiple times on each
view
- if a view wants to render meshes but doesn't use the `Core3d` graph,
the camera must run later than at least one `Core3d`-based camera (or
add the node to its own graph, duplicating the work)
- If views want to share mesh uniforms there is no way to avoid running
the preprocessing for every view
## Solution
- move the node to the top level of the rendergraph, before the camera
driver node
- make `PreprocessBindGroup` cloneable, and add a `SkipGpuPreprocessing`
component to allow opting out per view
# Objective
Currently, if we have two cameras with the same output texture, one with
`CameraOutputMode::Write` and one with `CameraOutputMode::Skip`, it is
possible for the `CameraOutputMode::Write` camera to be assigned alpha
blending (which is the fallback blending when multiple cameras write to
the same output texture), although it is the only camera writing to the
output texture. This may or may not happen on every restart of the app,
because the camera iteration order in `prepare_view_upscaling_pipelines`
isn't consistent. Since this is random behaviour, I consider this a bug
and didn't add a migration guide.
## Solution
In `prepare_view_upscaling_pipelines`, make sure we don't consider
cameras with `CameraOutputMode::Skip` to be outputting something to the
output texture.
## Testing
I ran a few examples to make sure nothing obvious is broken. There is no
example using `CameraOutputMode::Skip`, so I only tested the change in
my own app where this was relevant, which isn't public, however.
# Objective
- Choose LOD based on normal simplification error in addition to
position error
- Update meshoptimizer to 0.22, which has a bunch of simplifier
improvements
## Testing
- Did you test these changes? If so, how?
- Visualize normals, and compare LOD changes before and after. Normals
no longer visibly change as the LOD cut changes.
- Are there any parts that need more testing?
- No
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the meshlet example in this PR and on main and move around to
change the LOD cut. Before running each example, in
`meshlet_mesh_material.wgsl`, replace `let color = vec3(rand_f(&rng),
rand_f(&rng), rand_f(&rng));` with `let color =
(vertex_output.world_normal + 1.0) / 2.0;`. Make sure to download the
appropriate bunny asset for each branch!
# Objective
1. UI texture slicing chops and scales an image to fit the size of a
node and isn't meant to place any constraints on the size of the node
itself, but because the required-components changes added `ImageSize`
and `ContentSize` requirements for nodes with `UiImage`, texture-sliced
nodes are laid out using an `ImageMeasure`.
2. In 0.14 users could spawn a `(UiImage, NodeBundle)` which would
display an image stretched to fill the UI node's bounds, ignoring the
image's intrinsic size. Now that `UiImage` requires `ContentSize`,
there's no option to display an image without its size placing
constraints on the UI layout (unless you force the `Node` to a fixed
size, but that's not a solution).
3. It's desirable that `Sprite` and `UiImage` share similar APIs.
Fixes #16109
## Solution
* Remove the `Component` impl from `ImageScaleMode`.
* Add a `Stretch` variant to `ImageScaleMode`.
* Add a field `scale_mode: ImageScaleMode` to `Sprite`.
* Add a field `mode: UiImageMode` to `UiImage`.
* Add an enum `UiImageMode` similar to `ImageScaleMode` but with
additional UI specific variants.
* Remove the queries for `ImageScaleMode` from Sprite and UI extraction,
and refer to the new fields instead.
* Change `ui_layout_system` to update measure funcs on any change to
`ContentSize`s to enable manual clearing without removing the component.
* Don't add a measure unless `UiImageMode::Auto` is set in
`update_image_content_size_system`. Mutably deref the `Mut<ContentSize>`
if the `UiImage` is changed to force removal of any existing measure
func.
## Testing
Remove all the constraints from the ui_texture_slice example:
```rust
//! This example illustrates how to create buttons with their textures sliced
//! and kept in proportion instead of being stretched by the button dimensions
use bevy::{
color::palettes::css::{GOLD, ORANGE},
prelude::*,
winit::WinitSettings,
};
fn main() {
App::new()
.add_plugins(DefaultPlugins)
// Only run the app when there is user input. This will significantly reduce CPU/GPU use.
.insert_resource(WinitSettings::desktop_app())
.add_systems(Startup, setup)
.add_systems(Update, button_system)
.run();
}
fn button_system(
mut interaction_query: Query<
(&Interaction, &Children, &mut UiImage),
(Changed<Interaction>, With<Button>),
>,
mut text_query: Query<&mut Text>,
) {
for (interaction, children, mut image) in &mut interaction_query {
let mut text = text_query.get_mut(children[0]).unwrap();
match *interaction {
Interaction::Pressed => {
**text = "Press".to_string();
image.color = GOLD.into();
}
Interaction::Hovered => {
**text = "Hover".to_string();
image.color = ORANGE.into();
}
Interaction::None => {
**text = "Button".to_string();
image.color = Color::WHITE;
}
}
}
}
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
let image = asset_server.load("textures/fantasy_ui_borders/panel-border-010.png");
let slicer = TextureSlicer {
border: BorderRect::square(22.0),
center_scale_mode: SliceScaleMode::Stretch,
sides_scale_mode: SliceScaleMode::Stretch,
max_corner_scale: 1.0,
};
// ui camera
commands.spawn(Camera2d);
commands
.spawn(Node {
width: Val::Percent(100.0),
height: Val::Percent(100.0),
align_items: AlignItems::Center,
justify_content: JustifyContent::Center,
..default()
})
.with_children(|parent| {
for [w, h] in [[150.0, 150.0], [300.0, 150.0], [150.0, 300.0]] {
parent
.spawn((
Button,
Node {
// width: Val::Px(w),
// height: Val::Px(h),
// horizontally center child text
justify_content: JustifyContent::Center,
// vertically center child text
align_items: AlignItems::Center,
margin: UiRect::all(Val::Px(20.0)),
..default()
},
UiImage::new(image.clone()),
ImageScaleMode::Sliced(slicer.clone()),
))
.with_children(|parent| {
// parent.spawn((
// Text::new("Button"),
// TextFont {
// font: asset_server.load("fonts/FiraSans-Bold.ttf"),
// font_size: 33.0,
// ..default()
// },
// TextColor(Color::srgb(0.9, 0.9, 0.9)),
// ));
});
}
});
}
```
This should result in a blank window, since without any constraints the
texture slice image nodes should be zero-sized. But in main the image
nodes are given the size of the underlying unsliced source image
`textures/fantasy_ui_borders/panel-border-010.png`:
<img width="321" alt="slicing"
src="https://github.com/user-attachments/assets/cbd74c9c-14cd-4b4d-93c6-7c0152bb05ee">
For this PR, you need to change the lines:
```
UiImage::new(image.clone()),
ImageScaleMode::Sliced(slicer.clone()),
```
to
```
UiImage::new(image.clone()).with_mode(UiImageMode::Sliced(slicer.clone())),
```
and then nothing should be rendered, as desired.
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
The schedule graph can easily confirm whether a set is contained or not.
This helps me in my personal project where I write an extension trait
for `Schedule` and I want to configure a specific set in its methods.
The set in question has a run condition though and I don't want to add
that condition to the same schedule as many times as the trait methods
are called. Since the non-pub set is unknown to the schedule until then,
a `contains_set` is sufficient.
It is probably trivial to add a method that returns an `Option<NodeId>`
as well but as I personally don't need it I did not add that. If it is
desired I can do so here though. It might be unneeded to have a
`contains_set` then because one could check `is_some` on the returned id
in that case.
An argument against that is that future changes may be easier if only a
`contains_set` needs to be ported.
## Solution
Added `ScheduleGraph::contains_set`.
## Testing
I put the below showcase code into a temporary unit test and it worked.
If wanted, I can add it as a test too, but I did not see tests for
other, somewhat more complicated methods.
---
## Showcase
```rs
#[derive(ScheduleLabel, Debug, Default, Clone, Copy, PartialEq, Eq, Hash)]
struct MySchedule;
#[derive(SystemSet, Debug, Default, Clone, Copy, PartialEq, Eq, Hash)]
struct MySet;
let mut schedule = Schedule::new(MySchedule);
assert_eq!(schedule.graph().contains_set(MySet), false);
schedule.configure_sets(MySet);
assert_eq!(schedule.graph().contains_set(MySet), true);
```
# Objective
- `CircularSegment` and `CircularSector` are well-defined 2D shapes with
both an area and a perimeter.
## Solution
- This PR implements `perimeter` for both and moves the existing `area`
functions into the `Measured2d` implementations.
## Testing
- The `arc_tests` have been extended to also check for perimeters.
# Objective
Currently there's no way to change the window's cursor icon with the
`custom_cursor` feature **disabled**. You should still be able to set
system cursor icons.
Connections:
- https://github.com/bevyengine/bevy/pull/15649
## Solution
Move some `custom_cursor` feature gates around, so as to expose the
`CursorIcon` type again.
Note this refactoring was mainly guided by the compiler warnings -- I
shouldn't have missed anything, but FYI.
## Testing
Disabled the `custom_cursor` feature, ran the `window_settings` example.
# Objective
#14273 changed `Msaa` to be a component rather than a resource. However,
the documentation still says that it is a resource. This tripped me up
during migration to 0.15 until I looked at the type definition.
Additionally, the docs have some unnecessary repetition and some grammar
mistakes, and they don't link to camera documentation.
## Solution
Fix up the docs!
# Objective
When merging two meshes, we need to find the offset of indices for the
second mesh. Currently this is done by inserting empty positions if
positions are not set.
Although in practice it is not an issue, this does not feel right:
- We did not have positions before, so why do we have positions after
the merge?
- Moreover, if positions are not set but UVs are not empty, the computed
offset will be zero, while it should be equal to the number of UVs.
## Solution
Use `Mesh::count_vertices` to find the number of vertices.
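The idea, as a small sketch:
```rust
use bevy::prelude::*;

// The index offset for the merged-in mesh is simply the current vertex count,
// regardless of which attributes (positions, UVs, ...) are present.
fn index_offset(mesh: &Mesh) -> u32 {
    mesh.count_vertices() as u32
}
```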
## Testing
Looking hard.