# Objective
Applies feedback from previous PR #15135 'cause it got caught up in the
merge train 🚂
I couldn't resist including roll, both for completeness and due to
playing too many games that implemented it as a child.
cc: @janhohenheim
# Objective
Fixes #15079, repairing the `game_menu` example.
## Solution
- Changed the target component for the color updates from `UiImage` to
`BackgroundColor` (see the sketch below).
- Changed the width of the `button_style` to `300px` to prevent overlap
with the text.
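A rough sketch of the resulting interaction system (simplified; the color constants mirror the ones used in the example):

```rust
use bevy::prelude::*;

const NORMAL_BUTTON: Color = Color::srgb(0.15, 0.15, 0.15);
const HOVERED_BUTTON: Color = Color::srgb(0.25, 0.25, 0.25);
const PRESSED_BUTTON: Color = Color::srgb(0.35, 0.75, 0.35);

// Write the button colors to `BackgroundColor` (previously `UiImage`) on interaction changes.
fn button_color_system(
    mut buttons: Query<(&Interaction, &mut BackgroundColor), (Changed<Interaction>, With<Button>)>,
) {
    for (interaction, mut color) in &mut buttons {
        *color = match interaction {
            Interaction::Pressed => PRESSED_BUTTON.into(),
            Interaction::Hovered => HOVERED_BUTTON.into(),
            Interaction::None => NORMAL_BUTTON.into(),
        };
    }
}
```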
## Testing
Checked that buttons now correctly update their background color on
hover/exit/press.
---
## Showcase
https://github.com/user-attachments/assets/8f7ede9b-c271-4b59-91f9-27d9e3db1429
# Objective
- Fixes https://github.com/bevyengine/bevy/pull/14991. The `cosmic-text`
shape run cache requires manual cleanup for old text that no longer
needs to be cached.
## Solution
- Add a system to trim the cache.
- Add an `average fps` indicator to the `text_debug` example.
## Testing
Tested with `cargo run --example text_debug`.
- **No shape run cache**: 82fps with ~1fps variance.
- **Shape run cache no trim**: 90-100fps with ~2-4fps variance
- **Shape run cache trim age = 1**: 90-100fps with ~2-8fps variance
- **Shape run cache trim age = 2**: 90-100fps with ~2-4fps variance
- **Shape run cache trim age = 2000**: 80-120fps with ~2-6fps variance
The shape run cache seems to increase average FPS but also increases
frame time variance (when there is dynamic text).
# Objective
- I've seen quite a few people on Discord copy-paste the camera code of
the first-person example and then run into problems with the pitch.
- ~~Additionally, the code is framerate-dependent.~~ It's not; see the
comment in the PR.
## Solution
- Make the code good enough to be copy-pasteable
- ~~Use `dt` to make the code framerate-independent~~ Add comment
explaining why we don't use `dt`
- Clamp the pitch
- Move the camera sensitivity into a component for better
configurability
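A minimal sketch of the resulting pattern (`CameraSensitivity` and the clamping constant are illustrative names, not necessarily the exact example code):

```rust
use bevy::input::mouse::MouseMotion;
use bevy::prelude::*;

/// Sensitivity stored in a component so it can be configured per camera.
#[derive(Component)]
struct CameraSensitivity(Vec2);

fn rotate_camera(
    mut mouse_motion: EventReader<MouseMotion>,
    mut player: Query<(&mut Transform, &CameraSensitivity)>,
) {
    let (mut transform, sensitivity) = player.single_mut();
    let delta: Vec2 = mouse_motion.read().map(|event| event.delta).sum();
    if delta == Vec2::ZERO {
        return;
    }
    // No `delta_seconds()` here: mouse deltas are an accumulated distance, not a
    // rate, so scaling by frame time would make the camera framerate-dependent.
    let (mut yaw, mut pitch, roll) = transform.rotation.to_euler(EulerRot::YXZ);
    yaw -= delta.x * sensitivity.0.x;
    pitch -= delta.y * sensitivity.0.y;
    // Clamp the pitch so the camera can't flip over the poles.
    const PITCH_LIMIT: f32 = std::f32::consts::FRAC_PI_2 - 0.01;
    pitch = pitch.clamp(-PITCH_LIMIT, PITCH_LIMIT);
    transform.rotation = Quat::from_euler(EulerRot::YXZ, yaw, pitch, roll);
}
```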
## Testing
Didn't run the example again, but the code is straight from another
project I have open, so I'm not worried.
---------
Co-authored-by: Antony <antony.m.3012@gmail.com>
# Objective
The names of numerous rendering components in Bevy are inconsistent and
a bit confusing. Relevant names include:
- `AutoExposureSettings`
- `AutoExposureSettingsUniform`
- `BloomSettings`
- `BloomUniform` (no `Settings`)
- `BloomPrefilterSettings`
- `ChromaticAberration` (no `Settings`)
- `ContrastAdaptiveSharpeningSettings`
- `DepthOfFieldSettings`
- `DepthOfFieldUniform` (no `Settings`)
- `FogSettings`
- `SmaaSettings`, `Fxaa`, `TemporalAntiAliasSettings` (really
inconsistent??)
- `ScreenSpaceAmbientOcclusionSettings`
- `ScreenSpaceReflectionsSettings`
- `VolumetricFogSettings`
Firstly, there's a lot of inconsistency between `Foo`/`FooSettings` and
`FooUniform`/`FooSettingsUniform` and whether names are abbreviated or
not.
Secondly, the `Settings` post-fix seems unnecessary and a bit confusing
semantically, since it makes it seem like the component is mostly just
auxiliary configuration instead of the core *thing* that actually
enables the feature. This will be an even bigger problem once bundles
like `TemporalAntiAliasBundle` are deprecated in favor of required
components, as users will expect a component named `TemporalAntiAlias`
(or similar), not `TemporalAntiAliasSettings`.
## Solution
Drop the `Settings` post-fix from the component names, and change some
names to be more consistent.
- `AutoExposure`
- `AutoExposureUniform`
- `Bloom`
- `BloomUniform`
- `BloomPrefilter`
- `ChromaticAberration`
- `ContrastAdaptiveSharpening`
- `DepthOfField`
- `DepthOfFieldUniform`
- `DistanceFog`
- `Smaa`, `Fxaa`, `TemporalAntiAliasing` (note: we might want to change
to `Taa`, see "Discussion")
- `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflections`
- `VolumetricFog`
I kept the old names as deprecated type aliases to make migration a bit
less painful for users. We should remove them after the next release.
(And let me know if I should just... not add them at all)
I also added some very basic docs for a few types where they were
missing, like on `Fxaa` and `DepthOfField`.
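As a sketch, each alias follows roughly this pattern (the deprecation note wording here is illustrative):

```rust
// Stand-in for the real component, which lives in `bevy_core_pipeline`.
pub struct Bloom;

/// Deprecated alias kept around for one release to ease migration.
#[deprecated(note = "Renamed to `Bloom`")]
pub type BloomSettings = Bloom;
```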
## Discussion
- `TemporalAntiAliasing` is still inconsistent with `Smaa` and `Fxaa`.
Consensus [on
Discord](https://discord.com/channels/691052431525675048/743663924229963868/1280601167209955431)
seemed to be that renaming to `Taa` would probably be fine, but I think
it's a bit more controversial, and it would've required renaming a lot
of related types like `TemporalAntiAliasNode`,
`TemporalAntiAliasBundle`, and `TemporalAntiAliasPlugin`, so I think
it's better to leave to a follow-up.
- I think `Fog` should probably have a more specific name like
`DistanceFog` considering it seems to be distinct from `VolumetricFog`.
~~This should probably be done in a follow-up though, so I just removed
the `Settings` post-fix for now.~~ (done)
---
## Migration Guide
Many rendering components have been renamed for improved consistency and
clarity.
- `AutoExposureSettings` → `AutoExposure`
- `BloomSettings` → `Bloom`
- `BloomPrefilterSettings` → `BloomPrefilter`
- `ContrastAdaptiveSharpeningSettings` → `ContrastAdaptiveSharpening`
- `DepthOfFieldSettings` → `DepthOfField`
- `FogSettings` → `DistanceFog`
- `SmaaSettings` → `Smaa`
- `TemporalAntiAliasSettings` → `TemporalAntiAliasing`
- `ScreenSpaceAmbientOcclusionSettings` → `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflectionsSettings` → `ScreenSpaceReflections`
- `VolumetricFogSettings` → `VolumetricFog`
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Add examples for zooming (and orbiting) orthographic and perspective
cameras.
I'm pretty green with 3D, so please treat with suspicion! I note that
if/when #15075 is merged, `.scale` will go away so this example uses
`.scaling_mode`.
Closes #2580
# Objective
Hello! I am adopting #11022 to resolve conflicts with `main`. tldr: this
removes `scale` in favour of `scaling_mode`. Please see the original PR
for explanation/discussion.
Also relates to #2580.
## Migration Guide
Replace all uses of `scale` with `scaling_mode`, keeping in mind that
`scale` is (was) a multiplier. For example, replace
```rust
scale: 2.0,
scaling_mode: ScalingMode::FixedHorizontal(4.0),
```
with
```rust
scaling_mode: ScalingMode::FixedHorizontal(8.0),
```
---------
Co-authored-by: Stepan Koltsov <stepan.koltsov@gmail.com>
# Objective
I noticed some issues in the `screenshot` example:
1. The cursor icon won't return from `SystemCursorIcon::Progress` to the
default icon, even though screenshot saving is done.
2. Panics when exiting the window: ``called `Result::unwrap()` on an `Err`
value:
NoEntities("bevy_ecs::query::state::QueryState<bevy_ecs::entity::Entity,
bevy_ecs::query::filter::With<bevy_window::window::Window>>")``
## Solution
1. Caused by the cursor-updating system not responding to [`CursorIcon`
component
removal](5cfcbf47ed/examples/window/screenshot.rs (L38)).
I believe it should, so change it to react to
`RemovedComponents<CursorIcon>` (a suggestion; sketched below).
2. Use `get_single` for the window.
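A hedged sketch of the suggested changes (the `CursorIcon` import path is an assumption on my part):

```rust
use bevy::prelude::*;
use bevy::winit::cursor::CursorIcon; // assumed module path for the component

// React to the `CursorIcon` component being removed, and use `get_single`
// so the system doesn't panic once the window has been closed.
fn reset_cursor(
    mut removed_cursors: RemovedComponents<CursorIcon>,
    windows: Query<Entity, With<Window>>,
) {
    let Ok(_window) = windows.get_single() else {
        return;
    };
    for entity in removed_cursors.read() {
        // Restore the default cursor for `entity` here.
        info!("CursorIcon removed from {entity}; resetting to the default cursor");
    }
}
```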
## Testing
- run screenshot example
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
There aren't any examples of how to draw a ui material with borders.
## Solution
Add border rendering to the `ui_material` example's shader.
## Showcase
<img width="395" alt="bordermat"
src="https://github.com/user-attachments/assets/109c59c1-f54b-4542-96f7-acff63f5057f">
---------
Co-authored-by: charlotte <charlotte.c.mcelwain@gmail.com>
# Objective
Smaller scoped version of #13375 without the `_mut` variants which
currently have unsoundness issues.
## Solution
Same as #13375, but without the `_mut` variants.
## Testing
- The same test from #13375 is reused.
---
## Migration Guide
- Renamed `FilteredEntityRef::components` to
`FilteredEntityRef::accessed_components` and
`FilteredEntityMut::components` to
`FilteredEntityMut::accessed_components`.
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Co-authored-by: Periwink <charlesbour@gmail.com>
Adopted PR from dmlary, all credit to them!
https://github.com/bevyengine/bevy/pull/9915
Original description:
# Objective
The default value for `near` in `OrthographicProjection` should be
different for 2d & 3d.
For 2d using `near = -1000` allows bevy users to build up scenes using
background `z = 0`, and foreground elements `z > 0` similar to css.
However in 3d `near = -1000` results in objects behind the camera being
rendered. Using `near = 0` works for 3d, but forces 2d users to assign
`z <= 0` for rendered elements, putting the background at some arbitrary
negative value.
There is no common value for `near` that doesn't result in a footgun or
usability issue for either 2d or 3d, so they should have separate
values.
There was discussion about other options in the discord
[0](https://discord.com/channels/691052431525675048/1154114310042292325),
but splitting `default()` into `default_2d()` and `default_3d()` seemed
like the lowest cost approach.
Related/past work https://github.com/bevyengine/bevy/issues/9138,
https://github.com/bevyengine/bevy/pull/9214,
https://github.com/bevyengine/bevy/pull/9310,
https://github.com/bevyengine/bevy/pull/9537 (thanks to @Selene-Amanita
for the list)
## Solution
This commit splits `OrthographicProjection::default` into `default_2d`
and `default_3d`.
## Migration Guide
- In initialization of `OrthographicProjection`, change `..default()` to
`..OrthographicProjection::default_2d()` or
`..OrthographicProjection::default_3d()`
Example:
```diff
--- a/examples/3d/orthographic.rs
+++ b/examples/3d/orthographic.rs
@@ -20,7 +20,7 @@ fn setup(
         projection: OrthographicProjection {
             scale: 3.0,
             scaling_mode: ScalingMode::FixedVertical(2.0),
-            ..default()
+            ..OrthographicProjection::default_3d()
         }
         .into(),
         transform: Transform::from_xyz(5.0, 5.0, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
```
---------
Co-authored-by: David M. Lary <dmlary@gmail.com>
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
Adds some methods to assist in building `ShaderStorageBuffer` without
using `bytemuck`. We keep the `&[u8]` constructors since this is still
modeled as a thin wrapper around the buffer descriptor, but the new
methods should make it easier to interact with, at the cost of an extra
allocation in the `ShaderType` path for the buffer writer.
Follow up from #14663
# Objective
`EntityHash` and related types were moved from `bevy_utils` to
`bevy_ecs` in #11498, but seem to have been accidentally reintroduced
a week later in #11707.
## Solution
Remove the old leftover code.
---
## Migration Guide
- Uses of `bevy::utils::{EntityHash, EntityHasher, EntityHashMap,
EntityHashSet}` now have to be imported from `bevy::ecs::entity`.
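After the cleanup, the types are imported from `bevy::ecs::entity`:

```rust
// Previously imported from `bevy::utils`.
use bevy::ecs::entity::{EntityHash, EntityHashMap, EntityHashSet, EntityHasher};
```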
### Builder changes
- Increased meshlet max vertices/triangles from 64v/64t to 255v/128t
(meshoptimizer won't allow 256v sadly). This gives us a much greater
percentage of meshlets with max triangle count (128). Still not perfect,
we still end up with some tiny <=10 triangle meshlets that never really
get simplified, but it's progress.
- Removed the error target limit. Now we allow meshoptimizer to simplify
as much as possible. No reason to cap this out, as the cluster culling
code will choose a good LOD level anyways. Again leads to higher quality
LOD trees.
- After some discussion and consulting the Nanite slides again, changed
meshlet group error from _adding_ the max child's error to the group
error, to doing `group_error = max(group_error, max_child_error)`. Error
is already cumulative between LODs as the edges we're collapsing during
simplification get longer each time.
- Bumped the 65% simplification threshold to allow up to 95% of the
original geometry (e.g. accept simplification as valid even if we only
simplified 5% of the triangles). This gives us closer to
log2(initial_meshlet_count) LOD levels, and fewer meshlet roots in the
DAG.
Still more work to be done in the future here. Maybe trying METIS for
meshlet building instead of meshoptimizer.
Using ~8 clusters per group instead of ~4 might also make a big
difference. The Nanite slides say that they have 8-32 meshlets per
group, suggesting some kind of heuristic. Unfortunately meshopt's
compute_cluster_bounds won't work with large groups atm
(https://github.com/zeux/meshoptimizer/discussions/750#discussioncomment-10562641)
so hard to test.
Based on discussion from
https://github.com/bevyengine/bevy/discussions/14998,
https://github.com/zeux/meshoptimizer/discussions/750, and discord.
### Runtime changes
- cluster:triangle packed IDs are now stored as 25:7 bits instead of 26:6
bits, as the max triangles per cluster are now 128 instead of 64 (see the
packing sketch after this list)
- Hardware raster now spawns 128 * 3 vertices instead of 64 * 3 vertices
to account for the new max triangles limit
- Hardware raster now outputs NaN triangles (0 / 0) instead of
zero-positioned triangles for extra vertex invocations over the cluster
triangle count. Shouldn't really make a difference, I don't think, but I
did it anyway.
- Software raster now does 128 threads per workgroup instead of 64
threads. Each thread now loads, projects, and caches a vertex (vertices
0-127), and then if needed does so again (vertices 128-254). Each thread
then rasterizes one of 128 triangles.
- Fixed a bug with `needs_dispatch_remap`. I had the condition backwards
in my last PR; I probably committed it by accident after testing the
non-default code path on my GPU.
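An illustrative sketch of the new 25:7 packing (plain Rust for clarity, not the actual shader code):

```rust
// Pack a cluster ID (25 bits) and a triangle ID (7 bits) into one u32,
// mirroring the visbuffer layout described above.
fn pack_ids(cluster_id: u32, triangle_id: u32) -> u32 {
    debug_assert!(cluster_id < (1 << 25) && triangle_id < (1 << 7));
    (cluster_id << 7) | triangle_id
}

fn unpack_ids(packed: u32) -> (u32, u32) {
    (packed >> 7, packed & 0x7F)
}
```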
Commit 65252bb87 (Consistently use `PI` to specify angles in examples.
(#5825)) changed from using `push_children` to `add_child` without
adjusting the comment.
# Objective
- Fix the comments to align with the code
Co-authored-by: Alexander Stein <alexander.stein@mailbox.org>
# Objective
Fixes #15032
## Solution
Reimplement support for the `flip_x` and `flip_y` fields.
This doesn't flip the border geometry, I'm not really sure whether that
is desirable or not.
Also fixes a bug that was causing the side and center slices to tile
incorrectly.
## Testing
```
cargo run --example ui_texture_slice_flip_and_tile
```
## Showcase
<img width="787" alt="nearest"
src="https://github.com/user-attachments/assets/bc044bae-1748-42ba-92b5-0500c87264f6">
With tiling, you need to use nearest filtering to avoid bleeding between
the slices.
---------
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/14593.
## Solution
- Add `ViewportConversionError` and return it from viewport conversion
methods on Camera.
## Testing
- I successfully compiled and ran all changed examples.
## Migration Guide
The following methods on `Camera` now return a `Result` instead of an
`Option` so that they can provide more information about failures:
- `world_to_viewport`
- `world_to_viewport_with_depth`
- `viewport_to_world`
- `viewport_to_world_2d`
Call `.ok()` on the `Result` to turn it back into an `Option`, or handle
the `Result` directly.
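A sketch of handling the new `Result` (the system and its queries are illustrative):

```rust
use bevy::prelude::*;

fn log_cursor_world_position(
    windows: Query<&Window>,
    cameras: Query<(&Camera, &GlobalTransform)>,
) {
    let (camera, camera_transform) = cameras.single();
    let Some(cursor_position) = windows.single().cursor_position() else {
        return;
    };
    match camera.viewport_to_world_2d(camera_transform, cursor_position) {
        Ok(world_position) => info!("cursor at {world_position}"),
        Err(err) => warn!("viewport conversion failed: {err:?}"),
    }
    // Or keep the old flow with `.ok()` if you only care about the success case.
}
```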
---------
Co-authored-by: Lixou <82600264+DasLixou@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
Ensure consistency in instantiating 2D primitive shapes in the examples.
## Solution
Replaced the existing instantiation of the 2D `Circle` in the
`2d_shapes.rs` example with the `new` method, as sketched below.
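For illustration, the change has this shape (the radius is an arbitrary value):

```rust
use bevy::math::primitives::Circle;

fn circle_examples() -> (Circle, Circle) {
    // Before: struct literal
    let via_literal = Circle { radius: 50.0 };
    // After: the `new` constructor, matching the other primitives in the example
    let via_new = Circle::new(50.0);
    (via_literal, via_new)
}
```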
## Testing
- None: should be straightforward enough to not warrant a test (I will
eat my words if I am wrong).
---
# Objective
Minor cosmetic improvements and code cleanup for this example
## Solution
- Use the default font and shorter constructor where the full font is
not needed
- Use 12px padding for instruction text to match other examples
- Clean up formatting of instruction text
- Fix one instance of a font handle not being reused
<details>
<summary>Before / After</summary>
*Question mark boxes are expected. We don't include a font with the
glyphs I'm testing the IME with*
<img width="1280" alt="Screenshot 2024-09-02 at 11 14 17 AM"
src="https://github.com/user-attachments/assets/e0a904a3-c01b-4c18-b71f-41fbc1d09169">
<img width="1280" alt="Screenshot 2024-09-02 at 11 13 14 AM"
src="https://github.com/user-attachments/assets/4b2a2d31-6b9f-4b9a-a245-4258a6aa9519">
</details>
## Testing
Tested all functionality of the example (macOS)
# Objective
Thought I had found all of these... noticed some `10px` in #15013 and
did another sweep.
Continuation of #8478, #13583.
## Solution
- Position example text (and other elements) 12px from the edge of the
screen
This commit adds support for *masks* to the animation graph. A mask is a
set of animation targets (bones) that neither a node nor its descendants
are allowed to animate. Animation targets can be assigned one or more
*mask group*s, which are specific to a single graph. If a node masks out
any mask group that an animation target belongs to, animation curves for
that target will be ignored during evaluation.
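Conceptually, the masking check looks like this sketch (illustrative only: mask groups as bits in a 64-bit mask, not necessarily the exact types used in the implementation):

```rust
/// Illustrative only: mask groups represented as bits in a u64.
type Mask = u64;

/// A target's curves are skipped when the evaluating node (or an ancestor)
/// masks out any group that the target belongs to.
fn target_is_masked_out(node_mask: Mask, target_groups: Mask) -> bool {
    node_mask & target_groups != 0
}
```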
The canonical use case for masks is to support characters holding
objects. Typically, character animations will contain hand animations in
the case that the character's hand is empty. (For example, running
animations may close a character's fingers into a fist.) However, when
the character is holding an object, the animation must be altered so
that the hand grips the object.
Bevy currently has no convenient way to handle this. The only workaround
that I can see is to have entirely separate animation clips for
characters' hands and bodies and keep them in sync, which is burdensome
and doesn't match artists' expectations from other engines, which all
effectively have support for masks. However, with mask group support,
this task is simple. We assign each hand to a mask group and parent all
character animations to a node. When a character grasps an object in
hand, we position the fingers as appropriate and then enable the mask
group for that hand in that node. This allows the character's animations
to run normally, while the object remains correctly attached to the
hand.
Note that even with this PR, we won't have support for running separate
animations for a character's hand and the rest of the character. This is
because we're missing additive blending: there's no way to combine the
two masked animations together properly. I intend that to be a follow-up
PR.
The major engines all have support for masks, though the workflow varies
from engine to engine:
* Unity has support for masks [essentially as implemented here], though
with layers instead of a tree. However, when using the Mecanim
("Humanoid") feature, precise control over bones is lost in favor of
predefined muscle groups.
* Unreal has a feature named [*layered blend per bone*]. This allows for
separate blend weights for different bones, effectively achieving masks.
I believe that the combination of blend nodes and masks make Bevy's
animation graph as expressible as that of Unreal, once we have support
for additive blending, though you may have to use more nodes than you
would in Unreal. Moreover, separating out the concepts of "blend weight"
and "which bones this node applies to" seems like a cleaner design than
what Unreal has.
* Godot's `AnimationTree` has the notion of [*blend filters*], which are
essentially the same as masks as implemented in this PR.
Additionally, this patch fixes a bug with weight evaluation whereby
weights weren't properly propagated down to grandchildren, because the
weight evaluation for a node only checked its parent's weight, not its
evaluated weight. I considered submitting this as a separate PR, but
given that this PR refactors that code entirely to support masks and
weights under a unified "evaluated node" concept, I simply included the
fix here.
A new example, `animation_masks`, has been added. It demonstrates how to
toggle masks on and off for specific portions of a skin.
This is part of #14395, but I'm going to defer closing that issue until
we have additive blending.
[essentially as implemented here]:
https://docs.unity3d.com/560/Documentation/Manual/class-AvatarMask.html
[*layered blend per bone*]:
https://dev.epicgames.com/documentation/en-us/unreal-engine/using-layered-animations-in-unreal-engine
[*blend filters*]:
https://docs.godotengine.org/en/stable/tutorials/animation/animation_tree.html
## Migration Guide
* The serialized format of animation graphs has changed with the
addition of animation masks. To upgrade animation graph RON files, add
`mask` and `mask_groups` fields as appropriate. (They can be safely set
to zero.)
Adds a new `Handle<Storage>` asset type that can be used as a render
asset, particularly for use with `AsBindGroup`.
Closes: #13658
# Objective
Allow users to create storage buffers in the main world without having
to access the `RenderDevice`. While this resource is technically
available, it's bad form to use in the main world and requires mixing
rendering details with main world code. Additionally, this makes storage
buffers easier to use with `AsBindGroup`, particularly in the following
scenarios:
- Sharing the same buffers between a compute stage and material shader.
We already have examples of this for storage textures (see game of life
example) and these changes allow a similar pattern to be used with
storage buffers.
- Preventing repeated GPU uploads (see the previous, easier-to-use `Vec`
`AsBindGroup` option).
- Allow initializing custom materials using `Default`. Previously, the
lack of a `Default` implementation for the raw `wgpu::Buffer` type made
implementing an `AsBindGroup + Default` bound difficult in the presence
of buffers.
## Solution
Adds a new `Handle<Storage>` asset type that is prepared into a
`GpuStorageBuffer` render asset. This asset can either be initialized
with a `Vec<u8>` of properly aligned data or with a size hint. Users can
modify the underlying `wgpu::BufferDescriptor` to provide additional
usage flags.
## Migration Guide
The `AsBindGroup` `storage` attribute has been modified to reference the
new `Handle<Storage>` asset instead. Usages of `Vec` should be converted
into assets.
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
- Fixes #14974
## Solution
- Replace all* instances of `NonZero*` with `NonZero<*>`
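For illustration (the old alias and the generic form name the same type):

```rust
use core::num::NonZero;

fn example() -> NonZero<u32> {
    // Before: `core::num::NonZeroU32::new(4)`
    // After: the generic form
    NonZero::<u32>::new(4).expect("4 is non-zero")
}
```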
## Testing
- CI passed locally.
---
## Notes
Within the `bevy_reflect` implementations for `std` types,
`impl_reflect_value!()` will continue to use the type aliases instead,
as it inappropriately parses the concrete type parameter as a generic
argument. If the `ZeroablePrimitive` trait was stable, or the macro
could be modified to accept a finite list of types, then we could fully
migrate.
# Objective
- Add gizmos integration for the new `Curve` things in the math lib
## Solution
- Add the following methods
- `curve_2d(curve, sample_times, color)`
- `curve_3d(curve, sample_times, color)`
- `curve_gradient_2d(curve, sample_times_with_colors)`
- `curve_gradient_3d(curve, sample_times_with_colors)`
## Testing
- I added examples of the 2D and 3D variants of the gradient curve
gizmos to the gizmos examples.
## Showcase
### 2D
![image](https://github.com/user-attachments/assets/01a75706-a7b4-4fc5-98d5-18018185c877)
```rust
let domain = Interval::EVERYWHERE;
let curve = function_curve(domain, |t| Vec2::new(t, (t / 25.0).sin() * 100.0));
let resolution = ((time.elapsed_seconds().sin() + 1.0) * 50.0) as usize;
let times_and_colors = (0..=resolution)
.map(|n| n as f32 / resolution as f32)
.map(|t| (t - 0.5) * 600.0)
.map(|t| (t, TEAL.mix(&HOT_PINK, (t + 300.0) / 600.0)));
gizmos.curve_gradient_2d(curve, times_and_colors);
```
### 3D
![image](https://github.com/user-attachments/assets/3fd23983-1ec9-46cd-baed-5b5e2dc935d0)
```rust
let domain = Interval::EVERYWHERE;
let curve = function_curve(domain, |t| {
(Vec2::from((t * 10.0).sin_cos())).extend(t - 6.0)
});
let resolution = ((time.elapsed_seconds().sin() + 1.0) * 100.0) as usize;
let times_and_colors = (0..=resolution)
.map(|n| n as f32 / resolution as f32)
.map(|t| t * 5.0)
.map(|t| (t, TEAL.mix(&HOT_PINK, t / 5.0)));
gizmos.curve_gradient_3d(curve, times_and_colors);
```
# Objective
With the current implementation of `Plane3d` gizmos, it's really hard to
get a good feeling for big planes. Usually I tend to add more axes as a
user, but that doesn't scale well and is pretty wasteful. It's hard to
recognize the plane in the distance here, especially if there were other
rendered objects in the scene:
![image](https://github.com/user-attachments/assets/b65b7015-c08c-46d7-aa27-c7c0d49b2021)
## Solution
- Since we got grid gizmos in the meantime, I went ahead and just
reused them here.
## Testing
I added an instance of the new `Plane3d` gizmo to the `3d_gizmos.rs`
example. If you want to look at it, you need to look around a bit. I
didn't position it in the center since that was too crowded already.
---
## Showcase
![image](https://github.com/user-attachments/assets/e4982afe-7296-416c-9801-7dd85cd975c1)
## Migration Guide
The optional builder methods on
```rust
gizmos.primitive_3d(&Plane3d { }, ...);
```
changed from
- `segment_length`
- `segment_count`
- `axis_count`
to
- `cell_count`
- `spacing`
# Objective
Since https://github.com/bevyengine/bevy/pull/14731 was merged, a few
utility methods for 2D arcs are unblocked. In 2D, the counterparts to
`long_arc_3d_between` and `short_arc_3d_between` are missing. Since
`arc_2d` can be a bit hard to use, this PR is trying to plug some holes
in the `arcs` API.
## Solution
Implement
- `long_arc_2d_between(center, from, to, color)`
- `short_arc_2d_between(center, from, to, color)`
## Testing
- There are new doc tests
- The `2d_gizmos` example has been extended a bit to include a few more
arcs which can easily be checked with respect to the grid
---
## Showcase
![image](https://github.com/user-attachments/assets/b90ad8b1-86c2-4304-a481-4f9a5246c457)
Code related to the screenshot (from outer = first line to inner = last
line)
```rust
my_gizmos.arc_2d(Isometry2d::IDENTITY, FRAC_PI_2, 80.0, ORANGE_RED);
my_gizmos.short_arc_2d_between(Vec2::ZERO, Vec2::X * 40.0, Vec2::Y * 40.0, ORANGE_RED);
my_gizmos.long_arc_2d_between(Vec2::ZERO, Vec2::X * 20.0, Vec2::Y * 20.0, ORANGE_RED);
```
# Objective
- Solves the last bullet of #14319 and closes it
- Make better use of the `Isometry` types
- Prevent issues like #14655
- Probably simplify and clean up a lot of code through the use of gizmos
as well (e.g. the 3D gizmos for cylinders, circles & lines don't connect
well, probably due to wrong rotations)
## Solution
- Go through the `bevy_gizmos` crate and give all methods a light
rework
## Testing
- For all the changed examples I ran `git switch main && cargo rr
--example <X> && git switch <BRANCH> && cargo rr --example <X>` and
compared the visual results
- Checked that all doc tests still compile
- Checked the docs in general and updated them!
---
## Migration Guide
The gizmo methods' function signatures change as follows:
- 2D
- if it took `position` & `rotation_angle` before ->
`Isometry2d::new(position, Rot2::radians(rotation_angle))`
- if it just took `position` before ->
`Isometry2d::from_translation(position)`
- 3D
- if it took `position` & `rotation` before ->
`Isometry3d::new(position, rotation)`
- if it just took `position` before ->
`Isometry3d::from_translation(position)`
# Objective
Fixes#14883
## Solution
Pretty simple update to `EntityCommands` methods to consume `self` and
return it rather than taking `&mut self`. The things probably worth
noting:
* I added `#[allow(clippy::should_implement_trait)]` to the `add` method
because it causes a linting conflict with `std::ops::Add`.
* `despawn` and `log_components` now return `Self`. I'm not sure if
that's exactly the desired behavior so I'm happy to adjust if that seems
wrong.
## Testing
Tested with `cargo run -p ci`. I think that should be sufficient to call
things good.
## Migration Guide
The most likely migration needed is changing code from this:
```
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
entity.insert(DepthPrepass);
}
if normal_prepass {
entity.insert(NormalPrepass);
}
if motion_vector_prepass {
entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
entity.insert(DeferredPrepass);
}
```
to this:
```
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
entity = entity.insert(DepthPrepass);
}
if normal_prepass {
entity = entity.insert(NormalPrepass);
}
if motion_vector_prepass {
entity = entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
entity.insert(DeferredPrepass);
}
```
as can be seen in several of the example code updates here. There will
probably also be instances where mutable `EntityCommands` vars no longer
need to be mutable.
# Objective
Add `bevy_picking` sprite backend as part of the `bevy_mod_picking`
upstreamening (#12365).
## Solution
More or less a copy/paste from `bevy_mod_picking`, with the changes
[here](https://github.com/aevyrie/bevy_mod_picking/pull/354). I'm
putting that link here since those changes haven't yet made it through
review, so should probably be reviewed on their own.
## Testing
I couldn't find any sprite-backend-specific tests in `bevy_mod_picking`
and unfortunately I'm not familiar enough with Bevy's testing patterns
to write tests for code that relies on windowing and input. I'm willing
to break the pointer hit system into testable blocks and add some more
modular tests if that's deemed important enough to block, otherwise I
can open an issue for adding tests as follow-up.
## Follow-up work
- More docs/tests
- Ignore pick events on transparent sprite pixels with potential opt-out
---------
Co-authored-by: Aevyrie <aevyrie@gmail.com>
# Objective
`arc_2d` wasn't actually doing what the docs were saying. The arc wasn't
offset by what was previously `direction_angle` but by
`direction_angle - arc_angle / 2.0`. This meant that the arc's center lay
on the `Vec2::Y` axis and was then offset. This was probably done to fit
the behavior of the `Arc2D` primitive. I would argue that this isn't
desirable for the plain `arc_2d` gizmo method since
- a) the docs get longer to explain the weird centering
- b) the mental model the user has to know gets bigger with more
implicit assumptions
given the code
```rust
my_gizmos.arc_2d(Vec2::ZERO, 0.0, FRAC_PI_2, 75.0, ORANGE_RED);
```
we get
![image](https://github.com/user-attachments/assets/84894c6d-42e4-451b-b3e2-811266486ede)
where after the fix with
```rust
my_gizmos.arc_2d(Isometry2d::IDENTITY, FRAC_PI_2, 75.0, ORANGE_RED);
```
we get
![image](https://github.com/user-attachments/assets/16b0aba0-f7b5-4600-ac49-a22be0315c40)
To get the same result with the previous implementation you would have
to randomly add `arc_angle / 2.0` to the `direction_angle`.
```rust
my_gizmos.arc_2d(Vec2::ZERO, FRAC_PI_4, FRAC_PI_2, 75.0, ORANGE_RED);
```
This makes it much harder to construct helper functions similar to those
that already exist in 3D, like
- `long_arc_2d_between`
- `short_arc_2d_between`
## Solution
- Make the arc really start at `Vec2::Y * radius` in counter-clockwise
direction + offset by an angle as the docs state it
- Use `Isometry2d` instead of `position : Vec2` and `direction_angle :
f32` to reduce the chance of messing up rotation/translation
- Adjust the docs for the changes above
- Adjust the gizmo rendering of some primitives
## Testing
- check `2d_gizmos.rs` and `render_primitives.rs` examples
## Migration Guide
- users have to adjust their usages of `arc_2d`:
- before:
```rust
arc_2d(
    pos,
    angle,
    arc_angle,
    radius,
    color
)
```
- after:
```rust
arc_2d(
    // This `+ arc_angle * 0.5` quirk is only needed if you want to preserve
    // the previous behavior with the new API.
    // Feel free to try to fix this though, since your current calls to this
    // function most likely involve some computations to counteract that quirk
    // in the first place.
    Isometry2d::new(pos, Rot2::radians(angle + arc_angle * 0.5)),
    arc_angle,
    radius,
    color
)
```
# Objective
- Faster meshlet rasterization path for small triangles
- Avoid having to allocate and write out a triangle buffer
- Refactor gpu_scene.rs
## Solution
- Replace the 32bit visbuffer texture with a 64bit visbuffer buffer,
where the left 32 bits encode depth, and the right 32 bits encode the
existing cluster + triangle IDs. Can't use 64bit textures, wgpu/naga
doesn't support atomic ops on textures yet.
- Instead of writing out a buffer of packed cluster + triangle IDs (per
triangle) to raster, the culling pass now writes out a buffer of just
cluster IDs (per cluster, so less memory allocated, cheaper to write
out).
- Clusters for software raster are allocated from the left side
- Clusters for hardware raster are allocated in the same buffer, from
the right side
- The buffer size is fixed at MeshletPlugin build time, and should be
set to a reasonable value for your scene (no warning on overflow, and no
good way to determine what value you need outside of renderdoc - I plan
to fix this in a future PR adding a meshlet stats overlay)
- Currently I don't have a heuristic for software vs hardware raster
selection for each cluster. The existing code is just a placeholder. I
need to profile on a release scene and come up with a heuristic,
probably in a future PR.
- The culling shader is getting pretty hard to follow at this point, but
I don't want to spend time improving it as the entire shader/pass is
getting rewritten/replaced in the near future.
- Software raster is a compute workgroup per-cluster. Each workgroup
loads and transforms the <=64 vertices of the cluster, and then
rasterizes the <=64 triangles of the cluster.
- Two variants are implemented: Scanline for clusters with any larger
triangles (still smaller than hardware is good at), and brute-force for
very very tiny triangles
- Once the shader determines that a pixel should be filled in, it does
an atomicMax() on the visbuffer to store the results, copying how Nanite
works
- On devices with a low max workgroups per dispatch limit, an extra
compute pass is inserted before software raster to convert from a 1d to
2d dispatch (I don't think 3d would ever be necessary).
- I haven't implemented the top-left rule or subpixel precision yet, I'm
leaving that for a future PR since I get usable results without it for
now
- Resources used:
https://kristoffer-dyrkorn.github.io/triangle-rasterizer and chapters
6-8 of
https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index
- Hardware raster now spawns 64*3 vertex invocations per meshlet,
instead of the actual meshlet vertex count. Extra invocations just
early-exit.
- While this is slower than the existing system, hardware draws should
be rare now that software raster is usable, and it saves a ton of memory
using the unified cluster ID buffer. This would be fixed if wgpu had
support for mesh shaders.
- Instead of writing to a color+depth attachment, the hardware raster
pass also does the same atomic visbuffer writes that software raster
uses.
- We have to bind a dummy render target anyways, as wgpu doesn't
currently support render passes without any attachments
- Material IDs are no longer written out during the main rasterization
passes.
- If we had async compute queues, we could overlap the software and
hardware raster passes.
- New material and depth resolve passes run at the end of the visbuffer
node, and write out view depth and material ID depth textures
### Misc changes
- Fixed cluster culling importing, but never actually using, the previous
view uniforms when doing occlusion culling
- Fixed incorrectly adding the LOD error twice when building the meshlet
mesh
- Split up the gpu_scene module into meshlet_mesh_manager,
instance_manager, and resource_manager
- resource_manager is still too complex and inefficient (extract and
prepare are way too expensive). I plan on improving this in a future PR,
but for now ResourceManager is mostly a 1:1 port of the leftover
MeshletGpuScene bits.
- Material draw passes have been renamed to the more accurate material
shade pass, as well as some other misc renaming (in the future, these
will be compute shaders even, and not actual draw calls)
---
## Migration Guide
- TBD (ask me at the end of the release for meshlet changes as a whole)
---------
Co-authored-by: vero <email@atlasdostal.com>
# Objective
Adding more features to the `AsBindGroup` proc macro means making the
trait arguments uglier. Downstream implementors of the trait without the proc
macro might want to do different things than our default arguments.
## Solution
Make `AsBindGroup` take an associated `Param` type.
## Migration Guide
`AsBindGroup` now allows the user to specify a `SystemParam` to be used
for creating bind groups.
# Objective
Add a convenience constructor to make simple animation graphs easier to
build.
I've had some notes about attempting this since #11989 that I just
remembered after seeing #14852.
This partially addresses #14852, but I don't really know animation well
enough to write all of the documentation it's asking for.
## Solution
Add `AnimationGraph::from_clips` and use it to simplify `animated_fox`,
as sketched below. Do some other little bits of incidental cleanup and
documentation.
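A hedged sketch of the constructor in use (asset paths and the system shape are placeholders):

```rust
use bevy::prelude::*;

fn setup_graph(asset_server: Res<AssetServer>, mut graphs: ResMut<Assets<AnimationGraph>>) {
    // Build a graph with all clips added under the root, in a single call.
    let (graph, node_indices) = AnimationGraph::from_clips([
        asset_server.load("models/animated/Fox.glb#Animation0"),
        asset_server.load("models/animated/Fox.glb#Animation1"),
    ]);
    let graph_handle = graphs.add(graph);
    // Store `graph_handle` and `node_indices` wherever the animation player needs them.
    let _ = (graph_handle, node_indices);
}
```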
## Testing
I ran `cargo run --example animated_fox`.
# Objective
- The examples use a more verbose than necessary way to initialize the
image
- The order of the camera doesn't need to be specified. At least I
didn't see a difference in my testing
## Solution
- Use `Image::new_fill()` to fill the image instead of abusing
`resize()`
- Remove the camera ordering
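A sketch of the less verbose initialization (size and format are placeholder values):

```rust
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssetUsages;
use bevy::render::render_resource::{Extent3d, TextureDimension, TextureFormat};

// Create a 512x512 image already filled with opaque black, rather than
// creating an empty image and resizing it afterwards.
fn make_target_image() -> Image {
    Image::new_fill(
        Extent3d {
            width: 512,
            height: 512,
            depth_or_array_layers: 1,
        },
        TextureDimension::D2,
        &[0, 0, 0, 255],
        TextureFormat::Rgba8UnormSrgb,
        RenderAssetUsages::default(),
    )
}
```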
# Objective
Rewrite screenshotting to be able to accept any `RenderTarget`.
Closes #12478
## Solution
Previously, screenshotting relied on setting a variety of state on the
requested window. When extracted, the window's `swap_chain_texture_view`
property would be swapped out with a texture_view created that frame for
the screenshot pipeline to write back to the cpu.
Besides being tightly coupled to the window in a way that prevented
screenshotting other render targets, this approach had the drawback of
relying on the implicit state of `swap_chain_texture_view` being
returned from a `NormalizedRenderTarget` when view targets were
prepared. Because that property is set every frame for windows, that
wasn't a problem, but it poses a problem for render target images.
Namely, to do the equivalent trick, we'd have to replace the
`GpuImage`'s texture view, and somehow restore it later.
As such, this PR creates a new `prepare_view_textures` system, which runs
before `prepare_view_targets` and allows a new `prepare_screenshots`
system to be sandwiched between them and overwrite the render target's
texture view if a screenshot has been requested that frame for the given
target.
Additionally, screenshotting itself has been changed to use a component
+ observer pattern. We now spawn a `Screenshot` component into the
world, whose lifetime is tracked with a series of marker components.
When the screenshot is read back to the CPU, we send the image over a
channel back to the main world where an observer fires on the screenshot
entity before being despawned the next frame. This allows the user to
access resources in their save callback that might be useful (e.g.
uploading the screenshot over the network, etc.).
## Testing
![image](https://github.com/user-attachments/assets/48f19aed-d9e1-4058-bb17-82b37f992b7b)
TODO:
- [x] Web
- [ ] Manual texture view
---
## Showcase
render to texture example:
<img
src="https://github.com/user-attachments/assets/612ac47b-8a24-4287-a745-3051837963b0"
width=200/>
web saving still works:
<img
src="https://github.com/user-attachments/assets/e2a15b17-1ff5-4006-ab2a-e5cc74888b9c"
width=200/>
## Migration Guide
`ScreenshotManager` has been removed. To take a screenshot, spawn a
`Screenshot` entity with the specified render target and provide an
observer targeting the `ScreenshotCaptured` event, as sketched below. See
the `window/screenshot` example for a full example.
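A hedged sketch of the new flow (the module path and helper names are my reading of the API described above):

```rust
use bevy::prelude::*;
use bevy::render::view::screenshot::{save_to_disk, Screenshot}; // assumed path

fn take_screenshot(mut commands: Commands) {
    // Spawn a screenshot of the primary window and save it once captured.
    commands
        .spawn(Screenshot::primary_window())
        .observe(save_to_disk("screenshot.png"));
}
```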
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
# Objective
- There is a flaw in the implementation of `FogVolume`'s
`density_texture_offset` from #14868. Because of the way I am wrapping
the UVW coordinates in the volumetric fog shader, a seam is visible when
the 3d texture is wrapping around from one side to the other:
![density_texture_offset_seam](https://github.com/user-attachments/assets/89527ef2-5e1b-4b90-8e73-7a3e607697d4)
## Solution
- This PR fixes the issue by removing the wrapping from the shader and
instead leaving it to the user to configure the 3d noise texture to use
`ImageAddressMode::Repeat` if they want it to repeat. Using
`ImageAddressMode::Repeat` is the proper solution to avoid the obvious
seam:
![density_texture_seam_fixed](https://github.com/user-attachments/assets/06e871a6-2db1-4501-b425-4141605f9b26)
- The sampler cannot be implicitly configured to use
`ImageAddressMode::Repeat` because that's not always desirable. For
example, the `fog_volumes` example wouldn't work properly because the
texture from the edges of the volume would overflow to the other sides,
which would be bad in this instance (but it's good in the case of the
`scrolling_fog` example). So leaving it to the user to decide on their
own whether they want the density texture to repeat seems to be the best
solution.
## Testing
- The `scrolling_fog` example still looks the same; it was just changed
to explicitly declare that the density texture should repeat when
loading the asset. The `fog_volumes` example is unaffected.
<details>
<summary>Minimal reproduction example on current main</summary>

```rust
use bevy::core_pipeline::experimental::taa::{
    TemporalAntiAliasBundle, TemporalAntiAliasPlugin,
};
use bevy::pbr::{FogVolume, VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins((DefaultPlugins, TemporalAntiAliasPlugin))
        .add_systems(Startup, setup)
        .run();
}

fn setup(mut commands: Commands, assets: Res<AssetServer>) {
    commands.spawn((
        Camera3dBundle {
            transform: Transform::from_xyz(3.5, -1.0, 0.4)
                .looking_at(Vec3::new(0.0, 0.0, 0.4), Vec3::Y),
            msaa: Msaa::Off,
            ..default()
        },
        TemporalAntiAliasBundle::default(),
        VolumetricFogSettings {
            ambient_intensity: 0.0,
            jitter: 0.5,
            ..default()
        },
    ));

    commands.spawn((
        DirectionalLightBundle {
            transform: Transform::from_xyz(-6.0, 5.0, -9.0)
                .looking_at(Vec3::new(0.0, 0.0, 0.0), Vec3::Y),
            directional_light: DirectionalLight {
                illuminance: 32_000.0,
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));

    commands.spawn((
        SpatialBundle {
            visibility: Visibility::Visible,
            transform: Transform::from_xyz(0.0, 0.0, 0.0).with_scale(Vec3::splat(3.0)),
            ..default()
        },
        FogVolume {
            density_texture: Some(assets.load("volumes/fog_noise.ktx2")),
            density_texture_offset: Vec3::new(0.0, 0.0, 0.4),
            scattering: 1.0,
            ..default()
        },
    ));
}
```
</details>
# Objective
This example previously had kind of a needlessly complex state machine
that tracked moves between its previous orientation and the new one that
was randomly generated. Using `smooth_nudge` simplifies the example in
addition to making good use of the new API.
## Solution
Use `smooth_nudge` to transition between the current transform and the
new one. This does away with the need to keep track of the move's
starting position and progress. It also just sort of looks nicer.
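A minimal sketch of the pattern (the component and decay rate are illustrative, not the example's exact code):

```rust
use bevy::math::StableInterpolate;
use bevy::prelude::*;

/// Hypothetical component holding the randomly chosen goal orientation.
#[derive(Component)]
struct TargetRotation(Quat);

// Ease each entity's rotation toward its target; `smooth_nudge` decays the
// remaining difference at `decay_rate` per second, independently of framerate.
fn nudge_toward_target(time: Res<Time>, mut query: Query<(&mut Transform, &TargetRotation)>) {
    let decay_rate = 2.0;
    for (mut transform, target) in &mut query {
        transform.rotation.smooth_nudge(&target.0, decay_rate, time.delta_seconds());
    }
}
```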
## Testing
Run the `align` example:
`cargo run --example align`
# Objective
- Fixes#14873, see that issue for a whole lot of context
## Solution
- Add a blessed system set for this stuff. See [this Discord
discussion](https://discord.com/channels/691052431525675048/749335865876021248/1276262931327094908).
Note that the gizmo systems,
[LWIM](https://github.com/Leafwing-Studios/leafwing-input-manager/pull/522/files#diff-9b59ee4899ad0a5d008889ea89a124a7291316532e42f9f3d6ae842b906fb095R154)
and now a new plugin I'm working on are all already ordering against
`run_fixed_main_schedule`, so having a dedicated system set should be
more robust and hopefully also more discoverable.
---
## ~~Showcase~~
~~I can add a little video of a smooth camera later if this gets merged
:)~~
Apparently a release note is not needed, so I'll leave it out. See the
changes in the fixed timestep example for usage showcase and the video
in #14873 for a more or less accurate video of the effect (it does not
use the same solution though, so it is not quite the same)
## Migration Guide
[run_fixed_main_schedule](https://docs.rs/bevy/latest/bevy/time/fn.run_fixed_main_schedule.html)
is no longer public. If you used to order against it, use the new
dedicated `RunFixedMainLoopSystem` system set instead. You can replace
your usage of `run_fixed_main_schedule` one for one by
`RunFixedMainLoopSystem::FixedMainLoop`, but it is now more idiomatic to
place your systems in either
`RunFixedMainLoopSystem::BeforeFixedMainLoop` or
`RunFixedMainLoopSystem::AfterFixedMainLoop`
Old:
```rust
app.add_systems(
RunFixedMainLoop,
some_system.before(run_fixed_main_schedule)
);
```
New:
```rust
app.add_systems(
RunFixedMainLoop,
some_system.in_set(RunFixedMainLoopSystem::BeforeFixedMainLoop)
);
```
---------
Co-authored-by: Tau Gärtli <git@tau.garden>
# Objective
- The goal of this PR is to make it possible to move the density texture
of a `FogVolume` over time in order to create dynamic effects like fog
moving in the wind.
- You could theoretically move the `FogVolume` itself, but this is not
ideal, because the `FogVolume` AABB would eventually leave the area. If
you want an area to remain foggy while also creating the impression that
the fog is moving in the wind, a scrolling density texture is a better
solution.
## Solution
- The PR adds a `density_texture_offset` field to the `FogVolume`
component. This offset is in the UVW coordinates of the density texture,
meaning that a value of `(0.5, 0.0, 0.0)` moves the 3d texture by half
along the x-axis.
- Values above 1.0 are wrapped, a 1.5 offset is the same as a 0.5
offset. This makes it so that the density texture wraps around on the
other side, meaning that a repeating 3d noise texture can seamlessly
scroll forever. It also makes it easy to move the density texture over
time by simply increasing the offset every frame.
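A minimal sketch of scrolling the offset over time (the wind vector is an arbitrary value):

```rust
use bevy::pbr::FogVolume;
use bevy::prelude::*;

// Scroll the density texture a little each frame to fake fog drifting in the wind.
// Offsets are in UVW space, so values above 1.0 simply wrap around.
fn scroll_fog(time: Res<Time>, mut volumes: Query<&mut FogVolume>) {
    let wind = Vec3::new(0.02, 0.0, 0.01);
    for mut volume in &mut volumes {
        volume.density_texture_offset += wind * time.delta_seconds();
    }
}
```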
## Testing
- A `scrolling_fog` example has been added to demonstrate the feature.
It uses the offset to scroll a repeating 3d noise density texture to
create the impression of fog moving in the wind.
- The camera is looking at a pillar with the sun peeking out behind it.
This highlights the effect the changing density has on the volumetric
lighting interactions.
- Temporal anti-aliasing combined with the `jitter` option of
`VolumetricFogSettings` is used to improve the quality of the effect.
---
## Showcase
https://github.com/user-attachments/assets/3aa50ebd-771c-4c99-ab5d-255c0c3be1a8
# Objective
Make the naming of a parameter more consistent.
## Solution
- Changing the name of a parameter.
## Testing
These changes can't be tested as they are documentation based.
---
I apologize if something is wrong here; this is my first PR to Bevy.
# Objective
Fixes#14782
## Solution
Enable the lint and fix all resulting hints (`--fix`). Also tried to
figure out the false positive (see review comment). Maybe split this PR
up into multiple parts where only the last one enables the lint, so some
can already be merged, resulting in fewer files touched / less potential
for merge conflicts?
Currently, there are some cases where it might be easier to read the
code with the qualifier, so perhaps remove the import of it and adapt
those cases? At the current stage it's just a plain adoption of the
suggestions in order to have a base to discuss.
## Testing
`cargo clippy` and `cargo run -p ci` are happy.
Fix the `mesh2d_manual` example, broken by changes in #13069.
```
wgpu error: Validation Error
Caused by:
In RenderPass::end
In a set_pipeline command
Render pipeline targets are incompatible with render pass
Incompatible depth-stencil attachment format: the RenderPass uses a texture with format Some(Depth32Float) but the RenderPipeline with 'colored_mesh2d_pipeline' label uses an attachment with format None
```