# Objective
Fixes #15560
Fixes (most of) #15570
Currently a lot of examples (and presumably some user code) depend on
toggling certain render features by adding or removing a single component
on an entity, e.g. `SpotLight` to toggle a light. Because of the
retained render world this no longer works: Extract will add any new
components, but when the component is removed from the main world entity,
the corresponding render world entity persists unchanged.
## Solution
Add `SyncComponentPlugin<C: Component>` that registers
`SyncToRenderWorld` as a required component for `C`, and adds a
component hook that will clear all components from the render world
entity when `C` is removed. We add this plugin to
`ExtractComponentPlugin` which fixes most instances of the problem. For
custom extraction logic we can manually add `SyncComponentPlugin` for
that component.
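For a custom-extracted component, that registration might look like this (a minimal sketch; `MyExtractedComponent` is a placeholder for your own component type):
```rust
// main world: ensure that removing `MyExtractedComponent` also clears the
// corresponding render world entity
app.add_plugins(SyncComponentPlugin::<MyExtractedComponent>::default());
```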
We also rename `WorldSyncPlugin` to `SyncWorldPlugin` to establish a
naming convention matching the `Extract` plugins.
In this PR I also fixed a bunch of breakage related to the retained
render world, stemming from old code that assumed that `Entity` would be
the same in both worlds.
I found that using the `RenderEntity` wrapper instead of `Entity` in
data structures when referring to render world entities makes intent
much clearer, so I propose we make this an official pattern.
## Testing
Run examples like
```
cargo run --features pbr_multi_layer_material_textures --example clearcoat
cargo run --example volumetric_fog
```
and see that they work, and that toggles work correctly. But really we
should test every single example, as we might not even have caught all
the breakage yet.
---
## Migration Guide
The retained render world notes should be updated to explain this edge
case and `SyncComponentPlugin`.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Trashtalk217 <trashtalk217@gmail.com>
# Objective
- Prepare for streaming by storing vertex data per-meshlet, rather than
per-mesh (this means duplicating vertices per-meshlet)
- Compress vertex data to reduce the cost of this
## Solution
The important parts are in from_mesh.rs, the changes to the Meshlet type
in asset.rs, and the changes in meshlet_bindings.wgsl. Everything else
is pretty secondary/boilerplate/straightforward changes.
- Positions are quantized in centimeters with a user-provided power of 2
factor (ideally auto-determined, but that's a TODO for the future),
encoded as an offset relative to the minimum value within the meshlet,
and then stored as a packed list of bits using the minimum number of
bits needed for each vertex position channel for that meshlet (a rough
sketch of this scheme follows the list below)
- E.g. quantize positions (lossily, throwing away precision that's not
needed, leading to fewer bits in the bitstream encoding)
- Get the min/max quantized value of each X/Y/Z channel of the quantized
positions within a meshlet
- Encode values relative to the min value of the meshlet. E.g. convert
from [min, max] to [0, max - min]
- The new max value in the meshlet is (max - min), which only takes N
bits, so we only need N bits to store each channel within the meshlet
(lossless)
- We can store the min value and that it takes N bits per channel in the
meshlet metadata, and reconstruct the position from the bitstream
- Normals are octahedral encoded and then snorm2x16 packed and stored as
a single u32.
- Would be better to implement the precise variant of octahedral encoding
for extra precision (no extra decode cost), but decided to keep it
simple for now and leave that as a followup
- Tried doing a quantizing and bitstream encoding scheme like I did for
positions, but struggled to get it smaller. Decided to go with this for
simplicity for now
- UVs are uncompressed and take a full 64 bits per vertex, which is
expensive
- In the future this should be improved
- Tangents, as of the previous PR, are not explicitly stored and are
instead derived from screen space gradients
- While I'm here, split up MeshletMeshSaverLoader into two separate
types
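As a rough illustration of the position scheme described above (a simplified sketch, not the actual builder code; the exact quantization scale convention and bit packing are condensed here):
```rust
/// Illustrative only: per-meshlet position encoding.
/// Returns per-vertex channel offsets plus the (min, bits-per-channel) metadata
/// that would be stored in the meshlet and used to reconstruct positions.
fn encode_positions(
    positions: &[[f32; 3]],
    quantization_factor: i32,
) -> (Vec<[u32; 3]>, [i32; 3], [u32; 3]) {
    // 1. Quantize (lossy): snap each channel to a centimeter grid scaled by the
    //    user-provided power-of-two factor.
    let scale = 100.0 * (quantization_factor as f32).exp2();
    let quantized: Vec<[i32; 3]> = positions
        .iter()
        .map(|p| p.map(|c| (c * scale).round() as i32))
        .collect();

    // 2. Per-channel minimum within the meshlet.
    let mut min = [i32::MAX; 3];
    for q in &quantized {
        for c in 0..3 {
            min[c] = min[c].min(q[c]);
        }
    }

    // 3. Encode relative to the minimum, i.e. map [min, max] -> [0, max - min] (lossless).
    let offsets: Vec<[u32; 3]> = quantized
        .iter()
        .map(|q| [0, 1, 2].map(|c| (q[c] - min[c]) as u32))
        .collect();

    // 4. Bits needed per channel = bit width of the largest offset. The bitstream
    //    stores each channel with exactly this many bits; (min, bits) go in the
    //    meshlet metadata so the shader can reconstruct the position.
    let mut bits = [0u32; 3];
    for o in &offsets {
        for c in 0..3 {
            bits[c] = bits[c].max(32 - o[c].leading_zeros());
        }
    }
    (offsets, min, bits)
}
```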
Other future changes include implementing a smaller encoding of triangle
data (3 u8 indices = 24 bits per triangle currently), and more
disk-oriented compression schemes.
References:
* "A Deep Dive into UE5's Nanite Virtualized Geometry"
https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf#page=128
(also available on youtube)
* "Towards Practical Meshlet Compression"
https://arxiv.org/pdf/2404.06359
* "Vertex quantization in Omniforce Game Engine"
https://daniilvinn.github.io/2024/05/04/omniforce-vertex-quantization.html
## Testing
- Did you test these changes? If so, how?
- Converted the stanford bunny, and rendered it with a debug material
showing normals, and confirmed that it's identical to what's on main.
EDIT: See additional testing in the comments below.
- Are there any parts that need more testing?
- Could use some more size comparisons on various meshes, and testing
different quantization factors. Not sure if 4 is a good default. EDIT:
See additional testing in the comments below.
- Also did not test runtime performance of the shaders. EDIT: See
additional testing in the comments below.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Use my unholy script, replacing the meshlet example:
https://paste.rs/7xQHk.rs (must make `MeshletMesh` fields `pub` instead of
`pub(crate)`, must add `lz4_flex` as a dev-dependency, must compile with the
`meshlet` and `meshlet_processor` features; the mesh must have only positions,
normals, and UVs, no vertex colors or tangents)
---
## Migration Guide
- TBD by JMS55 at the end of the release
# Objective
- Adds better comments and includes an example where the aspect ratio
between `size` and `full_size` differ
- Fixes #15576
## Solution
- Viewports are dynamically scaled to window size
## Testing
- Tested with moving the window around and by manually setting the
window scaling factor
- Just to make sure nothing else is going on, someone on macOS should
also test this
## Showcase
Since calculating padding from window size is a hassle, the example now
looks a bit more squished together:
![image](https://github.com/user-attachments/assets/68609cf2-5a67-49bd-8e0b-910bfd17f4d8)
# Objective
- Alpha blending can easily fail in many situations and requires sorting
on the CPU
## Solution
- Implement order independent transparency (OIT) as an alternative to
alpha blending
- The implementation uses 2 passes
- The first pass records all the fragment colors and positions to a
buffer that is the size of N layers * the render target resolution.
- The second pass sorts the fragments, blends them, and draws them to the
screen. It also currently does manual depth testing because early-z
fails in too many cases in the first pass.
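For intuition, once a pixel's recorded fragments are sorted, the resolve step amounts to something like the following (a CPU-side Rust sketch of standard back-to-front "over" blending, not the actual WGSL):
```rust
/// Illustrative only: blend one pixel's fragment layers, sorted back to front,
/// with the standard "over" operator.
fn blend_layers(sorted_back_to_front: &[[f32; 4]]) -> [f32; 4] {
    let mut out = [0.0_f32; 4];
    for &[r, g, b, a] in sorted_back_to_front {
        out[0] = r * a + out[0] * (1.0 - a);
        out[1] = g * a + out[1] * (1.0 - a);
        out[2] = b * a + out[2] * (1.0 - a);
        out[3] = a + out[3] * (1.0 - a);
    }
    out
}
```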
## Testing
- We've been using this implementation at foresight in production for
many months now and we haven't had any issues related to OIT.
---
## Showcase
![image](https://github.com/user-attachments/assets/157f3e32-adaf-4782-b25b-c10313b9bc43)
![image](https://github.com/user-attachments/assets/bef23258-0c22-4b67-a0b8-48a9f571c44f)
## Future work
- Add an example showing how to use OIT for a custom material
- Next step would be to implement a per-pixel linked list to reduce
memory use
- I'd also like to investigate using a BinnedRenderPhase instead of a
SortedRenderPhase. If it works, it would make the transparent pass
significantly faster.
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com>
This example was missed during the port to required components for
meshes and materials.
Easy fix, I checked that it works as it did in the PR that added the
example (#13912).
# Objective
Yet another PR for migrating stuff to required components. This time,
cameras!
## Solution
As per the [selected
proposal](https://hackmd.io/tsYID4CGRiWxzsgawzxG_g#Combined-Proposal-1-Selected),
deprecate `Camera2dBundle` and `Camera3dBundle` in favor of `Camera2d`
and `Camera3d`.
Adding a `Camera` without `Camera2d` or `Camera3d` now logs a warning,
as suggested by Cart [on
Discord](https://discord.com/channels/691052431525675048/1264881140007702558/1291506402832945273).
I would personally like cameras to work a bit differently and be split
into a few more components, to avoid some footguns and confusing
semantics, but that is more controversial, and shouldn't block this core
migration.
## Testing
I ran a few 2D and 3D examples, and tried cameras with and without
render graphs.
---
## Migration Guide
`Camera2dBundle` and `Camera3dBundle` have been deprecated in favor of
`Camera2d` and `Camera3d`. Inserting them will now also insert the other
components required by them automatically.
The components needed for `DirectionalLight` are added automatically
since #15554.
`create_point_light` already existed and returns a `PointLight` with
these same settings.
Early implementation. I still have to fix the documentation and consider
writing a small migration guide.
Questions left to answer:
* [x] should thickness be an overridable constant?
* [x] is there a better way to implement `Eq`/`Hash` for `SSAOMethod`?
* [x] do we want to keep the linear sampler for the depth texture?
* [x] is there a better way to separate the logic than preprocessor
macros?
![vbao](https://github.com/bevyengine/bevy/assets/4136413/2a8a0389-2add-4c2e-be37-e208e52dcd25)
## Migration guide
SSAO algorithm was changed from GTAO to VBAO (visibility bitmasks). A
new field, `constant_object_thickness`, was added to
`ScreenSpaceAmbientOcclusion`. `ScreenSpaceAmbientOcclusion` also lost
its `Eq` and `Hash` implementations.
---------
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
As discussed in #15521
- Partial revert of #14897, reverting the change to the methods to
consume `self`
- The `insert_if` method is kept
The migration guide of #14897 should be removed
Closes #15521
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Again, a step forward in the migration to required components: a bunch
of camera rendering components!
Note that this does not include the camera components themselves yet,
because the naming and API for `Camera` hasn't been fully decided yet.
## Solution
As per the [selected
proposals](https://hackmd.io/@bevy/required_components/%2FpiqD9GOdSFKZZGzzh3C7Uw):
- Deprecate `MotionBlurBundle` in favor of the `MotionBlur` component
- Deprecate `TemporalAntiAliasBundle` in favor of the
`TemporalAntiAliasing` component
- Deprecate `ScreenSpaceAmbientOcclusionBundle` in favor of the
`ScreenSpaceAmbientOcclusion` component
- Deprecate `ScreenSpaceReflectionsBundle` in favor of the
`ScreenSpaceReflections` component
---
## Migration Guide
`MotionBlurBundle`, `TemporalAntiAliasBundle`,
`ScreenSpaceAmbientOcclusionBundle`, and `ScreenSpaceReflectionsBundle`
have been deprecated in favor of the `MotionBlur`,
`TemporalAntiAliasing`, `ScreenSpaceAmbientOcclusion`, and
`ScreenSpaceReflections` components instead. Inserting them will now
also insert the other components required by them automatically.
# Objective
A step in the migration to required components: scenes!
## Solution
As per the [selected
proposal](https://hackmd.io/@bevy/required_components/%2FPJtNGVMMQhyM0zIvCJSkbA):
- Deprecate `SceneBundle` and `DynamicSceneBundle`.
- Add `SceneRoot` and `DynamicSceneRoot` components, which wrap a
`Handle<Scene>` and `Handle<DynamicScene>` respectively.
## Migration Guide
Asset handles for scenes and dynamic scenes must now be wrapped in the
`SceneRoot` and `DynamicSceneRoot` components. Raw handles as components
no longer spawn scenes.
Additionally, `SceneBundle` and `DynamicSceneBundle` have been
deprecated. Instead, use the scene components directly.
Previously:
```rust
let model_scene = asset_server.load(GltfAssetLabel::Scene(0).from_asset("model.gltf"));
commands.spawn(SceneBundle {
    scene: model_scene,
    transform: Transform::from_xyz(-4.0, 0.0, -3.0),
    ..default()
});
```
Now:
```rust
let model_scene = asset_server.load(GltfAssetLabel::Scene(0).from_asset("model.gltf"));
commands.spawn((
    SceneRoot(model_scene),
    Transform::from_xyz(-4.0, 0.0, -3.0),
));
```
# Objective
A big step in the migration to required components: meshes and
materials!
## Solution
As per the [selected
proposal](https://hackmd.io/@bevy/required_components/%2Fj9-PnF-2QKK0on1KQ29UWQ):
- Deprecate `MaterialMesh2dBundle`, `MaterialMeshBundle`, and
`PbrBundle`.
- Add `Mesh2d` and `Mesh3d` components, which wrap a `Handle<Mesh>`.
- Add `MeshMaterial2d<M: Material2d>` and `MeshMaterial3d<M: Material>`,
which wrap a `Handle<M>`.
- Meshes *without* a mesh material should be rendered with a default
material. The existence of a material is determined by
`HasMaterial2d`/`HasMaterial3d`, which is required by
`MeshMaterial2d`/`MeshMaterial3d`. This gets around problems with the
generics.
Previously:
```rust
commands.spawn(MaterialMesh2dBundle {
    mesh: meshes.add(Circle::new(100.0)).into(),
    material: materials.add(Color::srgb(7.5, 0.0, 7.5)),
    transform: Transform::from_translation(Vec3::new(-200., 0., 0.)),
    ..default()
});
```
Now:
```rust
commands.spawn((
    Mesh2d(meshes.add(Circle::new(100.0))),
    MeshMaterial2d(materials.add(Color::srgb(7.5, 0.0, 7.5))),
    Transform::from_translation(Vec3::new(-200., 0., 0.)),
));
```
If the mesh material is missing, previously nothing was rendered. Now,
it renders a white default `ColorMaterial` in 2D and a
`StandardMaterial` in 3D (this can be overridden). Below, only every
other entity has a material:
![Näyttökuva 2024-09-29
181746](https://github.com/user-attachments/assets/5c8be029-d2fe-4b8c-ae89-17a72ff82c9a)
![Näyttökuva 2024-09-29
181918](https://github.com/user-attachments/assets/58adbc55-5a1e-4c7d-a2c7-ed456227b909)
Why white? This is still open for discussion, but I think white makes
sense for a *default* material, while *invalid* asset handles pointing
to nothing should have something like a pink material to indicate that
something is broken (I don't handle that in this PR yet). This is kind
of a mix of Godot and Unity: Godot just renders a white material for
non-existent materials, while Unity renders nothing when no materials
exist, but renders pink for invalid materials. I can also change the
default material to pink if that is preferable though.
## Testing
I ran some 2D and 3D examples to test if anything changed visually. I
have not tested all examples or features yet however. If anyone wants to
test more extensively, it would be appreciated!
## Implementation Notes
- The relationship between `bevy_render` and `bevy_pbr` is weird here.
`bevy_render` needs `Mesh3d` for its own systems, but `bevy_pbr` has all
of the material logic, and `bevy_render` doesn't depend on it. I feel
like the two crates should be refactored in some way, but I think that's
out of scope for this PR.
- I didn't migrate meshlets to required components yet. That can
probably be done in a follow-up, as this is already a huge PR.
- It is becoming increasingly clear to me that we really, *really* want
to disallow raw asset handles as components. They caused me a *ton* of
headache here already, and it took me a long time to find every place
that queried for them or inserted them directly on entities, since there
were no compiler errors for it. If we don't remove the `Component`
derive, I expect raw asset handles to be a *huge* footgun for users as
we transition to wrapper components, especially as handles as components
have been the norm so far. I personally consider this to be a blocker
for 0.15: we need to migrate to wrapper components for asset handles
everywhere, and remove the `Component` derive. Also see
https://github.com/bevyengine/bevy/issues/14124.
---
## Migration Guide
Asset handles for meshes and mesh materials must now be wrapped in the
`Mesh2d` and `MeshMaterial2d` or `Mesh3d` and `MeshMaterial3d`
components for 2D and 3D respectively. Raw handles as components no
longer render meshes.
Additionally, `MaterialMesh2dBundle`, `MaterialMeshBundle`, and
`PbrBundle` have been deprecated. Instead, use the mesh and material
components directly.
Previously:
```rust
commands.spawn(MaterialMesh2dBundle {
    mesh: meshes.add(Circle::new(100.0)).into(),
    material: materials.add(Color::srgb(7.5, 0.0, 7.5)),
    transform: Transform::from_translation(Vec3::new(-200., 0., 0.)),
    ..default()
});
```
Now:
```rust
commands.spawn((
    Mesh2d(meshes.add(Circle::new(100.0))),
    MeshMaterial2d(materials.add(Color::srgb(7.5, 0.0, 7.5))),
    Transform::from_translation(Vec3::new(-200., 0., 0.)),
));
```
If the mesh material is missing, a white default material is now used.
Previously, nothing was rendered if the material was missing.
The `WithMesh2d` and `WithMesh3d` query filter type aliases have also
been removed. Simply use `With<Mesh2d>` or `With<Mesh3d>`.
---------
Co-authored-by: Tim Blackbird <justthecooldude@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Another part of the migration to required components: fog volumes!
## Solution
Deprecate `FogVolumeBundle` and make `FogVolume` require `Transform` and
`Visibility`, as per the [chosen
proposal](https://hackmd.io/@bevy/required_components/%2FcO7JPSAQR5G0J_j5wNwtOQ).
---
## Migration Guide
Replace all insertions of `FogVolumeBundle` with the `FogVolume`
component. The other components required by it will now be inserted
automatically.
# Objective
- This PR fixes#12488
## Solution
- This PR adds a new property to `Camera` that emulates the
functionality of the
[setViewOffset()](https://threejs.org/docs/#api/en/cameras/PerspectiveCamera.setViewOffset)
API in three.js.
- When set, the perspective and orthographic projections will restrict
the visible area of the camera to a part of the view frustum defined by
`offset` and `size`.
## Testing
- In the new `camera_sub_view` example, fixed, moving, and control sub
views are created for both perspective and orthographic projections
- Run the example with `cargo run --example camera_sub_view`
- The code can be tested by adding a `SubCameraView` to a camera
---
## Showcase
![image](https://github.com/user-attachments/assets/75ac45fc-d75d-4664-8ef6-ff7865297c25)
- Left Half: Perspective Projection
- Right Half: Orthographic Projection
- Small boxes in order:
- Sub view of the left half of the full image
- Sub view moving from the top left to the bottom right of the full
image
- Sub view of the full image (acting as a control)
- Large box: No sub view
<details>
<summary>Shortened camera setup of `camera_sub_view` example</summary>
```rust
// Main perspective Camera
commands.spawn(Camera3dBundle {
    transform,
    ..default()
});

// Perspective camera left half
commands.spawn(Camera3dBundle {
    camera: Camera {
        sub_camera_view: Some(SubCameraView {
            // Set the sub view camera to the left half of the full image
            full_size: uvec2(500, 500),
            offset: ivec2(0, 0),
            size: uvec2(250, 500),
        }),
        order: 1,
        ..default()
    },
    transform,
    ..default()
});

// Perspective camera moving
commands.spawn((
    Camera3dBundle {
        camera: Camera {
            sub_camera_view: Some(SubCameraView {
                // Set the sub view camera to a fifth of the full view and
                // move it in another system
                full_size: uvec2(500, 500),
                offset: ivec2(0, 0),
                size: uvec2(100, 100),
            }),
            order: 2,
            ..default()
        },
        transform,
        ..default()
    },
    MovingCameraMarker,
));

// Perspective camera control
commands.spawn(Camera3dBundle {
    camera: Camera {
        sub_camera_view: Some(SubCameraView {
            // Set the sub view to the full image, to ensure that it matches
            // the projection without sub view
            full_size: uvec2(450, 450),
            offset: ivec2(0, 0),
            size: uvec2(450, 450),
        }),
        order: 3,
        ..default()
    },
    transform,
    ..default()
});

// Main orthographic camera
commands.spawn(Camera3dBundle {
    projection: OrthographicProjection {
        ...
    }
    .into(),
    camera: Camera {
        order: 4,
        ..default()
    },
    transform,
    ..default()
});

// Orthographic camera left half
commands.spawn(Camera3dBundle {
    projection: OrthographicProjection {
        ...
    }
    .into(),
    camera: Camera {
        sub_camera_view: Some(SubCameraView {
            // Set the sub view camera to the left half of the full image
            full_size: uvec2(500, 500),
            offset: ivec2(0, 0),
            size: uvec2(250, 500),
        }),
        order: 5,
        ..default()
    },
    transform,
    ..default()
});

// Orthographic camera moving
commands.spawn((
    Camera3dBundle {
        projection: OrthographicProjection {
            ...
        }
        .into(),
        camera: Camera {
            sub_camera_view: Some(SubCameraView {
                // Set the sub view camera to a fifth of the full view and
                // move it in another system
                full_size: uvec2(500, 500),
                offset: ivec2(0, 0),
                size: uvec2(100, 100),
            }),
            order: 6,
            ..default()
        },
        transform,
        ..default()
    },
    MovingCameraMarker,
));

// Orthographic camera control
commands.spawn(Camera3dBundle {
    projection: OrthographicProjection {
        ...
    }
    .into(),
    camera: Camera {
        sub_camera_view: Some(SubCameraView {
            // Set the sub view to the full image, to ensure that it matches
            // the projection without sub view
            full_size: uvec2(450, 450),
            offset: ivec2(0, 0),
            size: uvec2(450, 450),
        }),
        order: 7,
        ..default()
    },
    transform,
    ..default()
});
```
</details>
# Objective
Another step in the migration to required components: lights!
Note that this does not include `EnvironmentMapLight` or reflection
probes yet, because their API hasn't been fully chosen yet.
## Solution
As per the [selected
proposals](https://hackmd.io/@bevy/required_components/%2FLLnzwz9XTxiD7i2jiUXkJg):
- Deprecate `PointLightBundle` in favor of the `PointLight` component
- Deprecate `SpotLightBundle` in favor of the `SpotLight` component
- Deprecate `DirectionalLightBundle` in favor of the `DirectionalLight`
component
## Testing
I ran some examples with lights.
---
## Migration Guide
`PointLightBundle`, `SpotLightBundle`, and `DirectionalLightBundle` have
been deprecated. Use the `PointLight`, `SpotLight`, and
`DirectionalLight` components instead. Adding them will now insert the
other components required by them automatically.
- Adopted from #14449
- Still fixes #12144.
## Migration Guide
The retained render world is a complex change: migrating might take one
of a few different forms depending on the patterns you're using.
For every example, we specify in which world the code is run. Most of
the changes affect render world code, so for the average Bevy user who's
using Bevy's high-level rendering APIs, these changes are unlikely to
affect your code.
### Spawning entities in the render world
Previously, if you spawned an entity with `world.spawn(...)`,
`commands.spawn(...)` or some other method in the rendering world, it
would be despawned at the end of each frame. In 0.15, this is no longer
the case and so your old code could leak entities. This can be mitigated
by either re-architecting your code to no longer continuously spawn
entities (like you're used to in the main world), or by adding the
`bevy_render::world_sync::TemporaryRenderEntity` component to the entity
you're spawning. Entities tagged with `TemporaryRenderEntity` will be
removed at the end of each frame (like before).
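For the second option, a minimal sketch (`MyRenderData` stands in for whatever render world component you were spawning):
```rust
// render world: this entity is despawned again at the end of the frame,
// matching the pre-0.15 behavior
commands.spawn((MyRenderData, TemporaryRenderEntity));
```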
### Extract components with `ExtractComponentPlugin`
```rust
// main world
app.add_plugins(ExtractComponentPlugin::<ComponentToExtract>::default());
```
`ExtractComponentPlugin` has been changed to only work with synced
entities. Entities are automatically synced if `ComponentToExtract` is
added to them. However, entities are not "unsynced" if any given
`ComponentToExtract` is removed, because an entity may have multiple
components to extract. This would cause the other components to no
longer get extracted because the entity is not synced.
So be careful when removing extracted components from entities in the
main world: the synced entity can be left behind in the render world.
The solution here is to avoid only removing extracted components
and instead despawn the entire entity.
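In other words (a rough sketch, continuing the `ComponentToExtract` example):
```rust
// main world
// Removing only the extracted component leaves the synced render world entity behind:
commands.entity(entity).remove::<ComponentToExtract>();

// Prefer despawning the whole entity once it is no longer needed:
commands.entity(entity).despawn();
```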
### Manual extraction using `Extract<Query<(Entity, ...)>>`
```rust
// in render world, inspired by bevy_pbr/src/cluster/mod.rs
pub fn extract_clusters(
    mut commands: Commands,
    views: Extract<Query<(Entity, &Clusters, &Camera)>>,
) {
    for (entity, clusters, camera) in &views {
        // some code
        commands.get_or_spawn(entity).insert(...);
    }
}
```
One of the primary consequences of the retained rendering world is that
there's no longer a one-to-one mapping from entity IDs in the main world
to entity IDs in the render world. Unlike in Bevy 0.14, entity 42 in the
main world doesn't necessarily map to entity 42 in the render world.
Previous code that called `get_or_spawn(main_world_entity)` in the
render world therefore no longer works as intended
(`Extract<Query<(Entity, ...)>>` returns main world entities). Instead,
you should query for `&RenderEntity` and use `render_entity.id()` to get
the correct entity in the render world. Note that this entity does need
to be synced first in order to have a `RenderEntity`.
When performing manual extraction, this won't happen automatically
(like with `ExtractComponentPlugin`), so add a `SyncToRenderWorld` marker
component to the entities you want to extract.
This results in the following code:
```rust
// in render world, inspired by bevy_pbr/src/cluster/mod.rs
pub fn extract_clusters(
    mut commands: Commands,
    views: Extract<Query<(&RenderEntity, &Clusters, &Camera)>>,
) {
    for (render_entity, clusters, camera) in &views {
        // some code
        commands.get_or_spawn(render_entity.id()).insert(...);
    }
}

// in main world, when spawning
world.spawn((Clusters::default(), Camera::default(), SyncToRenderWorld));
```
### Looking up `Entity` ids in the render world
As previously stated, there's now no correspondence between main world
and render world `Entity` identifiers.
Querying for `Entity` in the render world will return the `Entity` id in
the render world: query for `MainEntity` (and use its `id()` method) to
get the corresponding entity in the main world.
This is also a good way to tell the difference between synced and
unsynced entities in the render world, because unsynced entities won't
have a `MainEntity` component.
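For example (a sketch of a render world query; `SomeRenderComponent` is a placeholder):
```rust
// render world
fn log_entity_mapping(query: Query<(Entity, &MainEntity), With<SomeRenderComponent>>) {
    for (render_entity, main_entity) in &query {
        // `render_entity` is the render world id, `main_entity.id()` the main world id
        info!("render {render_entity:?} <- main {:?}", main_entity.id());
    }
}
```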
---------
Co-authored-by: re0312 <re0312@outlook.com>
Co-authored-by: re0312 <45868716+re0312@users.noreply.github.com>
Co-authored-by: Periwink <charlesbour@gmail.com>
Co-authored-by: Anselmo Sampietro <ans.samp@gmail.com>
Co-authored-by: Emerson Coskey <56370779+ecoskey@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Christian Hughes <9044780+ItsDoot@users.noreply.github.com>
# Objective
`ui_stack_system` generates a tree of `StackingContexts` which it then
flattens to get the `UiStack`.
But there's no need to construct a new tree. We can query for nodes with
a global `ZIndex`, add those nodes to the root nodes list and then build
the `UiStack` from a walk of the existing layout tree, ignoring any
branches that have a global `ZIndex`.
Fixes #9877
## Solution
Split the `ZIndex` enum into two separate components, `ZIndex` and
`GlobalZIndex`
Query for nodes with a `GlobalZIndex`, add those nodes to the root nodes
list and then build the `UiStack` from a walk of the existing layout
tree, filtering branches by `Without<GlobalZIndex>` so we don't revisit
nodes.
```
cargo run --profile stress-test --features trace_tracy --example many_buttons
```
<img width="672" alt="ui-stack-system-walk-split-enum"
src="https://github.com/bevyengine/bevy/assets/27962798/11e357a5-477f-4804-8ada-c4527c009421">
(Yellow is this PR, red is main)
---
## Changelog
`ZIndex`
* The `ZIndex` enum has been split into two separate components `ZIndex`
(which replaces `ZIndex::Local`) and `GlobalZIndex` (which replaces
`ZIndex::Global`). An entity can have both a `ZIndex` and
`GlobalZIndex`, in comparisons `ZIndex` breaks ties if two
`GlobalZIndex` values are equal.
`ui_stack_system`
* Instead of generating a tree of `StackingContexts`, query for nodes
with a `GlobalZIndex`, add those nodes to the root nodes list and then
build the `UiStack` from a walk of the existing layout tree, filtering
branches by `Without<GlobalZIndex>` so we don't revisit nodes.
## Migration Guide
The `ZIndex` enum has been split into two separate components `ZIndex`
(which replaces `ZIndex::Local`) and `GlobalZIndex` (which replaces
`ZIndex::Global`). An entity can have both a `ZIndex` and
`GlobalZIndex`, in comparisons `ZIndex` breaks ties if two
`GlobalZIndex` values are equal.
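For example (a minimal sketch, with the new types as plain `i32` wrappers):
```rust
// before
commands.spawn((NodeBundle::default(), ZIndex::Local(2)));
commands.spawn((NodeBundle::default(), ZIndex::Global(5)));

// after
commands.spawn((NodeBundle::default(), ZIndex(2)));
commands.spawn((NodeBundle::default(), GlobalZIndex(5)));
```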
---------
Co-authored-by: Gabriel Bourgeois <gabriel.bourgeoisv4si@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: UkoeHB <37489173+UkoeHB@users.noreply.github.com>
# Objective
- fixes https://github.com/bevyengine/bevy/issues/13473
## Solution
- When a single mesh is assigned multiple materials, it is divided into
several primitive nodes, with each primitive assigned a unique material.
Presently, these primitives are named using the format Mesh.index, which
complicates querying. To improve this, we can assign a specific name to
each primitive based on the material’s name, since each primitive
corresponds to one material exclusively.
## Testing
- I have included a simple example which shows how to query a material
and mesh part based on the new name component.
## Changelog
- adds `GltfMaterialName` component to the mesh entity of the gltf
primitive node.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
* Save 16 bytes per vertex by calculating tangents in the shader at
runtime, rather than storing them in the vertex data.
* Based on https://jcgt.org/published/0009/03/04,
https://www.jeremyong.com/graphics/2023/12/16/surface-gradient-bump-mapping.
* Fixed visbuffer resolve to use the updated algorithm that flips ddy
correctly
* Added some more docs about meshlet material limitations, and some
TODOs about transforming UV coordinates for the future.
![image](https://github.com/user-attachments/assets/222d8192-8c82-4d77-945d-53670a503761)
For testing add a normal map to the bunnies with StandardMaterial like
below, and then test that on both main and this PR (make sure to
download the correct bunny for each). Results should be mostly
identical.
```rust
normal_map_texture: Some(asset_server.load_with_settings(
    "textures/BlueNoise-Normal.png",
    |settings: &mut ImageLoaderSettings| settings.is_srgb = false,
)),
```
# Objective
Fixes the confusion that caused #5660
## Solution
Make it clear that it is the hardware which doesn't support the format
and not bevy's fault.
# Objective
- Fixes #15319
- Fixes #15317
## Solution
- Updated CI task to check for _any_ `bevy_*` crates, rather than just
`bevy_internal`
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
[*Percentage-closer soft shadows*] are a technique from 2004 that allow
shadows to become blurrier farther from the objects that cast them. It
works by introducing a *blocker search* step that runs before the normal
shadow map sampling. The blocker search step detects the difference
between the depth of the fragment being rasterized and the depth of the
nearby samples in the depth buffer. Larger depth differences result in a
larger penumbra and therefore a blurrier shadow.
To enable PCSS, fill in the `soft_shadow_size` value in
`DirectionalLight`, `PointLight`, or `SpotLight`, as appropriate. This
shadow size value represents the size of the light and should be tuned
as appropriate for your scene. Higher values result in a wider penumbra
(i.e. blurrier shadows).
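Enabling it might look like this (a sketch; `soft_shadow_size` is the new per-light field, assumed here to be an `Option<f32>` in world units):
```rust
commands.spawn(DirectionalLight {
    shadows_enabled: true,
    // Size of the light: larger values give a wider, blurrier penumbra.
    soft_shadow_size: Some(1.0),
    ..default()
});
```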
When using PCSS, temporal shadow maps
(`ShadowFilteringMethod::Temporal`) are recommended. If you don't use
`ShadowFilteringMethod::Temporal` and instead use
`ShadowFilteringMethod::Gaussian`, Bevy will use the same technique as
`Temporal`, but the result won't vary over time. This produces a rather
noisy result. Doing better would likely require downsampling the shadow
map, which would be complex and slower (and would require PR #13003 to
land first).
In addition to PCSS, this commit makes the near Z plane for the shadow
map configurable on a per-light basis. Previously, it had been hardcoded
to 0.1 meters. This change was necessary to make the point light shadow
map in the example look reasonable, as otherwise the shadows appeared
far too aliased.
A new example, `pcss`, has been added. It demonstrates the
percentage-closer soft shadow technique with directional lights, point
lights, spot lights, non-temporal operation, and temporal operation. The
assets are my original work.
Both temporal and non-temporal shadows are rather noisy in the example,
and, as mentioned before, this is unavoidable without downsampling the
depth buffer, which we can't do yet. Note also that the shadows don't
look particularly great for point lights; the example simply isn't an
ideal scene for them. Nevertheless, I felt that the benefits of the
ability to do a side-by-side comparison of directional and point lights
outweighed the unsightliness of the point light shadows in that example,
so I kept the point light feature in.
Fixes #3631.
[*Percentage-closer soft shadows*]:
https://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf
## Changelog
### Added
* Percentage-closer soft shadows (PCSS) are now supported, allowing
shadows to become blurrier as they stretch away from objects. To use
them, set the `soft_shadow_size` field in `DirectionalLight`,
`PointLight`, or `SpotLight`, as applicable.
* The near Z value for shadow maps is now customizable via the
`shadow_map_near_z` field in `DirectionalLight`, `PointLight`, and
`SpotLight`.
## Screenshots
PCSS off:
![Screenshot 2024-05-24
120012](https://github.com/bevyengine/bevy/assets/157897/0d35fe98-245b-44fb-8a43-8d0272a73b86)
PCSS on:
![Screenshot 2024-05-24
115959](https://github.com/bevyengine/bevy/assets/157897/83397ef8-1317-49dd-bfb3-f8286d7610cd)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
# Objective
- Fixes #15236
## Solution
- Use bevy_math::ops instead of std floating point operations.
## Testing
- Did you test these changes? If so, how?
Unit tests and `cargo run -p ci -- test`
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
Execute `cargo run -p ci -- test` on Windows.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
Windows
## Migration Guide
- Not a breaking change
- Projects should use bevy math where applicable
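For example (a sketch of the pattern; `ops` provides free-function equivalents of the `std` float methods):
```rust
use bevy_math::ops;

// before
let y = x.sin().powf(2.0);

// after
let y = ops::powf(ops::sin(x), 2.0);
```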
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IQuick 143 <IQuick143cz@gmail.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
The names of numerous rendering components in Bevy are inconsistent and
a bit confusing. Relevant names include:
- `AutoExposureSettings`
- `AutoExposureSettingsUniform`
- `BloomSettings`
- `BloomUniform` (no `Settings`)
- `BloomPrefilterSettings`
- `ChromaticAberration` (no `Settings`)
- `ContrastAdaptiveSharpeningSettings`
- `DepthOfFieldSettings`
- `DepthOfFieldUniform` (no `Settings`)
- `FogSettings`
- `SmaaSettings`, `Fxaa`, `TemporalAntiAliasSettings` (really
inconsistent??)
- `ScreenSpaceAmbientOcclusionSettings`
- `ScreenSpaceReflectionsSettings`
- `VolumetricFogSettings`
Firstly, there's a lot of inconsistency between `Foo`/`FooSettings` and
`FooUniform`/`FooSettingsUniform` and whether names are abbreviated or
not.
Secondly, the `Settings` post-fix seems unnecessary and a bit confusing
semantically, since it makes it seem like the component is mostly just
auxiliary configuration instead of the core *thing* that actually
enables the feature. This will be an even bigger problem once bundles
like `TemporalAntiAliasBundle` are deprecated in favor of required
components, as users will expect a component named `TemporalAntiAlias`
(or similar), not `TemporalAntiAliasSettings`.
## Solution
Drop the `Settings` post-fix from the component names, and change some
names to be more consistent.
- `AutoExposure`
- `AutoExposureUniform`
- `Bloom`
- `BloomUniform`
- `BloomPrefilter`
- `ChromaticAberration`
- `ContrastAdaptiveSharpening`
- `DepthOfField`
- `DepthOfFieldUniform`
- `DistanceFog`
- `Smaa`, `Fxaa`, `TemporalAntiAliasing` (note: we might want to change
to `Taa`, see "Discussion")
- `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflections`
- `VolumetricFog`
I kept the old names as deprecated type aliases to make migration a bit
less painful for users. We should remove them after the next release.
(And let me know if I should just... not add them at all)
I also added some very basic docs for a few types where they were
missing, like on `Fxaa` and `DepthOfField`.
## Discussion
- `TemporalAntiAliasing` is still inconsistent with `Smaa` and `Fxaa`.
Consensus [on
Discord](https://discord.com/channels/691052431525675048/743663924229963868/1280601167209955431)
seemed to be that renaming to `Taa` would probably be fine, but I think
it's a bit more controversial, and it would've required renaming a lot
of related types like `TemporalAntiAliasNode`,
`TemporalAntiAliasBundle`, and `TemporalAntiAliasPlugin`, so I think
it's better to leave to a follow-up.
- I think `Fog` should probably have a more specific name like
`DistanceFog` considering it seems to be distinct from `VolumetricFog`.
~~This should probably be done in a follow-up though, so I just removed
the `Settings` post-fix for now.~~ (done)
---
## Migration Guide
Many rendering components have been renamed for improved consistency and
clarity.
- `AutoExposureSettings` → `AutoExposure`
- `BloomSettings` → `Bloom`
- `BloomPrefilterSettings` → `BloomPrefilter`
- `ContrastAdaptiveSharpeningSettings` → `ContrastAdaptiveSharpening`
- `DepthOfFieldSettings` → `DepthOfField`
- `FogSettings` → `DistanceFog`
- `SmaaSettings` → `Smaa`
- `TemporalAntiAliasSettings` → `TemporalAntiAliasing`
- `ScreenSpaceAmbientOcclusionSettings` → `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflectionsSettings` → `ScreenSpaceReflections`
- `VolumetricFogSettings` → `VolumetricFog`
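For example (a minimal sketch):
```rust
// before
commands.spawn((Camera3dBundle::default(), BloomSettings::default()));

// after
commands.spawn((Camera3dBundle::default(), Bloom::default()));
```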
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Hello! I am adopting #11022 to resolve conflicts with `main`. tldr: this
removes `scale` in favour of `scaling_mode`. Please see the original PR
for explanation/discussion.
Also relates to #2580.
## Migration Guide
Replace all uses of `scale` with `scaling_mode`, keeping in mind that
`scale` is (was) a multiplier. For example, replace
```rust
scale: 2.0,
scaling_mode: ScalingMode::FixedHorizontal(4.0),
```
with
```rust
scaling_mode: ScalingMode::FixedHorizontal(8.0),
```
---------
Co-authored-by: Stepan Koltsov <stepan.koltsov@gmail.com>
Adopted PR from dmlary, all credit to them!
https://github.com/bevyengine/bevy/pull/9915
Original description:
# Objective
The default value for `near` in `OrthographicProjection` should be
different for 2d & 3d.
For 2d using `near = -1000` allows bevy users to build up scenes using
background `z = 0`, and foreground elements `z > 0`, similar to CSS.
However in 3d `near = -1000` results in objects behind the camera being
rendered. Using `near = 0` works for 3d, but forces 2d users to assign
`z <= 0` for rendered elements, putting the background at some arbitrary
negative value.
There is no common value for `near` that doesn't result in a footgun or
usability issue for either 2d or 3d, so they should have separate
values.
There was discussion about other options in the discord
[0](https://discord.com/channels/691052431525675048/1154114310042292325),
but splitting `default()` into `default_2d()` and `default_3d()` seemed
like the lowest cost approach.
Related/past work https://github.com/bevyengine/bevy/issues/9138,
https://github.com/bevyengine/bevy/pull/9214,
https://github.com/bevyengine/bevy/pull/9310,
https://github.com/bevyengine/bevy/pull/9537 (thanks to @Selene-Amanita
for the list)
## Solution
This commit splits `OrthographicProjection::default` into `default_2d`
and `default_3d`.
## Migration Guide
- In initialization of `OrthographicProjection`, change `..default()` to
`..OrthographicProjection::default_2d()` or
`..OrthographicProjection::default_3d()`
Example:
```diff
--- a/examples/3d/orthographic.rs
+++ b/examples/3d/orthographic.rs
@@ -20,7 +20,7 @@ fn setup(
projection: OrthographicProjection {
scale: 3.0,
scaling_mode: ScalingMode::FixedVertical(2.0),
- ..default()
+ ..OrthographicProjection::default_3d()
}
.into(),
transform: Transform::from_xyz(5.0, 5.0, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
```
---------
Co-authored-by: David M. Lary <dmlary@gmail.com>
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
### Builder changes
- Increased meshlet max vertices/triangles from 64v/64t to 255v/128t
(meshoptimizer won't allow 256v sadly). This gives us a much greater
percentage of meshlets with max triangle count (128). Still not perfect,
we still end up with some tiny <=10 triangle meshlets that never really
get simplified, but it's progress.
- Removed the error target limit. Now we allow meshoptimizer to simplify
as much as possible. No reason to cap this out, as the cluster culling
code will choose a good LOD level anyways. Again leads to higher quality
LOD trees.
- After some discussion and consulting the Nanite slides again, changed
meshlet group error from _adding_ the max child's error to the group
error, to doing `group_error = max(group_error, max_child_error)`. Error
is already cumulative between LODs as the edges we're collapsing during
simplification get longer each time.
- Bumped the 65% simplification threshold to allow up to 95% of the
original geometry (e.g. accept simplification as valid even if we only
simplified 5% of the triangles). This gives us closer to
log2(initial_meshlet_count) LOD levels, and fewer meshlet roots in the
DAG.
Still more work to be done in the future here. Maybe trying METIS for
meshlet building instead of meshoptimizer.
Using ~8 clusters per group instead of ~4 might also make a big
difference. The Nanite slides say that they have 8-32 meshlets per
group, suggesting some kind of heuristic. Unfortunately meshopt's
compute_cluster_bounds won't work with large groups atm
(https://github.com/zeux/meshoptimizer/discussions/750#discussioncomment-10562641)
so hard to test.
Based on discussion from
https://github.com/bevyengine/bevy/discussions/14998,
https://github.com/zeux/meshoptimizer/discussions/750, and discord.
### Runtime changes
- cluster:triangle packed IDs are now stored 25:7 instead of 26:6 bits,
as max triangles per cluster are now 128 instead of 64 (see the packing
sketch after this list)
- Hardware raster now spawns 128 * 3 vertices instead of 64 * 3 vertices
to account for the new max triangles limit
- Hardware raster now outputs NaN triangles (0 / 0) instead of
zero-positioned triangles for extra vertex invocations over the cluster
triangle count. I don't think it should make a difference, but I did it
anyway.
- Software raster now does 128 threads per workgroup instead of 64
threads. Each thread now loads, projects, and caches a vertex (vertices
0-127), and then if needed does so again (vertices 128-254). Each thread
then rasterizes one of 128 triangles.
- Fixed a bug with `needs_dispatch_remap`. I had the condition backwards
in my last PR, I probably committed it by accident after testing the
non-default code path on my GPU.
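For reference, the new ID packing looks roughly like this (a sketch in Rust syntax, not the actual shader code):
```rust
/// Pack a cluster id (25 bits) and a triangle id (7 bits, 0..128) into one u32.
fn pack_cluster_triangle(cluster_id: u32, triangle_id: u32) -> u32 {
    debug_assert!(cluster_id < (1 << 25) && triangle_id < (1 << 7));
    (cluster_id << 7) | triangle_id
}

/// And unpack it again.
fn unpack_cluster_triangle(packed: u32) -> (u32, u32) {
    (packed >> 7, packed & 0x7F)
}
```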
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/14593.
## Solution
- Add `ViewportConversionError` and return it from viewport conversion
methods on Camera.
## Testing
- I successfully compiled and ran all changed examples.
## Migration Guide
The following methods on `Camera` now return a `Result` instead of an
`Option` so that they can provide more information about failures:
- `world_to_viewport`
- `world_to_viewport_with_depth`
- `viewport_to_world`
- `viewport_to_world_2d`
Call `.ok()` on the `Result` to turn it back into an `Option`, or handle
the `Result` directly.
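For example (a sketch):
```rust
// before
if let Some(screen_pos) = camera.world_to_viewport(camera_transform, world_pos) {
    // ...
}

// after: handle the Result, or call `.ok()` to keep the old Option-based flow
if let Ok(screen_pos) = camera.world_to_viewport(camera_transform, world_pos) {
    // ...
}
```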
---------
Co-authored-by: Lixou <82600264+DasLixou@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
Thought I had found all of these... noticed some `10px` in #15013 and
did another sweep.
Continuation of #8478, #13583.
## Solution
- Position example text (and other elements) 12px from the edge of the
screen
# Objective
- Solves the last bullet in and closes #14319
- Make better use of the `Isometry` types
- Prevent issues like #14655
- Probably simplify and clean up a lot of code through the use of Gizmos
as well (i.e. the 3D gizmos for cylinders, circles & lines don't connect
well, probably due to wrong rotations)
## Solution
- go through the `bevy_gizmos` crate and give all methods a slight
workover
## Testing
- For all the changed examples I run `git switch main && cargo rr
--example <X> && git switch <BRANCH> && cargo rr --example <X>` and
compare the visual results
- Check if all doc tests are still compiling
- Check the docs in general and update them !!!
---
## Migration Guide
The gizmo methods' function signatures change as follows:
- 2D
- if it took `position` & `rotation_angle` before ->
`Isometry2d::new(position, Rot2::radians(rotation_angle))`
- if it just took `position` before ->
`Isometry2d::from_translation(position)`
- 3D
- if it took `position` & `rotation` before ->
`Isometry3d::new(position, rotation)`
- if it just took `position` before ->
`Isometry3d::from_translation(position)`
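For example, for some 2D gizmos (a sketch):
```rust
// before
gizmos.circle_2d(position, radius, color);
gizmos.rect_2d(position, rotation_angle, size, color);

// after
gizmos.circle_2d(Isometry2d::from_translation(position), radius, color);
gizmos.rect_2d(Isometry2d::new(position, Rot2::radians(rotation_angle)), size, color);
```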
# Objective
Fixes #14883
## Solution
Pretty simple update to `EntityCommands` methods to consume `self` and
return it rather than taking `&mut self`. The things probably worth
noting:
* I added `#[allow(clippy::should_implement_trait)]` to the `add` method
because it causes a linting conflict with `std::ops::Add`.
* `despawn` and `log_components` now return `Self`. I'm not sure if
that's exactly the desired behavior so I'm happy to adjust if that seems
wrong.
## Testing
Tested with `cargo run -p ci`. I think that should be sufficient to call
things good.
## Migration Guide
The most likely migration needed is changing code from this:
```rust
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
    entity.insert(DepthPrepass);
}
if normal_prepass {
    entity.insert(NormalPrepass);
}
if motion_vector_prepass {
    entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
    entity.insert(DeferredPrepass);
}
```
to this:
```rust
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
    entity = entity.insert(DepthPrepass);
}
if normal_prepass {
    entity = entity.insert(NormalPrepass);
}
if motion_vector_prepass {
    entity = entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
    entity.insert(DeferredPrepass);
}
```
as can be seen in several of the example code updates here. There will
probably also be instances where mutable `EntityCommands` vars no longer
need to be mutable.
# Objective
- Faster meshlet rasterization path for small triangles
- Avoid having to allocate and write out a triangle buffer
- Refactor gpu_scene.rs
## Solution
- Replace the 32bit visbuffer texture with a 64bit visbuffer buffer,
where the left 32 bits encode depth, and the right 32 bits encode the
existing cluster + triangle IDs (see the packing sketch after this
list). Can't use 64bit textures, as wgpu/naga doesn't support atomic ops
on textures yet.
- Instead of writing out a buffer of packed cluster + triangle IDs (per
triangle) to raster, the culling pass now writes out a buffer of just
cluster IDs (per cluster, so less memory allocated, cheaper to write
out).
- Clusters for software raster are allocated from the left side
- Clusters for hardware raster are allocated in the same buffer, from
the right side
- The buffer size is fixed at MeshletPlugin build time, and should be
set to a reasonable value for your scene (no warning on overflow, and no
good way to determine what value you need outside of renderdoc - I plan
to fix this in a future PR adding a meshlet stats overlay)
- Currently I don't have a heuristic for software vs hardware raster
selection for each cluster. The existing code is just a placeholder. I
need to profile on a release scene and come up with a heuristic,
probably in a future PR.
- The culling shader is getting pretty hard to follow at this point, but
I don't want to spend time improving it as the entire shader/pass is
getting rewritten/replaced in the near future.
- Software raster is a compute workgroup per-cluster. Each workgroup
loads and transforms the <=64 vertices of the cluster, and then
rasterizes the <=64 triangles of the cluster.
- Two variants are implemented: Scanline for clusters with any larger
triangles (still smaller than hardware is good at), and brute-force for
very very tiny triangles
- Once the shader determines that a pixel should be filled in, it does
an atomicMax() on the visbuffer to store the results, copying how Nanite
works
- On devices with a low max workgroups per dispatch limit, an extra
compute pass is inserted before software raster to convert from a 1d to
2d dispatch (I don't think 3d would ever be necessary).
- I haven't implemented the top-left rule or subpixel precision yet, I'm
leaving that for a future PR since I get usable results without it for
now
- Resources used:
https://kristoffer-dyrkorn.github.io/triangle-rasterizer and chapters
6-8 of
https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index
- Hardware raster now spawns 64*3 vertex invocations per meshlet,
instead of the actual meshlet vertex count. Extra invocations just
early-exit.
- While this is slower than the existing system, hardware draws should
be rare now that software raster is usable, and it saves a ton of memory
using the unified cluster ID buffer. This would be fixed if wgpu had
support for mesh shaders.
- Instead of writing to a color+depth attachment, the hardware raster
pass also does the same atomic visbuffer writes that software raster
uses.
- We have to bind a dummy render target anyways, as wgpu doesn't
currently support render passes without any attachments
- Material IDs are no longer written out during the main rasterization
passes.
- If we had async compute queues, we could overlap the software and
hardware raster passes.
- New material and depth resolve passes run at the end of the visbuffer
node, and write out view depth and material ID depth textures
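Packing depth into the high bits (see the first bullet above) lets a single `atomicMax` act as both the depth test and the write; roughly (a sketch in Rust syntax, the real code is WGSL):
```rust
/// Illustrative only: pack depth into the high 32 bits so that atomicMax keeps
/// the fragment with the numerically largest depth (the closest fragment under
/// a reverse-Z convention), i.e. the depth test comes for free.
fn pack_visbuffer(depth_bits: u32, cluster_and_triangle_id: u32) -> u64 {
    ((depth_bits as u64) << 32) | cluster_and_triangle_id as u64
}
```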
### Misc changes
- Fixed cluster culling importing, but never actually using, the previous
view uniforms when doing occlusion culling
- Fixed incorrectly adding the LOD error twice when building the meshlet
mesh
- Split up the gpu_scene module into meshlet_mesh_manager, instance_manager,
and resource_manager
- resource_manager is still too complex and inefficient (extract and
prepare are way too expensive). I plan on improving this in a future PR,
but for now ResourceManager is mostly a 1:1 port of the leftover
MeshletGpuScene bits.
- Material draw passes have been renamed to the more accurate material
shade pass, as well as some other misc renaming (in the future, these
will be compute shaders even, and not actual draw calls)
---
## Migration Guide
- TBD (ask me at the end of the release for meshlet changes as a whole)
---------
Co-authored-by: vero <email@atlasdostal.com>
# Objective
- The examples use a more verbose way than necessary to initialize the
image
- The order of the camera doesn't need to be specified. At least I
didn't see a difference in my testing
## Solution
- Use `Image::new_fill()` to fill the image instead of abusing
`resize()` (see the sketch below)
- Remove the camera ordering
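The new pattern looks roughly like this (a sketch; the exact size, fill color, and format depend on the example):
```rust
let image = Image::new_fill(
    Extent3d {
        width: 512,
        height: 512,
        depth_or_array_layers: 1,
    },
    TextureDimension::D2,
    // every texel is initialized to this value
    &[0, 0, 0, 255],
    TextureFormat::Bgra8UnormSrgb,
    RenderAssetUsages::default(),
);
```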
# Objective
- There is a flaw in the implementation of `FogVolume`'s
`density_texture_offset` from #14868. Because of the way I am wrapping
the UVW coordinates in the volumetric fog shader, a seam is visible when
the 3d texture is wrapping around from one side to the other:
![density_texture_offset_seam](https://github.com/user-attachments/assets/89527ef2-5e1b-4b90-8e73-7a3e607697d4)
## Solution
- This PR fixes the issue by removing the wrapping from the shader and
instead leaving it to the user to configure the 3d noise texture to use
`ImageAddressMode::Repeat` if they want it to repeat. Using
`ImageAddressMode::Repeat` is the proper solution to avoid the obvious
seam:
![density_texture_seam_fixed](https://github.com/user-attachments/assets/06e871a6-2db1-4501-b425-4141605f9b26)
- The sampler cannot be implicitly configured to use
`ImageAddressMode::Repeat` because that's not always desirable. For
example, the `fog_volumes` example wouldn't work properly because the
texture from the edges of the volume would overflow to the other sides,
which would be bad in this instance (but it's good in the case of the
`scrolling_fog` example). So leaving it to the user to decide on their
own whether they want the density texture to repeat seems to be the best
solution.
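Configuring the texture to repeat at load time looks roughly like this (a sketch of the `scrolling_fog`-style setup):
```rust
let density_texture = asset_server.load_with_settings(
    "volumes/fog_noise.ktx2",
    |settings: &mut ImageLoaderSettings| {
        settings.sampler = ImageSampler::Descriptor(ImageSamplerDescriptor {
            address_mode_u: ImageAddressMode::Repeat,
            address_mode_v: ImageAddressMode::Repeat,
            address_mode_w: ImageAddressMode::Repeat,
            ..default()
        });
    },
);
```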
## Testing
- The `scrolling_fog` example still looks the same, it was just changed
to explicitly declare that the density texture should be repeating when
loading the asset. The `fog_volumes` example is unaffected.
<details>
<summary>Minimal reproduction example on current main</summary>
```rust
use bevy::core_pipeline::experimental::taa::{TemporalAntiAliasBundle, TemporalAntiAliasPlugin};
use bevy::pbr::{FogVolume, VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins((DefaultPlugins, TemporalAntiAliasPlugin))
        .add_systems(Startup, setup)
        .run();
}

fn setup(mut commands: Commands, assets: Res<AssetServer>) {
    commands.spawn((
        Camera3dBundle {
            transform: Transform::from_xyz(3.5, -1.0, 0.4)
                .looking_at(Vec3::new(0.0, 0.0, 0.4), Vec3::Y),
            msaa: Msaa::Off,
            ..default()
        },
        TemporalAntiAliasBundle::default(),
        VolumetricFogSettings {
            ambient_intensity: 0.0,
            jitter: 0.5,
            ..default()
        },
    ));

    commands.spawn((
        DirectionalLightBundle {
            transform: Transform::from_xyz(-6.0, 5.0, -9.0)
                .looking_at(Vec3::new(0.0, 0.0, 0.0), Vec3::Y),
            directional_light: DirectionalLight {
                illuminance: 32_000.0,
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));

    commands.spawn((
        SpatialBundle {
            visibility: Visibility::Visible,
            transform: Transform::from_xyz(0.0, 0.0, 0.0).with_scale(Vec3::splat(3.0)),
            ..default()
        },
        FogVolume {
            density_texture: Some(assets.load("volumes/fog_noise.ktx2")),
            density_texture_offset: Vec3::new(0.0, 0.0, 0.4),
            scattering: 1.0,
            ..default()
        },
    ));
}
```
</details>
# Objective
- The goal of this PR is to make it possible to move the density texture
of a `FogVolume` over time in order to create dynamic effects like fog
moving in the wind.
- You could theoretically move the `FogVolume` itself, but this is not
ideal, because the `FogVolume` AABB would eventually leave the area. If
you want an area to remain foggy while also creating the impression that
the fog is moving in the wind, a scrolling density texture is a better
solution.
## Solution
- The PR adds a `density_texture_offset` field to the `FogVolume`
component. This offset is in the UVW coordinates of the density texture,
meaning that a value of `(0.5, 0.0, 0.0)` moves the 3d texture by half
along the x-axis.
- Values above 1.0 are wrapped; a 1.5 offset is the same as a 0.5
offset. This makes it so that the density texture wraps around on the
other side, meaning that a repeating 3d noise texture can seamlessly
scroll forever. It also makes it easy to move the density texture over
time by simply increasing the offset every frame.
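For example, scrolling the texture over time might look like this (a sketch; the actual example differs):
```rust
fn scroll_fog(mut volumes: Query<&mut FogVolume>) {
    for mut volume in &mut volumes {
        // Nudge the density texture along +X each frame; offsets wrap past 1.0,
        // so a repeating texture scrolls seamlessly forever.
        volume.density_texture_offset += Vec3::new(0.002, 0.0, 0.0);
    }
}
```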
## Testing
- A `scrolling_fog` example has been added to demonstrate the feature.
It uses the offset to scroll a repeating 3d noise density texture to
create the impression of fog moving in the wind.
- The camera is looking at a pillar with the sun peeking behind it. This
highlights the effect the changing density has on the volumetric
lighting interactions.
- Temporal anti-aliasing combined with the `jitter` option of
`VolumetricFogSettings` is used to improve the quality of the effect.
---
## Showcase
https://github.com/user-attachments/assets/3aa50ebd-771c-4c99-ab5d-255c0c3be1a8
# Objective
Fixes #14782
## Solution
Enable the lint and fix all upcoming hints (`--fix`). Also tried to
figure out the false-positive (see review comment). Maybe split this PR
up into multiple parts where only the last one enables the lint, so some
can already be merged, resulting in fewer files touched / less
potential for merge conflicts?
Currently, there are some cases where it might be easier to read the
code with the qualifier, so perhaps remove the import of it and adapt
its cases? In the current stage it's just a plain adoption of the
suggestions in order to have a base to discuss.
## Testing
`cargo clippy` and `cargo run -p ci` are happy.
# Objective
- Fixes #14595
## Solution
- Use `num_cascades: 1` in WebGL build.
`CascadeShadowConfigBuilder::default()` gives this number in WebGL:
8235daaea0/crates/bevy_pbr/src/light/mod.rs (L241-L248)
## Testing
- Tested the modified example in WebGL with Firefox/Chrome
---------
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
# Objective
- The meshlet example has been broken since #14273
## Solution
- disable msaa in meshlet example
Co-authored-by: François Mockers <mockersf@gmail.com>
Switches `Msaa` from being a globally configured resource to a per
camera view component.
Closes #7194
# Objective
Allow individual views to describe their own MSAA settings. For example,
when rendering to different windows or to different parts of the same
view.
## Solution
Make `Msaa` a component that is required on all camera bundles.
## Testing
Ran a variety of examples to ensure that nothing broke.
TODO:
- [ ] Make sure android still works per previous comment in
`extract_windows`.
---
## Migration Guide
`Msaa` is no longer configured as a global resource, and should be
specified on each spawned camera if a non-default setting is desired.
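For example (a sketch):
```rust
// before: configured globally
app.insert_resource(Msaa::Off);

// after: configured per camera
commands.spawn(Camera3dBundle {
    msaa: Msaa::Off,
    ..default()
});
```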
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
When the user renders multiple cameras to the same output texture, it
can sometimes be confusing what `ClearColorConfig` is necessary for each
camera to avoid overwriting the previous camera's output. This is
particularly true in cases where the user uses mixed HDR cameras, which
means that their scene is being rendered to different internal textures.
## Solution
When a view has a configured viewport, set the GPU scissor in the
upscaling node so we don't overwrite areas that were written to by other
cameras.
## Testing
Ran the `split_screen` example.
# Objective
- Fixes: https://github.com/bevyengine/bevy/issues/14036
## Solution
- Add a world space transformation for the environment sample direction.
## Testing
- I have tested the newly added `transform` field using the newly added
`rotate_environment_map` example.
https://github.com/user-attachments/assets/2de77c65-14bc-48ee-b76a-fb4e9782dbdb
## Migration Guide
- Since we have added a new field to the `EnvironmentMapLight` struct,
users will need to include `..default()` or some rotation value in their
initialization code.