# Objective
- Remove `derive_more`'s error derivation and replace it with
`thiserror`
## Solution
- Added `derive_more`'s `error` feature to `deny.toml` to prevent it
from sneaking back in.
- Reverted to `thiserror` error derivation
## Notes
Merge conflicts were too numerous to revert the individual changes, so
this reversion was done manually. Please scrutinise carefully during
review.
# Objective
Volumetric fog was broken by #13746.
Looks like this particular shader just got missed. I don't see any other
instances of `unpack_offset_and_counts` in the codebase.
```
2024-12-06T03:18:42.297494Z ERROR bevy_render::render_resource::pipeline_cache: failed to process shader:
error: no definition in scope for identifier: 'bevy_pbr::clustered_forward::unpack_offset_and_counts'
┌─ crates/bevy_pbr/src/volumetric_fog/volumetric_fog.wgsl:312:29
│
312 │ let offset_and_counts = bevy_pbr::clustered_forward::unpack_offset_and_counts(cluster_index);
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ unknown identifier
│
= no definition in scope for identifier: 'bevy_pbr::clustered_forward::unpack_offset_and_counts'
```
## Solution
Use `unpack_clusterable_object_index_ranges` to get the indices for
point/spot lights.
## Testing
`cargo run --example volumetric_fog`
`cargo run --example fog_volumes`
`cargo run --example scrolling_fog`
The bindless PR (#16368) broke some examples:
* `specialized_mesh_pipeline` and `custom_shader_instancing` failed
because they expect to be able to render a mesh with no material, by
overriding enough of the render pipeline to be able to do so. This PR
fixes the issue by restoring the old behavior in which we extract meshes
even if they have no material.
* `texture_binding_array` broke because it doesn't implement
`AsBindGroup::unprepared_bind_group`. This was tricky to fix because
there's a very good reason why `texture_binding_array` doesn't implement
that method: there's no sensible way to do so with `wgpu`'s current
bindless API, due to its multiple levels of borrowed references. To fix
the example, I split `MaterialBindGroup` into
`MaterialBindlessBindGroup` and `MaterialNonBindlessBindGroup`, and
allow direct custom implementations of `AsBindGroup::as_bind_group` for
the latter type of bind groups. To opt in to the new behavior, return
the `AsBindGroupError::CreateBindGroupDirectly` error from your
`AsBindGroup::unprepared_bind_group` implementation, and Bevy will call
your custom `AsBindGroup::as_bind_group` method as before.
## Migration Guide
* Bevy will now unconditionally call
`AsBindGroup::unprepared_bind_group` for your materials, so your
implementation must no longer panic in that function. Instead, return the new
`AsBindGroupError::CreateBindGroupDirectly` error, and Bevy will fall
back to calling `AsBindGroup::as_bind_group` as before.
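For illustration, here is a minimal, self-contained model of that fallback flow; the trait, types, and error variants below are stand-ins for exposition, not Bevy's actual `AsBindGroup` API.
```rust
// Stand-in types modeling the opt-in described above; not Bevy's real signatures.
#[derive(Debug)]
enum AsBindGroupError {
    CreateBindGroupDirectly,
    RetryNextUpdate,
}

struct UnpreparedBindGroup; // intermediate data the renderer normally assembles itself
struct BindGroup;           // the final bind group

trait MaterialLike {
    // Default path: hand the renderer unprepared data.
    fn unprepared_bind_group(&self) -> Result<UnpreparedBindGroup, AsBindGroupError>;
    // Direct path: build the bind group yourself.
    fn as_bind_group(&self) -> Result<BindGroup, AsBindGroupError>;
}

// The renderer-side flow sketched above: try the unprepared path first and fall
// back to the direct path only when the material opts in via `CreateBindGroupDirectly`.
fn prepare<M: MaterialLike>(material: &M) -> Result<BindGroup, AsBindGroupError> {
    match material.unprepared_bind_group() {
        Ok(_unprepared) => Ok(BindGroup),
        Err(AsBindGroupError::CreateBindGroupDirectly) => material.as_bind_group(),
        Err(other) => Err(other),
    }
}
```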
This commit moves the front end of the rendering pipeline to a retained
model when GPU preprocessing is in use (i.e. by default, except in
constrained environments). `RenderMeshInstance` and `MeshUniformData`
are stored from frame to frame and are updated only for the entities
that changed state. This was rather tricky and required some careful
surgery to keep the data valid in the case of removals.
This patch is built on top of Bevy's change detection. Generally, this
worked, except that `ViewVisibility` isn't currently properly tracked.
Therefore, this commit adds proper change tracking for `ViewVisibility`.
Doing this required adding a new system that runs after all
`check_visibility` invocations, as no single `check_visibility`
invocation has enough global information to detect changes.
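As a rough illustration of the kind of change detection this relies on (not the actual extraction system), a `Changed<ViewVisibility>` filter only yields entities whose visibility flag was written this frame:
```rust
use bevy::prelude::*;
use bevy::render::view::ViewVisibility;

// Sketch only: iterate the entities whose `ViewVisibility` changed this frame, which is
// the information the retained extraction path needs to update its stored data.
fn log_visibility_changes(changed: Query<Entity, Changed<ViewVisibility>>) {
    for entity in &changed {
        println!("visibility changed for {entity:?}");
    }
}
```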
On the Bistro exterior scene, with all textures forced to opaque, this
patch improves steady-state `extract_meshes_for_gpu_building` from
93.8us to 34.5us and steady-state `collect_meshes_for_gpu_building` from
195.7us to 4.28us. Altogether this constitutes an improvement from 290us
to 38us, which is a 7.46x speedup.
![Screenshot 2024-11-13
143841](https://github.com/user-attachments/assets/40b1aacc-373d-4016-b7fd-b0284bc33de4)
![Screenshot 2024-11-13
143850](https://github.com/user-attachments/assets/53b401c3-7461-43b3-918b-cff89ea780d6)
This patch is only lightly tested and shouldn't land before 0.15 is
released anyway, so I'm releasing it as a draft.
This commit allows the Bevy renderer to use the clustering
infrastructure for light probes (reflection probes and irradiance
volumes) on platforms where at least 3 storage buffers are available. On
such platforms (the vast majority), we stop performing brute-force
searches of light probes for each fragment and instead only search the
light probes with bounding spheres that intersect the current cluster.
This should dramatically improve scalability of irradiance volumes and
reflection probes.
The primary platform that doesn't support 3 storage buffers is WebGL 2,
and we continue using a brute-force search of light probes on that
platform, as the UBO that stores per-cluster indices is too small to fit
the light probe counts. Note, however, that that platform also doesn't
support bindless textures (indeed, it would be very odd for a platform
to support bindless textures but not SSBOs), so we only support one of
each type of light probe per drawcall there in the first place.
Consequently, this isn't a performance problem, as the search will only
have one light probe to consider. (In fact, clustering would probably
end up being a performance loss.)
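For intuition, the conservative test described above boils down to a sphere-versus-bounds check per cluster, roughly like the following sketch (plain types here, not Bevy's internal cluster representation):
```rust
// Illustrative sketch: does a light probe's bounding sphere intersect a cluster's
// axis-aligned bounds?
struct Sphere {
    center: [f32; 3],
    radius: f32,
}

struct Aabb {
    min: [f32; 3],
    max: [f32; 3],
}

fn sphere_intersects_aabb(sphere: &Sphere, aabb: &Aabb) -> bool {
    // Squared distance from the sphere center to the closest point on the box.
    let mut dist_sq = 0.0f32;
    for i in 0..3 {
        let clamped = sphere.center[i].clamp(aabb.min[i], aabb.max[i]);
        let d = sphere.center[i] - clamped;
        dist_sq += d * d;
    }
    dist_sq <= sphere.radius * sphere.radius
}
```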
Known potential improvements include:
1. We currently cull based on a conservative bounding sphere test and
not based on the oriented bounding box (OBB) of the light probe. This is
improvable, but in the interests of simplicity, I opted to keep the
bounding sphere test for now. The OBB improvement can be a follow-up.
2. This patch doesn't change the fact that each fragment only takes a
single light probe into account. Typical light probe implementations
detect the case in which multiple light probes cover the current
fragment and perform some sort of weighted blend between them. As the
light probe fetch function presently returns only a single light probe,
implementing that feature would require more code restructuring, so I
left it out for now. It can be added as a follow-up.
3. Light probe implementations typically have a falloff range. Although
this is a wanted feature in Bevy, this particular commit also doesn't
implement that feature, as it's out of scope.
4. This commit doesn't raise the maximum number of light probes past its
current value of 8 for each type. This should be addressed later, but
would possibly require more bindings on platforms with storage buffers,
which would increase this patch's complexity. Even without raising the
limit, this patch should constitute a significant performance
improvement for scenes that get anywhere close to this limit. In the
interest of keeping this patch small, I opted to leave raising the limit
to a follow-up.
## Changelog
### Changed
* Light probes (reflection probes and irradiance volumes) are now
clustered on most platforms, improving performance when many light
probes are present.
---------
Co-authored-by: Benjamin Brienen <Benjamin.Brienen@outlook.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Currently, the prepass has no support for visibility ranges, so
artifacts appear when using dithering visibility ranges in conjunction
with a prepass. This patch fixes that problem.
Note that this patch changes the prepass to use sparse bind group
indices instead of sequential ones. I figured this is cleaner, because
it allows for greater sharing of WGSL code between the forward pipeline
and the prepass pipeline.
The `visibility_range` example has been updated to allow the prepass to
be toggled on and off.
# Objective
Make documentation of a component's required components more visible by
moving it to the type's docs
## Solution
Change `#[require]` from a derive macro helper to an attribute macro.
Disadvantages:
- this silences any unused code warnings on the component, as it is used
by the macro!
- need to import `require` if not using the ECS prelude (I have not
included this in the migration guide, as Rust tooling already suggests
the fix)
---
## Showcase
![Documentation of
Camera](https://github.com/user-attachments/assets/3329511b-747a-4c8d-a43e-57f7c9c71a3c)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
This patch adds the infrastructure necessary for Bevy to support
*bindless resources*, by adding a new `#[bindless]` attribute to
`AsBindGroup`.
Classically, only a single texture (or sampler, or buffer) can be
attached to each shader binding. This means that switching materials
requires breaking a batch and issuing a new drawcall, even if the mesh
is otherwise identical. This adds significant overhead not only in the
driver but also in `wgpu`, as switching bind groups increases the amount
of validation work that `wgpu` must do.
*Bindless resources* are the typical solution to this problem. Instead
of switching bindings between each texture, the renderer instead
supplies a large *array* of all textures in the scene up front, and the
material contains an index into that array. This pattern is repeated for
buffers and samplers as well. The renderer now no longer needs to switch
binding descriptor sets while drawing the scene.
Unfortunately, as things currently stand, this approach won't quite work
for Bevy. Two aspects of `wgpu` conspire to make this ideal approach
unacceptably slow:
1. In the DX12 backend, all binding arrays (bindless resources) must
have a constant size declared in the shader, and all textures in an
array must be bound to actual textures. Changing the size requires a
recompile.
2. Changing even one texture incurs revalidation of all textures, a
process that takes time that's linear in the total size of the binding
array.
This means that declaring a large array of textures big enough to
encompass the entire scene is presently unacceptably slow. For example,
if you declare 4096 textures, then `wgpu` will have to revalidate all
4096 textures if even a single one changes. This process can take
multiple frames.
To work around this problem, this PR groups bindless resources into
small *slabs* and maintains a free list for each. The size of each slab
for the bindless arrays associated with a material is specified via the
`#[bindless(N)]` attribute. For instance, consider the following
declaration:
```rust
#[derive(AsBindGroup)]
#[bindless(16)]
struct MyMaterial {
#[buffer(0)]
color: Vec4,
#[texture(1)]
#[sampler(2)]
diffuse: Handle<Image>,
}
```
The `#[bindless(N)]` attribute specifies that, if bindless arrays are
supported on the current platform, each resource becomes a binding array
of N instances of that resource. So, for `MyMaterial` above, the `color`
attribute is exposed to the shader as `binding_array<vec4<f32>, 16>`,
the `diffuse` texture is exposed to the shader as
`binding_array<texture_2d<f32>, 16>`, and the `diffuse` sampler is
exposed to the shader as `binding_array<sampler, 16>`. Inside the
material's vertex and fragment shaders, the applicable index is
available via the `material_bind_group_slot` field of the `Mesh`
structure. So, for instance, you can access the current color like so:
```wgsl
// `uniform` binding arrays are a non-sequitur, so `uniform` is automatically promoted
// to `storage` in bindless mode.
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 4>;
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
let color = material_color[mesh[in.instance_index].material_bind_group_slot];
...
}
```
Note that portable shader code can't guarantee that the current platform
supports bindless textures. Indeed, bindless mode is only available in
Vulkan and DX12. The `BINDLESS` shader definition is available for your
use to determine whether you're on a bindless platform or not. Thus a
portable version of the shader above would look like:
```wgsl
#ifdef BINDLESS
@group(2) @binding(0) var<storage> material_color: binding_array<Color, 4>;
#else // BINDLESS
@group(2) @binding(0) var<uniform> material_color: Color;
#endif // BINDLESS
...
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
#ifdef BINDLESS
let color = material_color[mesh[in.instance_index].material_bind_group_slot];
#else // BINDLESS
let color = material_color;
#endif // BINDLESS
...
}
```
Importantly, this PR *doesn't* update `StandardMaterial` to be bindless.
So, for example, `scene_viewer` will currently not run any faster. I
intend to update `StandardMaterial` to use bindless mode in a follow-up
patch.
A new example, `shaders/shader_material_bindless`, has been added to
demonstrate how to use this new feature.
Here's a Tracy profile of `submit_graph_commands` of this patch and an
additional patch (not submitted yet) that makes `StandardMaterial` use
bindless. Red is those patches; yellow is `main`. The scene was Bistro
Exterior with a hack that forces all textures to opaque. You can see a
1.47x mean speedup.
![Screenshot 2024-11-12
161713](https://github.com/user-attachments/assets/4334b362-42c8-4d64-9cfb-6835f019b95c)
## Migration Guide
* `RenderAssets::prepare_asset` now takes an `AssetId` parameter.
* Bin keys now have Bevy-specific material bind group indices instead of
`wgpu` material bind group IDs, as part of the bindless change. Use the
new `MaterialBindGroupAllocator` to map from bind group index to bind
group ID.
# Objective
- Fixes #16078
## Solution
- Rename things to clarify that we _want_ unclipped depth for
directional light shadow views, and need some way of disabling the GPU's
builtin depth clipping
- Use DEPTH_CLIP_CONTROL instead of the fragment shader emulation on
supported platforms
- Pass only the clip position depth instead of the whole clip position
between vertex->fragment shader (no idea if this helps performance or
not, compiler might optimize it anyways)
- Meshlets
- HW raster always uses DEPTH_CLIP_CONTROL since it targets a more
limited set of platforms
- SW raster was not handling DEPTH_CLAMP_ORTHO correctly; it ended up
pretty much doing nothing.
- This PR made me realize that SW raster technically should have depth
clipping for all views that are not directional light shadows, but I
decided not to bother writing it. I'm not sure that it ever matters in
practice. If proven otherwise, I can add it.
## Testing
- Did you test these changes? If so, how?
- Lighting example. Both opaque (no fragment shader) and alpha masked
geometry (fragment shader emulation) are working with
depth_clip_control, and both work when it's turned off. Also tested
meshlet example.
- Are there any parts that need more testing?
- Performance. I can't figure out a good test scene.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Toggle depth_clip_control_supported in prepass/mod.rs line 323 to turn
this PR on or off.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- Native
---
## Migration Guide
- `MeshPipelineKey::DEPTH_CLAMP_ORTHO` is now
`MeshPipelineKey::UNCLIPPED_DEPTH_ORTHO`
- The `DEPTH_CLAMP_ORTHO` shaderdef has been renamed to
`UNCLIPPED_DEPTH_ORTHO_EMULATION`
- `clip_position_unclamped: vec4<f32>` is now `unclipped_depth: f32`
I didn't mean to make this item private; this fixes it for the 0.15 release
to be consistent with 0.14.
(maintainers: please make sure this gets merged into the 0.15 release
branch as well as main)
# Objective
PCSS still has some fundamental issues (#16155). We should resolve them
before "releasing" the feature.
## Solution
1. Rename the already-optional `pbr_pcss` cargo feature to
`experimental_pbr_pcss` to better communicate its state to developers.
2. Adjust the description of the `experimental_pbr_pcss` cargo feature
to better communicate its state to developers.
3. Gate PCSS-related light component fields behind that cargo feature,
to prevent surfacing them to developers by default.
# Objective
_If I understand it correctly_, we were checking mesh visibility, as
well as re-rendering point and spot light shadow maps for each view.
This makes it so that M views and N lights produce M x N complexity.
This PR aims to fix that, as well as introduce a stress test for this
specific scenario.
## Solution
- Keep track of what lights have already had mesh visibility calculated
and do not calculate it again;
- Reuse shadow depth textures and attachments across all views, and only
render shadow maps for the _first_ time a light is encountered on a
view;
- Directional lights remain unaltered, since their shadow map cascades
are view-dependent;
- Add a new `many_cameras_lights` stress test example to verify the
solution
## Showcase
110% speed up on the stress test
83% reduction of memory usage in stress test
### Before (5.35 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 25 57"
src="https://github.com/user-attachments/assets/136b0785-e9a4-44df-9a22-f99cc465e126">
### After (11.34 FPS on stress test)
<img width="1392" alt="Screenshot 2024-09-11 at 12 24 35"
src="https://github.com/user-attachments/assets/b8dd858f-5e19-467f-8344-2b46ca039630">
## Testing
- Did you test these changes? If so, how?
- On my game project, where I have two cameras and many shadow-casting
lights, I managed to get pretty much double the FPS.
- Also included a stress test, see the comparison above
- Are there any parts that need more testing?
- Yes, I would like help verifying that this fix is indeed correct, and
that we were really re-rendering the shadow maps by mistake and it's
indeed okay to not do that
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the `many_cameras_lights` example
- On the `main` branch, cherry pick the commit with the example (`git
cherry-pick --no-commit 1ed4ace01`) and run it
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- macOS
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Fixes #15940
## Solution
Remove the `pub use` and fix the compile errors.
Make `bevy_image` available as `bevy::image`.
## Testing
Feature Frenzy would be good here! Maybe I'll learn how to use it if I
have some time this weekend, or maybe a reviewer can use it.
## Migration Guide
Use `bevy_image` instead of `bevy_render::texture` items.
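A minimal before/after sketch of the import change (the exact set of re-exported items may vary):
```rust
// Before (Bevy 0.14): `Image` and related types were re-exported under `bevy::render::texture`.
// use bevy::render::texture::Image;

// After this change, the new `bevy_image` crate is exposed as `bevy::image`.
use bevy::image::Image;

// A signature that previously named `bevy::render::texture::Image` now uses the new path.
fn describe(_image: &Image) {}
```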
---------
Co-authored-by: chompaa <antony.m.3012@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- wgpu 0.20 made workgroup vars stop being zero-initialized by default. This
broke some applications (cough foresight cough) and now we work around
it. wgpu exposes a compilation option that zero-initializes workgroup
memory by default, but Bevy does not expose it.
## Solution
- expose the compilation option wgpu gives us
## Testing
- Ran examples: 3d_scene, compute_shader_game_of_life, gpu_readback,
lines, specialized_mesh_pipeline. They all work.
- Confirmed the fix for our own problems.
---
## Migration Guide
- Add `zero_initialize_workgroup_memory: false,` to
`ComputePipelineDescriptor` or `RenderPipelineDescriptor` structs to
preserve the 0.14 behavior; add `zero_initialize_workgroup_memory:
true,` to restore the Bevy 0.13 behavior (see the sketch below).
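A hedged sketch of where the new field goes, with the field name taken from the migration note above; the rest of the descriptor is left unchanged:
```rust
use bevy::render::render_resource::RenderPipelineDescriptor;

// Flip the new field on an already-built descriptor. `true` restores the Bevy 0.13
// (zero-initialized) behavior; `false` matches the Bevy 0.14 behavior.
fn restore_zero_init(descriptor: &mut RenderPipelineDescriptor) {
    descriptor.zero_initialize_workgroup_memory = true;
}
```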
# Objective
- Fix a formatting issue in "mesh_view_binding.wgsl"
_Note: since the naga-oil preprocessor matches the whole line when finding an
"#endif", this is just for external formatting tools and consistency._
## Solution
Trivial change: add '//' before the closing comment of the "#endif".
# Objective
GPU-based mesh uniform construction in the `GpuPreprocessNode` is
currently in `Core3d`. The node iterates all views and schedules the
uniform construction for each, so:
- when there are multiple 3d cameras, it runs multiple times for each
view
- if a view wants to render meshes but doesn't use the `Core3d` graph,
the camera must run later than at least one `Core3d`-based camera (or
add the node to its own graph, duplicating the work)
- If views want to share mesh uniforms there is no way to avoid running
the preprocessing for every view
## Solution
- move the node to the top level of the rendergraph, before the camera
driver node
- make `PreprocessBindGroup` cloneable, and add a
`SkipGpuPreprocessing` component to allow opting out per view (see the
sketch below)
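A usage sketch based on the component name above; the import path is omitted, and the component is assumed to be a unit-struct marker:
```rust
use bevy::prelude::*;
// Import for `SkipGpuPreprocessing` omitted; it lives in the GPU preprocessing module.

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3d::default(),
        // This view opts out of the shared GPU mesh-uniform preprocessing.
        SkipGpuPreprocessing,
    ));
}
```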
# Objective
- Choose LOD based on normal simplification error in addition to
position error
- Update meshoptimizer to 0.22, which has a bunch of simplifier
improvements
## Testing
- Did you test these changes? If so, how?
- Visualize normals, and compare LOD changes before and after. Normals
no longer visibly change as the LOD cut changes.
- Are there any parts that need more testing?
- No
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the meshlet example in this PR and on main and move around to
change the LOD cut. Before running each example, in
meshlet_mesh_material.wgsl, replace `let color = vec3(rand_f(&rng),
rand_f(&rng), rand_f(&rng));` with `let color =
(vertex_output.world_normal + 1.0) / 2.0;`. Make sure to download the
appropriate bunny asset for each branch!
# Objective
Order-independent transparency can filter fragment writes based on the
alpha value, which is currently hard-coded to anything higher than 0.0.
By making that threshold configurable, users can optimize fragment writes,
potentially reducing the number of layers needed and improving
performance in exchange for some transparency quality.
## Solution
This PR adds `alpha_threshold` to the
`OrderIndependentTransparencySettings` component and uses the struct to
configure a corresponding shader uniform. This uniform is then used
instead of the hard-coded value.
To configure OIT with a custom alpha threshold, use:
```rust
fn setup(mut commands: Commands) {
commands.spawn((
Camera3d::default(),
OrderIndependentTransparencySettings {
layer_count: 8,
alpha_threshold: 0.2,
},
));
}
```
## Testing
I tested this change using the included OIT example, as well as with two
additional projects.
## Migration Guide
If you previously explicitly initialized
`OrderIndependentTransparencySettings` with your own `layer_count`, you
will now have to add either a `..default()` statement or an explicit
`alpha_threshold` value:
```rust
fn setup(mut commands: Commands) {
commands.spawn((
Camera3d::default(),
OrderIndependentTransparencySettings {
layer_count: 16,
..default()
},
));
}
```
---------
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
The two additional linear texture samplers that PCSS added caused us to
blow past the limit on Apple Silicon macOS and WebGL. To fix the issue,
this commit adds a `pbr_pcss` cargo feature gate that disables PCSS
when the feature is not enabled.
Closes #15345.
Closes #15525.
Closes #15821.
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
The PCSS PR #13497 increased the size of clusterable objects from 64
bytes to 80 bytes but didn't decrease the UBO size to compensate, so we
blew past the 16kB limit on WebGL 2. This commit fixes the issue by
lowering the maximum number of clusterable objects to 204, which puts us
under the 16kB limit again.
Closes #15998.
# Objective
- Make the meshlet fill cluster buffers pass slightly faster
- Address https://github.com/bevyengine/bevy/issues/15920 for meshlets
- Added PreviousGlobalTransform as a required meshlet component to avoid
extra archetype moves, slightly alleviating
https://github.com/bevyengine/bevy/issues/14681 for meshlets
- Enforce that MeshletPlugin::cluster_buffer_slots is not greater than
2^25 (glitches will occur otherwise). Technically this field controls
post-lod/culling cluster count, and the issue is on pre-lod/culling
cluster count, but it's still valid now, and in the future this will be
more true.
Needs to be merged after https://github.com/bevyengine/bevy/pull/15846
and https://github.com/bevyengine/bevy/pull/15886
## Solution
- Old pass dispatched a thread per cluster, and did a binary search over
the instances to find which instance the cluster belongs to, and what
meshlet index within the instance it is (see the sketch after this list).
- New pass dispatches a workgroup per instance, and has the workgroup
loop over all meshlets in the instance in order to write out the cluster
data.
- Use a push constant instead of arrayLength to fix the linked bug
- Remap 1d->2d dispatch for software raster only if actually needed to
save on spawning excess workgroups
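As a CPU-side sketch of the per-cluster lookup the old pass performed (the real code is a compute shader; names here are illustrative):
```rust
/// Given an exclusive prefix sum of meshlet counts per instance (starting at 0),
/// find which instance a global cluster index belongs to and its meshlet index
/// within that instance, via binary search.
fn instance_for_cluster(cluster_index: u32, meshlet_prefix_sum: &[u32]) -> (usize, u32) {
    // Index of the first instance whose start offset is greater than `cluster_index`, minus one.
    let instance = meshlet_prefix_sum
        .partition_point(|&start| start <= cluster_index)
        - 1;
    (instance, cluster_index - meshlet_prefix_sum[instance])
}
```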
## Testing
- Did you test these changes? If so, how?
- Ran the meshlet example, and an example with 1041 instances of 32217
meshlets per instance. Profiled the second scene with nsight, went from
0.55ms -> 0.40ms. Small savings. We're pretty much VRAM bandwidth bound
at this point.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the meshlet example
## Changelog (non-meshlets)
- PreviousGlobalTransform now implements the Default trait
Take a bunch more improvements from @zeux's nanite.cpp code.
* Use position-only vertices (discard other attributes) to determine
meshlet connectivity for grouping
* Rather than using the lock borders flag when simplifying meshlet
groups, provide the locked vertices ourselves. The lock borders flag
locks the entire border of the meshlet group, but really we only want to
lock the edges between meshlet groups - outwards facing edges are fine
to unlock. This gives a really significant increase to the DAG quality.
* Add back stuck meshlets (group has only a single meshlet,
simplification failed) to the simplification queue to allow them to get
used later on and have another attempt at simplifying
* Target 8 meshlets per group instead of 4 (second biggest improvement
after manual locks)
* Provide a seed to metis for deterministic meshlet building
* Misc other improvements
We can remove the usage of unsafe after the next upstream meshopt
release, but for now we need to use the ffi function directly. I'll do
another round of improvements later, mainly attribute-aware
simplification and using spatial weights for meshlet grouping.
Need to merge https://github.com/bevyengine/bevy/pull/15846 first.
# Objective
- I made a mistake in #15902, specifically [this
diff](e2faedb99c)
-- the `point_light_count` variable is used for all point lights, not
just shadow mapped ones, so I cannot add `.min(max_texture_cubes)`
there. (Despite `spot_light_count` having `.min(..)`)
It may have broken code like this (where `index` is index of
`point_light` vec):
9930df83ed/crates/bevy_pbr/src/render/light.rs (L848-L850)
and also causes panic here:
9930df83ed/crates/bevy_pbr/src/render/light.rs (L1173-L1174)
## Solution
- Adds `.min(max_texture_cubes)` directly to the loop where texture
views for point lights are created.
## Testing
- `lighting` example (with the directional light removed; original
example doesn't crash as only 1 directional-or-spot light in total is
shadow-mapped on webgl) no longer crashes on webgl
# Objective
1. Prevent weird glitches with stray pixels scattered around the scene
![image](https://github.com/user-attachments/assets/f12adb38-5996-4dc7-bea6-bd326b7317e1)
2. Prevent weird glitchy full-screen triangles that pop-up and destroy
perf (SW rasterizing huge triangles is slow)
![image](https://github.com/user-attachments/assets/d3705427-13a5-47bc-a54b-756f0409da0b)
## Solution
1. Use floating point math in the SW rasterizer bounding box calculation
to handle negative vertices, and add backface culling
2. Force hardware raster for clusters that clip the near plane, and let
the hardware rasterizer handle the clipping
I also adjusted the SW rasterizer threshold to < 64 pixels (little bit
better perf in my test scene, but still need to do a more comprehensive
test), and enabled backface culling for the hardware raster pipeline.
## Testing
- Did you test these changes? If so, how?
- Yes, on an example scene. Issues no longer occur.
- Are there any parts that need more testing?
- No.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Run the meshlet example.
# Objective
Bevy seems to want to standardize on "American English" spellings. Not
sure if this is laid out anywhere in writing, but see also #15947.
While perusing the docs for `typos`, I noticed that it has a `locale`
config option and tried it out.
## Solution
Switch to `en-us` locale in the `typos` config and run `typos -w`
## Migration Guide
The following methods or fields have been renamed from `*dependants*` to
`*dependents*`.
- `ProcessorAssetInfo::dependants`
- `ProcessorAssetInfos::add_dependant`
- `ProcessorAssetInfos::non_existent_dependants`
- `AssetInfo::dependants_waiting_on_load`
- `AssetInfo::dependants_waiting_on_recursive_dep_load`
- `AssetInfos::loader_dependants`
- `AssetInfos::remove_dependants_and_labels`
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/15871
(Camera is done in #15946)
## Solution
- Do the same as #15904 for other extraction systems
- Added missing `SyncComponentPlugin` for DOF, TAA, and SSAO
(According to the
[documentation](https://dev-docs.bevyengine.org/bevy/render/sync_component/struct.SyncComponentPlugin.html),
this plugin "needs to be added for manual extraction implementations."
We may need to check this is done.)
## Testing
Modified example locally to add toggles if not exist.
- [x] DOF - toggling DOF component and perspective in `depth_of_field`
example
- [x] TAA - toggling `Camera.is_active` and TAA component
- [x] clusters - not entirely sure, toggling `Camera.is_active` in
`many_lights` example (no crash/glitch even without this PR)
- [x] previous_view - toggling `Camera.is_active` in `skybox` (no
crash/glitch even without this PR)
- [x] lights - toggling `Visibility` of `DirectionalLight` in `lighting`
example
- [x] SSAO - toggling `Camera.is_active` and SSAO component in `ssao`
example
- [x] default UI camera view - toggling `Camera.is_active` (nop without
#15946 because UI defaults to some camera even if `DefaultCameraView` is
not there)
- [x] volumetric fog - toggling existence of volumetric light. Looks
like optimization, no change in behavior/visuals
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/13552
## Solution
- Thanks to the guidance from @DGriffin91, the current solution is to
transmit the lightmap through the emissive channel to avoid increasing
the bandwidth of deferred shading.
- <del>Store lightmap sample result into G-Buffer and pass them into the
`Deferred Lighting Pipeline`, therefore we can get the correct indirect
lighting via the `apply_pbr_lighting` function.</del>
- <del>The original G-Buffer lacks storage for lightmap data, therefore
a new buffer is added. We can only use Rgba16Uint here due to the
32-byte limit on the render targets.</del>
## Testing
- Need to test all the examples that contain a prepass, with both the
forward and deferred rendering modes.
- I have tested the ones below.
- `lightmaps` (adjust the code based on the issue and check the
rendering result)
- `transmission` (it contains a prepass)
- `ssr` (it also uses the G-Buffer)
- `meshlet` (forward and deferred)
- `pbr`
## Showcase
By updating the `lightmaps` example to use deferred rendering, this pull
request enables a correct rendering result for the Cornell Box.
```
diff --git a/examples/3d/lightmaps.rs b/examples/3d/lightmaps.rs
index 564a3162b..11a748fba 100644
--- a/examples/3d/lightmaps.rs
+++ b/examples/3d/lightmaps.rs
@@ -1,12 +1,14 @@
//! Rendering a scene with baked lightmaps.
-use bevy::pbr::Lightmap;
+use bevy::core_pipeline::prepass::DeferredPrepass;
+use bevy::pbr::{DefaultOpaqueRendererMethod, Lightmap};
use bevy::prelude::*;
fn main() {
App::new()
.add_plugins(DefaultPlugins)
.insert_resource(AmbientLight::NONE)
+ .insert_resource(DefaultOpaqueRendererMethod::deferred())
.add_systems(Startup, setup)
.add_systems(Update, add_lightmaps_to_meshes)
.run();
@@ -19,10 +21,12 @@ fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
..default()
});
- commands.spawn(Camera3dBundle {
- transform: Transform::from_xyz(-278.0, 273.0, 800.0),
- ..default()
- });
+ commands
+ .spawn(Camera3dBundle {
+ transform: Transform::from_xyz(-278.0, 273.0, 800.0),
+ ..default()
+ })
+ .insert(DeferredPrepass);
}
fn add_lightmaps_to_meshes(
```
<img width="1280" alt="image"
src="https://github.com/user-attachments/assets/17fd3367-61cc-4c23-b956-e7cfc751af3c">
## Emissive Issue
**The emissive light object appears incorrectly rendered because the
alpha channel of emission is set to 1 in deferred rendering and 0 in
forward rendering, leading to different emissive light result. Could
this be a bug?**
```wgsl
// pbr_deferred_functions.wgsl - pbr_input_from_deferred_gbuffer
let emissive = rgb9e5::rgb9e5_to_vec3_(gbuffer.g);
if ((pbr.material.flags & STANDARD_MATERIAL_FLAGS_UNLIT_BIT) != 0u) {
pbr.material.base_color = vec4(emissive, 1.0);
pbr.material.emissive = vec4(vec3(0.0), 1.0);
} else {
pbr.material.base_color = vec4(pow(base_rough.rgb, vec3(2.2)), 1.0);
pbr.material.emissive = vec4(emissive, 1.0);
}
// pbr_functions.wgsl - apply_pbr_lighting
emissive_light = emissive_light * mix(1.0, view_bindings::view.exposure, emissive.a);
```
---------
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
# Objective
Closes #15799.
Many rendering people and maintainers are in favor of reverting the default
mesh materials added in #15524, especially as the migration to required
components is already large and heavily breaking.
## Solution
Revert default mesh materials, and adjust docs accordingly.
- Remove `extract_default_materials`
- Remove `clear_material_instances`, and move the logic back into
`extract_mesh_materials`
- Remove `HasMaterial2d` and `HasMaterial3d`
- Change default material handles back to pink instead of white
- 2D uses `Color::srgb(1.0, 0.0, 1.0)`, while 3D uses `Color::srgb(1.0,
0.0, 0.5)`. Not sure if this is intended.
There is now no indication at all about missing materials for `Mesh2d`
and `Mesh3d`. Having a mesh without a material renders nothing.
## Testing
I ran `2d_shapes`, `mesh2d_manual`, and `3d_shapes`, with and without
mesh material components.
# Objective
- Fixes #15897
## Solution
- Despawn light view entities when they go unused or when the
corresponding view is not alive.
## Testing
- `scene_viewer` example no longer prints "The preprocessing index
buffer wasn't present" warning
- modified an example to try toggling shadows for all kinds of light:
https://gist.github.com/akimakinai/ddb0357191f5052b654370699d2314cf
# Objective
Another clippy-lint fix: the goal is that `ci lints` actually
displays the problems that a contributor caused, and not a bunch of
existing issues in the repo (when run on nightly).
## Solution
This fixes all but the `clippy::needless_lifetimes` lint, which will
result in substantially more fixes and be in other PR(s). I also
explicitly allow `non_local_definitions` since it is [not working
correctly, but will be
fixed](https://github.com/rust-lang/rust/issues/131643).
A few things were manually fixed: for example, some places had an
explicitly defined `div_ceil` function that was used, which is no longer
needed since this function is stable on unsigned integers. Also, empty
lines in doc comments were handled individually.
## Testing
I ran `cargo clippy --workspace --all-targets --all-features --fix
--allow-staged` with the `clippy::needless_lifetimes` lint marked as
`allow` in `Cargo.toml` to avoid fixing that too. It now passes with all
but the listed lint.
# Objective
#15320 is a particularly painful breaking change, and the new
`RenderEntity` in particular is very noisy, with a lot of `let entity =
entity.id()` spam.
## Solution
Implement `WorldQuery`, `QueryData` and `ReadOnlyQueryData` for
`RenderEntity` and `MainEntity`.
These work the same as the `Entity` impls from a user-facing
perspective: they simply return an owned (copied) `Entity` identifier.
This dramatically reduces noise and eases migration.
Under the hood, these impls defer to the implementations for `&T` for
everything other than the "call .id() for the user" bit, as they involve
read-only access to component data. Doing it this way (as opposed to
implementing a custom fetch, as tried in the first commit) dramatically
reduces the maintenance risk of complex unsafe code outside of
`bevy_ecs`.
To make this easier (and encourage users to do this themselves!), I've
made `ReadFetch` and `WriteFetch` slightly more public: they're no
longer `doc(hidden)`. This is a good change, since trying to vendor the
logic is much worse than just deferring to the existing tested impls.
## Testing
I've run a handful of rendering examples (breakout, alien_cake_addict,
auto_exposure, fog_volumes, box_shadow) and nothing broke.
## Follow-up
We should lint for the uses of `&RenderEntity` and `&MainEntity` in
queries: this is just less nice for no reason.
---------
Co-authored-by: Trashtalk217 <trashtalk217@gmail.com>
# Objective
In the Render World, there are a number of collections that are derived
from Main World entities and are used to drive rendering. The most
notable are:
- `VisibleEntities`, which is generated in the `check_visibility` system
and contains visible entities for a view.
- `ExtractedInstances`, which maps entity ids to asset ids.
In the old model, these collections were trivially kept in sync -- any
extracted phase item could look itself up because the render entity id
was guaranteed to always match the corresponding main world id.
After #15320, this became much more complicated, and was leading to a
number of subtle bugs in the Render World. The main rendering systems,
i.e. `queue_material_meshes` and `queue_material2d_meshes`, follow a
similar pattern:
```rust
for visible_entity in visible_entities.iter::<With<Mesh2d>>() {
    let Some(mesh_instance) = render_mesh_instances.get_mut(visible_entity) else {
        continue;
    };

    // Look some more stuff up and specialize the pipeline...

    let bin_key = Opaque2dBinKey {
        pipeline: pipeline_id,
        draw_function: draw_opaque_2d,
        asset_id: mesh_instance.mesh_asset_id.into(),
        material_bind_group_id: material_2d.get_bind_group_id().0,
    };
    opaque_phase.add(
        bin_key,
        *visible_entity,
        BinnedRenderPhaseType::mesh(mesh_instance.automatic_batching),
    );
}
```
In this case, `visible_entities` and `render_mesh_instances` are both
collections that are created and keyed by Main World entity ids, and so
this lookup happens to work by coincidence. However, there is a major
unintentional bug here: namely, because `visible_entities` is a
collection of Main World ids, the phase item being queued is created
with a Main World id rather than its correct Render World id.
This happens to not break mesh rendering because the render commands
used for drawing meshes do not access the `ItemQuery` parameter, but
demonstrates the confusion that is now possible: our UI phase items are
correctly being queued with Render World ids while our meshes aren't.
Additionally, this makes it very easy and error prone to use the wrong
entity id to look up things like assets. For example, if instead we
ignored visibility checks and queued our meshes via a query, we'd have
to be extra careful to use `&MainEntity` instead of the natural
`Entity`.
## Solution
Make all collections that are derived from Main World data use
`MainEntity` as their key, to ensure type safety and avoid accidentally
looking up data with the wrong entity id:
```rust
pub type MainEntityHashMap<V> = hashbrown::HashMap<MainEntity, V, EntityHash>;
```
Additionally, we make all `PhaseItem` be able to provide both their Main
and Render World ids, to allow render phase implementors maximum
flexibility as to what id should be used to look up data.
You can think of this like tracking at the type level whether something
in the Render World should use its "primary key", i.e. entity id, or
needs to use a foreign key, i.e. `MainEntity`.
## Testing
##### TODO:
This will require extensive testing to make sure things didn't break!
Additionally, some extraction logic has become more complicated and
needs to be checked for regressions.
## Migration Guide
With the advent of the retained render world, collections that contain
references to `Entity` that are extracted into the render world have
been changed to contain `MainEntity` in order to prevent errors where a
render world entity id is used to look up an item by accident. Custom
rendering code may need to be changed to query for `&MainEntity` in
order to look up the correct item from such a collection. Additionally,
users who implement their own extraction logic for collections of main
world entities should strongly consider extracting into a different
collection that uses `MainEntity` as a key.
Additionally, render phases now require specifying both the `Entity` and
`MainEntity` for a given `PhaseItem`. Custom render phases should ensure
`MainEntity` is available when queuing a phase item.
# Objective
- Closes #15716
- Closes #15718
## Solution
- Replace `Handle<MeshletMesh>` with a new `MeshletMesh3d` component
- As expected there were some random things that needed fixing:
- A couple of tests were storing handles just to prevent them from being
dropped, I believe; this seems to have been unnecessary in some cases.
- The `SpriteBundle` still had a `Handle<Image>` field. I've removed
this.
- Tests in `bevy_sprite` incorrectly added a `Handle<Image>` field
outside of the `Sprite` component.
- A few examples were still inserting `Handle`s, switched those to their
corresponding wrappers.
- 2 examples that were still querying for `Handle<Image>` were changed
to query `Sprite`
## Testing
- I've verified that the changed examples work now
## Migration Guide
`Handle` can no longer be used as a `Component`. All existing Bevy types
using this pattern have been wrapped in their own semantically
meaningful type. You should do the same for any custom `Handle`
components your project needs.
The `Handle<MeshletMesh>` component is now `MeshletMesh3d`.
The `WithMeshletMesh` type alias has been removed. Use
`With<MeshletMesh3d>` instead.
# Objective
- Another step towards #15716
- Remove trait implementations that are dependent on `Handle<T>` being a
`Component`
## Solution
- Remove unused `ExtractComponent` trait implementation for `Handle<T>`
- Remove unused `ExtractInstance` trait implementation for `AssetId`
- Although the `ExtractInstance` trait wasn't used, the `AssetId`s were
being stored inside of `ExtractedInstances` which has an
`ExtractInstance` trait bound on its contents.
I've upgraded the `RenderMaterialInstances` type alias to be its own
resource, identical to `ExtractedInstances<AssetId<M>>` to get around
that with minimal breakage.
## Testing
Tested `many_cubes`, rendering did not explode
# Objective
- Closes #15752
Calling the functions `App::observe` and `World::observe` doesn't make
sense because you're not "observing" the `App` or `World`, you're adding
an observer that listens for an event that occurs *within* the `World`.
We should rename them to better fit this.
## Solution
Renames:
- `App::observe` -> `App::add_observer`
- `World::observe` -> `World::add_observer`
- `Commands::observe` -> `Commands::add_observer`
- `EntityWorldMut::observe_entity` -> `EntityWorldMut::observe`
(Note this isn't a breaking change as the original rename was introduced
earlier this cycle.)
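A small sketch of the renamed API; the event type and observer body are illustrative:
```rust
use bevy::prelude::*;

#[derive(Event)]
struct Explode;

fn main() {
    App::new()
        // Previously `App::observe`; `Commands::add_observer` and `World::add_observer`
        // follow the same pattern.
        .add_observer(|_trigger: Trigger<Explode>| {
            println!("boom");
        })
        .run();
}
```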
## Testing
Reusing current tests.
# Objective
Getting closer to the end! Another part of the required components
migration: reflection probes.
## Solution
As per the [proposal added by
Cart](https://hackmd.io/@bevy/required_components/%2FNmpIh0tGSiayGlswbfcEzw)
(Proposal 2), make `LightProbe` require `Transform` and `Visibility`,
and deprecate `ReflectionProbeBundle`.
Note that this proposal wasn't officially blessed yet, but it is the
only existing one that really works, so I implemented it here for
consideration.
## Testing
I ran the reflection probe example, and it appears to work.
---
## Migration Guide
`ReflectionProbeBundle` has been deprecated in favor of inserting the
`LightProbe` and `EnvironmentMapLight` components directly. Inserting
them will now automatically insert `Transform` and `Visibility`
components.
---------
Co-authored-by: Tim Blackbird <justthecooldude@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Fixes #15285
## Solution
`winit` sends resized-to-zero events when the window is minimized, but only
on Windows (rust-windowing/winit#2015).
This updates the window viewport size to `(0, 0)` and causes a panic when
calculating the aspect ratio.
~~So, just skip these kinds of events - resizing to (0, 0) when the
window is minimized - on Windows OS~~
Ideally, the camera extraction would exclude the cameras whose target size
width or height is zero here;
25bfa80e60/crates/bevy_render/src/camera/camera.rs (L1060-L1074)
but it seems that the winit event loop sends resize events after extraction
and before the post-update schedule, so they might panic before the
extraction filters them out.
Alternatively, it might be possible to change the event loop evaluation
order or defer the events to the right schedule, but I'm afraid that might
cause some breaking changes, so this just skips rendering logic for such
windows; they will all be filtered out by the extraction on the next
frame and thereafter.
## Testing
Running the example in the original issue and minimizing causes a panic;
running `tests/window/minimising.rs` with `cargo run --example
minimising` panics without this PR and doesn't panic with it.
I think that we should run it in CI on Windows, by the way.
# Objective
Fixes #15560
Fixes (most of) #15570
Currently a lot of examples (and presumably some user code) depend on
toggling certain render features by adding/removing a single component
to an entity, e.g. `SpotLight` to toggle a light. Because of the
retained render world this no longer works: Extract will add any new
components, but when it is removed the entity persists unchanged in the
render world.
## Solution
Add `SyncComponentPlugin<C: Component>` that registers
`SyncToRenderWorld` as a required component for `C`, and adds a
component hook that will clear all components from the render world
entity when `C` is removed. We add this plugin to
`ExtractComponentPlugin` which fixes most instances of the problem. For
custom extraction logic we can manually add `SyncComponentPlugin` for
that component.
We also rename `WorldSyncPlugin` to `SyncWorldPlugin` so we start a
naming convention like all the `Extract` plugins.
In this PR I also fixed a bunch of breakage related to the retained
render world, stemming from old code that assumed that `Entity` would be
the same in both worlds.
I found that using the `RenderEntity` wrapper instead of `Entity` in
data structures when referring to render world entities makes intent
much clearer, so I propose we make this an official pattern.
## Testing
Run examples like
```
cargo run --features pbr_multi_layer_material_textures --example clearcoat
cargo run --example volumetric_fog
```
and see that they work, and that toggles work correctly. But really we
should test every single example, as we might not even have caught all
the breakage yet.
---
## Migration Guide
The retained render world notes should be updated to explain this edge
case and `SyncComponentPlugin`
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Trashtalk217 <trashtalk217@gmail.com>
# Objective
- Prepare for streaming by storing vertex data per-meshlet, rather than
per-mesh (this means duplicating vertices per-meshlet)
- Compress vertex data to reduce the cost of this
## Solution
The important parts are in from_mesh.rs, the changes to the Meshlet type
in asset.rs, and the changes in meshlet_bindings.wgsl. Everything else
is pretty secondary/boilerplate/straightforward changes.
- Positions are quantized in centimeters with a user-provided power of 2
factor (ideally auto-determined, but that's a TODO for the future),
encoded as an offset relative to the minimum value within the meshlet,
and then stored as a packed list of bits using the minimum number of
bits needed for each vertex position channel for that meshlet
- E.g. quantize positions (lossily; this throws away precision that's not
needed, leading to fewer bits in the bitstream encoding)
- Get the min/max quantized value of each X/Y/Z channel of the quantized
positions within a meshlet
- Encode values relative to the min value of the meshlet. E.g. convert
from [min, max] to [0, max - min]
- The new max value in the meshlet is (max - min), which only takes N
bits, so we only need N bits to store each channel within the meshlet
(lossless)
- We can store the min value and that it takes N bits per channel in the
meshlet metadata, and reconstruct the position from the bitstream
- Normals are octahedral encoded and then snorm2x16 packed and stored as
a single u32 (see the sketch after this list).
- Would be better to implement the precise variant of octahedral encoding
for extra precision (no extra decode cost), but decided to keep it
simple for now and leave that as a follow-up
- Tried doing a quantizing and bitstream encoding scheme like I did for
positions, but struggled to get it smaller. Decided to go with this for
simplicity for now
- UVs are uncompressed and take a full 64 bits per vertex, which is
expensive
- In the future this should be improved
- Tangents, as of the previous PR, are not explicitly stored and are
instead derived from screen space gradients
- While I'm here, split up MeshletMeshSaverLoader into two separate
types
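For reference, the normal path described above amounts to octahedral encoding followed by snorm16 packing into a `u32`; a plain-Rust sketch (the real code runs during meshlet conversion and in WGSL):
```rust
// Octahedral-encode a unit normal to two values in [-1, 1].
fn octahedral_encode(n: [f32; 3]) -> [f32; 2] {
    let inv_l1 = 1.0 / (n[0].abs() + n[1].abs() + n[2].abs());
    let (x, y) = (n[0] * inv_l1, n[1] * inv_l1);
    if n[2] >= 0.0 {
        [x, y]
    } else {
        // Fold the lower hemisphere over the diagonals.
        [
            (1.0 - y.abs()) * x.signum(),
            (1.0 - x.abs()) * y.signum(),
        ]
    }
}

// Pack two [-1, 1] values as signed 16-bit normalized integers in one u32.
fn pack_snorm2x16(v: [f32; 2]) -> u32 {
    let pack = |f: f32| -> u32 {
        let i = (f.clamp(-1.0, 1.0) * 32767.0).round() as i32;
        (i as i16) as u16 as u32
    };
    pack(v[0]) | (pack(v[1]) << 16)
}
```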
Other future changes include implementing a smaller encoding of triangle
data (3 u8 indices = 24 bits per triangle currently), and more
disk-oriented compression schemes.
References:
* "A Deep Dive into UE5's Nanite Virtualized Geometry"
https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf#page=128
(also available on youtube)
* "Towards Practical Meshlet Compression"
https://arxiv.org/pdf/2404.06359
* "Vertex quantization in Omniforce Game Engine"
https://daniilvinn.github.io/2024/05/04/omniforce-vertex-quantization.html
## Testing
- Did you test these changes? If so, how?
- Converted the stanford bunny, and rendered it with a debug material
showing normals, and confirmed that it's identical to what's on main.
EDIT: See additional testing in the comments below.
- Are there any parts that need more testing?
- Could use some more size comparisons on various meshes, and testing
different quantization factors. Not sure if 4 is a good default. EDIT:
See additional testing in the comments below.
- Also did not test runtime performance of the shaders. EDIT: See
additional testing in the comments below.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- Use my unholy script, replacing the meshlet example
https://paste.rs/7xQHk.rs (must make MeshletMesh fields pub instead of
pub crate, must add lz4_flex as a dev-dependency) (must compile with
meshlet and meshlet_processor features, mesh must have only positions,
normals, and UVs, no vertex colors or tangents)
---
## Migration Guide
- TBD by JMS55 at the end of the release
The previous fixes were breaking pretty much everything on main due to
naga-oil complaining about the OIT shader not being loaded, since
apparently webgl is a default feature. This fix is a bit messier, but
properly warns the user and is probably what we should have gone for in
the first place.
# Objective
- Alpha blending can easily fail in many situations and requires sorting
on the cpu
## Solution
- Implement order independent transparency (OIT) as an alternative to
alpha blending
- The implementation uses 2 passes
- The first pass records all the fragments' colors and positions in a
buffer whose size is N layers * the render target resolution.
- The second pass sorts the fragments, blends them, and draws them to the
screen (see the sketch below). It also currently does manual depth testing
because early-z fails in too many cases in the first pass.
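For intuition, here is a CPU-side sketch of what the resolve (second) pass does per pixel; the real implementation is a fragment shader operating on the layer buffer, and the types here are illustrative:
```rust
struct OitFragment {
    color: [f32; 3],
    alpha: f32,
    depth: f32, // larger = farther in this sketch
}

// Sort the recorded layers far-to-near, then composite each layer over the result.
fn resolve_pixel(mut layers: Vec<OitFragment>) -> [f32; 3] {
    layers.sort_by(|a, b| b.depth.partial_cmp(&a.depth).unwrap());
    let mut result = [0.0_f32; 3];
    for frag in layers {
        for c in 0..3 {
            result[c] = frag.color[c] * frag.alpha + result[c] * (1.0 - frag.alpha);
        }
    }
    result
}
```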
## Testing
- We've been using this implementation at foresight in production for
many months now and we haven't had any issues related to OIT.
---
## Showcase
![image](https://github.com/user-attachments/assets/157f3e32-adaf-4782-b25b-c10313b9bc43)
![image](https://github.com/user-attachments/assets/bef23258-0c22-4b67-a0b8-48a9f571c44f)
## Future work
- Add an example showing how to use OIT for a custom material
- Next step would be to implement a per-pixel linked list to reduce
memory use
- I'd also like to investigate using a BinnedRenderPhase instead of a
SortedRenderPhase. If it works, it would make the transparent pass
significantly faster.
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com>
Currently, it's possible for the `collect_meshes_for_gpu_building`
system to run after `set_mesh_motion_vector_flags`. This will cause
those motion vector flags to be overwritten, which will cause the shader
to ignore the motion vectors for skinned meshes, which will cause
graphical artifacts.
This patch corrects the issue by forcing `set_mesh_motion_vector_flags`
to run after `collect_meshes_for_gpu_building`.
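A self-contained sketch of this kind of ordering constraint using Bevy's system-ordering API; the system bodies and the `Update` schedule are stand-ins for the real render-world systems and schedule:
```rust
use bevy::prelude::*;

// Stand-in systems mirroring the names above.
fn collect_meshes_for_gpu_building() { /* ... */ }
fn set_mesh_motion_vector_flags() { /* ... */ }

fn main() {
    App::new()
        .add_systems(
            Update, // the real systems run in render-world schedules; `Update` keeps this sketch runnable
            (
                collect_meshes_for_gpu_building,
                // The explicit `.after()` edge guarantees the flags are set after collection.
                set_mesh_motion_vector_flags.after(collect_meshes_for_gpu_building),
            ),
        )
        .run();
}
```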
# Objective
After merging retained rendering world #15320, we now have a good way of
creating a link between worlds (*HIYAA intensifies*). This means that
`get_or_spawn` is no longer necessary for that purpose. Entity should
be opaque, as the warning above `get_or_spawn` says. This is also part of
#15459.
I'm deprecating `get_or_spawn_batch` in a different PR in order to keep
the PR small in size.
## Solution
Deprecate `get_or_spawn` and replace it with `get_entity` in most
contexts. If it's possible to query `&RenderEntity`, then the entity is
synced and `render_entity.id()` is initialized in the render world.
## Migration Guide
If you are given an `Entity` and you want to do something with it, use
`Commands.entity(...)` or `World.entity(...)`. If instead you want to
spawn something use `Commands.spawn(...)` or `World.spawn(...)`. If you
are not sure if an entity exists, you can always use `get_entity` and
match on the `Option<...>` that is returned.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>