# Objective
- Fix issue #2611
## Solution
- Add `--generate-link-to-definition` to all the `rustdoc-args` arrays
in the `Cargo.toml`s (for docs.rs)
- Add `--generate-link-to-definition` to the `RUSTDOCFLAGS` environment
variable in the docs workflow (for dev-docs.bevyengine.org)
- Document all the workspace crates in the docs workflow (needed because
otherwise only the source code of the `bevy` package will be included,
making the argument useless)
- I think this also fixes #3662, since it fixes the bug on
dev-docs.bevyengine.org, while on docs.rs it has been fixed for a while
on their side.
---
## Changelog
- The source code viewer on docs.rs now includes links to the
definitions.
# Objective
- It's possible to have errors in a draw command, but these errors are
ignored
## Solution
- Return a result with the error
## Changelog
- Renamed `RenderCommandResult::Failure` to `RenderCommandResult::Skip`
- Added a `reason` string parameter to `RenderCommandResult::Failure`
## Migration Guide
If you were using `RenderCommandResult::Failure` to just ignore an error
and retry later, use `RenderCommandResult::Skip` instead.
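As a rough sketch (an illustrative helper function, not the actual trait method; the payload type of `Failure` is assumed to be a string slice here):
```rust
use bevy::render::render_phase::RenderCommandResult;

// Shows the distinction after this change: Skip means "not ready yet, try again
// later", while Failure carries a reason that can be surfaced as an error message.
fn draw_outcome(mesh_ready: bool, pipeline_ok: bool) -> RenderCommandResult {
    if !mesh_ready {
        return RenderCommandResult::Skip;
    }
    if !pipeline_ok {
        return RenderCommandResult::Failure("pipeline was not specialized");
    }
    RenderCommandResult::Success
}
```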
This wasn't intentional, but this PR should also help with
https://github.com/bevyengine/bevy/issues/12660 since we can turn a few
unwraps into error messages now.
---------
Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com>
The "uberbuffers" PR #14257 caused some examples to fail intermittently
for different reasons:
1. `morph_targets` could fail because vertex displacements for morph
targets are keyed off the vertex index. With buffer packing, the vertex
index can vary based on the position in the buffer, which caused the
morph targets to be potentially incorrect. The solution is to include
the first vertex index with the `MeshUniform` (and `MeshInputUniform` if
GPU preprocessing is in use), so that the shader can calculate the true
vertex index before performing the morph operation. This results in
wasted space in `MeshUniform`, which is unfortunate, but we'll soon be
filling in the padding with the ID of the material when bindless
textures land, so this had to happen sooner or later anyhow.
Including the vertex index in the `MeshInputUniform` caused an ordering
problem. The `MeshInputUniform` was created during the extraction phase,
before the allocations occurred, so the extraction logic didn't know
where the mesh vertex data was going to end up. The solution is to move
the `MeshInputUniform` creation (the `collect_meshes_for_gpu_building`
system) to after the allocations phase. This should be better for
parallelism anyhow, because it allows the extraction phase to finish
quicker. It's also something we'll have to do for bindless in any event.
2. The `lines` and `fog_volumes` examples could fail because their
custom drawing nodes weren't updated to supply the vertex and index
offsets in their `draw_indexed` and `draw` calls. This commit fixes this
oversight.
Fixes #14366.
Switches `Msaa` from being a globally configured resource to a per
camera view component.
Closes #7194
# Objective
Allow individual views to describe their own MSAA settings. For example,
when rendering to different windows or to different parts of the same
view.
## Solution
Make `Msaa` a component that is required on all camera bundles.
## Testing
Ran a variety of examples to ensure that nothing broke.
TODO:
- [ ] Make sure android still works per previous comment in
`extract_windows`.
---
## Migration Guide
`Msaa` is no longer configured as a global resource, and should be
specified on each spawned camera if a non-default setting is desired.
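A minimal sketch of the per-camera setup (assuming the 0.14-era camera bundle API):
```rust
use bevy::prelude::*;

// MSAA now lives on the camera entity instead of a global resource.
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        Msaa::Sample4, // omit this to keep the default sample count
    ));
}
```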
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- Fixes: https://github.com/bevyengine/bevy/issues/14036
## Solution
- Add a world space transformation for the environment sample direction.
## Testing
- I have tested the newly added `transform` field using the newly added
`rotate_environment_map` example.
https://github.com/user-attachments/assets/2de77c65-14bc-48ee-b76a-fb4e9782dbdb
## Migration Guide
- Since we have added a new field to the `EnvironmentMapLight` struct,
users will need to include `..default()` or some rotation value in their
initialization code.
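A hedged sketch of such initialization code (existing `EnvironmentMapLight` field names; the import path is assumed, and the newly added rotation value is left to `..default()` rather than named explicitly):
```rust
use bevy::pbr::EnvironmentMapLight;
use bevy::prelude::*;

fn environment_light(asset_server: &AssetServer) -> EnvironmentMapLight {
    EnvironmentMapLight {
        diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
        specular_map: asset_server.load("environment_maps/specular.ktx2"),
        intensity: 900.0,
        // Picks up the new rotation field added by this PR.
        ..default()
    }
}
```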
This commit uses the [`offset-allocator`] crate to combine vertex and
index arrays from different meshes into single buffers. Since the
primary source of `wgpu` overhead is from validation and synchronization
when switching buffers, this significantly improves Bevy's rendering
performance on many scenes.
This patch is a more flexible version of #13218, which also used slabs.
Unlike #13218, which used slabs of a fixed size, this commit implements
slabs that start small and can grow. In addition to reducing memory
usage, supporting slab growth reduces the number of vertex and index
buffer switches that need to happen during rendering, leading to
improved performance. To prevent pathological fragmentation behavior,
slabs are capped to a maximum size, and mesh arrays that are too large
get their own dedicated slabs.
As an additional improvement over #13218, this commit allows the
application to customize all allocator heuristics. The
`MeshAllocatorSettings` resource contains values that adjust the minimum
and maximum slab sizes, the cutoff point at which meshes get their own
dedicated slabs, and the rate at which slabs grow. Hopefully-sensible
defaults have been chosen for each value.
Unfortunately, WebGL 2 doesn't support the *base vertex* feature, which
is necessary to pack vertex arrays from different meshes into the same
buffer. `wgpu` represents this restriction as the downlevel flag
`BASE_VERTEX`. This patch detects that bit and ensures that all vertex
buffers get dedicated slabs on that platform. Even on WebGL 2, though,
we can combine all *index* arrays into single buffers to reduce buffer
changes, and we do so.
The following measurements are on Bistro:
Overall frame time improves from 8.74 ms to 5.53 ms (1.58x speedup):
![Screenshot 2024-07-09
163521](https://github.com/bevyengine/bevy/assets/157897/5d83c824-c0ee-434c-bbaf-218ff7212c48)
Render system time improves from 6.57 ms to 3.54 ms (1.86x speedup):
![Screenshot 2024-07-09
163559](https://github.com/bevyengine/bevy/assets/157897/d94e2273-c3a0-496a-9f88-20d394129610)
Opaque pass time improves from 4.64 ms to 2.33 ms (1.99x speedup):
![Screenshot 2024-07-09
163536](https://github.com/bevyengine/bevy/assets/157897/e4ef6e48-d60e-44ae-9a71-b9a731c99d9a)
## Migration Guide
### Changed
* Vertex and index buffers for meshes may now be packed alongside other
buffers, for performance.
* `GpuMesh` has been renamed to `RenderMesh`, to reflect the fact that
it no longer directly stores handles to GPU objects.
* Because meshes no longer have their own vertex and index buffers, the
responsibility for the buffers has moved from `GpuMesh` (now called
`RenderMesh`) to the `MeshAllocator` resource. To access the vertex data
for a mesh, use `MeshAllocator::mesh_vertex_slice`. To access the index
data for a mesh, use `MeshAllocator::mesh_index_slice`.
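A hedged sketch of the new lookup (the module path and the slice's field names are assumptions based on the description above):
```rust
use bevy::asset::AssetId;
use bevy::prelude::Mesh;
use bevy::render::mesh::allocator::MeshAllocator;

// Where a mesh's data now lives inside the shared slabs. Custom draw calls must
// offset their vertex/index arguments by the start of these ranges.
fn mesh_ranges(allocator: &MeshAllocator, mesh_id: AssetId<Mesh>) -> Option<(u32, u32)> {
    let vertex_slice = allocator.mesh_vertex_slice(&mesh_id)?;
    let index_slice = allocator.mesh_index_slice(&mesh_id)?;
    Some((vertex_slice.range.start, index_slice.range.start))
}
```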
[`offset-allocator`]: https://github.com/pcwalton/offset-allocator
Currently, volumetric fog is global and affects the entire scene
uniformly. This is inadequate for many use cases, such as local smoke
effects. To address this problem, this commit introduces *fog volumes*,
which are axis-aligned bounding boxes (AABBs) that specify fog
parameters inside their boundaries. Such volumes can also specify a
*density texture*, a 3D texture of voxels that specifies the density of
the fog at each point.
To create a fog volume, add a `FogVolume` component to an entity (which
is included in the new `FogVolumeBundle` convenience bundle). Like light
probes, a fog volume is conceptually a 1×1×1 cube centered on the
origin; a transform can be used to position and resize this region. Many
of the fields on the existing `VolumetricFogSettings` have migrated to
the new `FogVolume` component. `VolumetricFogSettings` on a camera is
still needed to enable volumetric fog. However, by itself
`VolumetricFogSettings` is no longer sufficient to enable volumetric
fog; a `FogVolume` must be present. Applications that wish to retain the
old global fog behavior can simply surround the scene with a large fog
volume.
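A minimal sketch of that setup (assuming the 0.14-era bundles and import paths; the scale here simply stands in for "large enough to cover the scene"):
```rust
use bevy::pbr::{FogVolume, VolumetricFogSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Volumetric fog is processed by the camera, but only appears inside volumes.
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));

    // A 1x1x1 volume centered on the origin, scaled up to mimic the old global fog.
    commands.spawn((
        SpatialBundle::from_transform(Transform::from_scale(Vec3::splat(100.0))),
        FogVolume::default(),
    ));
}
```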
By way of implementation, this commit converts the volumetric fog shader
from a full-screen shader to one applied to a mesh. The strategy is
different depending on whether the camera is inside or outside the fog
volume. If the camera is inside the fog volume, the mesh is simply a
plane scaled to the viewport, effectively falling back to a full-screen
pass. If the camera is outside the fog volume, the mesh is a cube
transformed to coincide with the boundaries of the fog volume's AABB.
Importantly, in the latter case, only the front faces of the cuboid are
rendered. Instead of treating the boundaries of the fog as a sphere
centered on the camera position, as we did prior to this patch, we
raytrace the far planes of the AABB to determine the portion of each ray
contained within the fog volume. We then raymarch in shadow map space as
usual. If a density texture is present, we modulate the fixed density
value with the trilinearly-interpolated value from that texture.
Furthermore, this patch introduces optional jitter to fog volumes,
intended for use with TAA. This modifies the position of the ray from
frame to frame using interleaved gradient noise, in order to reduce
aliasing artifacts. Many implementations of volumetric fog in games use
this technique. Note that this patch makes no attempt to write a motion
vector; this is because when a view ray intersects multiple voxels
there's no single direction of motion. Consequently, fog volumes can
have ghosting artifacts, but because fog is "ghostly" by its nature,
these artifacts are less objectionable than they would be for opaque
objects.
A new example, `fog_volumes`, has been added. It demonstrates a single
fog volume containing a voxelized representation of the Stanford bunny.
The existing `volumetric_fog` example has been updated to use the new
local volumetrics API.
## Changelog
### Added
* Local `FogVolume`s are now supported, to localize fog to specific
regions. They can optionally have 3D density voxel textures for precise
control over the distribution of the fog.
### Changed
* `VolumetricFogSettings` on a camera no longer enables volumetric fog;
instead, it simply enables the processing of `FogVolume`s within the
scene.
## Migration Guide
* A `FogVolume` is now necessary in order to enable volumetric fog, in
addition to `VolumetricFogSettings` on the camera. Existing uses of
volumetric fog can be migrated by placing a large `FogVolume`
surrounding the scene.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
- Using bincode to deserialize binary data into a `MeshletMesh` is expensive
(~77 ms for a 5 MB file).
## Solution
- Write a custom deserializer using bytemuck's Pod types and slice
casting.
- Total asset load time has gone from ~102ms to ~12ms.
- Change some types I never meant to be public to private and other misc
cleanup.
## Testing
- Ran the meshlet example and added timing spans to the asset loader.
---
## Changelog
- Improved `MeshletMesh` loading speed
- The `MeshletMesh` disk format has changed, and
`MESHLET_MESH_ASSET_VERSION` has been bumped
- `MeshletMesh` fields are now private
- Renamed `MeshletMeshSaverLoad` to `MeshletMeshSaverLoader`
- The `Meshlet`, `MeshletBoundingSpheres`, and `MeshletBoundingSphere`
types are now private
- Removed `MeshletMeshSaveOrLoadError::SerializationOrDeserialization`
- Added `MeshletMeshSaveOrLoadError::WrongFileType`
## Migration Guide
- Regenerate your `MeshletMesh` assets, as the disk format has changed,
and `MESHLET_MESH_ASSET_VERSION` has been bumped
- `MeshletMesh` fields are now private
- `MeshletMeshSaverLoad` is now named `MeshletMeshSaverLoader`
- The `Meshlet`, `MeshletBoundingSpheres`, and `MeshletBoundingSphere`
types are now private
- `MeshletMeshSaveOrLoadError::SerializationOrDeserialization` has been
removed
- Added `MeshletMeshSaveOrLoadError::WrongFileType`, match on this
variant if you match on `MeshletMeshSaveOrLoadError`
# Objective
- After #11804, the `queue_prepass_material_meshes` function is now
executed in parallel with other `queue_*` systems. This optimization
introduced a potential issue where `mesh_instance.should_batch()` could
return false in `queue_prepass_material_meshes` due to an unset
`material_bind_group_id`.
# Objective
- After #13894, I noticed the performance of `many_lights` dropped from
120+ to 60+. I reviewed the PR but couldn't identify any mistakes. After
profiling, I discovered that `HashMap::clone` was very slow when the map is not
empty, causing `extract_lights` to increase from 3ms to 8ms.
- Lighting only checks visibility for 3D meshes. We don't need to
maintain a `TypeIdMap` for this, as it not only impacts performance
negatively but also reduces ergonomics.
## Solution
- Use `VisibleMeshEntities` for lighting visibility checking.
## Performance
`cargo run --release --example many_lights --features bevy/trace_tracy`
system{name="bevy_pbr::light::check_point_light_mesh_visibility"}
![image](https://github.com/bevyengine/bevy/assets/45868716/8bad061a-f936-45a0-9bb9-4fbdaceec08b)
system{name="bevy_pbr::render::light::extract_lights"}
![image](https://github.com/bevyengine/bevy/assets/45868716/ca75b46c-b4ad-45d3-8c8d-66442447b753)
## Migration Guide
`SpotLightBundle`, `CascadesVisibleEntities`, and `CubemapVisibleEntities` now
use `VisibleMeshEntities` instead of `VisibleEntities`.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Bevy currently has lot of invalid intra-doc links, let's fix them!
- Also make CI test them, to avoid future regressions.
- Helps with #1983 (but doesn't fix it, as there could still be explicit
links to docs.rs that are broken)
## Solution
- Make `cargo r -p ci -- doc-check` check fail on warnings (could also
be changed to just some specific lints)
- Manually fix all the warnings (note that in some cases it was unclear
to me what the fix should have been, I'll try to highlight them in a
self-review)
Bump version after release
This PR has been auto-generated
Co-authored-by: Bevy Auto Releaser <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
Both `Material` and `MaterialExtension` (base and extension) can derive
`Debug`, so there's no reason not to allow `ExtendedMaterial` to derive it.
## Solution
Add `Debug` to the list of derived traits.
## Testing
I compiled my test project on the latest commit, making sure it actually
compiles.
Reviewers can test by creating an `ExtendedMaterial` instance and trying
`println!("{:?}", material);`.
Co-authored-by: NWPlayer123 <NWPlayer123@users.noreply.github.com>
# Objective
- Standard Material is starting to run out of samplers (currently uses
13 with no additional features enabled; I think in 0.13 it was 12).
- This change adds a new feature switch, modelled on the other ones
which add features to Standard Material, to turn off the new anisotropy
feature by default.
## Solution
- feature + texture define
## Testing
- Anisotropy example still works fine
- Other samples work fine
- Standard Material now takes 12 samplers by default on my Mac instead
of 13
## Migration Guide
- Add the `pbr_anisotropy_texture` feature if you are using that texture in
any standard materials.
---------
Co-authored-by: John Payne <20407779+johngpayne@users.noreply.github.com>
# Objective
The `AssetReader` trait allows customizing the behavior of fetching
bytes for an `AssetPath`, and expects implementors to return `dyn
AsyncRead + AsyncSeek`. This gives implementors of `AssetLoader` great
flexibility to tightly integrate their asset loading behavior with the
asynchronous task system.
However, almost all implementors of `AssetLoader` don't use the async
functionality at all, and just call `AsyncReadExt::read_to_end(&mut
Vec<u8>)`. This is incredibly inefficient, as this method repeatedly
calls `poll_read` on the trait object, filling the vector 32 bytes at a
time. At my work we have assets that are hundreds of megabytes which
makes this a meaningful overhead.
## Solution
Turn the `Reader` type alias into an actual trait, with a provided
method `read_to_end`. This provided method should be more efficient than
the existing extension method, as the compiler will know the underlying
type of `Reader` when generating this function, which removes the
repeated dynamic dispatches and allows the compiler to make further
optimizations after inlining. Individual implementors are able to
override the provided implementation -- for simple asset readers that
just copy bytes from one buffer to another, this allows removing a large
amount of overhead from the provided implementation.
Now that `Reader` is an actual trait, I also improved the ergonomics for
implementing `AssetReader`. Currently, implementors are expected to box
their reader and return it as a trait object, which adds unnecessary
boilerplate to implementations. This PR changes that trait method to
return a pseudo trait alias, which allows implementors to return `impl
Reader` instead of `Box<dyn Reader>`. Now, the boilerplate for boxing
occurs in `ErasedAssetReader`.
## Testing
I made identical changes to my company's fork of bevy. Our app, which
makes heavy use of `read_to_end` for asset loading, still worked
properly after this. I am not aware if we have a more systematic way of
testing asset loading for correctness.
---
## Migration Guide
The trait methods `bevy_asset::io::AssetReader::read` (and `read_meta`)
now return an opaque type instead of a boxed trait object. Implementors
of these methods should change the type signatures appropriately:
```rust
impl AssetReader for MyReader {
    // Before
    async fn read<'a>(&'a self, path: &'a Path) -> Result<Box<Reader<'a>>, AssetReaderError> {
        let reader = /* construct a reader */;
        Ok(Box::new(reader) as Box<Reader<'a>>)
    }

    // After
    async fn read<'a>(&'a self, path: &'a Path) -> Result<impl Reader + 'a, AssetReaderError> {
        // create a reader
    }
}
```
`bevy::asset::io::Reader` is now a trait, rather than a type alias for a
trait object. Implementors of `AssetLoader::load` will need to adjust
the method signature accordingly
```rust
impl AssetLoader for MyLoader {
    async fn load<'a>(
        &'a self,
        // Before:
        reader: &'a mut bevy::asset::io::Reader,
        // After:
        reader: &'a mut dyn bevy::asset::io::Reader,
        _: &'a Self::Settings,
        load_context: &'a mut LoadContext<'_>,
    ) -> Result<Self::Asset, Self::Error> {
    }
}
```
Additionally, implementors of `AssetReader` that return a type
implementing `futures_io::AsyncRead` and `AsyncSeek` might need to
explicitly implement `bevy::asset::io::Reader` for that type.
```rust
impl bevy::asset::io::Reader for MyAsyncReadAndSeek {}
```
# Objective
- Fixes #14059
- `morphed_skinned_mesh_layout` is the same as
`morphed_skinned_motion_mesh_layout` but shouldn't have the skin / morph
from previous frame, as they're used for motion
## Solution
- Remove the extra entries
## Testing
- Run with the glTF file reproducing #14059, it works
As reported in #14004, many third-party plugins, such as Hanabi, enqueue
entities that don't have meshes into render phases. However, the
introduction of indirect mode added a dependency on mesh-specific data,
breaking this workflow. This is because GPU preprocessing requires that
the render phases manage indirect draw parameters, which don't apply to
objects that aren't meshes. The existing code skips over binned entities
that don't have indirect draw parameters, which causes the rendering to
be skipped for such objects.
To support this workflow, this commit adds a new field,
`non_mesh_items`, to `BinnedRenderPhase`. This field contains a simple
list of (bin key, entity) pairs. After drawing batchable and unbatchable
objects, the non-mesh items are drawn one after another. Bevy itself
doesn't enqueue any items into this list; it exists solely for the
application and/or plugins to use.
Additionally, this commit switches the asset ID in the standard bin keys
to be an untyped asset ID rather than that of a mesh. This allows more
flexibility, allowing bins to be keyed off any type of asset.
This patch adds a new example, `custom_phase_item`, which simultaneously
serves to demonstrate how to use this new feature and to act as a
regression test so this doesn't break again.
Fixes #14004.
## Changelog
### Added
* `BinnedRenderPhase` now contains a `non_mesh_items` field for plugins
to add custom items to.
The comment was incorrect: we are already looking at the pyramid
texture, so we do not need to transform the size in any way. Doing that
resulted in too fine a mip being selected in certain cases, so the 2x2
pixel footprint did not actually fully cover the cluster sphere;
sometimes this could lead to a non-conservative depth value being
computed, which resulted in the cluster being incorrectly marked as
invisible.
This change updates meshopt-rs to 0.3 to take advantage of the newly
added sparse simplification mode: by default, the simplifier assumes that
the entire mesh is simplified and runs a set of calculations that are
O(vertex count), but in our case we simplify many small mesh subsets,
which is inefficient.
Sparse mode instead assumes that the simplified subset is only using a
portion of the vertex buffer, and optimizes accordingly. This changes
the meaning of the error (as it becomes relative to the subset, in our
case a meshlet group); to ensure consistent error selection, we also use
the ErrorAbsolute mode which allows us to operate in mesh coordinate
space.
Additionally, meshopt 0.3 runs optimizeMeshlet automatically as part of
`build_meshlets` so we no longer need to call it ourselves.
This reduces the time to build meshlet representation for Stanford Bunny
mesh from ~1.65s to ~0.45s (3.7x) in optimized builds.
# Objective
- Second part of #13900
- based on #13905
## Solution
- `check_dir_light_mesh_visibility` defers setting the entity's
`ViewVisibility` so that Bevy can schedule it to run in parallel with
`check_point_light_mesh_visibility`.
- Reduce `HashMap` lookups for directional light checking as much as
possible.
- Use `par_iter` to parallelize the checking process within each system.
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
# Objective
- Fixes #13728
## Solution
- Add a new feature, `smaa_luts`. If enabled, it also enables `ktx2` and
`zstd`. If not, the files are not loaded and placeholders are used instead.
- Add all the needed resources in the same places as the systems that use
them.
# Objective
- Fixes #13811 (probably, I lost my test code...)
## Solution
- Turns out that `Queue` and `PrepareAssets` are _not_ ordered. We should
probably either rethink our system sets (again) or improve the
documentation here. For reference, I've included the current ordering
below.
- The `prepare_meshlet_meshes_X` systems need to run after
`prepare_assets::<PreparedMaterial<M>>`, and have also been moved to
`QueueMeshes`.
```rust
schedule.configure_sets(
    (
        ExtractCommands,
        ManageViews,
        Queue,
        PhaseSort,
        Prepare,
        Render,
        Cleanup,
    )
        .chain(),
);
schedule.configure_sets((ExtractCommands, PrepareAssets, Prepare).chain());
schedule.configure_sets(QueueMeshes.in_set(Queue).after(prepare_assets::<GpuMesh>));
schedule.configure_sets(
    (PrepareResources, PrepareResourcesFlush, PrepareBindGroups)
        .chain()
        .in_set(Prepare),
);
```
## Testing
- Ambiguity checker to make sure I don't have ambiguous system ordering
* Fixes https://github.com/bevyengine/bevy/issues/13813
* Fixes https://github.com/bevyengine/bevy/issues/13810
Tested a combined scene with both regular meshes and meshlet meshes
with:
* Regular forward setup
* Forward + normal/motion vector prepasses
* Deferred (with depth prepass since that's required)
* Deferred + depth/normal/motion vector prepasses
Still broken:
* Using meshlet meshes rendered in deferred and regular meshes
rendered in forward + depth/normal prepass. I don't know how to fix
this at the moment, so for now I've just added instructions not to mix
them.
This is a followup to https://github.com/bevyengine/bevy/pull/13904
based on the discussion there, and switches two HashMaps that used
meshlet ids as keys to Vec.
In addition to a small further performance boost for `from_mesh` (1.66s
=> 1.60s), this makes processing deterministic modulo threading issues
wrt CRT rand described in the linked PR. This is valuable for debugging,
as you can visually or programmatically inspect the meshlet distribution
before/after making changes that should not change the output, whereas
previously every asset rebuild would change the meshlet structure.
Tested with https://github.com/bevyengine/bevy/pull/13431; after this
change, the visual output of meshlets is consistent between asset
rebuilds, and the MD5 of the output GLB file does not change either,
which was not the case before.
# Objective
- Fixes #11933.
- Related: #12280.
## Solution
- Specify that, after applying `AmbientLight`, the resulting units are
in cd/m^2.
- This is based on [@fintelia's
comment](https://github.com/bevyengine/bevy/issues/11933#issuecomment-1995427587),
and will need to be verified.
---
## Changelog
- Specified units for `AmbientLight`'s `brightness` field.
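For reference, a small sketch of where that value is set (illustrative numbers; per this PR's documentation change, the applied result is in cd/m²):
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.insert_resource(AmbientLight {
        // After applying AmbientLight, the resulting units are in cd/m^2.
        brightness: 80.0,
        ..default()
    });
}
```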
This change reworks `find_connected_meshlets` to scale more linearly
with the mesh size, which significantly reduces the cost of building
meshlet representations. As a small extra complexity reduction, it moves
the `simplify_scale` call out of the loop so that it's called once (it only
depends on the vertex data, so it is safe to cache).
The new implementation of connectivity analysis builds edge=>meshlet
list data structure, which allows us to only iterate through
`tuple_combinations` of a (usually) small list. There is still some
redundancy as if two meshlets share two edges, they will be represented
in the meshlet lists twice, but it's overall much faster.
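A self-contained sketch of that data structure (illustrative code, not the actual `bevy_pbr` internals):
```rust
use std::collections::HashMap;

/// `meshlet_edges[i]` lists meshlet `i`'s (undirected) edges as vertex-index pairs.
/// Returns how many edges each pair of meshlets shares.
fn count_shared_edges(meshlet_edges: &[Vec<(u32, u32)>]) -> HashMap<(usize, usize), usize> {
    // Edge -> list of meshlets that contain it.
    let mut edge_to_meshlets: HashMap<(u32, u32), Vec<usize>> = HashMap::new();
    for (meshlet, edges) in meshlet_edges.iter().enumerate() {
        for &(a, b) in edges {
            edge_to_meshlets.entry((a.min(b), a.max(b))).or_default().push(meshlet);
        }
    }

    // Each per-edge list is usually tiny, so iterating over its pairs is cheap
    // compared to comparing every meshlet against every other meshlet.
    let mut shared: HashMap<(usize, usize), usize> = HashMap::new();
    for meshlets in edge_to_meshlets.values() {
        for i in 0..meshlets.len() {
            for j in (i + 1)..meshlets.len() {
                let key = (meshlets[i].min(meshlets[j]), meshlets[i].max(meshlets[j]));
                *shared.entry(key).or_default() += 1;
            }
        }
    }
    shared
}
```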
Since the hash traversal is non-deterministic, to keep this part of the
algorithm deterministic for reproducible results we sort the output
adjacency lists.
Overall this reduces the time to process bunny mesh from ~4.2s to ~1.7s
when using release; in unoptimized builds the delta is even more
significant.
This was tested by using https://github.com/bevyengine/bevy/pull/13431
and:
a) comparing the result of `find_connected_meshlets` using old and new
code; they are equal in all steps of the clustering process
b) comparing the rendered result of the old code vs new code *after*
making the rest of the algorithm deterministic: right now the loop that
iterates through the result of `group_meshlets()` call executes in
different order between program runs. This is orthogonal to this change
and can be fixed separately.
Note: a future change can shrink the processing time further from ~1.7s
to ~0.4s with a small diff but that requires an update to meshopt crate
which is pending in https://github.com/gwihlidal/meshopt-rs/pull/42.
This change is independent.
# Objective
- first part of #13900
## Solution
- Split `check_light_mesh_visibility` into
`check_dir_light_mesh_visibility` and
`check_point_light_mesh_visibility` for better review
# Objective
- After #12582, Bevy split `VisibleEntities` into a `TypeIdMap` for
different types of entities, but the behavior in
`check_light_mesh_visibility` simply calls `HashMap::clear()`, which will
reallocate memory every frame.
## Testing
`cargo run --release --example many_cubes --features bevy/trace_tracy -- --shadows`
~10% win in `check_light_mesh_visibility`
![image](https://github.com/bevyengine/bevy/assets/45868716/1bf4deef-bab2-4e5f-9f60-bea8b7e33e3e)
# Objective
Closes #13738
## Solution
Added `from_color` to materials that would support it. Didn't add
`from_color` to `WireframeMaterial` as it doesn't seem we expect users
to be constructing them themselves.
## Testing
None
---
## Changelog
### Added
- `from_color` to `StandardMaterial` so you can construct this material
from any color type.
- `from_color` to `ColorMaterial` so you can construct this material
from any color type.
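A quick sketch of the addition in use (color types assumed to be available via the prelude):
```rust
use bevy::prelude::*;

fn add_materials(
    mut standard_materials: ResMut<Assets<StandardMaterial>>,
    mut color_materials: ResMut<Assets<ColorMaterial>>,
) {
    // Any color type that converts into `Color` works.
    let _metal = standard_materials.add(StandardMaterial::from_color(Srgba::rgb(0.8, 0.2, 0.2)));
    let _flat = color_materials.add(ColorMaterial::from_color(Hsla::hsl(120.0, 1.0, 0.5)));
}
```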
# Objective
- Mikktspace requires that we normalize world normals/tangents _before_
interpolation across vertices, and then do _not_ normalize after. I had
it backwards.
- We do not (are not supposed to?) need a second set of barycentrics for
motion vectors. If you think about the typical raster pipeline, in the
vertex shader we calculate `previous_world_position`, and then it gets
interpolated using the current triangle's barycentrics.
## Solution
- Fix normal/tangent processing
- Reuse barycentrics for motion vector calculations
- Not implementing this for 0.14, but long term I aim to remove explicit
vertex tangents and calculate them in the shader on the fly.
## Testing
- I tested out some of the normal maps we have in repo. Didn't seem to
make a difference, but mikktspace is all about correctness across
various baking tools. I probably just didn't have any of the ones that
would cause it to break.
- Didn't test motion vectors as there's a known bug with the depth
buffer and meshlets that I'm waiting on the render graph rewrite to fix.
# Objective
- If the fog is disabled it still generates a useless branch which can
hurt performance
## Solution
- Make the flag a shader_def instead
## Testing
- I tested enabling/disabling fog works as expected per-material in the
fog example
- I also tested that scenes that don't add the FogSettings resource
still work correctly
## Review notes
I'm not sure how to handle the removed material flag. Right now I just
commented it out and added a note to reuse it instead of creating a new
one.
# Objective
One thing missing from the new `Color` implementation in 0.14 is the
ability to easily convert to a `u8` representation of the RGB color.
(Note: this is a redo of PR https://github.com/bevyengine/bevy/pull/13739,
as I needed to move the source branch.)
## Solution
I have added `to_u8_array` and `to_u8_array_no_alpha` to a new trait called
`ColorToPacked`, to mirror the f32 conversions in `ColorToComponents`, and
implemented the new trait for `Srgba` and `LinearRgba`.
To go with those I also added matching `from_u8...` functions and
converted a couple of cases that used ad-hoc implementations of that
conversion to use these.
After discussion on Discord of the experience of using the API, I renamed
`Color::linear` to `Color::to_linear`, as without that it looks like a
constructor (like `Color::rgb`).
I also added `to_srgba`, which is the other commonly converted-to color type
(for UI and 2D), to match `to_linear`.
Removed a redundant extra implementation of `to_f32_array` for `LinearRgba`,
as it is also supplied in `ColorToComponents` (I'm surprised that's
allowed?)
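A hedged sketch of the resulting API (method names as described above):
```rust
use bevy::color::{Color, ColorToPacked, Srgba};

fn roundtrip() {
    let srgba = Srgba::new(0.5, 0.25, 1.0, 1.0);
    let bytes: [u8; 4] = srgba.to_u8_array(); // or to_u8_array_no_alpha() for 3 channels
    let back = Srgba::from_u8_array(bytes);

    // The rename: what used to be Color::linear is now Color::to_linear,
    // alongside to_srgba for the other common conversion target.
    let linear = Color::srgba(0.5, 0.25, 1.0, 1.0).to_linear();
    let _ = (back, linear);
}
```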
## Testing
Ran all tests and manually tested.
Added to_and_from_u8 to linear_rgba::tests
## Changelog
The visible change is that `Color::linear` becomes `Color::to_linear`.
---------
Co-authored-by: John Payne <20407779+johngpayne@users.noreply.github.com>
# Objective
- `apply_normal_mapping` was changed to use TBN, but the `pbr_prepass` was
not updated for that change
## Solution
- Update the pbr_prepass to correctly apply normal mapping
* Rename `cull_meshlets` -> `cull_clusters`
* Rename `meshlet_visible` -> `cluster_visible`
* Add an if statement around `meshlet_second_pass_candidates` writes,
possibly a small performance win.
We want to use the clustering infrastructure for light probes and decals
as well, not just point lights. This patch builds on top of #13640 and
performs the rename.
To make this series easier to review, this patch makes no code changes.
Only identifiers and comments are modified.
## Migration Guide
* In the PBR shaders, `point_lights` is now known as
`clusterable_objects`, `PointLight` is now known as `ClusterableObject`,
and `cluster_light_index_lists` is now known as
`clusterable_object_index_lists`.
This commit implements support for physically-based anisotropy in Bevy's
`StandardMaterial`, following the specification for the
[`KHR_materials_anisotropy`] glTF extension.
[*Anisotropy*] (not to be confused with [anisotropic filtering]) is a
PBR feature that allows roughness to vary along the tangent and
bitangent directions of a mesh. In effect, this causes the specular
light to stretch out into lines instead of a round lobe. This is useful
for modeling brushed metal, hair, and similar surfaces. Support for
anisotropy is a common feature in major game and graphics engines;
Unity, Unreal, Godot, three.js, and Blender all support it to varying
degrees.
Two new parameters have been added to `StandardMaterial`:
`anisotropy_strength` and `anisotropy_rotation`. Anisotropy strength,
which ranges from 0 to 1, represents how much the roughness differs
between the tangent and the bitangent of the mesh. In effect, it
controls how stretched the specular highlight is. Anisotropy rotation
allows the roughness direction to differ from the tangent of the model.
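A brief sketch of the two fixed parameters in use (field names as introduced by this commit):
```rust
use bevy::prelude::*;

fn brushed_metal() -> StandardMaterial {
    StandardMaterial {
        metallic: 1.0,
        perceptual_roughness: 0.4,
        // How strongly roughness differs between tangent and bitangent (0.0..=1.0).
        anisotropy_strength: 0.8,
        // Rotates the anisotropy direction away from the mesh tangent.
        anisotropy_rotation: 0.5,
        ..default()
    }
}
```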
In addition to these two fixed parameters, an *anisotropy texture* can
be supplied. Such a texture should be a 3-channel RGB texture, where the
red and green values specify a direction vector using the same
conventions as a normal map ([0, 1] color values map to [-1, 1] vector
values), and the blue value represents the strength. This matches
the format that the [`KHR_materials_anisotropy`] specification requires.
Such textures should be loaded as linear and not sRGB. Note that this
texture does consume one additional texture binding in the standard
material shader.
The glTF loader has been updated to properly parse the
`KHR_materials_anisotropy` extension.
A new example, `anisotropy`, has been added. This example loads and
displays the barn lamp example from the [`glTF-Sample-Assets`]
repository. Note that the textures were rather large, so I shrunk them
down and converted them to a mixture of JPEG and KTX2 format, in the
interests of saving space in the Bevy repository.
[*Anisotropy*]:
https://google.github.io/filament/Filament.md.html#materialsystem/anisotropicmodel
[anisotropic filtering]:
https://en.wikipedia.org/wiki/Anisotropic_filtering
[`KHR_materials_anisotropy`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_anisotropy/README.md
[`glTF-Sample-Assets`]:
https://github.com/KhronosGroup/glTF-Sample-Assets/
## Changelog
### Added
* Physically-based anisotropy is now available for materials, which
enhances the look of surfaces such as brushed metal or hair. glTF scenes
can use the new feature with the `KHR_materials_anisotropy` extension.
## Screenshots
With anisotropy:
![Screenshot 2024-05-20
233414](https://github.com/bevyengine/bevy/assets/157897/379f1e42-24e9-40b6-a430-f7d1479d0335)
Without anisotropy:
![Screenshot 2024-05-20
233420](https://github.com/bevyengine/bevy/assets/157897/aa220f05-b8e7-417c-9671-b242d4bf9fc4)
# Objective
- Fixes #10909
- Fixes #8492
## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.
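A small sketch of why the convention composes nicely (illustrative, plain `Mat4` math):
```rust
use bevy::math::Mat4;

// The name puts the output space on the left and the input space on the right,
// so a chain of transforms reads like cancellation: clip_from_view * view_from_world.
fn clip_from_world(clip_from_view: Mat4, world_from_view: Mat4) -> Mat4 {
    let view_from_world = world_from_view.inverse();
    clip_from_view * view_from_world
}
```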
## Testing
- I've tested most of the 3D examples. The `lighting` example
particularly should hit a lot of the changes and appears to run fine.
---
## Changelog
- Renamed matrices across the engine to follow an `x_from_y` naming
(e.g. `world_from_view`), making the space conversion more obvious.
## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far`
and `from_view_projection_no_far` were renamed to
`from_clip_from_world`, `from_clip_from_world_custom_far` and
`from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to
`clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to
`get_clip_from_view` (this affects implementations on `Projection`,
`PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to
`from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and
`clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were
renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`,
`inverse_view_proj`, `view`, `inverse_view`, `projection` and
`inverse_projection` were renamed to `clip_from_world`,
`unjittered_clip_from_world`, `world_from_clip`, `world_from_view`,
`view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to
`clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to
`world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`,
`inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed
to `world_from_local`, `previous_world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh` type in WGSL mirrors this, however `transform` and
`previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and
`inverse_transpose_model_b` were renamed to `world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and
`get_previous_model_matrix` were renamed to `get_world_from_local` and
`get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed
to `get_world_from_local`.
As a prerequisite for decals and clustering of light probes, we want
clustering to operate on objects other than lights. (Currently, it only
operates on point and spot lights.) This necessitates a large
refactoring, so I'm splitting it up into small steps.
The first such step is to separate clustering from lighting by moving
clustering-related types and functions out of lighting and into their
own module subtree within the `bevy_pbr` crate. (Ultimately, we may want
to move it to `bevy_render`, but that requires more work and can be a
followup.)
No code changes have been made other than adjusting import lists and
moving code. This is to make this code easy to review. Ultimately, I
want to rename "light" to "clusterable object" in most cases, but doing
that at the same time as moving the code would make reviewing harder. So
instead I'm moving the code first and will follow this up with renaming.
## Migration Guide
* Clustering-related types and functions (e.g.
`assign_lights_to_clusters`) have moved under `bevy_pbr::cluster`, in
preparation for the ability to cluster objects other than lights.
# Objective
- Using multiple raster passes to generate the depth pyramid is
extremely slow
- Pulling data from the source image is the largest bottleneck, it's
important to sample in a cache-aware pattern
- Barriers and pipeline drain between the raster passes is the second
largest bottleneck
- Each separate RenderPass on the CPU is _really_ expensive
## Solution
- Port [FidelityFX SPD](https://gpuopen.com/fidelityfx-spd) to WGSL,
replacing meshlet's existing multiple raster passes with a ~~single~~
two compute dispatches. Lack of coherent buffers means we have to do the
last 64x64 tile from mip 7+ in a separate dispatch to ensure the mip
6 writes were flushed :(
- Workgroup shared memory version only at the moment, as the subgroup
operation is blocked by our upgrade to wgpu 0.20 #13186
- Don't enforce a power-of-2 depth pyramid texture size, simply scaling
by 0.5 is fine
# Objective
- Add motion vector support to the skybox
- This fixes the last remaining "gap" to complete the motion blur
feature
## Solution
- Add a pipeline for the skybox to write motion vectors to the prepass
## Testing
- Used examples to test motion vectors using motion blur
https://github.com/bevyengine/bevy/assets/2632925/74c0778a-7e77-4e68-8111-05791e4bfdd2
---------
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
# Objective
- The current version of the `meshopt` dependency is incorrect, as
`bevy_pbr` uses features introduced in `meshopt` `0.2.1`
- This causes errors like this when only `meshopt` `0.2` is present in
`Cargo.lock`:
```
error[E0432]: unresolved imports `meshopt::ffi::meshopt_optimizeMeshlet`, `meshopt::simplify_scale`
  --> crates\bevy_pbr\src\meshlet\from_mesh.rs:10:27
   |
10 |     ffi::{meshopt_Bounds, meshopt_optimizeMeshlet},
   |                           ^^^^^^^^^^^^^^^^^^^^^^^ no `meshopt_optimizeMeshlet` in `ffi`
   |                           help: a similar name exists in the module: `meshopt_optimizeOverdraw`
11 |     simplify, simplify_scale, Meshlets, SimplifyOptions, VertexDataAdapter,
   |               ^^^^^^^^^^^^^^ no `simplify_scale` in the root
```
## Solution
- Specify the actual minimum version of `meshopt` that `bevy_pbr`
requires
This is a revamped equivalent to #9902, though it shares none of the
code. It handles all special cases that I've tested correctly.
The overall technique consists of double-buffering the joint matrix and
morph weights buffers, as most of the previous attempts to solve this
problem did. The process is generally straightforward. Note that, to
avoid regressing the ability of mesh extraction, skin extraction, and
morph target extraction to run in parallel, I had to add a new system to
rendering, `set_mesh_motion_vector_flags`. The comment there explains
the details; it generally runs very quickly.
I've tested this with modified versions of the `animated_fox`,
`morph_targets`, and `many_foxes` examples that add TAA, and the patch
works. To avoid bloating those examples, I didn't add switches for TAA
to them.
Addresses points (1) and (2) of #8423.
## Changelog
### Fixed
* Motion vectors, and therefore TAA, are now supported for meshes with
skins and/or morph targets.
Fixes #13118
If you use `Sprite` or `Mesh2d` and create `Camera` with
* hdr=false
* any tonemapper
You would get
```
wgpu error: Validation Error
Caused by:
In Device::create_render_pipeline
note: label = `sprite_pipeline`
Error matching ShaderStages(FRAGMENT) shader requirements against the pipeline
Shader global ResourceBinding { group: 0, binding: 19 } is not available in the pipeline layout
Binding is missing from the pipeline layout
```
Because of missing tonemapping LUT bindings
## Solution
Add missing bindings for tonemapping LUT's to `SpritePipeline` &
`Mesh2dPipeline`
## Testing
I checked that
* `tonemapping`
* `color_grading`
* `sprite_animations`
* `2d_shapes`
* `meshlet`
* `deferred_rendering`
examples are still working
2d cases I checked with this code:
```rust
use bevy::{
    color::palettes::css::PURPLE, core_pipeline::tonemapping::Tonemapping, prelude::*,
    sprite::MaterialMesh2dBundle,
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, toggle_tonemapping_method)
        .run();
}

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<ColorMaterial>>,
    asset_server: Res<AssetServer>,
) {
    commands.spawn(Camera2dBundle {
        camera: Camera {
            hdr: false,
            ..default()
        },
        tonemapping: Tonemapping::BlenderFilmic,
        ..default()
    });
    commands.spawn(MaterialMesh2dBundle {
        mesh: meshes.add(Rectangle::default()).into(),
        transform: Transform::default().with_scale(Vec3::splat(128.)),
        material: materials.add(Color::from(PURPLE)),
        ..default()
    });
    commands.spawn(SpriteBundle {
        texture: asset_server.load("asd.png"),
        ..default()
    });
}

fn toggle_tonemapping_method(
    keys: Res<ButtonInput<KeyCode>>,
    mut tonemapping: Query<&mut Tonemapping>,
) {
    let mut method = tonemapping.single_mut();
    if keys.just_pressed(KeyCode::Digit1) {
        *method = Tonemapping::None;
    } else if keys.just_pressed(KeyCode::Digit2) {
        *method = Tonemapping::Reinhard;
    } else if keys.just_pressed(KeyCode::Digit3) {
        *method = Tonemapping::ReinhardLuminance;
    } else if keys.just_pressed(KeyCode::Digit4) {
        *method = Tonemapping::AcesFitted;
    } else if keys.just_pressed(KeyCode::Digit5) {
        *method = Tonemapping::AgX;
    } else if keys.just_pressed(KeyCode::Digit6) {
        *method = Tonemapping::SomewhatBoringDisplayTransform;
    } else if keys.just_pressed(KeyCode::Digit7) {
        *method = Tonemapping::TonyMcMapface;
    } else if keys.just_pressed(KeyCode::Digit8) {
        *method = Tonemapping::BlenderFilmic;
    }
}
```
---
## Changelog
Fixed the bug which led to a crash when the user uses any tonemapper without
HDR for rendering sprites and 2D meshes.
# Objective
- Fixes #13521
## Solution
Set `ambient_intensity` to 0.0 in the `volumetric_fog` example.
I chose setting it explicitly over changing the default in order to make
it clear that this needs to be set depending on whether you have an
`EnvironmentMapLight`. See documentation for `ambient_intensity` and
related members.
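Concretely, the example's settings now look roughly like this (a sketch; other fields stay at their defaults):
```rust
use bevy::pbr::VolumetricFogSettings;

fn fog_settings() -> VolumetricFogSettings {
    VolumetricFogSettings {
        // No EnvironmentMapLight in this scene, so don't add ambient in-scattering.
        ambient_intensity: 0.0,
        ..Default::default()
    }
}
```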
## Testing
- Run the volumetric_fog example and notice how the light shown in
#13521 is not there anymore, as expected.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- #13418 broke volumetric fog
```
wgpu error: Validation Error
Caused by:
In a RenderPass
note: encoder = `<CommandBuffer-(2, 4, Metal)>`
In a set_bind_group command
note: bind group = `mesh_view_bind_group`
Bind group 0 expects 5 dynamic offsets. However 4 dynamic offsets were provided.
```
## Solution
- add ssr offset to volumetric fog bind group
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).
For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.
SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).
To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
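A hedged sketch of that camera setup (component names as described above; import paths assumed):
```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsSettings;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // SSR needs the deferred pipeline plus depth information.
        ScreenSpaceReflectionsSettings::default(),
        DepthPrepass,
        DeferredPrepass,
    ));
}
```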
A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).
## Changelog
### Added
* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflections` component to a camera. Deferred
rendering must be enabled for the reflections to appear.
![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)
![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
This commit makes us stop using the render world ECS for
`BinnedRenderPhase` and `SortedRenderPhase` and instead use resources
with `EntityHashMap`s inside. There are three reasons to do this:
1. We can use `clear()` to clear out the render phase collections
instead of recreating the components from scratch, allowing us to reuse
allocations.
2. This is a prerequisite for retained bins, because components can't be
retained from frame to frame in the render world, but resources can.
3. We want to move away from storing anything in components in the
render world ECS, and this is a step in that direction.
This patch results in a small performance benefit, due to point (1)
above.
## Changelog
### Changed
* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources.
## Migration Guide
* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources. Instead of querying for the
components, look the camera entity up in the
`ViewBinnedRenderPhases`/`ViewSortedRenderPhases` tables.
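A hedged sketch of the new lookup pattern (module paths are assumptions; `Opaque3d` stands in for whichever phase item type you need):
```rust
use bevy::core_pipeline::core_3d::Opaque3d;
use bevy::prelude::*;
use bevy::render::render_phase::ViewBinnedRenderPhases;
use bevy::render::view::ExtractedView;

fn inspect_phases(
    phases: Res<ViewBinnedRenderPhases<Opaque3d>>,
    views: Query<Entity, With<ExtractedView>>,
) {
    for view_entity in &views {
        // Look the camera up in the resource instead of querying for a component.
        let Some(phase) = phases.get(&view_entity) else {
            continue;
        };
        let _ = phase;
    }
}
```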
Commit 3f5a090b1b added a reference to
`STANDARD_MATERIAL_FLAGS_BASE_COLOR_UV_BIT`, a nonexistent identifier,
in the alpha discard portion of the prepass shader. Moreover, the logic
didn't make sense to me. I think the code was trying to choose between
the two UV sets depending on which is present, so I made it do that.
I noticed this when trying Bistro with #13277. I'm not sure why this
issue didn't manifest itself before, but it's clearly a bug, so here's a
fix. We should probably merge this before 0.14.
# Objective
- The volumetric fog PR originally needed to be modified to use
`.view_layouts` but that was changed in another PR. The merge with main
still kept those around.
## Solution
- Remove them because they aren't necessary
This commit implements a more physically-accurate, but slower, form of
fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog*
allows for light beams from directional lights to shine through,
creating what is known as *light shafts* or *god rays*.
To add volumetric fog to a scene, add `VolumetricFogSettings` to the
camera, and add `VolumetricLight` to directional lights that you wish to
be volumetric. `VolumetricFogSettings` has numerous settings that allow
you to define the accuracy of the simulation, as well as the look of the
fog. Currently, only interaction with directional lights that have
shadow maps is supported. Note that the overhead of the effect scales
directly with the number of directional lights in use, so apply
`VolumetricLight` sparingly for the best results.
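A minimal sketch of that setup (0.14-era bundle names and import paths assumed):
```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));

    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                // Only shadow-mapped directional lights participate.
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```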
The overall algorithm, which is implemented as a postprocessing effect,
is a combination of the techniques described in [Scratchapixel] and
[this blog post]. It uses raymarching in screen space, transformed into
shadow map space for sampling and combined with physically-based
modeling of absorption and scattering. Bevy employs the widely-used
[Henyey-Greenstein phase function] to model asymmetry; this essentially
allows light shafts to fade into and out of existence as the user views
them.
Volumetric rendering is a huge subject, and I deliberately kept the
scope of this commit small. Possible follow-ups include:
1. Raymarching at a lower resolution.
2. A post-processing blur (especially useful when combined with (1)).
3. Supporting point lights and spot lights.
4. Supporting lights with no shadow maps.
5. Supporting irradiance volumes and reflection probes.
6. Voxel components that reuse the volumetric fog code to create voxel
shapes.
7. *Horizon: Zero Dawn*-style clouds.
These are all useful, but out of scope of this patch for now, to keep
things tidy and easy to review.
A new example, `volumetric_fog`, has been added to demonstrate the
effect.
## Changelog
### Added
* A new component, `VolumetricFog`, is available, to allow for a more
physically-accurate, but more resource-intensive, form of fog.
* A new component, `VolumetricLight`, can be placed on directional
lights to make them interact with `VolumetricFog`. Notably, this allows
such lights to emit light shafts/god rays.
![Screenshot 2024-04-21
162808](https://github.com/bevyengine/bevy/assets/157897/7a1fc81d-eed5-4735-9419-286c496391a9)
![Screenshot 2024-04-21
132005](https://github.com/bevyengine/bevy/assets/157897/e6d3b5ca-8f59-488d-a3de-15e95aaf4995)
[Scratchapixel]:
https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html
[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/
[Henyey-Greenstein phase function]:
https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
# Objective
Remove the limit on `RenderLayers` by using a growable mask backed by
`SmallVec`.
Changes adopted from @UkoeHB's initial PR here
https://github.com/bevyengine/bevy/pull/12502 that contained additional
changes related to propagating render layers.
## Solution
The main thing needed to unblock this is removing `RenderLayers` from
our shader code. This primarily affects `DirectionalLight`. We are now
computing a `skip` field on the CPU that is then used to skip the light
in the shader.
## Testing
Checked a variety of examples and did a quick benchmark on `many_cubes`.
There were some existing problems identified during the development of
the original pr (see:
https://discord.com/channels/691052431525675048/1220477928605749340/1221190112939872347).
This PR shouldn't change any existing behavior besides removing the
layer limit (sans the comment in migration about `all` layers no longer
being possible).
---
## Changelog
Removed the limit on `RenderLayers` by using a growable bitset that only
allocates when layers greater than 64 are used.
## Migration Guide
- `RenderLayers::all()` no longer exists. Entities expecting to be
visible on all layers, e.g. lights, should compute the active layers
that are in use.
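A short sketch of the migration (the layer indices are stand-ins for whatever layers your app actually uses):
```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

fn spawn_light(mut commands: Commands) {
    // Previously this light might have used RenderLayers::all(); now list the
    // layers that are actually in use.
    let active_layers = RenderLayers::from_layers(&[0, 1, 2]);
    commands.spawn((DirectionalLightBundle::default(), active_layers));
}
```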
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
# Objective
To streamline code that utilizes `Debug` in user structs like
`GraphicsSettings`. This addition aims to enhance code simplicity and
readability.
## Solution
Add `Debug` derive for `ScreenSpaceAmbientOcclusionSettings` struct.
## Testing
Should have no impact.
# Objective
Optimize vertex prepass shader maybe?
Make it consistent with the base vertex shader
## Solution
`mesh_position_local_to_clip` just calls `mesh_position_local_to_world`
and then `position_world_to_clip`.
Since `out.world_position` is getting calculated anyway a few lines
below, just move it up and use its output to calculate `out.position`.
It is the same as in the base vertex shader (`mesh.wgsl`).
Note: I have no idea if there is a reason that it was this way. I'm not
an expert, just noticed this inconsistency while messing with custom
shaders.
# Objective
- The UV transform was applied in the main pass but not the prepass.
## Solution
- Apply the UV transform in the prepass.
## Testing
- The normals in my scene now look correct when using the prepass.
# Objective
- When building for release, there are "unused" warnings:
```
warning: unused import: `bevy_utils::warn_once`
  --> crates/bevy_pbr/src/render/mesh_view_bindings.rs:32:5
   |
32 | use bevy_utils::warn_once;
   |     ^^^^^^^^^^^^^^^^^^^^^
   |
   = note: `#[warn(unused_imports)]` on by default

warning: unused variable: `texture_count`
   --> crates/bevy_pbr/src/render/mesh_view_bindings.rs:371:17
    |
371 |     let texture_count: usize = entries
    |         ^^^^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_texture_count`
    |
    = note: `#[warn(unused_variables)]` on by default
```
## Solution
- Gate the import and definition by the same cfg as their uses
# Objective
Fixes #13224.
Adds conversions for `Vec3` and `Vec4`, since these appear so often.
## Solution
Added a `Covert` trait (couldn't think of a good name) for `[f32; 4]`,
`[f32; 3]`, `Vec4`, and `Vec3`, along with the symmetric implementation.
## Changelog
Added conversions between arrays/vectors and colors, and vice versa.
## Migration Guide
`LinearRgba` appears to have already had implicit conversions for `[f32; 4]`
and `Vec4`.
WebGL 2 doesn't support variable-length uniform buffer arrays. So we
arbitrarily set the length of the visibility ranges field to 64 on that
platform.
---------
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Copied almost verbatim from the volumetric fog PR
# Objective
- Managing mesh view layouts is complicated
## Solution
- Extract it to its own struct
- This was done as part of #13057 and is copied almost verbatim. I
wanted to keep this part of the PR as its own atomic commit in case we
ever have to revert fog or run a bisect. This change is good whether or
not we have volumetric fog.
Co-Authored-By: @pcwalton
Switched the return type from `Vec3` to `Dir3` for directional axis
methods within the `GlobalTransform` component.
## Migration Guide
The `GlobalTransform` component's directional axis methods (e.g.,
`right()`, `left()`, `up()`, `down()`, `back()`, `forward()`) have been
updated from returning `Vec3` to `Dir3`.
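A small sketch of a call site after the change; `Dir3` dereferences to
`Vec3`, so code that needs a plain vector can usually just deref:
```rust
use bevy::math::{Dir3, Vec3};
use bevy::prelude::*;

fn print_facing(query: Query<&GlobalTransform>) {
    for transform in &query {
        let forward: Dir3 = transform.forward(); // previously returned Vec3
        let as_vec: Vec3 = *forward;             // deref when a Vec3 is needed
        info!("facing {forward:?} ({as_vec:?})");
    }
}
```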
Clearcoat is a separate material layer that represents a thin
translucent layer of a material. Examples include (from the [Filament
spec]) car paint, soda cans, and lacquered wood. This commit implements
support for clearcoat following the Filament and Khronos specifications,
marking the beginnings of support for multiple PBR layers in Bevy.
The [`KHR_materials_clearcoat`] specification describes the clearcoat
support in glTF. In Blender, applying a clearcoat to the Principled BSDF
node causes the clearcoat settings to be exported via this extension. As
of this commit, Bevy parses and reads the extension data when present in
glTF. Note that the `gltf` crate has no support for
`KHR_materials_clearcoat`; this patch therefore implements the JSON
semantics manually.
Clearcoat is integrated with `StandardMaterial`, but the code is behind
a series of `#ifdef`s that only activate when clearcoat is present.
Additionally, the `pbr_feature_layer_material_textures` Cargo feature
must be active in order to enable support for clearcoat factor maps,
clearcoat roughness maps, and clearcoat normal maps. This approach
mirrors the same pattern used by the existing transmission feature and
exists to avoid running out of texture bindings on platforms like WebGL
and WebGPU. Note that constant clearcoat factors and roughness values
*are* supported in the browser; only the relatively-less-common maps are
disabled on those platforms.
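A minimal sketch of enabling the layer on a `StandardMaterial` (field names
as introduced by this commit; the values are only illustrative):
```rust
use bevy::prelude::*;

fn car_paint() -> StandardMaterial {
    StandardMaterial {
        base_color: Color::srgb(0.6, 0.0, 0.0),
        // Full-strength clearcoat layer with a glossy finish.
        clearcoat: 1.0,
        clearcoat_perceptual_roughness: 0.1,
        ..Default::default()
    }
}
```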
This patch refactors the lighting code in `StandardMaterial`
significantly in order to better support multiple layers in a natural
way. That code was due for a refactor in any case, so this is a nice
improvement.
A new demo, `clearcoat`, has been added. It's based on [the
corresponding three.js demo], but all the assets (aside from the skybox
and environment map) are my original work.
[Filament spec]:
https://google.github.io/filament/Filament.html#materialsystem/clearcoatmodel
[`KHR_materials_clearcoat`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_clearcoat/README.md
[the corresponding three.js demo]:
https://threejs.org/examples/webgl_materials_physical_clearcoat.html
![Screenshot 2024-04-19
101143](https://github.com/bevyengine/bevy/assets/157897/3444bcb5-5c20-490c-b0ad-53759bd47ae2)
![Screenshot 2024-04-19
102054](https://github.com/bevyengine/bevy/assets/157897/6e953944-75b8-49ef-bc71-97b0a53b3a27)
## Changelog
### Added
* `StandardMaterial` now supports a clearcoat layer, which represents a
thin translucent layer over an underlying material.
* The glTF loader now supports the `KHR_materials_clearcoat` extension,
representing materials with clearcoat layers.
## Migration Guide
* The lighting functions in the `pbr_lighting` WGSL module now have
clearcoat parameters, if `STANDARD_MATERIAL_CLEARCOAT` is defined.
* The `R` reflection vector parameter has been removed from some
lighting functions, as it was unused.
# Objective
- Per-cluster (instance of a meshlet) data upload is ridiculously
expensive in both CPU and GPU time (8 bytes per cluster, millions of
clusters, you very quickly run into PCIE bandwidth maximums, and lots of
CPU-side copies and malloc).
- We need to be uploading only per-instance/entity data. Anything else
needs to be done on the GPU.
## Solution
- Per instance, upload:
- `meshlet_instance_meshlet_counts_prefix_sum` - An exclusive prefix sum
over the count of how many clusters each instance has.
- `meshlet_instance_meshlet_slice_starts` - The starting index of the
meshlets for each instance within the `meshlets` buffer.
- A new `fill_cluster_buffers` pass, run once at the start of the frame,
has a thread per cluster. Each thread finds its cluster's instance ID via
a binary search of `meshlet_instance_meshlet_counts_prefix_sum`, and then
uses that plus `meshlet_instance_meshlet_slice_starts` to find which
meshlet within the instance it is (see the sketch below). The shader then
writes out the per-cluster instance/meshlet ID buffers for later passes
to quickly read from.
- I've gone from 45 -> 180 FPS in my stress test scene, and saved
~30ms/frame of overall CPU/GPU time.
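A CPU-side sketch of the per-cluster lookup described above (the real version
runs in the `fill_cluster_buffers` compute shader; the function and buffer
names here are illustrative):
```rust
/// `counts_prefix_sum[i]` holds the number of clusters owned by all instances
/// before instance `i` (an exclusive prefix sum).
fn cluster_ids(cluster: u32, counts_prefix_sum: &[u32], slice_starts: &[u32]) -> (usize, u32) {
    // The owning instance is the last prefix-sum entry that is <= cluster.
    let instance = counts_prefix_sum.partition_point(|&start| start <= cluster) - 1;
    // Meshlet index = the instance's slice start plus the cluster's offset
    // within that instance.
    let meshlet = slice_starts[instance] + (cluster - counts_prefix_sum[instance]);
    (instance, meshlet)
}

fn main() {
    // Two instances: the first owns 3 clusters, the second owns 2.
    let prefix_sum = [0, 3];
    let slice_starts = [10, 50];
    assert_eq!(cluster_ids(4, &prefix_sum, &slice_starts), (1, 51));
}
```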
# Objective
The `bevy_pbr/utils.wgsl` shader file contains mathematical constants and
color conversion functions. Both of those should be accessible without
enabling the `bevy_pbr` feature. For example, tonemapping can be done in a
non-PBR scenario, and it uses the color conversion functions.
Fixes#13207
## Solution
* Move mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Move color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`
## Testing
Ran multiple examples, checked they are working:
* tonemapping
* color_grading
* 3d_scene
* animated_material
* deferred_rendering
* 3d_shapes
* fog
* irradiance_volumes
* meshlet
* parallax_mapping
* pbr
* reflection_probes
* shadow_biases
* 2d_gizmos
* light_gizmos
---
## Changelog
* Moved mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Moved color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`
## Migration Guide
In your shader code, replace imports of the mathematical constants from
`bevy_pbr::utils` with imports of the same constants from
`bevy_render::maths`.
This is an adoption of #12670 plus some documentation fixes. See that PR
for more details.
---
## Changelog
* Renamed `BufferVec` to `RawBufferVec` and added a new `BufferVec`
type.
## Migration Guide
`BufferVec` has been renamed to `RawBufferVec` and a new similar type
has taken the `BufferVec` name.
---------
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Implement visibility ranges, also known as hierarchical levels of detail
(HLODs).
This commit introduces a new component, `VisibilityRange`, which allows
developers to specify camera distances in which meshes are to be shown
and hidden. Hiding meshes happens early in the rendering pipeline, so
this feature can be used for level of detail optimization. Additionally,
this feature is properly evaluated per-view, so different views can show
different levels of detail.
This feature differs from proper mesh LODs, which can be implemented
later. Engines generally implement true mesh LODs later in the pipeline;
they're typically more efficient than HLODs with GPU-driven rendering.
However, mesh LODs are more limited than HLODs, because they require the
lower levels of detail to be meshes with the same vertex layout and
shader (and perhaps the same material) as the original mesh. Games often
want to use objects other than meshes to replace distant models, such as
*octahedral imposters* or *billboard imposters*.
The reason why the feature is called *hierarchical level of detail* is
that HLODs can replace multiple meshes with a single mesh when the
camera is far away. This can be useful for reducing drawcall count. Note
that `VisibilityRange` doesn't automatically propagate down to children;
it must be placed on every mesh.
Crossfading between different levels of detail is supported, using the
standard 4x4 ordered dithering pattern from [1]. The shader code to
compute the dithering patterns should be well-optimized. The dithering
code is only active when visibility ranges are in use for the mesh in
question, so that we don't lose early Z.
Cascaded shadow maps show the HLOD level of the view they're associated
with. Point light and spot light shadow maps, which have no CSMs,
display all HLOD levels that are visible in any view. To support this
efficiently and avoid doing visibility checks multiple times, we
precalculate all visible HLOD levels for each entity with a
`VisibilityRange` during the `check_visibility_range` system.
A new example, `visibility_range`, has been added to the tree, as well
as a new low-poly version of the flight helmet model to go with it. It
demonstrates use of the visibility range feature to provide levels of
detail.
[1]: https://en.wikipedia.org/wiki/Ordered_dithering#Threshold_map
[^1]: Unreal doesn't have a feature that exactly corresponds to
visibility ranges, but Unreal's HLOD system serves roughly the same
purpose.
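A minimal sketch of wiring up two detail levels with the new component (the
`abrupt` constructor and import path are assumptions; distances are
illustrative):
```rust
use bevy::prelude::*;
use bevy::render::view::VisibilityRange;

fn spawn_lods(
    mut commands: Commands,
    high_poly: Handle<Mesh>, // however your app obtains these handles
    low_poly: Handle<Mesh>,
    material: Handle<StandardMaterial>,
) {
    commands.spawn((
        PbrBundle { mesh: high_poly, material: material.clone(), ..default() },
        VisibilityRange::abrupt(0.0, 20.0),   // shown when the camera is close
    ));
    commands.spawn((
        PbrBundle { mesh: low_poly, material, ..default() },
        VisibilityRange::abrupt(20.0, 200.0), // shown when the camera is far
    ));
}
```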
## Changelog
### Added
* A new `VisibilityRange` component is available to conditionally enable
entity visibility at camera distances, with optional crossfade support.
This can be used to implement different levels of detail (LODs).
## Screenshots
High-poly model:
![Screenshot 2024-04-09
185541](https://github.com/bevyengine/bevy/assets/157897/7e8be017-7187-4471-8866-974e2d8f2623)
Low-poly model up close:
![Screenshot 2024-04-09
185546](https://github.com/bevyengine/bevy/assets/157897/429603fe-6bb7-4246-8b4e-b4888fd1d3a0)
Crossfading between the two:
![Screenshot 2024-04-09
185604](https://github.com/bevyengine/bevy/assets/157897/86d0d543-f8f3-49ec-8fe5-caa4d0784fd4)
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- `README.md` is a common file that usually gives an overview of the
folder it is in.
- When on <https://crates.io>, `README.md` is rendered as the main
description.
- Many crates in this repository are lacking `README.md` files, which
makes it more difficult to understand their purpose.
<img width="1552" alt="image"
src="https://github.com/bevyengine/bevy/assets/59022059/78ebf91d-b0c4-4b18-9874-365d6310640f">
- There are also a few inconsistencies with `README.md` files that this
PR and its follow-ups intend to fix.
## Solution
- Create a `README.md` file for all crates that do not have one.
- This file only contains the title of the crate (underscores removed,
proper capitalization, acronyms expanded) and the <https://shields.io>
badges.
- Remove the `readme` field in `Cargo.toml` for `bevy` and
`bevy_reflect`.
- This field is redundant because [Cargo automatically detects
`README.md`
files](https://doc.rust-lang.org/cargo/reference/manifest.html#the-readme-field).
The field is only there if you name it something else, like `INFO.md`.
- Fix capitalization of `bevy_utils`'s `README.md`.
- It was originally `Readme.md`, which is inconsistent with the rest of
the project.
- I created two commits renaming it to `README.md`, because Git appears
to be case-insensitive.
- Expand acronyms in title of `bevy_ptr` and `bevy_utils`.
- In the commit where I created all the new `README.md` files, I
preferred using expanded acronyms in the titles. (E.g. "Bevy Developer
Tools" instead of "Bevy Dev Tools".)
- This commit changes the title of existing `README.md` files to follow
the same scheme.
- I do not feel strongly about this change, please comment if you
disagree and I can revert it.
- Add <https://shields.io> badges to `bevy_time` and `bevy_transform`,
which are the only crates currently lacking them.
---
## Changelog
- Added `README.md` files to all crates missing it.
This commit expands Bevy's existing tonemapping feature to a complete
set of filmic color grading tools, matching those of engines like Unity,
Unreal, and Godot. The following features are supported:
* White point adjustment. This is inspired by Unity's implementation of
the feature, but simplified and optimized. *Temperature* and *tint*
control the adjustments to the *x* and *y* chromaticity values of [CIE
1931]. Following Unity, the adjustments are made relative to the [D65
standard illuminant] in the [LMS color space].
* Hue rotation. This simply converts the RGB value to [HSV], alters the
hue, and converts back.
* Color correction. This allows the *gamma*, *gain*, and *lift* values
to be adjusted according to the standard [ASC CDL combined function].
* Separate color correction for shadows, midtones, and highlights.
Blender's source code was used as a reference for the implementation of
this. The midtone ranges can be adjusted by the user. To avoid abrupt
color changes, a small crossfade is used between the different sections
of the image, again following Blender's formulas.
A new example, `color_grading`, has been added, offering a GUI to change
all the color grading settings. It uses the same test scene as the
existing `tonemapping` example, which has been factored out into a
shared glTF scene.
[CIE 1931]: https://en.wikipedia.org/wiki/CIE_1931_color_space
[D65 standard illuminant]:
https://en.wikipedia.org/wiki/Standard_illuminant#Illuminant_series_D
[LMS color space]: https://en.wikipedia.org/wiki/LMS_color_space
[HSV]: https://en.wikipedia.org/wiki/HSL_and_HSV
[ASC CDL combined function]:
https://en.wikipedia.org/wiki/ASC_CDL#Combined_Function
## Changelog
### Added
* Many new filmic color grading options have been added to the
`ColorGrading` component.
## Migration Guide
* `ColorGrading::gamma` and `ColorGrading::pre_saturation` are now set
separately for the `shadows`, `midtones`, and `highlights` sections. You
can migrate code with the `ColorGrading::all_sections` and
`ColorGrading::all_sections_mut` functions, which access and/or update
all sections at once.
* `ColorGrading::post_saturation` and `ColorGrading::exposure` are now
fields of `ColorGrading::global`.
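A short sketch of the migration using the helpers mentioned above (the
section and global field names follow this changelog; the import path is an
assumption):
```rust
use bevy::prelude::*;
use bevy::render::view::ColorGrading;

fn tweak_grading(mut query: Query<&mut ColorGrading, With<Camera>>) {
    for mut grading in &mut query {
        // Was a single `gamma` value; now each section is set, here all at once.
        for section in grading.all_sections_mut() {
            section.gamma = 1.1;
        }
        // Was a top-level field; now lives under `global`.
        grading.global.exposure = 0.5;
    }
}
```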
## Screenshots
![Screenshot 2024-04-27
143144](https://github.com/bevyengine/bevy/assets/157897/c1de5894-917d-4101-b5c9-e644d141a941)
![Screenshot 2024-04-27
143216](https://github.com/bevyengine/bevy/assets/157897/da393c8a-d747-42f5-b47c-6465044c788d)
This commit implements opt-in GPU frustum culling, built on top of the
infrastructure in https://github.com/bevyengine/bevy/pull/12773. To
enable it on a camera, add the `GpuCulling` component to it. To
additionally disable CPU frustum culling, add the `NoCpuCulling`
component. Note that adding `GpuCulling` without `NoCpuCulling`
*currently* does nothing useful. The reason why `GpuCulling` doesn't
automatically imply `NoCpuCulling` is that I intend to follow this patch
up with GPU two-phase occlusion culling, and CPU frustum culling plus
GPU occlusion culling seems like a very commonly-desired mode.
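A minimal sketch of opting a camera into the new mode (component names from
this commit; the import path is an assumption):
```rust
use bevy::prelude::*;
use bevy::render::view::{GpuCulling, NoCpuCulling};

fn spawn_camera(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        GpuCulling,   // frustum-cull on the GPU for this view
        NoCpuCulling, // and skip the CPU frustum culling pass
    ));
}
```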
Adding the `GpuCulling` component to a view puts that view into
*indirect mode*. This mode makes all drawcalls indirect, relying on the
mesh preprocessing shader to allocate instances dynamically. In indirect
mode, the `PreprocessWorkItem` `output_index` points not to a
`MeshUniform` instance slot but instead to a set of `wgpu`
`IndirectParameters`, from which it allocates an instance slot
dynamically if frustum culling succeeds. Batch building has been updated
to allocate and track indirect parameter slots, and the AABBs are now
supplied to the GPU as `MeshCullingData`.
A small amount of code relating to the frustum culling has been borrowed
from meshlets and moved into `maths.wgsl`. Note that standard Bevy
frustum culling uses AABBs, while meshlets use bounding spheres; this
means that not as much code can be shared as one might think.
This patch doesn't provide any way to perform GPU culling on shadow
maps, to avoid making this patch bigger than it already is. That can be
a followup.
## Changelog
### Added
* Frustum culling can now optionally be done on the GPU. To enable it,
add the `GpuCulling` component to a camera.
* To disable CPU frustum culling, add `NoCpuCulling` to a camera. Note
that `GpuCulling` doesn't automatically imply `NoCpuCulling`.
# Objective
- There is an unfortunate lack of dragons in the meshlet docs.
- Dragons are symbolic of majesty, power, storms, and meshlets.
- A dragon habitat such as our docs requires cultivation to ensure each
winged lizard reaches their fullest, fiery selves.
## Solution
- Fix the link to the dragon image.
- The link originally targeted the `meshlet` branch, but that was later
deleted after it was merged into `main`.
---
## Changelog
- Added a dragon back into the `MeshletPlugin` documentation.
Keeping track of explicit visibility per cluster between frames does not
work with LODs, and leads to worse culling (using the final depth buffer
from the previous frame is more accurate).
Instead, we need to generate a second depth pyramid after the second
raster pass, and then use that in the first culling pass in the next
frame to test if a cluster would have been visible last frame or not.
As part of these changes, the `write_index_buffer` pass has been folded
into the culling pass for a large performance gain, and to avoid
tracking a lot of extra state that would be needed between passes.
Prepass previous model/view stuff was adapted to work with meshlets as
well.
Also fixed a bug with materials, and other misc improvements.
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
# Objective
Fix https://github.com/bevyengine/bevy/issues/11799 and improve
`CameraProjectionPlugin`
## Solution
`CameraProjectionPlugin` is now an all-in-one plugin for adding a custom
`CameraProjection`. I also added `PbrProjectionPlugin` which is like
`CameraProjectionPlugin` but for PBR.
P.S. I'd like to get this merged after
https://github.com/bevyengine/bevy/pull/11766.
---
## Changelog
- Changed `CameraProjectionPlugin` to be an all-in-one plugin for adding
a `CameraProjection`
- Removed `VisibilitySystems::{UpdateOrthographicFrusta,
UpdatePerspectiveFrusta, UpdateProjectionFrusta}`, now replaced with
`VisibilitySystems::UpdateFrusta`
- Added `PbrProjectionPlugin` for projection-specific PBR functionality.
## Migration Guide
`VisibilitySystems`'s `UpdateOrthographicFrusta`,
`UpdatePerspectiveFrusta`, and `UpdateProjectionFrusta` variants were
removed; they were replaced with `VisibilitySystems::UpdateFrusta`.
# Objective
- clean up extract_mesh_(gpu/cpu)_building
## Solution
- the GPU building path no longer needs to hold `prev_render_mesh_instances`
- use `insert_unique_unchecked` instead of a plain insert, since we know
all entities are unique
- directly get `previous_input_index` in the parallel loop
## Performance
This should also bring a slight performance win.
`cargo run --release --example many_cubes --features bevy/trace_tracy -- --no-frustum-culling`
`extract_meshes_for_gpu_building`
![image](https://github.com/bevyengine/bevy/assets/45868716/a5425e8a-258b-482d-afda-170363ee6479)
---------
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
# Objective
- Bevy usually uses `Parallel::scope` to collect items from `par_iter`,
but `scope` is called for every matching item, which causes a lot of
unnecessary lookups.
## Solution
- Similar to Rayon, we introduce `for_each_init` for `par_iter`, whose
init closure is only invoked when a task is spawned for a group of items
(see the sketch below).
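A minimal sketch of the new API (the exact signature and the pairing with
`Parallel` are assumptions based on this PR): the init closure runs once per
spawned task, so the thread-local queue is borrowed once per batch instead of
once per item.
```rust
use bevy::prelude::*;
use bevy::utils::Parallel;

fn collect_entities(
    queues: Local<Parallel<Vec<Entity>>>,
    query: Query<Entity, With<GlobalTransform>>,
) {
    query.par_iter().for_each_init(
        || queues.borrow_local_mut(),       // per task: borrow the local queue once
        |queue, entity| queue.push(entity), // per item: just a cheap push
    );
    // The per-task queues can then be drained and merged on the main thread.
}
```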
---
## Changelog
- added `for_each_init`
## Performance
`check_visibility` in `many_foxes`
![image](https://github.com/bevyengine/bevy/assets/45868716/030c41cf-0d2f-4a36-a071-35097d93e494)
~40% performance gain in `check_visibility`.
---------
Co-authored-by: James Liu <contact@jamessliu.com>
# Objective
- `MeshPipelineKey` uses some bits for two things
- First commit in this PR adds an assertion that doesn't work currently
on main
- This leads to some mesh topologies not working anymore, for example
`LineStrip`
- In the `lines` example, there should be two groups of lines; the blue
one currently doesn't display
## Solution
- Change the `MeshPipelineKey` to be backed by a `u64` instead, to have
enough bits
# Objective
- The docs say that `WireframeColor` is supposed to override the default
global color, but it doesn't.
## Solution
- Use `WireframeColor` to override the global color, as the docs say it
should.
- Updated the example to document this feature
- I also took the opportunity to clean up the code a bit
Fixes#13032
# Objective
Make visibility system ordering explicit. Fixes#12953.
## Solution
Specify `CheckVisibility` happens after all other `VisibilitySystems`
sets have happened.
---------
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
# Objective
When trying to be generic over `Material + Default`, the lack of a
`Default` impl for `ExtendedMaterial`, even when both of its components
implement `Default`, is problematic. I think this was probably just
overlooked.
## Solution
Impl `Default` if the material and extension both impl `Default`.
---
## Changelog
## Migration Guide
glTF files that contain lights currently panic when loaded into Bevy,
because Bevy tries to reflect on `Cascades`, which accidentally wasn't
registered.
# Objective
- Fixes#12976
## Solution
This one is a doozy.
- Run `cargo +beta clippy --workspace --all-targets --all-features` and
fix all issues
- This includes:
- Moving inner attributes to be outer attributes, when the item in
question has both inner and outer attributes
- Use `ptr::from_ref` in more scenarios
- Extend the valid idents list used by `clippy::doc_markdown` with more
names
- Use `Clone::clone_from` when possible
- Remove redundant `ron` import
- Add backticks to **so many** identifiers and items
- I'm sorry whoever has to review this
---
## Changelog
- Added links to more identifiers in documentation.
[Alpha to coverage] (A2C) replaces alpha blending with a
hardware-specific multisample coverage mask when multisample
antialiasing is in use. It's a simple form of [order-independent
transparency] that relies on MSAA. ["Anti-aliased Alpha Test: The
Esoteric Alpha To Coverage"] is a good summary of the motivation for and
best practices relating to A2C.
This commit implements alpha to coverage support as a new variant for
`AlphaMode`. You can supply `AlphaMode::AlphaToCoverage` as the
`alpha_mode` field in `StandardMaterial` to use it. When in use, the
standard material shader automatically applies the texture filtering
method from ["Anti-aliased Alpha Test: The Esoteric Alpha To Coverage"].
Objects with alpha-to-coverage materials are binned in the opaque pass,
as they're fully order-independent.
The `transparency_3d` example has been updated to feature an object with
alpha to coverage. Happily, the example was already using MSAA.
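A minimal sketch of opting a material in (the variant name is per this
commit; the texture handle is illustrative):
```rust
use bevy::prelude::*;

fn foliage_material(leaf_texture: Handle<Image>) -> StandardMaterial {
    StandardMaterial {
        base_color_texture: Some(leaf_texture),
        // Alpha-tested cutout rendered with a multisample coverage mask.
        alpha_mode: AlphaMode::AlphaToCoverage,
        ..Default::default()
    }
}
```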
This is part of #2223, as far as I can tell.
[Alpha to coverage]: https://en.wikipedia.org/wiki/Alpha_to_coverage
[order-independent transparency]:
https://en.wikipedia.org/wiki/Order-independent_transparency
["Anti-aliased Alpha Test: The Esoteric Alpha To Coverage"]:
https://bgolus.medium.com/anti-aliased-alpha-test-the-esoteric-alpha-to-coverage-8b177335ae4f
---
## Changelog
### Added
* The `AlphaMode` enum now supports `AlphaToCoverage`, to provide
limited order-independent transparency when multisample antialiasing is
in use.
# Objective
- `cargo run --release --example bevymark -- --benchmark --waves 160
--per-wave 1000 --mode mesh2d` runs slower and slower over time due to
`no_gpu_preprocessing::write_batched_instance_buffer<bevy_sprite::mesh2d::mesh::Mesh2dPipeline>`
taking longer and longer because the `BatchedInstanceBuffer` is not
cleared
## Solution
- Split the `clear_batched_instance_buffers` system into CPU and GPU
versions
- Use the CPU version for 2D meshes
`Sprite`, `Text`, and `Handle<MeshletMesh>` were types of renderable
entities that the new segregated visible entity system didn't handle, so
they didn't appear.
Because `bevy_text` depends on `bevy_sprite`, and the visibility
computation of text happens in the latter crate, I had to introduce a
new marker component, `SpriteSource`. `SpriteSource` marks entities that
aren't themselves sprites but become sprites during rendering. I added
this component to `Text2dBundle`. Unfortunately, this is technically a
breaking change, although I suspect it won't break anybody in practice
except perhaps editors.
Fixes#12935.
## Changelog
### Changed
* `Text2dBundle` now includes a new marker component, `SpriteSource`.
Bevy uses this internally to optimize visibility calculation.
## Migration Guide
* `Text` now requires a `SpriteSource` marker component in order to
appear. This component has been added to `Text2dBundle`.
This commit splits `VisibleEntities::entities` into four separate lists:
one for lights, one for 2D meshes, one for 3D meshes, and one for UI
elements. This allows `queue_material_meshes` and similar methods to
avoid examining entities that are obviously irrelevant. In particular,
this separation helps scenes with many skinned meshes, as the individual
bones are considered visible entities but have no rendered appearance.
Internally, `VisibleEntities::entities` is a `HashMap` from the `TypeId`
representing a `QueryFilter` to the appropriate `Entity` list. I had to
do this because `VisibleEntities` is located within an upstream crate
from the crates that provide lights (`bevy_pbr`) and 2D meshes
(`bevy_sprite`). As an added benefit, this setup allows apps to provide
their own types of renderable components, by simply adding a specialized
`check_visibility` to the schedule.
This provides a 16.23% end-to-end speedup on `many_foxes` with 10,000
foxes (24.06 ms/frame to 20.70 ms/frame).
## Migration guide
* `check_visibility` and `VisibleEntities` now store the four types of
renderable entities--2D meshes, 3D meshes, lights, and UI
elements--separately. If your custom rendering code examines
`VisibleEntities`, it will now need to specify which type of entity it's
interested in using the `WithMesh2d`, `WithMesh`, `WithLight`, and
`WithNode` types respectively. If your app introduces a new type of
renderable entity, you'll need to add an explicit call to
`check_visibility` to the schedule to accommodate your new component or
components.
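A sketch of what a custom queue system looks like after the migration (the
`iter::<QF>()` accessor and import paths are assumptions; the key point is
naming the query filter for the entity kind you want):
```rust
use bevy::prelude::*;
use bevy::render::view::{VisibleEntities, WithMesh};

fn queue_custom(views: Query<&VisibleEntities>) {
    for visible in &views {
        // Walk only the 3D-mesh list; lights, 2D meshes, and UI nodes now
        // live in their own lists.
        for &entity in visible.iter::<WithMesh>() {
            let _ = entity; // queue custom work for `entity` here
        }
    }
}
```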
## Analysis
`many_foxes`, 10,000 foxes: `main`:
![Screenshot 2024-03-31
114444](https://github.com/bevyengine/bevy/assets/157897/16ecb2ff-6e04-46c0-a4b0-b2fde2084bad)
`many_foxes`, 10,000 foxes, this branch:
![Screenshot 2024-03-31
114256](https://github.com/bevyengine/bevy/assets/157897/94dedae4-bd00-45b2-9aaf-dfc237004ddb)
`queue_material_meshes` (yellow = this branch, red = `main`):
![Screenshot 2024-03-31
114637](https://github.com/bevyengine/bevy/assets/157897/f90912bd-45bd-42c4-bd74-57d98a0f036e)
`queue_shadows` (yellow = this branch, red = `main`):
![Screenshot 2024-03-31
114607](https://github.com/bevyengine/bevy/assets/157897/6ce693e3-20c0-4234-8ec9-a6f191299e2d)
I ported the two existing PCF techniques to the cubemap domain as best I
could. Generally, the technique is to create a 2D orthonormal basis
using Gram-Schmidt normalization, then apply the technique over that
basis. The results look fine, though the shadow bias often needs
adjusting.
For comparison, Unity uses a 4-tap pattern for PCF on point lights of
(1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1). I tried this but
didn't like the look, so I went with the design above, which ports the
2D techniques to the 3D domain. There's surprisingly little material on
point light PCF.
I've gone through every example using point lights and verified that the
shadow maps look fine, adjusting biases as necessary.
Fixes#3628.
---
## Changelog
### Added
* Shadows from point lights now support percentage-closer filtering
(PCF), and as a result look less aliased.
### Changed
* `ShadowFilteringMethod::Castano13` and
`ShadowFilteringMethod::Jimenez14` have been renamed to
`ShadowFilteringMethod::Gaussian` and `ShadowFilteringMethod::Temporal`
respectively.
## Migration Guide
* `ShadowFilteringMethod::Castano13` and
`ShadowFilteringMethod::Jimenez14` have been renamed to
`ShadowFilteringMethod::Gaussian` and `ShadowFilteringMethod::Temporal`
respectively.
# Objective
Fixes#11996
The role of the deprecated `Quad` shape's `flip` field has migrated to
`StandardMaterial`'s `flip`/`flipped` methods.
## Solution
The `flip`/`flipped` methods on `StandardMaterial` are applicable to any
mesh.
---
## Changelog
- Added `flip` and `flipped` methods to the `StandardMaterial` implementation
- Added `FLIP_HORIZONTAL`, `FLIP_VERTICAL`, `FLIP_X`, `FLIP_Y`, `FLIP_Z` constants
## Migration Guide
Instead of using the `Quad::flip` field, call the `flipped(true, false)`
method on the `StandardMaterial` instance when adding the mesh.
---------
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
Currently, `MeshUniform`s are rather large: 160 bytes. They're also
somewhat expensive to compute, because they involve taking the inverse
of a 3x4 matrix. Finally, if a mesh is present in multiple views, that
mesh will have a separate `MeshUniform` for each and every view, which
is wasteful.
This commit fixes these issues by introducing the concept of a *mesh
input uniform* and adding a *mesh uniform building* compute shader pass.
The `MeshInputUniform` is simply the minimum amount of data needed for
the GPU to compute the full `MeshUniform`. Most of this data is just the
transform and is therefore only 64 bytes. `MeshInputUniform`s are
computed during the *extraction* phase, much like skins are today, in
order to avoid needlessly copying transforms around on CPU. (In fact,
the render app has been changed to only store the translation of each
mesh; it no longer cares about any other part of the transform, which is
stored only on the GPU and the main world.) Before rendering, the
`build_mesh_uniforms` pass runs to expand the `MeshInputUniform`s to the
full `MeshUniform`.
The mesh uniform building pass does the following, all on GPU:
1. Copy the appropriate fields of the `MeshInputUniform` to the
`MeshUniform` slot. If a single mesh is present in multiple views, this
effectively duplicates it into each view.
2. Compute the inverse transpose of the model transform, used for
transforming normals.
3. If applicable, copy the mesh's transform from the previous frame for
TAA. To support this, we double-buffer the `MeshInputUniform`s over two
frames and swap the buffers each frame. The `MeshInputUniform`s for the
current frame contain the index of that mesh's `MeshInputUniform` for
the previous frame.
This commit produces wins in virtually every CPU part of the pipeline:
`extract_meshes`, `queue_material_meshes`,
`batch_and_prepare_render_phase`, and especially
`write_batched_instance_buffer` are all faster. Shrinking the amount of
CPU data that has to be shuffled around speeds up the entire rendering
process.
| Benchmark | This branch | `main` | Speedup |
|------------------------|-------------|---------|---------|
| `many_cubes -nfc` | 17.259 | 24.529 | 42.12% |
| `many_cubes -nfc -vpi` | 302.116 | 312.123 | 3.31% |
| `many_foxes` | 3.227 | 3.515 | 8.92% |
Because mesh uniform building requires a compute shader, and WebGL 2 has
no compute shaders, the existing CPU mesh uniform building code has been
left as-is. Many types now have both CPU mesh uniform building and GPU
mesh uniform building modes. Developers can opt into the old CPU mesh
uniform building by setting the `use_gpu_uniform_builder` option on
`PbrPlugin` to `false`.
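A short sketch of opting back into the CPU path (the option name is taken
from the paragraph above):
```rust
use bevy::pbr::PbrPlugin;
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(PbrPlugin {
            // Fall back to building MeshUniforms on the CPU.
            use_gpu_uniform_builder: false,
            ..Default::default()
        }))
        .run();
}
```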
Below are graphs of the CPU portions of `many-cubes
--no-frustum-culling`. Yellow is this branch, red is `main`.
`extract_meshes`:
![Screenshot 2024-04-02
124842](https://github.com/bevyengine/bevy/assets/157897/a6748ea4-dd05-47b6-9254-45d07d33cb10)
It's notable that we get a small win even though we're now writing to a
GPU buffer.
`queue_material_meshes`:
![Screenshot 2024-04-02
124911](https://github.com/bevyengine/bevy/assets/157897/ecb44d78-65dc-448d-ba85-2de91aa2ad94)
There's a bit of a regression here; not sure what's causing it. In any
case it's very outweighed by the other gains.
`batch_and_prepare_render_phase`:
![Screenshot 2024-04-02
125123](https://github.com/bevyengine/bevy/assets/157897/4e20fc86-f9dd-4e5c-8623-837e4258f435)
There's a huge win here, enough to make batching basically drop off the
profile.
`write_batched_instance_buffer`:
![Screenshot 2024-04-02
125237](https://github.com/bevyengine/bevy/assets/157897/401a5c32-9dc1-4991-996d-eb1cac6014b2)
There's a massive improvement here, as expected. Note that a lot of it
simply comes from the fact that `MeshInputUniform` is `Pod`. (This isn't
a maintainability problem in my view because `MeshInputUniform` is so
simple: just 16 tightly-packed words.)
## Changelog
### Added
* Per-mesh instance data is now generated on GPU with a compute shader
instead of CPU, resulting in rendering performance improvements on
platforms where compute shaders are supported.
## Migration guide
* Custom render phases now need multiple systems beyond just
`batch_and_prepare_render_phase`. Code that was previously creating
custom render phases should now add a `BinnedRenderPhasePlugin` or
`SortedRenderPhasePlugin` as appropriate instead of directly adding
`batch_and_prepare_render_phase`.
# Objective
- Replace `RenderMaterials` / `RenderMaterials2d` / `RenderUiMaterials`
with `RenderAssets` to enable implementing changes to one thing,
`RenderAssets`, that applies to all use cases rather than duplicating
changes everywhere for multiple things that should be one thing.
- Adopts #8149
## Solution
- Make RenderAsset generic over the destination type rather than the
source type as in #8149
- Use `RenderAssets<PreparedMaterial<M>>` etc for render materials
---
## Changelog
- Changed:
- The `RenderAsset` trait is now implemented on the destination type.
Its `SourceAsset` associated type refers to the type of the source
asset.
- `RenderMaterials`, `RenderMaterials2d`, and `RenderUiMaterials` have
been replaced by `RenderAssets<PreparedMaterial<M>>` and similar.
## Migration Guide
- `RenderAsset` is now implemented for the destination type rather than
the source asset type. The source asset type is now the `RenderAsset`
trait's `SourceAsset` associated type.
# Objective
Minimize the number of dependencies low in the tree.
## Solution
* Remove the dependency on rustc-hash in bevy_ecs (not used) and
bevy_macro_utils (only used in one spot).
* Deduplicate the dependency on `sha1_smol` with the existing blake3
dependency already being used for bevy_asset.
* Remove the unused `ron` dependency on `bevy_app`
* Make the `serde` dependency for `bevy_ecs` optional. It's only used
for serializing Entity.
* Change the `wgpu` dependency to `wgpu-types`, and make it optional for
`bevy_color`.
* Remove the unused `thread-local` dependency on `bevy_render`.
* Make multiple dependencies for `bevy_tasks` optional and enabled only
when running with the `multi-threaded` feature. Preferably they'd be
disabled all the time on wasm, but I couldn't find a clean way to do
this.
---
## Changelog
TODO
## Migration Guide
TODO
# Objective
- Fix a potential out-of-bounds access in the `pbr_functions.wgsl`
shader.
## Solution
- Correctly compute the `GpuLights::directional_lights` array length.
## Comments
I think this solves this comment in the code, but I need someone to test
it:
```rust
//NOTE: When running bevy on Adreno GPU chipsets in WebGL, any value above 1 will result in a crash
// when loading the wgsl "pbr_functions.wgsl" in the function apply_fog.
```
# Objective
- Upload previous frame's inverse_view matrix to the GPU for use with
https://github.com/bevyengine/bevy/pull/12898.
---
## Changelog
- Added `prepass_bindings::previous_view_uniforms.inverse_view`.
- Renamed `prepass_bindings::previous_view_proj` to
`prepass_bindings::previous_view_uniforms.view_proj`.
- Renamed `PreviousViewProjectionUniformOffset` to
`PreviousViewUniformOffset`.
- Renamed `PreviousViewProjection` to `PreviousViewData`.
## Migration Guide
- Renamed `prepass_bindings::previous_view_proj` to
`prepass_bindings::previous_view_uniforms.view_proj`.
- Renamed `PreviousViewProjectionUniformOffset` to
`PreviousViewUniformOffset`.
- Renamed `PreviousViewProjection` to `PreviousViewData`.
# Objective
Make morph targets and tonemapping more tolerant of delayed image
loading.
Neither of these actually fails currently unless you use a bespoke loader
(and even then it would be rare), but I am working on adding throttling
for asset GPU uploads (as a stopgap until we can do proper asset
streaming) and they break with that.
## Solution
When a mesh with morph targets is uploaded to the GPU, the prepare
function uploads the morph target texture if it's available; otherwise it
uploads without morph targets. This is generally fine as long as morph
targets are loaded from bytes (as in the glTF loader), but it may fail
for a custom loader if the asset server async-loads the target texture
and the texture is not available yet. The mesh then fails to render and
doesn't update when the image is loaded.
-> If morph targets are specified but not ready yet, retry the mesh
upload next frame.
Tonemapping `unwrap`s on the lookup table image. This is never a problem
today since the image is added via `include_bytes!`, but it could become
a problem in the future with asset GPU throttling/streaming.
-> If the lookup texture is not yet available, use a fallback.
-> In the node, check whether the fallback was used before caching the
bind group.
# Objective
- #12791 broke example `irradiance_volumes`
- Fixes#12876
```
wgpu error: Validation Error
Caused by:
In Device::create_render_pipeline
note: label = `pbr_opaque_mesh_pipeline`
Color state [0] is invalid
Sample count 8 is not supported by format Rgba8UnormSrgb on this device. The WebGPU spec guarentees [1, 4] samples are supported by this format. With the TEXTURE_ADAPTER_SPECIFIC_FORMAT_FEATURES feature your device supports [1, 2, 4].
```
## Solution
- Shift bits a bit more
# Objective
Since BufferVec was first introduced, `bytemuck` has added additional
traits with fewer restrictions than `Pod`. Within BufferVec, we only
rely on the constraints of `bytemuck::cast_slice` to a `u8` slice, which
now only requires `T: NoUninit`; the set of `NoUninit` types is a strict
superset of the set of `Pod`
## Solution
Change out the `Pod` generic type constraint with `NoUninit`. Also
taking the opportunity to substitute `cast_slice` with
`must_cast_slice`, which avoids a runtime panic in place of a compile
time failure if `T` cannot be used.
---
## Changelog
Changed: `BufferVec` now supports working with types containing
`NoUninit` but not `Pod` members.
Changed: `BufferVec` will now fail to compile if used with a type that
cannot be safely read from. Most notably, this includes ZSTs, which
would previously always panic at runtime.
This commit makes the following optimizations:
## `MeshPipelineKey`/`BaseMeshPipelineKey` split
`MeshPipelineKey` has been split into `BaseMeshPipelineKey`, which lives
in `bevy_render` and `MeshPipelineKey`, which lives in `bevy_pbr`.
Conceptually, `BaseMeshPipelineKey` is a superclass of
`MeshPipelineKey`. For `BaseMeshPipelineKey`, the bits start at the
highest (most significant) bit and grow downward toward the lowest bit;
for `MeshPipelineKey`, the bits start at the lowest bit and grow upward
toward the highest bit. This prevents them from colliding.
The goal of this is to avoid having to reassemble bits of the pipeline
key for every mesh every frame. Instead, we can just use a bitwise or
operation to combine the pieces that make up a `MeshPipelineKey`.
## `specialize_slow`
Previously, all of `specialize()` was marked as `#[inline]`. This
bloated `queue_material_meshes` unnecessarily, as a large chunk of it
ended up being a slow path that was rarely hit. This commit refactors
the function to move the slow path to `specialize_slow()`.
Together, these two changes shave about 5% off `queue_material_meshes`:
![Screenshot 2024-03-29
130002](https://github.com/bevyengine/bevy/assets/157897/a7e5a994-a807-4328-b314-9003429dcdd2)
## Migration Guide
- The `primitive_topology` field on `GpuMesh` is now an accessor method:
`GpuMesh::primitive_topology()`.
- For performance reasons, `MeshPipelineKey` has been split into
`BaseMeshPipelineKey`, which lives in `bevy_render`, and
`MeshPipelineKey`, which lives in `bevy_pbr`. These two should be
combined with bitwise-or to produce the final `MeshPipelineKey`.
# Objective
This is a necessary precursor to #9122 (this was split from that PR to
reduce the amount of code to review all at once).
Moving `!Send` resource ownership to `App` will make it unambiguously
`!Send`. `SubApp` must be `Send`, so it can't wrap `App`.
## Solution
Refactor `App` and `SubApp` to not have a recursive relationship. Since
`SubApp` no longer wraps `App`, once `!Send` resources are moved out of
`World` and into `App`, `SubApp` will become unambiguously `Send`.
There could be less code duplication between `App` and `SubApp`, but
that would break `App` method chaining.
## Changelog
- `SubApp` no longer wraps `App`.
- `App` fields are no longer publicly accessible.
- `App` can no longer be converted into a `SubApp`.
- Various methods now return references to a `SubApp` instead of an
`App`.
## Migration Guide
- To construct a sub-app, use `SubApp::new()`. `App` can no longer
convert into `SubApp`.
- If you implemented a trait for `App`, you may want to implement it for
`SubApp` as well.
- If you're accessing `app.world` directly, you now have to use
`app.world()` and `app.world_mut()`.
- `App::sub_app` now returns `&SubApp`.
- `App::sub_app_mut` now returns `&mut SubApp`.
- `App::get_sub_app` now returns `Option<&SubApp>`.
- `App::get_sub_app_mut` now returns `Option<&mut SubApp>`.
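A small sketch of the most common change, the world accessors:
```rust
use bevy::prelude::*;

fn main() {
    let mut app = App::new();
    // Was: app.world.spawn_empty() and app.world.entities()
    let entity = app.world_mut().spawn_empty().id();
    let count = app.world().entities().len();
    println!("spawned {entity:?}; world has {count} entities");
}
```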
Today, we sort all entities added to all phases, even the phases that
don't strictly need sorting, such as the opaque and shadow phases. This
results in a performance loss because our `PhaseItem`s are rather large
in memory, so sorting is slow. Additionally, determining the boundaries
of batches is an O(n) process.
This commit changes Bevy to instead place phase items into *bins*
keyed by *bin keys*, which have the invariant that everything in the
same bin is potentially batchable. This makes determining batch
boundaries O(1), because everything in the same bin can be batched.
Instead of sorting each entity, we now sort only the bin keys. This
drops the sorting time to near-zero on workloads with few bins like
`many_cubes --no-frustum-culling`. Memory usage is improved too, with
batch boundaries and dynamic indices now implicit instead of explicit.
The improved memory usage results in a significant win even on
unbatchable workloads like `many_cubes --no-frustum-culling
--vary-material-data-per-instance`, presumably due to cache effects.
Not all phases can be binned; some, such as transparent and transmissive
phases, must still be sorted. To handle this, this commit splits
`PhaseItem` into `BinnedPhaseItem` and `SortedPhaseItem`. Most of the
logic that today deals with `PhaseItem`s has been moved to
`SortedPhaseItem`. `BinnedPhaseItem` has the new logic.
Frame time results (in ms/frame) are as follows:
| Benchmark | `binning` | `main` | Speedup |
| ------------------------ | --------- | ------- | ------- |
| `many_cubes -nfc -vpi` | 232.179 | 312.123 | 34.43% |
| `many_cubes -nfc` | 25.874 | 30.117 | 16.40% |
| `many_foxes` | 3.276 | 3.515 | 7.30% |
(`-nfc` is short for `--no-frustum-culling`; `-vpi` is short for
`--vary-per-instance`.)
---
## Changelog
### Changed
* Render phases have been split into binned and sorted phases. Binned
phases, such as the common opaque phase, achieve improved CPU
performance by avoiding the sorting step.
## Migration Guide
- `PhaseItem` has been split into `BinnedPhaseItem` and
`SortedPhaseItem`. If your code has custom `PhaseItem`s, you will need
to migrate them to one of these two types. `SortedPhaseItem` requires
the fewest code changes, but you may want to pick `BinnedPhaseItem` if
your phase doesn't require sorting, as that enables higher performance.
## Tracy graphs
`many-cubes --no-frustum-culling`, `main` branch:
<img width="1064" alt="Screenshot 2024-03-12 180037"
src="https://github.com/bevyengine/bevy/assets/157897/e1180ce8-8e89-46d2-85e3-f59f72109a55">
`many-cubes --no-frustum-culling`, this branch:
<img width="1064" alt="Screenshot 2024-03-12 180011"
src="https://github.com/bevyengine/bevy/assets/157897/0899f036-6075-44c5-a972-44d95895f46c">
You can see that `batch_and_prepare_binned_render_phase` is a much
smaller fraction of the time. Zooming in on that function, with yellow
being this branch and red being `main`, we see:
<img width="1064" alt="Screenshot 2024-03-12 175832"
src="https://github.com/bevyengine/bevy/assets/157897/0dfc8d3f-49f4-496e-8825-a66e64d356d0">
The binning happens in `queue_material_meshes`. Again with yellow being
this branch and red being `main`:
<img width="1064" alt="Screenshot 2024-03-12 175755"
src="https://github.com/bevyengine/bevy/assets/157897/b9b20dc1-11c8-400c-a6cc-1c2e09c1bb96">
We can see that there is a small regression in `queue_material_meshes`
performance, but it's not nearly enough to outweigh the large gains in
`batch_and_prepare_binned_render_phase`.
---------
Co-authored-by: James Liu <contact@jamessliu.com>
This commit changes the `StandardMaterialKey` to be based on a set of
bitflags instead of a structure. We hash it every frame for every mesh,
and `#[derive(Hash)]` doesn't generate particularly efficient code for
large structures full of small types. Packing it into a single `u64`
therefore results in a roughly 10% speedup in `queue_material_meshes` on
`many_cubes --no-frustum-culling`.
![Screenshot 2024-03-29
075124](https://github.com/bevyengine/bevy/assets/157897/78afcab6-b616-489b-8243-da9a117f606c)
# Objective
Fixes#12727. All parts that `PersistentGpuBuffer` interacts with should
be 100% safe on both the CPU and the GPU: `Queue::write_buffer_with`
zeroes out the slice being written to when uploading to the GPU, and
all slice writes are bounds-checked on the CPU side.
## Solution
Make `PersistentGpuBufferable` a safe trait. Enforce its correct
implementation via assertions. Re-enable `forbid(unsafe_code)` on
`bevy_pbr`.
# Objective
- Fixes#12712
## Solution
- Move the `float_ord.rs` file to `bevy_math`
- Change any `bevy_utils::FloatOrd` statements to `bevy_math::FloatOrd`
---
## Changelog
- Moved `FloatOrd` from `bevy_utils` to `bevy_math`
## Migration Guide
- References to `bevy_utils::FloatOrd` should be changed to
`bevy_math::FloatOrd`
# Objective
Resolves#3824. `unsafe` code should be the exception, not the norm in
Rust. It's obviously needed for various use cases as it's interfacing
with platforms and essentially running the borrow checker at runtime in
the ECS, but one of the touted benefits of Bevy is that we are able to heavily
leverage Rust's safety, and we should be holding ourselves accountable
to that by minimizing our unsafe footprint.
## Solution
Deny `unsafe_code` workspace wide. Add explicit exceptions for the
following crates, and forbid it in almost all of the others.
* bevy_ecs - Obvious given how much unsafe is needed to achieve
performant results
* bevy_ptr - Works with raw pointers, even more low level than bevy_ecs.
* bevy_render - due to needing to integrate with wgpu
* bevy_window - due to needing to integrate with raw_window_handle
* bevy_utils - Several unsafe utilities used by bevy_ecs. Ideally moved
into bevy_ecs instead of made publicly usable.
* bevy_reflect - Required for the unsafe type casting it's doing.
* bevy_transform - for the parallel transform propagation
* bevy_gizmos - For the SystemParam impls it has.
* bevy_assets - To support reflection. Might not be required, not 100%
sure yet.
* bevy_mikktspace - due to being a conversion from a C library. Pending
safe rewrite.
* bevy_dynamic_plugin - Inherently unsafe due to the dynamic loading
nature.
Several uses of unsafe were rewritten, as they did not need to be using
them:
* bevy_text - a case of `Option::unchecked` could be rewritten as a
normal for loop and match instead of an iterator.
* bevy_color - the Pod/Zeroable implementations were replaceable with
bytemuck's derive macros.
# Objective
Currently the built docs only shows the logo and favicon for the top
level `bevy` crate. This makes views like
https://docs.rs/bevy_ecs/latest/bevy_ecs/ look potentially unrelated to
the project at first glance.
## Solution
Reproduce the docs attributes for every crate that Bevy publishes.
Ideally this would be done with some workspace level Cargo.toml control,
but AFAICT, such support does not exist.
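Concretely, each published crate now carries the same crate-level doc
attributes at the top of its `lib.rs`; a sketch of what that looks like (the
URLs are assumed to mirror the top-level `bevy` crate):
```rust
#![doc(
    html_logo_url = "https://bevyengine.org/assets/icon.png",
    html_favicon_url = "https://bevyengine.org/assets/icon.png"
)]
```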
# Objective
Follow up from PR #12369 to extract lighting structs from light/mod.rs
into their own file.
Part of the Purdue Refactoring Team's goals issue #12349
## Solution
- Moved PointLight from light/mod.rs to light/point_light.rs
- Moved SpotLight from light/mod.rs to light/spot_light.rs
- Moved DirectionalLight from light/mod.rs to light/directional_light.rs
Fixes#12600
## Solution
Removed Into<AssetId<T>> for Handle<T> as proposed in Issue
conversation, fixed dependent code
## Migration guide
If you were passing a `Handle` by value where an `AssetId` was expected,
you should now pass a reference or call the `.id()` method on it.
Before (0.13):
`assets.insert(handle, value);`
After (0.14):
`assets.insert(&handle, value);`
or
`assets.insert(handle.id(), value);`
# Objective
- Not all materials need shadows, but a `queue_shadows` system is always
added to the `Render` schedule and executed
## Solution
- Add a setting for shadows; it defaults to `true`
## Changelog
- Added `shadows_enabled` setting to `MaterialPlugin`
## Migration Guide
- `MaterialPlugin` now has a `shadows_enabled` setting. If you didn't
construct the plugin using `::default()` or `..default()`, you'll need to
set it. `shadows_enabled: true` matches the previous behavior and is also
the default value.
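A short sketch of setting it explicitly (the field name follows this
changelog; passing the plugin to `add_plugins` for your own material type
works the same way):
```rust
use bevy::pbr::MaterialPlugin;
use bevy::prelude::*;

fn main() {
    // `shadows_enabled: true` (the default) matches the previous behavior;
    // `false` skips the shadow work for this material type.
    let _plugin = MaterialPlugin::<StandardMaterial> {
        shadows_enabled: false,
        ..Default::default()
    };
}
```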
# Objective
It's useful to have access to render pipeline statistics, since they
provide more information than FPS alone. For example, the number of
drawn triangles can be used to debug culling and LODs. The number of
fragment shader invocations can provide a more stable alternative metric
than GPU elapsed time.
See also: Render node GPU timing overlay #8067, which doesn't provide
pipeline statistics, but adds a nice overlay.
## Solution
Add `RenderDiagnosticsPlugin`, which enables collecting pipeline
statistics and CPU & GPU timings.
---
## Changelog
- Add `RenderDiagnosticsPlugin`
- Add `RenderContext::diagnostic_recorder` method
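A minimal sketch of turning it on (plugin name per this changelog; the import
path is an assumption):
```rust
use bevy::prelude::*;
use bevy::render::diagnostic::RenderDiagnosticsPlugin;

fn main() {
    App::new()
        // Collect render pipeline statistics plus CPU & GPU timings.
        .add_plugins((DefaultPlugins, RenderDiagnosticsPlugin))
        .run();
}
```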
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Improve code quality involving fixedbitset.
## Solution
Update to fixedbitset 0.5. Use the new `grow_and_insert` function
instead of `grow` and `insert` functions separately.
This should also speed up most of the set operations involving
fixedbitset. They should be ~2x faster, but testing this against the
stress tests seems to show little to no difference. The multithreaded
executor doesn't seem to be all that much faster in many_cubes and
many_foxes. These use cases are likely dominated by other operations or
the bitsets aren't big enough to make them the bottleneck.
This introduces a duplicate dependency due to petgraph and wgpu, but the
former may take some time to update.
## Changelog
Removed: `Access::grow`
## Migration Guide
`Access::grow` has been removed. It's no longer needed. Remove all
references to it.
# Objective
Assets that don't load before they get removed are retried forever,
causing buffer churn and slowdown.
## Solution
Stop trying to prepare dead assets.
# Objective
Beginning of refactoring of light.rs in bevy_pbr, as per issue #12349
Create and move light.rs to its own directory, and extract AmbientLight
struct.
## Solution
- moved light.rs to light/mod.rs
- extracted AmbientLight struct to light/ambient_light.rs
# Objective
Fix an occasional crash from `commands.insert` when quickly spawning and
despawning skinned/morphed meshes.
## Solution
Use `try_insert` instead of `insert`. If the entity is deleted, we don't
mind failing to add the `NoAutomaticBatching` marker.
# Objective
Fix missing `TextBundle` (and many others) which are present in the main
crate as default features but optional in the sub-crate. See:
- https://docs.rs/bevy/0.13.0/bevy/ui/node_bundles/index.html
- https://docs.rs/bevy_ui/0.13.0/bevy_ui/node_bundles/index.html
~~There are probably other instances in other crates that I could track
down, but maybe "all-features = true" should be used by default in all
sub-crates? Not sure.~~ (There were many.) I only noticed this because
rust-analyzer's "open docs" features takes me to the sub-crate, not the
main one.
## Solution
Add "all-features = true" to docs.rs metadata for crates that use
features.
## Changelog
### Changed
- Unified features documented on docs.rs between main crate and
sub-crates
# Objective
Make bevy_utils less of a compilation bottleneck. Tackle #11478.
## Solution
* Move all of the directly reexported dependencies and move them to
where they're actually used.
* Remove the UUID utilities that have gone unused since `TypePath` took
over for `TypeUuid`.
* There was also an extraneous bytemuck dependency on `bevy_core` that
has not been used for a long time (since `encase` became the primary way
to prepare GPU buffers).
* Remove the `all_tuples` macro reexport from bevy_ecs since it's
accessible from `bevy_utils`.
---
## Changelog
Removed: Many of the reexports from bevy_utils (petgraph, uuid, nonmax,
smallvec, and thiserror).
Removed: bevy_core's reexports of bytemuck.
## Migration Guide
bevy_utils' reexports of petgraph, uuid, nonmax, smallvec, and thiserror
have been removed.
bevy_core's reexports of bytemuck's types have been removed.
Add them as dependencies in your own crate instead.
# Objective
- Fix slightly wrong logic from #11442
- Directional lights should not have a near clip plane
## Solution
- Push near clip out to infinity, so that the frustum normal is still
available if its needed for whatever reason in shader
- also opportunistically nabs a typo
# Objective
Fix#12304. Remove unnecessary type registrations thanks to #4154.
## Solution
Conservatively remove type registrations. Keeping the top level
components, resources, and events, but dropping everything else that is
a type of a member of those types.
# Objective
Improve docs around emissive colors --
I couldn't figure out how to increase the emissive strength of
materials, asking on discord @alice-i-cecile told me that color channel
values can go above `1.0` in the case of the `emissive` field. I would
have never figured this out on my own, because [the docs for
emissive](https://docs.rs/bevy/latest/bevy/prelude/struct.StandardMaterial.html#structfield.emissive)
don't mention this possibility, and indeed if you follow the link in the
`emissive` doc [to the `Color`
type](https://docs.rs/bevy/latest/bevy/render/color/enum.Color.html#variants),
you are told that values should be in `[0.0, 1.0]`.
## Solution
Just added a note on the possibility of large color channel values with
example.
Although we cached hashes of `MeshVertexBufferLayout`, we were paying
the cost of `PartialEq` on `InnerMeshVertexBufferLayout` for every
entity, every frame. This patch changes that logic to place
`MeshVertexBufferLayout`s in `Arc`s so that they can be compared and
hashed by pointer. This results in a 28% speedup in the
`queue_material_meshes` phase of `many_cubes`, with frustum culling
disabled.
Additionally, this patch contains two minor changes:
1. This commit flattens the specialized mesh pipeline cache to one level
of hash tables instead of two. This saves a hash lookup.
2. The example `many_cubes` has been given a `--no-frustum-culling`
flag, to aid in benchmarking.
See the Tracy profile:
<img width="1064" alt="Screenshot 2024-02-29 144406"
src="https://github.com/bevyengine/bevy/assets/157897/18632f1d-1fdd-4ac7-90ed-2d10306b2a1e">
## Migration guide
* Duplicate `MeshVertexBufferLayout`s are now combined into a single
object, `MeshVertexBufferLayoutRef`, which contains an
atomically-reference-counted pointer to the layout. Code that was using
`MeshVertexBufferLayout` may need to be updated to use
`MeshVertexBufferLayoutRef` instead.
# Objective
- As part of the migration process we need to a) see the end effect of
the migration on user ergonomics b) check for serious perf regressions
c) actually migrate the code
- To accomplish this, I'm going to attempt to migrate all of the
remaining user-facing usages of `LegacyColor` in one PR, being careful
to keep a clean commit history.
- Fixes#12056.
## Solution
I've chosen to use the polymorphic `Color` type as our standard
user-facing API.
- [x] Migrate `bevy_gizmos`.
- [x] Take `impl Into<Color>` in all `bevy_gizmos` APIs
- [x] Migrate sprites
- [x] Migrate UI
- [x] Migrate `ColorMaterial`
- [x] Migrate `MaterialMesh2D`
- [x] Migrate fog
- [x] Migrate lights
- [x] Migrate StandardMaterial
- [x] Migrate wireframes
- [x] Migrate clear color
- [x] Migrate text
- [x] Migrate gltf loader
- [x] Register color types for reflection
- [x] Remove `LegacyColor`
- [x] Make sure CI passes
Incidental improvements to ease migration:
- added `Color::srgba_u8`, `Color::srgba_from_array` and friends
- added `set_alpha`, `is_fully_transparent` and `is_fully_opaque` to the
`Alpha` trait
- added and immediately deprecated (lol) `Color::rgb` and friends in
favor of the more explicit and consistent `Color::srgb`
- standardized on white and black for most example text colors
- added vector field traits to `LinearRgba`: ~~`Add`, `Sub`,
`AddAssign`, `SubAssign`,~~ `Mul<f32>` and `Div<f32>`. Multiplications
and divisions do not scale alpha. `Add` and `Sub` have been cut from
this PR.
- added `LinearRgba` and `Srgba` `RED/GREEN/BLUE`
- added `LinearRgba::to_f32_array` and `LinearRgba::to_u32`
## Migration Guide
Bevy's color types have changed! Wherever you used a
`bevy::render::Color`, a `bevy::color::Color` is used instead.
These are quite similar! Both are enums storing a color in a specific
color space (or to be more precise, using a specific color model).
However, each of the different color models now has its own type.
TODO...
- `Color::rgba`, `Color::rgb`, `Color::rgba_u8`, `Color::rgb_u8`,
`Color::rgb_from_array` are now `Color::srgba`, `Color::srgb`,
`Color::srgba_u8`, `Color::srgb_u8` and `Color::srgb_from_array`.
- `Color::set_a` and `Color::a` are now `Color::set_alpha` and
`Color::alpha`. These are part of the `Alpha` trait in `bevy_color`.
- `Color::is_fully_transparent` is now part of the `Alpha` trait in
`bevy_color`
- `Color::r`, `Color::set_r`, `Color::with_r` and the equivalents for
`g`, `b`, `h`, `s` and `l` have been removed because they caused silent,
relatively expensive conversions. Convert your `Color` into the desired
color space, perform your operations there, and then convert it back
into a polymorphic `Color` enum.
- `Color::hex` is now `Srgba::hex`. Call `.into()` or construct a
`Color::Srgba` variant manually to convert it.
- `WireframeMaterial`, `ExtractedUiNode`, `ExtractedDirectionalLight`,
`ExtractedPointLight`, `ExtractedSpotLight` and `ExtractedSprite` now
store a `LinearRgba`, rather than a polymorphic `Color`
- `Color::rgb_linear` and `Color::rgba_linear` are now
`Color::linear_rgb` and `Color::linear_rgba`
- The various CSS color constants are no longer stored directly on
`Color`. Instead, they're defined in the `Srgba` color space, and
accessed via `bevy::color::palettes::css`. Call `.into()` on them to
convert them into a `Color` for quick debugging use, and consider using
the much prettier `tailwind` palette for prototyping.
- The `LIME_GREEN` color has been renamed to `LIMEGREEN` to comply with
the standard naming.
- Vector field arithmetic operations on `Color` (add, subtract, multiply
and divide by an `f32`) have been removed. Instead, convert your colors
into `LinearRgba` space and perform your operations explicitly there, as
in the sketch after this list. This is particularly relevant when working
with emissive or HDR colors, whose color channel values are routinely
outside of the ordinary 0 to 1 range.
- `Color::as_linear_rgba_f32` has been removed. Call
`LinearRgba::to_f32_array` instead, converting if needed.
- `Color::as_linear_rgba_u32` has been removed. Call
`LinearRgba::to_u32` instead, converting if needed.
- Several other color conversion methods to transform LCH or HSL colors
into float arrays or `Vec` types have been removed. Please reimplement
these externally or open a PR to re-add them if you found them
particularly useful.
- Various methods on `Color` such as `rgb` or `hsl` to convert the color
into a specific color space have been removed. Convert into
`LinearRgba`, then to the color space of your choice.
- Various implicitly-converting color value methods on `Color` such as
`r`, `g`, `b` or `h` have been removed. Please convert it into the color
space of your choice, then check these properties.
- `Color` no longer implements `AsBindGroup`. Store a `LinearRgba`
internally instead to avoid conversion costs.
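The sketch below pulls a few of these migrations together using the APIs described above; the concrete values are illustrative only.

```rust
use bevy::color::palettes::css;
use bevy::color::{Alpha, LinearRgba};
use bevy::prelude::*;

fn color_migration_examples() {
    // `Color::rgb` -> `Color::srgb` (and likewise for the other constructors).
    let tint = Color::srgb(0.5, 0.25, 1.0);

    // CSS constants now live in the `Srgba` palette; `.into()` turns one
    // into a polymorphic `Color`.
    let lime: Color = css::LIMEGREEN.into();

    // Arithmetic happens explicitly in `LinearRgba` rather than on `Color`,
    // e.g. scaling an emissive/HDR color well past 1.0.
    let emissive: LinearRgba = LinearRgba::rgb(1.0, 0.2, 0.1) * 20.0;

    // Alpha handling goes through the `Alpha` trait.
    let faded = tint.with_alpha(0.5);

    let _ = (lime, emissive, faded);
}
```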
---------
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Afonso Lage <lage.afonso@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
Co-authored-by: Zachary Harrold <zac@harrold.com.au>